CN115620364A - Certificate photo generation and detection method and system based on intelligent three-dimensional portrait acquisition - Google Patents
- Publication number
- CN115620364A (application CN202211260916.1A)
- Authority
- CN
- China
- Prior art keywords
- image
- certificate photo
- portrait
- formula
- detection method
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/165—Detection; Localisation; Normalisation using facial parts and geometric relationships (under G06V40/16—Human faces)
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT] (under G06V10/46—Descriptors for shape, contour or point-related descriptors)
- G06V10/56—Extraction of image or video features relating to colour
- G06V40/171—Local features and components; facial parts; occluding parts, e.g. glasses; geometrical relationships (under G06V40/168—Feature extraction; Face representation)
Abstract
The invention discloses a certificate photo generation and detection method and system based on intelligent three-dimensional portrait acquisition, belonging to the technical field of face recognition and certificate photo detection. The method comprises the following steps: acquiring a three-dimensional portrait with a Huashi 100UC card reader and a Kinect DK camera to identify the source image and verify identity information; performing head posture detection and affine transformation correction on the three-dimensional portrait; cropping the certificate photo image to size; replacing the background of the certificate photo image and beautifying the portrait; and performing three quality checks on the certificate photo image: color cast, sharpness and exposure. Aiming at the low edge-clustering precision and weak noise resistance of background clustering in the portrait segmentation algorithms of traditional certificate photo generation systems and methods, the invention adopts a portrait clustering algorithm that combines the sparrow search algorithm (SSA) with the K-Means method, further perfecting the certificate photo generation process while retaining efficient generation, and enabling users to obtain certificate photos of higher quality that better meet the relevant standards.
Description
Technical Field
The invention belongs to the technical field of face recognition and certificate photo detection, and particularly relates to a certificate photo generation and detection method and system based on intelligent three-dimensional portrait acquisition.
Background
In today's society, certificate photos are used very widely: identity cards, job resumes, passports and driving licenses all require the bearer to provide a single-person photo that meets a specified standard. The traditional certificate photo production process requires professional photographers and professional retouchers and takes a long time, so some software companies have developed convenient self-service certificate photo generation systems: the user only needs to upload a portrait photo to the system, and a series of photo-making and detection operations can be completed by following the system prompts.
Common self-service certificate photo generation systems include "Beautiful Figure Show", "Online Certificate Photo", "Certificate Star" and the like; these systems greatly improve on the efficiency of traditional certificate photo production. However, the existing systems have the following problems:
1) "Beautiful Figure Show" generates certificate photos in specifications and sizes oriented to multiple purposes, such as academic examinations and financial accounting, and provides whitening, face-thinning, skin-smoothing and similar functions to enhance the subjective and visual quality of the photo. However, it explicitly requires that the portrait be shot against a solid-color background wall, and the subjective quality enhancement must be completed manually by the user.
2) "Online Certificate Photo" is a certificate photo generation platform in client-server form: the user uploads a source image through a web page and selects a size specification to complete generation of the certificate photo, but no portrait quality enhancement function is provided.
3) "Certificate Star" provides intelligent size cropping, background color replacement and similar functions, but background replacement is completed by manually scribbling over the background area, and the result is often unsatisfactory.
Because these systems lack a source image quality pre-detection process, self-service photo production may fail or the generated certificate photo may not conform to the relevant standards; at the same time, their low degree of automation prevents efficient photo production.
Disclosure of Invention
The purpose of the invention is to provide a certificate photo generation and detection method and system that solve the problems of low portrait-background segmentation precision and poor noise resistance of background clustering in existing certificate photo generation systems and methods. The invention uses the SeetaFace human eye positioning algorithm, scales the image according to the eye distance, and then positions and crops the portrait according to the required certificate photo size, greatly improving the speed and efficiency of portrait cropping, improving the user experience while guaranteeing certificate photo quality, further perfecting the certificate photo generation process, and enabling users to obtain certificate photos of higher quality that better meet the standard.
In order to achieve the purpose, the invention adopts the following technical scheme: the identification photo generation and detection method based on intelligent three-dimensional portrait acquisition comprises the following steps:
s1, acquiring a source image and verifying identity information:
reading the user's identity card information with a Huashi 100UC card reader, acquiring a three-dimensional portrait with a Kinect DK camera developed through the Kinect SDK, and performing SIFT feature matching (scale-space extremum detection, accurate keypoint localization and main direction assignment) to complete source image identification and identity information verification;
s2, source image detection and correction:
reading the three-dimensional portrait that passed verification in step S1 and detecting the head posture: first detect the human eyes with a positioning algorithm to accurately locate the pupil positions, solve the angle between the line connecting the two eyes and the horizontal direction, judge from prior knowledge whether the head is deflected to the left or to the right, and correct the image by affine transformation according to the solved rotation affine matrix;
s3, cutting the size of the identification photo image:
determining the scaling ratio R_s of the image corrected in step S2 from the standard eye distance E_sd required by the certificate photo type, and scaling the corrected image accordingly; the expression of the scaling ratio is R_s = E_sd / E_d, where E_d is the eye distance of the corrected image;
solving the cropping coordinates (X_p, Y_p) of the upper-left corner of the cropping frame from the standard head-top distance Y_st required by the certificate photo type and the width W_sd and height H_sd of the standard certificate photo, and automatically cropping the scaled image; the cropping coordinates are computed as X_p = ((X_0 + X_1)/2)*R_s - W_sd/2 and Y_p = Y_t*R_s - Y_st, where (X_0, Y_0) and (X_1, Y_1) are the coordinates of the left and right eyes respectively and Y_t is the head-top distance;
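A minimal sketch of the size-cropping arithmetic above. Function and variable names are illustrative, not from the patent, and the X_p expression assumes the crop box is centred horizontally on the eye midpoint:

```python
# Sketch of the S3 size-cropping arithmetic: scale the corrected image so the
# measured eye distance E_d matches the standard eye distance E_sd, then place
# the crop box from the standard head-top distance and photo dimensions.
# All names are illustrative assumptions, not the patent's implementation.

def crop_params(E_sd, E_d, Y_st, W_sd, H_sd, eye_left, eye_right, Y_t):
    """Return (scale, X_p, Y_p) for the upper-left corner of the crop box."""
    R_s = E_sd / E_d                      # scaling ratio R_s = E_sd / E_d
    x0, _ = eye_left
    x1, _ = eye_right
    X_p = (x0 + x1) / 2 * R_s - W_sd / 2  # centre crop box on the eye midpoint
    Y_p = Y_t * R_s - Y_st                # align the head-top distance
    return R_s, X_p, Y_p

R_s, X_p, Y_p = crop_params(E_sd=90, E_d=180, Y_st=60,
                            W_sd=295, H_sd=413,
                            eye_left=(400, 500), eye_right=(580, 500), Y_t=300)
print(R_s, X_p, Y_p)   # 0.5 97.5 90.0
```

The eye distance here is twice the standard, so the image is halved before the crop box is placed.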
s4, background replacement and portrait beautification of the identification photo image:
adopting a K-means algorithm optimized by the sparrow search algorithm to segment the portrait foreground and background automatically cropped in step S3: first search for the globally optimal clustering centers with the sparrow algorithm, pass the optimal cluster centers to the K-means algorithm for clustering, apply a mask operation with the background cluster center to obtain the portrait foreground, and fill the background with a selected color to complete the background replacement of the image;
smoothing the face image automatically cropped in step S3 with the bi-exponential edge-preserving smoothing (BEEPS) filter, preserving the details of the eyes and edges while beautifying the face;
s5, quality detection of the identification photo image:
respectively performing the three quality detections (color cast, sharpness and exposure) on the certificate photo image after the background replacement and portrait beautification of step S4; the color cast detection method is as follows: after converting the image from RGB format to Lab format, solve the color cast factor K with the equivalent-circle color cast detection method; when K is not more than 1.5, an overall color cast of the image is unlikely;
the sharpness detection method is as follows: first low-pass filter the image under evaluation to obtain a blurred copy, compute the structural similarity SSIM between the image under evaluation and the blurred copy, and obtain the no-reference structural sharpness NRSS from the structural similarity; when the NRSS is not less than 1, the overall sharpness of the image is judged qualified;
the exposure detection method is as follows: convert the input image from the RGB color space to the HSV color space and judge the exposure on the V component using the pixel mean: if the mean is above 40, the image is judged overexposed; if the mean is below 17, the exposure is judged insufficient; otherwise the exposure is qualified.
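The exposure check above is simple to express in code. The sketch below is a hedged illustration, assuming the V component is obtained as max(R, G, B) per pixel and that the 40/17 thresholds from the text apply directly to the chosen V scale:

```python
import numpy as np

# Minimal sketch of the S5 exposure check: take the HSV value channel
# (V = max(R, G, B)) and threshold its mean. The 40/17 thresholds come from
# the text; the V scale they assume is not stated, so this is illustrative.

def exposure_verdict(rgb):
    """rgb: H x W x 3 array. Returns 'overexposed', 'underexposed' or 'ok'."""
    v = rgb.max(axis=2)            # HSV value channel
    mean_v = float(v.mean())
    if mean_v > 40:
        return "overexposed"
    if mean_v < 17:
        return "underexposed"
    return "ok"

dark = np.full((4, 4, 3), 10, dtype=np.uint8)
print(exposure_verdict(dark))      # underexposed
```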
In step S1, the specific method for detecting the extreme value of the scale space, the accurate positioning of the key point, and the allocation of the principal direction includes:
S11, search all scale positions on the source image; the scale space of an image is defined as the convolution of a variable-scale Gaussian function with the source image, and the scale space L(x, y, σ) is
L(x, y, σ) = G(x, y, σ) * I(x, y), with G(x, y, σ) = (1/(2πσ²)) * exp(-((x - m/2)² + (y - n/2)²)/(2σ²)),
where G(x, y, σ) is the Gaussian function, I(x, y) is the source image, (x, y) is the position of an image pixel, m and n are the dimensions of the Gaussian template, and σ is the scale-space factor: the smaller σ is, the less the image is smoothed and the smaller the corresponding scale;
S12, identify keypoints that are invariant to scale and rotation through the scale-space function, apply Gaussian blur and downsampling (i.e. alternate-point sampling) at different scales to the image, and perform local extremum detection with the difference-of-Gaussians (DoG) pyramid function, computed as
D(x, y, σ) = (G(x, y, kσ) - G(x, y, σ)) * I(x, y) = L(x, y, kσ) - L(x, y, σ),
where k is a proportionality coefficient between adjacent scales;
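The DoG relation in S12 can be illustrated numerically. The 1-D sketch below (illustrative, not the patent's implementation) verifies that differencing two Gaussian-blurred copies of a signal equals convolving the signal with the difference of the two Gaussians, using the k = 1.15 value mentioned later in the embodiment:

```python
import numpy as np

# 1-D illustration of D = L(k*sigma) - L(sigma): by linearity of convolution,
# this equals convolving the signal with the difference of the two Gaussians.

def gauss_kernel(sigma, radius):
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()                 # normalised Gaussian kernel

def blur(signal, sigma, radius=10):
    return np.convolve(signal, gauss_kernel(sigma, radius), mode="same")

signal = np.zeros(64)
signal[32] = 1.0                       # an impulse input
sigma, k = 1.6, 1.15                   # k = 1.15 as stated in the embodiment
dog = blur(signal, k * sigma) - blur(signal, sigma)
dog_kernel = np.convolve(signal,
                         gauss_kernel(k * sigma, 10) - gauss_kernel(sigma, 10),
                         mode="same")
print(np.allclose(dog, dog_kernel))    # True
```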
S13, perform curve fitting on the DoG pyramid function of the scale space to improve the stability of the keypoints; the Taylor expansion of the DoG pyramid function in scale space is
D(X) = D + (∂D^T/∂X) X + (1/2) X^T (∂²D/∂X²) X,
where D^T is the transpose of D, X^T is the transpose of X, and the extremum of D(X) gives the precise position of the keypoint;
s14, aiming at the key points detected in the DOG Gaussian difference pyramid function, acquiring the gradient and direction distribution characteristics of pixels in a 3 sigma neighborhood window of the Gaussian difference pyramid image where the key points are located, counting the gradient and direction of the pixels in the neighborhood by using a histogram after calculation is completed, and taking the maximum value in the histogram as the main direction of the key points.
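The main-direction step in S14 reduces to a magnitude-weighted orientation histogram. A minimal sketch follows; the 36-bin count is the common SIFT choice, assumed here rather than stated in the text:

```python
import numpy as np

# Sketch of the S14 main-direction step: histogram the gradient angles of the
# neighbourhood pixels, weighted by gradient magnitude, and take the peak bin
# as the keypoint's main direction. The 36-bin count is an assumption.

def main_direction(magnitudes, angles_deg, n_bins=36):
    """Return the centre angle of the histogram's peak bin, in degrees."""
    bins = (np.asarray(angles_deg) % 360 / (360 / n_bins)).astype(int)
    hist = np.zeros(n_bins)
    np.add.at(hist, bins, magnitudes)   # magnitude-weighted accumulation
    peak = int(hist.argmax())
    return peak * (360 / n_bins)

mag = [1.0, 2.0, 5.0, 1.5]
ang = [12.0, 14.0, 95.0, 270.0]
print(main_direction(mag, ang))   # 90.0  (the 5.0-magnitude gradient dominates)
```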
In the step S1, the specific method for identifying the source image and verifying the identity information includes:
S15, establish a keypoint descriptor set for the template (reference) image and for the real-time (updated) image respectively, and complete source image identification by comparing the keypoint descriptors of the two point sets; the similarity measure for the 128-dimensional descriptors is the Euclidean distance.
The descriptor of keypoint i in the template image is R_i = (r_i1, r_i2, …, r_i128);
the descriptor of keypoint i in the real-time image is S_i = (s_i1, s_i2, …, s_i128),
where i is the keypoint index and j the dimension index; a matched descriptor pair d(R_i, S_i) must satisfy: the ratio of the distance from R_i to the nearest point S_j in the real-time image to the distance from R_i to the second-nearest point S_p in the real-time image is less than Threshold, where Threshold is a threshold value;
and S16, match the keypoints using a kd-tree data structure to complete the search: taking the keypoints of the target image as reference, search for the source image feature point nearest to each target image feature point and the source image feature point second-nearest to it, completing identity verification.
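The matching rule of S15 and S16 can be sketched as follows. A brute-force nearest-neighbour search stands in for the kd-tree described above, and the 0.8 ratio threshold is the conventional choice for this test rather than the patent's value:

```python
import numpy as np

# Sketch of the nearest/second-nearest ratio test on 128-D descriptors:
# accept a match only when the nearest distance is much smaller than the
# second-nearest one. Brute force replaces the kd-tree for brevity.

def ratio_test_matches(template, realtime, threshold=0.8):
    """Return (i, j) index pairs that pass the distance-ratio test."""
    matches = []
    for i, r in enumerate(template):
        d = np.linalg.norm(realtime - r, axis=1)   # Euclidean distances
        j, p = np.argsort(d)[:2]                   # nearest, second nearest
        if d[j] / d[p] < threshold:
            matches.append((i, int(j)))
    return matches

rng = np.random.default_rng(0)
realtime = rng.normal(size=(5, 128))
template = realtime[[2]] + 0.01 * rng.normal(size=(1, 128))  # near-copy of point 2
print(ratio_test_matches(template, realtime))   # [(0, 2)]
```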
In step S2, the specific method of source image detection and correction processing is as follows:
S21, detect the human eyes with the positioning algorithm, which returns the coordinates (x_0, y_0) of the upper-left corner of the eye-region rectangle together with the width w and height h of the rectangle; from these, compute the pupil coordinates (x_1, y_1) as the centre of the rectangle, x_1 = x_0 + w/2 and y_1 = y_0 + h/2, and then compute the angle between the line connecting the two pupils and the horizontal direction from the pupil coordinates;
S22, determine whether the head tilts to the left or to the right: let the left pupil coordinate be A(x_A, y_A) and the right pupil coordinate be B(x_B, y_B) in the pixel coordinate system; when both the abscissa and the ordinate of the left eye are smaller than those of the right eye, i.e. x_A < x_B and y_A < y_B, rotate the image counterclockwise by the rotation angle; when the abscissa of the left eye is smaller than that of the right eye but its ordinate is greater, i.e. x_A < x_B and y_A > y_B, rotate the image clockwise by the rotation angle;
S23, translate the rotation centre (the midpoint of the line connecting the two eyes) to the coordinate origin, rotate by the rotation angle, translate the origin back to the rotation centre, and solve the affine transformation matrix T.
Any point (x, y) in the image is denoted (x', y') after the affine transformation; in homogeneous coordinates the transformation is
[x', y', 1]^T = T [x, y, 1]^T.
The translation matrix M that moves the rotation centre to the coordinate origin is
M = [[1, 0, -center_x], [0, 1, -center_y], [0, 0, 1]],
where center_x is the x coordinate of the centre point and center_y is its y coordinate.
The rotation matrix R that rotates the image by an angle θ about the origin is
R = [[cos θ, -sin θ, 0], [sin θ, cos θ, 0], [0, 0, 1]].
The translation matrix M' that moves the rotation centre from the coordinate origin back to its original position is
M' = [[1, 0, center_x], [0, 1, center_y], [0, 0, 1]].
Combining the above, the solved affine transformation matrix is T = M' R M.
and S24, performing a matrix operation on the image according to the solved affine transformation matrix to complete the rotation, automatically rotating the tilted head to a front-facing orientation.
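The composition T = M' R M described in S23 can be checked numerically. The sketch below uses the mathematical y-up convention; with pixel coordinates (y pointing down) the sign of the rotation angle flips:

```python
import numpy as np

# Sketch of the S23 composition: translate the eye-line centre to the origin,
# rotate by theta, translate back. Rotating both eye points by the negated
# eye-line angle should leave them on one horizontal line.

def affine_about(center, theta):
    cx, cy = center
    M  = np.array([[1, 0, -cx], [0, 1, -cy], [0, 0, 1]], dtype=float)
    R  = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
    Mp = np.array([[1, 0, cx], [0, 1, cy], [0, 0, 1]], dtype=float)
    return Mp @ R @ M                       # T = M' R M

A, B = np.array([0.0, 0.0]), np.array([4.0, 2.0])   # tilted eye pair
center = (A + B) / 2
theta = -np.arctan2(B[1] - A[1], B[0] - A[0])       # level the eye line
T = affine_about(center, theta)
Ap = T @ np.array([*A, 1.0])
Bp = T @ np.array([*B, 1.0])
print(np.isclose(Ap[1], Bp[1]))   # True: both eyes on one horizontal line
```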
In step S4, the face image is beautified with the bi-exponential edge-preserving smoothing (BEEPS) filter; the algorithm flow is as follows: perform a forward recursion pass over the original image, then a backward recursion pass, and weight and combine the two results according to the combination formula to obtain the portrait beautification,
where φ[k] denotes the result of the forward recursion, φ̄[k] the result of the backward recursion, and λ the weighting coefficient.
In step S5, the formula for the color cast factor is K = D / C, where D is the average chroma, obtained by computing the per-pixel mean values of the a and b components and taking them as the centre coordinates of the equivalent circle, and C is the chroma centre, obtained by computing the average deviation of the image on the a and b components.
In step S5, the specific sharpness detection method is: first low-pass filter the image under evaluation to obtain a blurred copy, extract the gradient information of the image under evaluation and of the blurred copy with the Sobel operator, denote the resulting gradient images by G and G', and find the N image blocks with the richest gradient information in G; the structural sharpness NRSS is then computed as
NRSS = 1 - (1/N) Σ_{i=1}^{N} SSIM(x_i, y_i),
where SSIM(x_i, y_i) is the structural similarity of the i-th image block and (x_i, y_i) denotes the pixel-point sets of the i-th block.
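Given the per-block similarity values, the NRSS combination is one line. A hedged sketch with hypothetical SSIM values (the SSIM computation itself is omitted):

```python
# Sketch of the NRSS combination: NRSS = 1 - (1/N) * sum(SSIM_i) over the N
# most gradient-rich blocks. Sharp images change more under blurring, so
# their block similarities drop and NRSS rises. SSIM values are hypothetical.

def nrss(block_ssims):
    n = len(block_ssims)
    return 1 - sum(block_ssims) / n

sharp_blocks  = [0.30, 0.35, 0.28, 0.33]   # hypothetical SSIM values
blurry_blocks = [0.95, 0.97, 0.96, 0.94]
print(nrss(sharp_blocks) > nrss(blurry_blocks))   # True
```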
The certificate photo generation and detection system based on intelligent three-dimensional portrait acquisition comprises: a source image acquisition and processing module for acquiring the source image, verifying identity information, and detecting and correcting the source image; a certificate photo generation module for size cropping, background replacement and portrait beautification; and a certificate photo quality detection module for color cast, sharpness and exposure detection. The source image acquisition and processing module performs acquisition and processing with a Huashi 100UC card reader and a Kinect DK camera; the certificate photo generation module and the quality detection module run as a computer program based on a memory and a processor. After the user completes the camera connection in the reset-camera menu, the steps of the method can be carried out.
The beneficial effects of the invention are:
1) Aiming at the long cropping time and low efficiency of traditional size-cropping methods, the invention provides a cropping method based on human eye positioning. Most size-cropping methods based on image segmentation first segment the portrait accurately and then extract the boundary of the whole portrait; although such segmentation is precise, it takes too long and greatly harms the user experience, which the proposed method avoids.
2) Head posture detection is performed on the input portrait photo; if the head is tilted, the photo is re-shot or the tilt is corrected. The generated standard-size certificate photo is segmented to extract the human subject, using a portrait clustering algorithm based on the sparrow search algorithm (SSA) combined with the K-Means method, which yields more accurate clustering segmentation and completes the background replacement. After generation, the system evaluates the photo quality, including detection of size parameters, sharpness, camera exposure, color cast and facial highlight regions, conveniently and quickly verifying whether the certificate photo is qualified. The method remedies the failures and non-conforming photos caused by the missing source image quality pre-detection process, and at the same time solves the inefficiency caused by operations with a low degree of automation.
Drawings
FIG. 1 is a general flow chart of the detection method of the present invention;
FIG. 2 is a flowchart of the source image detection and correction process in the detection method of the present invention;
FIG. 3 is a flow chart of the sizing of a verification photograph image in the detection method of the present invention;
FIG. 4 is a flow chart of the sparrow-K-means algorithm in the detection method of the present invention.
Detailed Description
The invention is further explained below with reference to the figures and the embodiments.
The embodiment is as follows: as shown in fig. 1, the invention provides a certificate photo generation and detection method based on intelligent three-dimensional portrait acquisition, which comprises the following steps:
s1, acquiring a source image and verifying identity information:
reading the user's identity card information with a Huashi 100UC card reader, acquiring a three-dimensional portrait with a Kinect DK camera developed through the Kinect SDK, and performing SIFT feature matching (scale-space extremum detection, accurate keypoint localization and main direction assignment) to complete source image identification and identity information verification;
the specific method for detecting the extreme value of the scale space, accurately positioning the key point and distributing the main direction comprises the following steps:
S11, search all scale positions on the source image; the scale space of an image is defined as the convolution of a variable-scale Gaussian function with the source image, and the scale space L(x, y, σ) is
L(x, y, σ) = G(x, y, σ) * I(x, y), with G(x, y, σ) = (1/(2πσ²)) * exp(-((x - m/2)² + (y - n/2)²)/(2σ²)),
where G(x, y, σ) is the Gaussian function, I(x, y) is the source image, (x, y) is the position of an image pixel, m and n are the dimensions of the Gaussian template, and σ is the scale-space factor: the smaller σ is, the less the image is smoothed and the smaller the corresponding scale; small scales correspond to the detail features of the image, while large scales correspond to its overall appearance;
S12, identify keypoints that are invariant to scale and rotation through the scale-space function, apply Gaussian blur and downsampling (i.e. alternate-point sampling) at different scales to the image, and perform local extremum detection with the difference-of-Gaussians (DoG) pyramid function, computed as
D(x, y, σ) = (G(x, y, kσ) - G(x, y, σ)) * I(x, y) = L(x, y, kσ) - L(x, y, σ),
where k is fixed at 1.15 in the SIFT operator;
S13, perform curve fitting on the DoG pyramid function in scale space to improve the stability of the keypoints; the Taylor expansion of the DoG pyramid function in scale space is
D(X) = D + (∂D^T/∂X) X + (1/2) X^T (∂²D/∂X²) X,
where D(X) gives the precise position of the keypoint, D^T is the transpose of D and X^T the transpose of X; the fitted offset X̂ = -(∂²D/∂X²)⁻¹ (∂D/∂X) represents the offset from the interpolation centre. When the offset in any dimension is greater than 0.5, the interpolation centre has already shifted to a neighbouring point, so the position of the current keypoint must be changed and the sub-pixel interpolation repeated at the new position until convergence; this process yields the precise keypoint position D(X̂);
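The sub-pixel refinement in S13 amounts to a small quadratic fit. A 1-D sketch follows (illustrative: real SIFT solves the 3-D case over x, y and σ, and iterates when the offset exceeds 0.5):

```python
# Sketch of the S13 sub-pixel refinement in one dimension: fit a quadratic to
# three DoG samples and solve offset = -g / h (gradient g, second derivative h).
# In SIFT proper, the offset is the 3-D solve -(Hessian)^(-1) * gradient.

def subpixel_offset(values):
    """values: 3 samples [f(-1), f(0), f(1)] along one dimension."""
    g = (values[2] - values[0]) / 2.0            # central-difference gradient
    h = values[2] - 2.0 * values[1] + values[0]  # second difference
    return -g / h

# A parabola with its true extremum at x = 0.3, sampled at -1, 0, 1:
f = lambda x: -(x - 0.3) ** 2
offset = subpixel_offset([f(-1), f(0), f(1)])
print(round(offset, 6))   # 0.3
```

Because the samples come from an exact parabola, the recovered offset is exact; on real DoG data it is an approximation, which is why the iteration described above is needed.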
s14, aiming at the key points detected in the DOG Gaussian difference pyramid function, the gradient and direction distribution characteristics of pixels in a 3 sigma neighborhood window of the Gaussian difference pyramid image where the key points are located are collected, after calculation is completed, the gradient and direction of the pixels in the neighborhood are counted by using a histogram, and the maximum value in the histogram is used as the main direction of the key point.
The specific method for identifying the source image and verifying the identity information comprises the following steps:
S15, establish a keypoint descriptor set for the template (reference) image and for the real-time (updated) image respectively, and complete source image identification by comparing the keypoint descriptors of the two point sets; the similarity measure for the 128-dimensional descriptors is the Euclidean distance.
The descriptor of keypoint i in the template image is R_i = (r_i1, r_i2, …, r_i128);
the descriptor of keypoint i in the real-time image is S_i = (s_i1, s_i2, …, s_i128),
where i is the keypoint index and j the dimension index; a matched descriptor pair d(R_i, S_i) must satisfy: the ratio of the distance from R_i to the nearest point S_j in the real-time image to the distance from R_i to the second-nearest point S_p in the real-time image is less than Threshold, which is empirically set to 1.2;
and S16, match the keypoints using a kd-tree data structure to complete the search: taking the keypoints of the target image as reference, search for the source image feature point nearest to each target image feature point and the source image feature point second-nearest to it, completing identity verification.
S2, source image detection and correction:
as shown in fig. 2, read the three-dimensional portrait that passed verification in step S1 and detect the head posture: first detect the human eyes with a positioning algorithm to accurately locate the pupil positions, solve the angle between the line connecting the two eyes and the horizontal direction, judge from prior knowledge whether the head is deflected to the left or to the right, and correct the image by affine transformation according to the solved rotation affine matrix;
the specific method for source image detection and correction processing comprises the following steps:
S21, detect the human eyes with the positioning algorithm, which returns the coordinates (x_0, y_0) of the upper-left corner of the eye-region rectangle together with the width w and height h of the rectangle; from these, compute the pupil coordinates (x_1, y_1) as the centre of the rectangle, x_1 = x_0 + w/2 and y_1 = y_0 + h/2, and then compute the angle between the line connecting the two pupils and the horizontal direction from the pupil coordinates;
S22, determine whether the head tilts to the left or to the right: let the left pupil coordinate be A(x_A, y_A) and the right pupil coordinate be B(x_B, y_B) in the pixel coordinate system; when both the abscissa and the ordinate of the left eye are smaller than those of the right eye, i.e. x_A < x_B and y_A < y_B, rotate the image counterclockwise by the rotation angle; when the abscissa of the left eye is smaller than that of the right eye but its ordinate is greater, i.e. x_A < x_B and y_A > y_B, rotate the image clockwise by the rotation angle;
S23, translate the rotation centre (the midpoint of the line connecting the two eyes) to the coordinate origin, rotate by the rotation angle, translate the rotation centre back from the coordinate origin, and solve the affine transformation matrix T;
and S24, performing a matrix operation on the image according to the solved affine transformation matrix to complete the rotation, automatically rotating the tilted head to a front-facing orientation.
S3, cutting the size of the identification photo image:
determining the scaling ratio R_s of the image corrected in step S2 from the standard eye distance E_sd required by the certificate photo type, and scaling the corrected image accordingly; the expression of the scaling ratio is R_s = E_sd / E_d, where E_d is the eye distance of the corrected image;
solving the cropping coordinates (X_p, Y_p) of the upper-left corner of the cropping frame from the standard head-top distance Y_st required by the certificate photo type and the width W_sd and height H_sd of the standard certificate photo, and automatically cropping the scaled image; the cropping coordinates are computed as X_p = ((X_0 + X_1)/2)*R_s - W_sd/2 and Y_p = Y_t*R_s - Y_st, where (X_0, Y_0) and (X_1, Y_1) are the coordinates of the left and right eyes respectively and Y_t is the head-top distance;
the size cutting comprises two modes, automatic cutting and manual cutting: automatic cutting is shown in figure 3, while manual cutting draws the cutting frames of the different certificate photo types interactively with the mouse.
S4, background replacement and portrait beautification of the identification photo image:
as shown in fig. 4, segmenting the automatically cut portrait foreground and background from step S3 with a K-means algorithm optimized by the sparrow search algorithm: first the sparrow algorithm searches for the globally optimal clustering centers, which are then passed to the K-means algorithm for clustering; the background cluster center is used in a mask operation to obtain the portrait foreground, and a color is selected to fill the background, completing the background replacement of the image;
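The clustering-plus-mask step can be illustrated with plain two-cluster K-means; this is a stand-in for the sparrow-optimised variant in the text (the sparrow search only supplies better initial centres, while the clustering and masking steps are the same). The deterministic initialisation and the border heuristic for picking the background cluster are my own choices:

```python
import numpy as np

def kmeans_background_mask(img, iters=10):
    """Two-cluster K-means over pixel colours, then a foreground mask
    built from the background cluster."""
    h, w, c = img.shape
    X = img.reshape(-1, c).astype(float)
    centres = np.stack([X.min(0), X.max(0)])       # deterministic init (assumption)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in range(2):
            if np.any(labels == k):
                centres[k] = X[labels == k].mean(axis=0)
    lab = labels.reshape(h, w)
    # Heuristic: the background cluster dominates the image border.
    border = np.concatenate([lab[0], lab[-1], lab[:, 0], lab[:, -1]])
    bg = np.bincount(border).argmax()
    return lab != bg                               # True = portrait foreground

# Synthetic check: dark "portrait" square on a light background.
img = np.full((20, 20, 3), 240, np.uint8)
img[5:15, 5:15] = 30
mask = kmeans_background_mask(img)
print(bool(mask[10, 10]), bool(mask[0, 0]))  # True False
```

Filling `img[~mask]` with a solid colour then completes the background replacement.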
smoothing the face image automatically cut in step S3 with the BEEPS filtering method (a bi-exponential edge-preserving smoothing filter), retaining the details of the eyes and edges so that the face is beautified without losing the realism of the original image;
the algorithm flow is as follows: carry out a forward recursion operation on the original image, then a backward recursion operation, and combine the two results with a weighting formula to obtain the beautified portrait;
in the formula, φ[k] is the backward recursion operation result, λ is the weighting coefficient, and the remaining term is the forward recursion operation result.
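The forward/backward recursion with weighted combination can be sketched in 1-D. The exact combination formula and range kernel are lost (the equation survives only as an image), so the rule out = (f + b − λ·x) / (2 − λ) and the Gaussian range weight below are plausible BEEPS-style choices, not the patent's stated formula:

```python
import numpy as np

def beeps_1d(x, lam=0.4, sigma=30.0):
    """1-D sketch of a bi-exponential edge-preserving smoother:
    a forward recursion, a backward recursion, and a weighted
    combination of the two.  Combination rule and range kernel
    are assumptions."""
    n = len(x)
    f = np.empty(n); b = np.empty(n)
    f[0] = x[0]
    for k in range(1, n):                      # forward recursion
        r = np.exp(-((x[k] - f[k - 1]) ** 2) / (2 * sigma ** 2))
        a = lam * r                            # edge-stopping weight
        f[k] = (1 - a) * x[k] + a * f[k - 1]
    b[-1] = x[-1]
    for k in range(n - 2, -1, -1):             # backward recursion
        r = np.exp(-((x[k] - b[k + 1]) ** 2) / (2 * sigma ** 2))
        a = lam * r
        b[k] = (1 - a) * x[k] + a * b[k + 1]
    return (f + b - lam * x) / (2 - lam)       # weighted combination

# Flat regions pass through unchanged; a sharp step is preserved because
# the range kernel shuts the recursion off at the edge.
print(bool(np.allclose(beeps_1d(np.full(32, 5.0)), 5.0)))  # True
```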
S5, quality detection of the identification photo image:
respectively carrying out the three quality detections of color cast, definition and exposure on the certificate photo image after the background replacement and portrait beautification of step S4; the color cast detection method comprises: after converting the image from RGB format to Lab format, solving the color cast factor K with the equivalent-circle color cast detection method, the possibility of a color cast in the whole image being low when the K value is not more than 1.5;
the color cast factor K is calculated as K = D/C; in the formula, D is the average chromaticity, obtained by averaging the a and b components pixel by pixel to give the center coordinates of the equivalent circle, and C is the chromaticity center distance, obtained from the average deviation of the image on the a and b components.
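A sketch of the equivalent-circle computation on the Lab a/b planes (the RGB-to-Lab conversion is assumed done beforehand, with a and b centred at 0; the decomposition of K into mean radius over average deviation follows the description):

```python
import numpy as np

def color_cast_factor(a, b):
    """Equivalent-circle colour-cast factor K = D / C.
    D: distance of the mean chromaticity from the neutral axis;
    C: average chromatic deviation (equivalent-circle radius).
    K <= 1.5 is read as 'no significant cast', per the text."""
    d_a, d_b = a.mean(), b.mean()
    D = np.hypot(d_a, d_b)                 # mean chromaticity radius
    M_a = np.abs(a - d_a).mean()           # average deviation on a
    M_b = np.abs(b - d_b).mean()           # average deviation on b
    C = np.hypot(M_a, M_b)
    return D / C

# Chromaticity scattered around neutral -> small K; everything shifted
# toward +a (reddish cast) -> large K.
rng = np.random.default_rng(1)
a = rng.normal(0, 10, (64, 64))
b = rng.normal(0, 10, (64, 64))
print(color_cast_factor(a, b) < 1.5, color_cast_factor(a + 40, b) > 1.5)
```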
The definition detection method is as follows: first low-pass filter the image to be evaluated to obtain a blurred picture; extract the gradient information of the image to be evaluated and of the blurred picture with the Sobel operator, and denote their gradient images by G; find the N image blocks with the richest gradient information in the gradient image G; calculate the structural similarity SSIM between the image to be evaluated and the blurred picture, and obtain from it the no-reference structural definition NRSS; when NRSS is not less than 1, the definition of the whole image is considered qualified;
the structural definition NRSS is calculated as NRSS = 1 − (1/N) Σ_{i=1..N} SSIM(x_i, y_i); in the formula, SSIM(x_i, y_i) is the structural similarity of the ith image block, and x_i and y_i are the pixel point sets of the ith image block in the two gradient images.
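A self-contained NRSS sketch under stated assumptions: a 3×3 box blur stands in for the unspecified low-pass filter, the Sobel kernels are hand-rolled, and the SSIM constants are the conventional ones for 8-bit images. Sharper images drift further from their blurred copy and so score higher:

```python
import numpy as np

def sobel_grad(img):
    """Gradient magnitude with 3x3 Sobel kernels (edge-padded)."""
    p = np.pad(img.astype(float), 1, mode="edge")
    gx = (p[:-2, 2:] + 2 * p[1:-1, 2:] + p[2:, 2:]
          - p[:-2, :-2] - 2 * p[1:-1, :-2] - p[2:, :-2])
    gy = (p[2:, :-2] + 2 * p[2:, 1:-1] + p[2:, 2:]
          - p[:-2, :-2] - 2 * p[:-2, 1:-1] - p[:-2, 2:])
    return np.hypot(gx, gy)

def ssim(x, y, c1=6.5025, c2=58.5225):
    """Single-window SSIM between two equally sized blocks."""
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (x.var() + y.var() + c2))

def nrss(img, N=8, block=8):
    """NRSS = 1 - mean SSIM over the N highest-variance gradient blocks
    of the image vs. its low-pass-filtered copy."""
    p = np.pad(img.astype(float), 1, mode="edge")
    blur = sum(p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0   # box low-pass
    G, Gr = sobel_grad(img), sobel_grad(blur)
    blocks = []
    for i in range(0, G.shape[0] - block + 1, block):
        for j in range(0, G.shape[1] - block + 1, block):
            blocks.append((G[i:i + block, j:j + block].var(), i, j))
    blocks.sort(reverse=True)                                # richest gradients first
    s = [ssim(G[i:i + block, j:j + block], Gr[i:i + block, j:j + block])
         for _, i, j in blocks[:N]]
    return 1 - np.mean(s)

sharp = 255.0 * ((np.indices((32, 32)).sum(0) // 4) % 2)  # checkerboard
flat = np.full((32, 32), 128.0)
print(nrss(flat) < nrss(sharp))  # True: the sharp image scores higher
```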
The exposure detection method is as follows: convert the input image from the RGB color space to the HSV color space and judge the exposure degree from the pixel mean of the V component: if the mean is above 40, the image is judged to be overexposed; if it is below 17, the exposure is judged to be insufficient; otherwise the exposure is qualified.
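The V-channel thresholding is straightforward to sketch. The cut-offs 40 and 17 are reproduced from the text, but the patent does not state the V scale they assume; mapping V to [0, 100] here is my assumption:

```python
import numpy as np

def exposure_grade(img_rgb):
    """Exposure check from step S5: mean of the HSV V channel
    (V = max(R, G, B)), compared against the text's thresholds.
    V is scaled to [0, 100] -- an assumption, since the patent
    does not state the scale."""
    v = img_rgb.astype(float).max(axis=2) / 255.0 * 100.0  # V channel of HSV
    mean_v = v.mean()
    if mean_v > 40:
        return "overexposed"
    if mean_v < 17:
        return "underexposed"
    return "qualified"

print(exposure_grade(np.full((8, 8, 3), 250, np.uint8)))  # overexposed
print(exposure_grade(np.full((8, 8, 3), 20, np.uint8)))   # underexposed
print(exposure_grade(np.full((8, 8, 3), 80, np.uint8)))   # qualified
```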
The system comprises a source image acquisition and processing module for acquiring the source image, verifying identity information, and detecting and correcting the source image; a certificate photo generation module for cutting the certificate photo image to size, replacing the background and beautifying the portrait; and a certificate photo quality detection module for detecting the color cast, definition and exposure of the certificate photo image. The source image acquisition and processing module uses a Huashi 100UC card reader and a Kinect DK camera for acquisition and processing, the certificate photo generation module and the certificate photo quality detection module are implemented as a computing program based on a memory and a processor, and the user can carry out the above steps after completing the camera connection in the reset-camera menu.
Aiming at the long cutting time and low cutting efficiency of traditional size cutting methods, the invention provides a cutting method based on human eye positioning. Most size cutting methods based on image segmentation process the segmented portrait and extract the boundary of the whole portrait; although that segmentation is more precise, it takes far too long and greatly harms the user experience.
The above description is only for the purpose of illustrating the technical solutions of the present invention and not for the purpose of limiting the same, and other modifications or equivalent substitutions made by those skilled in the art to the technical solutions of the present invention should be covered within the scope of the claims of the present invention without departing from the spirit and scope of the technical solutions of the present invention.
Claims (8)
1. A certificate photo generation and detection method based on intelligent three-dimensional portrait acquisition is characterized in that: the method comprises the following steps:
s1, acquiring a source image and verifying identity information:
reading the user's identity card information with a Huashi 100UC card reader, developing the Kinect DK camera through the Kinect SDK to collect a three-dimensional portrait, and performing SIFT feature matching by scale-space extremum detection, accurate key point localization and principal direction assignment, thereby completing source image identification and identity information verification;
s2, source image detection and correction:
reading the three-dimensional portrait that passed verification in step S1 and detecting the head posture: first detecting the human eyes with a positioning algorithm to accurately locate the pupil positions, then solving the included angle between the line connecting the two eyes and the horizontal direction, judging from prior knowledge whether the head deflects to the left or to the right, and correcting the image by affine transformation according to the solved rotation affine matrix;
s3, cutting the size of the identification photo image:
according to the standard eye distance E_sd required by the certificate photo type, determining the scaling ratio R_s of the image corrected in step S2 and scaling the corrected image; the scaling ratio is R_s = E_sd / E_d, where E_d is the eye distance of the corrected image;
according to the standard head-top distance Y_st of the certificate photo type and the width W_sd and height H_sd of the standard certificate photo, solving the top-left cutting coordinates (X_p, Y_p) of the cutting frame from these parameters and automatically cutting the scaled image; the vertical cutting coordinate is Y_p = Y_t * R_s − Y_st, where (X_0, Y_0) and (X_1, Y_1) are the left- and right-eye coordinates respectively and Y_t is the head-top distance;
s4, background replacement and portrait beautification of the identification photo image:
segmenting the portrait foreground and background automatically cut in step S3 with a K-means algorithm optimized by the sparrow search algorithm: first the sparrow algorithm searches for the globally optimal clustering centers, which are then passed to the K-means algorithm for clustering; the background cluster center is used in a mask operation to obtain the portrait foreground, and a color is selected to fill the background, completing the background replacement of the image;
smoothing the face image automatically cut in step S3 with the BEEPS filtering method (a bi-exponential edge-preserving smoothing filter), and beautifying the face while retaining the details of the eyes and edges;
s5, quality detection of the identification photo image:
respectively carrying out the three quality detections of color cast, definition and exposure on the certificate photo image after the background replacement and portrait beautification of step S4; the color cast detection method comprises: after converting the image from RGB format to Lab format, solving the color cast factor K with the equivalent-circle color cast detection method, the possibility of a color cast in the whole image being low when the K value is not more than 1.5;
the definition detection method comprises: first low-pass filtering the image to be evaluated to obtain a blurred picture, calculating the structural similarity SSIM between the image to be evaluated and the blurred picture, obtaining from it the no-reference structural definition NRSS, and considering the definition of the whole image qualified when NRSS is not less than 1;
the exposure detection method comprises: converting the input image from the RGB color space to the HSV color space and judging the exposure degree from the pixel mean of the V component: if the mean is above 40, the image is judged to be overexposed; if it is below 17, the exposure is judged to be insufficient; otherwise the exposure is qualified.
2. The identification photo generation and detection method based on intelligent three-dimensional portrait acquisition according to claim 1, characterized in that: in step S1, the specific method of scale-space extremum detection, accurate key point localization and principal direction assignment comprises:
s11, searching all scale positions on the source image, wherein the scale space of an image is defined as the convolution of a variable-scale Gaussian function with the source image; the scale space L(x, y, σ) is given by L(x, y, σ) = G(x, y, σ) * I(x, y), with the Gaussian function G(x, y, σ) = (1/(2πσ²)) exp(−((x − m/2)² + (y − n/2)²)/(2σ²));
in the formula, G(x, y, σ) is the Gaussian function, I(x, y) is the source image, (x, y) is the position of an image pixel, m and n are the dimensions of the Gaussian template, and σ is the scale-space factor: the smaller σ is, the less the image is smoothed and the smaller the corresponding scale;
s12, identifying key points invariant to scale and rotation through the scale-space function: Gaussian blur and down-sampling (sampling every other point) are applied to the image at different scales, and local extremum detection is carried out with the difference-of-Gaussian (DOG) pyramid function, which is calculated as
D(x, y, σ) = [G(x, y, kσ) − G(x, y, σ)] * I(x, y) = L(x, y, kσ) − L(x, y, σ), where k is the proportionality coefficient between adjacent scales;
s13, performing curve fitting of the DOG pyramid function in scale space to improve the stability of the key points; the Taylor expansion of the DOG function in scale space is D(X) = D + (∂D/∂X)^T X + (1/2) X^T (∂²D/∂X²) X, with X = (x, y, σ)^T;
in the formula, D^T is the transpose of D, X^T is the transpose of X, and D(X) yields the accurately located key point;
s14, for each key point detected in the DOG pyramid, collecting the gradient and direction distribution of the pixels in a 3σ neighborhood window of the Gaussian difference pyramid image where the key point lies; after the calculation is completed, the gradients and directions of the neighborhood pixels are accumulated into a histogram, and the maximum of the histogram is taken as the principal direction of the key point.
3. The intelligent three-dimensional portrait acquisition-based identification photo generation and detection method according to claim 2, characterized in that: in the step S1, the specific method for identifying the source image and verifying the identity information includes:
s15, establishing key point descriptor sets for the template (reference) image and the real-time (updated) image respectively, and completing source image identification by comparing the key point descriptors of the two point sets; the similarity of the 128-dimensional descriptors is measured by the Euclidean distance d(R_i, S_i) = sqrt(Σ_j (r_ij − s_ij)²),
the descriptor of a key point in the template image being R_i = (r_i1, r_i2, …, r_i128),
and the descriptor of a key point in the real-time image being S_i = (s_i1, s_i2, …, s_i128);
in the formula, i is the key point index and j is the dimension; a matched descriptor pair d(R_i, S_i) must satisfy that the ratio of the distance from R_i to its nearest point S_j in the real-time image to the distance from R_i to its second-nearest point S_p in the real-time image is less than Threshold, where Threshold is the matching threshold;
and S16, matching the key points with a kd-tree data structure to complete the search: taking each key point of the target image as the reference, the search finds the source image feature point closest to it and the source image feature point second closest to it, thereby completing identity verification.
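The nearest/second-nearest ratio test of S15–S16 can be sketched as follows. A brute-force search stands in for the kd-tree (same result, slower); the threshold value 0.8, the toy 4-dimensional descriptors, and the function name are assumptions for illustration:

```python
import numpy as np

def ratio_test_match(template_desc, live_desc, threshold=0.8):
    """For each template descriptor R_i, find the nearest and
    second-nearest live descriptors by Euclidean distance and accept
    the match only if their distance ratio is below the threshold."""
    matches = []
    for i, R in enumerate(template_desc):
        d = np.linalg.norm(live_desc - R, axis=1)   # Euclidean distances
        j, p = np.argsort(d)[:2]                    # nearest, second nearest
        if d[j] / d[p] < threshold:
            matches.append((i, int(j)))
    return matches

# Toy descriptors: template point 0 has one clear match; template point 1
# has two almost equally close candidates and is rejected as ambiguous.
tmpl = np.array([[1.0, 0.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0, 0.0]])
live = np.array([[1.0, 0.1, 0.0, 0.0],
                 [5.0, 5.0, 5.0, 5.0],
                 [0.0, 1.0, 0.05, 0.0],
                 [0.0, 0.99, 0.06, 0.0]])
print(ratio_test_match(tmpl, live))  # [(0, 0)]
```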
4. The intelligent three-dimensional portrait acquisition-based identification photo generation and detection method according to claim 1, characterized in that: in step S2, the specific method of source image detection and correction processing is as follows:
s21, detecting the human eyes with a positioning algorithm, which returns the top-left corner coordinates (x_0, y_0) of the rectangular human-eye region together with the width w and height h of the rectangle; the pupil coordinates (x_1, y_1) are calculated from these values, and the included angle between the line connecting the two eyes and the horizontal direction is then calculated from the pupil coordinates;
s22, judging whether the head tilts to the left or to the right: let the left eye pupil coordinate be A(x_A, y_A) and the right eye pupil coordinate be B(x_B, y_B) in the pixel coordinate system; when both coordinates of the left eye are smaller than those of the right eye, i.e. x_A &lt; x_B and y_A &lt; y_B, the image is rotated counterclockwise by the rotation angle; when the abscissa of the left eye is smaller than that of the right eye and its ordinate is greater, i.e. x_A &lt; x_B and y_A &gt; y_B, the image is rotated clockwise by the rotation angle;
s23, translating the center of the line connecting the two eyes, which serves as the rotation center, to the coordinate origin, rotating by the rotation angle, translating the origin back to the rotation center, and solving the affine transformation matrix T,
any point (x, y) in the image being recorded as (x', y') after the affine transformation, so that in homogeneous coordinates the transformation is written as (x', y', 1)^T = T (x, y, 1)^T;
the translation matrix M that moves the rotation center to the coordinate origin is
M = [[1, 0, −center_x], [0, 1, −center_y], [0, 0, 1]],
wherein center_x is the x coordinate of the center point and center_y is the y coordinate of the center point;
the rotation matrix R that rotates the image by the angle θ about the origin is
R = [[cos θ, −sin θ, 0], [sin θ, cos θ, 0], [0, 0, 1]];
the post-rotation translation matrix M' that moves the rotation center from the coordinate origin back to its home position is
M' = [[1, 0, center_x], [0, 1, center_y], [0, 0, 1]];
combining the above, the solved affine transformation matrix is T = M'RM;
and S24, applying the solved affine transformation matrix to the image by matrix operation to complete the rotation, so that a tilted head is automatically rotated into a level, front-facing pose.
5. The intelligent three-dimensional portrait acquisition-based identification photo generation and detection method according to claim 1, characterized in that: in step S4, the face image is beautified with the BEEPS filtering method (a bi-exponential edge-preserving smoothing filter), the algorithm flow being as follows: carry out a forward recursion operation on the original image, then a backward recursion operation, and combine the two results with a weighting formula to obtain the beautified portrait;
6. The intelligent three-dimensional portrait acquisition-based identification photo generation and detection method according to claim 1, characterized in that: in step S5, the color cast factor K is calculated as K = D/C; in the formula, D is the average chromaticity, obtained by averaging the a and b components pixel by pixel to give the center coordinates of the equivalent circle, and C is the chromaticity center distance, obtained from the average deviation of the image on the a and b components.
7. The intelligent three-dimensional portrait acquisition-based identification photo generation and detection method according to claim 1, characterized in that: in step S5, the specific definition detection method comprises: low-pass filtering the image to be evaluated to obtain a blurred picture, extracting the gradient information of the image to be evaluated and of the blurred picture with the Sobel operator, denoting their gradient images by G, and finding the N image blocks with the richest gradient information in the gradient image G; the structural definition NRSS is then calculated as NRSS = 1 − (1/N) Σ_{i=1..N} SSIM(x_i, y_i); in the formula, SSIM(x_i, y_i) is the structural similarity of the ith image block, and x_i and y_i are the pixel point sets of the ith image block.
8. A detection system for the certificate photo generation and detection method based on intelligent three-dimensional portrait acquisition, characterized in that: the system comprises a source image acquisition and processing module for acquiring the source image, verifying identity information, and detecting and correcting the source image; a certificate photo generation module for cutting the certificate photo image to size, replacing the background and beautifying the portrait; and a certificate photo quality detection module for detecting the color cast, definition and exposure of the certificate photo image, wherein the source image acquisition and processing module uses a Huashi 100UC card reader and a Kinect DK camera for acquisition and processing, the certificate photo generation module and the certificate photo quality detection module are implemented as a computing program based on a memory and a processor, and the user can carry out the method steps of claims 1-7 after completing the camera connection in the reset-camera menu.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211260916.1A CN115620364A (en) | 2022-10-14 | 2022-10-14 | Certificate photo generation and detection method and system based on intelligent three-dimensional portrait acquisition |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115620364A true CN115620364A (en) | 2023-01-17 |
Family
ID=84862988
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117412158A (en) * | 2023-10-09 | 2024-01-16 | 广州翼拍联盟网络技术有限公司 | Method, device, equipment and medium for processing photographed original image into multiple credentials |
CN117412158B (en) * | 2023-10-09 | 2024-09-03 | 广州翼拍联盟网络技术有限公司 | Method, device, equipment and medium for processing photographed original image into multiple credentials |
CN118196439A (en) * | 2024-05-20 | 2024-06-14 | 山东浪潮科学研究院有限公司 | Certificate photo color auditing method based on visual language model and multiple agents |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||