CN108108685B - Method and device for carrying out face recognition processing - Google Patents

Publication number
CN108108685B
CN108108685B (application CN201711363025.8A)
Authority
CN
China
Prior art keywords
image
face
pixel point
descreened
determining
Prior art date
Legal status
Active
Application number
CN201711363025.8A
Other languages
Chinese (zh)
Other versions
CN108108685A (en)
Inventor
范晓 (Fan Xiao)
Current Assignee
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd
Priority to CN201711363025.8A
Publication of CN108108685A
Application granted
Publication of CN108108685B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/162 Detection; Localisation; Normalisation using pixel segmentation or colour matching
    • G06V40/168 Feature extraction; Face representation

Abstract

The disclosure relates to a method and a device for carrying out face recognition processing, and belongs to the technical field of face recognition. The method comprises the following steps: processing a plurality of face images that do not contain a reticulate pattern based on an image principal component analysis algorithm to obtain an average image and image principal component information corresponding to the plurality of face images; when a face recognition instruction is received, acquiring a reference face image containing a reticulate pattern for face recognition, and determining a descreened image corresponding to the reference face image according to the reference face image, the average image and the image principal component information; and performing face recognition processing on the acquired face image based on the descreened image. By adopting the method and the device, the accuracy of face recognition can be improved.

Description

Method and device for carrying out face recognition processing
Technical Field
The present disclosure relates to the field of face recognition technologies, and in particular, to a method and an apparatus for performing face recognition processing.
Background
Because it offers a contactless mode of identity authentication that is both accurate and convenient, face recognition technology is attracting attention in many aspects of daily life.
Face recognition technology is also gradually being applied in the financial field, for example to handle real-name services. A typical application: when a user transacts business at an automated teller machine, after the user inserts a bank card, the machine performs verification based on a user face image captured on site and the face image on the identity card used when the bank card was issued; if the two are consistent, the user may transact business at the machine.
In carrying out the present disclosure, the inventors found that at least the following problems exist:
in order to prevent hackers and the like from stealing identity card images for illegal activities, the identity card image queried from the database carries an added reticulate pattern. This pattern strongly interferes with the device during face recognition, so the accuracy of face recognition is reduced.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides a method and apparatus for performing a face recognition process. The technical scheme is as follows:
according to an embodiment of the present disclosure, there is provided a method of performing face recognition processing, the method including:
processing a plurality of face images without reticulate patterns based on an image principal component analysis algorithm to obtain average images and image principal component information corresponding to the plurality of face images;
when a face recognition instruction is received, acquiring a reference face image containing a reticulate pattern for face recognition, and determining a descreened image corresponding to the reference face image according to the reference face image, the average image and the image principal component information;
and performing face recognition processing on the acquired face image based on the descreened image.
Optionally, the determining, according to the reference face image, the average image, and the image principal component information, a descreened image corresponding to the reference face image includes:
determining a feature corresponding to the reference face image according to the reference face image, the average image and the image principal component information;
and determining a descreened image corresponding to the reference face image according to the features, the average image and the image principal component information.
Optionally, the determining, according to the reference face image, the average image, and the image principal component information, a feature corresponding to the reference face image includes:
according to the formula Xs = P^T × (I0 - Im), determining a feature Xs corresponding to the reference face image, wherein I0 is the reference face image, Im is the average image, and the matrix P is the image principal component information;
determining a descreened image corresponding to the reference face image according to the features, the average image and the image principal component information includes:
according to the formula Ir = (P × Xs) + Im, determining a descreened image Ir corresponding to the reference face image.
Optionally, the performing, based on the descreened image, a facial recognition process includes:
determining a difference image according to the absolute value of the difference between the alignment pixel points of the reference face image and the descreened image;
determining a synthesized image based on the reference facial image, the descreened image and the differential image, wherein for a pixel point i in the synthesized image, if a first pixel value of a pixel point corresponding to the pixel point i in the differential image is greater than a preset threshold value, a second pixel value of the pixel point corresponding to the pixel point i in the descreened image is determined as the pixel value of the pixel point i, and if the first pixel value of the pixel point corresponding to the pixel point i in the differential image is not greater than the preset threshold value, a third pixel value of the pixel point corresponding to the pixel point i in the reference facial image is determined as the pixel value of the pixel point i;
and performing face recognition processing on the acquired face image based on the composite image.
Optionally, the method further includes:
acquiring a plurality of initial face images without reticulate patterns, and respectively intercepting a rectangular face area image in each initial face image;
and respectively scaling each face area image to a preset size to obtain a plurality of face images without reticulate patterns.
Optionally, the method further includes:
acquiring an initial reference face image containing a reticulate pattern, and intercepting a rectangular face area image in the initial reference face image;
and scaling the facial area image to a preset size to obtain the reference facial image containing the reticulate pattern.
According to an embodiment of the present disclosure, there is provided an apparatus for performing face recognition processing, the apparatus including:
the processing module is used for processing a plurality of face images without reticulate patterns based on an image principal component analysis algorithm to obtain average images and image principal component information corresponding to the plurality of face images;
the determining module is used for acquiring a reference face image containing a reticulate pattern for carrying out face recognition when a face recognition instruction is received, and determining a descreened image corresponding to the reference face image according to the reference face image, the average image and the image principal component information;
and the face recognition module is used for carrying out face recognition processing on the acquired face image based on the descreened image.
Optionally, the determining module includes:
a first determination unit configured to determine a feature corresponding to the reference face image based on the reference face image, the average image, and the image principal component information;
and the second determining unit is used for determining the descreened image corresponding to the reference face image according to the characteristics, the average image and the image principal component information.
Optionally, the first determining unit is further configured to, according to the formula Xs = P^T × (I0 - Im), determine a feature Xs corresponding to the reference face image, wherein I0 is the reference face image, Im is the average image, and the matrix P is the image principal component information;
the second determining unit is further configured to, according to the formula Ir = (P × Xs) + Im, determine a descreened image Ir corresponding to the reference face image.
Optionally, the face recognition module is further configured to:
determining a difference image according to the absolute value of the difference between the alignment pixel points of the reference face image and the descreened image;
determining a synthesized image based on the reference facial image, the descreened image and the differential image, wherein for a pixel point i in the synthesized image, if a first pixel value of a pixel point corresponding to the pixel point i in the differential image is greater than a preset threshold value, a second pixel value of the pixel point corresponding to the pixel point i in the descreened image is determined as the pixel value of the pixel point i, and if the first pixel value of the pixel point corresponding to the pixel point i in the differential image is not greater than the preset threshold value, a third pixel value of the pixel point corresponding to the pixel point i in the reference facial image is determined as the pixel value of the pixel point i;
and performing face recognition processing on the acquired face image based on the composite image.
Optionally, the processing module is further configured to:
acquiring a plurality of initial face images without reticulate patterns, and respectively intercepting a rectangular face area image in each initial face image;
and respectively scaling each face area image to a preset size to obtain a plurality of face images without reticulate patterns.
Optionally, the determining module is further configured to:
acquiring an initial reference face image containing a reticulate pattern, and intercepting a rectangular face area image in the initial reference face image;
and scaling the facial area image to a preset size to obtain the reference facial image containing the reticulate pattern.
According to an embodiment of the present disclosure, there is also provided a terminal, where the terminal includes a processor and a memory, where the memory stores at least one instruction, and the instruction is loaded and executed by the processor to implement the method for performing facial recognition processing.
According to an embodiment of the present disclosure, there is also provided a computer-readable storage medium, in which at least one instruction is stored, and the instruction is loaded and executed by a processor to implement the method for performing facial recognition processing described above.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
in the embodiment of the present disclosure, in the method for performing face recognition processing, a plurality of face images without textures are processed based on an image principal component analysis algorithm, so as to obtain an average image and image principal component information corresponding to the plurality of face images. Then, when a face recognition instruction is received, a reference face image containing a reticulate pattern for face recognition is obtained, and a descreened image corresponding to the reference face image is determined according to the reference face image, the average image and the image principal component information. And finally, carrying out face recognition processing on the acquired face image based on the descreened image. The collected face image is subjected to face recognition processing based on the descreened image, so that the accuracy of the face recognition technology can be improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. In the drawings:
FIG. 1 is a flow diagram illustrating a method of performing a face recognition process according to an embodiment;
FIG. 2 is a schematic diagram of a truncated rectangular face region image according to an embodiment;
FIG. 3 is a flow diagram illustrating a method of performing a face recognition process according to an embodiment;
FIG. 4 is a schematic diagram illustrating an apparatus for performing facial recognition processing according to an embodiment;
FIG. 5 is a schematic diagram illustrating an apparatus for performing facial recognition processing according to an embodiment;
FIG. 6 is a schematic diagram illustrating an apparatus for performing facial recognition processing according to an embodiment;
fig. 7 is a schematic diagram illustrating an apparatus for performing a face recognition process according to an embodiment.
With the foregoing drawings in mind, certain embodiments of the disclosure have been shown and described in more detail below. These drawings and written description are not intended to limit the scope of the disclosed concepts in any way, but rather to illustrate the concepts of the disclosure to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The embodiment of the disclosure provides a method for performing facial recognition processing, which can be implemented by a server or a terminal. The terminal may be a tablet computer, a desktop computer, a notebook computer, an automated teller machine mentioned in the background art, or the like.
The server may include a transceiver, a processor, a memory, and the like. The transceiver is used for data transmission with the terminal; for example, it may send image data to the terminal, and it may include a WiFi (Wireless Fidelity) component, an antenna, a matching circuit, a modem, and the like. The processor, which may be a CPU (Central Processing Unit), may be configured to perform processing such as running principal component analysis on a plurality of images to obtain the corresponding average image and image principal component information. The memory may be a RAM (Random Access Memory), Flash memory, and the like, and may be configured to store received data, data required by the processing procedure, and data generated during processing, such as images.
The terminal may include a processor, memory, etc. The processor, which may be a CPU (Central Processing Unit), may be configured to perform a process such as performing a principal component analysis on a plurality of images, obtaining an average image corresponding to the plurality of images and image principal component information, and the like. The Memory may be a RAM (random access Memory), a Flash (Flash Memory), and the like, and may be used to store data, data required by the processing process, data generated in the processing process, and the like, such as an image and the like.
The terminal may also include a transceiver, input components, display components, audio output components, and the like. The transceiver can be used for data transmission with the server, and may include a Bluetooth component, a WiFi (Wireless Fidelity) component, an antenna, a matching circuit, a modem, and the like. The input component may be a touch screen, keyboard, mouse, etc. The audio output component may be a speaker, headphones, or the like. In this embodiment, for convenience of description, a terminal is taken as the execution subject by way of example.
The embodiment of the disclosure provides a method for face recognition processing, which can be applied to real-name authentication. For example, when a user transacts business at an automated teller machine, after the user inserts a bank card, the machine performs verification based on a user face image captured on site and the face image on the identity card used when the bank card was issued; if the two are consistent, the user can transact business at the machine. However, the identity card image the machine acquires from the database storing identity card images may carry a reticulate pattern, which affects the accuracy of face recognition. To solve this problem, the present disclosure provides a method for performing face recognition processing; as shown in fig. 1, the process flow of the method may include the following steps:
in step 101, a plurality of face images not including a texture are processed based on an image principal component analysis algorithm, and an average image and image principal component information corresponding to the plurality of face images are obtained.
Most image data is stored in the form of a two-dimensional matrix, so images can be analyzed and processed using matrix theory and matrix algorithms. Principal Component Analysis (PCA) is a common statistical method for analyzing data; from a matrix perspective, PCA is also known as the Karhunen-Loeve (K-L) transform. Images can therefore be processed using the principal component analysis algorithm.
In implementation, before the terminal processes the images, it needs to preprocess them so that they all have the same number of rows and columns. The corresponding processing may be: acquire a plurality of initial face images that do not contain a reticulate pattern, and intercept a rectangular face region image from each initial face image; then scale each face region image to a preset size, obtaining a plurality of face images without reticulate patterns. The initial face image may be an image of any person's face. Since the face region differs from image to image, the size of the clipping rectangle depends on the face region in each image. The clipping rule may be based on the distance between two points in the face image, for example taking the distance between the two eyes in the face region as a reference and cropping to the left, right, above and below, which may be as follows:
after the terminal acquires the initial face image, the center positions of the two eyes in the initial face image may be identified first, the left-eye center position may be denoted as a, the right-eye center position may be denoted as B for convenience of description, and the distance between a and B may be determined and denoted as d. Then, as shown in fig. 2, the terminal completes the clipping of the face area in the initial face image with the straight line on the left side of a and at a vertical distance d from a in the initial face image as the left edge, the straight line on the right side of B and at a vertical distance d from B in the initial face image as the right edge, the straight line above a and at a vertical distance d from a in the initial face image as the upper edge, and the straight line below a and at a vertical distance 2d from a in the initial face image as the lower edge. Finally, the terminal scales the rectangular face area to a preset size, for example, to W and H for width and height, respectively, where the size of the width and height of each face image is the same.
The terminal preprocesses all the acquired initial face images without reticulate patterns to obtain the corresponding face images without reticulate patterns. The plurality of face images are then processed using PCA, which may proceed as follows:
First, each face image is processed into a column vector, which may be done by concatenating the pixel columns of each face image end to end. For example, for an a × b image with n = a × b pixels, the second column of data is appended to the end of the first column, the third column to the end of the second, and so on; the column vector corresponding to the matrix is thus obtained, and each face image can be represented as a column vector with n rows. Then, the terminal gathers the column vectors of all the face images into a face image set I; if the number of face images is m, I is an n × m matrix. Finally, the terminal averages all the face images, and the calculation formula may be:
Im = (1/m) × Σ(i=1..m) Ii
in the formula, Im is the average image and Ii is any one of the m face images.
After the terminal calculates the average image corresponding to all the face images, it determines the image principal component information based on the face image set, where the image principal component information is a matrix P formed by the orthonormalized eigenvectors of the face image set I; consistent with the projection formula below, P is an n × m matrix. Thus, the terminal determines the average image Im of the face image set I and the image principal component information P; these two pieces of data are stored in memory and used directly in the subsequent face recognition processing. In this procedure of determining the average image and the image principal component information with the principal component analysis algorithm, sample images (face images without reticulate patterns) are easy to acquire and the calculation process is simple.
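The flattening, averaging, and principal-component steps above can be sketched with NumPy. This is a hedged illustration: the function name, the eigendecomposition of the small m × m Gram matrix (the "snapshot" trick for n >> m), and the dropping of near-zero components are implementation assumptions, not details from the patent.

```python
import numpy as np

def build_pca_model(face_images):
    # Flatten every H x W image into one column vector (n = H*W rows),
    # concatenating pixel columns end to end, and stack them into I (n x m).
    I = np.column_stack([img.reshape(-1, order='F') for img in face_images])
    Im = I.mean(axis=1, keepdims=True)     # average image Im, n x 1
    centered = I - Im
    # Eigen-decompose the small m x m Gram matrix instead of the n x n
    # covariance, then map the eigenvectors back into image space.
    eigvals, V = np.linalg.eigh(centered.T @ centered)
    keep = eigvals > 1e-10 * eigvals.max() # drop the near-zero component
    P = centered @ V[:, keep]
    P /= np.linalg.norm(P, axis=0, keepdims=True)  # orthonormal columns
    return Im, P
```

With m training images, centering leaves at most m - 1 informative components, so P has one fewer column than there are images.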
In step 102, when a face recognition instruction is received, a reference face image containing a texture for face recognition is acquired, and a descreened image corresponding to the reference face image is determined according to the reference face image, the average image and the image principal component information.
The reference face image is a face image on the user identification card acquired by the terminal from the database.
In an implementation, the reference face image is also an image obtained by preprocessing an initial reference face image by the terminal, and the preprocessing process is the same as the preprocessing process for the face image without the texture, and accordingly, the preprocessing process may be: when a user inserts a bank card into a terminal such as an automatic teller machine and the terminal receives a face recognition instruction, the terminal firstly acquires an initial reference face image containing reticulate patterns from a database storing identity card information; then, the terminal intercepts a rectangular face area image from the initial reference face image; and finally, the terminal scales the face area image to a preset size to obtain a reference face image containing the reticulate pattern. In the preprocessing, the principle of intercepting the rectangle and the scaling of the face region image to the preset size are the same as above, and are not described herein again.
After the terminal preprocesses the initial reference face image to obtain a reference face image, it determines a descreened image corresponding to the reference face image according to the reference face image, the average image and the image principal component information, and the corresponding processing may be performed according to the flowchart shown in fig. 3:
in step 1021, features corresponding to the reference face image are determined based on the reference face image, the average image, and the image principal component information.
In practice, each image Ii can be projected onto the image principal component information, i.e., each image can be represented by a linear combination of the orthonormalized eigenvectors in the matrix P. The feature Xs is obtained by projecting the reference face image I0 onto the image principal component information, where the projection formula may be:
Xs = P^T × (I0 - Im)
in the formula: I0 is the reference face image, Im is the average image, the matrix P is the image principal component information, and P^T is the transpose of the matrix P.
In step 1022, a descreened image corresponding to the reference face image is determined based on the features, the average image, and the image principal component information.
In implementation, after the terminal determines the feature corresponding to the reference face image, it may reconstruct the reference face image based on the feature; the reconstructed image is the descreened image Ir, and the reconstruction formula is as follows:
Ir=(P×Xs)+Im
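The projection formula of step 1021 and the reconstruction formula of step 1022 translate directly into NumPy. A minimal sketch, assuming P has orthonormal columns and all images are flattened n × 1 column vectors; the function name is an illustrative assumption:

```python
import numpy as np

def descreen(I0, Im, P):
    # Xs = P^T x (I0 - Im): project the screened image onto the components.
    Xs = P.T @ (I0 - Im)
    # Ir = (P x Xs) + Im: reconstruct. The reticulate pattern, absent from
    # the training images, is largely not representable by P and falls away.
    Ir = P @ Xs + Im
    return Xs, Ir
```

A useful sanity check on the design: any image already expressible as a linear combination of the columns of P plus the average image is reconstructed exactly.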
in step 103, a face recognition process is performed on the captured face image based on the descreened image.
In this embodiment, the descreened image is an image reconstructed on the basis of the reference face image and the average image; it is therefore only an approximation of the face in the reference face image and cannot represent it exactly. To make the face in the reconstructed descreened image closer to the face in the reference face image, the corresponding processing may be:
First, the terminal determines a difference image from the absolute value of the difference between aligned pixel points of the reference face image and the descreened image; the difference image Id may be Id = ABS(Ir - I0), where ABS() denotes taking the element-wise absolute value. Then, the terminal determines a composite image based on the reference face image, the descreened image and the difference image: for a pixel point i in the composite image, if the first pixel value of the corresponding pixel point in the difference image is greater than a preset threshold, the second pixel value of the corresponding pixel point in the descreened image is taken as the value of pixel point i; if it is not greater than the preset threshold, the third pixel value of the corresponding pixel point in the reference face image is taken as the value of pixel point i. Finally, the terminal performs face recognition processing on the captured face image based on the composite image.
In implementation, since the reference face image contains the reticulate pattern while the descreened image does not, positions where the difference exceeds the preset threshold belong to the reticulate region, and positions where it does not are free of the pattern. Reticulate positions are therefore filled from the descreened image, and pattern-free positions are filled from the reference face image, so the resulting composite image contains no reticulate pattern and is closer to the reference face image. The synthesis at each position of the composite image can be expressed by the formula:
In(i) = Ir(i), if Id(i) > A
In(i) = I0(i), if Id(i) <= A
in the formula: In is the composite image and A is the predetermined threshold.
In this way, after the terminal determines the composite image by the above method, the captured face image may be compared with the composite image. For example, a difference image between the two may be calculated; if the corresponding pixels of the difference image are smaller than a threshold, the captured face image matches the composite image and face recognition passes, and if they are larger than the threshold, the captured face image does not match the composite image and face recognition fails. Because the composite image corresponding to the reference face image contains no reticulate pattern, comparing it with the captured image for face recognition can improve the accuracy of face recognition.
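The pixel-difference comparison described above can be sketched as follows. This is only an illustration of the idea stated in the text; a production system would typically compare extracted face features rather than raw pixels, and the function name and the all-pixels-below-threshold criterion are assumptions.

```python
import numpy as np

def faces_match(captured, synthetic, threshold):
    # Pixel-wise difference image of the captured face and the composite
    # image; declare a match only if every pixel difference stays below
    # the threshold, mirroring the comparison described in the text.
    diff = np.abs(captured.astype(float) - synthetic.astype(float))
    return bool((diff < threshold).all())
```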
Based on the above, the terminal processes a plurality of facial images without textures by using a principal component analysis algorithm in the previous period to obtain an average image and image principal component information, and then when the terminal performs facial recognition on the acquired facial image, facial image reconstruction is performed based on the average image, the image principal component information and the acquired reference facial image with textures, and facial recognition processing is performed by using the reconstructed synthesized image without textures and the acquired facial image, so that the accuracy of the face recognition technology can be improved.
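The early-stage principal component analysis mentioned above, which produces the average image and the image principal component information from textureless face images, could be sketched with NumPy as follows (the image sizes, component count, and function name are assumptions for illustration, not the patent's implementation):

```python
import numpy as np

def train_pca(face_images, num_components=32):
    """Compute the average image Im and principal-component matrix P from
    a set of textureless face images, each flattened to a row vector."""
    X = np.stack([img.ravel().astype(np.float64) for img in face_images])
    Im = X.mean(axis=0)  # average image corresponding to the face images
    # SVD of the mean-centered data yields the principal directions.
    _, _, Vt = np.linalg.svd(X - Im, full_matrices=False)
    P = Vt[:num_components].T  # columns of P are principal components
    return Im, P
```

With Im and P in hand, descreening amounts to Xs = P^T × (I0 - Im) followed by Ir = (P × Xs) + Im, as in the formulas elsewhere in the disclosure.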
In the embodiment of the present disclosure, the method for performing the face recognition processing is to process a plurality of face images without textures based on an image principal component analysis algorithm to obtain an average image and image principal component information corresponding to the plurality of face images. Then, when a face recognition instruction is received, a reference face image containing a reticulate pattern for face recognition is obtained, and a descreened image corresponding to the reference face image is determined according to the reference face image, the average image and the image principal component information. And finally, carrying out face recognition processing on the acquired face image based on the descreened image. The collected face image is subjected to face recognition processing based on the descreened image, so that the accuracy of the face recognition technology can be improved.
Yet another exemplary embodiment of the present disclosure provides an apparatus for performing face recognition processing, which may be a terminal in the above embodiments, as shown in fig. 4, the apparatus including:
the processing module 410 is configured to process a plurality of face images without textures based on an image principal component analysis algorithm to obtain an average image and image principal component information corresponding to the plurality of face images;
a determining module 420, configured to, when a face recognition instruction is received, acquire a reference face image containing a texture for performing face recognition, and determine a descreened image corresponding to the reference face image according to the reference face image, the average image, and the image principal component information;
and a face recognition module 430, configured to perform face recognition processing on the acquired face image based on the descreened image.
Optionally, as shown in fig. 5, the determining module 420 includes:
a first determining unit 421 configured to determine a feature corresponding to the reference face image based on the reference face image, the average image, and the image principal component information;
a second determining unit 422, configured to determine a descreened image corresponding to the reference face image according to the feature, the average image, and the image principal component information.
Optionally, the first determining unit 421 is further configured to determine, according to the formula Xs = P^T × (I0 - Im), the feature Xs corresponding to the reference face image, wherein I0 is the reference face image, Im is the average image, and the matrix P is the image principal component information;
the second determining unit 422 is further configured to determine, according to the formula Ir = (P × Xs) + Im, the descreened image Ir corresponding to the reference face image.
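The two formulas used by the determining units can be written directly in NumPy. In this hedged sketch, I0 and Im are flattened image vectors and P holds one principal component per column (shapes and the function name are assumptions for illustration):

```python
import numpy as np

def descreen(I0, Im, P):
    """Project the screened reference image onto the PCA subspace and
    reconstruct it: Xs = P^T (I0 - Im), then Ir = (P Xs) + Im."""
    Xs = P.T @ (I0 - Im)  # feature corresponding to the reference image
    Ir = P @ Xs + Im      # descreened (reconstructed) image
    return Ir
```

Because the principal components were learned from textureless faces, the reconstruction Ir keeps the face content while suppressing the reticulate pattern, which lies largely outside the subspace spanned by P.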
Optionally, the face recognition module 430 is further configured to:
determining a difference image according to the absolute value of the difference between the aligned pixel points of the reference face image and the descreened image;
determining a synthesized image based on the reference facial image, the descreened image and the differential image, wherein for a pixel point i in the synthesized image, if a first pixel value of a pixel point corresponding to the pixel point i in the differential image is greater than a preset threshold value, a second pixel value of the pixel point corresponding to the pixel point i in the descreened image is determined as the pixel value of the pixel point i, and if the first pixel value of the pixel point corresponding to the pixel point i in the differential image is not greater than the preset threshold value, a third pixel value of the pixel point corresponding to the pixel point i in the reference facial image is determined as the pixel value of the pixel point i;
and performing face recognition processing on the acquired face image based on the composite image.
Optionally, the processing module 410 is further configured to:
acquiring a plurality of initial face images without reticulate patterns, and respectively intercepting a rectangular face area image in each initial face image;
and respectively scaling each face area image to a preset size to obtain a plurality of face images without reticulate patterns.
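A minimal sketch of this crop-and-scale preprocessing, assuming the face bounding box is already known (a production system would obtain it from a face detector and use bilinear resizing; the box format, target size, and function name here are illustrative assumptions):

```python
import numpy as np

def crop_and_scale(image, box, size=(128, 128)):
    """Intercept the rectangular face region given by box = (top, left,
    height, width), then scale it to a preset size using simple
    nearest-neighbor sampling."""
    top, left, h, w = box
    face = image[top:top + h, left:left + w]
    rows = np.arange(size[0]) * h // size[0]  # source row per output row
    cols = np.arange(size[1]) * w // size[1]  # source column per output col
    return face[rows[:, None], cols]
```

Scaling every face region image to the same preset size ensures the flattened vectors fed to the principal component analysis all have equal length.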
Optionally, the determining module 420 is further configured to:
acquiring an initial reference face image containing a reticulate pattern, and intercepting a rectangular face area image in the initial reference face image;
and scaling the facial area image to a preset size to obtain the reference facial image containing the reticulate pattern.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
In the embodiment of the present disclosure, the apparatus for performing face recognition processing first processes a plurality of face images without textures based on an image principal component analysis algorithm to obtain an average image and image principal component information corresponding to the plurality of face images. Then, when a face recognition instruction is received, a reference face image containing a reticulate pattern for face recognition is obtained, and a descreened image corresponding to the reference face image is determined according to the reference face image, the average image and the image principal component information. And finally, carrying out face recognition processing on the acquired face image based on the descreened image. The collected face image is subjected to face recognition processing based on the descreened image, so that the accuracy of the face recognition technology can be improved.
It should be noted that: in the device for performing face recognition processing according to the above embodiment, when performing face recognition processing, only the division of the functional modules is illustrated, and in practical applications, the above function distribution may be performed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules to perform all or part of the above described functions. In addition, the apparatus for performing face recognition processing and the method embodiment for performing face recognition processing provided by the above embodiments belong to the same concept, and specific implementation processes thereof are detailed in the method embodiment and are not described herein again.
Yet another exemplary embodiment of the present disclosure shows a structural diagram of a terminal. The terminal may be a tablet computer, a desktop computer, a notebook computer, an automated teller machine mentioned in the background, and the like.
Referring to fig. 6, terminal 600 may include one or more of the following components: processing component 602, memory 604, power component 606, multimedia component 608, audio component 610, input/output (I/O) interface 612, sensor component 614, and communication component 616.
The processing component 602 generally controls overall operation of the terminal 600, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 602 may include one or more processors 620 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 602 can include one or more modules that facilitate interaction between the processing component 602 and other components. For example, the processing component 602 can include a multimedia module to facilitate interaction between the multimedia component 608 and the processing component 602.
The memory 604 is configured to store various types of data to support operations at the terminal 600. Examples of such data include instructions for any application or method operating on terminal 600, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 604 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power component 606 provides power to the various components of terminal 600. Power component 606 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for terminal 600.
The multimedia component 608 comprises a screen providing an output interface between the terminal 600 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 608 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the terminal 600 is in an operation mode, such as a photographing mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 610 is configured to output and/or input audio signals. For example, the audio component 610 includes a microphone (MIC) configured to receive external audio signals when the terminal 600 is in an operational mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signal may further be stored in the memory 604 or transmitted via the communication component 616.
The I/O interface 612 provides an interface between the processing component 602 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 614 includes one or more sensors for providing various aspects of status assessment for the terminal 600. For example, sensor component 614 can detect an open/closed state of terminal 600, relative positioning of components, such as a display and keypad of terminal 600, change in position of terminal 600 or a component of terminal 600, presence or absence of user contact with terminal 600, orientation or acceleration/deceleration of terminal 600, and temperature change of terminal 600. The sensor assembly 614 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 614 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 614 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 616 is configured to facilitate communications between the terminal 600 and other devices in a wired or wireless manner. The terminal 600 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 616 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 616 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the terminal 600 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as the memory 604 comprising instructions, executable by the processor 620 of the terminal 600 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Yet another embodiment of the present disclosure provides a non-transitory computer-readable storage medium, wherein instructions, when executed by a processor of a terminal, enable the terminal to perform:
processing a plurality of face images without reticulate patterns based on an image principal component analysis algorithm to obtain average images and image principal component information corresponding to the plurality of face images;
when a face recognition instruction is received, acquiring a reference face image containing a reticulate pattern for face recognition, and determining a descreened image corresponding to the reference face image according to the reference face image, the average image and the image principal component information;
and performing face recognition processing on the acquired face image based on the descreened image.
Optionally, the determining, according to the reference face image, the average image, and the image principal component information, a descreened image corresponding to the reference face image includes:
determining the corresponding characteristics of the reference facial image according to the reference facial image, the average image and the image principal component information;
and determining a descreened image corresponding to the reference face image according to the features, the average image and the image principal component information.
Optionally, the determining, according to the reference face image, the average image, and the image principal component information, a feature corresponding to the reference face image includes:
according to the formula Xs = P^T × (I0 - Im), determining the feature Xs corresponding to the reference face image, wherein I0 is the reference face image, Im is the average image, and the matrix P is the image principal component information;
determining a descreened image corresponding to the reference face image according to the features, the average image and the image principal component information includes:
according to the formula Ir = (P × Xs) + Im, determining the descreened image Ir corresponding to the reference face image.
Optionally, the performing, based on the descreened image, a facial recognition process includes:
determining a difference image according to the absolute value of the difference between the aligned pixel points of the reference face image and the descreened image;
determining a synthesized image based on the reference facial image, the descreened image and the differential image, wherein for a pixel point i in the synthesized image, if a first pixel value of a pixel point corresponding to the pixel point i in the differential image is greater than a preset threshold value, a second pixel value of the pixel point corresponding to the pixel point i in the descreened image is determined as the pixel value of the pixel point i, and if the first pixel value of the pixel point corresponding to the pixel point i in the differential image is not greater than the preset threshold value, a third pixel value of the pixel point corresponding to the pixel point i in the reference facial image is determined as the pixel value of the pixel point i;
and performing face recognition processing on the acquired face image based on the composite image.
Optionally, the method further includes:
acquiring a plurality of initial face images without reticulate patterns, and respectively intercepting a rectangular face area image in each initial face image;
and respectively scaling each face area image to a preset size to obtain a plurality of face images without reticulate patterns.
Optionally, the method further includes:
acquiring an initial reference face image containing a reticulate pattern, and intercepting a rectangular face area image in the initial reference face image;
and scaling the facial area image to a preset size to obtain the reference facial image containing the reticulate pattern.
In the embodiment of the present disclosure, in the method for performing face recognition processing, a plurality of face images without textures are processed based on an image principal component analysis algorithm, so as to obtain an average image and image principal component information corresponding to the plurality of face images. Then, when a face recognition instruction is received, a reference face image containing a reticulate pattern for face recognition is obtained, and a descreened image corresponding to the reference face image is determined according to the reference face image, the average image and the image principal component information. And finally, carrying out face recognition processing on the acquired face image based on the descreened image. The collected face image is subjected to face recognition processing based on the descreened image, so that the accuracy of the face recognition technology can be improved.
Fig. 7 is a block diagram illustrating an apparatus 1900 for performing facial recognition processing in accordance with an example embodiment. For example, the apparatus 1900 may be provided as a server. Referring to fig. 7, the apparatus 1900 includes a processing component 1922, which further includes one or more processors, and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by the processing component 1922. The application programs stored in memory 1932 may include one or more modules, each corresponding to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method of performing facial recognition processing.
The device 1900 may also include a power component 1926 configured to perform power management of the device 1900, a wired or wireless network interface 1950 configured to connect the device 1900 to a network, and an input/output (I/O) interface 1958. The device 1900 may operate based on an operating system stored in memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
Device 1900 may include memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, the one or more programs including instructions for:
processing a plurality of face images without reticulate patterns based on an image principal component analysis algorithm to obtain average images and image principal component information corresponding to the plurality of face images;
when a face recognition instruction is received, acquiring a reference face image containing a reticulate pattern for face recognition, and determining a descreened image corresponding to the reference face image according to the reference face image, the average image and the image principal component information;
and performing face recognition processing on the acquired face image based on the descreened image.
Optionally, the determining, according to the reference face image, the average image, and the image principal component information, a descreened image corresponding to the reference face image includes:
determining the corresponding characteristics of the reference facial image according to the reference facial image, the average image and the image principal component information;
and determining a descreened image corresponding to the reference face image according to the features, the average image and the image principal component information.
Optionally, the determining, according to the reference face image, the average image, and the image principal component information, a feature corresponding to the reference face image includes:
according to the formula Xs = P^T × (I0 - Im), determining the feature Xs corresponding to the reference face image, wherein I0 is the reference face image, Im is the average image, and the matrix P is the image principal component information;
determining a descreened image corresponding to the reference face image according to the features, the average image and the image principal component information includes:
according to the formula Ir = (P × Xs) + Im, determining the descreened image Ir corresponding to the reference face image.
Optionally, the performing, based on the descreened image, a facial recognition process includes:
determining a difference image according to the absolute value of the difference between the aligned pixel points of the reference face image and the descreened image;
determining a synthesized image based on the reference facial image, the descreened image and the differential image, wherein for a pixel point i in the synthesized image, if a first pixel value of a pixel point corresponding to the pixel point i in the differential image is greater than a preset threshold value, a second pixel value of the pixel point corresponding to the pixel point i in the descreened image is determined as the pixel value of the pixel point i, and if the first pixel value of the pixel point corresponding to the pixel point i in the differential image is not greater than the preset threshold value, a third pixel value of the pixel point corresponding to the pixel point i in the reference facial image is determined as the pixel value of the pixel point i;
and performing face recognition processing on the acquired face image based on the composite image.
Optionally, the method further includes:
acquiring a plurality of initial face images without reticulate patterns, and respectively intercepting a rectangular face area image in each initial face image;
and respectively scaling each face area image to a preset size to obtain a plurality of face images without reticulate patterns.
Optionally, the method further includes:
acquiring an initial reference face image containing a reticulate pattern, and intercepting a rectangular face area image in the initial reference face image;
and scaling the facial area image to a preset size to obtain the reference facial image containing the reticulate pattern.
In the embodiment of the present disclosure, in the method for performing face recognition processing, a plurality of face images without textures are processed based on an image principal component analysis algorithm, so as to obtain an average image and image principal component information corresponding to the plurality of face images. Then, when a face recognition instruction is received, a reference face image containing a reticulate pattern for face recognition is obtained, and a descreened image corresponding to the reference face image is determined according to the reference face image, the average image and the image principal component information. And finally, carrying out face recognition processing on the acquired face image based on the descreened image. The collected face image is subjected to face recognition processing based on the descreened image, so that the accuracy of the face recognition technology can be improved.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A method of performing facial recognition processing, the method comprising:
processing a plurality of face images without reticulate patterns based on an image principal component analysis algorithm to obtain average images and image principal component information corresponding to the plurality of face images;
when a face recognition instruction is received, acquiring a reference face image containing a reticulate pattern for face recognition, and determining a descreened image corresponding to the reference face image according to the reference face image, the average image and the image principal component information;
performing face recognition processing on the acquired face image based on the descreened image;
wherein the determining a descreened image corresponding to the reference face image according to the reference face image, the average image and the image principal component information includes:
according to the formula Xs = P^T × (I0 - Im), determining the feature Xs corresponding to the reference face image, wherein I0 is the reference face image, Im is the average image, and the matrix P is the image principal component information;
according to the formula Ir = (P × Xs) + Im, determining the descreened image Ir corresponding to the reference face image.
2. The method of claim 1, wherein performing a facial recognition process based on the descreened image comprises:
determining a difference image according to the absolute value of the difference between the aligned pixel points of the reference face image and the descreened image;
determining a synthesized image based on the reference facial image, the descreened image and the differential image, wherein for a pixel point i in the synthesized image, if a first pixel value of a pixel point corresponding to the pixel point i in the differential image is greater than a preset threshold value, a second pixel value of the pixel point corresponding to the pixel point i in the descreened image is determined as the pixel value of the pixel point i, and if the first pixel value of the pixel point corresponding to the pixel point i in the differential image is not greater than the preset threshold value, a third pixel value of the pixel point corresponding to the pixel point i in the reference facial image is determined as the pixel value of the pixel point i;
and performing face recognition processing on the acquired face image based on the composite image.
3. The method of claim 1, further comprising:
acquiring a plurality of initial face images without reticulate patterns, and respectively intercepting a rectangular face area image in each initial face image;
and respectively scaling each face area image to a preset size to obtain a plurality of face images without reticulate patterns.
4. The method of claim 1, further comprising:
acquiring an initial reference face image containing a reticulate pattern, and intercepting a rectangular face area image in the initial reference face image;
and scaling the facial area image to a preset size to obtain the reference facial image containing the reticulate pattern.
5. An apparatus that performs a face recognition process, the apparatus comprising:
the processing module is used for processing a plurality of face images without reticulate patterns based on an image principal component analysis algorithm to obtain average images and image principal component information corresponding to the plurality of face images;
the determining module is used for acquiring a reference face image containing a reticulate pattern for carrying out face recognition when a face recognition instruction is received, and determining a descreened image corresponding to the reference face image according to the reference face image, the average image and the image principal component information;
the face recognition module is used for carrying out face recognition processing on the collected face image based on the descreened image;
wherein the determining module comprises:
a first determination unit for determining, according to the formula Xs = P^T × (I0 - Im), the feature Xs corresponding to the reference face image, wherein I0 is the reference face image, Im is the average image, and the matrix P is the image principal component information;
a second determination unit for determining, according to the formula Ir = (P × Xs) + Im, the descreened image Ir corresponding to the reference face image.
6. The apparatus of claim 5, wherein the facial recognition module is further configured to:
determining a difference image according to the absolute value of the difference between the aligned pixel points of the reference face image and the descreened image;
determining a synthesized image based on the reference facial image, the descreened image and the differential image, wherein for a pixel point i in the synthesized image, if a first pixel value of a pixel point corresponding to the pixel point i in the differential image is greater than a preset threshold value, a second pixel value of the pixel point corresponding to the pixel point i in the descreened image is determined as the pixel value of the pixel point i, and if the first pixel value of the pixel point corresponding to the pixel point i in the differential image is not greater than the preset threshold value, a third pixel value of the pixel point corresponding to the pixel point i in the reference facial image is determined as the pixel value of the pixel point i;
and performing face recognition processing on the acquired face image based on the composite image.
7. The apparatus of claim 5, wherein the processing module is further configured to:
acquiring a plurality of initial face images without reticulate patterns, and respectively intercepting a rectangular face area image in each initial face image;
and respectively scaling each face area image to a preset size to obtain a plurality of face images without reticulate patterns.
8. The apparatus of claim 5, wherein the determining module is further configured to:
acquiring an initial reference face image containing a reticulate pattern, and cropping a rectangular face area image from the initial reference face image;
and scaling the face area image to a preset size to obtain the reference face image containing the reticulate pattern.
9. A terminal, characterized in that it comprises a processor and a memory, wherein the memory stores at least one instruction, and the instruction is loaded and executed by the processor to implement the method for performing face recognition processing according to any one of claims 1 to 4.
10. A computer-readable storage medium having stored thereon at least one instruction, wherein the instruction is loaded and executed by a processor to implement the method for performing face recognition processing according to any one of claims 1 to 4.
CN201711363025.8A 2017-12-15 2017-12-15 Method and device for carrying out face recognition processing Active CN108108685B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711363025.8A CN108108685B (en) 2017-12-15 2017-12-15 Method and device for carrying out face recognition processing

Publications (2)

Publication Number Publication Date
CN108108685A CN108108685A (en) 2018-06-01
CN108108685B true CN108108685B (en) 2022-02-08

Family

ID=62209794

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711363025.8A Active CN108108685B (en) 2017-12-15 2017-12-15 Method and device for carrying out face recognition processing

Country Status (1)

Country Link
CN (1) CN108108685B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110210425B (en) * 2019-06-05 2023-06-30 平安科技(深圳)有限公司 Face recognition method and device, electronic equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1929541A (en) * 2005-09-09 2007-03-14 鸿友科技股份有限公司 Method for removing reticulate patterns from a digital image
CN101162502A (en) * 2006-10-13 2008-04-16 上海银晨智能识别科技有限公司 Method for removing glasses during human recognition
CN101369310A (en) * 2008-09-27 2009-02-18 北京航空航天大学 Robust human face expression recognition method
CN101552860A (en) * 2009-05-13 2009-10-07 西安理工大学 De-screening method of halftone image based on dot detection and dot padding
CN102567957A (en) * 2010-12-30 2012-07-11 北京大学 Method and system for removing reticulate pattern from image
CN105930797A (en) * 2016-04-21 2016-09-07 腾讯科技(深圳)有限公司 Face verification method and device


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Bin Sun et al., "Scanned Image Descreening With Image Redundancy and Adaptive Filtering," IEEE Transactions on Image Processing, vol. 23, pp. 3698-3710, Dec. 2014. *
Zhao Wei et al., "Wavelet-Based Descreening Algorithm for Halftone Scanned Images," Computer Engineering and Applications, no. 20, pp. 87-88, 135, Oct. 2004. *
Lu Xiaomei, "Adaptive Descreening Algorithm Based on the CMYK Four-Color Printing Model," Modern Computer, no. 7, pp. 72-76, 80, Jul. 2015. *

Also Published As

Publication number Publication date
CN108108685A (en) 2018-06-01

Similar Documents

Publication Publication Date Title
CN108764091B (en) Living body detection method and apparatus, electronic device, and storage medium
CN108197586B (en) Face recognition method and device
US11532180B2 (en) Image processing method and device and storage medium
US11636653B2 (en) Method and apparatus for synthesizing virtual and real objects
CN106651955B (en) Method and device for positioning target object in picture
JP6134446B2 (en) Image division method, image division apparatus, image division device, program, and recording medium
CN106250894B (en) Card information identification method and device
CN107545248B (en) Biological characteristic living body detection method, device, equipment and storage medium
CN107798654B (en) Image buffing method and device and storage medium
EP3057304A1 (en) Method and apparatus for generating image filter
CN107944367B (en) Face key point detection method and device
US11030733B2 (en) Method, electronic device and storage medium for processing image
WO2016192325A1 (en) Method and device for processing logo on video file
EP3312702B1 (en) Method and device for identifying gesture
CN107688781A (en) Face identification method and device
CN107958223B (en) Face recognition method and device, mobile equipment and computer readable storage medium
CN107220614B (en) Image recognition method, image recognition device and computer-readable storage medium
CN107330868A (en) image processing method and device
CN112188091B (en) Face information identification method and device, electronic equipment and storage medium
CN108010009B (en) Method and device for removing interference image
CN107729886B (en) Method and device for processing face image
CN107730443B (en) Image processing method and device and user equipment
CN107292901B (en) Edge detection method and device
US9665925B2 (en) Method and terminal device for retargeting images
CN108108685B (en) Method and device for carrying out face recognition processing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant