CN111488811A - Face recognition method and device, terminal equipment and computer readable medium - Google Patents

Face recognition method and device, terminal equipment and computer readable medium Download PDF

Info

Publication number
CN111488811A
CN111488811A (Application CN202010247554.7A)
Authority
CN
China
Prior art keywords
image area
face recognition
face
image
key points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010247554.7A
Other languages
Chinese (zh)
Other versions
CN111488811B (en
Inventor
罗茜
张斯尧
王思远
蒋杰
张�诚
李乾
谢喜林
黄晋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changsha Qianshitong Intelligent Technology Co ltd
Original Assignee
Changsha Qianshitong Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changsha Qianshitong Intelligent Technology Co ltd filed Critical Changsha Qianshitong Intelligent Technology Co ltd
Priority to CN202010247554.7A priority Critical patent/CN111488811B/en
Publication of CN111488811A publication Critical patent/CN111488811A/en
Application granted granted Critical
Publication of CN111488811B publication Critical patent/CN111488811B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/50Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The invention relates to the field of image processing. An embodiment of the invention provides a face recognition method comprising the following steps: S1, acquiring an image area of the face to be recognized and key points of the image area; S2, normalizing the image area based on the key points; and S3, performing face recognition according to the non-occluded area of the normalized image area. A corresponding face recognition device and storage medium are also provided. The method and the device improve the accuracy of face recognition with a simple model and fast computation.

Description

Face recognition method and device, terminal equipment and computer readable medium
Technical Field
The present invention relates to the field of image processing, and in particular, to a face recognition method, a face recognition apparatus, and a corresponding terminal device and computer readable medium.
Background
The advent of deep convolutional neural networks has greatly improved face recognition; verification accuracy on data sets such as LFW (Labeled Faces in the Wild) even exceeds human-level performance, so face recognition technology is receiving more and more attention.
In practical applications of face recognition, occlusion of the face causes low recognition rates. At present, recognition of occluded faces mainly relies either on partial facial features or on reconstruction of the occluded face, but existing methods suffer from low recognition rates and complex computation, and recognition accuracy needs further improvement.
Disclosure of Invention
In view of this, the present invention is directed to a face recognition method and network that at least solve the problem of reduced recognition accuracy caused by face occlusion in current face recognition.
In a first aspect of the present invention, a face recognition method is provided, which includes:
S1, acquiring an image area of the face to be recognized and key points of the image area;
S2, normalizing the image area based on the key points;
and S3, performing face recognition according to the non-occluded area of the normalized image area.
Optionally, the acquiring an image region of a face to be recognized and a key point of the image region in S1 includes:
acquiring the image area and the key points of the image area using a trained convolutional neural network.
Optionally, the trained convolutional neural network is obtained by taking the P-Net network as a framework, training with images annotated with image regions and the key points of those regions as the training set, and adjusting the network parameters through a loss function until a preset number of iterations is reached.
Optionally, the loss function is related to a face detection cross entropy loss function, a face bounding box regression loss function, and a key point loss function.
Optionally, the normalizing the image region based on the key point in S2 includes:
adjusting the size, angle, and position of the face in the image area through a coordinate transformation based on the key points, so that the face conforms to a preset standard form.
Optionally, the performing, in S3, face recognition according to the non-occluded area of the normalized image area includes:
S31, taking the normalized image area as the target object;
S32, segmenting the target object;
S33, determining whether each segmented image area is occluded or unoccluded;
S34, extracting histogram sequence features from the image areas determined to be unoccluded;
S35, taking the image areas determined to be occluded as the new target object;
S36, repeating steps S32 to S35 a preset number of times;
and S37, performing face recognition according to all the acquired histogram sequence features.
Optionally, the segmenting the target object in S32 includes:
generating a mask using feature information enhanced by a Markov random field, and segmenting the image region determined to be occluded using the mask.
Optionally, the determining, in step S33, whether each image region after segmentation is occluded or unoccluded includes:
S331, performing wavelet transforms of different scales and directions on each image region to generate features;
S332, reducing the dimensionality of the features and classifying the reduced features with a support vector machine;
and S333, determining whether each image area is occluded according to the classification result.
In the second aspect of the present invention, there is also provided a face recognition apparatus, comprising:
an acquisition module for acquiring an image area of a face to be recognized and key points of the image area; a normalization module for normalizing the image area based on the key points; and a detection module for performing face recognition according to the non-occluded area of the normalized image area.
A storage medium according to a third aspect of the present invention stores instructions that, when executed on a computer, cause the computer to perform the aforementioned face recognition method.
The technical solution provided by the invention achieves the following beneficial effects: the invention provides a method and a network for recognizing occluded faces, which improve the accuracy of face recognition with a simple model and fast computation.
Additional features and advantages of the invention will be set forth in the detailed description which follows.
Drawings
Fig. 1 is a schematic flow chart of a face recognition method according to an embodiment of the present invention;
FIG. 2 is a block diagram of a face recognition apparatus according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
In order to explain the technical means of the present invention, the following description will be given by way of specific examples.
Fig. 1 is a schematic flow chart of a face recognition method according to an embodiment of the present invention. As shown in Fig. 1, this embodiment provides a face recognition method comprising the following steps:
S1, acquiring an image area of the face to be recognized and key points of the image area;
S2, normalizing the image area based on the key points;
and S3, performing face recognition according to the non-occluded area of the normalized image area.
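The three steps above can be sketched as a minimal pipeline; the helper callables detect, normalize, and recognize are placeholders for the components detailed later in this description, not names used by the patent:

```python
import numpy as np

def recognize_face(image, detect, normalize, recognize):
    # S1: obtain the face image area and its key points
    region, keypoints = detect(image)
    # S2: normalize the area based on the key points
    aligned = normalize(region, keypoints)
    # S3: recognize using only the non-occluded part of the aligned area
    return recognize(aligned)

# Dummy stand-ins that only demonstrate the data flow
img = np.zeros((8, 8))
identity = recognize_face(
    img,
    detect=lambda im: (im[2:6, 2:6], [(3, 3), (3, 5)]),
    normalize=lambda region, kp: region,
    recognize=lambda region: "person_0",
)
```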
In this way, the image area containing the face is extracted in advance, and preprocessing this area improves the accuracy of face recognition. Normalizing the image area transforms parameters such as the size, angle, and position of the face to be recognized, avoiding the recognition-rate loss caused by errors in these parameters. Distinguishing occluded from unoccluded areas and recognizing only the unoccluded areas avoids interference from occlusions and further improves recognition accuracy. Through this processing, the embodiment of the invention reduces interference and error in image recognition and thereby improves the accuracy of face recognition.
Specifically, the general steps of this embodiment are as follows. First, the image area and its key points are extracted: the image area is the region where the face is located, and the key points are identification or position landmarks of the face; the accurately extracted area and key points serve as the basis for the subsequent normalization. Image normalization transforms the image into a fixed standard form through a series of standard processing transformations; the result is called a normalized image. Occlusion detection is then performed on the normalized image, so that regions such as a mask or sunglasses within the face area are ignored and recognition concentrates on the unoccluded area, avoiding the influence of occluding objects on recognition accuracy.
In an embodiment provided by the present invention, the acquiring an image area of a human face to be recognized and a key point of the image area in S1 includes:
acquiring the image area and the key points of the image area using a trained convolutional neural network. Using a convolutional neural network to identify the image area and key points effectively improves recognition efficiency and, compared with other acquisition methods, offers high accuracy and fast computation. The trained convolutional neural network takes the P-Net network as a framework, is trained with images annotated with image regions and their key points as the training set, and adjusts its network parameters through a loss function until a preset number of iterations is reached. In this embodiment, the per-layer network parameters are preferably set based on the P-Net network; for the specific training process of such a neural network, reference is made to the prior art, which is not repeated here. The trained P-Net network can quickly and accurately identify the face region and its key points in the target image.
In one embodiment provided by the present invention, the loss function is related to a face detection cross entropy loss function, a face bounding box regression loss function, and a keypoint loss function. The loss function is an important component of the neural network model, and the appropriate loss function is the guarantee of fast convergence of the convolutional neural network. The aforementioned loss function of the trained convolutional neural network comprises:
Face detection cross entropy loss function:

$$L_1 = -\frac{1}{N}\sum_{i=1}^{N}\bigl[x_i \log p_i + (1 - x_i)\log(1 - p_i)\bigr]$$

where x_i represents the real label of the sample, p_i represents the probability that the network output is a face, and N is the total number of training samples;
Face bounding box regression loss function:

$$L_2 = \frac{1}{N}\sum_{i=1}^{N}\lVert \hat{y}_i - y_i \rVert_2^2$$

where \hat{y}_i represents the bounding box coordinates obtained from the network output and y_i represents the real bounding box of the target;
Key point loss function:

$$L_3 = \frac{1}{N}\sum_{i=1}^{N}\lVert \hat{z}_i - z_i \rVert_2^2$$

where \hat{z}_i represents the key point coordinates obtained from the network output and z_i represents the real coordinates of the key points;
The total loss function is

$$L = \min(\lambda_1 L_1 + \lambda_2 L_2 + \lambda_3 L_3)$$

where \lambda_1, \lambda_2, \lambda_3 are the weights of the respective losses.
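As a hedged illustration, the three losses and their weighted combination might be computed as follows (numpy sketch; the weight values and the squared-error form of L2 and L3 are assumptions consistent with the formulas above, not values given by the patent):

```python
import numpy as np

def det_loss(x, p, eps=1e-12):
    # L1: face/non-face cross entropy; x = real labels, p = predicted face probability
    p = np.clip(p, eps, 1 - eps)
    return -np.mean(x * np.log(p) + (1 - x) * np.log(1 - p))

def box_loss(y_hat, y):
    # L2: bounding-box regression as mean squared Euclidean distance
    return np.mean(np.sum((y_hat - y) ** 2, axis=1))

def landmark_loss(z_hat, z):
    # L3: key-point regression, same form as L2
    return np.mean(np.sum((z_hat - z) ** 2, axis=1))

def total_loss(l1, l2, l3, lambdas=(1.0, 0.5, 0.5)):
    # Weighted sum minimized during training; the lambda values here are
    # purely illustrative, the patent does not give concrete weights.
    return lambdas[0] * l1 + lambdas[1] * l2 + lambdas[2] * l3
```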
In an embodiment of the present invention, normalizing the image region based on the key points in S2 includes: adjusting the size, angle, and position of the face in the image area through a coordinate transformation based on the key points, so that the face conforms to a preset standard form. For example, a transformation matrix of the image is calculated from the coordinate positions of the left and right eyes of the face:

$$T = \begin{bmatrix} S_x\cos\theta & -S_x\sin\theta & t_x \\ S_y\sin\theta & S_y\cos\theta & t_y \\ 0 & 0 & 1 \end{bmatrix}$$

where S_x, S_y are the scaling factors of the image in the x and y directions respectively, θ is the rotation angle, t_x denotes the translation distance in the x direction, and t_y the translation distance in the y direction. Scaling, rotation, and translation of the image region can be performed through this transformation matrix so that the image region conforms to the standard form.
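A minimal sketch of building such a transformation from the two eye positions, assuming a similarity transform with a single isotropic scale (the patent allows separate S_x and S_y, and the canonical eye positions passed in are hypothetical):

```python
import numpy as np

def alignment_matrix(left_eye, right_eye, target_left, target_right):
    """Build a 3x3 similarity transform (scale + rotation + translation)
    mapping the detected eye positions onto canonical ones."""
    src = np.asarray(right_eye, float) - np.asarray(left_eye, float)
    dst = np.asarray(target_right, float) - np.asarray(target_left, float)
    s = np.linalg.norm(dst) / np.linalg.norm(src)                     # scale
    theta = np.arctan2(dst[1], dst[0]) - np.arctan2(src[1], src[0])   # rotation
    c, si = s * np.cos(theta), s * np.sin(theta)
    # rotate/scale about the origin, then translate the left eye onto its target
    t = np.asarray(target_left, float) - np.array(
        [c * left_eye[0] - si * left_eye[1],
         si * left_eye[0] + c * left_eye[1]])
    return np.array([[c, -si, t[0]],
                     [si,  c, t[1]],
                     [0.0, 0.0, 1.0]])
```

Applying the returned matrix to homogeneous pixel coordinates performs the scaling, rotation, and translation in one step.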
In an embodiment of the present invention, the performing face recognition according to the non-occlusion region of the normalized image region in S3 includes:
S31, taking the normalized image area as the target object;
S32, segmenting the target object;
S33, determining whether each segmented image area is occluded or unoccluded;
S34, extracting histogram sequence features from the image areas determined to be unoccluded;
S35, taking the image areas determined to be occluded as the new target object;
S36, repeating steps S32 to S35 a preset number of times;
and S37, performing face recognition according to all the acquired histogram sequence features.
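Steps S31 to S37 can be sketched as an iterative loop; segment, is_occluded, and extract_hist are placeholder callables standing in for the segmentation, occlusion classification, and histogram feature extraction described in this embodiment:

```python
def recognize_with_occlusion(region, segment, is_occluded, extract_hist,
                             max_rounds=3):
    """Sketch of S31-S37: repeatedly split occluded parts and keep histogram
    features only from unoccluded parts. max_rounds is the preset number of
    repetitions (S36); its value here is illustrative."""
    features, targets = [], [region]        # S31: start from the whole area
    for _ in range(max_rounds):
        next_targets = []
        for t in targets:
            for part in segment(t):         # S32: segment each target
                if is_occluded(part):       # S33: classify each part
                    next_targets.append(part)   # S35: re-segment it next round
                else:
                    features.append(extract_hist(part))  # S34: keep its feature
        targets = next_targets
        if not targets:
            break
    return features                         # S37: recognize from all features
```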
This embodiment mainly adopts a judge-after-segmentation approach: the segmented sub-images are classified as occluded or unoccluded, features are extracted only from the unoccluded regions, and the features of the occluded regions are ignored, which improves recognition accuracy. However, image segmentation has a granularity problem: a segmented sub-image may be only partially occluded, and removing it as a whole loses many image features and reduces recognition accuracy. This embodiment therefore re-segments the occluded sub-images to reduce the loss of image features, and can perform progressive segmentation several times so as to lose as little face information in the image as possible.
In an embodiment provided by the present invention, the segmenting the target object in S32 includes:
generating a mask using feature information enhanced by a Markov random field, and segmenting the image region determined to be occluded using the mask. Specifically, let S be the set of all pixels of the image, and take the SVM occlusion classification result as the initial label field W (a pixel is marked 1 where occlusion exists and 0 where it does not). Combining the prior probability and the conditional probability, the labeling that maximizes the posterior probability P(W|S) is estimated; when P(W|S) is maximal, each pixel obtains the most suitable occluded/unoccluded classification:

$$W^* = \arg\max_W P(W \mid S) = \arg\max_W \frac{P(S \mid W)\,P(W)}{P(S)}$$

P(S|W) is the conditional probability, the likelihood term of P(W|S), describing how well each classified pixel matches the real pixel distribution; P(S) is the distribution of the input image and is a constant; P(W) is the prior over classifications. The pixels of the image can be regarded as a Markov random field: the classification probability of a pixel depends only on its neighbors and not on distant pixels. A small region formed by a pixel and its neighbors is called a clique (potential energy group); for example, a pixel and its left neighbor form a clique.
P(W) can be obtained through a potential energy function:

$$P(W) = \frac{1}{Z}\exp\!\left(-\frac{U(W)}{T}\right), \qquad U(W) = \sum_{c \in C} V_c(w_c)$$

$$V_c(w_s, w_t) = \begin{cases} -\beta, & w_s = w_t \\ \beta, & w_s \neq w_t \end{cases}$$

where Z is the partition function, a normalizing constant; the parameter T controls the shape of P(W) (the larger T is, the flatter the distribution); C is the set of cliques; V_c(w_c) is the potential energy of a clique; β is a coupling coefficient; and s and t are two adjacent pixel points.
P(S|W) estimates the pixel values using the label information; since the pixel values within a labeled class follow a Gaussian distribution, the classification result can be judged from the value of a given pixel.
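One common way to maximize the posterior P(W|S) described above is iterated conditional modes (ICM). The patent does not name a specific optimizer, so the following numpy sketch, using a Gaussian likelihood per class plus an Ising-type clique potential with illustrative parameter values, is an assumption:

```python
import numpy as np

def icm_relabel(values, labels, beta=1.0, sigma=1.0, means=(0.0, 1.0), n_iter=5):
    """ICM on a 4-neighbour Markov random field: each pixel takes the binary
    label (0 = unoccluded, 1 = occluded) minimizing data energy -log P(s|w)
    plus prior energy -log P(w). Parameter choices are illustrative."""
    h, w = labels.shape
    labels = labels.copy()
    for _ in range(n_iter):
        for i in range(h):
            for j in range(w):
                best, best_energy = labels[i, j], np.inf
                for lab in (0, 1):
                    # data term: squared distance to the class mean (Gaussian)
                    energy = (values[i, j] - means[lab]) ** 2 / (2 * sigma ** 2)
                    # prior term: beta penalty for disagreeing with a neighbour
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w and labels[ni, nj] != lab:
                            energy += beta
                    if energy < best_energy:
                        best, best_energy = lab, energy
                labels[i, j] = best
    return labels
```

Starting from the SVM classification as the initial label field, isolated misclassified pixels are smoothed away by the neighbourhood prior.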
Re-segmenting the occluded image with the Markov random field improves segmentation accuracy. The LGBPHS (Local Gabor Binary Pattern Histogram Sequence) features of the unoccluded face regions are then compared, and the face is identified through histogram matching, for example by histogram intersection. The LGBPHS feature is obtained by further encoding Gabor features with the LBP operator, yielding a new histogram sequence based on local Gabor transforms.
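The histogram intersection matching mentioned above can be sketched minimally; normalizing by the probe histogram's total mass is one common convention, not one specified by the patent:

```python
import numpy as np

def histogram_intersection(h1, h2):
    # Similarity of two (concatenated) histogram sequences: the sum of
    # bin-wise minima, normalized by the probe histogram's mass.
    h1, h2 = np.asarray(h1, float), np.asarray(h2, float)
    return np.minimum(h1, h2).sum() / h1.sum()

def match(probe, gallery):
    # Return the gallery identity whose stored histogram sequence is most
    # similar to the probe's (gallery is a hypothetical {name: histogram} dict).
    return max(gallery, key=lambda name: histogram_intersection(probe, gallery[name]))
```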
In an embodiment provided by the present invention, the determining in step S33 that each image area after being segmented is occluded or unoccluded includes:
S331, performing wavelet transforms of different scales and directions on each image region to generate features;
S332, reducing the dimensionality of the features and classifying the reduced features with a support vector machine;
and S333, determining whether each image area is occluded according to the classification result.
Specifically, the wavelet transform uses Gabor wavelets, which consist of a sinusoidal carrier and a Gaussian envelope and achieve optimal localization in the spatial and frequency domains simultaneously, so they describe well the local structure information corresponding to spatial frequency, spatial position, and orientation selectivity. Since the extracted Gabor features are high-dimensional, PCA (principal component analysis) is used to maximize the variance of the Gabor features in the projected subspace, reducing the dimensionality of the feature vector while preserving its discriminative power.
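A compact numpy sketch of the S331–S332 feature pipeline under simplifying assumptions (one scale, four orientations, a single filter response per orientation, PCA via SVD; the final SVM classification of S332–S333 would use a standard library and is omitted):

```python
import numpy as np

def gabor_kernel(ksize, sigma, theta, lam):
    """Real part of a Gabor kernel: Gaussian envelope times a cosine carrier."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * xr / lam)

def gabor_features(patch, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    # S331: filter responses at several orientations (scales omitted for brevity;
    # sigma and wavelength values below are illustrative)
    return np.array([np.abs(np.sum(patch * gabor_kernel(patch.shape[0], 2.0, t, 4.0)))
                     for t in thetas])

def pca_project(X, k):
    # S332: keep the k directions of maximal variance (computed via SVD);
    # the reduced features would then be fed to an SVM classifier (S333)
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:k].T
```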
Fig. 2 is a block diagram of a face recognition apparatus according to an embodiment of the present invention. As shown in Fig. 2, an embodiment of the present invention also provides a face recognition apparatus, comprising:
the system comprises an acquisition module, a recognition module and a processing module, wherein the acquisition module is used for acquiring an image area of a face to be recognized and key points of the image area;
the normalization module is used for normalizing the image area based on the key points;
and the detection module is used for carrying out face recognition according to the non-occlusion area of the normalized image area.
The acquisition module acquires the image area and the key points of the image area using a trained convolutional neural network.
The trained convolutional neural network takes the P-Net network as a framework, is trained with images annotated with image regions and their key points as the training set, and adjusts its network parameters through a loss function until a preset number of iterations is reached.
The loss function is related to a face detection cross entropy loss function, a face bounding box regression loss function, and a key point loss function.
The normalization module adjusts the size, angle, and position of the face in the image area through a coordinate transformation based on the key points, so that the face conforms to a preset standard form.
The detection module includes:
S31, taking the normalized image area as the target object;
S32, segmenting the target object;
S33, determining whether each segmented image area is occluded or unoccluded;
S34, extracting histogram sequence features from the image areas determined to be unoccluded;
S35, taking the image areas determined to be occluded as the new target object;
S36, repeating steps S32 to S35 a preset number of times;
and S37, performing face recognition according to all the acquired histogram sequence features.
The segmentation of the target object in the detection module includes: generating a mask using feature information enhanced by a Markov random field, and segmenting the image region determined to be occluded using the mask.
Determining whether each segmented image area is occluded or unoccluded in the detection module includes:
S331, performing wavelet transforms of different scales and directions on each image region to generate features;
S332, reducing the dimensionality of the features and classifying the reduced features with a support vector machine;
and S333, determining whether each image area is occluded according to the classification result.
For details of the implementation of the face recognition apparatus, reference is made to the face recognition method described above, and details are not repeated here.
In an embodiment provided by the present invention, a computer or server is further provided that is loaded with the aforementioned face recognition apparatus, so that it can perform the aforementioned face recognition method.
Fig. 3 is a schematic diagram of a terminal device according to an embodiment of the present invention. As shown in Fig. 3, the terminal device 10 of this embodiment includes: a processor 100, a memory 101, and a computer program 102, such as a program for performing face recognition, stored in the memory 101 and executable on the processor 100. When executing the computer program 102, the processor 100 implements the steps in the above method embodiments, for example the steps of the face recognition method shown in Fig. 1; alternatively, the processor 100 implements the functions of the modules/units in the above apparatus embodiments, such as the functions of the relevant modules of the face recognition apparatus shown in Fig. 2.
Illustratively, the computer program 102 may be partitioned into one or more modules/units that are stored in the memory 101 and executed by the processor 100 to accomplish the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution process of the computer program 102 in the terminal device 10. For example, the computer program 102 may be divided into an acquisition module, a normalization module, and a detection module (module in a virtual device), each module having the following specific functions:
the system comprises an acquisition module, a recognition module and a processing module, wherein the acquisition module is used for acquiring an image area of a face to be recognized and key points of the image area;
the normalization module is used for normalizing the image area based on the key points;
and the detection module is used for carrying out face recognition according to the non-occlusion area of the normalized image area.
The terminal device 10 may be a computing device such as a desktop computer, a notebook, a palm computer, and a cloud server. Terminal device 10 may include, but is not limited to, a processor 100, a memory 101. Those skilled in the art will appreciate that fig. 3 is merely an example of a terminal device 10 and does not constitute a limitation of terminal device 10 and may include more or fewer components than shown, or some components may be combined, or different components, e.g., the terminal device may also include input-output devices, network access devices, buses, etc.
The Processor 100 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), an off-the-shelf Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 101 may be an internal storage unit of the terminal device 10, such as a hard disk or a memory of the terminal device 10. The memory 101 may also be an external storage device of the terminal device 10, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the terminal device 10. Further, the memory 101 may also include both an internal storage unit of the terminal device 10 and an external storage device. The memory 101 is used for storing the computer program and other programs and data required by the terminal device 10. The memory 101 may also be used to temporarily store data that has been output or is to be output.
In one embodiment of the present invention, the storage medium stores instructions that, when executed on a computer, cause the computer to execute the aforementioned face recognition method.
The method provided by the invention recognizes occluded faces well; experiments show that it effectively improves the accuracy of face recognition.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided by the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the apparatus/terminal device embodiments described above are merely illustrative: the division into modules or units is only one logical division, and other divisions are possible in actual implementation; for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the methods of the above embodiments may be implemented by a computer program, which may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the above method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (10)

1. A face recognition method is characterized by comprising the following steps:
s1, acquiring an image area of the face to be recognized and key points of the image area;
s2, normalizing the image area based on the key points;
and S3, performing face recognition according to the non-occlusion area in the normalized image area.
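Read as a pipeline, the three claimed steps chain together as sketched below. Every function here is a hypothetical stub standing in for the claimed component (a trained detector, a key-point-based normalizer, an occlusion-aware recognizer); the function names, the fixed bounding box, and the five-point landmark layout are illustrative assumptions, not the patented implementation:

```python
import numpy as np

def detect_face_and_keypoints(image):
    """Stub for step S1: the patent uses a trained convolutional network;
    here a fixed box and five relative key points stand in for its output."""
    h, w = image.shape[:2]
    box = (w // 4, h // 4, w // 2, h // 2)           # x, y, width, height
    keypoints = np.array([[0.3, 0.4], [0.7, 0.4],    # eyes
                          [0.5, 0.6],                # nose tip
                          [0.35, 0.8], [0.65, 0.8]]) # mouth corners
    return box, keypoints

def normalize(region, keypoints):
    """Stub for step S2: a real implementation would warp the region to a
    canonical pose using the key points; here it is returned unchanged."""
    return region

def recognize(normalized):
    """Stub for step S3: recognition restricted to non-occluded regions."""
    return {"matched": False}

# chain S1 -> S2 -> S3 on a blank test image
image = np.zeros((128, 128), dtype=np.uint8)
box, kps = detect_face_and_keypoints(image)
x, y, w, h = box
result = recognize(normalize(image[y:y+h, x:x+w], kps))
```

The dependent claims below then refine each stub: claim 3 describes the detector's training, claim 4 the normalization, and claims 5 to 7 the occlusion-aware recognition.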
2. The method according to claim 1, wherein the step of obtaining the image area of the face to be recognized and the key points of the image area in S1 includes:
and acquiring the image area and key points of the image area by adopting a trained convolutional neural network.
3. The face recognition method of claim 2, wherein the trained convolutional neural network comprises: training with a P-Net network as the framework, using images annotated with image region marks and with the key points of the image regions as the training set, and adjusting network parameters through a loss function until a preset number of iterations is reached;
the loss function is related to a face detection cross entropy loss function, a face bounding box regression loss function and a key point loss function.
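The three-term loss named in the claim can be sketched as a weighted sum, in the style of MTCNN P-Net training. The weights (1.0, 0.5, 0.5) and the plain squared-error regression terms are illustrative assumptions; the claim fixes only that the loss relates cross-entropy face detection, bounding-box regression, and key-point terms:

```python
import numpy as np

def face_cls_loss(p_pred, y_true):
    """Cross-entropy for the binary face / non-face decision."""
    eps = 1e-12
    return -(y_true * np.log(p_pred + eps)
             + (1 - y_true) * np.log(1 - p_pred + eps))

def bbox_reg_loss(box_pred, box_true):
    """Squared Euclidean loss on bounding-box regression targets."""
    return float(np.sum((box_pred - box_true) ** 2))

def landmark_loss(pts_pred, pts_true):
    """Squared Euclidean loss on facial key-point coordinates."""
    return float(np.sum((pts_pred - pts_true) ** 2))

def total_loss(p, y, box_p, box_t, pts_p, pts_t,
               w_cls=1.0, w_box=0.5, w_pts=0.5):
    """Weighted sum of the three terms named in the claim."""
    return (w_cls * face_cls_loss(p, y)
            + w_box * bbox_reg_loss(box_p, box_t)
            + w_pts * landmark_loss(pts_p, pts_t))
```

During training, the total would be minimized over mini-batches until the preset iteration count is reached, as the claim describes.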
4. The method according to claim 1, wherein the normalizing the image region based on the key points in S2 includes:
and based on the key points, adjusting the size, the angle and the position of the face in the image area through coordinate transformation to enable the face to conform to a preset standard form.
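One standard way to realize the claimed coordinate transformation is a least-squares similarity transform (Umeyama's closed form) mapping the detected key points onto a preset template, which adjusts size (scale), angle (rotation), and position (translation) at once. The estimation method and the demo point coordinates below are assumptions for illustration; the claim does not fix how the transform is computed:

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares similarity transform (scale, rotation, translation)
    mapping src key points onto dst, via Umeyama's closed form."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))      # guard against reflections
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = dst_mean - scale * R @ src_mean
    return scale, R, t

def align(points, scale, R, t):
    """Apply the recovered transform to a set of points."""
    return scale * points @ R.T + t

# demo: recover a known rotation/scale/shift from five landmark-like points
src = np.array([[30., 40.], [70., 40.], [50., 60.], [35., 80.], [65., 80.]])
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
dst = 1.5 * src @ R_true.T + np.array([10., -5.])
scale, R, t = similarity_transform(src, dst)
```

In a normalization pipeline, `dst` would hold the preset standard positions of the key points and the recovered transform would be used to warp the whole image area.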
5. The method according to claim 1, wherein the step of performing face recognition according to the non-occluded area in the normalized image area in S3 includes:
s31, taking the normalized image area as a target object;
s32, segmenting the target object;
s33, determining each segmented image area as blocked or unblocked;
s34, extracting the histogram sequence feature of the image area determined as the non-occlusion;
s35, taking the image area determined as occluded as the new target object;
s36, repeating the steps S32 to S35 for a preset number of times;
and S37, performing face recognition according to all the acquired histogram sequence characteristics.
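The S31–S37 loop can be sketched as follows. The quadrant split and the near-black occlusion test are toy stand-ins for the Markov random field segmentation of claim 6 and the wavelet-plus-SVM classifier of claim 7, respectively; only the control flow (segment, classify, extract histograms, recurse into occluded regions a preset number of times) mirrors the claim:

```python
import numpy as np

def split_quadrants(region):
    """One possible segmentation for S32: split into four equal quadrants."""
    h, w = region.shape
    return [region[:h//2, :w//2], region[:h//2, w//2:],
            region[h//2:, :w//2], region[h//2:, w//2:]]

def is_occluded(region):
    """Toy stand-in for the S33 classifier: treat near-black regions
    as occluded (the patent uses wavelet features plus an SVM)."""
    return region.mean() < 10

def histogram_feature(region, bins=16):
    """S34: normalized intensity histogram of an unoccluded region."""
    hist, _ = np.histogram(region, bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)

def recognize_with_occlusion(face, max_rounds=3):
    features, targets = [], [face]
    for _ in range(max_rounds):                    # S36: repeat S32-S35
        next_targets = []
        for target in targets:
            for part in split_quadrants(target):   # S32: segment
                if part.size == 0:
                    continue
                if is_occluded(part):              # S33 / S35: recurse later
                    next_targets.append(part)
                else:                              # S34: extract features
                    features.append(histogram_feature(part))
        if not next_targets:
            break
        targets = next_targets
    return features                                # S37: match on all features

# demo: one dark quadrant is recursed into, the other three yield features
face = np.full((64, 64), 128.0)
face[:32, :32] = 0.0
features = recognize_with_occlusion(face)
```

The returned histogram sequence would then be compared against enrolled templates in S37; the matching step itself is outside this sketch.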
6. The face recognition method of claim 5, wherein the segmenting the target object in the step S32 includes:
and generating a mask from feature information enhanced by a Markov random field, and using the mask to segment the image area determined as occluded.
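One standard way to derive such a mask is iterated conditional modes (ICM) on an Ising-style Markov random field: a data term pulls each pixel toward the closer class mean, and a smoothness term rewards agreement with its four neighbours. ICM, the two-class model, and the parameters below are illustrative assumptions, since the claim names only the Markov random field:

```python
import numpy as np

def mrf_mask(image, beta=2.0, iters=5):
    """Binary mask by iterated conditional modes on an Ising-style MRF."""
    labels = (image > image.mean()).astype(int)   # initial guess by thresholding
    var = image.var() + 1e-9
    for _ in range(iters):
        # class means under the current labelling
        means = [image[labels == k].mean() if (labels == k).any() else 0.0
                 for k in (0, 1)]
        padded = np.pad(labels, 1, mode="edge")
        # per pixel: how many of the four neighbours carry label 1
        ones = (padded[:-2, 1:-1] + padded[2:, 1:-1]
                + padded[1:-1, :-2] + padded[1:-1, 2:])
        # energy of each label: normalized data term minus neighbour agreement
        e0 = (image - means[0]) ** 2 / var - beta * (4 - ones)
        e1 = (image - means[1]) ** 2 / var - beta * ones
        labels = (e1 < e0).astype(int)
    return labels

# demo: a bright right half separates cleanly from a dark left half
img = np.zeros((16, 16))
img[:, 8:] = 200.0
mask = mrf_mask(img)
```

The smoothness weight `beta` trades mask regularity against fidelity to pixel intensities; larger values suppress isolated mislabelled pixels.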
7. The face recognition method according to claim 5, wherein the step S33 of determining whether each image region after segmentation is occluded or unoccluded comprises:
s331, performing wavelet transformation of different scales and directions on each image region to generate characteristics;
s332, reducing the dimension of the features, and classifying the reduced features by adopting a support vector machine;
s333, determining whether each image area is shielded according to the classification result.
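Steps S331 and S332 can be sketched with a Haar wavelet (one choice of wavelet giving multi-scale, multi-direction detail sub-bands) and PCA for dimension reduction. The Haar basis, the energy features, and PCA are illustrative assumptions; the final SVM classification of S332 is omitted here and would typically come from a standard library:

```python
import numpy as np

def haar_level(block):
    """One level of the 2D Haar transform: approximation plus three
    detail sub-bands (three 'directions')."""
    a = block[0::2, 0::2]
    b = block[0::2, 1::2]
    c = block[1::2, 0::2]
    d = block[1::2, 1::2]
    ll = (a + b + c + d) / 4    # approximation
    lh = (a - b + c - d) / 4    # detail across columns
    hl = (a + b - c - d) / 4    # detail across rows
    hh = (a - b - c + d) / 4    # diagonal detail
    return ll, (lh, hl, hh)

def wavelet_features(block, levels=2):
    """S331-style features: mean detail energy per direction and scale."""
    feats, approx = [], np.asarray(block, dtype=float)
    for _ in range(levels):
        approx, details = haar_level(approx)
        feats.extend(float(np.mean(d ** 2)) for d in details)
    return np.array(feats)

def pca_reduce(X, k):
    """S332, first half: project feature vectors onto the top-k principal axes."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

# demo: a checkerboard concentrates all level-1 energy in the diagonal band
checker = np.indices((8, 8)).sum(axis=0) % 2
feats = wavelet_features(checker)
```

The reduced vectors from `pca_reduce` would be fed to a trained support vector machine, whose occluded/unoccluded output drives the decision of S333.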
8. A face recognition apparatus, characterized in that the face recognition apparatus comprises:
the system comprises an acquisition module, a normalization module and a detection module, wherein the acquisition module is used for acquiring an image area of a face to be recognized and key points of the image area;
the normalization module is used for normalizing the image area based on the key points;
and the detection module is used for carrying out face recognition according to the non-occlusion area in the normalized image area.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 7 when executing the computer program.
10. A storage medium having stored therein instructions which, when run on a computer, cause the computer to perform the steps of the method of any one of claims 1 to 7.
CN202010247554.7A 2020-03-31 2020-03-31 Face recognition method, device, terminal equipment and computer readable medium Active CN111488811B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010247554.7A CN111488811B (en) 2020-03-31 2020-03-31 Face recognition method, device, terminal equipment and computer readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010247554.7A CN111488811B (en) 2020-03-31 2020-03-31 Face recognition method, device, terminal equipment and computer readable medium

Publications (2)

Publication Number Publication Date
CN111488811A true CN111488811A (en) 2020-08-04
CN111488811B CN111488811B (en) 2023-08-22

Family

ID=71812564

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010247554.7A Active CN111488811B (en) 2020-03-31 2020-03-31 Face recognition method, device, terminal equipment and computer readable medium

Country Status (1)

Country Link
CN (1) CN111488811B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111914812A (en) * 2020-08-20 2020-11-10 腾讯科技(深圳)有限公司 Image processing model training method, device, equipment and storage medium
CN111968291A (en) * 2020-08-26 2020-11-20 重庆康普达科技有限公司 Face recognition intelligent column

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103996039A (en) * 2014-05-06 2014-08-20 河海大学 SAR image channel extraction method combining gray-level threshold-value segmentation and contour shape identification
CN104091163A (en) * 2014-07-19 2014-10-08 福州大学 LBP face recognition method capable of eliminating influences of blocking
US20150205997A1 (en) * 2012-06-25 2015-07-23 Nokia Corporation Method, apparatus and computer program product for human-face features extraction
KR20160061856A (en) * 2014-11-24 2016-06-01 삼성전자주식회사 Method and apparatus for recognizing object, and method and apparatus for learning recognizer
CN206224639U (en) * 2016-11-14 2017-06-06 华南理工大学 A kind of face recognition door control system with occlusion detection function
CN107292287A (en) * 2017-07-14 2017-10-24 深圳云天励飞技术有限公司 Face identification method, device, electronic equipment and storage medium
CN107463920A (en) * 2017-08-21 2017-12-12 吉林大学 A face recognition method for eliminating the influence of partial occlusions
CN107622503A (en) * 2017-08-10 2018-01-23 上海电力学院 A hierarchical segmentation method for recovering image occluding boundaries
US20180158246A1 (en) * 2016-12-07 2018-06-07 Intel IP Corporation Method and system of providing user facial displays in virtual or augmented reality for face occluding head mounted displays
CN108932458A (en) * 2017-05-24 2018-12-04 上海云从企业发展有限公司 Face reconstruction method and device for restoring glasses-occluded areas
EP3428843A1 (en) * 2017-07-14 2019-01-16 GB Group plc Improvements relating to face recognition
CN109684973A (en) * 2018-12-18 2019-04-26 哈尔滨工业大学 The facial image fill system of convolutional neural networks based on symmetrical consistency
CN110263768A (en) * 2019-07-19 2019-09-20 深圳市科葩信息技术有限公司 A kind of face identification method based on depth residual error network
CN110298284A (en) * 2019-06-24 2019-10-01 火石信科(广州)科技有限公司 A recognition method for reading/writing scenes and reading/writing positions
US20200034657A1 (en) * 2017-07-27 2020-01-30 Tencent Technology (Shenzhen) Company Limited Method and apparatus for occlusion detection on target object, electronic device, and storage medium
CN110895693A (en) * 2019-09-12 2020-03-20 华中科技大学 Authentication method and authentication system for anti-counterfeiting information of certificate

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150205997A1 (en) * 2012-06-25 2015-07-23 Nokia Corporation Method, apparatus and computer program product for human-face features extraction
CN103996039A (en) * 2014-05-06 2014-08-20 河海大学 SAR image channel extraction method combining gray-level threshold-value segmentation and contour shape identification
CN104091163A (en) * 2014-07-19 2014-10-08 福州大学 LBP face recognition method capable of eliminating influences of blocking
KR20160061856A (en) * 2014-11-24 2016-06-01 삼성전자주식회사 Method and apparatus for recognizing object, and method and apparatus for learning recognizer
CN206224639U (en) * 2016-11-14 2017-06-06 华南理工大学 A kind of face recognition door control system with occlusion detection function
US20180158246A1 (en) * 2016-12-07 2018-06-07 Intel IP Corporation Method and system of providing user facial displays in virtual or augmented reality for face occluding head mounted displays
CN108932458A (en) * 2017-05-24 2018-12-04 上海云从企业发展有限公司 Face reconstruction method and device for restoring glasses-occluded areas
CN107292287A (en) * 2017-07-14 2017-10-24 深圳云天励飞技术有限公司 Face identification method, device, electronic equipment and storage medium
EP3428843A1 (en) * 2017-07-14 2019-01-16 GB Group plc Improvements relating to face recognition
US20200034657A1 (en) * 2017-07-27 2020-01-30 Tencent Technology (Shenzhen) Company Limited Method and apparatus for occlusion detection on target object, electronic device, and storage medium
CN107622503A (en) * 2017-08-10 2018-01-23 上海电力学院 A hierarchical segmentation method for recovering image occluding boundaries
CN107463920A (en) * 2017-08-21 2017-12-12 吉林大学 A face recognition method for eliminating the influence of partial occlusions
CN109684973A (en) * 2018-12-18 2019-04-26 哈尔滨工业大学 The facial image fill system of convolutional neural networks based on symmetrical consistency
CN110298284A (en) * 2019-06-24 2019-10-01 火石信科(广州)科技有限公司 A recognition method for reading/writing scenes and reading/writing positions
CN110263768A (en) * 2019-07-19 2019-09-20 深圳市科葩信息技术有限公司 A kind of face identification method based on depth residual error network
CN110895693A (en) * 2019-09-12 2020-03-20 华中科技大学 Authentication method and authentication system for anti-counterfeiting information of certificate

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
傅勇; 潘晴; 田妮莉; 杨志景; BINGO WING-KUEN LING; EVERETT.X.WANG: "In-plane rotated face detection using an improved cascaded convolutional neural network", vol. 41, no. 03, pages 856-861 *
孔令美 et al.: "3D occluded face recognition method based on wavelet transform and wavelet neural network", Natural Science Journal of Xiangtan University, vol. 37, no. 04, pages 82-86 *
封筠 et al.: "Human ear recognition method using histogram of oriented gradients features", Journal of Nanjing University (Natural Science), vol. 48, no. 04, pages 452-458 *
李冬梅; 熊承义; 高志荣; 周城; 汪汉新: "Occluded face recognition based on outlier region elimination", vol. 42, no. 03, pages 289-295 *
邵一鸣; 孙红星; 陈虹羊: "Face occlusion detection method based on deep learning", vol. 42, no. 06, pages 454-461 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111914812A (en) * 2020-08-20 2020-11-10 腾讯科技(深圳)有限公司 Image processing model training method, device, equipment and storage medium
CN111914812B (en) * 2020-08-20 2022-09-16 腾讯科技(深圳)有限公司 Image processing model training method, device, equipment and storage medium
CN111968291A (en) * 2020-08-26 2020-11-20 重庆康普达科技有限公司 Face recognition intelligent column
CN111968291B (en) * 2020-08-26 2022-05-17 重庆康普达科技有限公司 Face recognition intelligent column

Also Published As

Publication number Publication date
CN111488811B (en) 2023-08-22

Similar Documents

Publication Publication Date Title
Nogueira et al. Evaluating software-based fingerprint liveness detection using convolutional networks and local binary patterns
Montazer et al. An improved radial basis function neural network for object image retrieval
Chauhan et al. Brain tumor detection and classification in MRI images using image and data mining
Hemalatha et al. A computational model for texture analysis in images with fractional differential filter for texture detection
CN110852327A (en) Image processing method, image processing device, electronic equipment and storage medium
CN111191569A (en) Face attribute recognition method and related device thereof
CN111353385B (en) Pedestrian re-identification method and device based on mask alignment and attention mechanism
CN111488811B (en) Face recognition method, device, terminal equipment and computer readable medium
CN113592030B (en) Image retrieval method and system based on complex value singular spectrum analysis
CN114444565A (en) Image tampering detection method, terminal device and storage medium
CN111898408B (en) Quick face recognition method and device
CN109902720B (en) Image classification and identification method for depth feature estimation based on subspace decomposition
CN111950403A (en) Iris classification method and system, electronic device and storage medium
Gani et al. Copy move forgery detection using DCT, PatchMatch and cellular automata
Hamidi et al. Local selected features of dual‐tree complex wavelet transform for single sample face recognition
Florindo et al. Texture descriptors by a fractal analysis of three-dimensional local coarseness
Krupiński et al. Binarization of degraded document images with generalized Gaussian distribution
CN113313124B (en) Method and device for identifying license plate number based on image segmentation algorithm and terminal equipment
CN111553195B (en) Three-dimensional face shielding discrimination method based on multi-bitmap tangent plane and multi-scale uLBP
CN112070116B (en) Automatic artistic drawing classification system and method based on support vector machine
CN111753723B (en) Fingerprint identification method and device based on density calibration
Ying et al. Simulation of computer image recognition technology based on image feature extraction
Zayed et al. A New Refined-TLBO Aided Bi-Generative Adversarial Network for Finger Vein Recognition
CN117173485B (en) Intelligent classification system method and system for lung cancer tissue pathological images
Ragul An Ingenious Texture and Shape Feature Extraction in Remote Sensing Images by Means of Multi Kernel Principal Component analysis with Pyramidal Wavelet Transform and Canny Edge Detection Method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant