
Face recognition method, device, terminal equipment and computer readable medium

Info

Publication number
CN111488811B
Authority
CN
China
Prior art keywords
image area
face recognition
face
image
key points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010247554.7A
Other languages
Chinese (zh)
Other versions
CN111488811A (en)
Inventor
罗茜
张斯尧
王思远
蒋杰
张诚
李乾
谢喜林
黄晋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changsha Qianshitong Intelligent Technology Co ltd
Original Assignee
Changsha Qianshitong Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changsha Qianshitong Intelligent Technology Co ltd filed Critical Changsha Qianshitong Intelligent Technology Co ltd
Priority to CN202010247554.7A
Publication of CN111488811A
Application granted
Publication of CN111488811B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to the field of image processing and provides a face recognition method comprising the following steps: S1, acquiring an image area of a face to be identified and key points of the image area; S2, normalizing the image area based on the key points; S3, performing face recognition according to the non-occluded region of the normalized image area. A corresponding face recognition device and storage medium are also provided. The method improves the accuracy of face recognition while keeping the model simple and the computation fast.

Description

Face recognition method, device, terminal equipment and computer readable medium
Technical Field
The present application relates to the field of image processing, and in particular, to a face recognition method, a face recognition device, and corresponding terminal equipment and computer readable medium.
Background
Face recognition is one of the current research hotspots and is widely applied in fields such as retail and security. Deep convolutional neural networks have brought great progress to face recognition: verification accuracy on data sets such as LFW (Labeled Faces in the Wild) even exceeds human level, which has drawn increasing attention to the technology.
In practical applications of face recognition, occlusion of the face lowers the recognition rate. Current approaches to recognizing occluded faces rely mainly on recognition from partial facial features or on reconstructing the occluded face, but existing methods suffer from low recognition rates and complex computation, and recognition accuracy still needs improvement.
Disclosure of Invention
In view of this, the present application aims to provide a face recognition method and network that at least solve the problem of reduced recognition accuracy caused by face occlusion in current face recognition.
In a first aspect of the present application, there is provided a face recognition method, the face recognition method comprising:
s1, acquiring an image area of a face to be identified and key points of the image area;
s2, normalizing the image area based on the key points;
s3, face recognition is carried out according to the non-occlusion region of the normalized image region.
Optionally, obtaining the image area of the face to be identified and the key points of the image area in S1 includes:
acquiring the key points of the image area with a trained convolutional neural network.
Optionally, the trained convolutional neural network is obtained as follows: with the P-Net network as the framework, training is performed on images annotated with image areas and the key points of those areas, and the network parameters are adjusted through a loss function until a preset number of iterations is reached.
Optionally, the loss function is related to a face detection cross entropy loss function, a face bounding box regression loss function, and a key point loss function.
Optionally, normalizing the image area based on the key points in S2 includes:
adjusting the size, angle and position of the face in the image area through a coordinate transformation based on the key points, so that the face conforms to a preset standard form.
Optionally, performing face recognition according to the non-occluded region of the normalized image area in S3 includes:
S31, taking the normalized image area as a target object;
S32, segmenting the target object;
S33, determining each segmented image area as occluded or non-occluded;
S34, extracting histogram sequence features of the image areas determined to be non-occluded;
S35, taking the image areas determined to be occluded as the target object;
S36, repeating steps S32 to S35 a preset number of times;
S37, performing face recognition according to all the obtained histogram sequence features.
Optionally, segmenting the target object in S32 includes:
generating a mask using Markov-random-field-enhanced feature information, and segmenting the image area determined to be occluded using the mask.
Optionally, determining in step S33 that each segmented image area is occluded or non-occluded includes:
S331, performing wavelet transformations of different scales and directions on each image area to generate features;
S332, performing dimension reduction on the features, and classifying the dimension-reduced features with a support vector machine;
S333, determining whether each image area is occluded according to the classification result.
In a second aspect of the present application, there is also provided a face recognition apparatus including:
the acquisition module is used for acquiring an image area of the face to be identified and key points of the image area; the normalization module is used for normalizing the image area based on the key points; and the detection module is used for carrying out face recognition according to the non-occlusion area of the normalized image area.
A third aspect of the present application is a storage medium having stored therein instructions that, when executed on a computer, cause the computer to perform the aforementioned face recognition method.
The technical solution provided by the application has the following beneficial effects: the application provides a method and network for recognizing occluded faces that improve the accuracy of face recognition with a simple model and fast computation.
Additional features and advantages of the application will be set forth in the detailed description which follows.
Drawings
Fig. 1 is a schematic flow chart of a face recognition method according to an embodiment of the present application;
fig. 2 is a schematic diagram of a face recognition device according to an embodiment of the present application;
fig. 3 is a schematic diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
In order to illustrate the technical solution of the application, specific embodiments are described below.
Fig. 1 is a schematic flow chart of a face recognition method according to an embodiment of the present application. As shown in Fig. 1, this embodiment provides a face recognition method including:
s1, acquiring an image area of a face to be identified and key points of the image area;
s2, normalizing the image area based on the key points;
s3, face recognition is carried out according to the non-occlusion region of the normalized image region.
Thus, the image area containing the face is first extracted and preprocessed, which improves the accuracy of face recognition. Normalizing the image area corrects parameters such as the size, angle, and position of the face to be identified, avoiding the reduced recognition rate caused by parameter errors. Occluded and non-occluded areas in the image are then distinguished, and only the non-occluded areas are used for recognition, avoiding interference from occlusions and further improving recognition accuracy. Through this processing, the embodiment of the application reduces interference and error in image recognition and thereby improves the accuracy of face recognition.
Specifically, the general procedure of this embodiment is as follows. First, the image area and its key points are extracted: the image area is the area where the face is located, and the key points are the identifying or positional landmarks of the face; extracting an accurate area and its key points is the basis for the subsequent normalization. Image normalization transforms the image into a fixed standard form through a series of standard transformations, the result also being known as a normalized image. Occlusion detection on the normalized image then ignores regions such as a mask or sunglasses within the face area and focuses on the non-occluded regions, so that occluding objects do not degrade face recognition accuracy.
In one embodiment of the present application, obtaining the image area of the face to be identified and the key points of the image area in step S1 includes:
acquiring the key points of the image area with the trained convolutional neural network. Using a convolutional neural network to identify the image area and its key points effectively improves recognition efficiency, and compared with other acquisition approaches it offers good accuracy and fast computation. The trained convolutional neural network is obtained by taking the P-Net network as the framework, training on images annotated with image areas and the key points of those areas, and adjusting the network parameters through a loss function until a preset number of iterations is reached. In this embodiment, the per-layer network parameters are preferably set based on the P-Net network; for the specific training procedure of the neural network, refer to the prior art, which is not repeated here. The trained P-Net network can quickly and accurately identify the face region and its key points in a target image.
In one embodiment of the present application, the loss function is related to a face detection cross entropy loss function, a face bounding box regression loss function, and a key point loss function. The loss function is an important component of the neural network model, and a proper loss function ensures rapid convergence of the convolutional neural network. The loss function of the aforementioned trained convolutional neural network comprises:
cross entropy loss function for face detection
Wherein Xi represents the real label of the sample, pi represents the probability of the network output as a human face, and N is the total number of training samples;
face bounding box regression loss function
Wherein, the liquid crystal display device comprises a liquid crystal display device,representing the bounding box coordinates, y, obtained after network output i A real bounding box representing the object;
key point loss function
Wherein, the liquid crystal display device comprises a liquid crystal display device,representing the coordinates, z, of the key points obtained after network output i Representing the real coordinates of the key points;
the total loss function is
L=min(λ 1 L 12 L 23 L 3 )
Wherein lambda is 1 、λ 2 、λ 3 Is the weight that each loss takes.
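For concreteness, a minimal NumPy sketch of these loss terms; the function names and the example weights are illustrative assumptions, not values taken from the patent:

```python
import numpy as np

def face_detection_loss(x, p, eps=1e-12):
    """L1: cross entropy over N samples; x is the real label (0/1),
    p the face probability output by the network."""
    return -np.mean(x * np.log(p + eps) + (1 - x) * np.log(1 - p + eps))

def bbox_regression_loss(y_hat, y):
    """L2: squared error between predicted and real bounding boxes."""
    return np.mean(np.sum((y_hat - y) ** 2, axis=-1))

def keypoint_loss(z_hat, z):
    """L3: squared error between predicted and real key points."""
    return np.mean(np.sum((z_hat - z) ** 2, axis=-1))

def total_loss(l1, l2, l3, lam=(1.0, 0.5, 0.5)):
    """Weighted combination minimized during training; the lambda
    weights here are assumed values."""
    return lam[0] * l1 + lam[1] * l2 + lam[2] * l3
```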
In one embodiment of the present application, normalizing the image area based on the key points in S2 includes: adjusting the size, angle and position of the face in the image area through a coordinate transformation based on the key points, so that the face conforms to a preset standard form. For example, the transformation matrix of the image is calculated from the center coordinates of the left and right eyes of the face:

$$M = \begin{bmatrix} S_x\cos\theta & -S_y\sin\theta & t_x \\ S_x\sin\theta & S_y\cos\theta & t_y \\ 0 & 0 & 1 \end{bmatrix}$$

where $S_x$ and $S_y$ are the scaling factors of the image in the x and y directions respectively, $\theta$ is the rotation angle, $t_x$ is the translation distance in the x direction, and $t_y$ is the translation distance in the y direction. Scaling, rotation, and translation of the image area can all be performed through this transformation matrix, so that the image conforms to the standard form.
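As a hedged illustration, the OpenCV sketch below estimates such a transform from the two eye centers and warps the face into a canonical pose; the output size and the target eye positions are assumptions:

```python
import cv2
import numpy as np

def normalize_face(img, left_eye, right_eye, out_size=(112, 112)):
    """Scale, rotate, and translate the face so the eyes land at fixed
    canonical positions; the canonical positions are assumptions."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    angle = np.degrees(np.arctan2(dy, dx))            # rotation angle theta
    center = ((left_eye[0] + right_eye[0]) / 2.0,
              (left_eye[1] + right_eye[1]) / 2.0)     # eye midpoint
    scale = (0.5 * out_size[0]) / np.hypot(dx, dy)    # assumed eye spacing
    M = cv2.getRotationMatrix2D(center, angle, scale) # scaling + rotation
    M[0, 2] += 0.5 * out_size[0] - center[0]          # translation t_x
    M[1, 2] += 0.4 * out_size[1] - center[1]          # translation t_y
    return cv2.warpAffine(img, M, out_size)
```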
In one embodiment of the present application, performing face recognition according to the non-occluded region of the normalized image area in S3 includes:
S31, taking the normalized image area as a target object;
S32, segmenting the target object;
S33, determining each segmented image area as occluded or non-occluded;
S34, extracting histogram sequence features of the image areas determined to be non-occluded;
S35, taking the image areas determined to be occluded as the target object;
S36, repeating steps S32 to S35 a preset number of times;
S37, performing face recognition according to all the obtained histogram sequence features.
This embodiment segments the image and then classifies each segment as occluded or non-occluded, extracting features only from the non-occluded areas; the features of occluded parts are thereby ignored, which improves recognition accuracy. Image segmentation faces a granularity problem: a segmented region is very likely to be only partially occluded, and discarding every region judged to be occluded would lose many image features and reduce recognition accuracy. This embodiment therefore re-segments the occluded regions to reduce feature loss, and the progressive segmentation can be repeated several times to avoid losing face information as far as possible. The loop is sketched below.
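A compact sketch of steps S31 to S37, in which `segment`, `is_occluded`, and `lgbphs_features` are assumed stand-ins for the Markov-random-field segmentation, the SVM occlusion classifier, and the histogram-sequence extractor described below:

```python
def recognize_unoccluded(face_img, segment, is_occluded, lgbphs_features,
                         max_rounds=3):
    """Progressive re-segmentation of occluded regions (S31-S37).
    segment / is_occluded / lgbphs_features are assumed helper functions."""
    features = []
    targets = [face_img]                    # S31: start from the whole face
    for _ in range(max_rounds):             # S36: preset number of rounds
        next_targets = []
        for target in targets:
            for region in segment(target):          # S32: split the target
                if is_occluded(region):             # S33: classify region
                    next_targets.append(region)     # S35: re-segment later
                else:
                    features.append(lgbphs_features(region))  # S34
        targets = next_targets
        if not targets:                     # nothing left marked occluded
            break
    return features  # S37: match these histogram sequences for recognition
```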
In one embodiment of the present application, segmenting the target object in S32 includes:
generating a mask using Markov-random-field-enhanced feature information, and segmenting the image region determined to be occluded using the mask. Specifically, let S denote the set of all pixels of the image, and initialize the classification label field W with the SVM occlusion result, i.e., a pixel is labeled 1 where occlusion exists and 0 where it does not. Combining the prior probability and the conditional probability, the maximum of the posterior probability P(W|S) is estimated; when P(W|S) reaches its maximum, every pixel obtains the optimal occluded/non-occluded classification:

$$P(W \mid S) = \frac{P(S \mid W)\,P(W)}{P(S)}$$

P(S|W) is the conditional probability, the likelihood function of P(W|S); it expresses how well the pixels assigned to each class match the real pixel distribution. P(S) is the distribution of the input image and is a constant. P(W) is the prior over classifications. The pixels of the image can be regarded as a Markov random field; that is, the classification probability of a pixel is related only to its adjacent points and not to distant ones. A small region formed by a pixel and its surrounding neighbors is called a clique (potential energy group), for example a pixel together with its left-adjacent pixel.

P(W) can be found from the potential energy function

$$P(W) = \frac{1}{z}\exp\!\left(-\frac{1}{T}\sum_{c \in C} V_c(w_c)\right)$$

where z is the partition function, a normalization constant; the parameter T controls the shape of P(W) (the larger T, the flatter the distribution); and C is the set of cliques. For a pairwise clique,

$$V_c(w_s, w_t) = \begin{cases} -\beta, & w_s = w_t \\ \beta, & w_s \neq w_t \end{cases}$$

where $V_c(w_c)$ is the potential energy of the clique, $\beta$ is the coupling coefficient, and s and t are two adjacent pixel points.

P(S|W) estimates pixel values from the label information: if the pixels within each labeled class follow a Gaussian distribution, the classification result can be judged from the value of an individual pixel.
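The sketch below refines the binary occlusion mask by locally maximizing P(W|S) with iterated conditional modes; the choice of optimizer and the Gaussian likelihood parameters are assumptions, since the patent only states the probabilistic model:

```python
import numpy as np

def mrf_refine(pixels, init_labels, mu, sigma, beta=1.0, iters=5):
    """Refine a binary occlusion label field W over pixel field S by
    greedily maximizing P(W|S); ICM is an assumed optimizer choice.
    mu[c], sigma[c]: Gaussian likelihood parameters of class c (assumed)."""
    labels = init_labels.copy()        # SVM occlusion result initializes W
    h, w = labels.shape
    for _ in range(iters):
        for y in range(h):
            for x in range(w):
                best_c, best_e = labels[y, x], np.inf
                for c in (0, 1):
                    # data term: negative Gaussian log-likelihood of P(S|W)
                    e = (pixels[y, x] - mu[c]) ** 2 / (2.0 * sigma[c] ** 2)
                    # smoothness term from the pairwise clique potential
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w:
                            e += -beta if labels[ny, nx] == c else beta
                    if e < best_e:
                        best_c, best_e = c, e
                labels[y, x] = best_c
    return labels
```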
The occlusion image is thus re-segmented with the Markov random field, improving segmentation accuracy. The LGBPHS (Local Gabor Binary Pattern Histogram Sequence) features of the non-occluded face regions are then compared, and the face is identified by matching the corresponding histograms, for example with the histogram intersection operation. LGBPHS features further encode Gabor features with the LBP operator, producing a new histogram sequence based on local Gabor transformation.
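As a simplified illustration of histogram-sequence matching, the sketch below computes a plain LBP histogram per block and compares histograms by intersection; the full LGBPHS method additionally applies the LBP operator to Gabor-filtered responses:

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(patch, P=8, R=1.0, bins=59):
    """Simplified stand-in for one LGBPHS block histogram: plain LBP on
    the gray patch (LGBPHS would run LBP on Gabor magnitude responses)."""
    codes = local_binary_pattern(patch, P, R, method="nri_uniform")
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / max(hist.sum(), 1)

def histogram_intersection(h1, h2):
    """Similarity of two (concatenated) histogram sequences."""
    return float(np.minimum(h1, h2).sum())
```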
In one embodiment of the present application, determining in step S33 that each segmented image area is occluded or non-occluded includes:
S331, performing wavelet transformations of different scales and directions on each image area to generate features;
S332, performing dimension reduction on the features, and classifying the dimension-reduced features with a support vector machine;
S333, determining whether each image area is occluded according to the classification result.
Specifically, the wavelet transform uses a Gabor wavelet. A Gabor wavelet consists of a sinusoidal carrier and a Gaussian envelope and achieves optimal localization in the spatial and frequency domains simultaneously, so it describes well the local structure corresponding to spatial frequency, spatial position, and orientation selectivity. Because the extracted Gabor features are high-dimensional, PCA (principal component analysis) is used to maximize the variance of the Gabor features in the projection subspace, reducing the dimensionality of the feature vectors while maintaining their discriminative power.
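A hedged sketch of this occlusion classifier: Gabor responses at several scales and orientations, PCA for dimensionality reduction, and a support vector machine for the occluded/non-occluded decision; all hyperparameters below are illustrative assumptions:

```python
import cv2
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def gabor_features(patch, ksizes=(7, 11, 15), n_orient=4):
    """Filter the patch with Gabor kernels at several scales and
    orientations and keep simple response statistics as features."""
    feats = []
    for k in ksizes:
        for i in range(n_orient):
            theta = i * np.pi / n_orient
            kern = cv2.getGaborKernel((k, k), sigma=k / 3.0, theta=theta,
                                      lambd=k / 2.0, gamma=0.5, psi=0.0)
            resp = cv2.filter2D(patch.astype(np.float32), cv2.CV_32F, kern)
            feats.extend([resp.mean(), resp.std()])
    return np.array(feats)

# Assumed training data: train_feats (stacked gabor_features rows) and
# y_occluded (1 = occluded block, 0 = non-occluded block).
# pca = PCA(n_components=32).fit(train_feats)
# clf = SVC(kernel="rbf").fit(pca.transform(train_feats), y_occluded)
# pred = clf.predict(pca.transform(gabor_features(block)[None, :]))
```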
Fig. 2 is a schematic diagram of a face recognition device according to an embodiment of the present application. As shown in Fig. 2, an embodiment of the present application also provides a face recognition apparatus, including:
the acquisition module is used for acquiring an image area of the face to be identified and key points of the image area;
the normalization module is used for normalizing the image area based on the key points;
and the detection module is used for carrying out face recognition according to the non-occlusion area of the normalized image area.
The acquisition module acquires the key points of the image area with the trained convolutional neural network.
The trained convolutional neural network is obtained by using the P-Net network as a framework, training on images annotated with image areas and the key points of those areas, and adjusting network parameters through a loss function until a preset number of iterations is reached.
The loss function is related to a face detection cross entropy loss function, a face bounding box regression loss function and a key point loss function.
The normalization module adjusts the size, angle and position of the face in the image area through a coordinate transformation based on the key points, so that the face conforms to a preset standard form.
The detection module performs the following steps:
S31, taking the normalized image area as a target object;
S32, segmenting the target object;
S33, determining each segmented image area as occluded or non-occluded;
S34, extracting histogram sequence features of the image areas determined to be non-occluded;
S35, taking the image areas determined to be occluded as the target object;
S36, repeating steps S32 to S35 a preset number of times;
S37, performing face recognition according to all the obtained histogram sequence features.
In the detection module, segmenting the target object includes: generating a mask using Markov-random-field-enhanced feature information, and segmenting the image area determined to be occluded using the mask.
In the detection module, determining each segmented image area as occluded or non-occluded includes the following steps:
S331, performing wavelet transformations of different scales and directions on each image area to generate features;
S332, performing dimension reduction on the features, and classifying the dimension-reduced features with a support vector machine;
S333, determining whether each image area is occluded according to the classification result.
For details of the face recognition device embodiment, refer to the face recognition method described above; they are not repeated here.
An embodiment of the present application further provides a computer or server on which the aforementioned face recognition device is loaded, so that the foregoing face recognition method can be carried out.
Fig. 3 is a schematic diagram of a terminal device according to an embodiment of the present application. As shown in Fig. 3, the terminal device 10 of this embodiment includes: a processor 100, a memory 101, and a computer program 102 stored in the memory 101 and executable on the processor 100, such as a program for performing face recognition. The processor 100, when executing the computer program 102, implements the steps of the method embodiments described above, for example, the relevant steps of the face recognition method shown in Fig. 1. Alternatively, the processor 100 may implement the functions of the modules/units in the above-described device embodiments when executing the computer program 102, for example, the functions of the relevant modules of the face recognition device shown in Fig. 2.
Illustratively, the computer program 102 may be partitioned into one or more modules/units that are stored in the memory 101 and executed by the processor 100 to accomplish the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing the specified functions, which instruction segments are used to describe the execution of the computer program 102 in the terminal device 10. For example, the computer program 102 may be divided into an acquisition module, a normalization module, and a detection module (a module in a virtual device), each of which functions specifically as follows:
the acquisition module is used for acquiring an image area of the face to be identified and key points of the image area;
the normalization module is used for normalizing the image area based on the key points;
and the detection module is used for carrying out face recognition according to the non-occlusion area of the normalized image area.
The terminal device 10 may be a computing device such as a desktop computer, a notebook computer, a palm computer, a cloud server, etc. Terminal device 10 may include, but is not limited to, a processor 100, a memory 101. It will be appreciated by those skilled in the art that fig. 3 is merely an example of the terminal device 10 and is not limiting of the terminal device 10, and may include more or fewer components than shown, or may combine certain components, or different components, e.g., the terminal device may also include input-output devices, network access devices, buses, etc.
The processor 100 may be a central processing unit (Central Processing Unit, CPU), but may also be another general purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 101 may be an internal storage unit of the terminal device 10, such as a hard disk or a memory of the terminal device 10. The memory 101 may also be an external storage device of the terminal device 10, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the terminal device 10. Further, the memory 101 may also include both an internal storage unit and an external storage device of the terminal device 10. The memory 101 is used for storing the computer program as well as other programs and data required by the terminal device 10. The memory 101 may also be used to temporarily store data that has been output or is to be output.
In one embodiment of the present application, the storage medium stores instructions that, when executed on a computer, cause the computer to perform the aforementioned face recognition method.
The implementation provided by the application achieves a good recognition effect on occluded faces, and experiments show that it effectively improves the accuracy of face recognition.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, the specific names of the functional units and modules are only for distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
The descriptions of the foregoing embodiments each have their own emphasis; for parts not described or illustrated in a particular embodiment, refer to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other manners. For example, the apparatus/terminal device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical function division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on this understanding, the present application may implement all or part of the flow of the above method embodiments through a computer program that instructs related hardware; the computer program may be stored in a computer readable storage medium, and when executed by a processor, it implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content contained in the computer readable medium may be adjusted appropriately according to the requirements of legislation and patent practice in each jurisdiction; for example, in certain jurisdictions, according to legislation and patent practice, the computer readable medium does not include electrical carrier signals and telecommunication signals.
The above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (7)

1. A face recognition method, characterized in that the face recognition method comprises:
s1, acquiring an image area of a face to be identified and key points of the image area;
s2, normalizing the image area based on the key points;
s3, carrying out face recognition according to the non-occlusion region in the normalized image region;
in the step S3, face recognition is performed according to the non-occlusion region in the normalized image region, including:
s31, taking the normalized image area as a target object,
s32, dividing the target object;
the segmenting the target object in S32 includes:
generating a mask by adopting Markov random field enhanced characteristic information, and dividing the image area determined to be blocked by adopting the mask;
s33, determining each segmented image area as blocked or unblocked;
in the step S33, determining that each segmented image area is occluded or non-occluded includes:
s331, carrying out wavelet transformation of different scales and directions on each image area to generate characteristics;
s332, performing dimension reduction on the features, and classifying the dimension reduced features by adopting a support vector machine;
s333, determining whether each image area is blocked or not according to the classification result;
s34, extracting histogram sequence features of the image area determined to be non-occluded;
s35, taking the image area determined to be blocked as a target object;
s36, repeating the steps S32 to S35 for a preset number of times;
s37, performing face recognition according to all the obtained histogram sequence features.
2. The face recognition method according to claim 1, wherein the step of obtaining the image area of the face to be recognized and the key points of the image area in S1 includes:
and acquiring the key points of the image area by adopting the trained convolutional neural network.
3. The face recognition method of claim 2, wherein the trained convolutional neural network comprises: taking a P-Net network as a framework, training by taking an image with an image area mark and a key point marked with the image area as a training set, and adjusting network parameters through a loss function until iteration reaches preset times;
the loss function is related to a face detection cross entropy loss function, a face bounding box regression loss function and a key point loss function.
4. The face recognition method according to claim 1, wherein normalizing the image region based on the keypoints in S2 includes:
and adjusting the size, angle and position of the face in the image area through coordinate transformation based on the key points so as to enable the face to accord with a preset standard form.
5. A face recognition device, characterized in that the face recognition device comprises:
the acquisition module is used for acquiring an image area of the face to be identified and key points of the image area;
the normalization module is used for normalizing the image area based on the key points;
and the detection module is used for carrying out face recognition according to the non-occlusion region in the normalized image region.
6. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 4 when the computer program is executed.
7. A storage medium having stored therein instructions which, when run on a computer, cause the computer to perform the steps of the method of any of claims 1 to 4.
CN202010247554.7A 2020-03-31 2020-03-31 Face recognition method, device, terminal equipment and computer readable medium Active CN111488811B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010247554.7A CN111488811B (en) 2020-03-31 2020-03-31 Face recognition method, device, terminal equipment and computer readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010247554.7A CN111488811B (en) 2020-03-31 2020-03-31 Face recognition method, device, terminal equipment and computer readable medium

Publications (2)

Publication Number Publication Date
CN111488811A CN111488811A (en) 2020-08-04
CN111488811B (en) 2023-08-22

Family

ID=71812564

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010247554.7A Active CN111488811B (en) 2020-03-31 2020-03-31 Face recognition method, device, terminal equipment and computer readable medium

Country Status (1)

Country Link
CN (1) CN111488811B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111914812B (en) * 2020-08-20 2022-09-16 腾讯科技(深圳)有限公司 Image processing model training method, device, equipment and storage medium
CN111968291B (en) * 2020-08-26 2022-05-17 重庆康普达科技有限公司 Face recognition intelligent column

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103996039A (en) * 2014-05-06 2014-08-20 河海大学 SAR image channel extraction method combining gray-level threshold-value segmentation and contour shape identification
CN104091163A (en) * 2014-07-19 2014-10-08 福州大学 LBP face recognition method capable of eliminating influences of blocking
KR20160061856A (en) * 2014-11-24 2016-06-01 삼성전자주식회사 Method and apparatus for recognizing object, and method and apparatus for learning recognizer
CN206224639U (en) * 2016-11-14 2017-06-06 华南理工大学 A kind of face recognition door control system with occlusion detection function
CN107292287A (en) * 2017-07-14 2017-10-24 深圳云天励飞技术有限公司 Face identification method, device, electronic equipment and storage medium
CN107463920A (en) * 2017-08-21 2017-12-12 吉林大学 A kind of face recognition method for eliminating the influence of partial occlusions
CN107622503A (en) * 2017-08-10 2018-01-23 上海电力学院 A kind of layering dividing method for recovering image Ouluding boundary
CN108932458A (en) * 2017-05-24 2018-12-04 上海云从企业发展有限公司 Restore the facial reconstruction method and device of glasses occlusion area
EP3428843A1 (en) * 2017-07-14 2019-01-16 GB Group plc Improvements relating to face recognition
CN109684973A (en) * 2018-12-18 2019-04-26 哈尔滨工业大学 The facial image fill system of convolutional neural networks based on symmetrical consistency
CN110263768A (en) * 2019-07-19 2019-09-20 深圳市科葩信息技术有限公司 A kind of face identification method based on depth residual error network
CN110298284A (en) * 2019-06-24 2019-10-01 火石信科(广州)科技有限公司 A kind of recognition methods for reading and writing scene and read and write position
CN110895693A (en) * 2019-09-12 2020-03-20 华中科技大学 Authentication method and authentication system for anti-counterfeiting information of certificate

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103514432B (en) * 2012-06-25 2017-09-01 诺基亚技术有限公司 Face feature extraction method, equipment and computer program product
US20180158246A1 (en) * 2016-12-07 2018-06-07 Intel IP Corporation Method and system of providing user facial displays in virtual or augmented reality for face occluding head mounted displays
CN108319953B (en) * 2017-07-27 2019-07-16 腾讯科技(深圳)有限公司 Occlusion detection method and device, electronic equipment and the storage medium of target object

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103996039A (en) * 2014-05-06 2014-08-20 河海大学 SAR image channel extraction method combining gray-level threshold-value segmentation and contour shape identification
CN104091163A (en) * 2014-07-19 2014-10-08 福州大学 LBP face recognition method capable of eliminating influences of blocking
KR20160061856A (en) * 2014-11-24 2016-06-01 삼성전자주식회사 Method and apparatus for recognizing object, and method and apparatus for learning recognizer
CN206224639U (en) * 2016-11-14 2017-06-06 华南理工大学 A kind of face recognition door control system with occlusion detection function
CN108932458A (en) * 2017-05-24 2018-12-04 上海云从企业发展有限公司 Restore the facial reconstruction method and device of glasses occlusion area
CN107292287A (en) * 2017-07-14 2017-10-24 深圳云天励飞技术有限公司 Face identification method, device, electronic equipment and storage medium
EP3428843A1 (en) * 2017-07-14 2019-01-16 GB Group plc Improvements relating to face recognition
CN107622503A (en) * 2017-08-10 2018-01-23 上海电力学院 A kind of layering dividing method for recovering image Ouluding boundary
CN107463920A (en) * 2017-08-21 2017-12-12 吉林大学 A kind of face recognition method for eliminating the influence of partial occlusions
CN109684973A (en) * 2018-12-18 2019-04-26 哈尔滨工业大学 The facial image fill system of convolutional neural networks based on symmetrical consistency
CN110298284A (en) * 2019-06-24 2019-10-01 火石信科(广州)科技有限公司 A kind of recognition methods for reading and writing scene and read and write position
CN110263768A (en) * 2019-07-19 2019-09-20 深圳市科葩信息技术有限公司 A kind of face identification method based on depth residual error network
CN110895693A (en) * 2019-09-12 2020-03-20 华中科技大学 Authentication method and authentication system for anti-counterfeiting information of certificate

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
3D occluded face recognition method based on wavelet transform and wavelet neural network; Kong Lingmei et al.; Natural Science Journal of Xiangtan University; Vol. 37, No. 04; pp. 82-86 *

Also Published As

Publication number Publication date
CN111488811A (en) 2020-08-04

Similar Documents

Publication Publication Date Title
Nogueira et al. Evaluating software-based fingerprint liveness detection using convolutional networks and local binary patterns
WO2016145940A1 (en) Face authentication method and device
CN111488811B (en) Face recognition method, device, terminal equipment and computer readable medium
CN113269257A (en) Image classification method and device, terminal equipment and storage medium
CN109376717A (en) Personal identification method, device, electronic equipment and the storage medium of face comparison
JP3809305B2 (en) Image search apparatus, image search method, and computer-readable storage medium
CN112926592B (en) Trademark retrieval method and device based on improved Fast algorithm
Bellavia et al. HarrisZ+: Harris corner selection for next-gen image matching pipelines
Alsawwaf et al. In your face: person identification through ratios and distances between facial features
El-Abed et al. Quality assessment of image-based biometric information
Xia et al. Fast template matching based on deformable best-buddies similarity measure
CN111353385A (en) Pedestrian re-identification method and device based on mask alignment and attention mechanism
CN111898408B (en) Quick face recognition method and device
CN113344047A (en) Platen state identification method based on improved K-means algorithm
Fan et al. Skew detection in document images based on rectangular active contour
Gani et al. Copy move forgery detection using DCT, PatchMatch and cellular automata
Kunaver et al. Image feature extraction-an overview
CN111753723B (en) Fingerprint identification method and device based on density calibration
CN111553195B (en) Three-dimensional face shielding discrimination method based on multi-bitmap tangent plane and multi-scale uLBP
CN110136100B (en) Automatic classification method and device for CT slice images
CN113658101B (en) Method and device for detecting landmark points in image, terminal equipment and storage medium
Nguyen et al. Fast scene text detection with RT-LoG operator and CNN
Gagula-Palalic et al. Extracting gray level profiles of human chromosomes by curve fitting
CN109934162A (en) Facial image identification and video clip intercept method based on Struck track algorithm
Safaei et al. Robust search-free car number plate localization incorporating hierarchical saliency

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant