CN112926424B - Face occlusion recognition method, device, readable medium and equipment - Google Patents


Info

Publication number
CN112926424B
CN112926424B (application CN202110183312.0A)
Authority
CN
China
Prior art keywords: image, face, area, recognized, key
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110183312.0A
Other languages
Chinese (zh)
Other versions
CN112926424A (en)
Inventor
岳凯宇
周峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Aibee Technology Co Ltd
Original Assignee
Beijing Aibee Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Aibee Technology Co Ltd filed Critical Beijing Aibee Technology Co Ltd
Priority to CN202110183312.0A priority Critical patent/CN112926424B/en
Publication of CN112926424A publication Critical patent/CN112926424A/en
Application granted granted Critical
Publication of CN112926424B publication Critical patent/CN112926424B/en
Legal status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172: Classification, e.g. identification
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/20: Image preprocessing
    • G06V 10/25: Determination of region of interest [ROI] or a volume of interest [VOI]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a face occlusion recognition method, device, readable medium and equipment. The method acquires an image to be recognized; performs image recognition on the image to be recognized, obtains the pixel value of each key region in the image to be recognized, and generates a face occlusion recognition result for the image. The face occlusion recognition result indicates whether each key region in the image to be recognized is in an occluded state; a key region is the image region of a specific part of the face. The specific facial parts that are in an occluded state are then determined from the face occlusion recognition result of the image to be recognized. Because a face occlusion recognition model is used to recognize the key regions of the face, the accuracy of facial-part occlusion recognition is improved.

Description

Face occlusion recognition method, device, readable medium and equipment
Technical Field
The present application relates to the field of image recognition technologies, and in particular, to a face occlusion recognition method, apparatus, readable medium, and device.
Background
In the prior art, facial-part occlusion is recognized as follows: the face region and the non-face region in an image are identified, and if the specific region where a facial part is located is identified as a non-face region, that part is deemed occluded. For example, if the region where the mouth is located is identified as a non-face region, the recognition result is that the mouth of the face is occluded.
However, because this conventional method distinguishes face regions from non-face regions by pixel values, it fails when a facial part is covered by an object whose color is close to that of the face. For example, when a hand covers the mouth, the color of the hand is close to the color of the face, so the covered region is easily recognized as a face region; the final result is that no facial part is occluded, and the fact that the mouth is occluded cannot be detected. The existing facial-part occlusion recognition method therefore has low accuracy and cannot reliably identify which part of the face is occluded.
Disclosure of Invention
To address these defects in the prior art, the application provides a face occlusion recognition method, apparatus, readable medium, and device that accurately identify which part of the face is occluded.
A first aspect of the application discloses a face occlusion recognition method, comprising:
acquiring an image to be recognized;
performing image recognition on the image to be recognized, obtaining the pixel value of each key region in the image to be recognized, and generating a face occlusion recognition result for the image to be recognized, where the face occlusion recognition result indicates whether each key region in the image to be recognized is in an occluded state, and a key region is the image region of a specific part of the face; and
determining, from the face occlusion recognition result of the image to be recognized, the specific facial parts in the image that are in an occluded state.
Optionally, in the above face occlusion recognition method, generating the face occlusion recognition result of the image to be recognized includes:
judging, from the pixel values of a key region, whether that key region is in a non-occluded state; and
if the key region is judged to be in a non-occluded state, generating a specific image region at the position of a target image corresponding to the key region and setting the pixel values of that specific image region to a preset value, thereby obtaining an adjusted target image; wherein the target image has the same specification as the image to be recognized.
Optionally, in the above face occlusion recognition method, if multiple key regions in the image to be recognized are in a non-occluded state, then for each such key region the pixel value assigned to the specific image region generated at the corresponding position of the target image is unique, and the size of the specific image region is a preset size.
Optionally, in the above face occlusion recognition method, judging whether a key region is in a non-occluded state from its pixel values includes:
identifying pixel relation structures in the image to be recognized and determining whether each key region is in an occluded state.
Optionally, in the above face occlusion recognition method, after judging whether a key region is in a non-occluded state from its pixel values, the method further includes:
for each key region determined to be in a non-occluded state, calculating the ratio of the number of target pixel points in the key region to the number of pixel points of the image to be recognized, where the target pixel points are the pixel points in the key region that match the specific facial part to which the key region corresponds;
wherein generating a specific image region at the position of the target image corresponding to the key region, and setting its pixel values to a preset value to obtain an adjusted target image, includes:
for each key region determined to be in a non-occluded state, generating at the corresponding position of the target image a specific image region that satisfies the ratio requirement, setting its pixel values to the preset value, obtaining the adjusted target image, and outputting it; wherein the ratio requirement is that the degree of incompleteness of the specific image region is inversely related to the value of the ratio.
Optionally, in the above face occlusion recognition method, performing image recognition on the image to be recognized, obtaining the pixel value of each key region, and generating the face occlusion recognition result includes:
inputting the image to be recognized into a face occlusion recognition model, which performs image recognition on the image, obtains the pixel value of each key region, and generates the face occlusion recognition result; the face occlusion recognition result indicates whether each key region in the image to be recognized is in an occluded state; a key region is the image region of a specific part of the face; and the face occlusion recognition model is obtained by training a neural network model with multiple training images and the actual face occlusion recognition result of each training image.
A second aspect of the application discloses a face occlusion recognition apparatus, comprising:
an acquisition unit, configured to acquire an image to be recognized;
a recognition unit, configured to perform image recognition on the image to be recognized, obtain the pixel value of each key region in the image to be recognized, and generate a face occlusion recognition result indicating whether each key region in the image is in an occluded state, a key region being the image region of a specific part of the face; and
a determination unit, configured to determine, from the face occlusion recognition result of the image to be recognized, the specific facial parts in the image that are in an occluded state.
Optionally, in the above face occlusion recognition apparatus, when generating the face occlusion recognition result of the image to be recognized, the recognition unit is configured to:
judge, from the pixel values of a key region, whether that key region is in a non-occluded state; and, if so, generate a specific image region at the position of a target image corresponding to the key region and set its pixel values to a preset value, thereby obtaining an adjusted target image, the target image having the same specification as the image to be recognized.
Optionally, in the above face occlusion recognition apparatus, if multiple key regions in the image to be recognized are in a non-occluded state, then for each such key region the pixel value assigned to the specific image region generated at the corresponding position of the target image is unique, and the size of the specific image region is a preset size.
Optionally, in the above face occlusion recognition apparatus, when judging whether a key region is in a non-occluded state from its pixel values, the recognition unit is configured to:
identify pixel relation structures in the image to be recognized and determine whether each key region is in an occluded state.
Optionally, the above face occlusion recognition apparatus further includes:
a calculation unit, configured to calculate, for each key region determined to be in a non-occluded state, the ratio of the number of target pixel points in the key region to the number of pixel points of the image to be recognized, the target pixel points being the pixel points in the key region that match the specific facial part to which the key region corresponds;
wherein, when generating a specific image region at the position of the target image corresponding to the key region and setting its pixel values to a preset value to obtain an adjusted target image, the recognition unit is configured to:
for each key region determined to be in a non-occluded state, generate at the corresponding position of the target image a specific image region that satisfies the ratio requirement, set its pixel values to the preset value, obtain the adjusted target image, and output it; wherein the ratio requirement is that the degree of incompleteness of the specific image region is inversely related to the value of the ratio.
Optionally, in the above face occlusion recognition apparatus, the recognition unit includes:
a recognition subunit, configured to input the image to be recognized into a face occlusion recognition model, which performs image recognition on the image, obtains the pixel value of each key region, and generates the face occlusion recognition result; the face occlusion recognition result indicates whether each key region in the image to be recognized is in an occluded state; a key region is the image region of a specific part of the face; and the face occlusion recognition model is obtained by training a neural network model with multiple training images and the actual face occlusion recognition result of each training image.
A third aspect of the application discloses a computer-readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements a method as in any of the first aspects above.
A fourth aspect of the application discloses a device, comprising:
one or more processors; and
a storage device having one or more programs stored thereon,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of the first aspects described above.
According to the above technical solution, in the face occlusion recognition method provided by the embodiments of the application, the pixel value of each key region in the image to be recognized is obtained by performing image recognition on the image, and a face occlusion recognition result is generated. The face occlusion recognition result indicates whether each key region in the image to be recognized is in an occluded state, a key region being the image region of a specific part of the face. Compared with the prior-art approach of merely distinguishing face regions from non-face regions, this method obtains the pixel value of each key region and determines, per key region, whether that region is occluded; so even when a facial part is covered by an object whose color is close to that of the face, its occlusion state can be determined correctly, improving the accuracy of facial-part occlusion recognition.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flow chart of a face occlusion recognition method according to an embodiment of the present application;
FIG. 2a is an image to be recognized according to an embodiment of the present application;
FIG. 2b is the adjusted target image obtained from the image to be recognized of FIG. 2a, according to an embodiment of the present application;
FIG. 3 is a flowchart of a method for outputting an adjusted target image according to an embodiment of the present application;
FIG. 4a is another image to be recognized according to an embodiment of the present application;
FIG. 4b is the adjusted target image obtained from the image to be recognized shown in FIG. 4a, according to an embodiment of the present application;
FIG. 5 is a flowchart of a method for creating a face occlusion recognition model according to an embodiment of the present application;
FIG. 6a is a schematic diagram of a training image during processing according to an embodiment of the present application;
FIG. 6b is a schematic diagram of the actual face occlusion recognition result of the training image of FIG. 6a according to an embodiment of the present application;
FIG. 7 is a flowchart illustrating another method for recognizing face occlusion according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a device for recognizing face occlusion according to an embodiment of the present application;
Fig. 9 is a schematic structural diagram of another face occlusion recognition apparatus according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, an embodiment of the application discloses a face occlusion recognition method, which includes the following steps:
S101, acquiring an image to be identified.
The image to be recognized is a face image for which it has not yet been determined whether any part of the face is occluded. It may be an image in which the face is occluded or one in which it is not. The image to be recognized carries the pixel value and position of each pixel point.
There are many ways to acquire the image to be recognized. For example, it may be captured by a camera and then read in; or a video of a face may be acquired, split into a number of video frames, and each frame taken as an image to be recognized.
Likewise, there are many scenarios in which the image to be recognized is acquired. For example, when performing face occlusion recognition while a camera is photographing a face, the images captured by the camera during shooting are acquired; or, in a video-processing scenario, the video is split into frames and each frame is used as an image to be recognized.
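Taking every frame of a video as an image to be recognized can be wasteful, so a common variant is to sample every N-th frame. The patent does not prescribe a sampling policy; the helper below is purely illustrative (in practice the frames themselves would come from something like OpenCV's `cv2.VideoCapture`), and its name is hypothetical:

```python
def sample_frame_indices(total_frames: int, step: int) -> list[int]:
    """Return the indices of the video frames to use as images to be
    recognized, taking every `step`-th frame (a hypothetical policy)."""
    if step <= 0:
        raise ValueError("step must be positive")
    return list(range(0, total_frames, step))

# A 10-frame clip sampled every 3 frames yields 4 images to recognize.
print(sample_frame_indices(10, 3))  # [0, 3, 6, 9]
```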
S102, input the image to be recognized into a face occlusion recognition model; the model recognizes the pixel value of each key region in the image to be recognized, and obtains and outputs a face occlusion recognition result for the image. The face occlusion recognition result indicates whether each key region in the image is in an occluded state; a key region is the image region of a specific facial part; and the face occlusion recognition model is obtained by training a neural network model with multiple training images and the actual face occlusion recognition result of each training image.
The face occlusion recognition model determines whether any part of the face in the image to be recognized is occluded, and outputs a face occlusion recognition result stating this. The result indicates whether each key region in the image to be recognized is in an occluded state. Since a key region is the image region of a specific facial part, the result output by the model indicates whether each specific facial part in the image is occluded. The specific facial parts may include the mouth, the left eye, the right eye, and the nose; that is, the model determines whether the key regions where the mouth, left eye, right eye, and nose are located are in an occluded state.
Specifically, the image to be recognized is input into the face occlusion recognition model, which examines the pixel points of the image and recognizes the pixel values of each key region. A pixel relation structure is a distinctive pixel pattern formed by the arrangement relations between pixel points; when a key region is in a non-occluded state, its pixel relation structure exhibits specific characteristics. The model can therefore recognize the pixel values of each key region by checking whether the image exhibits the pixel relation structure characteristic of that region, and can judge whether the region is occluded from factors such as the characteristics of the pixel relation structure within the region and the proportion of non-occluded pixel values in it. After obtaining the result for every key region, the model outputs the face occlusion recognition result. In particular, the pixel value and position of each pixel point allow the key regions of the image to be distinguished, and the pixel values within each key region allow its occlusion state to be determined.
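The internals of the face occlusion recognition model are not specified here, so the following is only a minimal, non-learned stand-in for the per-region decision it makes: for each key region, it checks what fraction of pixel values falls inside a plausible intensity band, in the spirit of the "proportion of non-occluded pixel values" factor mentioned above. The region coordinates, intensity band, and threshold are all assumptions:

```python
import numpy as np

# Hypothetical bounding boxes (row0, row1, col0, col1) for the key regions
# in a 108x108 grayscale face crop; the patent does not give coordinates.
KEY_REGIONS = {
    "left_eye":  (30, 50, 20, 45),
    "right_eye": (30, 50, 63, 88),
    "nose":      (45, 70, 40, 68),
    "mouth":     (70, 95, 30, 78),
}

def occlusion_result(gray: np.ndarray, low=60, high=210, min_ratio=0.5):
    """Call a key region non-occluded when the fraction of its pixels
    lying in an assumed facial intensity band exceeds `min_ratio`.
    A crude stand-in for the learned model's per-region decision."""
    result = {}
    for name, (r0, r1, c0, c1) in KEY_REGIONS.items():
        patch = gray[r0:r1, c0:c1]
        ratio = np.mean((patch >= low) & (patch <= high))
        result[name] = "non-occluded" if ratio >= min_ratio else "occluded"
    return result

img = np.full((108, 108), 128, dtype=np.uint8)  # uniform mid-gray "face"
img[70:95, 30:78] = 255                         # saturated block over the mouth
print(occlusion_result(img))
```

Running this reports the mouth region as occluded and the other three regions as non-occluded, mirroring the per-part output the model is described as producing.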
In the prior art, face occlusion recognition identifies face regions and non-face regions in the image to be recognized. The principle is that the pixel-value characteristics of face regions differ from those of non-face regions, so the two can be separated; if the region of a facial part is classified as non-face, that part is considered occluded. This approach, however, has low accuracy. When the face is covered by something close to the color of the face, every facial part is mistakenly judged not to be occluded, i.e., no non-face region is found in the image, and the fact that a facial part is covered cannot be detected.
In the embodiments of the application, the face occlusion recognition model recognizes the pixel values of each key region in the image to be recognized and determines whether each key region is occluded. That is, the model can judge the occlusion state of each facial part, rather than merely separating face regions from non-face regions as in the prior art, which improves the accuracy of face occlusion recognition. If something close to the color of a face covers a facial part, the model will find, when examining that part's key region, that its pixel values do not match the pixel relation structure of the part in the non-occluded state, and can therefore correctly recognize that the part is occluded.
Optionally, in a specific embodiment of the application, one implementation of step S102 includes:
inputting the image to be recognized into the face occlusion recognition model, which recognizes the pixel value of each key region; if the pixel values of a key region show that it is in a non-occluded state, a specific image region is generated at the position of the target image corresponding to the key region, its pixel values are set to a preset value, the adjusted target image is obtained, and the adjusted target image is output.
The target image has the same specification as the image to be recognized, meaning the two have the same resolution. For example, if the image to be recognized is a 108x108 image, the target image is also 108x108.
Specifically, the face occlusion recognition model recognizes the pixel values of each key region; for each key region determined to be non-occluded, a specific image region is generated at the corresponding position of the target image and its pixel values are set to a preset value. Generating a specific image region with this preset pixel value distinguishes the key region from the other regions of the image and thereby states that the key region is non-occluded. The target image may be a separate image with the same specification as the image to be recognized, or the image to be recognized itself after the pixel values are set. The specific image region may be a circular area of preset size, or an area of another shape of preset size.
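As a concrete sketch of this step, the snippet below builds a target image with the same 108x108 specification as the image to be recognized and marks one key region as non-occluded by setting a circular area to a preset pixel value. The center coordinates and diameter here are hypothetical:

```python
import numpy as np

def draw_region(target: np.ndarray, center, diameter_px: int, value: int):
    """Set a circular area of `diameter_px` around `center` to `value`,
    marking the corresponding key region as non-occluded."""
    rr, cc = np.ogrid[:target.shape[0], :target.shape[1]]
    mask = (rr - center[0]) ** 2 + (cc - center[1]) ** 2 <= (diameter_px / 2) ** 2
    target[mask] = value
    return target

# Target image with the same 108x108 specification as the image to be recognized.
target = np.zeros((108, 108), dtype=np.uint8)
draw_region(target, center=(82, 54), diameter_px=13, value=1)  # assumed mouth position
print(target[82, 54], target[0, 0])  # 1 0
```

The circular region's preset value (1 here) separates the key region from the zero-valued background, which is all this step requires.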
Alternatively, in a specific embodiment of the application, the pixel values of all other areas of the target image, i.e., those not identified as non-occluded, may be set to another preset value different from that of the specific image regions. For example, if two key regions of the image to be recognized are identified as non-occluded, specific image regions are generated at the corresponding positions of the target image and their pixel values are set to a preset value, say 1, while the image areas other than the two key regions are set to a pixel value of 2 at the corresponding positions. After the setting is completed, an adjusted target image is obtained in which the two non-occluded key regions have pixel values different from everywhere else, so the target image itself states that those two key regions are non-occluded. If the face has four key regions in total but the target image shows only two as non-occluded, then the other two are occluded; the adjusted target image output in this embodiment therefore conveys the face occlusion recognition result of the image to be recognized.
Optionally, in an embodiment of the application, if multiple key regions in the image to be recognized are non-occluded, then for each such key region the pixel value assigned to the specific image region generated at the corresponding position of the target image is unique, and the size of the specific image region is a preset size.
When multiple key regions are non-occluded, the target image must distinguish and describe the non-occluded state of each of them, so the pixel value assigned to the specific image region generated for each key region is unique: a specific, distinct preset value per region. For example, suppose the face has four key regions: left eye, right eye, nose, and mouth. If the left-eye key region is non-occluded, a specific image region with pixel value 1 is generated at the left-eye position of the target image; if the right-eye key region is non-occluded, one with pixel value 2 at the right-eye position; if the nose key region is non-occluded, one with pixel value 3 at the nose position; and if the mouth key region is non-occluded, one with pixel value 5 at the mouth position. The size of each specific image region is a preset size: for example, the left-eye and right-eye regions are circular areas 10 pixels in diameter, and the nose and mouth regions are circular areas 13 pixels in diameter. Optionally, the other areas not identified as non-occluded may be set to a pixel value of 5.
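The mapping below follows the example values just given (left eye: 1, right eye: 2, nose: 3, mouth: 5; eye circles 10 pixels in diameter, nose and mouth circles 13 pixels); the circle centers are assumptions, since in practice the positions would come from the recognized key regions:

```python
import numpy as np

# Value and diameter per facial part, per the example above; centers assumed.
PART_MARKS = {
    "left_eye":  {"value": 1, "diameter": 10, "center": (40, 32)},
    "right_eye": {"value": 2, "diameter": 10, "center": (40, 76)},
    "nose":      {"value": 3, "diameter": 13, "center": (58, 54)},
    "mouth":     {"value": 5, "diameter": 13, "center": (82, 54)},
}

def adjusted_target(non_occluded, size=108):
    """Build the adjusted target image: one circular specific image
    region per non-occluded part, each with its own unique pixel value."""
    target = np.zeros((size, size), dtype=np.uint8)
    rr, cc = np.ogrid[:size, :size]
    for part in non_occluded:
        m = PART_MARKS[part]
        r0, c0 = m["center"]
        mask = (rr - r0) ** 2 + (cc - c0) ** 2 <= (m["diameter"] / 2) ** 2
        target[mask] = m["value"]
    return target

t = adjusted_target(["left_eye", "nose"])
print(np.unique(t).tolist())  # [0, 1, 3]
```

Because each part's value is unique, the set of pixel values present in the adjusted target image is enough to read off which parts are non-occluded.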
For example, as shown in fig. 2a, since the key region of the left eye part, the key region of the right eye part, the key region of the nose part, and the key region of the mouth part in the image to be identified are all in the non-occluded state, a circular specific image region with a radius of a preset value is generated on each key region in the adjusted target image as shown in fig. 2b, and the specific image regions at different key region positions have different pixel values. Thus, it can be seen from fig. 2b that the left eye, right eye, nose, and mouth in the image to be recognized are all in an unoccluded state.
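As a minimal sketch of how such a target image might be assembled, assuming NumPy and the region codes from the example above (left eye 1, right eye 2, nose 3, mouth 4, background 5); the function and variable names are illustrative, not part of the claimed method:

```python
import numpy as np

# Region codes and background value are assumptions carried over from the
# example above: left eye=1, right eye=2, nose=3, mouth=4, background=5.
REGION_VALUES = {"left_eye": 1, "right_eye": 2, "nose": 3, "mouth": 4}
BACKGROUND_VALUE = 5

def draw_region_marker(target, center, radius, value):
    """Fill a circular specific image area of preset radius with `value`."""
    h, w = target.shape
    yy, xx = np.ogrid[:h, :w]
    mask = (xx - center[0]) ** 2 + (yy - center[1]) ** 2 <= radius ** 2
    target[mask] = value
    return target

def build_target_image(shape, unoccluded):
    """`unoccluded`: dict name -> ((x, y) center, radius) for each key
    area found to be in the non-occluded state."""
    target = np.full(shape, BACKGROUND_VALUE, dtype=np.uint8)
    for name, (center, radius) in unoccluded.items():
        draw_region_marker(target, center, radius, REGION_VALUES[name])
    return target

img = build_target_image((64, 64), {"left_eye": ((20, 24), 5),
                                    "mouth": ((32, 48), 6)})
```

Every marker here has the preset size and a unique pixel value, so a later consumer of the target image can tell the regions apart by value alone.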
Optionally, referring to fig. 3, in an embodiment of the present application, one implementation of inputting the image to be recognized into the face occlusion recognition model, recognizing the pixel value of each key area in the image to be recognized by the face occlusion recognition model, generating a specific image area at the position of the target image corresponding to each key area determined, by its pixel values, to be in the non-occluded state, setting the pixel value of the specific image area to a preset value to obtain an adjusted target image, and outputting the adjusted target image, includes:
S301, inputting the image to be recognized into the face shielding recognition model, recognizing the pixel relation structures in the image to be recognized by the face shielding recognition model, and determining each key area of the image to be recognized that is in the non-shielded state.
After the image to be recognized is input into the face shielding recognition model, the face shielding recognition model recognizes pixels of the image to be recognized. When the different key areas are in the non-blocked state, the specific pixel relation structure corresponding to the key areas is displayed in the image to be identified, so that the position of each key area in the image to be identified in the non-blocked state can be determined by identifying the pixel relation structure in the image to be identified. That is, determining the critical area of each of the images to be identified in the non-occluded state refers to determining the position of the critical area of each of the images to be identified in the non-occluded state.
If the key area is in a blocked state, the specific pixel relation structure corresponding to the key area cannot be identified when the pixel relation structure in the image to be identified is identified, so that the position of the key area cannot be determined. The face occlusion recognition model learns specific pixel relation structures corresponding to the key areas through training, so that the face occlusion recognition model has the capability of determining the key areas of each image to be recognized in an unoccluded state. Wherein a particular pixel relationship structure may refer to a complete pixel relationship structure that is specific to the critical region. For example, a critical area of the mouth, a particular pixel relationship structure of the critical area of the mouth may refer to a complete pixel structure of the mouth. The specific pixel relationship structure may also refer to a partial pixel relationship structure specific to the key region. For example, the critical area of the mouth, the specific pixel relationship structure of the critical area of the mouth may refer to the pixel structure of the upper lip, or the pixel structure of the lower lip.
For example, the face has a total of four key areas, left eye, right eye, nose, and mouth. When the key area of the left eye is in a non-shielded state, the face shielding recognition model can recognize the specific complete pixel relation structure of the left eye when recognizing the pixel relation structure of the image to be recognized, and the position of the key area of the left eye is determined. And determining the position of the left eye key area, and determining that the left eye key area is in a non-shielding state. Other key areas are the same and are not described in detail herein.
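The idea of locating a key area by its characteristic pixel relation structure can be illustrated with a toy exact-match search. This is only a hedged stand-in: the real model learns a far more robust matcher during training, and the template and array values below are invented for illustration.

```python
import numpy as np

def find_pixel_structure(image, template):
    """Slide `template` over `image`; return (found, (x, y)) of the first
    exact match. A toy stand-in for recognizing a key area's specific
    pixel relation structure and determining its position."""
    ih, iw = image.shape
    th, tw = template.shape
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            if np.array_equal(image[y:y + th, x:x + tw], template):
                return True, (x, y)
    return False, None

eye = np.array([[0, 9, 0],
                [9, 9, 9]])            # hypothetical "left eye" structure
face = np.zeros((6, 8), dtype=int)
face[1:3, 2:5] = eye                   # structure visible at (2, 1)
found, pos = find_pixel_structure(face, eye)

occluded = np.zeros((6, 8), dtype=int) # structure covered -> no match
found2, _ = find_pixel_structure(occluded, eye)
```

When the structure is present the search both confirms the non-shielded state and yields the key area's position; when it is occluded, no match is found, mirroring S301.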
S302, generating a specific image area at the position of the target image corresponding to each determined key area in the non-blocked state, setting the pixel value of the specific image area as a preset value, obtaining an adjusted target image, and outputting the adjusted target image.
The size of the specific image area is a preset size. After step S301 determines the key areas in the non-occluded state, for each key area in the non-occluded state, a specific image area is generated at the position of the target image corresponding to that key area, and the pixel value of the specific image area is set to a preset value, thereby indicating that the key area is in the non-occluded state. After the pixel values are set, the adjusted target image is obtained and output. Alternatively, after setting the pixel value of the specific image area to the preset value, the pixel value of the area other than the specific image areas may be set to a pixel value different from the preset values of the key areas.
Optionally, in an embodiment of the present application, after performing step S301, the method further includes:
And calculating the duty ratio of the number of target pixel points in the key area to the pixel points of the image to be identified according to the determined key area in the non-shielded state.
The target pixel points are the pixel points in the key area, which meet the requirements of the specific part of the face pointed by the key area.
For each determined key area in the non-shielded state, the duty ratio of the number of target pixel points in the key area to the pixel points of the image to be recognized is calculated; this duty ratio indicates the proportion of the image to be recognized occupied by the non-shielded part of the key area. The pixel points meeting the requirements of the specific part pointed to by the key area may refer to pixel points in the key area that are considered to meet the non-shielded-state requirements of the specific part corresponding to the key area; for example, they may be pixel points satisfying the specific pixel relation structure corresponding to the key area. Although the key area is determined to be in the non-shielded state as a whole, some pixel points inside it may still be shielded. In order to accurately reflect the degree of shielding of the key area, the duty ratio of the number of target pixel points in the key area to the pixel points of the image to be recognized may be calculated, and the degree of shielding inside the key area may be reflected by this duty ratio. The higher the duty ratio, the higher the non-shielding degree of the key area, that is, the more pixel points in the key area are in the non-shielded state.
For example, suppose the mouth in the image to be recognized is occluded at the upper lip. When the pixel relation structure of the image is recognized, only the pixel relation structure of the unoccluded lower lip is recognized, and the position of the key area of the mouth is determined. If the mouth were not occluded at all, the ratio of the number of target pixel points to the number of pixel points of the image to be recognized would be 30%; with the upper lip occluded, the ratio calculated at this time is 15%.
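The arithmetic of the duty ratio in this example can be sketched as follows (the absolute pixel counts are hypothetical; only the 30%/15% ratios come from the example above):

```python
def duty_ratio(num_target_pixels, num_image_pixels):
    """Proportion of target pixel points among all pixels of the image."""
    return num_target_pixels / num_image_pixels

# Hypothetical counts for a 100 000-pixel image: a fully visible mouth
# would contribute 30 000 target pixels (30%); with the upper lip
# occluded, only the lower-lip pixels remain (15%).
full_ratio = duty_ratio(30_000, 100_000)
observed_ratio = duty_ratio(15_000, 100_000)
visible_fraction = observed_ratio / full_ratio  # fraction of the mouth visible
```

Here the observed duty ratio of 15% against the unoccluded 30% indicates that about half of the mouth's key area is visible.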
Wherein, when executing step S302, the method includes:
And generating, for each determined key area in the non-occluded state, a specific image area meeting the requirement of the duty ratio at the position of the target image corresponding to the key area, setting the pixel value of the specific image area to a preset value, obtaining an adjusted target image, and outputting the adjusted target image. Wherein the requirement of the duty ratio is: the defect condition of the specific image area is inversely related to the value of the duty ratio.
The defect condition of the generated specific image area relative to the key area is inversely related to the value of the duty ratio; that is, the filled extent of the specific image area is positively related to the value of the duty ratio. The smaller the duty ratio, the larger the defect of the specific image area; the larger the duty ratio, the smaller the defect. For example, suppose the mouth in the image to be recognized is occluded at the upper lip: when the pixel relation structure of the image is recognized, the complete pixel relation structure of the unoccluded lower lip is recognized, and the key area position of the mouth is determined. If the mouth were not occluded at all, the ratio of the number of target pixel points to the number of pixel points of the image to be recognized would be 30%; with the upper lip occluded, the calculated ratio is 15%. In this case an incomplete, half-circle-shaped specific image area may be generated, the pixel value of the specific image area is the preset value, and the size of the specific image area is the preset size.
Since the size of the generated specific image area is the preset size, the incomplete condition of the specific image area is inversely related to the value of the duty ratio, and therefore the occlusion degree of the corresponding key area can be reflected through the generated specific image area.
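One hedged way to realize a defect inversely related to the duty ratio is to fill only an angular fraction of the preset-size circle. The function below is an illustration under that assumption, not the embodiment's prescribed drawing routine:

```python
import numpy as np

def draw_partial_marker(target, center, radius, value, completeness):
    """Fill an angular fraction `completeness` (0..1) of a circle of the
    preset radius: the missing arc (the 'defect') grows as the duty
    ratio, and hence `completeness`, shrinks."""
    h, w = target.shape
    yy, xx = np.ogrid[:h, :w]
    inside = (xx - center[0]) ** 2 + (yy - center[1]) ** 2 <= radius ** 2
    angle = np.arctan2(yy - center[1], xx - center[0])       # in (-pi, pi]
    sector = angle <= (-np.pi + 2 * np.pi * completeness)
    target[inside & sector] = value
    return target

# Fully visible mouth -> complete circle; half visible -> half circle.
full = draw_partial_marker(np.zeros((40, 40)), (20, 20), 8, 4, 1.0)
half = draw_partial_marker(np.zeros((40, 40)), (20, 20), 8, 4, 0.5)
```

Because the circle's radius stays at the preset size and only its completeness varies, a reader of the target image can recover the occlusion degree of the key area from the marker's defect.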
Optionally, in an embodiment of the present application, if a key area is not identified as being in the non-occluded state, no specific image area is generated for it. For example, the image to be recognized shown in fig. 4a is input into the face occlusion recognition model; the key area of the mouth part in fig. 4a is occluded. After recognition by the face occlusion recognition model, the adjusted target image shown in fig. 4b is output, in which a circle (i.e. a specific image area) is generated only on the key areas of the left eye, the right eye and the nose, and no circle is generated on the key area of the mouth, so that the adjusted target image can indicate whether each key area in the image to be recognized is in the occluded state.
Optionally, referring to fig. 5, in a specific embodiment of the present application, a method for creating a face occlusion recognition model includes:
S501, constructing a training image set.
Wherein, training image set includes: a plurality of training images, and an actual face occlusion recognition result for each training image.
The training image refers to a face image which is not subjected to face occlusion recognition. The training images need to include not only the training images with the face shielded, but also the training images with the face not shielded, so that a face shielding recognition model capable of accurately completing face shielding recognition is trained through rich training image samples. Optionally, in order to improve the recognition accuracy of the trained face occlusion recognition model, the training images in the training image set may further include training images with different face portions occluded. The richer the training image is, the higher the recognition accuracy of the trained face shielding recognition model is.
The actual face occlusion recognition result of a training image is used to describe whether each actual key region of the training image is in an occluded state. The actual face occlusion recognition result may be expressed in a plurality of forms; for example, an image may be used to describe whether each key region is occluded, or a value corresponding to each key region may indicate whether that key region is occluded. For example, referring to fig. 6a, before training, all the key points in the training image shown in fig. 6a are marked, where a key point is a feature point in the key region to which it belongs. Then, for each key region in the training image shown in fig. 6a, a minimum circumscribed circle is calculated from all the key points of the key region, so that all the key points fall within the calculated circle; after the calculation, a circle with a pixel value equal to the preset value is generated at the position of the minimum circumscribed circle of each key region in the target image, thereby obtaining the actual face occlusion recognition result of the training image shown in fig. 6b. As can be seen from fig. 6b, since each key region in fig. 6a is not occluded, circles for each key region are generated on the image: the circle pixel value at the position of the key region of the left eye is 1, at the right eye 2, at the nose 3, at the mouth 4, and the pixel values of the other areas are 5. Thus fig. 6b shows that each actual key region of the training image is in an unoccluded state.
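The enclosing-circle labelling step can be sketched as follows. For simplicity this uses the centroid plus the largest centroid-to-point distance, which encloses all key points but is only an approximation of the true minimum circumscribed circle (an exact-minimum algorithm such as Welzl's, or OpenCV's `minEnclosingCircle`, would give the minimal one); the key-point coordinates are hypothetical:

```python
import numpy as np

def enclosing_circle(keypoints):
    """Circle guaranteed to contain all key points: centroid plus the
    largest centroid-to-point distance. An approximation of the minimum
    circumscribed circle used when labelling training images."""
    pts = np.asarray(keypoints, dtype=float)
    center = pts.mean(axis=0)
    radius = np.sqrt(((pts - center) ** 2).sum(axis=1)).max()
    return center, radius

# Hypothetical mouth key points as (x, y) coordinates
center, radius = enclosing_circle([(10, 30), (14, 28), (18, 30), (14, 32)])
```

The resulting circle, filled with the key region's preset pixel value, becomes that region's mark in the actual face occlusion recognition result.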
S502, each training image in the training image set is respectively input into a neural network model, and the face shielding recognition result of each training image is respectively obtained and output by the neural network model.
And respectively inputting each training image in the training image set into a neural network model, and respectively obtaining and outputting a face shielding recognition result of each training image by the neural network model. The face occlusion recognition result of the training image output by the neural network model is used for explaining whether each key area in the training image recognized by the neural network model is in an occluded state. The neural network model outputs a plurality of face occlusion recognition results, for example, the face occlusion recognition results can be output in an image form or a matrix form.
S503, continuously adjusting parameters in the neural network model according to errors between the face shielding recognition result of each training image and the actual face shielding recognition result of the training image output by the neural network model until the errors between the face shielding recognition result of each training image and the actual face shielding recognition result of the training image output by the adjusted neural network model meet preset convergence conditions, and determining the adjusted neural network model as the face shielding recognition model.
For each training image, an error exists between the face shielding recognition result of the training image output by the neural network model and the actual face shielding recognition result of the training image, so that parameters in the neural network model need to be continuously adjusted, the error between the face shielding recognition result of the training image output by the neural network model and the actual face shielding recognition result of the training image can meet a preset convergence condition, and then the adjusted neural network model is determined to be the face shielding recognition model.
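Steps S501 to S503 amount to iterating parameter updates until the error between the output and the actual result meets a preset convergence condition. The toy NumPy sketch below illustrates that loop with a linear stand-in model; the actual method trains a neural network, and all sizes, data, and the learning rate here are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for S501-S503: a linear "model" maps flattened training
# images to flattened label maps. Names and sizes are illustrative only.
X = rng.normal(size=(8, 16))        # 8 training images, 16 pixels each
W_true = rng.normal(size=(16, 16))
Y = X @ W_true                      # actual face occlusion recognition results

W = np.zeros((16, 16))              # model parameters to be adjusted
lr = 0.1
for step in range(5000):
    err = X @ W - Y                 # error between output and actual result
    if (err ** 2).mean() < 1e-8:    # preset convergence condition met
        break                       # the adjusted model is the final model
    W -= lr * (X.T @ err) / len(X)  # continuously adjust the parameters
```

When the loop exits via the convergence condition, the adjusted parameters play the role of the trained face occlusion recognition model in S503.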
S103, determining a specific part of the face in the blocked state in the image to be recognized by using the face blocking recognition result of the image to be recognized.
Because the face shielding recognition result of the image to be recognized can indicate whether each key area in the image to be recognized is in a shielded state, the face shielding recognition result of the image to be recognized can determine which key areas are in the shielded state, and the key areas are image areas of specific parts of the face, so that the specific parts of the face in the shielded state in the image to be recognized are determined.
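Under the unique-pixel-value scheme described earlier (left eye 1, right eye 2, nose 3, mouth 4, background 5 — an assumption carried over from the example), step S103 can be sketched as a simple value lookup on the result image:

```python
import numpy as np

# Region codes assumed from the earlier example.
REGION_VALUES = {"left_eye": 1, "right_eye": 2, "nose": 3, "mouth": 4}

def occluded_parts(result_map):
    """A specific part is judged occluded iff its region value never
    appears in the face occlusion recognition result image."""
    present = set(np.unique(result_map).tolist())
    return [name for name, v in REGION_VALUES.items() if v not in present]

result = np.full((8, 8), 5, dtype=np.uint8)
result[2, 2] = 1; result[2, 5] = 2; result[4, 3] = 3  # no mouth marker
```

Because each key area's marker carries a unique value, the absence of that value in the result image directly identifies the occluded face part.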
In the face shielding recognition method provided by the embodiment of the application, the image to be recognized is input into the face shielding recognition model, and the pixel value of each key area in the image to be recognized is recognized by the face shielding recognition model, so that the face shielding recognition result of the image to be recognized is obtained and output. The face shielding recognition result of the image to be recognized is used for explaining whether each key area in the image to be recognized is in a shielded state, the key area is an image area of a specific part of the face, the face shielding recognition model is obtained by training a neural network model through a plurality of training images and the actual face shielding recognition result of each training image, and therefore the specific part of the face in the shielded state in the image to be recognized can be determined through the face shielding recognition result of the image to be recognized obtained through the face shielding recognition model. Compared with the method for identifying the face area and the non-face area in the image to be identified in the prior art, the method for identifying the key area of the face by using the face shielding identification model is adopted, so that whether the face part is shielded or not can be accurately determined even when an object close to the face in color is used for shielding the face part, and the accuracy of the face part shielding identification is improved.
Referring to fig. 7, the embodiment of the application also discloses another face shielding recognition method, which specifically comprises the following steps:
s701, acquiring an image to be identified.
The principle and the execution process of step S701 are the same as those of step S101 shown in fig. 1, and will not be described here again.
S702, performing image recognition on the image to be recognized, obtaining the pixel value of each key area in the image to be recognized, and generating a face shielding recognition result of the image to be recognized, wherein the face shielding recognition result of the image to be recognized is used to describe whether each key area in the image to be recognized is in a shielded state, and a key area is an image area of a specific part of the face.
Specifically, the pixel points in the image to be recognized are recognized, and the pixel value of each key area in the image to be recognized is obtained. A pixel relation structure refers to a particular pixel structure formed by the arrangement relation between pixel points; when a key area is in the non-shielded state, the pixel relation structure of the key area has specific characteristics. Therefore, by recognizing whether the pixel relation structure characteristic of each key area exists in the image to be recognized, the pixel value of each key area can be recognized, and whether the key area is in the shielded state can be determined through factors such as the characteristics of the pixel relation structure in the key area and the proportion of non-shielded pixel values within the key area, so that the face shielding recognition result of the image to be recognized can be generated.
Optionally, the pixel value and the position of each pixel point in the image to be identified can be identified, each key region in the image to be identified can be identified, the pixel value of each key region is further obtained, whether each key region is in a blocked state or not is identified through the pixel value in each key region, and a face blocking identification result of the image to be identified is generated.
In the prior art, when face shielding recognition is performed, a face area and a non-face area in an image to be recognized are recognized, and the recognition principle is that the face area and the non-face area can be recognized according to the fact that the characteristics of the pixel values of the face area are different from the characteristics of the pixel values of the non-face area, and if the face area is recognized as the non-face area, the face area is considered to be shielded. However, this face occlusion recognition method has a disadvantage of low accuracy. When the face is shielded by using things similar to the color of the face, all face parts are mistakenly considered not to be in the shielded state, namely, the non-face area does not exist in the image, and the result that the face part of the person is shielded cannot be accurately obtained.
In the embodiment of the present application, the pixel value of each key area in the image to be recognized is recognized, and a result of whether each key area is in the shielded state is obtained; that is, whether each face part is in the shielded state is recognized, instead of only recognizing the face area and the non-face area as in the prior art. Because the recognition in the present application is performed on the pixel values of each key area, the accuracy of face shielding recognition is improved. If something similar in color to a human face is used to cover a certain face part, then when the key area of that face part is recognized, it can be found that the pixel values of the face part do not conform to the pixel relation structure of the face part in the non-shielded state, and the face part can be accurately recognized as being in the shielded state.
Optionally, when executing step S702, a neural network model with the capability of identifying whether each key region is in an occlusion state may be used to implement step S702, or some image identification algorithms may be used to identify the image of the image to be identified, obtain the pixel value of each key region in the image to be identified, and generate the face occlusion recognition result of the image to be identified. It should be noted that the specific manner of performing step S702 is numerous, including but not limited to those provided in the embodiments of the present application.
Optionally, in a specific embodiment of the present application, performing an implementation of step S702 includes:
Inputting the image to be recognized into a face shielding recognition model, recognizing the image of the image to be recognized by the face shielding recognition model, obtaining the pixel value of each key area in the image to be recognized, and generating a face shielding recognition result of the image to be recognized.
The face shielding recognition result of the image to be recognized is used for explaining whether each key area in the image to be recognized is in a shielded state, the key area is an image area of a specific part of the face, and the face shielding recognition model is obtained by training a neural network model through a plurality of training images and an actual face shielding recognition result of each training image.
It should be noted that, the image to be recognized is input to the face shielding recognition model, the image of the image to be recognized is recognized by the face shielding recognition model, the pixel value of each key area in the image to be recognized is obtained, and the execution process and the principle of the face shielding recognition result of the image to be recognized are the same as the step S102 shown in fig. 1, which will not be described herein.
Optionally, in a specific embodiment of the present application, the generating a face occlusion recognition result of the image to be recognized in step S702 is performed, where the face occlusion recognition result of the image to be recognized is used to describe whether each key area in the image to be recognized is in an occluded state, and an implementation manner includes:
Judging whether the key area is in a non-shielded state or not by using the pixel value of the key area, if so, generating a specific image area at the position of the target image corresponding to the key area, and setting the pixel value of the specific image area as a preset value to obtain the adjusted target image. And the target image and the image to be identified have the same specification.
It should be noted that the execution process and principle of determining, by using the pixel values of a key area, that the key area is in the non-shielded state, generating a specific image area at the position of the target image corresponding to the key area, and setting the pixel value of the specific image area to a preset value to obtain the adjusted target image are similar to those of the corresponding operations performed by the face shielding recognition model in step S102, and are not described herein again.
It should be noted that, when performing "judging whether the key area is in the non-shielded state by using the pixel values of the key area, and if so, generating a specific image area at the position of the target image corresponding to the key area and setting the pixel value of the specific image area to a preset value to obtain the adjusted target image", besides the face shielding recognition model adopted in the above embodiment of step S102, image recognition algorithms, image processing algorithms, and the like may also be used to implement the execution process and principle.
Optionally, in an embodiment of the present application, if there are multiple key areas in the image to be identified in the non-occluded state, for each key area in the image to be identified in the non-occluded state, the pixel value set in the specific image area generated at the position of the target image corresponding to the key area is unique, and the size of the specific image area is a preset size.
Optionally, in an embodiment of the present application, an implementation of determining whether the critical area is in the unoccluded state by using the pixel value of the critical area includes:
and identifying a pixel relation structure in the image to be identified, and determining whether each key area is in an occluded state.
It should be noted that, in the embodiment of the present application, the execution process and the principle of "identifying the pixel relationship structure in the image to be identified and determining whether each key region is in the blocked state" are similar to the step S301 shown in fig. 3, and will not be repeated here.
It should be further noted that, when "recognizing the pixel relation structure in the image to be recognized and determining whether each key area is in the shielded state" is executed, besides the face shielding recognition model in step S301 above, image recognition algorithms, image processing algorithms, and other manners may also be used to implement the execution process and principle.
Optionally, in an embodiment of the present application, determining whether the key area is in a non-occluded state by using pixel values of the key area further includes:
And calculating the duty ratio of the number of target pixel points in the key area to the pixel points of the image to be identified according to the determined key area in the non-shielded state.
The target pixel points are the pixel points in the key area, which meet the requirements of the specific part of the face pointed by the key area. Generating a specific image area at the position of the target image corresponding to the key area, setting the pixel value of the specific image area as a preset value, and obtaining an adjusted target image, wherein the method comprises the following steps: and generating a specific image area meeting the requirement of the duty ratio at the position of the target image corresponding to the key area aiming at each determined key area in the non-shielded state, setting the pixel value of the specific image area as a preset value, obtaining an adjusted target image, and outputting the adjusted target image. Wherein the requirements of the duty ratio are as follows: the defect condition of a particular image area is inversely related to the value of the duty cycle.
It should be noted that, in the embodiment of the present application, the execution process and the principle of "calculating the duty ratio of the number of target pixels in the critical area to the pixels of the image to be identified" for each determined critical area in the non-occluded state are similar to the execution principle and the process of "calculating the duty ratio of the number of target pixels in the critical area to the pixels of the image to be identified" for each determined critical area in the non-occluded state, which are also executed after step S301 is executed in the embodiment shown in fig. 3, and are not repeated herein.
In the embodiment of the present application, the execution process and principle of generating, for each determined key area in the non-shielded state, a specific image area meeting the requirement of the duty ratio at the position of the target image corresponding to the key area, setting the pixel value of the specific image area to a preset value, obtaining the adjusted target image, and outputting the adjusted target image, wherein the requirement of the duty ratio is that the defect condition of the specific image area is inversely related to the value of the duty ratio, are similar to those of step S302 and are not described herein again.
It should be further noted that, when performing "calculating, for each determined key area in the non-shielded state, the duty ratio of the number of target pixel points in the key area to the pixel points of the image to be recognized", besides the face shielding recognition model used in the embodiment shown in fig. 3, image recognition algorithms, image processing algorithms, and other manners may also be used to implement it.
S703, determining a specific part of the face in the blocked state in the image to be recognized by using the face blocking recognition result of the image to be recognized.
It should be noted that the execution process and principle of the step S703 are the same as those of the step S103 shown in fig. 1, and will not be repeated here.
In the face occlusion recognition method provided by the embodiment of the application, the image to be recognized is recognized to obtain the pixel value of each key area in the image, and a face occlusion recognition result of the image to be recognized is generated. The recognition result indicates whether each key area in the image to be recognized is in an occluded state, where a key area is the image area of a specific part of the face. Compared with the prior-art approach of merely distinguishing face areas from non-face areas in the image to be recognized, this method obtains the pixel value of each key area and hence a per-area occlusion result, so that even when a face part is occluded by an object whose color is close to that of the face, whether the face part is occluded can be determined accurately, improving the accuracy of face part occlusion recognition.
Referring to fig. 8, based on the face occlusion recognition method of the embodiment shown in fig. 1, an embodiment of the present application correspondingly discloses a face occlusion recognition device, including: a first acquisition unit 801, a first recognition unit 802, and a first determination unit 803.
A first acquiring unit 801, configured to acquire an image to be identified.
The first recognition unit 802 is configured to input an image to be recognized into a face shielding recognition model, recognize a pixel value of each key area in the image to be recognized by the face shielding recognition model, and obtain and output a face shielding recognition result of the image to be recognized. The face shielding recognition result of the image to be recognized is used for explaining whether each key area in the image to be recognized is in a shielded state, the key area is an image area of a specific part of the face, and the face shielding recognition model is obtained by training a neural network model through a plurality of training images and an actual face shielding recognition result of each training image.
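The first recognition unit's judgment can be sketched as follows. The `model` callable returning one score per key region and the 0.5 threshold are assumptions for illustration; the patent only specifies that the model outputs a per-area occlusion result.

```python
def recognize_occlusion(model, image):
    """Run the face occlusion recognition model on the image to be
    recognized and return, per key area, whether it is occluded.

    `model` is assumed to map an image to a dict of scalar scores, one
    per key area (interface is illustrative, not from the patent).
    """
    scores = model(image)  # e.g. {"left_eye": 0.9, "mouth": 0.1}
    # Lower score -> area judged occluded (threshold is a hypothetical choice).
    return {region: score < 0.5 for region, score in scores.items()}

# Stand-in model: fixed scores instead of a trained neural network.
result = recognize_occlusion(lambda img: {"left_eye": 0.9, "mouth": 0.1}, None)
# result -> {"left_eye": False, "mouth": True}
```

The returned dict plays the role of the face occlusion recognition result: one occluded/non-occluded flag per key area.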
Optionally, in a specific embodiment of the present application, the first identifying unit 802 includes:
The first recognition subunit is configured to input the image to be recognized into the face occlusion recognition model, recognize the pixel value of each key area in the image to be recognized by the face occlusion recognition model, and, if a key area is determined to be in the non-occluded state by using its pixel value, generate a specific image area at the position of the target image corresponding to that key area, set the pixel value of the specific image area to a preset value, obtain the adjusted target image, and output the adjusted target image. The target image has the same specification as the image to be recognized.
Optionally, in a specific embodiment of the present application, the first identifying subunit includes: a first determination subunit and a setting subunit.
The first determining subunit is configured to input the image to be recognized into the face occlusion recognition model, recognize the pixel relation structure in the image to be recognized by the face occlusion recognition model, and determine each key area of the image to be recognized that is in the non-occluded state.
The setting subunit is used for generating a specific image area at the position of the target image corresponding to the key area according to each determined key area in the non-blocked state, setting the pixel value of the specific image area as a preset value, obtaining an adjusted target image, and outputting the adjusted target image. The size of the specific image area is a preset size.
Optionally, in an embodiment of the present application, if there are multiple key areas in the image to be identified in the non-occluded state, for each key area in the image to be identified in the non-occluded state, the pixel value set in the specific image area generated at the position of the target image corresponding to the key area is unique, and the size of the specific image area is a preset size.
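The uniqueness requirement above can be sketched as follows. The region names, the preset values, the preset size, and the top-left-position interface are all hypothetical choices for illustration; the patent only requires one unique preset value per non-occluded key area and a preset area size.

```python
import numpy as np

# Hypothetical preset values: one unique pixel value per key area, as the
# embodiment requires when several key areas are non-occluded at once.
PRESET_VALUES = {"left_eye": 50, "right_eye": 100, "nose": 150, "mouth": 200}
REGION_SIZE = (8, 8)  # preset size of each specific image area

def build_target_image(image_shape, unoccluded_regions):
    """Create a target image with the same specification as the image to be
    recognized, then stamp a fixed-size specific image area, with a unique
    preset value, at the position of each key area judged non-occluded.

    unoccluded_regions: dict mapping area name -> (row, col) top-left
    position in the image (illustrative interface).
    """
    target = np.zeros(image_shape, dtype=np.uint8)
    h, w = REGION_SIZE
    for name, (r, c) in unoccluded_regions.items():
        target[r:r + h, c:c + w] = PRESET_VALUES[name]
    return target

adjusted = build_target_image((64, 64), {"left_eye": (16, 12), "mouth": (44, 24)})
```

Because each value is unique, downstream processing can read back from the adjusted target image which key areas were judged non-occluded.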
Optionally, in a specific embodiment of the present application, the method further includes:
The first calculating unit is configured to calculate, for each key area determined to be in the non-occluded state, the proportion of the number of target pixel points in the key area to the pixel points of the image to be recognized, where the target pixel points are the pixel points in the key area that meet the requirement of the specific face part to which the key area corresponds. The setting subunit includes: a generation subunit, configured to generate, for each key area determined to be in the non-occluded state, a specific image area meeting the proportion requirement at the position of the target image corresponding to the key area, set the pixel value of the specific image area to a preset value, obtain the adjusted target image, and output the adjusted target image. The proportion requirement is that the degree of incompleteness of the specific image area is inversely related to the value of the proportion.
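One plausible reading of the inverse relation between incompleteness and the proportion can be sketched as follows: a higher proportion of visible target pixels yields a more complete (less defective) stamped area. The row-filling mapping is an assumption, since the patent does not fix a concrete mapping.

```python
import numpy as np

def stamp_region_with_completeness(target, top_left, size, value, proportion):
    """Generate a specific image area whose degree of incompleteness is
    inversely related to the proportion of target pixels.

    Here the proportion controls the fraction of rows actually filled,
    which is one illustrative way to satisfy the requirement.
    """
    h, w = size
    r, c = top_left
    # More visible target pixels -> more rows filled -> less defective area.
    filled_rows = max(1, int(round(h * min(proportion, 1.0))))
    target[r:r + filled_rows, c:c + w] = value
    return target

target = np.zeros((32, 32), dtype=np.uint8)
stamp_region_with_completeness(target, (4, 4), (8, 8), 200, 0.5)  # half filled
```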
Optionally, in a specific embodiment of the present application, the method further includes: the device comprises a construction unit, a training and identifying unit and an adjusting unit.
And the construction unit is used for constructing the training image set. Wherein, training image set includes: a plurality of training images, and an actual face occlusion recognition result for each training image.
The training recognition unit is used for respectively inputting each training image in the training image set into the neural network model, and respectively obtaining and outputting the face shielding recognition result of each training image by the neural network model.
The adjustment unit is configured to continuously adjust the parameters of the neural network model according to the error between the face occlusion recognition result output by the neural network model for each training image and the actual face occlusion recognition result of that training image, until the error satisfies a preset convergence condition, and to determine the adjusted neural network model as the face occlusion recognition model.
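The adjustment unit's loop can be sketched generically as follows. The loss and update callables, and the toy scalar fit, are stand-ins for the neural network's loss function and optimizer step, which the patent does not specify.

```python
def train_until_converged(params, training_set, compute_error, update,
                          tol, max_iter=10_000):
    """Keep adjusting model parameters from the error between predicted and
    actual occlusion results until the preset convergence condition holds.

    compute_error / update are stand-ins for the network's loss and
    optimizer step (assumptions; the patent fixes neither).
    """
    for _ in range(max_iter):
        err = sum(compute_error(params, x, y) for x, y in training_set)
        if err < tol:  # preset convergence condition
            break
        params = update(params, training_set)
    return params

# Toy example: fit one scalar "parameter" toward the targets by gradient steps.
data = [(None, 0.2), (None, 0.4)]
err_fn = lambda p, x, y: (p - y) ** 2
upd_fn = lambda p, ts: p - 0.1 * sum(2 * (p - y) for _, y in ts)
theta = train_until_converged(0.0, data, err_fn, upd_fn, tol=0.025)
```

Once the convergence condition fires, the current parameters define the trained face occlusion recognition model.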
The first determining unit 803 is configured to determine, using a result of face occlusion recognition of an image to be recognized, a specific portion of a face in an occluded state in the image to be recognized.
The specific principle and the execution process of each unit in the face shielding recognition device disclosed in the embodiment of the present application are the same as those of the face shielding recognition method disclosed in the embodiment of the present application, and reference may be made to corresponding parts in the face shielding recognition method disclosed in the embodiment of the present application, so that no redundant description is given here.
In the face occlusion recognition device provided by the embodiment of the application, the first recognition unit 802 inputs the image to be recognized into the face occlusion recognition model, which recognizes the pixel value of each key area in the image and obtains and outputs a face occlusion recognition result. The recognition result indicates whether each key area in the image to be recognized is in an occluded state, where a key area is the image area of a specific part of the face, and the face occlusion recognition model is obtained by training a neural network model with a plurality of training images and the actual face occlusion recognition result of each training image. The first determination unit 803 can therefore use the recognition result obtained by the model to determine which specific parts of the face in the image to be recognized are occluded. Compared with the prior-art approach of merely distinguishing face areas from non-face areas in the image to be recognized, recognizing the key areas of the face with the face occlusion recognition model allows whether a face part is occluded to be determined accurately even when the part is occluded by an object whose color is close to that of the face, improving the accuracy of face part occlusion recognition.
Referring to fig. 9, based on the face occlusion recognition method of the embodiment shown in fig. 7, an embodiment of the present application correspondingly discloses a face occlusion recognition device, including: an acquisition unit 901, an identification unit 902, and a determination unit 903.
An acquisition unit 901 for acquiring an image to be recognized.
The identifying unit 902 is configured to identify an image of the image to be identified, obtain a pixel value of each key region in the image to be identified, and generate a face occlusion identifying result of the image to be identified, where the face occlusion identifying result of the image to be identified is used to indicate whether each key region in the image to be identified is in an occluded state; the key region is an image region of a specific part of the face.
Optionally, in a specific embodiment of the present application, when generating the face occlusion recognition result of the image to be recognized, the result indicating whether each key area in the image to be recognized is in an occluded state, the recognition unit 902 is configured to:
Judging whether the key area is in a non-shielded state or not by using the pixel value of the key area, if so, generating a specific image area at the position of the target image corresponding to the key area, and setting the pixel value of the specific image area as a preset value to obtain the adjusted target image. Wherein the target image has the same specification as the image to be identified.
Optionally, in an embodiment of the present application, if there are multiple key areas in the image to be identified in the non-occluded state, for each key area in the image to be identified in the non-occluded state, the pixel value set in the specific image area generated at the position of the target image corresponding to the key area is unique, and the size of the specific image area is a preset size.
Optionally, in an embodiment of the present application, when judging whether a key area is in the non-occluded state by using the pixel value of the key area, the identifying unit 902 is configured to:
and identifying a pixel relation structure in the image to be identified, and determining whether each key area is in an occluded state.
Optionally, in a specific embodiment of the present application, the method further includes:
The calculating unit is configured to calculate, for each key area determined to be in the non-occluded state, the proportion of the number of target pixel points in the key area to the pixel points of the image to be recognized, where the target pixel points are the pixel points in the key area that meet the requirement of the specific face part to which the key area corresponds.
When generating a specific image area at the position of the target image corresponding to the key area and setting its pixel value to a preset value to obtain the adjusted target image, the identifying unit 902 is configured to:
generate, for each key area determined to be in the non-occluded state, a specific image area meeting the proportion requirement at the position of the target image corresponding to the key area, set the pixel value of the specific image area to a preset value, obtain the adjusted target image, and output the adjusted target image. The proportion requirement is that the degree of incompleteness of the specific image area is inversely related to the value of the proportion.
Optionally, in a specific embodiment of the present application, the identifying unit 902 includes:
The recognition subunit is used for inputting the image to be recognized into the face shielding recognition model, recognizing the image of the image to be recognized by the face shielding recognition model, obtaining the pixel value of each key area in the image to be recognized, and generating the face shielding recognition result of the image to be recognized. The face shielding recognition result of the image to be recognized is used for explaining whether each key area in the image to be recognized is in a shielded state, the key area is an image area of a specific part of the face, and the face shielding recognition model is obtained by training a neural network model through a plurality of training images and an actual face shielding recognition result of each training image.
A determining unit 903, configured to determine a specific part of the face in the blocked state in the image to be recognized, using the result of the face blocking recognition of the image to be recognized.
The specific principle and the execution process of each unit in the face shielding recognition device disclosed in the embodiment of the present application are the same as those of the face shielding recognition method disclosed in the embodiment of the present application, and reference may be made to corresponding parts in the face shielding recognition method disclosed in the embodiment of the present application, so that no redundant description is given here.
In the face occlusion recognition device provided by the embodiment of the application, the recognition unit 902 recognizes the image to be recognized to obtain the pixel value of each key area in the image, and generates a face occlusion recognition result of the image to be recognized. The recognition result indicates whether each key area in the image to be recognized is in an occluded state, where a key area is the image area of a specific part of the face. Compared with the prior-art approach of merely distinguishing face areas from non-face areas in the image to be recognized, the device obtains the pixel value of each key area and hence a per-area occlusion result, so that even when a face part is occluded by an object whose color is close to that of the face, whether the face part is occluded can be determined accurately, improving the accuracy of face part occlusion recognition.
The embodiment of the application provides a computer readable medium, on which a computer program is stored, wherein the program, when being executed by a processor, realizes the method for recognizing the face shielding provided by the above method embodiments.
The embodiment of the application provides equipment, which comprises the following components: the face occlusion recognition method comprises one or more processors and a storage device, wherein one or more programs are stored on the storage device, and when the one or more programs are executed by the one or more processors, the one or more processors are enabled to realize the face occlusion recognition method provided by the method embodiments.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
It is further noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by "comprises a" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.

Claims (7)

1. A face occlusion recognition method, comprising:
Acquiring an image to be identified;
Identifying the image of the image to be identified, obtaining a pixel value of each key area in the image to be identified, and generating a face shielding identification result of the image to be identified; the face shielding recognition result of the image to be recognized is used for explaining whether each key area in the image to be recognized is in a shielded state or not; the key area is an image area of a specific part of the face;
Determining a specific part of the face in the blocked state in the image to be recognized by using the face blocking recognition result of the image to be recognized;
Generating a face shielding recognition result of the image to be recognized; the face shielding recognition result of the image to be recognized is used for explaining whether each key area in the image to be recognized is in a shielded state, and the method comprises the following steps:
judging whether the key area is in a non-shielded state or not by utilizing the pixel value of the key area;
if the key area is judged to be in a non-blocked state, a specific image area is generated at the position of the target image corresponding to the key area, and the pixel value of the specific image area is set as a preset value, so that an adjusted target image is obtained; wherein the target image and the image to be identified have the same specification;
The judging whether the key area is in a non-shielding state by using the pixel value of the key area comprises the following steps:
and identifying a pixel relation structure in the image to be identified, and determining whether each key area is in a blocked state.
2. The method according to claim 1, wherein if the number of key regions in the image to be identified in the non-occluded state is plural, for each key region in the image to be identified in the non-occluded state, a pixel value set in a specific image region generated at a position of the target image corresponding to the key region is unique, and the size of the specific image region is a preset size.
3. The method of claim 1, wherein the determining whether the critical area is in the non-occluded state using the pixel value of the critical area further comprises:
Calculating, for each key area determined to be in the non-occluded state, the proportion of the number of target pixel points in the key area to the pixel points of the image to be recognized; the target pixel points are the pixel points in the key area that meet the requirement of the specific face part to which the key area corresponds;
wherein the generating a specific image area at the position of the target image corresponding to the key area, and setting the pixel value of the specific image area to a preset value to obtain the adjusted target image, comprises:
generating, for each key area determined to be in the non-occluded state, a specific image area meeting the proportion requirement at the position of the target image corresponding to the key area, setting the pixel value of the specific image area to a preset value, obtaining the adjusted target image, and outputting the adjusted target image; wherein the proportion requirement is that the degree of incompleteness of the specific image area is inversely related to the value of the proportion.
4. The method according to claim 1, wherein the identifying the image of the image to be identified, obtaining a pixel value of each key region in the image to be identified, and generating a face occlusion identification result of the image to be identified, includes:
inputting the image to be recognized into a face shielding recognition model, recognizing the image of the image to be recognized by the face shielding recognition model, obtaining a pixel value of each key area in the image to be recognized, and generating a face shielding recognition result of the image to be recognized; the face shielding recognition result of the image to be recognized is used for explaining whether each key area in the image to be recognized is in a shielded state or not; the key area is an image area of a specific part of the face; the face shielding recognition model is obtained by training a neural network model through a plurality of training images and actual face shielding recognition results of each training image.
5. A face occlusion recognition device, comprising:
the acquisition unit is used for acquiring the image to be identified;
The identification unit is used for identifying the image of the image to be identified, obtaining the pixel value of each key area in the image to be identified, and generating a face shielding identification result of the image to be identified; the face shielding recognition result of the image to be recognized is used for explaining whether each key area in the image to be recognized is in a shielded state or not; the key area is an image area of a specific part of the face;
the determining unit is used for determining a specific part of the face in the blocked state in the image to be recognized by utilizing the face blocking recognition result of the image to be recognized;
the recognition unit executes a face shielding recognition result for generating the image to be recognized; the face shielding recognition result of the image to be recognized is used for explaining whether each key area in the image to be recognized is in a shielded state or not, and is used for:
Judging whether the key area is in a non-shielded state or not by utilizing the pixel value of the key area; if the key area is judged to be in a non-blocked state, a specific image area is generated at the position of the target image corresponding to the key area, and the pixel value of the specific image area is set as a preset value, so that an adjusted target image is obtained; wherein the target image and the image to be identified have the same specification;
The identification unit is used for judging whether the key area is in a non-shielding state or not by using the pixel value of the key area, and is used for:
and identifying a pixel relation structure in the image to be identified, and determining whether each key area is in a blocked state.
6. A computer readable medium, characterized in that a computer program is stored thereon, wherein the program, when executed by a processor, implements the method according to any of claims 1 to 4.
7. An apparatus, comprising:
One or more processors;
a storage device having one or more programs stored thereon;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-4.
CN202110183312.0A 2021-02-10 2021-02-10 Face shielding recognition method, device, readable medium and equipment Active CN112926424B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110183312.0A CN112926424B (en) 2021-02-10 2021-02-10 Face shielding recognition method, device, readable medium and equipment


Publications (2)

Publication Number Publication Date
CN112926424A CN112926424A (en) 2021-06-08
CN112926424B true CN112926424B (en) 2024-05-31

Family

ID=76171508

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110183312.0A Active CN112926424B (en) 2021-02-10 2021-02-10 Face shielding recognition method, device, readable medium and equipment

Country Status (1)

Country Link
CN (1) CN112926424B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20120113912A (en) * 2011-04-06 2012-10-16 한국수자원공사 The detection and recovery method of occlusion of a face image using a correlation based method
CN108319953A (en) * 2017-07-27 2018-07-24 腾讯科技(深圳)有限公司 Occlusion detection method and device, electronic equipment and the storage medium of target object
CN111027504A (en) * 2019-12-18 2020-04-17 上海眼控科技股份有限公司 Face key point detection method, device, equipment and storage medium
CN111160269A (en) * 2019-12-30 2020-05-15 广东工业大学 Face key point detection method and device
CN111428581A (en) * 2020-03-05 2020-07-17 平安科技(深圳)有限公司 Face shielding detection method and system
CN111797773A (en) * 2020-07-07 2020-10-20 广州广电卓识智能科技有限公司 Method, device and equipment for detecting occlusion of key parts of human face
CN111814569A (en) * 2020-06-12 2020-10-23 深圳禾思众成科技有限公司 Method and system for detecting human face shielding area


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Partially Occluded Face Recognition Based on Deep Learning; Wang Zhenhua et al.; Electronic Technology & Software Engineering (02); pp. 145-147 *

Also Published As

Publication number Publication date
CN112926424A (en) 2021-06-08


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant