WO2021174819A1 - Face occlusion detection method and system - Google Patents

Face occlusion detection method and system

Info

Publication number
WO2021174819A1
WO2021174819A1 (PCT/CN2020/118112)
Authority
WO
WIPO (PCT)
Prior art keywords
occlusion
face
image
area
result
Prior art date
Application number
PCT/CN2020/118112
Other languages
French (fr)
Chinese (zh)
Inventor
戴栋根
陆进
陈斌
宋晨
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Publication of WO2021174819A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 Classification, e.g. identification

Definitions

  • The embodiments of the present application relate to the field of big data technology, and in particular to a face occlusion detection method and system.
  • Face recognition technology has broad application prospects for identity verification in many fields.
  • In practical face image processing, occlusions of the face image, such as hair, masks, scarves, hats, and sunglasses, often appear, and occlusion has a great impact on face registration and face recognition.
  • The inventor realized that existing face occlusion detection methods usually recognize the face image to identify whether each area of the face is occluded; their recognition accuracy is low, which is not conducive to intelligent application of face occlusion detection.
  • The embodiments of the present application provide a face occlusion detection method, system, computer device, and computer-readable storage medium, to solve the problems of low recognition accuracy and a low degree of intelligence of face occlusion detection methods.
  • A face occlusion detection method includes:
  • acquiring an image to be detected;
  • acquiring a face region image from the image to be detected;
  • recognizing the face region image through a face occlusion detection branch model and performing pixel processing to generate a first occlusion result;
  • when the first occlusion result indicates that the face region image is occluded, classifying the face region image through a face occlusion classification branch model and a preset occlusion label to generate a classification result; and
  • generating a final occlusion result according to the first occlusion result and the classification result.
  • An embodiment of the present application also provides a face occlusion detection system, including:
  • an acquisition module, configured to acquire an image to be detected;
  • an extraction module, configured to acquire a face region image from the image to be detected;
  • a recognition module, configured to recognize the face region image through a face occlusion detection branch model and perform pixel processing to generate a first occlusion result;
  • a classification module, configured to classify the face region image through a face occlusion classification branch model and a preset occlusion label when the first occlusion result indicates that the face region image is occluded, to generate a classification result; and
  • a generating result module, configured to generate a final occlusion result according to the first occlusion result and the classification result.
  • An embodiment of the present application further provides a computer device. The computer device includes a memory, a processor, and a computer program stored on the memory and runnable on the processor, and when the processor executes the computer program, the following steps are performed:
  • acquiring an image to be detected;
  • acquiring a face region image from the image to be detected;
  • recognizing the face region image through a face occlusion detection branch model and performing pixel processing to generate a first occlusion result;
  • when the first occlusion result indicates that the face region image is occluded, classifying the face region image through a face occlusion classification branch model and a preset occlusion label to generate a classification result; and
  • generating a final occlusion result according to the first occlusion result and the classification result.
  • The embodiments of the present application also provide a computer-readable storage medium in which a computer program is stored, and the computer program can be executed by at least one processor, so that the at least one processor performs the following steps:
  • acquiring an image to be detected;
  • acquiring a face region image from the image to be detected;
  • recognizing the face region image through a face occlusion detection branch model and performing pixel processing to generate a first occlusion result;
  • when the first occlusion result indicates that the face region image is occluded, classifying the face region image through a face occlusion classification branch model and a preset occlusion label to generate a classification result; and
  • generating a final occlusion result according to the first occlusion result and the classification result.
  • The face occlusion detection method, system, computer device, and computer-readable storage medium provided by the embodiments of the present application obtain the first occlusion result for the occluded face area by recognizing the face region image, and classify the face region image with preset occlusion labels to obtain a classification result giving the type of occluder on the face; the first occlusion result and the occluder type are then combined to obtain the final occlusion result. This improves the recognition accuracy of face occlusion detection and identifies the occluder type, which facilitates intelligent applications of face occlusion detection.
  • FIG. 1 is a flow chart of the steps of the method for detecting face occlusion in the first embodiment of this application;
  • FIG. 2 is a schematic diagram of the process of obtaining a face region image in the method for detecting face occlusion according to the first embodiment of the application;
  • FIG. 3 is a schematic diagram of the flow of generating a first occlusion result in the face occlusion detection method according to Embodiment 1 of the application;
  • FIG. 4 is a schematic diagram of the process of generating classification results of the face occlusion detection method according to the first embodiment of the application;
  • FIG. 5 is a schematic diagram of the training process of the face occlusion detection branch model of the face occlusion detection method according to the first embodiment of the application;
  • FIG. 6 is a schematic diagram of the training process of the face occlusion classification branch model of the face occlusion detection method according to the first embodiment of the application;
  • FIG. 7 is a schematic diagram of program modules of the face occlusion detection system according to the second embodiment of the application.
  • FIG. 8 is a schematic diagram of the hardware structure of the computer device according to the third embodiment of the application.
  • FIG. 1 shows a flowchart of steps of a method for detecting occlusion of a face according to an embodiment of the present application. It can be understood that the flowchart in this method embodiment is not used to limit the order of execution of the steps.
  • The following provides an exemplary description with a computer device as the execution subject, specifically as follows:
  • Step S100: An image to be detected is acquired.
  • the image to be detected is acquired in real time by the camera acquisition unit, and the image to be detected includes an image of a face area and a background area.
  • Step S200 Obtain a face area image from the image to be detected.
  • the step S200 may further include:
  • Step S201 Obtain the coordinate positions of multiple facial feature points in the image to be detected.
  • the multiple facial feature points are 68 facial feature points.
  • Step S202 Extract a face region image from the image to be detected according to the acquired coordinate positions of multiple facial feature points, the face region image including multiple facial features regions.
  • Specifically, the multiple facial feature regions include the eyebrow area, eye area, nose area, mouth area, cheek area, forehead area, tooth area, and the like.
  • For example, the embodiment of the present application performs calibration of the 68 facial feature points through the landmark algorithm (a facial feature point extraction algorithm): the landmark algorithm calls a predictor to obtain the coordinate positions of the 68 facial feature points, draws a circle at each feature point according to its coordinate position, and marks the serial numbers of the 68 feature points in the calibration order.
  • The shapes of the facial features are then determined according to the coordinate positions of the multiple facial feature points, so as to obtain the multiple facial feature regions in the image to be detected.
  • Each facial feature region is marked and delimited by a rectangular frame.
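  • The patent names the landmark algorithm only generically. As one concrete illustration, the following is a minimal sketch using the dlib library's 68-point shape predictor; the model file path, the image name, and the mouth grouping (dlib points 48-67) are assumptions for illustration, not part of the patent:

```python
# Sketch: 68-point landmark calibration and per-region rectangles with dlib.
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # assumed model file

img = cv2.imread("to_detect.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

for face in detector(gray):
    shape = predictor(gray, face)
    points = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
    # Draw a circle and a serial number at each feature point, as the text describes.
    for idx, (x, y) in enumerate(points):
        cv2.circle(img, (x, y), 2, (0, 255, 0), -1)
        cv2.putText(img, str(idx), (x, y), cv2.FONT_HERSHEY_SIMPLEX, 0.3, (0, 0, 255), 1)
    # Mark one facial feature region with a bounding rectangle (here the mouth).
    mouth = points[48:68]
    x0, y0 = min(p[0] for p in mouth), min(p[1] for p in mouth)
    x1, y1 = max(p[0] for p in mouth), max(p[1] for p in mouth)
    cv2.rectangle(img, (x0, y0), (x1, y1), (255, 0, 0), 1)
```

  • Any landmark detector that produces the same 68-point layout could be substituted for dlib here.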
  • Step S300: The face region image is recognized by the face occlusion detection branch model, and pixel processing is performed to generate a first occlusion result.
  • the face area image may be recognized by the face occlusion detection branch model of the face occlusion model to generate the first occlusion result.
  • the face occlusion detection branch model includes a plurality of first convolutional layers, second convolutional layers, first fully connected layers, and the like.
  • the step S300 may further include:
  • Step S301: Perform convolution on the face region image through the face occlusion detection branch model to output multiple convolution feature maps, where the multiple convolution feature maps include multiple first convolution feature maps and multiple second convolution feature maps.
  • Specifically, the image to be detected carrying the face region image is input into the face occlusion detection branch model, convolved through the multiple first convolutional layers and the second convolutional layer, and the multiple first convolution feature maps are output through the first fully connected layer.
  • For example, the image to be detected carrying the face region image is input into the face occlusion detection branch model and convolved through the multiple first convolutional layers to output multiple second convolution feature maps; the multiple second convolution feature maps are then input into the second convolutional layer and the first fully connected layer for further convolution, so as to output the multiple first convolution feature maps.
  • The second convolution feature maps contain occluder features, and the height and width of both the second convolution feature maps and the first convolution feature maps are consistent with the height and width of the image to be detected.
  • Specifically, the occluder features include data such as the shape, position, and pixels of the occluder.
  • The convolution features contained in the first convolution feature maps are thicker convolution features formed by feature fusion and concatenation.
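  • The embodiment does not disclose layer counts, channel widths, or kernel sizes. The following PyTorch-style sketch only illustrates the described data flow, in which early convolutional layers produce the occluder-bearing "second" feature maps and a later stage concatenates them into the "thicker" first feature maps; all sizes are assumptions, and a 1x1 convolution head stands in for the first fully connected layer:

```python
import torch
import torch.nn as nn

class DetectionBranch(nn.Module):
    """Sketch of the detection branch: the first convolutional layers yield the
    occluder-bearing "second" feature maps; fusion by concatenation plus a
    further convolution yields the "thicker" first feature maps."""
    def __init__(self):
        super().__init__()
        self.conv1a = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.conv1b = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        # "Second convolutional layer": refines the concatenated (fused) features.
        self.conv2 = nn.Conv2d(32 + 16, 32, 3, padding=1)
        self.head = nn.Conv2d(32, 1, 1)  # per-pixel occlusion score in [0, 1]

    def forward(self, x):
        f1 = self.conv1a(x)
        second_maps = self.conv1b(f1)               # "second convolution feature maps"
        fused = torch.cat([second_maps, f1], 1)     # feature fusion by concatenation
        first_maps = torch.relu(self.conv2(fused))  # "first convolution feature maps"
        # padding=1 everywhere keeps height and width equal to the input image.
        return torch.sigmoid(self.head(first_maps)), second_maps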
  • Step S302 Combine and enlarge a plurality of first convolution feature maps, and adjust the pixel values of each facial feature region and the background region according to a preset rule to obtain a predicted face image.
  • the predicted face image includes an occluded area.
  • Specifically, after the multiple first convolution feature maps are combined, the edges are expanded and the pixel values of each facial feature region and the background region are adjusted according to a preset rule to obtain the predicted face image.
  • The preset rule is that the pixel value of the unoccluded part of the predicted face image is set to 0, and the pixel values of the occluded part and the background part of the predicted face image are set to 1; that is, the unoccluded part of the predicted face image appears white, while the background part and the occluded part both appear black.
  • Step S303: Calculate the occlusion ratio between each occlusion area and the facial feature region in which it is located.
  • Specifically, an occlusion area appears as the black area inside a rectangular frame, namely the black part of each rectangular frame in the predicted face image obtained after pixel processing.
  • Specifically, for each occlusion area, the ratio of the black area in its rectangular frame to the whole rectangular frame is calculated, giving the occlusion ratio of the corresponding occlusion area.
  • Step S304: Compare the occlusion ratio with the preset occlusion threshold corresponding to the facial feature region to generate the first occlusion result.
  • In an exemplary embodiment, when the occlusion ratio is greater than the occlusion threshold, the first occlusion result indicates that the facial feature region is occluded; when the occlusion ratio is less than the occlusion threshold, the first occlusion result indicates that the facial feature region is not occluded.
  • The preset occlusion thresholds corresponding to the facial feature regions may be set to the same value or to different values.
  • Specifically, the preset occlusion threshold of each facial feature region can be adjusted dynamically according to different scene requirements. For example, when the scene requirement is strict and only a small occlusion ratio is tolerated, a smaller preset occlusion threshold can be set; when the scene requirement is loose, a higher preset occlusion threshold can be set.
  • Further, the occlusion threshold can be set manually according to different scene requirements, for example to 95%, 90%, or 85%.
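  • As a concrete reading of steps S302 to S304, the following sketch computes the per-region occlusion ratio from the pixel-processed predicted face image and compares it against per-region thresholds; the rectangles and threshold values are hypothetical:

```python
import numpy as np

def first_occlusion_result(pred_mask, regions, thresholds):
    """pred_mask: pixel-processed predicted face image per the preset rule
    (1 = occluded/background, shown black; 0 = unoccluded, shown white).
    regions: feature name -> rectangle (x0, y0, x1, y1).
    thresholds: feature name -> preset occlusion threshold."""
    result = {}
    for name, (x0, y0, x1, y1) in regions.items():
        box = pred_mask[y0:y1, x0:x1]
        ratio = float(np.mean(box == 1))          # share of black pixels in the frame
        result[name] = ratio > thresholds[name]   # True: the region counts as occluded
    return result

# Hypothetical usage with per-region thresholds tuned to scene requirements.
regions = {"eyes": (40, 60, 120, 90), "mouth": (60, 130, 110, 160)}
thresholds = {"eyes": 0.10, "mouth": 0.15}
mask = np.zeros((200, 160), dtype=np.uint8)
print(first_occlusion_result(mask, regions, thresholds))
```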
  • Step S400 When the first occlusion result indicates that the face region image is occluded, classify the face region image through the face occlusion classification branch model and a preset occlusion label to generate a classification result.
  • the embodiment of the present application may classify the face region image by using the preset occlusion label in the face occlusion classification branch model of the face occlusion model to generate the classification result.
  • the preset occlusion tags include preset occlusion tags such as hats, bangs, sunglasses, eyes, beards, masks, and scarves.
  • the face occlusion classification branch model includes several hidden layers, and the hidden layers are several third convolutional layers, second fully connected layers, and classification layers, and each layer is connected to each other.
  • the step S400 may further include:
  • Step S401 extracting features of occlusion objects from multiple second convolution feature maps through the face occlusion classification branch model.
  • a plurality of second convolution feature maps carrying features of the occluder is obtained from the last first convolution layer, and the features of the occluder are extracted from the plurality of second convolution feature maps.
  • Step S402 matching the features of the obstruction object with a preset obstruction label to generate a classification result.
  • In an exemplary embodiment, when the occluder feature matches any preset occlusion label, the classification result indicates that the occluder feature corresponds to that matching occlusion label; when the occluder feature does not match any preset occlusion label, the occluder feature is marked to obtain a new occluder label, and the classification result indicates that the occluder feature corresponds to the new occluder label.
  • Further, the new occluder label is saved in a preset database.
  • In subsequent model maintenance, when a new occlusion label is detected, the new occluder corresponding to it is identified so as to generate a new occluder label and save it in the preset database, and the face occlusion classification branch model is optimized according to the generated new occluder label.
  • For example, occluder features are extracted from the multiple second convolution feature maps and matched against the preset occlusion labels; when an occluder feature matches "hat", the generated classification result indicates that the occluder feature corresponds to a hat.
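  • The hidden-layer sizes of the classification branch are not specified. The sketch below assumes a small head of third convolutional layers, a second fully connected layer, and a softmax classification layer over the preset labels, with a hypothetical confidence floor standing in for the "no label matched, mark as new occluder" rule:

```python
import torch
import torch.nn as nn

OCCLUSION_LABELS = ["hat", "bangs", "sunglasses", "eyes", "beard", "mask", "scarf"]

class ClassificationBranch(nn.Module):
    """Sketch of the face occlusion classification branch: third convolutional
    layers, a second fully connected layer, and a classification layer.
    Channel and feature sizes are assumptions."""
    def __init__(self, in_channels=32, num_labels=len(OCCLUSION_LABELS)):
        super().__init__()
        self.conv3 = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc2 = nn.Linear(64, 32)
        self.classifier = nn.Linear(32, num_labels)

    def forward(self, second_maps):
        h = self.conv3(second_maps).flatten(1)
        return self.classifier(torch.relu(self.fc2(h)))

def classify(logits, floor=0.5):
    """Hypothetical matching rule: below the confidence floor, no preset label
    is considered matched, so the sample is marked as a new occluder."""
    probs = torch.softmax(logits, dim=1)
    conf, idx = probs.max(dim=1)
    return [OCCLUSION_LABELS[i] if c >= floor else "new_occluder"
            for c, i in zip(conf.tolist(), idx.tolist())]
```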
  • Step S500 Generate a final occlusion result according to the first occlusion result and the classification result.
  • the final occlusion result includes the first occlusion result used to determine whether the face area is occluded and the classification result for determining the classification type of the occluder.
  • the face occlusion detection method further includes: generating a feedback instruction according to the final occlusion result, the feedback instruction being used to indicate the occlusion area included in the user's face and the occlusion object located in the occlusion area .
  • For example, when the final occlusion result indicates that the user's eyes are blocked and the occluder is sunglasses, the computer device generates a feedback instruction according to the final occlusion result.
  • The feedback instruction is used to indicate that the user's eyes are blocked by sunglasses and face recognition cannot be performed, so that the user can respond to the feedback instruction, for example by taking off the sunglasses, so as to facilitate effective face recognition.
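  • A minimal sketch of this feedback step, mapping a final occlusion result to user-facing instructions; the result format and message wording are assumptions:

```python
def feedback_instruction(final_result):
    """final_result: {region: (occluded?, occluder_label)} -- assumed format."""
    messages = []
    for region, (occluded, label) in final_result.items():
        if occluded:
            messages.append(
                f"Your {region} are blocked by a {label}; "
                f"please remove it so face recognition can proceed."
            )
    return messages

print(feedback_instruction({"eyes": (True, "sunglasses")}))
```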
  • the embodiment of the present application also includes the training process of the face occlusion detection branch model of the face occlusion model. Please refer to FIG. 5, which is specifically as follows:
  • Step S600: A plurality of sample face images are preprocessed to obtain comparison sample face images.
  • Specifically, multiple sample face images are acquired from a preset database, and the multiple sample face images include multiple occluded face sample images and unoccluded face sample images.
  • Each sample face image is labeled to locate its occluded features, unoccluded features, and background, and the pixel values of the occluded features, unoccluded features, and background are set respectively to output a comparison sample face image.
  • Specifically, the pixel value of the parts determined to be occluded features is set to 1, and the pixel value of the parts determined to be unoccluded features or background is set to 0.
  • The comparison sample face image is then output; that is, in the comparison sample face image, the unoccluded feature parts appear white, and the occluded feature parts and the background appear black.
  • The sample face image may also include an area for which it cannot be determined whether it is occluded, and the pixel value of that area is set to 225.
  • Such undeterminable areas are located and collected through a preset fuzzy-area collection; to maintain accuracy, they can be excluded when the subsequent intersection-over-union ratio is calculated.
  • The comparison sample face image is a black-and-white image.
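  • One way to realize this labeling is to rasterize annotated regions into a mask using the pixel codes from the text (0 for unoccluded/background, 1 for occluded, 225 for undeterminable); the box-based annotation format below is an assumption:

```python
import numpy as np

UNOCCLUDED, OCCLUDED, UNKNOWN = 0, 1, 225  # pixel codes used in the text

def build_comparison_mask(height, width, occluded_boxes, unknown_boxes):
    """Boxes are (x0, y0, x1, y1); everything not boxed is unoccluded/background."""
    mask = np.full((height, width), UNOCCLUDED, dtype=np.uint8)
    for x0, y0, x1, y1 in occluded_boxes:
        mask[y0:y1, x0:x1] = OCCLUDED
    for x0, y0, x1, y1 in unknown_boxes:
        mask[y0:y1, x0:x1] = UNKNOWN  # excluded later from the IoU calculation
    return mask
```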
  • Step S601: Input the plurality of sample face images into the first convolutional layers, the second convolutional layer, and the first fully connected layer of the deep neural network model to perform convolution and output multiple convolution feature maps, where the multiple convolution feature maps include multiple first convolution feature maps and multiple second convolution feature maps.
  • Step S602: The multiple first convolution feature maps are combined and enlarged to output a predicted sample face image.
  • The predicted sample face image is likewise presented as a black-and-white image.
  • Step S603: Match the predicted sample face image with the comparison sample face image and calculate the intersection-over-union ratio, which is the ratio of the intersection to the union of the predicted sample face image and the comparison sample face image.
  • Specifically, the intersection-over-union ratio refers to the ratio of the intersection to the union of each corresponding occluded area and unoccluded/background area of the predicted sample face image and the comparison sample face image.
  • Step S604: When the intersection-over-union ratio is less than a preset comparison threshold, the deep neural network model is iterated through the first loss function to adjust the intersection-over-union ratio, so as to obtain an optimized face occlusion detection branch model.
  • A smaller intersection-over-union ratio below the preset comparison threshold means the predicted sample face image is less similar to the comparison sample face image, and the model still needs further training and optimization.
  • Specifically, the first loss function is a U-Net loss function.
  • The U-Net loss function may be a loss function with boundary weights, the purpose of which is to give higher weights to pixels close to boundary points in the sample face image.
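  • The intersection-over-union ratio is IoU = |P ∩ G| / |P ∪ G| for a predicted mask P and comparison mask G. A sketch restricted to determinable pixels, excluding the 225-coded areas as the text suggests:

```python
import numpy as np

def intersection_over_union(pred, target, unknown=225):
    """IoU between predicted and comparison masks (1 = occluded,
    0 = unoccluded/background), ignoring undeterminable pixels (225)."""
    valid = target != unknown
    p = (pred == 1) & valid
    t = (target == 1) & valid
    union = np.logical_or(p, t).sum()
    if union == 0:
        return 1.0  # both masks empty on the valid area
    return np.logical_and(p, t).sum() / union
```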
  • the embodiment of the present application also includes the training process of the face occlusion classification branch model of the face occlusion model. Please refer to FIG. 6, and the details are as follows:
  • Step S610 Input a plurality of second convolution feature maps into several hidden layers of the deep neural network model to extract the features of the sample occluder.
  • the obtained multiple second convolution feature maps containing the features of the occluder are input to the third convolutional layer among the several hidden layers of the deep neural network model to perform a further convolution operation to extract the features of the sample occluder.
  • Step S611 Input the feature of the sample occluder into the classification layer to identify and classify, so as to generate a first classification conclusion.
  • Step S612 The first classification conclusion is compared with the sample classification conclusion of the sample face image to calculate a second loss value.
  • Step S613: Iterate the deep neural network model using the second loss value and the second loss function, reducing the loss value and updating the model parameters of the face occlusion classification branch model, so as to obtain an optimized face occlusion classification branch model.
  • the second loss function may be a cross-entropy loss function.
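  • A minimal sketch of one training iteration of the classification branch with a cross-entropy second loss; it reuses the ClassificationBranch sketch above, and the batch shapes, optimizer, and learning rate are assumptions:

```python
import torch
import torch.nn as nn

model = ClassificationBranch()              # defined in the earlier sketch
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()           # the "second loss function"

second_maps = torch.randn(8, 32, 56, 56)    # features from the detection branch
sample_labels = torch.randint(0, len(OCCLUSION_LABELS), (8,))

logits = model(second_maps)
loss = criterion(logits, sample_labels)     # the "second loss value"
optimizer.zero_grad()
loss.backward()
optimizer.step()                            # update classification-branch parameters
```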
  • the optimized face occlusion detection branch model and the face occlusion classification branch model are combined into an optimized face detection model.
  • the preset occluder label of the face occlusion classification branch model may also be updated regularly according to the classification result output by the face occlusion classification branch model.
  • The first occlusion result of the face occlusion detection branch model is an important parameter affecting the final occlusion result of the face detection model.
  • The first occlusion result for the occluded face area is obtained by recognizing the face region image, and the face region image is classified using the preset occlusion labels to obtain a classification result giving the type of occluder on the face.
  • The final occlusion result is obtained by combining the first occlusion result and the occluder type, which improves the recognition accuracy of face occlusion detection and identifies the occluder type, facilitating intelligent applications of face occlusion detection.
  • FIG. 7 shows a schematic diagram of the program modules of the face occlusion detection system according to the second embodiment of the present application.
  • The face occlusion detection system 20 may include or be divided into one or more program modules, which are stored in a storage medium and executed by one or more processors to complete this application and realize the above face occlusion detection method.
  • the program module referred to in the embodiments of the present application refers to a series of computer program instruction segments capable of completing specific functions, and is more suitable for describing the execution process of the face occlusion detection system 20 in the storage medium than the program itself. The following description will specifically introduce the functions of each program module in this embodiment:
  • the acquisition module 700 is used to acquire an image to be detected.
  • the extraction module 710 is configured to obtain a face region image from the image to be detected.
  • the extraction module 710 is further configured to: obtain the coordinate positions of multiple facial feature points in the image to be detected; extract a face region image from the image to be detected according to the coordinate positions of the multiple facial feature points obtained ,
  • the face region image includes a plurality of facial features regions.
  • the multiple facial feature points are 68 facial feature points; the multiple facial features regions include eyebrows, eyes, nose, mouth, cheeks, forehead, teeth, and so on.
  • the recognition module 720 is configured to recognize the face region image to generate a first occlusion result.
  • the recognition module 720 is further configured to: perform convolution on the face region image to output multiple convolution feature maps, and the multiple convolution feature maps include multiple first convolution feature maps; Combine and enlarge the multiple first convolution feature maps to obtain a predicted face image, where the predicted face image includes an occlusion area; calculate the occlusion ratio of the occlusion area and the five sense organs where the occlusion area is located; The occlusion ratio is compared with a preset occlusion threshold corresponding to the facial features area to generate a first occlusion result.
  • Specifically, after the multiple first convolution feature maps are combined, the edges are expanded to obtain the predicted face image; the pixel value of the unoccluded part of the predicted face image is set to 0, and the pixel values of the occluded part and the background part are set to 1, so that the unoccluded part appears white while the background part and the occluded part both appear black.
  • the preset occlusion threshold corresponding to each facial features area may be set to the same preset occlusion threshold, or may be set to different preset occlusion thresholds.
  • the setting of the preset occlusion threshold corresponding to each facial features area can be dynamically adjusted according to different scene requirements. For example, when the scene requirements are high and the ratio of the required occlusion is small, a smaller preset occlusion threshold can be set; when the scene requirements are low, a higher preset occlusion threshold can be set.
  • the classification module 730 is configured to classify the face region image according to the preset occlusion label to generate a classification result.
  • the classification module 730 is further configured to: extract features of occlusion objects from a plurality of second convolution feature maps; and match the features of the occlusion objects with a preset occlusion label to generate a classification result.
  • the generating result module 740 is configured to generate a final occlusion result according to the first occlusion result and the classification result.
  • the computer device 2 is a device that can automatically perform numerical calculation and/or information processing in accordance with pre-set or stored instructions.
  • the computer device 2 may be a rack server, a blade server, a tower server, or a cabinet server (including an independent server or a server cluster composed of multiple servers).
  • The computer device 2 at least includes, but is not limited to, a memory 21, a processor 22, a network interface 23, and the face occlusion detection system 20, which can communicate with each other through a system bus, wherein:
  • The memory 21 includes at least one type of computer-readable storage medium, which includes flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disks, optical disks, and the like.
  • the memory 21 may be an internal storage unit of the computer device 2, for example, the hard disk or memory of the computer device 2.
  • the memory 21 may also be an external storage device of the computer device 2, such as a plug-in hard disk, a smart memory card (Smart Media Card, SMC), and a secure digital (Secure Digital, SD) card, flash card (Flash Card), etc.
  • the memory 21 may also include both the internal storage unit of the computer device 2 and its external storage device.
  • the memory 21 is generally used to store an operating system and various application software installed in the computer device 2, for example, the program code of the face occlusion detection system 20 in the second embodiment.
  • the memory 21 can also be used to temporarily store various types of data that have been output or will be output.
  • In some embodiments, the processor 22 may be a central processing unit (CPU), controller, microcontroller, microprocessor, or other data processing chip.
  • the processor 22 is generally used to control the overall operation of the computer device 2.
  • the processor 22 is used to run the program code or process data stored in the memory 21, for example, to run the face occlusion detection system 20, so as to implement the face occlusion detection method in the embodiment of the present application.
  • the network interface 23 may include a wireless network interface or a wired network interface, and the network interface 23 is generally used to establish a communication connection between the computer device 2 and other electronic devices.
  • the network interface 23 is used to connect the computer device 2 with an external terminal through a network, and establish a data transmission channel and a communication connection between the computer device 2 and the external terminal.
  • The network may be an intranet, the Internet, a Global System for Mobile communications (GSM) network, Wideband Code Division Multiple Access (WCDMA), a 4G network, a 5G network, Bluetooth, Wi-Fi, or another wireless or wired network.
  • FIG. 8 only shows the computer device 2 with components 20-23, but it should be understood that it is not required to implement all the components shown, and more or fewer components may be implemented instead.
  • The face occlusion detection system 20 stored in the memory 21 can also be divided into one or more program modules, which are stored in the memory 21 and executed by one or more processors (in this embodiment, the processor 22) to complete this application.
  • FIG. 7 shows a schematic diagram of the program modules of the second embodiment of the face occlusion detection system 20.
  • the face occlusion detection system 20 can be divided into a collection module 700, an extraction module 710, The recognition module 720, the classification module 730, and the generation result module 740.
  • the program module referred to in the present application refers to a series of computer program instruction segments capable of completing specific functions, and is more suitable than a program to describe the execution process of the face occlusion detection system 20 in the computer device 2.
  • the specific functions of the program modules 700-740 have been described in detail in the second embodiment, and will not be repeated here.
  • This embodiment also provides a computer-readable storage medium, such as flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disks, optical disks, servers, application stores, and the like, on which a computer program is stored; the corresponding function is realized when the program is executed by a processor.
  • the computer-readable storage medium of this embodiment is used to store the face occlusion detection system 20, and when executed by a processor, the face occlusion detection method of the embodiment of the present application is implemented.
  • the computer-readable storage medium may be non-volatile or volatile.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present application relate to the field of artificial intelligence, and provide a face occlusion detection method, comprising: obtaining an image to be detected; obtaining a face region image from the image to be detected; recognizing the face region image by means of a face occlusion detection branch model, and performing pixel processing to generate a first occlusion result; when the first occlusion result indicates that the face region image is occluded, classifying the face region image by means of a face occlusion classification branch model and a preset occlusion tag to generate a classification result; and generating a final occlusion result according to the first occlusion result and the classification result. The embodiments of the present application further provide a face occlusion detection system. According to the embodiments of the present application, the recognition precision of face occlusion detection is improved, the classification type of an occlusion object is recognized, and intelligent application of face occlusion detection is facilitated.

Description

Face occlusion detection method and system

This application claims priority to the Chinese patent application filed with the Chinese Patent Office on March 5, 2020, with application number 202010146004.6 and entitled "Face occlusion detection method and system", the entire contents of which are incorporated into this application by reference.

Technical Field

The embodiments of the present application relate to the field of big data technology, and in particular to a face occlusion detection method and system.

Background

As a hot research topic in the field of pattern recognition, face recognition has received extensive attention, and face recognition technology has broad application prospects for identity verification in many fields. In practical face image processing, occlusions of the face image, such as hair, masks, scarves, hats, and sunglasses, often appear, and occlusion has a great impact on face registration and face recognition.

The inventor realized that existing face occlusion detection methods usually recognize the face image to identify whether each area of the face is occluded; their recognition accuracy is low, which is not conducive to intelligent application of face occlusion detection.

Summary

In view of this, the embodiments of the present application provide a face occlusion detection method, system, computer device, and computer-readable storage medium, to solve the problems of low recognition accuracy and a low degree of intelligence of face occlusion detection methods.

The embodiments of this application solve the above technical problems through the following technical solutions:
A face occlusion detection method includes:

acquiring an image to be detected;

acquiring a face region image from the image to be detected;

recognizing the face region image through a face occlusion detection branch model and performing pixel processing to generate a first occlusion result;

when the first occlusion result indicates that the face region image is occluded, classifying the face region image through a face occlusion classification branch model and a preset occlusion label to generate a classification result; and

generating a final occlusion result according to the first occlusion result and the classification result.
To achieve the above objective, an embodiment of the present application also provides a face occlusion detection system, including:

an acquisition module, configured to acquire an image to be detected;

an extraction module, configured to acquire a face region image from the image to be detected;

a recognition module, configured to recognize the face region image through a face occlusion detection branch model and perform pixel processing to generate a first occlusion result;

a classification module, configured to classify the face region image through a face occlusion classification branch model and a preset occlusion label when the first occlusion result indicates that the face region image is occluded, to generate a classification result; and

a generating result module, configured to generate a final occlusion result according to the first occlusion result and the classification result.
To achieve the above objective, an embodiment of the present application further provides a computer device. The computer device includes a memory, a processor, and a computer program stored on the memory and runnable on the processor, and when the processor executes the computer program, the following steps are performed:

acquiring an image to be detected;

acquiring a face region image from the image to be detected;

recognizing the face region image through a face occlusion detection branch model and performing pixel processing to generate a first occlusion result;

when the first occlusion result indicates that the face region image is occluded, classifying the face region image through a face occlusion classification branch model and a preset occlusion label to generate a classification result; and

generating a final occlusion result according to the first occlusion result and the classification result.
To achieve the above objective, the embodiments of the present application also provide a computer-readable storage medium in which a computer program is stored, and the computer program can be executed by at least one processor, so that the at least one processor performs the following steps:

acquiring an image to be detected;

acquiring a face region image from the image to be detected;

recognizing the face region image through a face occlusion detection branch model and performing pixel processing to generate a first occlusion result;

when the first occlusion result indicates that the face region image is occluded, classifying the face region image through a face occlusion classification branch model and a preset occlusion label to generate a classification result; and

generating a final occlusion result according to the first occlusion result and the classification result.
The face occlusion detection method, system, computer device, and computer-readable storage medium provided by the embodiments of the present application obtain the first occlusion result for the occluded face area by recognizing the face region image, and classify the face region image with preset occlusion labels to obtain a classification result giving the type of occluder on the face; the first occlusion result and the occluder type are then combined to obtain the final occlusion result. This improves the recognition accuracy of face occlusion detection and identifies the occluder type, which facilitates intelligent applications of face occlusion detection.

The following describes the application in detail with reference to the accompanying drawings and specific embodiments, which are not intended to limit the application.
Brief Description of the Drawings

FIG. 1 is a flowchart of the steps of the face occlusion detection method according to the first embodiment of this application;

FIG. 2 is a schematic flowchart of obtaining a face region image in the face occlusion detection method according to the first embodiment of this application;

FIG. 3 is a schematic flowchart of generating the first occlusion result in the face occlusion detection method according to the first embodiment of this application;

FIG. 4 is a schematic flowchart of generating the classification result in the face occlusion detection method according to the first embodiment of this application;

FIG. 5 is a schematic flowchart of training the face occlusion detection branch model in the face occlusion detection method according to the first embodiment of this application;

FIG. 6 is a schematic flowchart of training the face occlusion classification branch model in the face occlusion detection method according to the first embodiment of this application;

FIG. 7 is a schematic diagram of the program modules of the face occlusion detection system according to the second embodiment of this application;

FIG. 8 is a schematic diagram of the hardware structure of the computer device according to the third embodiment of this application.
Detailed Description

To make the purpose, technical solutions, and advantages of this application clearer, the application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the application and are not used to limit it. Based on the embodiments in this application, all other embodiments obtained by those of ordinary skill in the art without creative work shall fall within the protection scope of this application.

The technical solutions of the various embodiments can be combined with each other, but only on the basis that they can be realized by a person of ordinary skill in the art; when a combination of technical solutions is contradictory or cannot be realized, such a combination should be considered nonexistent, and it falls outside the protection scope claimed by this application.
Embodiment One

Please refer to FIG. 1, which shows a flowchart of the steps of the face occlusion detection method according to an embodiment of the present application. It can be understood that the flowchart in this method embodiment is not used to limit the order in which the steps are executed. The following provides an exemplary description with a computer device as the execution subject, specifically as follows:

Step S100: Acquire an image to be detected.

Exemplarily, the image to be detected is acquired in real time by a camera acquisition unit, and the image to be detected includes a face region image and a background region.

Step S200: Acquire a face region image from the image to be detected.
In an exemplary embodiment, referring to FIG. 2, step S200 may further include:

Step S201: Obtain the coordinate positions of multiple facial feature points in the image to be detected.

Specifically, the multiple facial feature points are 68 facial feature points.

Step S202: Extract the face region image from the image to be detected according to the acquired coordinate positions of the multiple facial feature points, the face region image including multiple facial feature regions.

Specifically, the multiple facial feature regions include the eyebrow area, eye area, nose area, mouth area, cheek area, forehead area, tooth area, and the like.

For example, the embodiment of the present application performs calibration of the 68 facial feature points through the landmark algorithm (a facial feature point extraction algorithm): the landmark algorithm calls a predictor to obtain the coordinate positions of the 68 facial feature points, draws a circle at each feature point according to its coordinate position, and marks the serial numbers of the 68 feature points in the calibration order. The shapes of the facial features are determined according to the coordinate positions of the multiple facial feature points, so as to obtain the multiple facial feature regions in the image to be detected.

Each facial feature region is marked and delimited by a rectangular frame.
Step S300: Recognize the face region image through the face occlusion detection branch model and perform pixel processing to generate a first occlusion result.

Specifically, the embodiment of the present application may recognize the face region image through the face occlusion detection branch model of the face occlusion model to generate the first occlusion result.

The face occlusion detection branch model includes multiple first convolutional layers, a second convolutional layer, a first fully connected layer, and the like.

In an exemplary embodiment, referring to FIG. 3, step S300 may further include:

Step S301: Perform convolution on the face region image through the face occlusion detection branch model to output multiple convolution feature maps, where the multiple convolution feature maps include multiple first convolution feature maps and multiple second convolution feature maps.

Specifically, the image to be detected carrying the face region image is input into the face occlusion detection branch model, convolved through the multiple first convolutional layers and the second convolutional layer, and the multiple first convolution feature maps are output through the first fully connected layer.

For example, the image to be detected carrying the face region image is input into the face occlusion detection branch model and convolved through the multiple first convolutional layers to output multiple second convolution feature maps; the multiple second convolution feature maps are then input into the second convolutional layer and the first fully connected layer for further convolution, so as to output the multiple first convolution feature maps. The second convolution feature maps contain occluder features, and the height and width of both the second and the first convolution feature maps are consistent with the height and width of the image to be detected.

Specifically, the occluder features include data such as the shape, position, and pixels of the occluder.

The convolution features contained in the first convolution feature maps are thicker convolution features formed by feature fusion and concatenation.
Step S302: Combine and enlarge the multiple first convolution feature maps, and adjust the pixel values of each facial feature region and the background region according to a preset rule to obtain a predicted face image, the predicted face image including an occlusion area.

Specifically, after the multiple first convolution feature maps are combined, the edges are expanded and the pixel values of each facial feature region and the background region are adjusted according to the preset rule to obtain the predicted face image. The preset rule is that the pixel value of the unoccluded part of the predicted face image is set to 0, and the pixel values of the occluded part and the background part are set to 1; that is, the unoccluded part of the predicted face image appears white, while the background part and the occluded part both appear black.

Step S303: Calculate the occlusion ratio between each occlusion area and the facial feature region in which it is located.

Specifically, an occlusion area appears as the black area inside a rectangular frame, namely the black part of each rectangular frame in the predicted face image obtained after pixel processing.

Specifically, for each occlusion area, the ratio of the black area in its rectangular frame to the whole rectangular frame is calculated, giving the occlusion ratio of the corresponding occlusion area.

Step S304: Compare the occlusion ratio with the preset occlusion threshold corresponding to the facial feature region to generate the first occlusion result.

In an exemplary embodiment, when the occlusion ratio is greater than the occlusion threshold, the first occlusion result indicates that the facial feature region is occluded; when the occlusion ratio is less than the occlusion threshold, the first occlusion result indicates that the facial feature region is not occluded.

The preset occlusion thresholds corresponding to the facial feature regions may be set to the same value or to different values. Specifically, the preset occlusion threshold of each facial feature region can be adjusted dynamically according to different scene requirements: when the scene requirement is strict and only a small occlusion ratio is tolerated, a smaller preset occlusion threshold can be set; when the scene requirement is loose, a higher preset occlusion threshold can be set.

Further, the occlusion threshold can be set manually according to different scene requirements, for example to 95%, 90%, or 85%.
步骤S400,当所述第一遮挡结果表示所述人脸区域图像被遮挡时,通过人脸遮挡分类分支模型及预设的遮挡标签对人脸区域图像分类,以生成分类结果。Step S400: When the first occlusion result indicates that the face region image is occluded, classify the face region image through the face occlusion classification branch model and a preset occlusion label to generate a classification result.
具体的,本申请实施例可以通过人脸遮挡模型的人脸遮挡分类分支模型中预设的遮挡标签对人脸区域图像分类,以生成分类结果。Specifically, the embodiment of the present application may classify the face region image by using the preset occlusion label in the face occlusion classification branch model of the face occlusion model to generate the classification result.
其中,预设的遮挡标签包括预设的帽子、刘海、墨镜、眼睛、胡子、口罩、围巾等遮挡标签。Among them, the preset occlusion tags include preset occlusion tags such as hats, bangs, sunglasses, eyes, beards, masks, and scarves.
在示例性的实施例中,人脸遮挡分类分支模型包括若干隐藏层,所述隐藏层为若干第三卷积层、第二全连接层以及分类层,每个层与层之间相互连接。In an exemplary embodiment, the face occlusion classification branch model includes several hidden layers, and the hidden layers are several third convolutional layers, second fully connected layers, and classification layers, and each layer is connected to each other.
In an exemplary embodiment, referring to FIG. 4, step S400 may further include:
Step S401: Extract occluder features from the multiple second convolution feature maps through the face occlusion classification branch model.
Specifically, multiple second convolution feature maps carrying occluder features are obtained from the last first convolutional layer, and the occluder features are extracted from them.
Step S402: Match the occluder features against the preset occlusion labels to generate the classification result.
In an exemplary embodiment, when an occluder feature matches one of the preset occlusion labels, the classification result maps that feature to the matching label; when it matches none of the preset labels, the feature is marked to obtain a new occluder label, and the classification result maps the feature to that new label.
Further, the new occluder label is saved in the preset database. During subsequent model maintenance, whenever a new occluder label is detected, the corresponding new occluder is identified, a new occluder label is generated and stored in the preset database, and the face occlusion classification branch model is optimized with the newly generated labels. For example, occluder features are extracted from the multiple second convolution feature maps and matched against the preset occlusion labels; when a feature matches "hat", the generated classification result maps that feature to "hat".
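One way to realize this matching rule is to treat a low classifier confidence as "matches no preset label", as in the sketch below; the confidence threshold and the naming scheme for new labels are assumptions, since the description does not spell out the matching criterion.

```python
import torch

PRESET_LABELS = ["hat", "bangs", "sunglasses", "glasses", "beard", "mask", "scarf"]

def match_occluder(logits: torch.Tensor, min_conf: float = 0.5) -> str:
    """Map classifier logits to a preset occlusion label, or flag a new one.

    A low maximum softmax probability stands in for "no preset label
    matches"; both the threshold and the fallback name are hypothetical."""
    probs = torch.softmax(logits, dim=-1)
    conf, idx = probs.max(dim=-1)
    if conf.item() >= min_conf:
        return PRESET_LABELS[idx.item()]
    return f"unknown_occluder_{idx.item()}"   # new label, to be stored in the database
```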
Step S500: Generate the final occlusion result from the first occlusion result and the classification result.
Specifically, the final occlusion result includes the first occlusion result, which states whether a face region is occluded, and the classification result, which states the type of the occluder.
In this embodiment, by dynamically adjusting the preset occlusion threshold of each facial-feature region while distinguishing the type of the occluder, that is, determining what each facial-feature region is occluded by, the accuracy of face occlusion detection can be improved, particularly in scenarios that must both judge occlusion and rule out factors such as beards that would otherwise be misjudged as occlusion.
In an exemplary embodiment, the face occlusion detection method further includes: generating a feedback instruction according to the final occlusion result, the feedback instruction indicating which areas of the user's face are occluded and what occluder covers each of them.
For example, when the final occlusion result states that the user's eyes are occluded and the occluder is a pair of sunglasses, the computer device generates a feedback instruction indicating that face recognition cannot proceed because the eyes are covered by sunglasses, so that the user can respond accordingly, i.e., take off the sunglasses, allowing face recognition to proceed.
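A feedback instruction of this kind could be rendered as a simple message, for instance as below; the result-dictionary format is a hypothetical convention for the example.

```python
def feedback_message(final_result: dict) -> str:
    """Render the final occlusion result as a user-facing prompt.

    final_result example (hypothetical format):
        {"region": "eyes", "occluded": True, "occluder": "sunglasses"}
    """
    if final_result.get("occluded"):
        return (f"Your {final_result['region']} are covered by "
                f"{final_result['occluder']}; please remove the obstruction "
                f"so face recognition can proceed.")
    return "Face is unobstructed; proceeding with recognition."
```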
Before step S100, this embodiment also includes the training process of the face occlusion detection branch model of the face occlusion model; referring to FIG. 5, it is as follows:
Step S600: Preprocess multiple sample face images to obtain comparison sample face images.
Specifically, multiple sample face images, including occluded face samples and unoccluded face samples, are obtained from a preset database. Each sample face image is annotated to locate its occluded features, unoccluded features, and background, and the pixel values of these three classes are set separately to output the comparison sample face image.
The pixel value of the parts determined to be occluded is set to 1, and the pixel value of the parts determined to be unoccluded or background is set to 0; the comparison sample face image is then output, in which the unoccluded parts appear white while the occluded parts and the background appear black. During preprocessing, a sample face image may also contain areas for which occlusion cannot be determined; the pixel value of such an area is set to 225. These undeterminable areas are located by preset fuzzy regions, and to preserve accuracy they can be excluded when the subsequent intersection-over-union is computed.
The comparison face image is rendered as a black-and-white comparison face image.
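A sketch of the annotation rasterization under the stated pixel convention (occluded = 1, unoccluded and background = 0, undeterminable = 225, where 255 may be the intended value); the boolean-region input format is an assumption.

```python
import numpy as np

# Pixel labels per the description: occluded = 1, unoccluded/background = 0,
# undeterminable areas = 225 (excluded later from the intersection-over-union).
OCCLUDED, CLEAR, UNCERTAIN = 1, 0, 225

def build_comparison_mask(occluded_rgn: np.ndarray,
                          uncertain_rgn: np.ndarray) -> np.ndarray:
    """Rasterize the annotations of one sample into a comparison mask.

    Both arguments are boolean H x W arrays marking the annotated pixels
    of the respective class; all other pixels default to unoccluded/background."""
    mask = np.full(occluded_rgn.shape, CLEAR, dtype=np.uint8)
    mask[occluded_rgn] = OCCLUDED
    mask[uncertain_rgn] = UNCERTAIN
    return mask
```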
Step S601: Input the multiple sample face images into the several first convolutional layers, second convolutional layers, and the first fully connected layer of the deep neural network model to perform convolution and output multiple convolution feature maps, the multiple convolution feature maps including multiple first convolution feature maps and multiple second convolution feature maps.
Step S602: Combine and upscale the multiple first convolution feature maps to output a predicted sample face image.
The predicted sample face image is likewise rendered as a black-and-white image.
Step S603: Match the predicted sample face image against the comparison sample face image and compute the intersection-over-union, which is the ratio of the intersection to the union of the predicted sample face image and the comparison sample face image.
Specifically, the intersection-over-union refers to the ratio of the intersection to the union of each corresponding occluded area, unoccluded area, and background area of the predicted and comparison sample face images.
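For the occluded class, the intersection-over-union of the two masks might be computed as below, skipping the undeterminable pixels as the preprocessing step suggests; the mask encoding follows the convention stated above.

```python
import numpy as np

def mask_iou(pred: np.ndarray, target: np.ndarray, ignore_value: int = 225) -> float:
    """Intersection-over-union of the occluded class between the predicted
    mask and the comparison mask, skipping pixels marked undeterminable."""
    valid = target != ignore_value
    p = (pred == 1) & valid       # predicted occluded pixels
    t = (target == 1) & valid     # annotated occluded pixels
    union = np.logical_or(p, t).sum()
    if union == 0:
        return 1.0                # nothing occluded in either mask
    return float(np.logical_and(p, t).sum()) / union
```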
Step S604: When the intersection-over-union is less than the preset comparison threshold, iterate the deep neural network model with the first loss function to adjust the intersection-over-union, so as to obtain the optimized face occlusion detection branch model.
Specifically, when the intersection-over-union is below the preset comparison threshold, the smaller it is, the less the predicted sample face image resembles the comparison sample face image, indicating that the model still needs further training and optimization.
In an exemplary embodiment, the first loss function is a U-net loss function. The U-net loss function may be a loss function with boundary weights, whose purpose is to give higher weights to the pixels close to boundary points in the sample face image.
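In the spirit of that loss, a boundary-weighted pixel-wise cross entropy can be written as follows; the binary formulation and the externally supplied weight map are assumptions, since the description does not fix the exact form.

```python
import torch
import torch.nn.functional as F

def boundary_weighted_bce(pred_logits: torch.Tensor,
                          target: torch.Tensor,
                          weight_map: torch.Tensor) -> torch.Tensor:
    """Pixel-wise binary cross entropy with per-pixel weights: weight_map
    is larger for pixels near region boundaries, as in the U-Net loss.

    pred_logits, target, weight_map: tensors of shape (N, H, W)."""
    loss = F.binary_cross_entropy_with_logits(
        pred_logits, target.float(), reduction="none")
    return (loss * weight_map).mean()
```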
Further, this embodiment also includes the training process of the face occlusion classification branch model of the face occlusion model; referring to FIG. 6, it is as follows:
Step S610: Input the multiple second convolution feature maps into several hidden layers of the deep neural network model to extract sample occluder features.
Specifically, the obtained second convolution feature maps containing occluder features are fed into the third convolutional layers among the hidden layers of the deep neural network model for a further convolution operation, so as to extract the sample occluder features.
Step S611: Feed the sample occluder features into the classification layer for recognition and classification, so as to generate a first classification conclusion.
Step S612: Compare the first classification conclusion with the sample classification conclusion of the sample face image to compute a second loss value.
Step S613: Iterate the deep neural network model with the second loss value and the second loss function, lowering the loss value and updating the model parameters of the face occlusion classification branch model to obtain the optimized face occlusion classification branch model.
Specifically, the second loss function may be a cross-entropy loss function.
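A single training step of the classification branch with a cross-entropy loss might look like the sketch below; the optimizer handling and the batch format are assumptions.

```python
import torch
import torch.nn.functional as F

def classification_train_step(model, feature_maps, labels, optimizer) -> float:
    """One optimization step of the classification branch using the
    cross-entropy loss named in the text (training-loop details assumed)."""
    optimizer.zero_grad()
    logits = model(feature_maps)              # second conv feature maps in
    loss = F.cross_entropy(logits, labels)    # second loss function
    loss.backward()
    optimizer.step()                          # update branch parameters
    return loss.item()
```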
The optimized face occlusion detection branch model and the optimized face occlusion classification branch model are combined into the optimized face detection model.
In an exemplary embodiment, the preset occluder labels of the face occlusion classification branch model may also be updated periodically according to the classification results it outputs.
In an exemplary embodiment, the face occlusion detection branch model and the face occlusion classification branch model may also be assigned matching weights, the first occlusion result of the detection branch being an important parameter influencing the final occlusion result of the face detection model.
In this embodiment, the first occlusion result for the occluded face areas is obtained by recognizing the face region image, and the face region image is classified with the preset occlusion labels to obtain the classification result giving the type of the occluder on the face; the final occlusion result combines the first occlusion result with the occluder type, which improves the recognition accuracy of face occlusion detection and, by identifying the type of the occluder, facilitates the intelligent application of face occlusion detection.
Embodiment 2
Referring to FIG. 7, a schematic diagram of the program modules of the face occlusion detection system of this application is shown. In this embodiment, the face occlusion detection system 20 may include, or be divided into, one or more program modules that are stored in a storage medium and executed by one or more processors to complete this application and implement the face occlusion detection method described above. A program module in the embodiments of this application refers to a series of computer program instruction segments capable of completing a specific function, better suited than the program itself to describing the execution of the face occlusion detection system 20 in the storage medium. The following description introduces the functions of each program module of this embodiment:
The acquisition module 700 is used to acquire an image to be detected.
The extraction module 710 is used to obtain a face region image from the image to be detected.
Further, the extraction module 710 is also used to: acquire the coordinate positions of multiple facial feature points in the image to be detected; and extract the face region image from the image to be detected according to the acquired coordinate positions, the face region image including multiple facial-feature regions.
In an exemplary embodiment, the multiple facial feature points are 68 facial feature points, and the facial-feature regions include the eyebrow, eye, nose, mouth, cheek, forehead, and teeth regions, among others.
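Assuming the common dlib-style 68-point layout, the facial-feature regions could be cut out of the landmark coordinates as follows; the index ranges, the margin, and the helper name are illustrative, not prescribed by the description.

```python
import numpy as np

# Indices into the 68-point landmark layout (dlib-style numbering assumed).
REGION_POINTS = {
    "eyebrows": list(range(17, 27)),
    "nose":     list(range(27, 36)),
    "eyes":     list(range(36, 48)),
    "mouth":    list(range(48, 68)),
}

def region_boxes(landmarks: np.ndarray, margin: int = 5) -> dict:
    """Bounding rectangles of facial-feature regions from 68 landmarks.

    landmarks: (68, 2) array of (x, y) coordinates."""
    boxes = {}
    for name, idxs in REGION_POINTS.items():
        pts = landmarks[idxs]
        x1, y1 = pts.min(axis=0) - margin
        x2, y2 = pts.max(axis=0) + margin
        boxes[name] = (int(x1), int(y1), int(x2), int(y2))
    return boxes
```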
The recognition module 720 is used to recognize the face region image to generate a first occlusion result.
Further, the recognition module 720 is also used to: perform convolution on the face region image to output multiple convolution feature maps, the multiple convolution feature maps including multiple first convolution feature maps; combine and upscale the first convolution feature maps to obtain a predicted face image that includes occlusion areas; calculate the occlusion ratio between each occlusion area and the facial-feature region in which it lies; and compare the occlusion ratio with the preset occlusion threshold of the corresponding region to generate the first occlusion result.
In an exemplary embodiment, after the multiple first convolution feature maps are combined, the edge and background pixels are expanded to obtain the predicted face image; the pixel value of the unoccluded parts of the predicted face image is set to 0 and that of the occluded parts and the background to 1, i.e., the unoccluded parts appear white while the background and the occluded parts appear black.
In an exemplary embodiment, the preset occlusion thresholds of the individual facial-feature regions may be set to the same value or to different values, and the threshold of each region can be adjusted dynamically according to the requirements of different scenarios: a smaller preset threshold when the scenario is demanding and tolerates little occlusion, and a higher preset threshold when it is less demanding.
The classification module 730 is used to classify the face region image according to the preset occlusion labels to generate a classification result.
Further, the classification module 730 is also used to: extract occluder features from multiple second convolution feature maps; and match the occluder features against the preset occlusion labels to generate the classification result.
The result generation module 740 is used to generate a final occlusion result according to the first occlusion result and the classification result.
Embodiment 3
Referring to FIG. 8, a schematic diagram of the hardware architecture of the computer device of Embodiment 3 of this application is shown. In this embodiment, the computer device 2 is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions. It may be a rack server, a blade server, a tower server, or a cabinet server (an independent server, or a server cluster composed of multiple servers), among others. As shown in FIG. 8, the computer device 2 at least includes, but is not limited to, a memory 21, a processor 22, a network interface 23, and the face occlusion detection system 20, which can communicate with one another through a system bus. Among these:
In this embodiment, the memory 21 includes at least one type of computer-readable storage medium, including flash memory, hard disk, multimedia card, card-type memory (e.g., SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disks, optical disks, and so on. In some embodiments, the memory 21 may be an internal storage unit of the computer device 2, such as its hard disk or main memory. In other embodiments, it may be an external storage device of the computer device 2, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card equipped on the device. Of course, the memory 21 may also include both the internal storage unit of the computer device 2 and its external storage device. In this embodiment, the memory 21 is generally used to store the operating system and the various application software installed on the computer device 2, for example the program code of the face occlusion detection system 20 of Embodiment 2, and can also be used to temporarily store various data that have been or will be output.
In some embodiments, the processor 22 may be a central processing unit (CPU), a controller, a microcontroller, a microprocessor, or another data processing chip. The processor 22 is generally used to control the overall operation of the computer device 2. In this embodiment, the processor 22 is used to run the program code or process the data stored in the memory 21, for example to run the face occlusion detection system 20, so as to implement the face occlusion detection method of the embodiments of this application.
The network interface 23 may include a wireless or wired network interface and is generally used to establish communication connections between the computer device 2 and other electronic devices, for example to connect the computer device 2 with an external terminal through a network and establish data transmission channels and communication connections between them. The network may be a wireless or wired network such as an intranet, the Internet, the Global System for Mobile communication (GSM), Wideband Code Division Multiple Access (WCDMA), a 4G network, a 5G network, Bluetooth, or Wi-Fi.
It should be pointed out that FIG. 8 only shows the computer device 2 with components 20-23, but it should be understood that not all of the illustrated components are required; more or fewer components may be implemented instead.
In this embodiment, the face occlusion detection system 20 stored in the memory 21 may also be divided into one or more program modules, which are stored in the memory 21 and executed by one or more processors (in this embodiment, the processor 22) to complete this application.
For example, FIG. 7 shows a schematic diagram of the program modules of Embodiment 2 of the face occlusion detection system 20. In that embodiment, the face occlusion detection system 20 may be divided into the acquisition module 700, the extraction module 710, the recognition module 720, the classification module 730, and the result generation module 740. A program module as referred to in this application is a series of computer program instruction segments capable of completing a specific function, better suited than a program to describing the execution of the face occlusion detection system 20 in the computer device 2. The specific functions of the program modules 700-740 have been described in detail in Embodiment 2 and are not repeated here.
Embodiment 4
This embodiment also provides a computer-readable storage medium, such as flash memory, hard disk, multimedia card, card-type memory (e.g., SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disks, optical disks, servers, app stores, and so on, on which a computer program is stored that implements the corresponding functions when executed by a processor. The computer-readable storage medium of this embodiment is used to store the face occlusion detection system 20 and, when executed by a processor, implements the face occlusion detection method of the embodiments of this application. The computer-readable storage medium may be non-volatile or volatile.
The serial numbers of the above embodiments of this application are for description only and do not represent the relative merits of the embodiments.
Through the description of the above implementations, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus the necessary general-purpose hardware platform, or of course by hardware, but in many cases the former is the better implementation.
The above are only preferred embodiments of this application and do not therefore limit its patent scope; any equivalent structure or equivalent process transformation made using the contents of the specification and drawings of this application, or any direct or indirect application in other related technical fields, is likewise included within the scope of patent protection of this application.

Claims (20)

1. A face occlusion detection method, comprising:
    acquiring an image to be detected;
    obtaining a face region image from the image to be detected;
    recognizing the face region image through a face occlusion detection branch model and performing pixel processing to generate a first occlusion result;
    when the first occlusion result indicates that the face region image is occluded, classifying the face region image through a face occlusion classification branch model and preset occlusion labels to generate a classification result; and
    generating a final occlusion result according to the first occlusion result and the classification result.
2. The face occlusion detection method according to claim 1, wherein obtaining the face region image from the image to be detected comprises:
    acquiring the coordinate positions of multiple facial feature points in the image to be detected; and
    extracting the face region image from the image to be detected according to the acquired coordinate positions of the multiple facial feature points, the face region image including multiple facial-feature regions.
3. The face occlusion detection method according to claim 2, wherein the image to be detected includes a background area, and recognizing the face region image through the face occlusion detection branch model and performing pixel processing to generate the first occlusion result comprises:
    performing convolution on the face region image through the face occlusion detection branch model to output multiple convolution feature maps, the multiple convolution feature maps including multiple first convolution feature maps;
    combining and upscaling the multiple first convolution feature maps and adjusting the pixel values of each facial-feature region and the background area according to preset rules to obtain a predicted face image, the predicted face image including an occlusion area;
    calculating the occlusion ratio between the occlusion area and the facial-feature region in which the occlusion area lies; and
    comparing the occlusion ratio with the preset occlusion threshold corresponding to the facial-feature region to generate the first occlusion result.
4. The face occlusion detection method according to claim 3, wherein comparing the occlusion ratio with the preset occlusion threshold corresponding to the facial-feature region to generate the first occlusion result comprises:
    when the occlusion ratio is greater than the occlusion threshold, the first occlusion result indicating that the facial-feature region is occluded; and
    when the occlusion ratio is less than the occlusion threshold, the first occlusion result indicating that the facial-feature region is not occluded.
5. The face occlusion detection method according to claim 4, wherein the multiple convolution feature maps include multiple second convolution feature maps, and classifying the face region image through the face occlusion classification branch model and the preset occlusion labels to generate the classification result comprises:
    extracting occluder features from the multiple second convolution feature maps through the face occlusion classification branch model; and
    matching the occluder features against the preset occlusion labels to generate the classification result.
6. The face occlusion detection method according to claim 5, wherein matching the occluder features against the preset occlusion labels to generate the classification result comprises:
    when an occluder feature matches any one of the preset occlusion labels, the classification result indicating that the feature corresponds to the matching label; and
    when an occluder feature matches none of the preset occlusion labels, marking the feature to obtain a new occluder label, the classification result indicating that the feature corresponds to the new label.
7. The face occlusion detection method according to claim 6, further comprising: generating a feedback instruction according to the final occlusion result, the feedback instruction indicating the occluded areas of the user's face and the occluders located in those areas.
8. A face occlusion detection system, comprising:
    an acquisition module for acquiring an image to be detected;
    an extraction module for obtaining a face region image from the image to be detected;
    a recognition module for recognizing the face region image to generate a first occlusion result;
    a classification module for classifying the face region image according to preset occlusion labels to generate a classification result; and
    a result generation module for generating a final occlusion result according to the first occlusion result and the classification result.
9. The face occlusion detection system according to claim 8, wherein the extraction module is further used to:
    acquire the coordinate positions of multiple facial feature points in the image to be detected; and
    extract the face region image from the image to be detected according to the acquired coordinate positions of the multiple facial feature points, the face region image including multiple facial-feature regions.
10. The face occlusion detection system according to claim 9, wherein the image to be detected includes a background area, and the recognition module is further used to:
    perform convolution on the face region image through a face occlusion detection branch model to output multiple convolution feature maps, the multiple convolution feature maps including multiple first convolution feature maps;
    combine and upscale the multiple first convolution feature maps and adjust the pixel values of each facial-feature region and the background area according to preset rules to obtain a predicted face image, the predicted face image including an occlusion area;
    calculate the occlusion ratio between the occlusion area and the facial-feature region in which the occlusion area lies; and
    compare the occlusion ratio with the preset occlusion threshold corresponding to the facial-feature region to generate the first occlusion result.
11. The face occlusion detection system according to claim 10, wherein the recognition module is further used to:
    when the occlusion ratio is greater than the occlusion threshold, indicate through the first occlusion result that the facial-feature region is occluded; and
    when the occlusion ratio is less than the occlusion threshold, indicate through the first occlusion result that the facial-feature region is not occluded.
12. The face occlusion detection system according to claim 11, wherein the multiple convolution feature maps include multiple second convolution feature maps, and the classification module is further used to:
    extract occluder features from the multiple second convolution feature maps through a face occlusion classification branch model; and
    match the occluder features against the preset occlusion labels to generate the classification result.
13. The face occlusion detection system according to claim 12, wherein the classification module is further used to:
    when an occluder feature matches any one of the preset occlusion labels, indicate through the classification result that the feature corresponds to the matching label; and
    when an occluder feature matches none of the preset occlusion labels, mark the feature to obtain a new occluder label, the classification result indicating that the feature corresponds to the new label.
14. The face occlusion detection system according to claim 13, further comprising:
    a feedback module for generating a feedback instruction according to the final occlusion result, the feedback instruction indicating the occluded areas of the user's face and the occluders located in those areas.
15. A computer device comprising a memory, a processor, and a computer program stored on the memory and runnable on the processor, wherein the processor performs the following steps when executing the computer program:
    acquiring an image to be detected;
    obtaining a face region image from the image to be detected;
    recognizing the face region image through a face occlusion detection branch model and performing pixel processing to generate a first occlusion result;
    when the first occlusion result indicates that the face region image is occluded, classifying the face region image through a face occlusion classification branch model and preset occlusion labels to generate a classification result; and
    generating a final occlusion result according to the first occlusion result and the classification result.
16. The computer device according to claim 15, wherein the processor performs the following steps when executing the computer program:
    acquiring the coordinate positions of multiple facial feature points in the image to be detected; and
    extracting the face region image from the image to be detected according to the acquired coordinate positions of the multiple facial feature points, the face region image including multiple facial-feature regions.
17. The computer device according to claim 16, wherein the image to be detected includes a background area, and the processor performs the following steps when executing the computer program:
    performing convolution on the face region image through the face occlusion detection branch model to output multiple convolution feature maps, the multiple convolution feature maps including multiple first convolution feature maps;
    combining and upscaling the multiple first convolution feature maps and adjusting the pixel values of each facial-feature region and the background area according to preset rules to obtain a predicted face image, the predicted face image including an occlusion area;
    calculating the occlusion ratio between the occlusion area and the facial-feature region in which the occlusion area lies; and
    comparing the occlusion ratio with the preset occlusion threshold corresponding to the facial-feature region to generate the first occlusion result.
18. A computer-readable storage medium storing a computer program executable by at least one processor to cause the at least one processor to perform the following steps:
    acquiring an image to be detected;
    obtaining a face region image from the image to be detected;
    recognizing the face region image through a face occlusion detection branch model and performing pixel processing to generate a first occlusion result;
    when the first occlusion result indicates that the face region image is occluded, classifying the face region image through a face occlusion classification branch model and preset occlusion labels to generate a classification result; and
    generating a final occlusion result according to the first occlusion result and the classification result.
19. The computer-readable storage medium according to claim 18, wherein the processor performs the following steps when executing the computer program:
    acquiring the coordinate positions of multiple facial feature points in the image to be detected; and
    extracting the face region image from the image to be detected according to the acquired coordinate positions of the multiple facial feature points, the face region image including multiple facial-feature regions.
20. The computer-readable storage medium according to claim 19, wherein the image to be detected includes a background area, and the processor performs the following steps when executing the computer program:
    performing convolution on the face region image through the face occlusion detection branch model to output multiple convolution feature maps, the multiple convolution feature maps including multiple first convolution feature maps;
    combining and upscaling the multiple first convolution feature maps and adjusting the pixel values of each facial-feature region and the background area according to preset rules to obtain a predicted face image, the predicted face image including an occlusion area;
    calculating the occlusion ratio between the occlusion area and the facial-feature region in which the occlusion area lies; and
    comparing the occlusion ratio with the preset occlusion threshold corresponding to the facial-feature region to generate the first occlusion result.
PCT/CN2020/118112 2020-03-05 2020-09-27 Face occlusion detection method and system WO2021174819A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010146004.6A CN111428581B (en) 2020-03-05 2020-03-05 Face shielding detection method and system
CN202010146004.6 2020-03-05

Publications (1)

Publication Number Publication Date
WO2021174819A1 (en) 2021-09-10

Family

ID=71547399

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/118112 WO2021174819A1 (en) 2020-03-05 2020-09-27 Face occlusion detection method and system

Country Status (2)

Country Link
CN (1) CN111428581B (en)
WO (1) WO2021174819A1 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113963424A (en) * 2021-12-21 2022-01-21 西南石油大学 Infant asphyxia or sudden death early warning method based on single-order face positioning algorithm
CN114332720A (en) * 2021-12-31 2022-04-12 上海商汤临港智能科技有限公司 Camera device shielding detection method and device, electronic equipment and storage medium
CN114565506A (en) * 2022-01-17 2022-05-31 北京新氧科技有限公司 Image color migration method, device, equipment and storage medium
CN115236072A (en) * 2022-06-14 2022-10-25 杰能科世智能安全科技(杭州)有限公司 Lifting column state detection method and device
CN115249281A (en) * 2022-01-29 2022-10-28 北京百度网讯科技有限公司 Image occlusion and model training method, device, equipment and storage medium
CN116612279A (en) * 2023-04-28 2023-08-18 广东科技学院 Method, device, network equipment and storage medium for target detection
CN117275075A (en) * 2023-11-01 2023-12-22 浙江同花顺智能科技有限公司 Face shielding detection method, system, device and storage medium

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111428581B (en) * 2020-03-05 2023-11-21 平安科技(深圳)有限公司 Face shielding detection method and system
CN112016464B (en) * 2020-08-28 2024-04-12 中移(杭州)信息技术有限公司 Method and device for detecting face shielding, electronic equipment and storage medium
CN112183504B (en) * 2020-11-27 2021-04-13 北京圣点云信息技术有限公司 Video registration method and device based on non-contact palm vein image
CN112396125B (en) * 2020-12-01 2022-11-18 中国第一汽车股份有限公司 Classification method, device, equipment and storage medium for positioning test scenes
CN112597867B (en) * 2020-12-17 2024-04-26 佛山科学技术学院 Face recognition method and system for wearing mask, computer equipment and storage medium
CN112633144A (en) * 2020-12-21 2021-04-09 平安科技(深圳)有限公司 Face occlusion detection method, system, device and storage medium
CN112651322B (en) * 2020-12-22 2024-05-24 北京眼神智能科技有限公司 Cheek shielding detection method and device and electronic equipment
CN112926424B (en) * 2021-02-10 2024-05-31 北京爱笔科技有限公司 Face shielding recognition method, device, readable medium and equipment
CN115131843B (en) * 2021-03-24 2024-05-07 北京君正集成电路股份有限公司 Method for detecting face shielding based on image segmentation
CN113111817B (en) * 2021-04-21 2023-06-27 中山大学 Semantic segmentation face integrity measurement method, system, equipment and storage medium
CN113222973B (en) * 2021-05-31 2024-03-08 深圳市商汤科技有限公司 Image processing method and device, processor, electronic equipment and storage medium
CN113705466B (en) * 2021-08-30 2024-02-09 浙江中正智能科技有限公司 Face five sense organ shielding detection method for shielding scene, especially under high imitation shielding
CN113762136A (en) * 2021-09-02 2021-12-07 北京格灵深瞳信息技术股份有限公司 Face image occlusion judgment method and device, electronic equipment and storage medium
CN114093012B (en) * 2022-01-18 2022-06-10 荣耀终端有限公司 Face shielding detection method and detection device
CN114792295B (en) * 2022-06-23 2022-11-04 深圳憨厚科技有限公司 Method, device, equipment and medium for correcting blocked object based on intelligent photo frame
CN115909468B (en) * 2023-01-09 2023-06-06 广州佰锐网络科技有限公司 Face five sense organs shielding detection method, storage medium and system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080247609A1 (en) * 2007-04-06 2008-10-09 Rogerio Feris Rule-based combination of a hierarchy of classifiers for occlusion detection
CN105095856A (en) * 2015-06-26 2015-11-25 上海交通大学 Method for recognizing human face with shielding based on mask layer
CN107145867A (en) * 2017-05-09 2017-09-08 电子科技大学 Face and face occluder detection method based on multitask deep learning
CN110826519A (en) * 2019-11-14 2020-02-21 深圳市华付信息技术有限公司 Face occlusion detection method and device, computer equipment and storage medium
CN111428581A (en) * 2020-03-05 2020-07-17 平安科技(深圳)有限公司 Face shielding detection method and system

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105095829B (en) * 2014-04-29 2019-02-19 华为技术有限公司 A kind of face identification method and system
CN107633204B (en) * 2017-08-17 2019-01-29 平安科技(深圳)有限公司 Face occlusion detection method, apparatus and storage medium
CN109002801B (en) * 2018-07-20 2021-01-15 燕山大学 Face shielding detection method and system based on video monitoring

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080247609A1 (en) * 2007-04-06 2008-10-09 Rogerio Feris Rule-based combination of a hierarchy of classifiers for occlusion detection
CN105095856A (en) * 2015-06-26 2015-11-25 上海交通大学 Method for recognizing human face with shielding based on mask layer
CN107145867A (en) * 2017-05-09 2017-09-08 电子科技大学 Face and face occluder detection method based on multitask deep learning
CN110826519A (en) * 2019-11-14 2020-02-21 深圳市华付信息技术有限公司 Face occlusion detection method and device, computer equipment and storage medium
CN111428581A (en) * 2020-03-05 2020-07-17 平安科技(深圳)有限公司 Face shielding detection method and system

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113963424A (en) * 2021-12-21 2022-01-21 西南石油大学 Infant asphyxia or sudden death early warning method based on single-order face positioning algorithm
CN114332720A (en) * 2021-12-31 2022-04-12 上海商汤临港智能科技有限公司 Camera device shielding detection method and device, electronic equipment and storage medium
CN114565506A (en) * 2022-01-17 2022-05-31 北京新氧科技有限公司 Image color migration method, device, equipment and storage medium
CN115249281A (en) * 2022-01-29 2022-10-28 北京百度网讯科技有限公司 Image occlusion and model training method, device, equipment and storage medium
CN115249281B (en) * 2022-01-29 2023-11-24 北京百度网讯科技有限公司 Image occlusion and model training method, device, equipment and storage medium
CN115236072A (en) * 2022-06-14 2022-10-25 杰能科世智能安全科技(杭州)有限公司 Lifting column state detection method and device
CN116612279A (en) * 2023-04-28 2023-08-18 广东科技学院 Method, device, network equipment and storage medium for target detection
CN116612279B (en) * 2023-04-28 2024-02-02 广东科技学院 Method, device, network equipment and storage medium for target detection
CN117275075A (en) * 2023-11-01 2023-12-22 浙江同花顺智能科技有限公司 Face shielding detection method, system, device and storage medium
CN117275075B (en) * 2023-11-01 2024-02-13 浙江同花顺智能科技有限公司 Face shielding detection method, system, device and storage medium

Also Published As

Publication number Publication date
CN111428581B (en) 2023-11-21
CN111428581A (en) 2020-07-17

Similar Documents

Publication Publication Date Title
WO2021174819A1 (en) Face occlusion detection method and system
WO2018028546A1 (en) Key point positioning method, terminal, and computer storage medium
CN110197146B (en) Face image analysis method based on deep learning, electronic device and storage medium
WO2022078041A1 (en) Occlusion detection model training method and facial image beautification method
WO2021051611A1 (en) Face visibility-based face recognition method, system, device, and storage medium
CN109033940B (en) A kind of image-recognizing method, calculates equipment and storage medium at device
CN112115866A (en) Face recognition method and device, electronic equipment and computer readable storage medium
CN111598038B (en) Facial feature point detection method, device, equipment and storage medium
WO2021174941A1 (en) Physical attribute recognition method, system, computer device, and storage medium
CN111310705A (en) Image recognition method and device, computer equipment and storage medium
CN112364827B (en) Face recognition method, device, computer equipment and storage medium
CN113343826A (en) Training method of human face living body detection model, human face living body detection method and device
CN112446322B (en) Eyeball characteristic detection method, device, equipment and computer readable storage medium
CN108596098A (en) Analytic method, system, equipment and the storage medium of human part
CN107992807A (en) A kind of face identification method and device based on CNN models
US11893773B2 (en) Finger vein comparison method, computer equipment, and storage medium
CN112836625A (en) Face living body detection method and device and electronic equipment
WO2021203718A1 (en) Method and system for facial recognition
CN112241689A (en) Face recognition method and device, electronic equipment and computer readable storage medium
CN114549557A (en) Portrait segmentation network training method, device, equipment and medium
CN113378764A (en) Video face acquisition method, device, equipment and medium based on clustering algorithm
US10803677B2 (en) Method and system of automated facial morphing for eyebrow hair and face color detection
WO2022111271A1 (en) Clothing standardization detection method and apparatus
CN111444928A (en) Key point detection method and device, electronic equipment and storage medium
CN113468925B (en) Occlusion face recognition method, intelligent terminal and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20922999

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20922999

Country of ref document: EP

Kind code of ref document: A1