WO2019033572A1 - Method for detecting whether face is blocked, device and storage medium - Google Patents


Info

Publication number
WO2019033572A1
Authority
WO
WIPO (PCT)
Prior art keywords
face
lip
image
eye
model
Prior art date
Application number
PCT/CN2017/108751
Other languages
French (fr)
Chinese (zh)
Inventor
陈林
张国辉
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2019033572A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features

Definitions

  • The present application relates to the field of computer vision processing technologies, and in particular to a face occlusion detection method, a device, and a computer-readable storage medium.
  • Face recognition is a biometric technology that performs identity authentication based on human facial feature information. An image or video stream containing a face is collected, the face is detected and tracked in the image, and the detected face is then matched and recognized.
  • At present, face recognition is applied very widely and plays an important role in many fields such as financial payment, access control and attendance, and identification, bringing great convenience to people's lives.
  • Typical products in the industry judge face occlusion by deep-learning-based training, but this approach demands a large sample size, and predicting occlusion with deep learning is computationally expensive and relatively slow.
  • The present application provides a face occlusion detection method, a device, and a computer-readable storage medium, whose main purpose is to quickly detect face occlusion in a real-time face image.
  • To achieve this, the present application provides an electronic device comprising a memory, a processor, and a camera device, wherein the memory stores a face occlusion detection program which, when executed by the processor, implements the following steps:
  • Image acquisition step: acquire a real-time image captured by the camera device and extract a real-time face image from it using a face recognition algorithm;
  • Feature point recognition step: input the real-time face image into a pre-trained face average model and use the model to identify t facial feature points from the real-time face image; and
  • Feature region judgment step: determine an eye region and a lip region according to the position information of the t facial feature points, input the eye region and the lip region into the pre-trained eye classification model and lip classification model of the face, judge the authenticity of the eye region and the lip region, and determine from the judgment results whether the face in the real-time image is occluded.
  • Preferably, when the face occlusion detection program is executed by the processor, the following step is further implemented:
  • Judgment step: determine whether the judgment results of the eye classification model and the lip classification model of the face for the eye region and the lip region are both true.
  • Preferably, when the face occlusion detection program is executed by the processor, the following steps are further implemented: when the judgment results for the eye region and the lip region are both true, determine that the face in the real-time face image is not occluded; and when the judgment results include a non-true result, prompt that the face in the real-time face image is occluded.
  • In addition, the present application further provides a face occlusion detection method, the method comprising:
  • Image acquisition step: acquire a real-time image captured by the camera device and extract a real-time face image from it using a face recognition algorithm;
  • Feature point recognition step: input the real-time face image into a pre-trained face average model and use the model to identify t facial feature points from the real-time face image; and
  • Feature region judgment step: determine an eye region and a lip region according to the position information of the t facial feature points, input the eye region and the lip region into the pre-trained eye classification model and lip classification model of the face, judge the authenticity of the eye region and the lip region, and determine from the judgment results whether the face in the real-time image is occluded.
  • Preferably, the method further comprises:
  • Judgment step: determine whether the judgment results of the eye classification model and the lip classification model of the face for the eye region and the lip region are both true.
  • Preferably, the method further comprises: when the judgment results for the eye region and the lip region are both true, determining that the face in the real-time face image is not occluded; and when the judgment results include a non-true result, prompting that the face in the real-time face image is occluded.
  • In addition, the present application further provides a computer-readable storage medium that includes a face occlusion detection program; when the face occlusion detection program is executed by a processor, any step of the face occlusion detection method described above is implemented.
  • The face occlusion detection method, electronic device, and computer-readable storage medium proposed by the present application input the real-time face image into the face average model to identify the facial feature points in that image, use the eye classification model and lip classification model of the face to judge the authenticity of the eye region and lip region determined by those feature points, and decide from that authenticity whether the face in the real-time face image is occluded.
  • FIG. 1 is a schematic diagram of the application environment of a preferred embodiment of the face occlusion detection method of the present application;
  • FIG. 2 is a schematic block diagram of the face occlusion detection program of FIG. 1;
  • FIG. 3 is a flowchart of a preferred embodiment of the face occlusion detection method of the present application.
  • The present application provides a face occlusion detection method applied to an electronic device 1.
  • Referring to FIG. 1, it is a schematic diagram of the application environment of a preferred embodiment of the face occlusion detection method of the present application.
  • In this embodiment, the electronic device 1 may be a terminal device having a computing function, such as a server, a smartphone, a tablet computer, a portable computer, or a desktop computer.
  • The electronic device 1 includes a processor 12, a memory 11, a camera device 13, a network interface 14, and a communication bus 15.
  • The camera device 13 is installed in a specific place, such as an office or a monitored area, captures real-time images of targets entering that place, and transmits the captured images to the processor 12 over the network.
  • The network interface 14 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface).
  • The communication bus 15 implements connection and communication between these components.
  • The memory 11 includes at least one type of readable storage medium.
  • The at least one type of readable storage medium may be a non-volatile storage medium such as a flash memory, a hard disk, a multimedia card, or a card-type memory.
  • In some embodiments, the readable storage medium may be an internal storage unit of the electronic device 1, such as its hard disk.
  • In other embodiments, the readable storage medium may also be an external memory of the electronic device 1, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card equipped on the electronic device 1.
  • In this embodiment, the readable storage medium of the memory 11 is generally used to store the face occlusion detection program 10 installed on the electronic device 1, the face image sample library, the human eye sample library, the human lip sample library, and the constructed and trained face average model, eye classification model, and lip classification model of the face.
  • The memory 11 can also be used to temporarily store data that has been output or is about to be output.
  • The processor 12 may in some embodiments be a Central Processing Unit (CPU), a microprocessor, or another data processing chip, used to run the program code or process the data stored in the memory 11, for example to execute the face occlusion detection program 10.
  • FIG. 1 shows only the electronic device 1 with components 11-15, but it should be understood that not all of the illustrated components are required; more or fewer components may be implemented instead.
  • Optionally, the electronic device 1 may further include a user interface, which may include an input unit such as a keyboard, a voice input device such as a microphone or other equipment with a voice recognition function, and a voice output device such as a speaker or earphones; optionally, the user interface may also include a standard wired interface and a wireless interface.
  • Optionally, the electronic device 1 may further include a display, which may also be called a display screen or a display unit.
  • In some embodiments it may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an Organic Light-Emitting Diode (OLED) touch display, or the like.
  • The display is used to display the information processed in the electronic device 1 and to display a visualized user interface.
  • Optionally, the electronic device 1 further includes a touch sensor.
  • The area the touch sensor provides for the user's touch operations is called the touch area.
  • The touch sensor described here may be a resistive touch sensor, a capacitive touch sensor, or the like.
  • The touch sensor includes not only contact-type touch sensors but also proximity-type touch sensors.
  • The touch sensor may be a single sensor or a plurality of sensors arranged, for example, in an array.
  • In addition, the area of the display of the electronic device 1 may be the same as or different from the area of the touch sensor.
  • Optionally, the display is stacked with the touch sensor to form a touch display, through which the device detects user-triggered touch operations.
  • Optionally, the electronic device 1 may further include a radio frequency (RF) circuit, sensors, an audio circuit, and the like, which are not described in detail here.
  • In the device embodiment shown in FIG. 1, the memory 11, as a computer storage medium, may include an operating system and the face occlusion detection program 10; when the processor 12 executes the face occlusion detection program 10 stored in the memory 11, the following steps are implemented:
  • Image acquisition step: acquire a real-time image captured by the camera device 13 and extract a real-time face image from it using a face recognition algorithm.
  • When the camera device 13 captures a real-time image, it sends the image to the processor 12. When the processor 12 receives the real-time image, it first obtains the image size and creates a grayscale image of the same size; converts the acquired color image into the grayscale image while allocating memory; equalizes the grayscale histogram to reduce the amount of grayscale information and speed up detection; then loads the training library, detects the face in the picture, and returns an object containing the face information; obtains the face location data and records the number of faces; and finally extracts and saves the face region, completing one real-time face image extraction.
  • Specifically, the face recognition algorithm for extracting the real-time face image from the real-time image may be a geometric-feature-based method, a local feature analysis method, an eigenface method, an elastic-model-based method, a neural network method, and so on.
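As an illustration, a minimal sketch of this acquisition step is given below. The patent does not name a specific library, so OpenCV and its bundled Haar cascade are assumed here as the detection backend standing in for the "training library".

```python
# A minimal sketch of the image-acquisition step, assuming OpenCV.
import cv2

def extract_face(frame):
    # Build a grayscale copy of the frame and equalize its histogram,
    # which reduces the information content and speeds up detection.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)

    # Load a pre-trained frontal-face detector and detect faces.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None  # no face found in this frame

    # Record the face location and save the face region as the
    # real-time face image.
    x, y, w, h = faces[0]
    return frame[y:y + h, x:x + w]
```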
  • Feature point recognition step: input the real-time face image into a pre-trained face average model and use the model to identify t facial feature points from the real-time face image.
  • A first sample library of n face images is established, and t facial feature points are marked in each face image; the t facial feature points comprise t1 eyelid feature points representing the eye positions, t2 eyeball feature points, and t3 lip feature points representing the lip position.
  • In each face image of the first sample library, the t1 eyelid feature points, t2 eyeball feature points, and t3 lip feature points are marked manually; the (t1+t2+t3) feature points of each face image form a shape feature vector S, yielding n shape feature vectors S of the face.
  • The face feature recognition model is trained with the t facial feature points to obtain the face average model.
  • The face feature recognition model is the Ensemble of Regression Trees (ERT) algorithm, formulated as:
  • $\hat{S}^{(t+1)} = \hat{S}^{(t)} + \tau_t\left(I, \hat{S}^{(t)}\right)$
  • where t is the cascade level index and τ_t(·,·) is the regressor of the current level. Each regressor consists of many regression trees, and the purpose of training is to obtain these regression trees.
  • Here Ŝ(t) is the shape estimate of the current model; each regressor τ_t(·,·) predicts an increment τ_t(I, Ŝ(t)) from the input image I and Ŝ(t), and this increment is added to the current shape estimate to refine the current model. Each level of the regressor makes its prediction on the basis of the feature points.
  • The training data set is (I1, S1), ..., (In, Sn), where I is an input sample image and S is the shape feature vector formed by the feature points in that sample image.
  • In the process of model training, for the n face images in the first sample library, suppose each sample picture has 34 feature points; in the shape feature vector, x1 to x12 denote the abscissas of the eyelid feature points, x13 to x14 the abscissas of the eyeball feature points, and x15 to x34 the abscissas of the lip feature points.
  • Some of the feature points of all sample images are taken (for example, 25 of the 34 feature points of each sample picture are chosen at random) to train the first regression tree; the residual between the first tree's prediction and the true value of those feature points (the weighted mean of the 25 points taken from each sample picture) is used to train the second tree, and so on, until the prediction of the Nth tree is close to the true value, which yields all the regression trees of the ERT algorithm.
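The residual-driven training just described is ordinary gradient boosting over shape vectors. The toy sketch below illustrates that loop under simplifying assumptions: the pixel-difference features used by a real ERT implementation are replaced by a generic feature matrix X, and the tree depth, tree count, and learning rate are arbitrary illustrative values, not ones taken from the patent.

```python
# A toy sketch of the residual-driven cascade training described above.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_cascade(X, S_true, S_mean, n_trees=10, lr=0.1):
    # Start every sample at the mean shape, then let each successive tree
    # predict the remaining residual and add that increment to the estimate.
    S_est = np.tile(S_mean, (len(X), 1))
    trees = []
    for _ in range(n_trees):
        residual = S_true - S_est                 # what is left to explain
        tree = DecisionTreeRegressor(max_depth=4).fit(X, residual)
        S_est = S_est + lr * tree.predict(X)      # improve the estimate
        trees.append(tree)
    return trees
```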
  • From these regression trees the face average model (mean shape) is obtained, and the model file and the sample library are saved to the memory. Because the training samples are marked with 12 eyelid feature points, 2 eyeball feature points, and 20 lip feature points, the trained face average model can be used to identify 12 eyelid feature points, 2 eyeball feature points, and 20 lip feature points from a face image.
  • After the real-time face image is obtained, the trained face average model is called from the memory 11, the real-time face image is aligned with the face average model, and a feature extraction algorithm is used to search the real-time face image for the 12 eyelid feature points, 2 eyeball feature points, and 20 lip feature points matching those of the face average model. The 20 lip feature points are evenly distributed on the lips.
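For illustration, the sketch below runs this step with dlib, whose shape_predictor is a public implementation of the same ERT cascade. The 34-point model file and the index ranges are assumptions carried over from the training description above, not artifacts shipped with dlib.

```python
# A sketch of the feature-point recognition step using dlib's ERT-based
# shape predictor. "face_34_landmarks.dat" is a hypothetical model file
# trained on the 34-point scheme described above.
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("face_34_landmarks.dat")  # hypothetical

def detect_landmarks(face_img):
    gray = cv2.cvtColor(face_img, cv2.COLOR_BGR2GRAY)
    rects = detector(gray, 1)
    if not rects:
        return None
    shape = predictor(gray, rects[0])
    pts = [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
    # Assumed index convention: 0-11 eyelid, 12-13 eyeball, 14-33 lip points.
    return {"eye": pts[:14], "lip": pts[14:34]}
```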
  • Feature region judgment step: determine an eye region and a lip region according to the position information of the t facial feature points, input the eye region and the lip region into the pre-trained eye classification model and lip classification model of the face, judge the authenticity of the eye region and the lip region, and determine from the judgment results whether the face in the real-time image is occluded.
  • A first number of human-eye positive sample images and a second number of human-eye negative sample images are collected, and the local features of each positive and negative human-eye sample image are extracted.
  • A human-eye positive sample image is an eye sample that contains a human eye; the two-eye portion can be cropped from the face image sample library as an eye sample.
  • A human-eye negative sample image is an image in which the eye region is incomplete. The positive and negative human-eye sample images form a second sample library.
  • A third number of lip positive sample images and a fourth number of lip negative sample images are collected, and the local features of each lip positive and negative sample image are extracted.
  • A lip positive sample image is an image that contains human lips; the lip portion can be cropped from the face image sample library as a lip positive sample image.
  • A lip negative sample image is an image in which the person's lip region is incomplete, or whose lips are not human lips (for example, an animal's). The lip positive and negative sample images form a third sample library.
  • The local feature is the Histogram of Oriented Gradients (HOG) feature, extracted from the human-eye sample images and lip sample images by a feature extraction algorithm. Because the color information in a sample image contributes little, the image is usually converted to grayscale and normalized as a whole; the horizontal and vertical gradients of the image are computed, and from them the gradient orientation of each pixel position, which captures contours, silhouettes, and some texture information while further weakening the influence of illumination. The image is then divided into cells (8×8 pixels), and a histogram of gradient orientations is built for each cell to quantize the local image gradient information, giving the feature descriptor of each local image region; the cells are then grouped into larger blocks for normalization.
  • The support vector machine classifier is trained with the positive and negative sample images in the second sample library and the third sample library and their extracted HOG features, to obtain the eye classification model of the face and the lip classification model of the face.
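A minimal sketch of this training step is given below, assuming scikit-image for the HOG descriptor and a linear SVM from scikit-learn; the 64×64 patch size and the sample-loading variables are illustrative assumptions, not values stated in the patent.

```python
# A sketch of the part-classifier training: HOG features + linear SVM.
import numpy as np
import cv2
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_of(patch):
    # Grayscale and resize the patch, then compute the 8x8-cell HOG
    # descriptor described above.
    gray = cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, (64, 64))
    return hog(gray, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

def train_part_classifier(positives, negatives):
    X = np.array([hog_of(p) for p in positives + negatives])
    y = np.array([1] * len(positives) + [0] * len(negatives))
    return LinearSVC().fit(X, y)

# One model per face part, trained on the corresponding sample library:
# eye_model = train_part_classifier(eye_pos, eye_neg)   # second library
# lip_model = train_part_classifier(lip_pos, lip_neg)   # third library
```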
  • In the real-time face image, an eye region can be determined from the 12 eyelid feature points and 2 eyeball feature points, and a lip region from the 20 lip feature points; the determined eye region and lip region are then input into the trained eye classification model and lip classification model of the face, and their authenticity is judged from the models' outputs.
  • The judged authenticity of the eye region and the lip region, that is, the models' outputs, may be all false, all true, or a mixture of true and false.
  • The judgment step is: determine whether the judgment results of the eye classification model and the lip classification model of the face for the eye region and the lip region are both true; that is, when the two models output their results, check whether the results contain only "true".
  • When the judgment results of the eye classification model and the lip classification model of the face for the eye region and the lip region are both true, it is determined that the face in the real-time face image is not occluded. That is, when the eye region and the lip region determined from the facial feature points are a genuine human eye region and a genuine human lip region, the face in the real-time face image is considered unoccluded.
  • When the judgment results of the eye classification model and the lip classification model of the face for the eye region and the lip region include a non-true result, the face in the real-time face image is considered occluded, and a prompt is issued that the face in the real-time face image is occluded.
  • Specifically, when the output of the eye classification model of the face is false, the eye region in the image is considered occluded; when the output of the lip classification model of the face is false, the lip region in the image is considered occluded, and a corresponding prompt is issued.
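Tying the pieces together, the sketch below crops both regions from the landmark coordinates, classifies each with its SVM, and reports which region (if any) appears occluded; hog_of, detect_landmarks, and the two models are the hypothetical helpers from the earlier sketches.

```python
# A sketch of the feature-region judgment: classify each region and report
# which one (if any) looks occluded.
def check_occlusion(face_img, landmarks, eye_model, lip_model):
    def crop(points):
        # Bounding box of the region's feature points.
        xs, ys = zip(*points)
        return face_img[min(ys):max(ys) + 1, min(xs):max(xs) + 1]

    eye_true = eye_model.predict([hog_of(crop(landmarks["eye"]))])[0] == 1
    lip_true = lip_model.predict([hog_of(crop(landmarks["lip"]))])[0] == 1

    if eye_true and lip_true:
        return "face not occluded"          # both regions judged genuine
    blocked = [name for name, ok in (("eye", eye_true), ("lip", lip_true))
               if not ok]
    return "occluded: " + ", ".join(blocked)
```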
  • In other embodiments, face recognition is performed after occlusion detection: when the face in the real-time face image is occluded, the user is prompted that the face in the current face image is occluded, a new real-time image is captured by the camera device 13, and the subsequent steps are performed again.
  • The electronic device 1 of this embodiment extracts a real-time face image from a real-time image, identifies the facial feature points in the real-time face image with the face average model, analyzes the eye region and lip region determined by those feature points with the eye classification model and lip classification model of the face, and quickly determines from the authenticity of the two regions whether the face in the current image is occluded.
  • In other embodiments, the face occlusion detection program 10 may also be partitioned into one or more modules that are stored in the memory 11 and executed by the processor 12 to complete the present application.
  • A module referred to in this application is a series of computer program instructions capable of performing a particular function. Referring to FIG. 2, it is a block diagram of the face occlusion detection program 10 of FIG. 1.
  • The face occlusion detection program 10 can be divided into an acquisition module 110, an identification module 120, a determination module 130, and a prompting module 140.
  • The functions or operation steps implemented by the modules 110-140 are similar to those described above and are only illustrated briefly here. For example:
  • the acquisition module 110 is configured to acquire a real-time image captured by the camera device 13 and extract a real-time face image from the real-time image using a face recognition algorithm;
  • the identification module 120 is configured to input the real-time face image into a pre-trained face average model and use the model to identify t facial feature points from the real-time face image;
  • the determination module 130 is configured to determine an eye region and a lip region according to the position information of the t facial feature points, input them into the pre-trained eye classification model and lip classification model of the face, judge the authenticity of the eye region and the lip region, and determine from the judgment results whether the face in the real-time image is occluded;
  • the prompting module 140 is configured to prompt that the face in the real-time face image is occluded when the judgment results of the eye classification model and the lip classification model of the face for the eye region and the lip region include a non-true result.
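As an illustration of this partition (the patent prescribes the module boundaries, not their code), one possible arrangement of the four modules is sketched below, delegating to the hypothetical helpers from the earlier sketches.

```python
# A sketch of one possible partition into the four modules named above.
class FaceOcclusionDetector:
    def __init__(self, eye_model, lip_model):
        self.eye_model = eye_model
        self.lip_model = lip_model

    def acquire(self, frame):             # acquisition module 110
        return extract_face(frame)

    def identify(self, face_img):         # identification module 120
        return detect_landmarks(face_img)

    def determine(self, face_img, pts):   # determination module 130
        return check_occlusion(face_img, pts, self.eye_model, self.lip_model)

    def prompt(self, verdict):            # prompting module 140
        if verdict != "face not occluded":
            print("Warning: " + verdict)
```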
  • In addition, the present application also provides a face occlusion detection method.
  • Referring to FIG. 3, it is a flowchart of a preferred embodiment of the face occlusion detection method of the present application. The method can be performed by an apparatus, and the apparatus can be implemented by software and/or hardware.
  • In this embodiment, the face occlusion detection method includes:
  • Step S10: acquire a real-time image captured by the camera device and extract a real-time face image from the real-time image using a face recognition algorithm.
  • When the camera device captures a real-time image, it sends the image to the processor. When the processor receives the real-time image, it first obtains the image size and creates a grayscale image of the same size; converts the acquired color image into the grayscale image while allocating memory; equalizes the grayscale histogram to reduce the amount of grayscale information and speed up detection; then loads the training library, detects the face in the picture, and returns an object containing the face information; obtains the face location data and records the number of faces; and finally extracts and saves the face region, completing one real-time face image extraction.
  • Specifically, the face recognition algorithm for extracting the real-time face image from the real-time image may be a geometric-feature-based method, a local feature analysis method, an eigenface method, an elastic-model-based method, a neural network method, and so on.
  • Step S20: input the real-time face image into a pre-trained face average model and use the model to identify t facial feature points from the real-time face image.
  • A first sample library of n face images is established, and t facial feature points are marked in each face image; the t facial feature points comprise t1 eyelid feature points representing the eye positions, t2 eyeball feature points, and t3 lip feature points representing the lip position.
  • In each face image of the first sample library, the t1 eyelid feature points, t2 eyeball feature points, and t3 lip feature points are marked manually; the (t1+t2+t3) feature points of each face image form a shape feature vector S, yielding n shape feature vectors S of the face.
  • The face feature recognition model is trained with the t facial feature points to obtain the face average model.
  • The face feature recognition model is the ERT algorithm, expressed as follows:
  • $\hat{S}^{(t+1)} = \hat{S}^{(t)} + \tau_t\left(I, \hat{S}^{(t)}\right)$
  • where t is the cascade level index and τ_t(·,·) is the regressor of the current level. Each regressor consists of many regression trees, and the purpose of training is to obtain these regression trees.
  • Here Ŝ(t) is the shape estimate of the current model; each regressor τ_t(·,·) predicts an increment τ_t(I, Ŝ(t)) from the input image I and Ŝ(t), and this increment is added to the current shape estimate to refine the current model. Each level of the regressor makes its prediction on the basis of the feature points.
  • The training data set is (I1, S1), ..., (In, Sn), where I is an input sample image and S is the shape feature vector formed by the feature points in that sample image.
  • In the process of model training, for the n face images in the first sample library, suppose each sample picture has 34 feature points; in the shape feature vector, x1 to x12 denote the abscissas of the eyelid feature points, x13 to x14 the abscissas of the eyeball feature points, and x15 to x34 the abscissas of the lip feature points.
  • Some of the feature points of all sample images are taken (for example, 25 of the 34 feature points of each sample picture are chosen at random) to train the first regression tree; the residual between the first tree's prediction and the true value of those feature points (the weighted mean of the 25 points taken from each sample picture) is used to train the second tree, and so on, until the prediction of the Nth tree is close to the true value, which yields all the regression trees of the ERT algorithm.
  • From these regression trees the face average model (mean shape) is obtained, and the model file and the sample library are saved to the memory. Because the training samples are marked with 12 eyelid feature points, 2 eyeball feature points, and 20 lip feature points, the trained face average model can be used to identify 12 eyelid feature points, 2 eyeball feature points, and 20 lip feature points from a face image.
  • After the real-time face image is obtained, the trained face average model is called from the memory, the real-time face image is aligned with the face average model, and a feature extraction algorithm is used to search the real-time face image for the 12 eyelid feature points, 2 eyeball feature points, and 20 lip feature points matching those of the face average model.
  • Step S30: determine an eye region and a lip region according to the position information of the t facial feature points, input the eye region and the lip region into the pre-trained eye classification model and lip classification model of the face, judge the authenticity of the eye region and the lip region, and determine from the judgment results whether the face in the real-time image is occluded.
  • A first number of human-eye positive sample images and a second number of human-eye negative sample images are collected, and the local features of each positive and negative human-eye sample image are extracted.
  • A human-eye positive sample image is an eye sample that contains a human eye; the two-eye portion can be cropped from the face image sample library as an eye sample. A human-eye negative sample image is an image in which the eye region is incomplete. The positive and negative human-eye sample images form a second sample library.
  • A third number of lip positive sample images and a fourth number of lip negative sample images are collected, and the local features of each lip positive and negative sample image are extracted.
  • A lip positive sample image is an image that contains human lips; the lip portion can be cropped from the face image sample library as a lip positive sample image.
  • A lip negative sample image is an image in which the person's lip region is incomplete, or whose lips are not human lips (for example, an animal's). The lip positive and negative sample images form a third sample library.
  • The local feature is the Histogram of Oriented Gradients (HOG) feature, extracted from the human-eye sample images and lip sample images by a feature extraction algorithm. Because the color information in a sample image contributes little, the image is usually converted to grayscale and normalized as a whole; the horizontal and vertical gradients of the image are computed, and from them the gradient orientation of each pixel position, which captures contours, silhouettes, and some texture information while further weakening the influence of illumination. The image is then divided into cells (8×8 pixels), and a histogram of gradient orientations is built for each cell to quantize the local image gradient information, giving the feature descriptor of each local image region; the cells are then grouped into larger blocks for normalization.
  • The support vector machine classifier is trained with the positive and negative sample images in the second sample library and the third sample library and their extracted HOG features, to obtain the eye classification model of the face and the lip classification model of the face.
  • In the real-time face image, an eye region can be determined from the 12 eyelid feature points and 2 eyeball feature points, and a lip region from the 20 lip feature points; the determined eye region and lip region are then input into the trained eye classification model and lip classification model of the face, and their authenticity is judged from the models' outputs.
  • The judged authenticity of the eye region and the lip region, that is, the models' outputs, may be all false, all true, or a mixture of true and false.
  • Step S40: determine whether the judgment results of the eye classification model and the lip classification model of the face for the eye region and the lip region are both true.
  • That is, when the eye classification model and the lip classification model of the face output their results, check whether the results contain only "true".
  • Step S50: when the judgment results of the eye classification model and the lip classification model of the face for the eye region and the lip region are both true, determine that the face in the real-time face image is not occluded. That is, when the eye region and the lip region determined from the facial feature points are a genuine human eye region and a genuine human lip region, the face in the real-time face image is considered unoccluded.
  • Step S60: when the judgment results of the eye classification model and the lip classification model of the face for the eye region and the lip region include a non-true result, prompt that the face in the real-time face image is occluded.
  • In that case the face in the real-time face image is considered occluded, and a prompt is issued that the face in the real-time face image is occluded. Specifically, when the output of the eye classification model of the face is false, the eye region in the image is considered occluded; when the output of the lip classification model of the face is false, the lip region in the image is considered occluded, and a corresponding prompt is issued.
  • In other embodiments, step S50 is further followed by performing face recognition on the unoccluded real-time face image.
  • The face occlusion detection method proposed in this embodiment uses the face average model to identify the key facial feature points in the real-time face image, analyzes the eye region and lip region determined by those feature points with the eye classification model and lip classification model of the face, and quickly detects, from the authenticity of the two regions, whether the face in the current image is occluded.
  • In addition, an embodiment of the present application further provides a computer-readable storage medium that includes a face occlusion detection program; when the face occlusion detection program is executed by a processor, the following operations are implemented:
  • Image acquisition step: acquire a real-time image captured by the camera device and extract a real-time face image from it using a face recognition algorithm;
  • Feature point recognition step: input the real-time face image into a pre-trained face average model and use the model to identify t facial feature points from the real-time face image; and
  • Feature region judgment step: determine an eye region and a lip region according to the position information of the t facial feature points, input the eye region and the lip region into the pre-trained eye classification model and lip classification model of the face, judge the authenticity of the eye region and the lip region, and determine from the judgment results whether the face in the real-time image is occluded.
  • Optionally, the judgment step is: determine whether the judgment results of the eye classification model and the lip classification model of the face for the eye region and the lip region are both true.
  • Optionally, the training steps of the eye classification model and the lip classification model of the face include:
  • collecting human-eye positive sample images and human-eye negative sample images and extracting their local features, and training a support vector machine classifier with the human-eye positive sample images, the human-eye negative sample images, and their local features to obtain the eye classification model of the face; and
  • training the support vector machine classifier with the lip positive sample images, the lip negative sample images, and their local features to obtain the lip classification model of the face.
  • Optionally, the training step of the face average model includes:
  • establishing a first sample library of n face images and marking t facial feature points in each face image, the t facial feature points comprising t1 eyelid feature points representing the eye positions, t2 eyeball feature points, and t3 lip feature points representing the lip position; and
  • training the face feature recognition model with the t facial feature points to obtain the face average model.
  • The foregoing embodiments may be embodied in a storage medium (such as a disk) that includes a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the various embodiments of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A method for detecting whether a face is blocked, an electronic device, and a computer-readable storage medium. The method comprises: acquiring a real-time image captured by a camera device and extracting a real-time face image from the real-time image (S10); inputting the real-time face image into a face average model and identifying the facial feature points in the real-time face image (S20); determining an eye region and a lip region according to the position information of the facial feature points and inputting them into a pre-trained eye classification model and lip classification model of the face (S30); determining the authenticity of the eye region and the lip region (S40); and determining, according to the determination result, whether the face in the real-time image is blocked (S50, S60). The method can quickly determine whether the face in a face image is blocked.

Description

Face occlusion detection method, device and storage medium

Priority claim

The present application claims priority under the Paris Convention to Chinese Patent Application No. CN 201710707944.6, filed on August 17, 2017 and entitled "Face occlusion detection method, device and storage medium", the entire content of which is incorporated herein by reference.

Technical field

The present application relates to the field of computer vision processing technologies, and in particular to a face occlusion detection method, a device, and a computer-readable storage medium.

Background

Face recognition is a biometric technology that performs identity authentication based on human facial feature information. An image or video stream containing a face is collected, the face is detected and tracked in the image, and the detected face is then matched and recognized. At present, face recognition is applied very widely and plays an important role in many fields such as financial payment, access control and attendance, and identification, bringing great convenience to people's lives. However, it is essential to ensure that the face is not occluded, so whether the face in the image is occluded must be detected before face recognition is performed.

Typical products in the industry judge face occlusion by deep-learning-based training, but this approach demands a large sample size, and predicting occlusion with deep learning is computationally expensive and relatively slow.
Summary of the invention

The present application provides a face occlusion detection method, a device, and a computer-readable storage medium, whose main purpose is to quickly detect face occlusion in a real-time face image.

To achieve the above object, the present application provides an electronic device comprising a memory, a processor, and a camera device, wherein the memory stores a face occlusion detection program which, when executed by the processor, implements the following steps:

Image acquisition step: acquire a real-time image captured by the camera device and extract a real-time face image from it using a face recognition algorithm;

Feature point recognition step: input the real-time face image into a pre-trained face average model and use the model to identify t facial feature points from the real-time face image; and

Feature region judgment step: determine an eye region and a lip region according to the position information of the t facial feature points, input the eye region and the lip region into the pre-trained eye classification model and lip classification model of the face, judge the authenticity of the eye region and the lip region, and determine from the judgment results whether the face in the real-time image is occluded.

Preferably, when the face occlusion detection program is executed by the processor, the following step is further implemented:

Judgment step: determine whether the judgment results of the eye classification model and the lip classification model of the face for the eye region and the lip region are both true.

Preferably, when the face occlusion detection program is executed by the processor, the following steps are further implemented:

when the judgment results of the eye classification model and the lip classification model of the face for the eye region and the lip region are both true, determine that the face in the real-time face image is not occluded; and

when the judgment results of the eye classification model and the lip classification model of the face for the eye region and the lip region include a non-true result, prompt that the face in the real-time face image is occluded.

In addition, to achieve the above object, the present application further provides a face occlusion detection method, the method comprising:

Image acquisition step: acquire a real-time image captured by the camera device and extract a real-time face image from it using a face recognition algorithm;

Feature point recognition step: input the real-time face image into a pre-trained face average model and use the model to identify t facial feature points from the real-time face image; and

Feature region judgment step: determine an eye region and a lip region according to the position information of the t facial feature points, input the eye region and the lip region into the pre-trained eye classification model and lip classification model of the face, judge the authenticity of the eye region and the lip region, and determine from the judgment results whether the face in the real-time image is occluded.

Preferably, the method further comprises:

Judgment step: determine whether the judgment results of the eye classification model and the lip classification model of the face for the eye region and the lip region are both true.

Preferably, the method further comprises:

when the judgment results of the eye classification model and the lip classification model of the face for the eye region and the lip region are both true, determining that the face in the real-time face image is not occluded; and

when the judgment results of the eye classification model and the lip classification model of the face for the eye region and the lip region include a non-true result, prompting that the face in the real-time face image is occluded.

In addition, to achieve the above object, the present application further provides a computer-readable storage medium that includes a face occlusion detection program; when the face occlusion detection program is executed by a processor, any step of the face occlusion detection method described above is implemented.

The face occlusion detection method, electronic device, and computer-readable storage medium proposed by the present application input the real-time face image into the face average model to identify the facial feature points in that image, use the eye classification model and lip classification model of the face to judge the authenticity of the eye region and lip region determined by those feature points, and decide from that authenticity whether the face in the real-time face image is occluded.
Brief description of the drawings

FIG. 1 is a schematic diagram of the application environment of a preferred embodiment of the face occlusion detection method of the present application;

FIG. 2 is a schematic block diagram of the face occlusion detection program of FIG. 1;

FIG. 3 is a flowchart of a preferred embodiment of the face occlusion detection method of the present application.

The implementation, functional features, and advantages of the present application will be further described with reference to the embodiments and the accompanying drawings.

Detailed description of the embodiments

It should be understood that the specific embodiments described herein are merely illustrative of the application and are not intended to limit it.
The present application provides a face occlusion detection method applied to an electronic device 1. Referring to FIG. 1, it is a schematic diagram of the application environment of a preferred embodiment of the face occlusion detection method of the present application.

In this embodiment, the electronic device 1 may be a terminal device having a computing function, such as a server, a smartphone, a tablet computer, a portable computer, or a desktop computer.

The electronic device 1 includes a processor 12, a memory 11, a camera device 13, a network interface 14, and a communication bus 15. The camera device 13 is installed in a specific place, such as an office or a monitored area, captures real-time images of targets entering that place, and transmits the captured images to the processor 12 over the network. The network interface 14 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface). The communication bus 15 implements connection and communication between these components.

The memory 11 includes at least one type of readable storage medium, which may be a non-volatile storage medium such as a flash memory, a hard disk, a multimedia card, or a card-type memory. In some embodiments, the readable storage medium may be an internal storage unit of the electronic device 1, such as its hard disk. In other embodiments, it may also be an external memory of the electronic device 1, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card equipped on the electronic device 1.

In this embodiment, the readable storage medium of the memory 11 is generally used to store the face occlusion detection program 10 installed on the electronic device 1, the face image sample library, the human eye sample library, the human lip sample library, and the constructed and trained face average model, eye classification model, and lip classification model of the face. The memory 11 can also be used to temporarily store data that has been output or is about to be output.

The processor 12 may in some embodiments be a Central Processing Unit (CPU), a microprocessor, or another data processing chip, used to run the program code or process the data stored in the memory 11, for example to execute the face occlusion detection program 10.
FIG. 1 shows only the electronic device 1 with components 11-15, but it should be understood that not all of the illustrated components are required; more or fewer components may be implemented instead.

Optionally, the electronic device 1 may further include a user interface, which may include an input unit such as a keyboard, a voice input device such as a microphone or other equipment with a voice recognition function, and a voice output device such as a speaker or earphones; optionally, the user interface may also include a standard wired interface and a wireless interface.

Optionally, the electronic device 1 may further include a display, which may also be called a display screen or a display unit. In some embodiments it may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an Organic Light-Emitting Diode (OLED) touch display, or the like. The display is used to display the information processed in the electronic device 1 and to display a visualized user interface.

Optionally, the electronic device 1 further includes a touch sensor. The area the touch sensor provides for the user's touch operations is called the touch area. The touch sensor described here may be a resistive touch sensor, a capacitive touch sensor, or the like, and includes not only contact-type touch sensors but also proximity-type touch sensors. Furthermore, the touch sensor may be a single sensor or a plurality of sensors arranged, for example, in an array.

In addition, the area of the display of the electronic device 1 may be the same as or different from the area of the touch sensor. Optionally, the display is stacked with the touch sensor to form a touch display, through which the device detects user-triggered touch operations.

Optionally, the electronic device 1 may further include a radio frequency (RF) circuit, sensors, an audio circuit, and the like, which are not described in detail here.
In the device embodiment shown in FIG. 1, the memory 11, as a computer storage medium, may include an operating system and the face occlusion detection program 10; when the processor 12 executes the face occlusion detection program 10 stored in the memory 11, the following steps are implemented:

Image acquisition step: acquire a real-time image captured by the camera device 13 and extract a real-time face image from it using a face recognition algorithm.

When the camera device 13 captures a real-time image, it sends the image to the processor 12. When the processor 12 receives the real-time image, it first obtains the image size and creates a grayscale image of the same size; converts the acquired color image into the grayscale image while allocating memory; equalizes the grayscale histogram to reduce the amount of grayscale information and speed up detection; then loads the training library, detects the face in the picture, and returns an object containing the face information; obtains the face location data and records the number of faces; and finally extracts and saves the face region, completing one real-time face image extraction.

Specifically, the face recognition algorithm for extracting the real-time face image from the real-time image may be a geometric-feature-based method, a local feature analysis method, an eigenface method, an elastic-model-based method, a neural network method, and so on.

Feature point recognition step: input the real-time face image into a pre-trained face average model and use the model to identify t facial feature points from the real-time face image.

A first sample library of n face images is established, and t facial feature points are marked in each face image; the t facial feature points comprise t1 eyelid feature points representing the eye positions, t2 eyeball feature points, and t3 lip feature points representing the lip position. In each face image of the first sample library, the t1 eyelid feature points, t2 eyeball feature points, and t3 lip feature points are marked manually; the (t1+t2+t3) feature points of each face image form a shape feature vector S, yielding n shape feature vectors S of the face.
The face feature recognition model is trained with the t facial feature points to obtain the face average model. The face feature recognition model is the Ensemble of Regression Trees (ERT) algorithm, formulated as:

$$\hat{S}^{(t+1)} = \hat{S}^{(t)} + \tau_t\left(I, \hat{S}^{(t)}\right)$$

where t is the cascade level index and τ_t(·,·) is the regressor of the current level. Each regressor consists of many regression trees, and the purpose of training is to obtain these regression trees.
Here Ŝ(t) is the shape estimate of the current model; each regressor τ_t(·,·) predicts an increment τ_t(I, Ŝ(t)) from the input image I and Ŝ(t), and this increment is added to the current shape estimate to refine the current model. Each level of the regressor makes its prediction on the basis of the feature points. The training data set is (I1, S1), ..., (In, Sn), where I is an input sample image and S is the shape feature vector formed by the feature points in that sample image.
During model training, with the n face images of the first sample library, suppose each sample picture has 34 feature points, giving the feature vector

S = (x1, x2, ..., x34, y1, y2, ..., y34)

where x1 to x12 are the abscissas of the eye-socket feature points, x13 to x14 the abscissas of the eyeball feature points, and x15 to x34 the abscissas of the lip feature points. A subset of the feature points of all sample pictures (for example, 25 feature points taken at random from the 34 of each sample picture) is used to train the first regression tree; the residual between the prediction of the first tree and the true values of that subset (the weighted average of the 25 feature points taken from each sample picture) is used to train the second tree, and so on, until the residual between the prediction of the Nth tree and the true values approaches 0. This yields all the regression trees of the ERT algorithm, from which the facial average model (mean shape) is obtained; the model file and the sample library are saved to the memory. Because the training samples are marked with 12 eye-socket feature points, 2 eyeball feature points and 20 lip feature points, the trained facial average model can be used to identify 12 eye-socket feature points, 2 eyeball feature points and 20 lip feature points in a face image.
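For illustration only, the residual-fitting loop described above could be sketched as follows; the features X, tree depth and learning rate are assumptions, and production ERT implementations fit trees on pixel-difference features indexed relative to the current shape estimate rather than on fixed per-image features:

```python
# Toy sketch of cascaded residual fitting, the core idea behind ERT training.
# X and the hyperparameters are assumptions, not the patent's values.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def train_cascade(X, S_true, n_trees=10, lr=0.1):
    """X: (n, d) per-image features; S_true: (n, 68) flattened 34-point shapes."""
    mean_shape = S_true.mean(axis=0)                   # the facial average model (mean shape)
    S_est = np.tile(mean_shape, (len(X), 1))           # start every image at the mean shape
    trees = []
    for _ in range(n_trees):
        residual = S_true - S_est                      # what the cascade still gets wrong
        tree = DecisionTreeRegressor(max_depth=4).fit(X, residual)
        S_est = S_est + lr * tree.predict(X)           # add the predicted increment
        trees.append(tree)
    return mean_shape, trees
```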
After the real-time facial image is obtained, the trained facial average model is loaded from the memory 11, the real-time facial image is aligned with the facial average model, and a feature extraction algorithm is used to search the real-time facial image for the 12 eye-socket feature points, 2 eyeball feature points and 20 lip feature points that match those of the facial average model. The 20 lip feature points are evenly distributed over the lips.
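By way of example, if dlib's shape_predictor (one ERT implementation) were used for this alignment-and-search step, the runtime call might look as below; the 34-point model file name is hypothetical:

```python
# Hedged sketch: dlib's shape_predictor is an ERT implementation; the
# "face_mean_model_34.dat" file trained as described above is hypothetical.
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("face_mean_model_34.dat")

def landmarks(gray_face):
    """Return the 34 (x, y) feature points found in a grayscale face image."""
    rects = detector(gray_face, 1)             # upsample once while detecting
    if not rects:
        return None
    shape = predictor(gray_face, rects[0])     # fit the mean shape to this face
    return [(p.x, p.y) for p in shape.parts()]
```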
Feature region determination step: the eye region and the lip region are determined from the position information of the t facial feature points; the eye region and the lip region are input into the pre-trained eye classification model and lip classification model of the face; the authenticity of the eye region and the lip region is judged, and whether the face in the real-time image is occluded is determined from the judgment result.
A first number of positive human-eye sample images and a second number of negative human-eye sample images are collected, and the local features of each positive and negative human-eye sample image are extracted. A positive human-eye sample image is an image containing a human eye; the two-eye portions can be cut out of a face image sample library as eye samples. A negative human-eye sample image is an image in which the eye region is incomplete. The positive and negative human-eye sample images form a second sample library.
A third number of positive lip sample images and a fourth number of negative lip sample images are collected, and the local features of each positive and negative lip sample image are extracted. A positive lip sample image is an image containing human lips; the lip portion can be cut out of the face image sample library as a positive lip sample. A negative lip sample image is an image in which the person's lip region is incomplete, or in which the lips are not human (for example, an animal's). The positive and negative lip sample images form a third sample library.
Specifically, the local feature is a Histogram of Oriented Gradients (HOG) feature, extracted from the human-eye sample images and the lip sample images by a feature extraction algorithm. Because color information contributes little in the sample images, each image is usually converted to grayscale and normalized as a whole; the gradients along the horizontal and vertical axes of the image are computed, and from them the gradient orientation at each pixel, which captures contours, silhouettes and some texture information while further weakening the influence of illumination. The image is then divided into cells (8*8 pixels), and a histogram of gradient orientations is built for each cell to accumulate and quantize the local gradient information, giving a feature descriptor for each local image region. The cells are next grouped into larger blocks; because local illumination and foreground-background contrast vary widely, the gradient magnitudes span a very large range, so they are normalized within each block, further compressing illumination, shadows and edges. Finally, the HOG descriptors of all blocks are concatenated to form the final HOG feature vector.
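A minimal sketch of this HOG pipeline using scikit-image; the 8*8 cells follow the text, while the 64x64 window and the 2x2 block layout are assumptions:

```python
# Sketch of the HOG extraction described above; window size is an assumption.
import cv2
from skimage.feature import hog

def hog_vector(region_bgr):
    gray = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2GRAY)  # color contributes little
    gray = cv2.resize(gray, (64, 64))                    # normalize the whole image
    return hog(gray,
               orientations=9,                           # quantized gradient directions
               pixels_per_cell=(8, 8),                   # the 8*8 cell units in the text
               cells_per_block=(2, 2),                   # group cells into blocks
               block_norm="L2-Hys")                      # normalize gradient strength
```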
A support vector machine (SVM) classifier is trained with the positive and negative sample images of the second and third sample libraries above and the extracted HOG features, yielding the eye classification model and the lip classification model of the face.
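The training of the two classifiers could be sketched as below with scikit-learn, reusing the hog_vector helper from the previous sketch; the C parameter is an assumption, and the lip model is trained the same way on the third sample library:

```python
# Sketch of SVM training on HOG features; builds on hog_vector() above.
import numpy as np
from sklearn.svm import LinearSVC

def train_classifier(pos_images, neg_images):
    """Fit a linear SVM: label 1 = real eye/lip region, 0 = occluded/non-human."""
    X = np.array([hog_vector(img) for img in pos_images + neg_images])
    y = np.array([1] * len(pos_images) + [0] * len(neg_images))
    return LinearSVC(C=1.0).fit(X, y)

# eye_model = train_classifier(eye_pos, eye_neg)   # second sample library
# lip_model = train_classifier(lip_pos, lip_neg)   # third sample library
```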
Once the 12 eye-socket feature points, 2 eyeball feature points and 20 lip feature points have been identified in the real-time facial image, an eye region can be determined from the 12 eye-socket and 2 eyeball feature points, and a lip region from the 20 lip feature points. The determined eye region and lip region are then input into the trained eye classification model and lip classification model of the face, and their authenticity is judged from the model outputs; that is, the outputs may be all false, all true, or a mixture of true and false. When both the eye classification model and the lip classification model output false, the eye region and the lip region are not a human eye region and a human lip region; when both output true, the eye region and the lip region are a human eye region and a human lip region.
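To illustrate this step, the sketch below crops the two regions from the landmark list (the indices follow the 12 + 2 + 20 layout above; the margin and helper names are assumptions) and queries the two classifiers:

```python
# Hedged sketch: build eye/lip crops from the 34 points and classify them.
import cv2
import numpy as np

def crop(img, pts, margin=5):
    x, y, w, h = cv2.boundingRect(pts.astype(np.int32))
    return img[max(y - margin, 0):y + h + margin,
               max(x - margin, 0):x + w + margin]

def check_regions(face_bgr, pts, eye_model, lip_model):
    pts = np.array(pts)
    eye_crop = crop(face_bgr, pts[:14])     # 12 eye-socket + 2 eyeball points
    lip_crop = crop(face_bgr, pts[14:])     # 20 lip points
    eye_true = eye_model.predict([hog_vector(eye_crop)])[0] == 1
    lip_true = lip_model.predict([hog_vector(lip_crop)])[0] == 1
    return eye_true, lip_true               # all true => no occlusion
```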
Judgment step: it is judged whether the results of the eye classification model and the lip classification model of the face for the eye region and the lip region are both true; that is, after the two models output their results, whether the results contain only true.
When the results of the eye classification model and the lip classification model of the face for the eye region and the lip region are both true, it is judged that the face in the real-time facial image is not occluded. That is, when the eye region and the lip region determined from the facial feature points are a human eye region and a human lip region respectively, the face in the real-time facial image is considered unoccluded.
When the results of the eye classification model and the lip classification model of the face for the eye region and the lip region include a false result, a prompt is issued that the face in the real-time facial image is occluded. If either the eye region or the lip region determined from the facial feature points is not a human eye region or a human lip region, the face in the real-time facial image is considered occluded, and a prompt to that effect is issued.
Further, when the eye classification model of the face outputs false, the eye region in the image is considered occluded; when the lip classification model of the face outputs false, the lip region in the image is considered occluded; a corresponding prompt is issued in each case.
In other embodiments, if face recognition follows the occlusion check, then when the face in the real-time facial image is occluded, a prompt is issued that the face in the current facial image is occluded, a new real-time image captured by the camera device 13 is acquired, and the subsequent steps are performed again.
The electronic device 1 of this embodiment extracts a real-time facial image from the real-time image, identifies the facial feature points in that image with the facial average model, analyzes the eye region and the lip region determined by those feature points with the eye classification model and the lip classification model of the face, and quickly judges from the authenticity of the two regions whether the face in the current image is occluded.
In other embodiments, the face occlusion detection program 10 may also be divided into one or more modules, which are stored in the memory 11 and executed by the processor 12 to complete the present application. A module in this application refers to a series of computer program instruction segments capable of performing a particular function. FIG. 2 is a block diagram of the face occlusion detection program 10 of FIG. 1.
In this embodiment, the face occlusion detection program 10 may be divided into an acquisition module 110, an identification module 120, a judgment module 130 and a prompting module 140. The functions or operation steps implemented by modules 110 to 140 are similar to those described above and are not detailed again here; by way of example:
The acquisition module 110 is configured to acquire a real-time image captured by the camera device 13 and extract a real-time facial image from the real-time image using a face recognition algorithm;
The identification module 120 is configured to input the real-time facial image into the pre-trained facial average model and identify t facial feature points from the real-time facial image using the facial average model;
The judgment module 130 is configured to determine the eye region and the lip region from the position information of the t facial feature points, input the eye region and the lip region into the pre-trained eye classification model and lip classification model of the face, judge the authenticity of the eye region and the lip region, and determine from the judgment result whether the face in the real-time image is occluded; and
The prompting module 140 is configured to prompt that the face in the real-time facial image is occluded when the judgment results of the eye classification model and the lip classification model of the face for the eye region and the lip region include a false result.
In addition, the present application also provides a face occlusion detection method. FIG. 3 is a flowchart of the first embodiment of the face occlusion detection method of the present application. The method may be performed by an apparatus, which may be implemented in software and/or hardware.
In this embodiment, the face occlusion detection method includes:
Step S10: a real-time image captured by the camera device is acquired, and a real-time facial image is extracted from it using a face recognition algorithm. When the camera device captures a real-time image, it sends the image to the processor. On receiving the real-time image, the processor first obtains the size of the picture and creates a grayscale image of the same size; converts the acquired color image into that grayscale image while allocating a memory space; equalizes the histogram of the grayscale image, which reduces the amount of grayscale information and speeds up detection; then loads the training library, detects the faces in the picture, returns an object containing the face information, obtains the data of the position of each face, and records their number; and finally obtains the face region and saves it, completing one round of real-time facial image extraction.
Specifically, the face recognition algorithm used to extract the real-time facial image from the real-time image may be a geometric-feature-based method, a local feature analysis method, an eigenface method, an elastic-model-based method, a neural network method, and so on.
Step S20: the real-time facial image is input into the pre-trained facial average model, and t facial feature points are identified from the real-time facial image using the facial average model.
A first sample library of n face images is built, and t facial feature points are marked in each face image. The t facial feature points include t1 eye-socket feature points and t2 eyeball feature points representing the eye positions, and t3 lip feature points representing the lip position. In each face image of the first sample library, the t1 eye-socket feature points, t2 eyeball feature points and t3 lip feature points are marked manually; the (t1 + t2 + t3) feature points of each face image form a shape feature vector S, yielding n facial shape feature vectors S.
The face feature recognition model is trained with the t facial feature points to obtain the facial average model. The face feature recognition model is the ERT algorithm, expressed by the formula:

S(t+1) = S(t) + τt(I, S(t))

where t denotes the cascade index and τt(·,·) denotes the regressor of the current stage. Each regressor consists of many regression trees, and the purpose of training is to obtain these regression trees.
Here S(t) is the shape estimate of the current model. Each regressor τt(·,·) predicts an increment τt(I, S(t)) from the input image I and S(t), and this increment is added to the current shape estimate to improve the current model; each stage of the regressor makes its prediction from the feature points. The training data set is (I1, S1), ..., (In, Sn), where I is an input sample image and S is the shape feature vector formed by the feature points in that sample image.
During model training, with the n face images of the first sample library, suppose each sample picture has 34 feature points, giving the feature vector

S = (x1, x2, ..., x34, y1, y2, ..., y34)

where x1 to x12 are the abscissas of the eye-socket feature points, x13 to x14 the abscissas of the eyeball feature points, and x15 to x34 the abscissas of the lip feature points. A subset of the feature points of all sample pictures (for example, 25 feature points taken at random from the 34 of each sample picture) is used to train the first regression tree; the residual between the prediction of the first tree and the true values of that subset (the weighted average of the 25 feature points taken from each sample picture) is used to train the second tree, and so on, until the residual between the prediction of the Nth tree and the true values approaches 0. This yields all the regression trees of the ERT algorithm, from which the facial average model (mean shape) is obtained; the model file and the sample library are saved to the memory. Because the training samples are marked with 12 eye-socket feature points, 2 eyeball feature points and 20 lip feature points, the trained facial average model can be used to identify 12 eye-socket feature points, 2 eyeball feature points and 20 lip feature points in a face image.
After the real-time facial image is obtained, the trained facial average model is loaded from the memory, the real-time facial image is aligned with the facial average model, and a feature extraction algorithm is used to search the real-time facial image for the 12 eye-socket feature points, 2 eyeball feature points and 20 lip feature points that match those of the facial average model. The 20 lip feature points are evenly distributed over the lips.
Step S30: the eye region and the lip region are determined from the position information of the t facial feature points; the eye region and the lip region are input into the pre-trained eye classification model and lip classification model of the face; the authenticity of the eye region and the lip region is judged, and whether the face in the real-time image is occluded is determined from the judgment result.
A first number of positive human-eye sample images and a second number of negative human-eye sample images are collected, and the local features of each positive and negative human-eye sample image are extracted. A positive human-eye sample image is an image containing a human eye; the two-eye portions can be cut out of a face image sample library as eye samples. A negative human-eye sample image is an image in which the eye region is incomplete. The positive and negative human-eye sample images form a second sample library.
A third number of positive lip sample images and a fourth number of negative lip sample images are collected, and the local features of each positive and negative lip sample image are extracted. A positive lip sample image is an image containing human lips; the lip portion can be cut out of the face image sample library as a positive lip sample. A negative lip sample image is an image in which the person's lip region is incomplete, or in which the lips are not human (for example, an animal's). The positive and negative lip sample images form a third sample library.
Specifically, the local feature is a Histogram of Oriented Gradients (HOG) feature, extracted from the human-eye sample images and the lip sample images by a feature extraction algorithm. Because color information contributes little in the sample images, each image is usually converted to grayscale and normalized as a whole; the gradients along the horizontal and vertical axes of the image are computed, and from them the gradient orientation at each pixel, which captures contours, silhouettes and some texture information while further weakening the influence of illumination. The image is then divided into cells (8*8 pixels), and a histogram of gradient orientations is built for each cell to accumulate and quantize the local gradient information, giving a feature descriptor for each local image region. The cells are next grouped into larger blocks; because local illumination and foreground-background contrast vary widely, the gradient magnitudes span a very large range, so they are normalized within each block, further compressing illumination, shadows and edges. Finally, the HOG descriptors of all blocks are concatenated to form the final HOG feature vector.
A support vector machine classifier is trained with the positive and negative sample images of the second and third sample libraries above and the extracted HOG features, yielding the eye classification model and the lip classification model of the face.
Once the 12 eye-socket feature points, 2 eyeball feature points and 20 lip feature points have been identified in the real-time facial image, an eye region can be determined from the 12 eye-socket and 2 eyeball feature points, and a lip region from the 20 lip feature points. The determined eye region and lip region are then input into the trained eye classification model and lip classification model of the face, and their authenticity is judged from the model outputs; that is, the outputs may be all false, all true, or a mixture of true and false. When both models output false, the eye region and the lip region are not a human eye region and a human lip region; when both output true, the eye region and the lip region are a human eye region and a human lip region.
Step S40: it is judged whether the results of the eye classification model and the lip classification model of the face for the eye region and the lip region are both true; that is, after the two models output their results, whether the results contain only true.
Step S50: when the results of the eye classification model and the lip classification model of the face for the eye region and the lip region are both true, it is judged that the face in the real-time facial image is not occluded. That is, when the eye region and the lip region determined from the facial feature points are a human eye region and a human lip region respectively, the face in the real-time facial image is considered unoccluded.
Step S60: when the results of the eye classification model and the lip classification model of the face for the eye region and the lip region include a false result, a prompt is issued that the face in the real-time facial image is occluded. If either the eye region or the lip region determined from the facial feature points is not a human eye region or a human lip region, the face in the real-time facial image is considered occluded, and a prompt to that effect is issued.
Further, when the eye classification model of the face outputs false, the eye region in the image is considered occluded; when the lip classification model of the face outputs false, the lip region in the image is considered occluded; a corresponding prompt is issued in each case.
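Steps S40 to S60 amount to the small decision function sketched below; the prompt strings are assumptions:

```python
# Sketch of the S40-S60 decision: all true means no occlusion; any false
# names the occluded region. Message wording is an assumption.
def occlusion_verdict(eye_true, lip_true):
    if eye_true and lip_true:
        return "face not occluded"
    parts = []
    if not eye_true:
        parts.append("eye region occluded")
    if not lip_true:
        parts.append("lip region occluded")
    return "; ".join(parts)
```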
In other embodiments, if face recognition follows the occlusion check, then when the face in the real-time facial image is occluded, step S50 further includes: prompting that the face in the current facial image is occluded, reacquiring a real-time image captured by the camera device, and performing the subsequent steps.
The face occlusion detection method of this embodiment identifies the key facial feature points in the real-time facial image with the facial average model, analyzes the eye region and the lip region determined by those feature points with the eye classification model and the lip classification model of the face, judges from the authenticity of the two regions whether the face in the current image is occluded, and thus quickly detects face occlusion in real-time facial images.
In addition, an embodiment of the present application further provides a computer-readable storage medium, which includes a face occlusion detection program; when the face occlusion detection program is executed by a processor, the following operations are implemented:
Image acquisition step: acquiring a real-time image captured by the camera device, and extracting a real-time facial image from the real-time image using a face recognition algorithm;
Feature point identification step: inputting the real-time facial image into the pre-trained facial average model, and identifying t facial feature points from the real-time facial image using the facial average model; and
Feature region determination step: determining the eye region and the lip region from the position information of the t facial feature points, inputting the eye region and the lip region into the pre-trained eye classification model and lip classification model of the face, judging the authenticity of the eye region and the lip region, and determining from the judgment result whether the face in the real-time image is occluded.
Optionally, when the face occlusion detection program is executed by the processor, the following operation is also implemented:
Judgment step: judging whether the results of the eye classification model and the lip classification model of the face for the eye region and the lip region are both true.
Optionally, when the face occlusion detection program is executed by the processor, the following operations are also implemented:
when the results of the eye classification model and the lip classification model of the face for the eye region and the lip region are both true, judging that the face in the real-time facial image is not occluded; and
when the results of the eye classification model and the lip classification model of the face for the eye region and the lip region include a false result, prompting that the face in the real-time facial image is occluded.
Optionally, the training steps of the eye classification model and the lip classification model of the face include:
collecting a first number of positive human-eye sample images and a second number of negative human-eye sample images, and extracting the local features of each positive and negative human-eye sample image;
training a support vector machine classifier with the positive human-eye sample images, the negative human-eye sample images and their local features to obtain the eye classification model of the face;
collecting a third number of positive lip sample images and a fourth number of negative lip sample images, and extracting the local features of each positive and negative lip sample image; and
training a support vector machine classifier with the positive lip sample images, the negative lip sample images and their local features to obtain the lip classification model of the face.
Optionally, the training step of the facial average model includes:
building a first sample library of n face images and marking t facial feature points in each face image, the t facial feature points including t1 eye-socket feature points and t2 eyeball feature points representing the eye positions, and t3 lip feature points representing the lip position; and
training the face feature recognition model with the t facial feature points to obtain the facial average model.
The specific implementation of the computer-readable storage medium of the present application is substantially the same as that of the face occlusion detection method described above and is not repeated here.
It should be noted that, herein, the terms "include", "comprise" and any variants thereof are intended to cover non-exclusive inclusion, so that a process, apparatus, article or method that comprises a list of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, apparatus, article or method. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, apparatus, article or method that comprises that element.
The serial numbers of the above embodiments of the present application are for description only and do not represent the relative merits of the embodiments. From the description of the above embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware, although in many cases the former is the better implementation. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product stored in a storage medium as described above (such as a ROM/RAM, a magnetic disk or an optical disc), including several instructions for causing a terminal device (which may be a mobile phone, a computer, a server or a network device, etc.) to perform the methods described in the embodiments of the present application.
The above are only preferred embodiments of the present application and do not thereby limit the scope of its patent; any equivalent structure or equivalent process transformation made using the contents of the specification and drawings of the present application, whether applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present application.

Claims (20)

1. An electronic device, characterized in that the device comprises a memory, a processor and a camera device, the memory including a face occlusion detection program which, when executed by the processor, implements the following steps:
    an image acquisition step: acquiring a real-time image captured by the camera device, and extracting a real-time facial image from the real-time image using a face recognition algorithm;
    a feature point identification step: inputting the real-time facial image into a pre-trained facial average model, and identifying t facial feature points from the real-time facial image using the facial average model; and
    a feature region determination step: determining an eye region and a lip region from the position information of the t facial feature points, inputting the eye region and the lip region into a pre-trained eye classification model and lip classification model of the face, judging the authenticity of the eye region and the lip region, and determining from the judgment result whether the face in the real-time image is occluded.
2. The electronic device according to claim 1, characterized in that the face occlusion detection program, when executed by the processor, further implements the following step:
    a judgment step: judging whether the results of the eye classification model and the lip classification model of the face for the eye region and the lip region are both true.
3. The electronic device according to claim 2, characterized in that the face occlusion detection program, when executed by the processor, further implements the following steps:
    when the results of the eye classification model and the lip classification model of the face for the eye region and the lip region are both true, judging that the face in the real-time facial image is not occluded; and
    when the results of the eye classification model and the lip classification model of the face for the eye region and the lip region include a false result, prompting that the face in the real-time facial image is occluded.
4. The electronic device according to claim 3, characterized in that the training steps of the eye classification model and the lip classification model of the face comprise:
    collecting a first number of positive human-eye sample images and a second number of negative human-eye sample images, and extracting the local features of each positive and negative human-eye sample image;
    training a support vector machine classifier with the positive human-eye sample images, the negative human-eye sample images and their local features to obtain the eye classification model of the face;
    collecting a third number of positive lip sample images and a fourth number of negative lip sample images, and extracting the local features of each positive and negative lip sample image; and
    training a support vector machine classifier with the positive lip sample images, the negative lip sample images and their local features to obtain the lip classification model of the face.
5. The electronic device according to claim 1, characterized in that the training step of the facial average model comprises:
    building a first sample library of n face images and marking t facial feature points in each face image, the t facial feature points including t1 eye-socket feature points and t2 eyeball feature points representing the eye positions, and t3 lip feature points representing the lip position; and
    training a face feature recognition model with the t facial feature points to obtain the facial average model.
6. The electronic device according to claim 5, characterized in that the face feature recognition model is the ERT algorithm, expressed by the formula:
    S(t+1) = S(t) + τt(I, S(t))
    where t denotes the cascade index, τt(·,·) denotes the regressor of the current stage, S(t) is the shape estimate of the current model, and each regressor τt(·,·) predicts an increment τt(I, S(t)) from the current input image I and S(t); during model training, a subset of the feature points of all sample pictures is used to train the first regression tree, the residual between the prediction of the first tree and the true values of that subset of feature points is used to train the second tree, and so on, until the residual between the prediction of the Nth tree and the true values of those feature points approaches 0, yielding all the regression trees of the ERT algorithm, from which the facial average model is obtained.
7. The electronic device according to claim 1, characterized in that the face recognition algorithm comprises: a geometric-feature-based method, a local feature analysis method, an eigenface method, an elastic-model-based method, and a neural network method.
8. A face occlusion detection method, applied to an electronic device, characterized in that the method comprises:
    an image acquisition step: acquiring a real-time image captured by the camera device, and extracting a real-time facial image from the real-time image using a face recognition algorithm;
    a feature point identification step: inputting the real-time facial image into a pre-trained facial average model, and identifying t facial feature points from the real-time facial image using the facial average model; and
    a feature region determination step: determining an eye region and a lip region from the position information of the t facial feature points, inputting the eye region and the lip region into a pre-trained eye classification model and lip classification model of the face, judging the authenticity of the eye region and the lip region, and determining from the judgment result whether the face in the real-time image is occluded.
9. The face occlusion detection method according to claim 8, characterized in that the method further comprises:
    a judgment step: judging whether the results of the eye classification model and the lip classification model of the face for the eye region and the lip region are both true.
10. The face occlusion detection method according to claim 9, characterized in that the method further comprises:
    when the results of the eye classification model and the lip classification model of the face for the eye region and the lip region are both true, judging that the face in the real-time facial image is not occluded; and
    when the results of the eye classification model and the lip classification model of the face for the eye region and the lip region include a false result, prompting that the face in the real-time facial image is occluded.
11. The face occlusion detection method according to claim 10, characterized in that the training steps of the eye classification model and the lip classification model of the face comprise:
    collecting a first number of positive human-eye sample images and a second number of negative human-eye sample images, and extracting the local features of each positive and negative human-eye sample image;
    training a support vector machine classifier with the positive human-eye sample images, the negative human-eye sample images and their local features to obtain the eye classification model of the face;
    collecting a third number of positive lip sample images and a fourth number of negative lip sample images, and extracting the local features of each positive and negative lip sample image; and
    training a support vector machine classifier with the positive lip sample images, the negative lip sample images and their local features to obtain the lip classification model of the face.
12. The face occlusion detection method according to claim 8, characterized in that the training step of the facial average model comprises:
    building a first sample library of n face images and marking t facial feature points in each face image, the t facial feature points including t1 eye-socket feature points and t2 eyeball feature points representing the eye positions, and t3 lip feature points representing the lip position; and
    training a face feature recognition model with the t facial feature points to obtain the facial average model.
13. The face occlusion detection method according to claim 12, characterized in that the face feature recognition model is the ERT algorithm, expressed by the formula:
    S(t+1) = S(t) + τt(I, S(t))
    where t denotes the cascade index, τt(·,·) denotes the regressor of the current stage, S(t) is the shape estimate of the current model, and each regressor τt(·,·) predicts an increment τt(I, S(t)) from the current input image I and S(t); during model training, a subset of the feature points of all sample pictures is used to train the first regression tree, the residual between the prediction of the first tree and the true values of that subset of feature points is used to train the second tree, and so on, until the residual between the prediction of the Nth tree and the true values of those feature points approaches 0, yielding all the regression trees of the ERT algorithm, from which the facial average model is obtained.
14. The face occlusion detection method according to claim 8, characterized in that the face recognition algorithm comprises: a geometric-feature-based method, a local feature analysis method, an eigenface method, an elastic-model-based method, and a neural network method.
15. A computer-readable storage medium, characterized in that the computer-readable storage medium includes a face occlusion detection program which, when executed by a processor, implements the following steps:
    an image acquisition step: acquiring a real-time image captured by the camera device, and extracting a real-time facial image from the real-time image using a face recognition algorithm;
    a feature point identification step: inputting the real-time facial image into a pre-trained facial average model, and identifying t facial feature points from the real-time facial image using the facial average model; and
    a feature region determination step: determining an eye region and a lip region from the position information of the t facial feature points, inputting the eye region and the lip region into a pre-trained eye classification model and lip classification model of the face, judging the authenticity of the eye region and the lip region, and determining from the judgment result whether the face in the real-time image is occluded.
16. The computer-readable storage medium according to claim 15, characterized in that the face occlusion detection program, when executed by the processor, further implements the following step:
    a judgment step: judging whether the results of the eye classification model and the lip classification model of the face for the eye region and the lip region are both true.
17. The computer-readable storage medium according to claim 16, characterized in that the face occlusion detection program, when executed by the processor, further implements the following steps:
    when the results of the eye classification model and the lip classification model of the face for the eye region and the lip region are both true, judging that the face in the real-time facial image is not occluded; and
    when the results of the eye classification model and the lip classification model of the face for the eye region and the lip region include a false result, prompting that the face in the real-time facial image is occluded.
18. The computer-readable storage medium according to claim 17, characterized in that the training steps of the eye classification model and the lip classification model of the face comprise:
    collecting a first number of positive human-eye sample images and a second number of negative human-eye sample images, and extracting the local features of each positive and negative human-eye sample image;
    training a support vector machine classifier with the positive human-eye sample images, the negative human-eye sample images and their local features to obtain the eye classification model of the face;
    collecting a third number of positive lip sample images and a fourth number of negative lip sample images, and extracting the local features of each positive and negative lip sample image; and
    training a support vector machine classifier with the positive lip sample images, the negative lip sample images and their local features to obtain the lip classification model of the face.
19. The computer-readable storage medium according to claim 15, characterized in that the training step of the facial average model comprises:
    building a first sample library of n face images and marking t facial feature points in each face image, the t facial feature points including t1 eye-socket feature points and t2 eyeball feature points representing the eye positions, and t3 lip feature points representing the lip position; and
    training a face feature recognition model with the t facial feature points to obtain the facial average model.
20. The computer-readable storage medium according to claim 19, characterized in that the face feature recognition model is the ERT algorithm, expressed by the formula:
    S(t+1) = S(t) + τt(I, S(t))
    where t denotes the cascade index, τt(·,·) denotes the regressor of the current stage, S(t) is the shape estimate of the current model, and each regressor τt(·,·) predicts an increment τt(I, S(t)) from the current input image I and S(t); during model training, a subset of the feature points of all sample pictures is used to train the first regression tree, the residual between the prediction of the first tree and the true values of that subset of feature points is used to train the second tree, and so on, until the residual between the prediction of the Nth tree and the true values of those feature points approaches 0, yielding all the regression trees of the ERT algorithm, from which the facial average model is obtained.
PCT/CN2017/108751 2017-08-17 2017-10-31 Method for detecting whether face is blocked, device and storage medium WO2019033572A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710707944.6A CN107633204B (en) 2017-08-17 2017-08-17 Face occlusion detection method, apparatus and storage medium
CN201710707944.6 2017-08-17

Publications (1)

Publication Number Publication Date
WO2019033572A1 true WO2019033572A1 (en) 2019-02-21

Family

ID=61099639

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/108751 WO2019033572A1 (en) 2017-08-17 2017-10-31 Method for detecting whether face is blocked, device and storage medium

Country Status (2)

Country Link
CN (1) CN107633204B (en)
WO (1) WO2019033572A1 (en)

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108664908A (en) * 2018-04-27 2018-10-16 深圳爱酷智能科技有限公司 Face identification method, equipment and computer readable storage medium
CN110472459B (en) * 2018-05-11 2022-12-27 华为技术有限公司 Method and device for extracting feature points
CN108551552B (en) * 2018-05-14 2020-09-01 Oppo广东移动通信有限公司 Image processing method, device, storage medium and mobile terminal
CN108763897A (en) * 2018-05-22 2018-11-06 平安科技(深圳)有限公司 Identity legitimacy verification method, terminal device and medium
CN108985159A (en) * 2018-06-08 2018-12-11 平安科技(深圳)有限公司 Human-eye model training method, eye recognition method, apparatus, equipment and medium
CN110084191B (en) * 2019-04-26 2024-02-23 广东工业大学 Eye shielding detection method and system
CN110348331B (en) * 2019-06-24 2022-01-14 深圳数联天下智能科技有限公司 Face recognition method and electronic equipment
CN110428399B (en) 2019-07-05 2022-06-14 百度在线网络技术(北京)有限公司 Method, apparatus, device and storage medium for detecting image
CN112183173B (en) * 2019-07-05 2024-04-09 北京字节跳动网络技术有限公司 Image processing method, device and storage medium
CN112929638B (en) * 2019-12-05 2023-12-15 北京芯海视界三维科技有限公司 Eye positioning method and device and multi-view naked eye 3D display method and device
CN111428581B (en) * 2020-03-05 2023-11-21 平安科技(深圳)有限公司 Face shielding detection method and system
CN111598018A (en) * 2020-05-19 2020-08-28 北京嘀嘀无限科技发展有限公司 Wearing detection method, device, equipment and storage medium for face shield
CN111598021B (en) * 2020-05-19 2021-05-28 北京嘀嘀无限科技发展有限公司 Wearing detection method and device for face shield, electronic equipment and storage medium
CN111626193A (en) * 2020-05-26 2020-09-04 北京嘀嘀无限科技发展有限公司 Face recognition method, face recognition device and readable storage medium
CN112418190B (en) * 2021-01-21 2021-04-02 成都点泽智能科技有限公司 Face recognition method, device, system and server for medical protective masks on mobile terminals
CN112949418A (en) * 2021-02-05 2021-06-11 深圳市优必选科技股份有限公司 Method and device for determining speaking object, electronic equipment and storage medium
CN112966654B (en) * 2021-03-29 2023-12-19 深圳市优必选科技股份有限公司 Lip movement detection method, lip movement detection device, terminal equipment and computer readable storage medium
CN113762136A (en) * 2021-09-02 2021-12-07 北京格灵深瞳信息技术股份有限公司 Face image occlusion judgment method and device, electronic equipment and storage medium
CN114462495B (en) * 2021-12-30 2023-04-07 浙江大华技术股份有限公司 Training method of face shielding detection model and related device
CN117275075B (en) * 2023-11-01 2024-02-13 浙江同花顺智能科技有限公司 Face shielding detection method, system, device and storage medium
CN117282038B (en) * 2023-11-22 2024-02-13 杭州般意科技有限公司 Light source adjusting method and device for eye phototherapy device, terminal and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102542246A (en) * 2011-03-29 2012-07-04 广州市浩云安防科技股份有限公司 Abnormal face detection method for ATM (Automatic Teller Machine)
CN105654049B (en) * 2015-12-29 2019-08-16 中国科学院深圳先进技术研究院 Method and device for facial expression recognition
CN106056079B (en) * 2016-05-31 2019-07-05 中国科学院自动化研究所 Occlusion detection method for an image capture device and facial features
CN106295566B (en) * 2016-08-10 2019-07-09 北京小米移动软件有限公司 Facial expression recognition method and device
CN106485215B (en) * 2016-09-29 2020-03-06 西交利物浦大学 Face shielding detection method based on deep convolutional neural network
CN106910176B (en) * 2017-03-02 2019-09-13 中科视拓(北京)科技有限公司 Deep learning-based face image de-occlusion method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102306304A (en) * 2011-03-25 2012-01-04 杜利利 Face occluder identification method and device
CN102270308A (en) * 2011-07-21 2011-12-07 武汉大学 Facial feature localization method based on facial-organ-related AAM (Active Appearance Model)
CN104463172A (en) * 2014-12-09 2015-03-25 中国科学院重庆绿色智能技术研究院 Face feature extraction method based on a face-feature-point shape-driven depth model
CN105868689A (en) * 2016-02-16 2016-08-17 杭州景联文科技有限公司 Cascaded convolutional neural network based human face occlusion detection method

Cited By (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110119674A (en) * 2019-03-27 2019-08-13 深圳和而泰家居在线网络科技有限公司 Cheating detection method, apparatus, computing device and computer storage medium
CN111860047A (en) * 2019-04-26 2020-10-30 美澳视界(厦门)智能科技有限公司 Rapid face recognition method based on deep learning
CN110414394A (en) * 2019-07-16 2019-11-05 公安部第一研究所 Facial occlusion face image reconstruction method and model for face occlusion detection
CN110414394B (en) * 2019-07-16 2022-12-13 公安部第一研究所 Facial occlusion face image reconstruction method and model for face occlusion detection
CN110543823A (en) * 2019-07-30 2019-12-06 平安科技(深圳)有限公司 Pedestrian re-identification method and device based on residual network and computer equipment
CN110543823B (en) * 2019-07-30 2024-03-19 平安科技(深圳)有限公司 Pedestrian re-identification method and device based on residual network and computer equipment
CN111353404A (en) * 2020-02-24 2020-06-30 支付宝实验室(新加坡)有限公司 Face recognition method, device and equipment
CN111353404B (en) * 2020-02-24 2023-12-01 支付宝实验室(新加坡)有限公司 Face recognition method, device and equipment
CN111414879A (en) * 2020-03-26 2020-07-14 北京字节跳动网络技术有限公司 Face shielding degree identification method and device, electronic equipment and readable storage medium
CN111414879B (en) * 2020-03-26 2023-06-09 抖音视界有限公司 Face shielding degree identification method and device, electronic equipment and readable storage medium
CN111489373A (en) * 2020-04-07 2020-08-04 北京工业大学 Occlusion object segmentation method based on deep learning
CN111489373B (en) * 2020-04-07 2023-05-05 北京工业大学 Occlusion object segmentation method based on deep learning
CN111461047A (en) * 2020-04-10 2020-07-28 北京爱笔科技有限公司 Identity recognition method, device, equipment and computer storage medium
CN111486961A (en) * 2020-04-15 2020-08-04 贵州安防工程技术研究中心有限公司 Efficient forehead temperature estimation method based on wide-spectrum human forehead imaging and distance sensing
CN111444887A (en) * 2020-04-30 2020-07-24 北京每日优鲜电子商务有限公司 Mask wearing detection method and device, storage medium and electronic equipment
CN111626240A (en) * 2020-05-29 2020-09-04 歌尔科技有限公司 Face image recognition method, device and equipment and readable storage medium
CN111626240B (en) * 2020-05-29 2023-04-07 歌尔科技有限公司 Face image recognition method, device and equipment and readable storage medium
CN111639596A (en) * 2020-05-29 2020-09-08 上海锘科智能科技有限公司 Glasses-occlusion-resistant face recognition method based on attention mechanism and residual network
CN111639596B (en) * 2020-05-29 2023-04-28 上海锘科智能科技有限公司 Glasses-occlusion-resistant face recognition method based on attention mechanism and residual network
CN111814571A (en) * 2020-06-12 2020-10-23 深圳禾思众成科技有限公司 Mask face recognition method and system based on background filtering
CN111881740A (en) * 2020-06-19 2020-11-03 杭州魔点科技有限公司 Face recognition method, face recognition device, electronic equipment and medium
CN111881740B (en) * 2020-06-19 2024-03-22 杭州魔点科技有限公司 Face recognition method, device, electronic equipment and medium
CN112052730A (en) * 2020-07-30 2020-12-08 广州市标准化研究院 3D dynamic portrait recognition monitoring device and method
CN112052730B (en) * 2020-07-30 2024-03-29 广州市标准化研究院 3D dynamic portrait identification monitoring equipment and method
CN112016464B (en) * 2020-08-28 2024-04-12 中移(杭州)信息技术有限公司 Method and device for detecting face shielding, electronic equipment and storage medium
CN112016464A (en) * 2020-08-28 2020-12-01 中移(杭州)信息技术有限公司 Method and device for detecting face shielding, electronic equipment and storage medium
CN112116525A (en) * 2020-09-24 2020-12-22 百度在线网络技术(北京)有限公司 Face-changing identification method, device, equipment and computer-readable storage medium
CN112597886A (en) * 2020-12-22 2021-04-02 成都商汤科技有限公司 Ride fare evasion detection method and device, electronic equipment and storage medium
CN112633183A (en) * 2020-12-25 2021-04-09 平安银行股份有限公司 Automatic detection method and device for image occlusion area and storage medium
CN112633183B (en) * 2020-12-25 2023-11-14 平安银行股份有限公司 Automatic detection method and device for image shielding area and storage medium
CN112766214A (en) * 2021-01-29 2021-05-07 北京字跳网络技术有限公司 Face image processing method, device, equipment and storage medium
CN113111817A (en) * 2021-04-21 2021-07-13 中山大学 Semantic segmentation face integrity measurement method, system, equipment and storage medium
CN113449696B (en) * 2021-08-27 2021-12-07 北京市商汤科技开发有限公司 Attitude estimation method and device, computer equipment and storage medium
CN113449696A (en) * 2021-08-27 2021-09-28 北京市商汤科技开发有限公司 Attitude estimation method and device, computer equipment and storage medium
CN114399813B (en) * 2021-12-21 2023-09-26 马上消费金融股份有限公司 Face shielding detection method, model training method, device and electronic equipment
CN114399813A (en) * 2021-12-21 2022-04-26 马上消费金融股份有限公司 Face shielding detection method, model training method and device and electronic equipment

Also Published As

Publication number Publication date
CN107633204A (en) 2018-01-26
CN107633204B (en) 2019-01-29

Similar Documents

Publication Publication Date Title
WO2019033572A1 (en) Method for detecting whether face is blocked, device and storage medium
WO2019033569A1 (en) Eyeball movement analysis method, device and storage medium
US10445562B2 (en) AU feature recognition method and device, and storage medium
WO2019109526A1 (en) Method and device for age recognition of face image, storage medium
WO2019033571A1 (en) Facial feature point detection method, apparatus and storage medium
US10635946B2 (en) Eyeglass positioning method, apparatus and storage medium
US8792722B2 (en) Hand gesture detection
US8750573B2 (en) Hand gesture detection
WO2019033570A1 (en) Lip movement analysis method, apparatus and storage medium
WO2019041519A1 (en) Target tracking device and method, and computer-readable storage medium
WO2019071664A1 (en) Human face recognition method and apparatus combined with depth information, and storage medium
US8934679B2 (en) Apparatus for real-time face recognition
EP2336949B1 (en) Apparatus and method for registering plurality of facial images for face recognition
WO2019033568A1 (en) Lip movement capturing method, apparatus and storage medium
US9633284B2 (en) Image processing apparatus and image processing method of identifying object in image
WO2019033567A1 (en) Method for capturing eyeball movement, device and storage medium
KR20170006355A (en) Method of motion vector and feature vector based fake face detection and apparatus for the same
Vazquez-Fernandez et al. Built-in face recognition for smart photo sharing in mobile devices
WO2019056503A1 (en) Store monitoring evaluation method, device and storage medium
JP2013206458A (en) Object classification based on external appearance and context in image
WO2023165616A1 (en) Method and system for detecting concealed backdoor of image model, storage medium, and terminal
CN111582118A (en) Face recognition method and device
Rai et al. Software development framework for real-time face detection and recognition in mobile devices
CN111191521A (en) Face living body detection method and device, computer equipment and storage medium
CN109409322B (en) Living body detection method and device, face recognition method and face detection system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17921768

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 25.09.2020)

122 Ep: pct application non-entry in european phase

Ref document number: 17921768

Country of ref document: EP

Kind code of ref document: A1