CN111695432A - Artificial intelligent face abnormity detection system and method under video monitoring scene - Google Patents

Artificial intelligence face anomaly detection system and method in a video surveillance scene

Info

Publication number
CN111695432A
CN111695432A (application CN202010425077.9A)
Authority
CN
China
Prior art keywords
face
label
classification
abnormity
model
Prior art date
Legal status
Pending
Application number
CN202010425077.9A
Other languages
Chinese (zh)
Inventor
彭滢
吴杰
Current Assignee
China Electronic Technology Cyber Security Co Ltd
Original Assignee
China Electronic Technology Cyber Security Co Ltd
Priority date
Filing date
Publication date
Application filed by China Electronic Technology Cyber Security Co Ltd
Priority to CN202010425077.9A
Publication of CN111695432A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the field of artificial intelligence face anomaly detection and discloses an artificial intelligence face anomaly detection system and method for video surveillance scenes. The system comprises a face detection model for detecting face positions in the video, a face pose estimation model for calculating face deflection angles and screening out frontal face images, and a face anomaly classification model for identifying abnormal facial occluders. The invention converts the face anomaly recognition problem into a facial-occluder classification problem so that the occluder type is identified accurately; the classes recognized by the face anomaly classification model also include an environmental-background class and a side-face class, which give the system a fault-tolerance mechanism. The invention thus provides a new approach to face anomaly recognition: by recasting the problem as occluder classification, occluders can be identified accurately, the system is highly extensible, and it is stable, accurate and of strong practical value.

Description

Artificial intelligence face anomaly detection system and method in a video surveillance scene
Technical Field
The invention relates to the field of artificial intelligence face anomaly detection, and in particular to an artificial intelligence face anomaly detection system and method for video surveillance scenes.
Background
Video surveillance is a widely used security measure. Traditional video surveillance mainly serves to collect evidence after a criminal act has taken place, so its preventive function lags behind; moreover, manual evidence collection often requires reviewing large amounts of surveillance footage, which is time-consuming and laborious. With the rapid development of artificial intelligence, intelligent video surveillance systems based on deep learning and image processing have come into use. A face anomaly detection system is an important function of intelligent surveillance: it judges whether a monitored person's facial region is abnormally occluded, identifies the occluded position or the type of occluder, and warns of and records abnormal facial occlusion, thereby both deterring crime in real time and supporting efficient evidence collection after an incident. Developing an accurate and stable face anomaly detection system is therefore of positive significance for addressing the shortcomings of traditional video surveillance.
Generally, a face anomaly detection system can be divided into a preprocessing module, a face anomaly recognition module and an early-warning module. The mainstream approach to face anomaly detection is as follows: first, the preprocessing module detects whether a face is present in the surveillance frame; if so, it records the face position and then locates the facial features within that region. Next, the face anomaly recognition module judges whether the facial region is occluded by checking whether the facial features are occluded, and infers the occluder type from the occluded position. Finally, the early-warning module raises an alarm according to the abnormal occlusion detected. This approach leaves room for improvement in three respects:
First, existing face anomaly detection systems infer the occluder type from the occluded region; the resulting type is neither accurate nor fine-grained enough, usually limited to three guesses (sunglasses, mask or peaked cap), whereas practical applications require more accurate judgments over a richer set of types. Second, the face anomaly recognition module depends heavily on the preprocessing module yet lacks a fault-tolerance mechanism for it, so it is easily affected by preprocessing errors: the preprocessing module may erroneously output a region that contains no face, or it may be difficult to locate the facial features precisely when a large area of the face is occluded, either of which degrades the accuracy of the subsequent face anomaly stage. Third, handling side faces is too costly. Because the features of a side face differ from those of a frontal face, existing methods typically add side-face training samples during feature extraction, for example adding as many side-face mask samples as frontal mask samples in order to detect masks on side faces. This greatly increases the burden of sample collection and labelling and also makes model training harder. In fact, a side face carries incomplete facial information compared with a frontal face and can cause recognition errors; since handling side faces is expensive, excluding side faces from face anomaly recognition through better system design reduces cost and improves the overall accuracy of the system.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: in view of the problems above, the invention provides an artificial intelligence face anomaly detection system and method for video surveillance scenes, improving the preprocessing module and the face anomaly recognition module as follows:
1. In the face anomaly recognition module, the face anomaly recognition problem is converted into a facial-occluder classification problem so that the occluder type is identified accurately, and the detectable types are expanded to five: sunglasses, mask, peaked cap, helmet and balaclava;
2. A fault-tolerance mechanism for the preprocessing module is built into the face anomaly recognition module: the occluder classification model can recognize pictures that need no face anomaly recognition, such as "environmental background" and "side face", so that errors are avoided when the preprocessing module supplies a wrong face position or face angle;
3. The preprocessing module no longer locates the facial features; instead it estimates the deflection angles of the face and screens out frontal faces according to those angles, so that side faces are excluded from face anomaly recognition.
The technical scheme adopted by the invention is as follows: an artificial intelligence face anomaly detection system for video surveillance scenes comprises a preprocessing module, a face anomaly recognition module and an early-warning module;
the preprocessing module is used for acquiring face information in the surveillance video;
the face anomaly recognition module is used for judging whether a face is occluded and identifying the occluder;
the early-warning module files abnormal faces and raises alarms according to the recognition results passed on by the face anomaly recognition module;
the preprocessing module comprises a face detection model and a face pose estimation model; the face detection model is used for detecting face positions in the video; the face pose estimation model is used for calculating face deflection angles and screening out frontal face images;
the face anomaly recognition module comprises a face anomaly classification model used for identifying facial occluders.
Further, the classes that the face anomaly classification model can recognize and distinguish include: frontal bare face, face wearing transparent glasses, face wearing sunglasses, face wearing a mask, face wearing a peaked cap, face wearing a helmet, face wearing a balaclava, side face, and environmental background. Training the model to detect environmental-background pictures provides a fault-tolerance mechanism for the face detection function of the preprocessing module: face detection may, for example, mistake a tyre for a face and hand over a wrong picture containing no face; in that case the face anomaly classification model classifies the picture as environmental background rather than as an abnormal-occluder class such as "face wearing a peaked cap" or "face wearing a mask", so the system keeps operating correctly. Training the model to detect side faces provides a fault-tolerance mechanism for the face pose estimation in the preprocessing module: pose estimation may compute a wrong angle and fail to filter out a face picture whose deflection angle exceeds 45°; the face anomaly classification model can then classify it as a side face rather than as some other abnormal-occluder class.
Further, the face deflection angles include pitch, yaw and roll.
Further, the threshold of the face deflection angle is 45°, i.e. faces whose pitch, yaw and roll all lie within [-45°, 45°] are screened out as frontal faces.
An artificial intelligence face anomaly detection method for video surveillance scenes comprises the following steps (a minimal end-to-end sketch of this loop is given after the step list):
Step 1: reading a surveillance image from the surveillance video and starting face anomaly detection;
Step 2: detecting with the face detection model whether a face exists in the surveillance image; if not, returning to step 1; if so, obtaining the coordinate position of the face in the surveillance image;
Step 3: cropping the face picture out of the surveillance image according to that coordinate position and storing it in a face picture set;
Step 4: screening frontal face pictures out of the face picture set with the face pose estimation model and keeping only them in the set; if no frontal face picture remains after screening, returning to step 1;
Step 5: classifying each frontal face picture in the set with the face anomaly classification model to obtain its classification label;
Step 6: identifying from the classification label whether the frontal face picture shows abnormal occlusion; if so, raising an alarm and archiving the picture; if not, returning to step 1.
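The steps above form a per-frame detect, screen, classify and alert loop. The following is a minimal Python sketch of that loop; the three model objects and their method names (detect_faces, estimate_pose, classify) are placeholders for whatever face detection, pose estimation and classification models are actually used, not an interface defined by the invention.

```python
# Minimal sketch of the per-frame detection loop (steps 1-6).
# face_detector, pose_estimator and anomaly_classifier are hypothetical
# wrappers around the models described in the text.
import cv2

ABNORMAL_LABELS = {"sunglasses", "mask", "peaked_cap", "helmet", "balaclava"}
ANGLE_THRESHOLD = 45.0  # degrees, applied to pitch, yaw and roll


def is_frontal(pitch, yaw, roll, t=ANGLE_THRESHOLD):
    return all(-t <= a <= t for a in (pitch, yaw, roll))


def process_stream(video_path, face_detector, pose_estimator, anomaly_classifier):
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()                        # step 1: read a frame
        if not ok:
            break
        boxes = face_detector.detect_faces(frame)     # step 2: face positions
        for (x1, y1, x2, y2) in boxes:
            face = frame[y1:y2, x1:x2]                # step 3: crop the face
            pitch, yaw, roll = pose_estimator.estimate_pose(face)
            if not is_frontal(pitch, yaw, roll):      # step 4: keep frontal faces only
                continue
            label = anomaly_classifier.classify(face)  # step 5: classify the occluder
            if label in ABNORMAL_LABELS:              # step 6: alarm and archive
                print(f"abnormal occlusion: {label}")
                cv2.imwrite(f"abnormal_{label}.jpg", face)
    cap.release()
```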
Further, before step 1 the face detection model, the face pose estimation model and the face anomaly classification model are trained and created.
Further, training and creating the face anomaly classification model comprises the following steps:
a: determining, according to actual requirements, which abnormal-occluder types the face anomaly classification model needs to recognize, and designing the corresponding occluder classification labels;
b: collecting face pictures and environmental-background pictures from public data sets, attaching a classification label to each picture to form a sample, sampling the set at equal intervals, taking the extracted samples as the validation set and the rest as the training set;
c: fine-tuning a ResNet-50 neural network model pre-trained on the ImageNet data set to obtain an initial face anomaly classification model, training the initial model on the training set, and obtaining the face anomaly classification model once training is finished; then testing the classification accuracy on the validation set, accepting the face anomaly classification model if the accuracy passes, and retraining otherwise.
Further, the classification labels include: face wearing sunglasses, face wearing a mask, face wearing a peaked cap, face wearing a helmet, face wearing a balaclava, face wearing transparent glasses, normal bare face, environmental background and side face. The labels are designed according to the following principles: at minimum, the labels must include the occluders to be recognized; if the abnormal-occluder classes are regarded as positive examples, there must be corresponding negative examples, i.e. at least a normal bare-face label; and, to give the face anomaly classification model fault tolerance towards the preprocessing module, an environmental-background label and a side-face label should also be included.
Further, in step 6, the classification labels that indicate abnormal occlusion of a frontal face picture are: face wearing sunglasses, face wearing a mask, face wearing a peaked cap, face wearing a helmet and face wearing a balaclava. These five labels mean the face is abnormally occluded and an alarm must be raised.
Compared with the prior art, the technical scheme has the following beneficial effects:
1. The face anomaly recognition task is innovatively converted into a facial-occluder classification task, so occluders are identified accurately, the recognizable occluder types are richer, and practical requirements are met; training data can be added or modified for particular products and scenes without changing the overall strategy, thereby adding or modifying the classifiable occluder types;
2. Fault tolerance towards the preprocessing module is obtained simply and effectively by training the face anomaly classification model to recognize "environmental background" and "side face" pictures;
3. Side-face pictures are innovatively removed in the preprocessing module, reducing the cost of data preprocessing and model training;
4. With publicly available video works and surveillance videos as test samples, the average accuracy of the face anomaly detection system reaches 99.57%; the system is stable and accurate and has strong practical value.
Drawings
FIG. 1 is a block diagram of the system of the present invention.
FIG. 2 is a schematic flow diagram of the process of the present invention.
FIG. 3 is a logic diagram of the face anomaly classification labels.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
The abnormal-occluder classification labels of the invention are not limited to the five face types recorded here, namely face wearing sunglasses, face wearing a mask, face wearing a peaked cap, face wearing a helmet and face wearing a balaclava; labels can be added or modified according to actual circumstances.
The invention provides an artificial intelligence face anomaly detection system for video surveillance scenes. As shown in FIG. 1, the system mainly comprises a preprocessing module, a face anomaly recognition module and an early-warning module. The system takes a surveillance video as input, obtains face information in the video through the preprocessing module, judges through the face anomaly recognition module whether a face is occluded and identifies the occluder type, and activates the early-warning module according to the recognition result to file the abnormal face and raise an alarm.
In the prior art, the mainstream idea of face anomaly detection is to determine whether the facial features are occluded and to infer what the occluder is from the occluded position. In contrast, this embodiment does not rely on the positions of the facial features; it treats the face as a whole, directly judges whether it is occluded, and classifies the occluder.
Therefore, in this embodiment a face anomaly classification model is trained in the face anomaly recognition module. It can recognize: frontal bare face, face wearing transparent glasses, face wearing sunglasses, face wearing a mask, face wearing a peaked cap, face wearing a helmet, face wearing a balaclava, side face, and environmental background. Training the classifier requires training data; in the training data of this embodiment, each sample consists of a picture, which may or may not contain a face, and its class label, the labels being the nine classes just listed. From the training data the model learns the relationship between pictures and their labels, so that it can predict the class label of a face picture that did not appear in the training data. If the training data contains N label classes, the resulting model can classify N classes of data; in this embodiment N = 9.
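As an illustration of how the (picture, label) samples described above might be organized for training, the following is a minimal sketch of a PyTorch-style dataset with the nine classes; the directory layout and the class-name strings are assumptions made for the example, not something mandated by the embodiment.

```python
# Minimal sketch: nine-class (picture, label) dataset, assuming one
# sub-directory per class under a root folder (an assumed layout).
import os

from PIL import Image
from torch.utils.data import Dataset

CLASSES = [
    "bare_face", "transparent_glasses", "sunglasses", "mask",
    "peaked_cap", "helmet", "balaclava", "side_face", "background",
]


class FaceAnomalyDataset(Dataset):
    def __init__(self, root, transform=None):
        self.transform = transform
        self.samples = []  # list of (image_path, class_index) pairs
        for idx, name in enumerate(CLASSES):
            class_dir = os.path.join(root, name)
            for fname in os.listdir(class_dir):
                self.samples.append((os.path.join(class_dir, fname), idx))

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, i):
        path, label = self.samples[i]
        img = Image.open(path).convert("RGB")
        if self.transform is not None:
            img = self.transform(img)
        return img, label
```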
Meanwhile, the preprocessing module uses a face detection model and a face pose estimation model published in related research: the face detection model detects face images in the surveillance video and determines their positions, and the face pose estimation model computes the deflection angles of each face image so that frontal faces can be screened out.
In this embodiment, face pose estimation is a deep-learning-based technique that computes the angles of a face in a picture, usually the three angles about the three coordinate axes: pitch (head down or up), yaw (face turned left or right) and roll (head tilted). Experiments determined the face deflection threshold of this embodiment to be 45°, i.e. faces whose three angles all lie within [-45°, 45°] are screened out as frontal faces. The face pose estimation model thus excludes side faces from face anomaly recognition and reduces the cost of training the model.
In this embodiment, the face anomaly classification model is trained to learn and recognize "environmental background" and "side face" pictures so that the face anomaly recognition module is fault-tolerant towards the preprocessing module. Learning to recognize "environmental background" pictures provides a fault-tolerance mechanism for the face detection function: face detection may, for example, mistake a tyre for a face and provide a wrong picture containing no face. Learning to recognize side faces provides a fault-tolerance mechanism for the face pose estimation: pose estimation may compute a wrong angle and fail to filter out a face picture whose deflection angle exceeds 45°, in which case the face anomaly classification model classifies it as a side face rather than as some other abnormal-occluder class.
The nine classes are defined as follows: "frontal bare face" is a normal face with deflection angles within [-45°, 45°] and no facial accessories; "face wearing transparent glasses" is a normal face wearing clear or lightly tinted glasses such as colourless spectacles, light goggles or dark-brown reading glasses; "face wearing sunglasses" is an abnormal face wearing dark-lens glasses that clearly occlude the face; "face wearing a mask" is an abnormal face wearing a mask that occludes the mouth and nose; "face wearing a peaked cap" is an abnormal face whose cap brim occludes the face; "face wearing a helmet" is an abnormal face wearing a riding helmet, a construction safety helmet or a riot helmet; "face wearing a balaclava" is an abnormal face wearing headgear that exposes only the eyes, or only the eyes and mouth, such as a riding balaclava, a counter-terrorism hood or a facekini; "side face" is a face whose deflection angle is less than -45° or greater than 45°, regardless of whether any occluder is worn; "environmental background" comprises natural scenery, street scenes, indoor decoration, irregular colour blocks and the like.
These nine classification labels make the abnormal-occluder classes finer-grained, fit practical security requirements better, and give fault tolerance towards the preprocessing module. Because this embodiment identifies the occluder directly with the face anomaly classification model, the judgment is more accurate and more occluder types can be recognized. In addition, training data can be added or modified for particular products and scenes without changing the overall strategy, so classifiable occluder types can be added or modified, giving the model strong extensibility.
This embodiment also provides an artificial intelligence face anomaly detection method for video surveillance scenes, whose workflow is shown in FIG. 2. The models used by each module must be prepared before detection, so the method comprises the following steps:
Step 1: prepare the models used by each module.
The models used by the preprocessing module are: a face detection model for detecting face positions in the video, for which the open-source model MobileNet-SSD is taken as an example; and a face pose estimation model for calculating face deflection angles, for which the open-source model HopeNet is taken as an example.
The model used by the face anomaly recognition module is the face anomaly classification model provided by this embodiment; its class design, data preparation and training are described in step 2.
Step 2: train the face anomaly classification model. Following the detection idea described above, a face anomaly classification model is trained to judge whether an input face image contains occlusion and to classify the occluder type.
Specifically, the training is divided into three substeps:
(1) Design the classification labels. Determine, according to actual requirements, which abnormal-occluder types the face anomaly classification model needs to recognize, and design the label set of its training data. This embodiment takes five abnormal-occluder types as an example: sunglasses, mask, peaked cap, helmet and balaclava. The classes are designed according to the following principles: at minimum, the labels must include the occluders to be recognized; if the abnormal-occluder classes are regarded as positive examples, there must be corresponding negative examples, i.e. a "frontal bare face" label; and, to give the face anomaly classification model fault tolerance towards the preprocessing module, an "environmental background" class and a "side face" class are also required. Accordingly, this embodiment designs 9 labels: face wearing sunglasses, face wearing a mask, face wearing a peaked cap, face wearing a helmet, face wearing a balaclava, face wearing transparent glasses, normal bare face, environmental background and side face. The logic of the classification labels is shown schematically in FIG. 3.
(2) Prepare the training data. Face pictures are collected from public data sets such as CelebA and 300W-LP and from online galleries, each picture containing one face. Each picture is manually labelled with a class label; one labelled picture is one sample. In this embodiment 100,000 training samples are collected; the samples are cropped, affine-transformed and resized so that the face sits in the middle of the image and occupies about 50% of it, and all samples share the same resolution Height x Width. Environmental-background pictures are collected from online galleries, cropped and resized so that each picture also has resolution Height x Width. The samples are then sampled at equal intervals so that 3% form the validation set and the rest the training set.
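The following is a minimal sketch of the resize step and the equal-interval 3% split described above, using torchvision transforms; the resolution values and the dataset class from the earlier sketch are assumptions made for illustration.

```python
# Minimal sketch: resize preprocessing and an equal-interval 3% validation
# split. HEIGHT and WIDTH are placeholder values for the common resolution.
from torch.utils.data import Subset
from torchvision import transforms

HEIGHT, WIDTH = 224, 224  # assumed sample resolution

transform = transforms.Compose([
    transforms.Resize((HEIGHT, WIDTH)),
    transforms.ToTensor(),
])


def equal_interval_split(dataset, val_fraction=0.03):
    """Take every k-th sample as validation, the rest as training."""
    step = max(1, round(1 / val_fraction))  # roughly every 33rd sample
    val_idx = list(range(0, len(dataset), step))
    train_idx = [i for i in range(len(dataset)) if i % step != 0]
    return Subset(dataset, train_idx), Subset(dataset, val_idx)

# usage (FaceAnomalyDataset is the hypothetical dataset sketched earlier):
# full = FaceAnomalyDataset("data/faces", transform=transform)
# train_set, val_set = equal_interval_split(full)
```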
(3) Train the model. The face anomaly classification model of this embodiment is obtained by fine-tuning a ResNet-50 neural network pre-trained on the ImageNet data set. It is trained on the training set obtained in (2) and optimized by adjusting how many network layers are frozen, the initial learning rate and the learning-rate decay scheme, with 80 epochs per training run. The classification accuracy is then tested on the validation set obtained in (2). An accuracy threshold T_acc = 99.50% is set; if the model's accuracy reaches T_acc, the face anomaly detection model is accepted, otherwise the parameters are re-adjusted and training repeated. The parameters of the final face anomaly classification model are: the conv1, layer1 and layer2 layers are frozen, the initial learning rate is 0.001, and the learning rate decays by a factor of 0.5 every 15 epochs.
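A minimal PyTorch sketch of the fine-tuning recipe just described (ImageNet-pretrained ResNet-50, conv1/layer1/layer2 frozen, nine output classes, learning rate 0.001 decayed by 0.5 every 15 epochs, 80 epochs) follows; the optimizer choice and the data loader are illustrative assumptions, not part of the embodiment.

```python
# Minimal sketch: fine-tune an ImageNet-pretrained ResNet-50 with the
# hyper-parameters stated in the text (illustrative, not the exact code).
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 9
device = "cuda" if torch.cuda.is_available() else "cpu"

model = models.resnet50(pretrained=True)
# Freeze conv1, layer1 and layer2 as described in the embodiment.
for module in (model.conv1, model.layer1, model.layer2):
    for p in module.parameters():
        p.requires_grad = False
# Replace the final fully connected layer with a nine-class head.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad),
    lr=0.001, momentum=0.9)
# Decay the learning rate by 0.5 every 15 epochs, for 80 epochs in total.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=15, gamma=0.5)


def train(train_loader, epochs=80):
    model.train()
    for _ in range(epochs):
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
        scheduler.step()
```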
Step 3: start face anomaly detection. Surveillance images are read in frame by frame from the surveillance video.
Step 4: obtain the face positions in the surveillance image.
For each surveillance image read in step 3, the face detection model in the preprocessing module detects whether any face is present; if not, go to step 3. If so, obtain the set of position coordinates of all m faces in the image, P = {p_1, ..., p_m}, where p_i = ((x_i^1, y_i^1), (x_i^2, y_i^2)) is the position of the i-th face. Taking the top-left corner of the picture as the origin, the line of the picture's top edge as the x axis and the line of its left edge as the y axis, (x_i^1, y_i^1) is the top-left corner of the i-th face bounding box and (x_i^2, y_i^2) is its bottom-right corner.
Step 5: crop the face pictures out of the surveillance image.
According to the set P obtained in step 4 and the surveillance image read in step 3, the m face pictures F = {f_1, ..., f_m} contained in the image are cropped out, where each f_i is a face picture in which the face sits in the middle, occupies about 50% of the image area, and has resolution Height x Width.
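A minimal sketch of this cropping step follows, under the assumption that the detector returns pixel-coordinate boxes as in step 4; the margin used to make the face occupy roughly half of the crop is an illustrative choice.

```python
# Minimal sketch: crop each detected face with a margin and resize to the
# common sample resolution (HEIGHT and WIDTH are placeholder values).
import cv2

HEIGHT, WIDTH = 224, 224


def crop_faces(frame, boxes, margin=0.4):
    """boxes: list of (x1, y1, x2, y2) in pixel coordinates."""
    crops = []
    h, w = frame.shape[:2]
    for x1, y1, x2, y2 in boxes:
        dx, dy = int((x2 - x1) * margin), int((y2 - y1) * margin)
        # Expand the box so the face covers roughly half of the crop.
        xa, ya = max(0, x1 - dx), max(0, y1 - dy)
        xb, yb = min(w, x2 + dx), min(h, y2 + dy)
        face = cv2.resize(frame[ya:yb, xa:xb], (WIDTH, HEIGHT))
        crops.append(face)
    return crops
```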
Step 6: screen out frontal faces.
For the picture set F obtained in step 5, the face pose estimation model in the preprocessing module calculates the angles of each face, a_i = (pitch_i, yaw_i, roll_i), where pitch_i is the pitch angle of the i-th face (head up/down), yaw_i is its yaw angle (face turned left/right), and roll_i is its roll angle (head tilted left/right); each of the three angles is computed in the range [-180°, 180°]. To screen out frontal face pictures and exclude side faces from anomaly recognition, an angle threshold T_angle = 45° is set. For each a_i, if -T_angle ≤ pitch_i ≤ T_angle, -T_angle ≤ yaw_i ≤ T_angle and -T_angle ≤ roll_i ≤ T_angle, the face in picture f_i is frontal and f_i stays in the set F; otherwise f_i is deleted. The size of the resulting set F is then checked: if it is 0, no frontal face exists, so go to step 3 and read the next frame of surveillance video; otherwise continue to step 7.
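As a complement to the earlier loop sketch, the following minimal function implements just this screening of the set F, under the assumption that the pose model exposes a hypothetical estimate_pose method returning (pitch, yaw, roll) in degrees (e.g. a wrapper around HopeNet).

```python
# Minimal sketch of step 6: keep only frontal faces in the set F.
T_ANGLE = 45.0  # degrees


def screen_frontal_faces(face_pictures, pose_estimator, t=T_ANGLE):
    frontal = []
    for face in face_pictures:
        pitch, yaw, roll = pose_estimator.estimate_pose(face)
        if all(-t <= a <= t for a in (pitch, yaw, roll)):
            frontal.append(face)
    return frontal  # an empty list means: go back and read the next frame
```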
Step 7: face anomaly classification.
The face pictures obtained in step 6 are fed one by one into the face anomaly classification model obtained in step 2, yielding a classification label label_i for each face.
Step 8: face anomaly early warning.
If the label_i obtained in step 7 is one of {face wearing sunglasses, face wearing a mask, face wearing a peaked cap, face wearing a helmet, face wearing a balaclava}, the face is considered abnormally occluded: the picture f_i and label_i are recorded, and an alarm is raised through the loudspeaker and indicator light; otherwise, go to step 3.
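A minimal sketch of this archiving-and-alarm decision follows; the file naming, log format and directory layout are illustrative assumptions, and the actual alarm hardware (loudspeaker, indicator light) is left to the caller.

```python
# Minimal sketch of step 8: archive the abnormal face picture and its label
# and append a log record; returns True when an alarm should be triggered.
import csv
import os
import time

import cv2

ABNORMAL_LABELS = {"sunglasses", "mask", "peaked_cap", "helmet", "balaclava"}


def handle_result(face_picture, label, archive_dir="abnormal_faces"):
    if label not in ABNORMAL_LABELS:
        return False                        # normal face: no alarm
    os.makedirs(archive_dir, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    image_path = os.path.join(archive_dir, f"{stamp}_{label}.jpg")
    cv2.imwrite(image_path, face_picture)   # archive the abnormal face picture
    with open(os.path.join(archive_dir, "log.csv"), "a", newline="") as f:
        csv.writer(f).writerow([stamp, label, image_path])
    return True  # caller triggers the audible alarm and indicator light
```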
The invention is not limited to the foregoing embodiments. The invention extends to any novel feature or any novel combination of features disclosed in this specification, and to any novel method or process step or any novel combination of steps disclosed. Those skilled in the art will appreciate that insubstantial changes or modifications can be made without departing from the spirit of the invention as defined by the appended claims.

Claims (9)

1. An artificial intelligence face anomaly detection system for video surveillance scenes, comprising: a preprocessing module, a face anomaly recognition module and an early-warning module;
the preprocessing module is used for acquiring face information in the surveillance video;
the face anomaly recognition module is used for judging whether a face is occluded and identifying the occluder;
the early-warning module files abnormal faces and raises alarms according to the recognition results passed on by the face anomaly recognition module;
characterized in that the preprocessing module comprises a face detection model and a face pose estimation model; the face detection model is used for detecting face positions in the video; the face pose estimation model is used for calculating face deflection angles and screening out frontal face images;
and the face anomaly recognition module comprises a face anomaly classification model used for identifying facial occluders.
2. The system of claim 1, wherein the classes that the face anomaly classification model recognizes and distinguishes comprise: frontal bare face, face wearing transparent glasses, face wearing sunglasses, face wearing a mask, face wearing a peaked cap, face wearing a helmet, face wearing a balaclava, side face, and environmental background.
3. The system of claim 1, wherein the face deflection angles comprise: pitch, yaw and roll.
4. The system according to any one of claims 1 to 3, wherein the threshold of the face deflection angle is 45°.
5. The artificial intelligence face anomaly detection method in a video surveillance scene according to claim 1, characterized by comprising the following steps:
Step 1: reading a surveillance image from the surveillance video and starting face anomaly detection;
Step 2: detecting with the face detection model whether a face exists in the surveillance image; if not, returning to step 1; if so, obtaining the coordinate position of the face in the surveillance image;
Step 3: cropping the face picture out of the surveillance image according to that coordinate position and storing it in a face picture set;
Step 4: screening frontal face pictures out of the face picture set with the face pose estimation model and keeping only them in the set; if no frontal face picture remains after screening, returning to step 1;
Step 5: classifying each frontal face picture in the set with the face anomaly classification model to obtain its classification label;
Step 6: identifying from the classification label whether the frontal face picture shows abnormal occlusion; if so, raising an alarm and archiving the picture; if not, returning to step 1.
6. The method according to claim 5, wherein before step 1 the face detection model, the face pose estimation model and the face anomaly classification model are trained and created.
7. The method according to claim 6, wherein training and creating the face anomaly classification model comprises the following steps:
a: determining, according to actual requirements, which abnormal-occluder types the face anomaly classification model needs to recognize, and designing the corresponding occluder classification labels;
b: collecting face pictures and environmental-background pictures from public data sets, attaching a classification label to each picture to form a sample, sampling the set at equal intervals, taking the extracted samples as the validation set and the rest as the training set;
c: fine-tuning a ResNet-50 neural network model pre-trained on the ImageNet data set to obtain an initial face anomaly classification model, training the initial model on the training set to obtain the face anomaly classification model, testing the classification accuracy on the validation set, accepting the face anomaly classification model if the accuracy passes, and retraining otherwise.
8. The method according to claim 5, wherein the classification labels comprise: face wearing sunglasses, face wearing a mask, face wearing a peaked cap, face wearing a helmet, face wearing a balaclava, face wearing transparent glasses, normal bare face, environmental background and side face.
9. The method according to claim 8, wherein in step 6 the classification labels that indicate abnormal occlusion of the frontal face picture are: face wearing sunglasses, face wearing a mask, face wearing a peaked cap, face wearing a helmet and face wearing a balaclava.
CN202010425077.9A 2020-05-19 2020-05-19 Artificial intelligent face abnormity detection system and method under video monitoring scene Pending CN111695432A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010425077.9A CN111695432A (en) 2020-05-19 2020-05-19 Artificial intelligent face abnormity detection system and method under video monitoring scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010425077.9A CN111695432A (en) 2020-05-19 2020-05-19 Artificial intelligent face abnormity detection system and method under video monitoring scene

Publications (1)

Publication Number Publication Date
CN111695432A true CN111695432A (en) 2020-09-22

Family

ID=72477207

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010425077.9A Pending CN111695432A (en) 2020-05-19 2020-05-19 Artificial intelligent face abnormity detection system and method under video monitoring scene

Country Status (1)

Country Link
CN (1) CN111695432A (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6128397A (en) * 1997-11-21 2000-10-03 Justsystem Pittsburgh Research Center Method for finding all frontal faces in arbitrarily complex visual scenes
US20080247609A1 (en) * 2007-04-06 2008-10-09 Rogerio Feris Rule-based combination of a hierarchy of classifiers for occlusion detection
CN102156537A (en) * 2010-02-11 2011-08-17 三星电子株式会社 Equipment and method for detecting head posture
CN105404854A (en) * 2015-10-29 2016-03-16 深圳怡化电脑股份有限公司 Methods and devices for obtaining frontal human face images
CN106485215A (en) * 2016-09-29 2017-03-08 西交利物浦大学 Face occlusion detection method based on depth convolutional neural networks
CN107145867A (en) * 2017-05-09 2017-09-08 电子科技大学 Face and face occluder detection method based on multitask deep learning
CN108205661A (en) * 2017-12-27 2018-06-26 浩云科技股份有限公司 A kind of ATM abnormal human face detection based on deep learning
CN109684913A (en) * 2018-11-09 2019-04-26 长沙小钴科技有限公司 A kind of video human face mask method and system based on community discovery cluster
CN109583339A (en) * 2018-11-19 2019-04-05 北京工业大学 A kind of ATM video brainpower watch and control method based on image procossing
CN109871747A (en) * 2018-12-30 2019-06-11 广州展讯信息科技有限公司 Zuo You lookout evaluation method, device and readable storage medium storing program for executing based on Face datection

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
SUNGMIN EUM 等: "Face Recognizability Evaluation for ATM Applications With Exceptional Occlusion Handling", 《CVPR 2011 WORKSHOPS》 *
YIZHANG XIA 等: "Face Occlusion Detection Using Deep Convolutional Neural Networks", 《INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE》 *
张伟峰 et al.: "Real-time Detection of Abnormal Face Occlusion Events Based on a Patrol Vehicle" (基于巡逻小车的人脸遮挡异常事件实时检测), 《计算机系统应用》 (Computer Systems & Applications) *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112200075A (en) * 2020-10-09 2021-01-08 西安西图之光智能科技有限公司 Face anti-counterfeiting method based on anomaly detection
CN112200075B (en) * 2020-10-09 2024-06-04 西安西图之光智能科技有限公司 Human face anti-counterfeiting method based on anomaly detection
CN113821681A (en) * 2021-09-17 2021-12-21 深圳力维智联技术有限公司 Video tag generation method, device and equipment
CN113821681B (en) * 2021-09-17 2023-09-26 深圳力维智联技术有限公司 Video tag generation method, device and equipment
CN116503589A (en) * 2023-02-07 2023-07-28 珠海安联锐视科技股份有限公司 Deep learning-based detection method for thief mask
CN116503589B (en) * 2023-02-07 2024-05-10 珠海安联锐视科技股份有限公司 Deep learning-based detection method for thief mask

Similar Documents

Publication Publication Date Title
CN108053427B (en) Improved multi-target tracking method, system and device based on KCF and Kalman
CN111967393B (en) Safety helmet wearing detection method based on improved YOLOv4
CN108009473B (en) Video structuralization processing method, system and storage device based on target behavior attribute
CN108052859B (en) Abnormal behavior detection method, system and device based on clustering optical flow characteristics
CN111695432A (en) Artificial intelligent face abnormity detection system and method under video monitoring scene
CN108062542B (en) Method for detecting shielded human face
CN111598040B (en) Construction worker identity recognition and safety helmet wearing detection method and system
CN109447168A (en) A kind of safety cap wearing detection method detected based on depth characteristic and video object
CN106128022B (en) A kind of wisdom gold eyeball identification violent action alarm method
CN104616438A (en) Yawning action detection method for detecting fatigue driving
CN110852183B (en) Method, system, device and storage medium for identifying person without wearing safety helmet
CN105184258A (en) Target tracking method and system and staff behavior analyzing method and system
KR101653278B1 (en) Face tracking system using colar-based face detection method
CN111539276B (en) Method for detecting safety helmet in real time in power scene
CN108197575A (en) A kind of abnormal behaviour recognition methods detected based on target detection and bone point and device
CN110222596B (en) Driver behavior analysis anti-cheating method based on vision
CN105426820A (en) Multi-person abnormal behavior detection method based on security monitoring video data
CN112364778A (en) Power plant safety behavior information automatic detection method based on deep learning
CN113223046B (en) Method and system for identifying prisoner behaviors
CN112434827B (en) Safety protection recognition unit in 5T operation and maintenance
CN112434828B (en) Intelligent safety protection identification method in 5T operation and maintenance
CN106127814A (en) A kind of wisdom gold eyeball identification gathering of people is fought alarm method and device
KR102263512B1 (en) IoT integrated intelligent video analysis platform system capable of smart object recognition
Eyiokur et al. A survey on computer vision based human analysis in the COVID-19 era
CN115393830A (en) Fatigue driving detection method based on deep learning and facial features

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200922

RJ01 Rejection of invention patent application after publication