CN112287823A - Facial mask identification method based on video monitoring - Google Patents

Facial mask identification method based on video monitoring

Info

Publication number
CN112287823A
Authority
CN
China
Prior art keywords
image
face
video
monitoring
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011174256.6A
Other languages
Chinese (zh)
Inventor
肖虎
孟令昀
邓晓鹏
叶永盛
徐思源
陈纪鹏
肖星宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huaihua University
Original Assignee
Huaihua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huaihua University
Priority to CN202011174256.6A
Publication of CN112287823A
Legal status: Pending (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G06V 40/166 Detection; Localisation; Normalisation using acquisition arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Computing arrangements based on biological models using neural network models
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • G06V 40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 Classification, e.g. identification

Abstract

The invention belongs to the field of biometric recognition and in particular relates to a facial mask identification method based on video monitoring. It addresses the problems that existing mask-wearing detection algorithms are mostly based on deep learning, in which labeled face and mask data are trained through a designed convolutional neural network to achieve mask detection; such neural-network-based algorithms require a large number of samples to be labeled and trained, must be re-labeled and re-trained whenever the usage scene changes, consume many samples and much labeling and training time, and involve a large amount of computation. The proposed scheme comprises the following steps: step 1: real-time surveillance video acquisition; step 2: moving object detection; step 3: morphological processing; step 4: connected component labeling; and subsequent steps. The method and system can monitor the wearing of facial masks in real time with little time consumption, high real-time performance, high precision and low error, mark people who are not wearing a mask in the surveillance video, and send out a prompt signal.

Description

Facial mask identification method based on video monitoring
Technical Field
The invention relates to the technical field of biometric recognition, and in particular to a facial mask identification method based on video monitoring.
Background
Wearing a mask in public places during an epidemic helps prevent virus transmission; it must be observed by individuals but also supervised and managed by appropriate means. As the epidemic eases, most cities in the country are gradually resuming work and production, on the broad premise of continuing scientific epidemic prevention and control and ensuring safety and order. A video-based face mask wearing detection technique that monitors and alarms in real time on the mask wearing of people in public places can effectively improve inspection efficiency and expand the monitoring range, and is therefore of great significance for epidemic prevention and control. In addition, there are many other places in daily life where mask wearing detection is needed, such as operating rooms and dusty factories where masks should be worn; and in some key monitored places, such as ATM cash machines, suspicious persons may deliberately cover their faces with masks to avoid being captured by cameras. Video-based facial mask wearing detection therefore remains important for applications beyond epidemic prevention and control.
In scenes where facial coverings must be worn, it is necessary to confirm whether people are actually wearing them, so as to ensure their health and safety. The usual method is to check manually whether people are wearing facial coverings, but this requires a great deal of manpower and material resources and can also threaten the health and safety of the inspectors. How to automatically detect, by machine, whether people are wearing facial masks is therefore a technical problem that urgently needs to be solved.
Most existing mask wearing detection algorithms are based on deep learning: labeled face and mask data are trained through a designed convolutional neural network to achieve mask detection. Such neural-network-based algorithms require a large number of samples to be labeled and trained, must be re-labeled and re-trained whenever the usage scene changes, and therefore need many samples and much labeling and training time, with a large amount of computation.
Disclosure of Invention
The invention aims to overcome the defects of the prior art, in which mask wearing detection algorithms are mostly based on deep learning, labeled face and mask data are trained through a designed convolutional neural network to achieve mask detection, and the neural-network-based algorithms require a large number of samples for labeling and training, must be re-labeled and re-trained whenever the usage scene changes, need many samples and much labeling and training time, and involve a large amount of computation; to this end, a facial mask identification method based on video monitoring is proposed.
In order to achieve the purpose, the invention adopts the following technical scheme:
a facial mask identification method based on video monitoring comprises the following steps:
step 1: monitoring video acquisition in real time;
step 2: detecting a moving object;
and step 3: morphological treatment;
and 4, step 4: a connected domain label;
and 5: detecting a face area;
step 6: restoring the face image;
and 7: color space conversion;
and 8: extracting a skin color area: detecting a skin color area in an HSV color space;
and step 9: judging whether a face has a shelter or not;
step 10: carrying out face image binarization;
step 11: judging whether the shielding object is a mask: respectively calculating the horizontal projection and the vertical projection of the binary image, determining the position of a shielding object, and judging whether the shielding object is a mask or not according to the position;
step 12: marking the sub-image area in the video and sending a prompt signal;
step 13: and judging whether all the sub-images in the current frame image are detected, if so, starting to detect the next frame image, and otherwise, returning to the step S5 to detect the next sub-image of the current frame.
Preferably, in step 1, a monitoring camera is installed to capture images in real time, and the surveillance images of the monitoring camera are transmitted back in real time.
Preferably, in step 2, moving objects in the video are detected by combining a three-frame difference method with a Gaussian mixture background model.
Preferably, in step 3, the moving-object image is processed morphologically: the image is segmented and filled by dilation and erosion operations, and small-area regions are removed.
Preferably, in step 4, all connected components in the image are labeled, the image is segmented according to their number, and each segmented region is a candidate face-image region.
Preferably, in step 5, the horizontal and vertical projections of each candidate face-image region are computed, and the face-region image is cropped.
Preferably, in step 6, the cropped binary face-region image is superimposed on the current frame of the original video so that the face-region image is displayed in the video image.
Preferably, in step 7, the face-region image undergoes color space conversion from the RGB color space to the HSV color space.
Preferably, in step 9, the area ratio of the skin color region to the face region is computed, and whether the face is occluded is decided from this value.
Preferably, in step 10, the face image is binarized by color, with the occluder part as the foreground and the skin color part as the background.
Compared with the prior art, the invention has the following beneficial effects:
(1) the facial occlusion condition is detected from the proportion of facial skin color;
(2) the position of the facial occluder is determined using horizontal and vertical projections;
(3) mask wearing is detected by judging the position of the facial occluder;
(4) according to the detection result, persons not wearing a mask are marked in the image and the video, and a prompt signal is sent out.
The method can monitor the wearing of facial masks in real time with little time consumption, high real-time performance, high precision and low error, mark people who are not wearing a mask in the surveillance video, and send out a prompt signal.
Drawings
Fig. 1 is a flowchart of a facial mask recognition method based on video monitoring according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments.
Referring to fig. 1, a facial mask recognition method based on video monitoring includes the following steps:
step 1: real-time monitoring video acquisition: real-time image acquisition is carried out through the erected monitoring camera, and a monitoring image of the monitoring camera is transmitted back in real time; specifically, a monitoring camera is erected at a place where mask wearing condition monitoring is needed, the height and the angle of the camera are adjusted, and preparation is made for subsequent identification;
step 2: detecting a moving object: detecting a moving object in the video by adopting a method of combining a three-frame difference method and a mixed Gaussian background model; because the monitoring environment changes, a method capable of updating the background is required to be used for detecting the moving object, meanwhile, in order to ensure the real-time performance of detection, the data calculation amount and the running speed are considered, so that the method combining a three-frame difference method and a Gaussian mixture background model is selected to realize the detection, the detected background can be updated, the interference of the background change on the identification of the moving object is reduced, and the running speed is improved;
step 3: morphological processing: the moving-object image is processed morphologically, the image is segmented and filled by dilation and erosion operations, and small-area regions are removed; specifically, the image obtained after step 2 is usually not smooth, the detected moving-object regions contain noise holes, and small noise objects are scattered over the background; dilation and erosion effectively smooth the edges, fill the holes and remove the background noise points;
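A minimal illustrative sketch of this morphological clean-up follows; the kernel size and minimum-area threshold are assumed values.

```python
import cv2
import numpy as np

def clean_mask(mask, kernel_size=5, min_area=500):
    """Smooth a binary moving-object mask and drop small noise regions."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    # Opening (erosion then dilation) removes small background noise points.
    opened = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    # Closing (dilation then erosion) fills holes inside the foreground.
    closed = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)

    # Remove any remaining region whose area is below the threshold.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(closed)
    out = np.zeros_like(closed)
    for i in range(1, n):  # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            out[labels == i] = 255
    return out
```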
step 4: connected component labeling: all connected components in the image are labeled and the image is segmented according to their number, each segmented region being a candidate face-image region; specifically, after step 3 several moving-object regions may exist in the image, so connected components are labeled to mark each foreground region, allowing each region to be recognized and processed separately later; regions with small areas, such as people far from the camera, would interfere with the subsequent recognition and are removed from the image; the image is then segmented by the number of connected components so that each sub-image contains only one foreground region, and steps 5 to 12 are performed on each sub-image separately;
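One way this labeling and sub-image segmentation might be sketched, purely as an illustration (the minimum area is an assumed parameter), is:

```python
import cv2

def split_candidates(mask, min_area=500):
    """Label connected components and return one bounding box per candidate region."""
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    boxes = []
    for i in range(1, n):  # skip label 0 (background)
        x, y, w, h, area = stats[i]
        if area >= min_area:  # discard regions too small to recognize, e.g. distant people
            boxes.append((int(x), int(y), int(w), int(h)))
    return boxes
```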
and 5: detecting a face area, calculating horizontal projection and vertical projection of each face image alternative area, and intercepting a face area image; the human head region has the following characteristics: when viewed from the horizontal direction, the face is wider and more consistent in width, and the width of the cervical vertebra part is reduced; the width of the face is consistent and the shoulder part is widened when the face is seen from the vertical direction, so that the face area can be selected by calculating horizontal projection and vertical projection and combining the face characteristics by utilizing the characteristic;
step 6: restoring the face image, namely overlapping the intercepted face area binary image with a current frame image in the original video to display a face area image in the video image; since the black-and-white binary image is processed in steps 2 to 5, the step needs to superimpose the binary image processed in step 5 with the original color image, so that the extracted face area is colored, and the other parts are kept black;
and 7: the method comprises the following steps of color space conversion, namely performing color space conversion on a human face region image to realize conversion from an RGB color space to an HSV color space, wherein the HSV color space separates the brightness, hue and saturation of a color, and can reduce the influence of the ambient brightness and hue on the image, so that the image is converted from the RGB color space to the HSV color space;
and 8: extracting a skin color area, and detecting the skin color area in an HSV color space; extracting the face skin color area in the HSV color space according to the color characteristics of the skin color,
step 9: determining whether the face is occluded: the area ratio of the skin color region to the face region is computed, and whether the face is occluded is decided from this value; specifically, the area of the skin color region extracted in step 8 and the area of the face region extracted in step 5 are computed, together with their ratio; when the ratio is greater than a set threshold, the face is considered to have no occluder and therefore no mask is being worn, and the method proceeds directly to step 12; otherwise the face is considered occluded and the method proceeds to step 10 for further judgment;
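An illustrative sketch of this ratio test follows; the 0.75 threshold is an assumed value.

```python
import cv2

def is_occluded(skin, face_mask, ratio_thresh=0.75):
    """Face is treated as occluded when the skin/face area ratio falls below the threshold."""
    skin_area = cv2.countNonZero(skin)
    face_area = max(1, cv2.countNonZero(face_mask))
    return (skin_area / face_area) < ratio_thresh
```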
step 10: performing binarization on the face image, namely performing binarization processing on the face image according to colors, wherein a shelter part is a foreground, and a skin color part is a background; after judging whether the face has the sheltering object through the step 9, judging whether the sheltering object is a mask through the step 10 and the step 11, carrying out binarization processing on the face area according to the skin color result extracted in the step 8, wherein the skin color part is a background area, namely a black part, and the sheltering object part is a foreground part and a white part;
step 11: judging whether the shielding object is a mask, respectively calculating the horizontal projection and the vertical projection of the binary image, determining the position of the shielding object, and judging whether the shielding object is the mask according to the position; judging whether the mask is a face mask or not according to the position of the face mask, specifically, calculating the horizontal projection and the vertical projection of the binary image obtained in the step 11, if the lower half part of the horizontal projection is high in numerical value and exceeds a threshold value, and the vertical projection is distributed uniformly, judging that the face mask is worn, and entering a step 13; otherwise, judging that the mask is not worn, and entering step 12;
step 12: the sub-image region is marked in the video and a prompt signal is sent;
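A simple illustrative way to mark the region and emit a prompt, using assumed drawing parameters, is:

```python
import cv2

def mark_and_alert(frame_bgr, box, label="NO MASK"):
    """Draw a red box with a label on the frame and emit a simple prompt."""
    x, y, w, h = box
    cv2.rectangle(frame_bgr, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cv2.putText(frame_bgr, label, (x, max(0, y - 5)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)
    print("prompt: person without a mask detected")
```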
step 13: it is determined whether all sub-images in the current frame image have been examined; if so, detection of the next frame image begins, otherwise the method returns to step 5 to examine the next sub-image of the current frame.
This scheme combines face detection with skin color detection and uses the proportion and position of facial skin color to detect mask wearing automatically; it requires no large-scale database training, and after adjusting the parameters it can be applied to similar face occlusion detection tasks, such as detecting suspicious persons who cover their faces in public places.
The above is only a preferred embodiment of the present invention, but the scope of protection of the present invention is not limited thereto. Any equivalent replacement or modification of the technical solution and its inventive concept that can readily be conceived by a person skilled in the art within the technical scope disclosed by the present invention shall fall within the scope of protection of the present invention.

Claims (10)

1. A facial mask identification method based on video monitoring, characterized by comprising the following steps:
step 1: real-time surveillance video acquisition;
step 2: moving object detection;
step 3: morphological processing;
step 4: connected component labeling;
step 5: face region detection;
step 6: face image restoration;
step 7: color space conversion;
step 8: skin color region extraction: detecting the skin color region in the HSV color space;
step 9: determining whether the face is occluded;
step 10: face image binarization;
step 11: determining whether the occluder is a mask: computing the horizontal and vertical projections of the binary image, determining the position of the occluder, and judging from that position whether the occluder is a mask;
step 12: marking the sub-image region in the video and sending a prompt signal;
step 13: determining whether all sub-images in the current frame image have been examined; if so, starting to detect the next frame image, otherwise returning to step 5 to examine the next sub-image of the current frame.
2. The facial mask identification method based on video monitoring according to claim 1, wherein in step 1, a monitoring camera is installed to capture images in real time, and the surveillance images of the monitoring camera are transmitted back in real time.
3. The facial mask identification method based on video monitoring according to claim 1, wherein in step 2, moving objects in the video are detected by combining a three-frame difference method with a Gaussian mixture background model.
4. The facial mask identification method based on video monitoring according to claim 1, wherein in step 3, the moving-object image is processed morphologically: the image is segmented and filled by dilation and erosion operations, and small-area regions are removed.
5. The facial mask identification method based on video monitoring according to claim 1, wherein in step 4, all connected components in the image are labeled, the image is segmented according to their number, and each segmented region is a candidate face-image region.
6. The facial mask identification method based on video monitoring according to claim 1, wherein in step 5, the horizontal and vertical projections of each candidate face-image region are computed, and the face-region image is cropped.
7. The facial mask identification method based on video monitoring according to claim 1, wherein in step 6, the cropped binary face-region image is superimposed on the current frame of the original video so that the face-region image is displayed in the video image.
8. The facial mask identification method based on video monitoring according to claim 1, wherein in step 7, the face-region image undergoes color space conversion from the RGB color space to the HSV color space.
9. The facial mask identification method based on video monitoring according to claim 1, wherein in step 9, the area ratio of the skin color region to the face region is computed, and whether the face is occluded is decided from this value.
10. The facial mask identification method based on video monitoring according to claim 1, wherein in step 10, the face image is binarized by color, with the occluder part as the foreground and the skin color part as the background.
CN202011174256.6A 2020-10-28 2020-10-28 Facial mask identification method based on video monitoring Pending CN112287823A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011174256.6A CN112287823A (en) 2020-10-28 2020-10-28 Facial mask identification method based on video monitoring

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011174256.6A CN112287823A (en) 2020-10-28 2020-10-28 Facial mask identification method based on video monitoring

Publications (1)

Publication Number Publication Date
CN112287823A true CN112287823A (en) 2021-01-29

Family

ID=74373178

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011174256.6A Pending CN112287823A (en) 2020-10-28 2020-10-28 Facial mask identification method based on video monitoring

Country Status (1)

Country Link
CN (1) CN112287823A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114390315A (en) * 2022-03-22 2022-04-22 南京踏实信息科技有限公司 Fusion and analysis system of audio and video resources based on 5G communication

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105160297A (en) * 2015-07-27 2015-12-16 华南理工大学 Masked man event automatic detection method based on skin color characteristics
CN109002801A (en) * 2018-07-20 2018-12-14 燕山大学 A kind of face occlusion detection method and system based on video monitoring
CN111428681A (en) * 2020-04-09 2020-07-17 福建省通通发科技发展有限公司 Intelligent epidemic prevention system

Similar Documents

Publication Publication Date Title
TWI409718B (en) Method of locating license plate of moving vehicle
KR101215948B1 (en) Image information masking method of monitoring system based on face recognition and body information
CN103069434B (en) For the method and system of multi-mode video case index
CN105160297B (en) Masked man's event automatic detection method based on features of skin colors
CN109117827B (en) Video-based method for automatically identifying wearing state of work clothes and work cap and alarm system
CN105187785B (en) A kind of across bayonet pedestrian's identifying system and method based on choice of dynamical notable feature
CN104361327A (en) Pedestrian detection method and system
CN104966304A (en) Kalman filtering and nonparametric background model-based multi-target detection tracking method
KR101709751B1 (en) An automatic monitoring system for dangerous situation of persons in the sea
CN109635758A (en) Wisdom building site detection method is dressed based on the high altitude operation personnel safety band of video
CN106339657B (en) Crop straw burning monitoring method based on monitor video, device
Lin et al. Collaborative pedestrian tracking and data fusion with multiple cameras
CN112183472A (en) Method for detecting whether test field personnel wear work clothes or not based on improved RetinaNet
CN105844245A (en) Fake face detecting method and system for realizing same
CN110096945B (en) Indoor monitoring video key frame real-time extraction method based on machine learning
CN112149513A (en) Industrial manufacturing site safety helmet wearing identification system and method based on deep learning
CN111091098A (en) Training method and detection method of detection model and related device
KR20200058260A (en) Apparatus for CCTV Video Analytics Based on Object-Image Recognition DCNN and Driving Method Thereof
CN112287823A (en) Facial mask identification method based on video monitoring
Surkutlawar et al. Shadow suppression using RGB and HSV color space in moving object detection
CN113963373A (en) Video image dynamic detection and tracking algorithm based system and method
CN113111771A (en) Method for identifying unsafe behaviors of power plant workers
CN109241847A (en) The Oilfield Operation District safety monitoring system of view-based access control model image
KR102171384B1 (en) Object recognition system and method using image correction filter
CN112488031A (en) Safety helmet detection method based on color segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination