CN111931661A - Real-time mask wearing detection method based on convolutional neural network - Google Patents
- Publication number
- CN111931661A (application CN202010806221.3A)
- Authority
- CN
- China
- Prior art keywords
- detection
- mask
- target
- face
- layer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Abstract
The invention discloses a real-time mask wearing detection method based on a convolutional neural network, characterized by comprising the following steps: 1) establishing a detection model based on the YOLO algorithm; 2) processing the video stream; 3) training the detection model; 4) screening the results; 5) determining the identity of the detected person. The method is simple to implement, efficient and fast in detection, meets real-time detection requirements, and has good applicability.
Description
Technical Field
The invention relates to the field of computer vision and the field of image processing, in particular to a real-time mask wearing detection method based on a convolutional neural network.
Background
During epidemic prevention and control, basic measures such as body-temperature monitoring and mask wearing must be enforced in crowded places such as railway stations and airports. At present this relies mainly on staff who manually check body temperature and supervise passengers' mask wearing, which wastes considerable human resources, is inefficient, and exposes the staff to infection through close contact when foot traffic is heavy.
With the widespread deployment of surveillance cameras and the rapid development of computer vision, a camera connected to a computer can detect whether passers-by are wearing masks while an infrared thermal-imaging system measures body temperature, achieving the goal of contactless automatic detection.
In practice, however, images captured by a surveillance camera may suffer from uneven lighting on the face, and mask colors vary widely, which can lower the detection success rate.
With the rapid development of deep learning in recent years, object-detection algorithms have advanced steadily toward higher speed and higher accuracy, and deep convolutional neural networks in particular perform excellently in computer vision. Current popular algorithms fall mainly into two classes. One class comprises the region-proposal-based R-CNN algorithms (R-CNN, Fast R-CNN), which detect in two steps: a heuristic method (selective search) or a CNN (the RPN) first generates candidate regions, which are then classified and regressed. The other class comprises one-stage detection algorithms such as YOLO and SSD, which directly predict the classes and positions of different objects with a single CNN, as disclosed in the literature "Liu L, Ouyang W, Wang X, et al.
Broadly speaking, YOLOv3 detects about 1000 times faster than R-CNN and 100 times faster than Fast R-CNN, with only a slight difference in detection and localization accuracy. Combining the advantages of a deep-learning convolutional-neural-network model, a convolutional neural network can therefore be applied to the mask-wearing detection scenario to achieve high-accuracy real-time detection.
Disclosure of Invention
The invention aims to provide a real-time mask wearing detection method based on a convolutional neural network that addresses the defects of the prior art. The method is simple to implement, efficient and fast in detection, meets real-time detection requirements, and has good applicability.
The technical scheme for realizing the purpose of the invention is as follows:
a real-time mask wearing detection method based on a convolutional neural network comprises the following steps:
1) establishing a detection model based on a YOLO algorithm, wherein the model is provided with a five-layer convolutional neural network based on ResNet, a three-layer maximum pooling layer based on SPP and a three-layer target detection layer;
2) processing the video stream: inputting a video stream into the detection model established in step 1), and performing data enhancement on the collected mask-wearing pictures to form a picture library comprising two parts, pictures and annotation data, wherein the annotation data comprise: masked, indicating whether a mask is worn, and X, Y, H, L, respectively denoting the X coordinate and Y coordinate of the annotation box's center and the box's height and length;
3) training the detection model: selecting a number of pictures with and without masks from the picture library as the training data set, using the remaining pictures as the test data set, inputting the training data set into the detection model, and training it through the following process:
3-1) extracting the target characteristics of the face and the mask in the test data set by adopting a five-layer convolution network layer based on ResNet;
3-2) further extracting the target characteristics of the face and the mask in the test data set by adopting three maximum pooling layers based on the SPP;
3-3) predicting the bounding-box coordinates of the face and mask targets in the test data set from the extracted face and mask target features by adopting the three target detection layers with a multi-scale prediction strategy, and calculating a target confidence score and the face and mask target class probabilities, wherein the mask target class probability is calculated separately by the logistic-regression sigmoid function;
3-4) applying the non-maximum suppression method to the bounding-box coordinates, target confidence scores and class probabilities, with the mask target classification probability computed separately by the logistic-regression sigmoid function, screening out the detection results for the face and mask targets, and determining the detection model;
4) screening the results: inputting the test data into the detection model, which has been trained for more than ten thousand iterations, outputting the mask-wearing detection results on the test data set, and storing the face pictures detected as wearing a mask and those detected as not wearing one in separate categories;
5) determining the identity of the detected person: for pictures that the detection model judges as not wearing a mask, adopting FaceNet to compare the face against the faces in a face database and determine the detected person's identity.
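Step 5) does not spell out the comparison procedure; below is a minimal sketch of FaceNet-style screening, assuming face embeddings have already been extracted by a FaceNet model (the `identify` helper, the example database and the 1.1 distance threshold are illustrative assumptions, not part of the invention):

```python
import numpy as np

def identify(query_emb, database, threshold=1.1):
    """Return the database identity whose embedding is closest to the
    query embedding, or None if no distance falls under the threshold.
    Embeddings are assumed L2-normalized, as FaceNet produces."""
    best_name, best_dist = None, float("inf")
    for name, emb in database.items():
        dist = np.linalg.norm(query_emb - emb)  # Euclidean distance
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else None
```

In practice the threshold would be tuned on a verification set; FaceNet's triplet loss is trained so that same-identity pairs fall below such a margin.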
The detection scales of the three target detection layers in the step 1) are respectively 13 × 13 pixels, 26 × 26 pixels and 104 × 104 pixels, and three corresponding anchor boxes with different sizes are respectively generated for each target detection layer by adopting a K-means clustering algorithm according to the detection scale of each target detection layer.
The data enhancement in step 2) comprises flipping, cropping, random erasing and color distortion. Random erasing means randomly selecting a rectangular area in an image and replacing its pixels with random values; erasing part of a target simulates occlusion and improves the model's generalization ability, because during training the model must recognize the target from local features alone, which strengthens its grasp of local target features and weakens its dependence on the full set of target features, so a model trained on such data is more robust when the mask is partially occluded. Color distortion comprises changing the hue and brightness of pictures, which helps the model recognize masks of different colors accurately.
The technical scheme has the following advantages: the method is simple to implement, efficient and fast in detection, and meets real-time requirements; it can detect masks under occlusion, giving good applicability; the five-layer ResNet-based convolutional network effectively controls gradient propagation, avoiding training failure caused by exploding or vanishing gradients; and it effectively solves the mask-wearing detection problem in a video stream.
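The gradient-propagation advantage comes from ResNet's shortcut connections, where a block computes F(x) + x so gradients can flow through the identity path around F. A toy fully-connected sketch of the idea (the weights and shapes are illustrative stand-ins for the patent's convolutional layers):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Toy residual block: output = F(x) + x, where F is a small
    two-layer transform. The identity shortcut (+ x) gives gradients
    a direct path, mitigating vanishing/exploding gradients."""
    return relu(x @ w1) @ w2 + x
```

With both weight matrices at zero the block reduces to the identity, which is exactly what makes very deep residual stacks easy to optimize.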
Drawings
FIG. 1 is a schematic block flow diagram of an example method;
FIG. 2 is a diagram illustrating an example of a library of pictures in an embodiment;
FIG. 3 is a diagram illustrating the variation of loss values in the model training process in the embodiment;
fig. 4 is a schematic diagram showing the result of mask wearing detection in the embodiment.
Detailed Description
The invention will be further illustrated by the following figures and examples, but is not limited thereto.
Example (b):
referring to fig. 1, a real-time mask wearing detection method based on a convolutional neural network includes the following steps:
1) establishing a detection model based on a YOLO algorithm, wherein the model is provided with a five-layer convolutional neural network based on ResNet, a three-layer maximum pooling layer based on SPP and a three-layer target detection layer;
2) processing the video stream: inputting a video stream into the detection model established in step 1), and performing data enhancement on the collected mask-wearing pictures to form a picture library comprising two parts, pictures and annotation data, wherein the annotation data comprise: masked, indicating whether a mask is worn, and X, Y, H, L, respectively denoting the X coordinate and Y coordinate of the annotation box's center and the box's height and length, as shown in FIG. 2;
3) training the detection model: selecting a number of pictures with and without masks from the picture library as the training data set, using the remaining pictures as the test data set, inputting the training data set into the detection model, and training it through the following process:
3-1) extracting the target characteristics of the face and the mask in the test data set by adopting a five-layer convolution network layer based on ResNet;
3-2) further extracting the target characteristics of the face and the mask in the test data set by adopting three maximum pooling layers based on the SPP;
3-3) predicting the bounding-box coordinates of the face and mask targets in the test data set from the extracted face and mask target features by adopting the three target detection layers with a multi-scale prediction strategy, and calculating a target confidence score and the face and mask target class probabilities, wherein the mask target class probability is calculated separately by the logistic-regression sigmoid function;
3-4) applying the non-maximum suppression method to the bounding-box coordinates, target confidence scores and class probabilities, with the mask target classification probability computed separately by the logistic-regression sigmoid function, screening out the detection results for the face and mask targets, and determining the detection model;
4) screening the results: inputting the test data into the detection model, which has been trained for more than ten thousand iterations (the learning rate being decayed at iterations 10000 and 15000), outputting the mask-wearing detection results on the test data set, and storing the face pictures detected as wearing a mask and those detected as not wearing one in separate categories;
5) determining the identity of the detected person: for pictures that the detection model judges as not wearing a mask, adopting FaceNet to compare the face against the faces in a face database and determine the detected person's identity.
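The screening in step 3-4) is standard greedy non-maximum suppression; a minimal sketch, assuming boxes are given as (x1, y1, x2, y2) corners (the 0.45 IoU threshold is a typical YOLO default, not stated in the text):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.45):
    """Greedy NMS: keep the highest-scoring box, drop boxes that
    overlap it beyond iou_thresh, and repeat on the remainder."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep
```

The returned indices point at the surviving face/mask detections that the embodiment then classifies and stores.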
The detection scales of the three target detection layers in the step 1) are respectively 13 × 13 pixels, 26 × 26 pixels and 104 × 104 pixels, and three corresponding anchor boxes with different sizes are respectively generated for each target detection layer by adopting a K-means clustering algorithm according to the detection scale of each target detection layer.
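The anchor generation just described can be sketched as YOLO-style k-means over the (width, height) of labelled boxes, using 1 − IoU as the distance and comparing each box to a centroid as if they shared a corner at the origin (the function name and toy data are illustrative):

```python
import numpy as np

def kmeans_anchors(wh, k, iters=100, seed=0):
    """Cluster (width, height) pairs of labelled boxes into k anchor
    sizes, YOLO-style: assignment maximizes IoU (i.e. minimizes 1 - IoU)
    between each box and each centroid, both anchored at the origin."""
    rng = np.random.default_rng(seed)
    centroids = wh[rng.choice(len(wh), k, replace=False)]
    for _ in range(iters):
        inter = np.minimum(wh[:, None, 0], centroids[None, :, 0]) * \
                np.minimum(wh[:, None, 1], centroids[None, :, 1])
        union = wh[:, None, 0] * wh[:, None, 1] + \
                centroids[None, :, 0] * centroids[None, :, 1] - inter
        assign = np.argmax(inter / union, axis=1)  # best-IoU centroid
        centroids = np.array([wh[assign == j].mean(axis=0)
                              if (assign == j).any() else centroids[j]
                              for j in range(k)])
    return centroids[np.argsort(centroids.prod(axis=1))]  # sort by area
```

Run per detection layer (k = 3 here) this yields the three anchor sizes assigned to each of the 13 × 13, 26 × 26 and 104 × 104 scales.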
The data enhancement in step 2) comprises flipping, cropping, random erasing and color distortion. Random erasing means randomly selecting a rectangular area in an image and replacing its pixels with random values; erasing part of a target simulates occlusion and improves the model's generalization ability, because during training the model must recognize the target from local features alone, which strengthens its grasp of local target features and weakens its dependence on the full set of target features, so a model trained on such data is more robust when the mask is partially occluded. Color distortion comprises changing the hue and brightness of pictures, which helps the model recognize masks of different colors accurately.
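The random-erasing augmentation described above can be sketched as follows (a minimal NumPy version; the function name and the fraction bounds are illustrative choices, not taken from the embodiment):

```python
import numpy as np

def random_erase(img, min_frac=0.1, max_frac=0.3, rng=None):
    """Randomly erase one rectangle of the image with random pixel
    values, simulating partial occlusion of the face/mask target."""
    rng = rng or np.random.default_rng()
    h, w = img.shape[:2]
    eh = int(h * rng.uniform(min_frac, max_frac))  # erased height
    ew = int(w * rng.uniform(min_frac, max_frac))  # erased width
    y = rng.integers(0, h - eh + 1)                # top-left corner
    x = rng.integers(0, w - ew + 1)
    out = img.copy()
    out[y:y + eh, x:x + ew] = rng.integers(
        0, 256, (eh, ew) + img.shape[2:], dtype=img.dtype)
    return out
```

Color distortion would be implemented analogously by jittering hue and brightness channels before training.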
In this example, the software environment comprises the Darknet deep-learning framework, the Windows 10 operating system, NVIDIA GeForce driver version 442.50, CUDA Toolkit version 10.1, and the CuDNN neural-network accelerator version 7.6.4. The training settings are as follows: 64 samples per training iteration, divided into 16 mini-batches; momentum 0.949; weight-decay regularization coefficient 0.0005; maximum number of iterations 20000; initial learning rate 0.0050, decayed at iterations 10000 and 15000 to 0.0025 and 0.0005 respectively.
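These settings map onto the `[net]` section of a standard Darknet `.cfg` file. A sketch follows; the key names are the standard Darknet ones, and the `scales` multipliers are inferred from the stated decayed learning rates (an assumption, since the text gives only the resulting rates):

```ini
# [net] section matching the embodiment's training settings
batch=64              # samples per iteration
subdivisions=16       # each batch split into 16 mini-batches
momentum=0.949
decay=0.0005          # weight-decay regularization coefficient
learning_rate=0.0050
max_batches=20000     # maximum iterations
policy=steps
steps=10000,15000     # decay the learning rate at these iterations
scales=.5,.2          # 0.0050 -> 0.0025 -> 0.0005
```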
To ensure the accuracy of the detection effect after model training, this embodiment selects real face pictures as training and testing samples. Mask-wearing pictures collected from the Internet are annotated with the LabelImg visual annotation tool: each face is marked with a rectangular box, and the boxes are divided into two classes, mask worn and mask not worn. In total 4000 pictures are annotated as the mask-wearing detection data set, of which 3200 serve as the training set and 800 as the test set. After training, the loss value of each iteration is read from the log file and plotted as a curve, shown in FIG. 3; as the curve shows, after 17500 iterations the loss stabilizes at about 0.4. The weight file obtained after 20000 iterations is then tested on the test set. TP denotes the number of people correctly detected as wearing a mask, FP the number of people falsely detected as wearing a mask, and FN the number of pedestrians whose mask wearing was missed; precision and recall are defined by formula (1) and formula (2). Detailed test results are given in Table 1, and part of the detection effect on the test set is shown in FIG. 4: a face wearing a mask is marked with a box labeled "masked", and a face without a mask is marked with a box labeled "no-masked". For fully exposed faces the model of this embodiment gives correct detection results; for occluded faces its recognition ability is weaker, but in such cases the face is so heavily covered that even a human can hardly tell whether a mask is worn, nor determine identity, so heavily occluded faces are avoided during detection. The final precision is 0.985, the recall is 0.977, and the average detection time is 35.2 ms, showing that the method of this embodiment achieves a good detection effect:
TABLE 1 test results
TP | FP | FN |
---|---|---|
987 | 15 | 23 |
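Formulas (1) and (2) are the standard precision and recall definitions; evaluating them on the Table 1 counts reproduces the reported figures:

```python
# Precision and recall as referenced by formulas (1) and (2),
# evaluated on the Table 1 counts.
TP, FP, FN = 987, 15, 23
precision = TP / (TP + FP)   # formula (1): TP / (TP + FP)
recall = TP / (TP + FN)      # formula (2): TP / (TP + FN)
print(round(precision, 3), round(recall, 3))  # 0.985 0.977
```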
Claims (3)
1. A real-time mask wearing detection method based on a convolutional neural network is characterized by comprising the following steps:
1) establishing a detection model based on a YOLO algorithm, wherein the model is provided with a five-layer convolutional neural network based on ResNet, a three-layer maximum pooling layer based on SPP and a three-layer target detection layer;
2) processing the video stream: inputting a video stream into the detection model established in step 1), and performing data enhancement on the collected mask-wearing pictures to form a picture library comprising two parts, pictures and annotation data, wherein the annotation data comprise: masked, indicating whether a mask is worn, and X, Y, H, L, respectively denoting the X coordinate and Y coordinate of the annotation box's center and the box's height and length;
3) training a detection model: selecting a plurality of pictures with or without masks from a picture library as training data sets, using the rest pictures as test data sets, inputting the training data sets into a detection model, training the detection model, and carrying out the following processes:
3-1) extracting the target characteristics of the face and the mask in the test data set by adopting a five-layer convolution network layer based on ResNet;
3-2) further extracting the target characteristics of the face and the mask in the test data set by adopting three maximum pooling layers based on the SPP;
3-3) predicting the bounding-box coordinates of the face and mask targets in the test data set from the extracted face and mask target features by adopting the three target detection layers with a multi-scale prediction strategy, and calculating a target confidence score and the face and mask target class probabilities, wherein the mask target class probability is calculated separately by the logistic-regression sigmoid function;
3-4) applying the non-maximum suppression method to the bounding-box coordinates, target confidence scores and class probabilities, with the mask target classification probability computed separately by the logistic-regression sigmoid function, screening out the detection results for the face and mask targets, and determining the detection model;
4) screening the results: inputting the test data into the detection model, which has been trained for more than ten thousand iterations, outputting the mask-wearing detection results on the test data set, and storing the face pictures detected as wearing a mask and those detected as not wearing one in separate categories;
5) determining the identity of the detected person: for pictures that the detection model judges as not wearing a mask, adopting FaceNet to compare the face against the faces in a face database and determine the detected person's identity.
2. The real-time mask wearing detection method based on the convolutional neural network as claimed in claim 1, wherein the detection scales of the three target detection layers in step 1) are respectively 13 × 13, 26 × 26 and 104 × 104 pixels, and a K-means clustering algorithm is adopted to generate three anchor boxes with different sizes for each target detection layer according to the detection scale of each target detection layer.
3. The convolutional neural network-based real-time mask wear detection method according to claim 1, wherein the data enhancement in step 2) comprises: the method comprises the following steps of turning over, cutting, randomly erasing and color distortion, wherein the randomly erasing refers to randomly selecting a rectangular area in an image and erasing pixels of the rectangular area by using random values, the characteristic of a randomly erased target is adopted to simulate the shielding effect, and the color distortion comprises changing the tone and the brightness of the picture.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010806221.3A CN111931661A (en) | 2020-08-12 | 2020-08-12 | Real-time mask wearing detection method based on convolutional neural network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010806221.3A CN111931661A (en) | 2020-08-12 | 2020-08-12 | Real-time mask wearing detection method based on convolutional neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111931661A true CN111931661A (en) | 2020-11-13 |
Family
ID=73310696
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010806221.3A Pending CN111931661A (en) | 2020-08-12 | 2020-08-12 | Real-time mask wearing detection method based on convolutional neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111931661A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113553922A (en) * | 2021-07-05 | 2021-10-26 | 安徽中医药大学 | Mask wearing state detection method based on improved convolutional neural network |
WO2022213348A1 (en) * | 2021-04-09 | 2022-10-13 | 鸿富锦精密工业(武汉)有限公司 | Recognition method and apparatus for detecting face with mask, and computer storage medium |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018188453A1 (en) * | 2017-04-11 | 2018-10-18 | 腾讯科技(深圳)有限公司 | Method for determining human face area, storage medium, and computer device |
US20190102646A1 (en) * | 2017-10-02 | 2019-04-04 | Xnor.ai Inc. | Image based object detection |
CN109800665A (en) * | 2018-12-28 | 2019-05-24 | 广州粤建三和软件股份有限公司 | A kind of Human bodys' response method, system and storage medium |
CN109934081A (en) * | 2018-08-29 | 2019-06-25 | 厦门安胜网络科技有限公司 | A kind of pedestrian's attribute recognition approach, device and storage medium based on deep neural network |
CN110119686A (en) * | 2019-04-17 | 2019-08-13 | 电子科技大学 | A kind of safety cap real-time detection method based on convolutional neural networks |
CN110135476A (en) * | 2019-04-28 | 2019-08-16 | 深圳市中电数通智慧安全科技股份有限公司 | A kind of detection method of personal safety equipment, device, equipment and system |
CN110569754A (en) * | 2019-08-26 | 2019-12-13 | 江西航天鄱湖云科技有限公司 | Image target detection method, device, storage medium and equipment |
CN111368688A (en) * | 2020-02-28 | 2020-07-03 | 深圳市商汤科技有限公司 | Pedestrian monitoring method and related product |
CN111414887A (en) * | 2020-03-30 | 2020-07-14 | 上海高重信息科技有限公司 | Secondary detection mask face recognition method based on YOLOv3 algorithm |
CN111488804A (en) * | 2020-03-19 | 2020-08-04 | 山西大学 | Labor insurance product wearing condition detection and identity identification method based on deep learning |
CN111507199A (en) * | 2020-03-25 | 2020-08-07 | 杭州电子科技大学 | Method and device for detecting mask wearing behavior |
Non-Patent Citations (2)
Title |
---|
BOCHKOVSKIY A, WANG C Y, LIAO H Y M: "YOLOv4: Optimal Speed and Accuracy of Object Detection", arXiv *
GUAN Junlin, ZHI Xin: "Mask wearing detection method based on the YOLOv4 convolutional neural network", Modern Information Technology *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110738101B (en) | Behavior recognition method, behavior recognition device and computer-readable storage medium | |
US20200285896A1 (en) | Method for person re-identification based on deep model with multi-loss fusion training strategy | |
CN108921159B (en) | Method and device for detecting wearing condition of safety helmet | |
CN106960195B (en) | Crowd counting method and device based on deep learning | |
CN104063722B (en) | A kind of detection of fusion HOG human body targets and the safety cap recognition methods of SVM classifier | |
TWI754806B (en) | System and method for locating iris using deep learning | |
CN108810620A (en) | Identify method, computer equipment and the storage medium of the material time point in video | |
CN108171112A (en) | Vehicle identification and tracking based on convolutional neural networks | |
CN104615986B (en) | The method that pedestrian detection is carried out to the video image of scene changes using multi-detector | |
Bo et al. | Particle pollution estimation from images using convolutional neural network and weather features | |
CN114241548A (en) | Small target detection algorithm based on improved YOLOv5 | |
CN111126325A (en) | Intelligent personnel security identification statistical method based on video | |
CN112534470A (en) | System and method for image-based inspection of target objects | |
CN112149512A (en) | Helmet wearing identification method based on two-stage deep learning | |
CN111079518B (en) | Ground-falling abnormal behavior identification method based on law enforcement and case handling area scene | |
CN111931661A (en) | Real-time mask wearing detection method based on convolutional neural network | |
CN108648211A (en) | A kind of small target detecting method, device, equipment and medium based on deep learning | |
CN111507134A (en) | Human-shaped posture detection method and device, computer equipment and storage medium | |
JPH06333054A (en) | System for detecting target pattern within image | |
Thipakorn et al. | Egg weight prediction and egg size classification using image processing and machine learning | |
CN114120317B (en) | Optical element surface damage identification method based on deep learning and image processing | |
CN113221956B (en) | Target identification method and device based on improved multi-scale depth model | |
CN107992854A (en) | Forest Ecology man-machine interaction method based on machine vision | |
CN114419659A (en) | Method for detecting wearing of safety helmet in complex scene | |
CN115861715A (en) | Knowledge representation enhancement-based image target relation recognition algorithm |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
Application publication date: 20201113 |