CN113343846A - Reflective garment detection system based on depth layer feature fusion - Google Patents

Reflective garment detection system based on depth layer feature fusion

Info

Publication number
CN113343846A
Authority
CN
China
Prior art keywords
reflective
image
feature
unit
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110627024.XA
Other languages
Chinese (zh)
Other versions
CN113343846B (en)
Inventor
范晨翔
张笑钦
曹少丽
赵丽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wenzhou University
Original Assignee
Wenzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wenzhou University filed Critical Wenzhou University
Priority to CN202110627024.XA priority Critical patent/CN113343846B/en
Publication of CN113343846A publication Critical patent/CN113343846A/en
Application granted granted Critical
Publication of CN113343846B publication Critical patent/CN113343846B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent

Abstract

The invention provides a reflective garment detection system based on depth layer feature fusion, comprising an image acquisition unit, a personnel detection unit, a reflective garment identification unit and an alarm reminding unit. The image acquisition unit acquires surveillance video images of a working area and preprocesses them; the personnel detection unit analyzes the preprocessed image information and identifies the persons in the images based on the analysis result; the reflective garment identification unit uses a color-identification-based method to detect reflective garments according to a color set by the user or a color identified automatically, and judges whether workers are wearing reflective garments; the alarm reminding unit controls the alarm device to give an alarm when a person not wearing a reflective garment is identified in the working area. The system improves the precision and speed of detecting whether persons wear reflective garments, and can identify different types of reflective garments according to user-set standards.

Description

Reflective garment detection system based on depth layer feature fusion
Technical Field
The invention relates to the technical field of safety monitoring, in particular to a reflective garment detection system based on depth layer feature fusion.
Background
A reflective garment is a garment that serves as a warning under various light conditions. Common varieties include reflective work clothes, reflective vests and reflective raincoats. Reflective garments generally consist of a conspicuously colored base fabric combined with fluorescent and retroreflective materials. The fluorescence and retroreflection ensure that the wearer forms a strong contrast with the surrounding environment both in daytime and at night under illumination, thereby providing safety protection. In industrial production and construction, work clothes and helmets play a significant role in preventing safety accidents; therefore, properly wearing work clothes and helmets is a necessary measure for safe production.
In view of the above, a reflective garment detection system based on depth layer feature fusion that improves the precision and speed of wear detection, gives timely alarms, and identifies reflective garments of different colors is a problem urgently to be solved by those skilled in the art.
Disclosure of Invention
In order to solve the above problems and needs, the invention provides a reflective garment detection system based on depth layer feature fusion, which solves the above technical problems by adopting the following technical solutions.
In order to achieve the above purpose, the invention provides the following technical solution: a reflective garment detection system based on depth layer feature fusion, comprising: an image acquisition unit, a personnel detection unit, a reflective garment identification unit and an alarm reminding unit;
the image acquisition unit is used for acquiring surveillance video images of a working area and preprocessing them, wherein the preprocessing comprises: acquiring images continuously according to a user instruction, performing framing and graying on the received surveillance video images, and inputting the processed images into the personnel detection unit for personnel detection;
the personnel detection unit is used for analyzing the preprocessed image information, identifying the persons in the images based on the analysis result, and tracking and detecting those persons;
the reflective garment identification unit is used for detecting reflective garments with a color-identification-based method, according to a color set by the user or a color identified automatically, and for judging whether workers are wearing reflective garments;
the alarm reminding unit is used for controlling the alarm device to give an alarm when the reflective garment identification unit identifies a person in the working area who is not wearing a reflective garment; otherwise, detection continues.
Furthermore, the image acquisition unit comprises a plurality of high-definition cameras and a camera parameter control module. The high-definition cameras acquire image information in the monitored working area and preprocess it; the camera parameter control module adjusts the angle and focal-length parameters of the cameras, compensates for lighting, controls the cameras' power supply and working switches, and performs equipment fault detection.
Further, the color-identification-based method comprises: obtaining the minimum rectangle that can enclose the detection target's color block in the current frame image, extracting the RGB image of the target object within that rectangle, traversing the image to compute the pixel value of each color, setting a threshold for each pixel value, comparing each pixel value against its threshold, and outputting the color of the detection target.
Further, before the preprocessed image information is analyzed, a personnel detection model based on the Faster R-CNN method is constructed: a data set of person pictures from construction sites is obtained, the data set is split proportionally into a training set and a test set, and the personnel detection model is trained with the training set.
Still further, training the personnel detection model comprises: inputting the preprocessed image information of the training set into the convolutional layers, and performing feature extraction on the input construction-site person pictures with a Faster R-CNN-based feature extraction network; and classifying and identifying the extracted feature maps to obtain an identification result map.
Furthermore, the data set is processed with data enhancement techniques, the pictures are labeled manually, and the labeling results are finally made into the pascal_voc data set format, wherein the data enhancement techniques comprise random-angle rotation, vertical flipping, random cropping, Gaussian noise and mirroring.
Furthermore, the feature extraction network adopts a 50-layer residual network (ResNet50) to extract image features: the processed image is fed into the ResNet backbone to obtain feature maps, candidate regions are generated with an FPN-based region proposal network, and a feature map of each candidate region is generated on each picture by combining the FPN with the obtained feature maps. The candidate-region feature maps of different sizes are passed in turn through an ROI Pooling layer to obtain output feature maps of fixed size. The output feature maps are processed by two fully connected layers into feature vectors, which are input into two sibling output layers, namely a classification layer that judges whether the target is a person and a bounding-box regression layer that fine-tunes the position and size of the ROI box, outputting the category of the candidate region and its exact position.
Furthermore, the ROI Pooling process converts RoIs of different sizes in the input feature maps into output feature maps of fixed size by a pooling method. RoIs of different sizes use different feature layers: when the object is larger, higher-level features are used; when the object is smaller, lower-level features are used. RoIs of different scales are assigned to different pyramid levels through the feature pyramid network using a coefficient k. An RoI of width w and height h is assigned to the FPN level
$k = \left\lfloor k_0 + \log_2\!\left(\sqrt{wh}/224\right) \right\rfloor$
where w and h are the width and height of the RoI, k0 = 5, and k corresponds to the P level in the FPN.
Furthermore, the anchors of the FPN structure use 5 prediction scales (32×32, 64×64, 128×128, 256×256 and 512×512) and 3 aspect ratios (1:2, 1:1 and 2:1), corresponding to the pyramid levels P2, P3, P4, P5 and P6 of ResNet50; the 15 anchor types together predict the target objects and background in the construction-site person pictures, generating the target candidate boxes of interest.
Further, the alarm device comprises a signal receiving module and an audible-and-visual alarm module. The signal receiving module receives the identification result sent by the reflective garment identification unit; when a person not wearing a reflective garment is identified, the audible-and-visual alarm module is controlled to give voice and light alarms. The audible-and-visual alarm module comprises a speech synthesis chip, a controller, a speaker and an alarm lamp; the speaker is electrically connected to the controller through the speech synthesis chip, and the alarm lamp is connected to the controller and gives light alarms in different modes according to the alarm signal output by the controller.
According to the above technical scheme, the invention has the following beneficial effects: the system improves the precision and speed of reflective garment wear detection, gives timely alarms, and can identify reflective garments of different colors according to user-set standards.
In addition to the above objects, features and advantages, preferred embodiments of the invention are described in more detail below with reference to the accompanying drawings, so that its features and advantages can be easily understood.
Drawings
In order to illustrate the embodiments of the invention or the technical solutions in the prior art more clearly, the drawings used in their description are briefly introduced below. The drawings illustrate only some embodiments of the invention and do not limit it to those embodiments.
Fig. 1 is a schematic structural diagram of a reflective garment detection system based on depth-layer feature fusion.
Fig. 2 is a schematic diagram illustrating the specific steps of training the personnel detection model in this embodiment.
Fig. 3 is a network structure diagram of the FPN-based region proposal network (RPN) in this embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings of specific embodiments of the present invention. Like reference symbols in the various drawings indicate like elements. It should be noted that the described embodiments are only some embodiments of the invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the described embodiments of the invention without any inventive step, are within the scope of protection of the invention.
In industrial production and construction, work clothes such as reflective garments play a significant role in preventing safety accidents: they form a strong contrast with the surrounding environment, are easily noticed, and provide safety protection. Accurately detecting whether workers at construction or production sites are wearing reflective garments is therefore an important measure for ensuring construction safety. As shown in figs. 1 to 3, a reflective garment detection system based on depth layer feature fusion is provided, which specifically comprises: an image acquisition unit, a personnel detection unit, a reflective garment identification unit and an alarm reminding unit. The image acquisition unit acquires surveillance video images of the working area and preprocesses them; the preprocessing comprises acquiring images continuously according to a user instruction, performing framing and graying on the received surveillance video images, and inputting the processed images into the personnel detection unit for personnel detection. The image acquisition unit comprises a plurality of high-definition cameras and a camera parameter control module: the cameras acquire image information in the monitored working area and preprocess it, while the camera parameter control module adjusts the cameras' angle and focal-length parameters, compensates for lighting, controls their power supply and working switches, and performs equipment fault detection.
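For illustration, the following is a minimal sketch of the preprocessing described above (framing and graying), assuming OpenCV is available; the file name and frame stride are illustrative placeholders rather than values taken from this disclosure:

```python
# Hedged sketch: sample frames from a surveillance video and gray them.
# "work_area.mp4" and frame_stride are illustrative assumptions.
import cv2

def preprocess_video(video_path, frame_stride=5):
    """Yield grayscale frames sampled from a surveillance video."""
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % frame_stride == 0:
            # Graying: convert the BGR frame to a single-channel image.
            yield cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        index += 1
    cap.release()

for gray in preprocess_video("work_area.mp4"):
    pass  # each gray frame would go to the personnel detection unit
```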
The personnel detection unit analyzes the preprocessed image information, identifies the persons in the images based on the analysis result, and tracks and detects those persons. Before the preprocessed image information is analyzed, a personnel detection model based on the Faster R-CNN method is constructed: a data set of person pictures from construction sites is obtained, the data set is split proportionally into a training set and a test set, and the personnel detection model is trained with the training set.
As shown in fig. 2, training the personnel detection model comprises: a. obtaining a data set of person pictures from construction sites and splitting it proportionally into a training set and a test set; b. inputting the preprocessed image information of the training set into the convolutional layers and performing feature extraction on the input construction-site person pictures with a Faster R-CNN-based feature extraction network; c. classifying and identifying the extracted feature maps with a classifier to obtain an identification result map. The data set is processed with data enhancement techniques, the pictures are labeled manually, and the labeling results are finally made into the pascal_voc data set format; the data enhancement techniques comprise random-angle rotation, vertical flipping, random cropping, Gaussian noise and mirroring. In this embodiment, after the construction-site person pictures are collected, the data set is expanded by flipping, cropping, rotation and the like and labeled with the LabelImg software, yielding 1640 training images and 1200 test images. The number of training samples also influences the training effect: too few training samples can cause overfitting, while more training samples give the network stronger generalization ability and the final model higher precision.
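As an illustration of the data enhancement listed above, the following hedged sketch uses torchvision transforms; the rotation angle, crop size and noise level are assumptions, and a real detection pipeline would also have to transform the bounding-box labels consistently with the images:

```python
# Hedged sketch of the listed augmentations: random-angle rotation,
# vertical flipping, mirroring, random cropping and Gaussian noise.
import torch
from torchvision import transforms

class AddGaussianNoise:
    """Add zero-mean Gaussian noise to a tensor image."""
    def __init__(self, std=0.02):
        self.std = std
    def __call__(self, tensor):
        return tensor + torch.randn_like(tensor) * self.std

augment = transforms.Compose([
    transforms.RandomRotation(degrees=15),          # random-angle rotation
    transforms.RandomVerticalFlip(p=0.5),           # vertical flipping
    transforms.RandomHorizontalFlip(p=0.5),         # mirroring
    transforms.RandomResizedCrop(size=(600, 600)),  # random cropping
    transforms.ToTensor(),
    AddGaussianNoise(std=0.02),                     # Gaussian noise
])
```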
Compared with Fast R-CNN, the Faster R-CNN network has higher detection accuracy and higher detection speed. It specifically comprises two parts: an FPN-based region proposal network (RPN) and a Fast R-CNN part sharing the convolutional layers. The FPN-based RPN generates candidate regions for the Fast R-CNN part, which computes the category, score and so on of each candidate region.
Classical convolutional networks include VGGNet, GoogLeNet, ResNet and the like; the ResNet network introduces residual modules, which solve the vanishing-gradient problem caused by deepening the network and allow deeper networks to keep training and learning. In this embodiment, the feature extraction network adopts a 50-layer residual network (ResNet50) to extract image features: the processed image is fed into the ResNet backbone to obtain feature maps, candidate regions are generated with the FPN-based region proposal network, and a feature map of each candidate region is generated on each picture by combining the FPN with the obtained feature maps. The candidate-region feature maps of different sizes are passed in turn through an ROI Pooling layer to obtain output feature maps of fixed size. The output feature maps are processed by two fully connected layers into feature vectors, which are input into two sibling output layers, namely a classification layer that judges whether the target is a person and a bounding-box regression layer that fine-tunes the position and size of the ROI box, outputting the category of the candidate region and its exact position. The ROI Pooling process converts RoIs of different sizes in the input feature maps into output feature maps of fixed size by a pooling method. RoIs of different sizes use different feature layers: when the object is larger, higher-level features are used; when the object is smaller, lower-level features are used. RoIs of different scales are assigned to different pyramid levels through the feature pyramid network using a coefficient k. An RoI of width w and height h is assigned to the FPN level
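A detector of this shape is available off the shelf; the following minimal sketch builds a Faster R-CNN with a ResNet50+FPN backbone through torchvision. Using this ready-made model is an assumption for illustration; the embodiment describes an equivalent architecture rather than this exact API:

```python
# Hedged sketch: Faster R-CNN with a ResNet50-FPN backbone.
import torch
import torchvision

# num_classes = 2: background plus the single "person" class.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(num_classes=2)
model.eval()

# One dummy 3-channel image; real inputs would be preprocessed frames.
image = torch.rand(3, 600, 800)
with torch.no_grad():
    outputs = model([image])
# outputs[0] holds 'boxes', 'labels' and 'scores' for the candidate regions.
```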
$k = \left\lfloor k_0 + \log_2\!\left(\sqrt{wh}/224\right) \right\rfloor$
where w and h are the width and height of the RoI, k0 = 5, 224 is the canonical pretraining input size, and k corresponds to the P level in the FPN. In this embodiment, before the candidate-region feature maps of different sizes are passed through the ROI Pooling layer, a non-maximum suppression (NMS) algorithm is applied to remove the duplicate detection boxes of the target detection task and to find the optimal target detection positions; during Faster R-CNN training, the large number of generated person candidate boxes are post-processed with the NMS algorithm to remove redundant candidate boxes, which speeds up target detection and improves detection accuracy.
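The level-assignment formula and the NMS post-processing can both be written compactly; the following hedged sketch assumes torchvision for NMS, and the IoU threshold of 0.7 and the example boxes are illustrative assumptions:

```python
# Hedged sketch: FPN level assignment with k0 = 5, plus NMS.
import math
import torch
from torchvision.ops import nms

def fpn_level(w, h, k0=5, canonical=224):
    """Assign an RoI of width w and height h to pyramid level k."""
    return math.floor(k0 + math.log2(math.sqrt(w * h) / canonical))

boxes = torch.tensor([[10., 10., 110., 210.],
                      [12., 12., 112., 212.]])  # two overlapping candidates
scores = torch.tensor([0.9, 0.8])
keep = nms(boxes, scores, iou_threshold=0.7)    # drops the redundant box

print(fpn_level(100, 200))  # a 100x200 RoI maps to level P4
```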
In addition, in a convolutional network the image first undergoes multiple convolution and pooling operations to extract abstract semantic feature information, and the final prediction is then made through several fully connected layers; the prediction of the target generally comprises classification and a bounding-box regression problem. The working principle of the regression is as follows: the difference between an input candidate box and the real target box is generally small, so the whole process can be regarded as a linear transformation, and the original candidate box is fine-tuned through this linear change.
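The following hedged sketch shows the standard Faster R-CNN parameterization of that linear fine-tuning: predicted deltas (dx, dy, dw, dh) shift the candidate box's center linearly and scale its size in log space; the numbers are illustrative only:

```python
# Hedged sketch of bounding-box regression fine-tuning.
import math

def apply_box_deltas(box, deltas):
    """box = (x1, y1, x2, y2); deltas = (dx, dy, dw, dh)."""
    x1, y1, x2, y2 = box
    w, h = x2 - x1, y2 - y1
    cx, cy = x1 + 0.5 * w, y1 + 0.5 * h
    dx, dy, dw, dh = deltas
    # Linear shift of the center, log-space scaling of the size.
    cx, cy = cx + dx * w, cy + dy * h
    w, h = w * math.exp(dw), h * math.exp(dh)
    return (cx - 0.5 * w, cy - 0.5 * h, cx + 0.5 * w, cy + 0.5 * h)

print(apply_box_deltas((10, 10, 110, 210), (0.05, -0.02, 0.1, 0.0)))
```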
Specifically, the anchors of the FPN structure use 5 prediction scales (32×32, 64×64, 128×128, 256×256 and 512×512) and 3 aspect ratios (1:2, 1:1 and 2:1), as shown in fig. 2, corresponding to the pyramid levels P2, P3, P4, P5 and P6 of ResNet50; the 15 anchor types together predict the target objects and background in the construction-site person pictures, generating the target candidate boxes of interest.
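As a quick check of the anchor arithmetic, the sketch below enumerates the 5 scales x 3 aspect ratios = 15 anchor types; keeping each anchor's area near the square of its scale while varying the ratio follows common practice and is an assumption here:

```python
# Hedged sketch: enumerate the 15 anchor types (5 scales x 3 ratios).
scales = [32, 64, 128, 256, 512]   # one per pyramid level P2..P6
ratios = [(1, 2), (1, 1), (2, 1)]  # width:height aspect ratios

anchors = []
for scale in scales:
    for rw, rh in ratios:
        area = scale ** 2          # keep area ~ scale^2 across ratios
        w = (area * rw / rh) ** 0.5
        h = area / w
        anchors.append((round(w), round(h)))

print(len(anchors))  # 15 anchor types in total
```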
The reflective garment identification unit detects reflective garments with a color-identification-based method, according to a color set by the user or a color identified automatically, and judges whether workers are wearing reflective garments. The color-identification-based method comprises: obtaining the minimum rectangle that can enclose the detection target's color block in the current frame image, extracting the RGB image of the target object within that rectangle, traversing the image to compute the pixel value of each color, setting a threshold for each pixel value, comparing each pixel value against its threshold, and outputting the color of the detection target.
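For illustration, a minimal sketch of the color judgment, assuming NumPy: average the R, G and B pixel values inside the minimum rectangle and compare each channel against per-color thresholds; the threshold table and the example patch are assumptions:

```python
# Hedged sketch: classify the garment color inside the minimum rectangle.
import numpy as np

def classify_color(rgb_patch, thresholds):
    """rgb_patch: HxWx3 uint8 region inside the minimum rectangle."""
    r, g, b = (float(rgb_patch[..., c].mean()) for c in range(3))
    for name, (r_min, g_min, b_min) in thresholds.items():
        # A color matches when every channel clears its threshold.
        if r >= r_min and g >= g_min and b >= b_min:
            return name
    return "unknown"

THRESHOLDS = {                     # illustrative per-channel thresholds
    "fluorescent yellow": (180, 180, 0),
    "fluorescent orange": (180, 80, 0),
}
patch = np.full((50, 50, 3), (230, 220, 30), dtype=np.uint8)
print(classify_color(patch, THRESHOLDS))  # -> "fluorescent yellow"
```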
The alarm reminding unit controls the alarm device to give an alarm when the reflective garment identification unit identifies a person in the working area who is not wearing a reflective garment; otherwise, detection continues. The alarm device comprises a signal receiving module and an audible-and-visual alarm module. The signal receiving module receives the identification result sent by the reflective garment identification unit; when a person not wearing a reflective garment is identified, the audible-and-visual alarm module is controlled to give voice and light alarms. The audible-and-visual alarm module comprises a speech synthesis chip, a controller, a speaker and an alarm lamp; the speaker is electrically connected to the controller through the speech synthesis chip, and the alarm lamp is connected to the controller and gives light alarms in different modes according to the alarm signal output by the controller.
It should be noted that the described embodiments are only preferred ways of implementing the invention; all obvious modifications within the scope of the invention are included in the general inventive concept.

Claims (10)

1. A reflective garment detection system based on depth layer feature fusion, characterized by comprising: an image acquisition unit, a personnel detection unit, a reflective garment identification unit and an alarm reminding unit;
the image acquisition unit is used for acquiring surveillance video images of a working area and preprocessing them, wherein the preprocessing comprises: acquiring images continuously according to a user instruction, performing framing and graying on the received surveillance video images, and inputting the processed images into the personnel detection unit for personnel detection;
the personnel detection unit is used for analyzing the preprocessed image information, identifying the persons in the images based on the analysis result, and tracking and detecting those persons;
the reflective garment identification unit is used for detecting reflective garments with a color-identification-based method, according to a color set by the user or a color identified automatically, and for judging whether workers are wearing reflective garments;
the alarm reminding unit is used for controlling the alarm device to give an alarm when the reflective garment identification unit identifies a person in the working area who is not wearing a reflective garment; otherwise, detection continues.
2. The reflective garment detection system based on depth layer feature fusion of claim 1, wherein the image acquisition unit comprises a plurality of high-definition cameras and a camera parameter control module; the high-definition cameras are used for acquiring image information in the monitored working area and preprocessing it, and the camera parameter control module is used for adjusting the angle and focal-length parameters of the cameras and compensating for lighting, controlling the cameras' power supply and working switches, and performing equipment fault detection.
3. The reflective garment detection system based on depth layer feature fusion of claim 1, wherein the color-identification-based method comprises obtaining the minimum rectangle capable of enclosing the detection target in the current frame image, finding the R, G and B three-channel images of the target object in the minimum rectangle, traversing the images to find the pixel value of each color, then setting a threshold for each pixel value, judging each pixel value against its threshold and outputting the color of the detection target.
4. The reflective garment detection system based on depth layer feature fusion of claim 1, wherein before the preprocessed image information is analyzed, a personnel detection model based on the Faster R-CNN method is constructed: a data set of person pictures from construction sites is obtained, the data set is split proportionally into a training set and a test set, and the personnel detection model is trained with the training set.
5. The reflective garment detection system based on depth layer feature fusion of claim 4, wherein training the personnel detection model comprises: inputting the preprocessed image information of the training set into the convolutional layers, and performing feature extraction on the input construction-site person pictures with a Faster R-CNN-based feature extraction network; and classifying and identifying the extracted feature maps to obtain an identification result map.
6. The reflective garment detection system based on depth layer feature fusion of claim 5, wherein the data set is processed with data enhancement techniques, the pictures are labeled manually, and the labeling results are finally made into the pascal_voc data set format, wherein the data enhancement techniques comprise random-angle rotation, vertical flipping, random cropping, Gaussian noise and mirroring.
7. The reflective garment detection system based on depth layer feature fusion of claim 6, wherein the feature extraction network adopts a 50-layer residual network (ResNet50) to extract image features: the processed image is fed into the ResNet backbone to obtain feature maps, candidate regions are generated with an FPN-based region proposal network, and a feature map of each candidate region is generated on each picture by combining the FPN with the obtained feature maps; the candidate-region feature maps of different sizes are passed in turn through an ROI Pooling layer to obtain output feature maps of fixed size; the output feature maps are processed by two fully connected layers into feature vectors, which are input into two sibling output layers, namely a classification layer that judges whether the target is a person and a bounding-box regression layer that fine-tunes the position and size of the ROI box, outputting the category of the candidate region and its exact position.
8. The reflective garment detection system based on depth layer feature fusion of claim 7, wherein the ROI Pooling process converts RoIs of different sizes in the input feature maps into output feature maps of fixed size by a pooling method; RoIs of different sizes use different feature layers: when the object is larger, higher-level features are used, and when the object is smaller, lower-level features are used; RoIs of different scales are assigned to different pyramid levels through the feature pyramid network using a coefficient k, and an RoI of width w and height h is assigned to the FPN level
$k = \left\lfloor k_0 + \log_2\!\left(\sqrt{wh}/224\right) \right\rfloor$
where w and h are the width and height of the RoI, k0 = 5, and k corresponds to the P level in the FPN.
9. The reflective garment detection system based on depth layer feature fusion of claim 8, wherein the anchors of the FPN structure use 5 prediction scales (32×32, 64×64, 128×128, 256×256 and 512×512) and 3 aspect ratios (1:2, 1:1 and 2:1), corresponding to the pyramid levels P2, P3, P4, P5 and P6 of ResNet50; the 15 anchor types together predict the target objects and background in the construction-site person pictures, generating the target candidate boxes of interest.
10. The reflective garment detection system based on depth layer feature fusion of claim 1, wherein the alarm device comprises a signal receiving module and an audible-and-visual alarm module; the signal receiving module is used for receiving the identification result sent by the reflective garment identification unit, and when a person not wearing a reflective garment is identified, the audible-and-visual alarm module is controlled to give voice and light alarm reminders; the audible-and-visual alarm module comprises a speech synthesis chip, a controller, a speaker and an alarm lamp; the speaker is electrically connected to the controller through the speech synthesis chip, and the alarm lamp is connected to the controller and gives light alarms in different modes according to the alarm signal output by the controller.
CN202110627024.XA 2021-06-04 2021-06-04 Reflective clothing detecting system based on depth layer feature fusion Active CN113343846B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110627024.XA CN113343846B (en) 2021-06-04 2021-06-04 Reflective clothing detecting system based on depth layer feature fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110627024.XA CN113343846B (en) 2021-06-04 2021-06-04 Reflective clothing detecting system based on depth layer feature fusion

Publications (2)

Publication Number Publication Date
CN113343846A true CN113343846A (en) 2021-09-03
CN113343846B CN113343846B (en) 2024-03-15

Family

ID=77475336

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110627024.XA Active CN113343846B (en) 2021-06-04 2021-06-04 Reflective clothing detecting system based on depth layer feature fusion

Country Status (1)

Country Link
CN (1) CN113343846B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108830205A (en) * 2018-06-04 2018-11-16 江南大学 Based on the multiple dimensioned perception pedestrian detection method for improving full convolutional network
WO2020019673A1 (en) * 2018-07-25 2020-01-30 深圳云天励飞技术有限公司 Construction site monitoring method and device based on image analysis, and readable storage medium
CN109117827A (en) * 2018-09-05 2019-01-01 武汉市蓝领英才科技有限公司 Work clothes work hat wearing state automatic identifying method and alarm system based on video
US20200387785A1 (en) * 2019-06-05 2020-12-10 Wuhan University Power equipment fault detecting and positioning method of artificial intelligence inference fusion
CN111091110A (en) * 2019-12-24 2020-05-01 山东仁功智能科技有限公司 Wearing identification method of reflective vest based on artificial intelligence
CN111126325A (en) * 2019-12-30 2020-05-08 哈尔滨工程大学 Intelligent personnel security identification statistical method based on video
CN111383429A (en) * 2020-03-04 2020-07-07 西安咏圣达电子科技有限公司 Method, system, device and storage medium for detecting dress of workers in construction site
CN112149514A (en) * 2020-08-28 2020-12-29 中国地质大学(武汉) Method and system for detecting safety dressing of construction worker
CN112183472A (en) * 2020-10-28 2021-01-05 西安交通大学 Method for detecting whether test field personnel wear work clothes or not based on improved RetinaNet

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
孔英会; 王维维; 张珂; 戚银城: "Power scene object detection method based on an improved Mask R-CNN model", Science Technology and Engineering, no. 08, 18 March 2020 (2020-03-18) *
张笑钦: "Robust feature learning for adversarial defense via hierarchical feature alignment", Information Sciences, no. 560, 20 December 2020 (2020-12-20), pages 256-270 *

Also Published As

Publication number Publication date
CN113343846B (en) 2024-03-15

Similar Documents

Publication Publication Date Title
CN112115818B (en) Mask wearing identification method
CN111414887B (en) Secondary detection mask face recognition method based on YOLOV3 algorithm
CN111209810A (en) Bounding box segmentation supervision deep neural network architecture for accurately detecting pedestrians in real time in visible light and infrared images
CN111126325B (en) Intelligent personnel security identification statistical method based on video
CN102542246A (en) Abnormal face detection method for ATM (Automatic Teller Machine)
CN112183472A (en) Method for detecting whether test field personnel wear work clothes or not based on improved RetinaNet
CN113516076A (en) Improved lightweight YOLO v4 safety protection detection method based on attention mechanism
CN114842397B (en) Real-time old man falling detection method based on anomaly detection
CN112364778A (en) Power plant safety behavior information automatic detection method based on deep learning
CN113158850B (en) Ship driver fatigue detection method and system based on deep learning
CN111091110A (en) Wearing identification method of reflective vest based on artificial intelligence
CN112287838B (en) Cloud and fog automatic identification method and system based on static meteorological satellite image sequence
CN114882440A (en) Human head detection method and system
CN111401310B (en) Kitchen sanitation safety supervision and management method based on artificial intelligence
CN115223204A (en) Method, device, equipment and storage medium for detecting illegal wearing of personnel
CN113673614B (en) Metro tunnel foreign matter intrusion detection device and method based on machine vision
CN103049748A (en) Behavior-monitoring method and behavior-monitoring system
CN113052139A (en) Deep learning double-flow network-based climbing behavior detection method and system
CN117475353A (en) Video-based abnormal smoke identification method and system
CN113343846A (en) Reflective garment detection system based on depth layer feature fusion
CN108563986A (en) Earthquake region electric pole posture judgment method based on wide-long shot image and system
CN115995097A (en) Deep learning-based safety helmet wearing standard judging method
CN107507191B (en) A kind of computational methods of the penetrating degree of tree crown
Shimizu et al. Development of a person-searching algorithm using an omnidirectional camera and LiDAR for the Tsukuba challenge
CN113569655A (en) Red eye patient identification system based on eye color monitoring

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant