CN113343846B - Reflective clothing detecting system based on depth layer feature fusion - Google Patents

Reflective clothing detecting system based on depth layer feature fusion

Info

Publication number
CN113343846B
CN113343846B CN202110627024.XA
Authority
CN
China
Prior art keywords
image
feature
reflective clothing
reflective
personnel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110627024.XA
Other languages
Chinese (zh)
Other versions
CN113343846A (en)
Inventor
范晨翔
张笑钦
曹少丽
赵丽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wenzhou University
Original Assignee
Wenzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wenzhou University filed Critical Wenzhou University
Priority to CN202110627024.XA priority Critical patent/CN113343846B/en
Publication of CN113343846A publication Critical patent/CN113343846A/en
Application granted granted Critical
Publication of CN113343846B publication Critical patent/CN113343846B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a reflective clothing detection system based on deep and shallow feature fusion, comprising an image acquisition unit, a personnel detection unit, a reflective clothing identification unit and an alarm reminding unit. The image acquisition unit is used for acquiring a surveillance video image of the working area and preprocessing it. The personnel detection unit is used for analyzing the preprocessed image information and identifying the persons in the image based on the analysis result. The reflective clothing identification unit is used for detecting reflective clothing with a color-recognition-based method, according to a user-set color or an automatically identified color, and judging whether each worker is wearing reflective clothing. The alarm reminding unit is used for controlling the alarm device to raise an alert when a person not wearing reflective clothing is identified in the working area.

Description

Reflective clothing detecting system based on depth layer feature fusion
Technical Field
The invention relates to the technical field of safety monitoring, in particular to a reflective clothing detection system based on depth layer feature fusion.
Background
Reflective clothing is clothing that provides a warning effect under a variety of lighting conditions. Common varieties include reflective work clothes, reflective vests and reflective raincoats. A reflective garment generally consists of a base fabric in a conspicuous color combined with fluorescent and retroreflective materials. Under daylight or lamplight at night, the fluorescent and reflective effects make the wearer stand out strongly against the surrounding environment, providing safety protection. During industrial production and construction, work clothes and helmets play a significant role in preventing safety accidents; wearing work clothes and helmets as prescribed is therefore a necessary measure for safe production.
In summary, a reflective clothing detection system based on deep and shallow feature fusion that improves the accuracy and speed of reflective-clothing wearing detection, raises alarms in time, and recognizes reflective clothing of different colors is a problem that urgently needs to be solved by those skilled in the art.
Disclosure of Invention
In view of the above problems and demands, the present scheme provides a reflective clothing detection system based on deep and shallow feature fusion, which solves the above technical problems through the following technical solution.
In order to achieve the above purpose, the present invention provides the following technical solution: a reflective clothing detection system based on deep and shallow feature fusion, comprising: an image acquisition unit, a personnel detection unit, a reflective clothing identification unit and an alarm reminding unit;
the image acquisition unit is used for acquiring a surveillance video image of the working area and preprocessing it, the preprocessing comprising: continuously acquiring images according to a user instruction, splitting the received surveillance video image into frames and converting them to grayscale, and inputting them into the personnel detection unit for personnel detection;
the personnel detection unit is used for analyzing the preprocessed image information, identifying personnel information in the image based on an analysis result, and tracking and detecting personnel;
the reflective clothing identification unit is used for detecting reflective clothing with a color-recognition-based method, according to a user-set color or an automatically identified color, and judging whether each worker is wearing reflective clothing;
the alarming reminding unit is used for controlling the alarming device to alarm when the reflective clothing identification unit identifies that a person wearing no reflective clothing exists in the working area, and if not, the detection is continued.
Further, the image acquisition unit comprises a plurality of high-definition cameras and a shooting-parameter control module. The high-definition cameras are used for acquiring image information in the monitored working area and preprocessing it; the shooting-parameter control module is used for adjusting the angle and focal-length parameters of the cameras, compensating for lighting, controlling the cameras' power consumption and on/off switches, and performing equipment fault detection.
Further, the color-recognition-based method comprises: obtaining the smallest rectangle that encloses the target color block in the current frame; extracting the RGB image of the target object within that rectangle; traversing the image to obtain the pixel values of each color; setting a threshold for each pixel value; comparing each pixel value against its threshold; and outputting the color of the detection target.
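The color-identification step above can be sketched in plain Python. This is a minimal illustration, not the patented implementation: the pixels are (R, G, B) tuples taken from the smallest rectangle around the detected color block, and the color names and threshold values are illustrative assumptions chosen for typical fluorescent-yellow and orange reflective vests.

```python
# Minimal sketch of the color-identification step: count pixels per
# color category inside the bounding rectangle, then threshold.
# Color ranges and the 0.5 share threshold are illustrative assumptions.

def classify_color(pixels, threshold=0.5):
    """Return the dominant color name if its pixel share exceeds threshold."""
    counts = {"yellow": 0, "orange": 0, "other": 0}
    for r, g, b in pixels:
        if r > 180 and g > 180 and b < 100:        # bright yellow-ish
            counts["yellow"] += 1
        elif r > 200 and 80 < g < 180 and b < 80:  # orange-ish
            counts["orange"] += 1
        else:
            counts["other"] += 1
    name, count = max(counts.items(), key=lambda kv: kv[1])
    if name != "other" and count / len(pixels) >= threshold:
        return name
    return "unknown"

# Example: a patch dominated by fluorescent-yellow pixels
patch = [(250, 240, 40)] * 8 + [(30, 30, 30)] * 2
print(classify_color(patch))  # yellow
```

In a deployed system the per-color pixel statistics would be computed over the real RGB channels of the camera frame, with thresholds set for the user-configured garment color.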
Further, before the preprocessed image information is analyzed, a personnel detection model based on the Faster R-CNN method must be constructed: a dataset of person pictures from construction sites is obtained, the dataset is split proportionally into a training set and a test set, and the personnel detection model is trained on the training set.
Still further, training the personnel detection model includes: passing the processed training-set image information to the convolution layers and extracting features of the input construction-site person images with a Faster R-CNN-based feature extraction network; and classifying and recognizing the extracted feature maps to obtain the recognition result image.
Furthermore, the dataset is processed with data enhancement techniques, the pictures are manually annotated, and the annotation results are finally packaged in the pascal_voc dataset format, wherein the data enhancement techniques include random-angle rotation, vertical flipping, random cropping, Gaussian noise and mirroring.
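A few of the listed augmentation operations (vertical flip, mirroring, random crop, Gaussian noise) can be sketched on a grayscale image stored as a list of rows. This is an illustrative stand-in: random-angle rotation is omitted for brevity, and in practice a library such as OpenCV or imgaug would apply these operations to real images.

```python
import random

def vertical_flip(img):
    """Reverse the row order (top-bottom flip)."""
    return img[::-1]

def mirror(img):
    """Reverse each row (left-right flip)."""
    return [row[::-1] for row in img]

def random_crop(img, ch, cw, rng=random):
    """Cut a random ch x cw sub-image."""
    top = rng.randrange(len(img) - ch + 1)
    left = rng.randrange(len(img[0]) - cw + 1)
    return [row[left:left + cw] for row in img[top:top + ch]]

def add_gaussian_noise(img, sigma=5.0, rng=random):
    """Add zero-mean Gaussian noise, clamped to the 0..255 range."""
    return [[min(255, max(0, px + rng.gauss(0, sigma))) for px in row]
            for row in img]

img = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(vertical_flip(img))  # [[7, 8, 9], [4, 5, 6], [1, 2, 3]]
print(mirror(img))         # [[3, 2, 1], [6, 5, 4], [9, 8, 7]]
```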
Furthermore, the feature extraction network extracts image features with a 50-layer residual network (ResNet50): the processed image is fed into the backbone ResNet network to obtain feature maps; an FPN-based region proposal network generates candidate regions, combining the FPN with the obtained feature maps to produce candidate-region feature maps for each picture; the candidate-region feature maps of different sizes then pass through the ROI Pooling layer in turn to obtain output feature maps of fixed size; the output feature maps are processed by two fully connected layers into feature vectors, which are fed to two parallel output layers: a classification layer that judges whether the target is a person, and a bounding-box regression layer that fine-tunes the position and size of the ROI (region of interest) box, outputting the category and exact position of each candidate region.
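The ROI Pooling step described above converts a variable-sized region into a fixed-size output. A bare-bones max-pooling version on a 2-D feature map can be sketched as follows (a real implementation would pool over every channel of the feature tensor; this single-channel sketch only shows the fixed-output-size mechanism):

```python
# Simple ROI max-pooling sketch: the ROI is divided into an
# out_h x out_w grid of bins and the maximum is taken in each bin,
# giving a fixed-size output regardless of the ROI's size.

def roi_max_pool(feature_map, roi, out_h, out_w):
    """roi = (top, left, height, width) in feature-map coordinates."""
    top, left, h, w = roi
    out = []
    for i in range(out_h):
        row = []
        y0 = top + i * h // out_h
        y1 = top + (i + 1) * h // out_h
        for j in range(out_w):
            x0 = left + j * w // out_w
            x1 = left + (j + 1) * w // out_w
            row.append(max(feature_map[y][x]
                           for y in range(y0, max(y1, y0 + 1))
                           for x in range(x0, max(x1, x0 + 1))))
        out.append(row)
    return out

fmap = [[y * 4 + x for x in range(4)] for y in range(4)]
print(roi_max_pool(fmap, (0, 0, 4, 4), 2, 2))  # [[5, 7], [13, 15]]
```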
Furthermore, the ROI Pooling process converts ROIs of different sizes in the input feature map into output feature maps of fixed size by pooling. ROIs of different sizes draw on different feature layers: higher-level features are used when the object is large and lower-level features when it is small. The feature pyramid network assigns ROIs of different scales to different pyramid levels through a coefficient k: an ROI of width w and height h is assigned to FPN level k = ⌊k0 + log2(√(wh)/224)⌋, where w and h are the width and height of the ROI, k0 = 5, and k corresponds to the Pk layer of the FPN.
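The level-assignment rule with coefficient k and k0 = 5 can be sketched directly; clamping the result to the P2..P6 pyramid levels used in this embodiment is an assumption, since ROIs far smaller or larger than the canonical size must still land on an existing level:

```python
import math

# FPN level assignment: an ROI of width w and height h is mapped to
# level k = floor(k0 + log2(sqrt(w*h) / 224)), k0 = 5, and the result
# is clamped to the available pyramid levels P2..P6 (an assumption).

def fpn_level(w, h, k0=5, k_min=2, k_max=6):
    k = math.floor(k0 + math.log2(math.sqrt(w * h) / 224))
    return min(k_max, max(k_min, k))

print(fpn_level(224, 224))  # 5  (canonical-sized ROI stays at k0)
print(fpn_level(112, 112))  # 4  (smaller ROI -> finer, lower level)
print(fpn_level(448, 448))  # 6  (larger ROI -> coarser, higher level)
```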
Furthermore, the anchors of the FPN structure use 5 prediction scales (32×32, 64×64, 128×128, 256×256 and 512×512) and 3 aspect ratios (1:2, 1:1, 2:1), corresponding to the pyramid levels P2, P3, P4, P5 and P6 of the ResNet50, for a total of 15 anchor types, to predict the target objects and background in construction-site person pictures and generate the target candidate boxes of interest.
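The 5 scales × 3 aspect ratios = 15 anchor shapes can be enumerated as below. Keeping each anchor's area equal to its scale squared while the width/height follow the aspect ratio is the conventional RPN construction; the patent does not spell this detail out, so it is an assumption here.

```python
import math

# Enumerate the 15 anchor shapes: for each scale s, the anchor keeps
# area s*s while width/height follow the aspect ratio r = w/h.

def anchor_shapes(scales=(32, 64, 128, 256, 512), ratios=(0.5, 1.0, 2.0)):
    shapes = []
    for s in scales:
        area = s * s
        for r in ratios:
            h = math.sqrt(area / r)
            w = r * h
            shapes.append((round(w), round(h)))
    return shapes

anchors = anchor_shapes()
print(len(anchors))  # 15
print(anchors[:3])   # the 32x32 scale at ratios 1:2, 1:1, 2:1
```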
Further, alarm device includes signal receiving module and audible and visual alarm module, signal receiving module is used for receiving the recognition result that reflective clothing identification unit sent, when discernment personnel did not dress reflective clothing, control audible and visual alarm module carries out pronunciation and light warning and reminds, audible and visual alarm module includes speech synthesis chip, controller, speaker and alarm lamp, the speaker passes through speech synthesis chip with the controller electricity is connected, the alarm lamp with the controller is used for carrying out the light warning of different modes according to the alarm signal of controller output.
From the technical scheme, the beneficial effects of the invention are as follows: the method can improve the wearing detection precision and speed of the reflective clothing, can give an alarm in time, and can identify the reflective clothing with different color types according to the standard set by a user.
In addition to the objects, features and advantages described above, preferred embodiments for carrying out the present invention will be described in more detail below with reference to the accompanying drawings so that the features and advantages of the present invention can be readily understood.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the following description will briefly describe the drawings that are required to be used in the description of the embodiments of the present invention, wherein the drawings are only for illustrating some embodiments of the present invention, and not limiting all embodiments of the present invention thereto.
FIG. 1 is a schematic diagram of the composition structure of a reflective clothing detection system based on depth layer feature fusion.
Fig. 2 is a schematic diagram showing the specific steps of training the personnel detection model in this embodiment.
Fig. 3 is a network structure diagram of the FPN-based region proposal network RPN in this embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the technical solutions of the present invention more clear, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings of specific embodiments of the present invention. Like reference numerals in the drawings denote like parts. It should be noted that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be made by a person skilled in the art without creative efforts, based on the described embodiments of the present invention fall within the protection scope of the present invention.
In industrial production and construction, work clothes such as reflective clothing play a role in preventing safety accidents: they form a strong contrast with the surrounding environment and are easily noticed, providing safety protection. Accurately detecting whether workers on construction or production sites are wearing reflective clothing is therefore an important measure for guaranteeing construction safety. As shown in fig. 1 to 3, a reflective clothing detection system based on deep and shallow feature fusion is provided, which specifically includes: an image acquisition unit, a personnel detection unit, a reflective clothing identification unit and an alarm reminding unit. The image acquisition unit is used for acquiring a surveillance video image of the working area and preprocessing it, the preprocessing comprising: continuously acquiring images according to a user instruction, splitting the received surveillance video image into frames and converting them to grayscale, and inputting them into the personnel detection unit for personnel detection. The image acquisition unit comprises a plurality of high-definition cameras and a shooting-parameter control module: the cameras acquire image information in the monitored working area and preprocess it, while the control module adjusts the cameras' angle and focal-length parameters, compensates for lighting, controls their power consumption and on/off switches, and performs equipment fault detection.
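The framing-and-graying preprocessing can be sketched as two small functions. Frames are represented here as nested lists of (R, G, B) tuples so the sketch stays self-contained; a real system would decode the video and convert color spaces with OpenCV (cv2.VideoCapture and cv2.cvtColor), and the standard luminosity weights used below are the conventional choice, not values stated in the patent.

```python
# Preprocessing sketch: sample every step-th decoded frame, then convert
# each RGB frame to grayscale with the standard luminosity weights
# 0.299 R + 0.587 G + 0.114 B.

def sample_frames(frames, step):
    """Keep every step-th frame of the decoded video sequence."""
    return frames[::step]

def to_grayscale(frame):
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for r, g, b in row]
            for row in frame]

frames = [[[(i * 10, i * 10, i * 10)]] for i in range(10)]  # 10 tiny frames
kept = sample_frames(frames, 3)
print(len(kept))                # 4 (frames 0, 3, 6, 9)
print(to_grayscale(frames[5]))  # [[50]]
```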
The personnel detection unit is used for analyzing the preprocessed image information, identifying the persons in the image based on the analysis result, and tracking and detecting personnel. Before the preprocessed image information is analyzed, a personnel detection model based on the Faster R-CNN method must be constructed: a dataset of person pictures from construction sites is obtained, split proportionally into a training set and a test set, and used to train the personnel detection model.
As shown in fig. 2, training the personnel detection model comprises the following steps: a. acquiring a dataset of construction-site person pictures and splitting it proportionally into a training set and a test set; b. passing the processed training-set image information to the convolution layers and extracting features of the input construction-site person images with a Faster R-CNN-based feature extraction network; c. classifying and recognizing the extracted feature maps with a classifier to obtain the recognition result image. The dataset is processed with data enhancement techniques, the pictures are manually annotated, and the annotation results are packaged in the pascal_voc dataset format; the enhancement techniques include random-angle rotation, vertical flipping, random cropping, Gaussian noise and mirroring. In this embodiment, after the construction-site person pictures are collected, the dataset is expanded by flipping, cropping, rotation and the like and annotated with the labelimg tool, yielding 1640 training images and 1200 test images. The number of training samples affects the training result: too few training samples cause overfitting, while more training samples give the network stronger generalization ability and the final model higher accuracy.
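The proportional split in step a can be sketched as a seeded shuffle-and-slice; the 1640/1200 split of this embodiment corresponds to a train fraction of roughly 0.58, and the filenames below are placeholders:

```python
import random

# Proportional train/test split sketch: shuffle with a fixed seed for
# reproducibility, then slice at the requested fraction.

def split_dataset(items, train_fraction, seed=0):
    rng = random.Random(seed)
    shuffled = items[:]
    rng.shuffle(shuffled)
    n_train = round(len(shuffled) * train_fraction)
    return shuffled[:n_train], shuffled[n_train:]

images = [f"site_{i:04d}.jpg" for i in range(2840)]  # placeholder names
train, test = split_dataset(images, 1640 / 2840)
print(len(train), len(test))  # 1640 1200
```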
Compared with Fast R-CNN, the Faster R-CNN feature extraction network achieves higher detection accuracy at a higher detection speed. It consists of two parts: an FPN-based region proposal network (RPN) and the Fast R-CNN shared convolution part. The FPN-based RPN generates candidate regions for the shared convolution part, which in turn computes the category and score of each candidate region.
Classical convolutional networks include VGGNet, GoogLeNet and ResNet. The ResNet network introduces the residual module, which solves the vanishing-gradient problem caused by deepening the network, so that deeper networks can continue training and learning. Specifically, because of the residual mapping, when the gradient of a lower layer stagnates at 0 during back-propagation, the residual mapping reduces to an identity mapping, so gradient updates can continue; the training loss of the final network thus does not keep increasing as the depth grows, and the convergence of the network is further accelerated. In this embodiment, the feature extraction network extracts image features with a 50-layer residual network (ResNet50): the processed image is fed into the backbone ResNet network to obtain feature maps; an FPN-based region proposal network generates candidate regions, combining the FPN with the obtained feature maps to produce candidate-region feature maps for each picture. The candidate-region feature maps of different sizes then pass through the ROI Pooling layer in turn to obtain output feature maps of fixed size, which are processed by two fully connected layers into feature vectors and fed to two parallel output layers: a classification layer that judges whether the target is a person, and a bounding-box regression layer that fine-tunes the position and size of the ROI (region of interest) box, outputting the category and exact position of each candidate region.
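The identity-path argument above (y = x + F(x), so dy/dx = 1 + dF/dx never collapses to 0 just because F does) can be illustrated numerically with a toy scalar "block":

```python
# Tiny numeric illustration of the residual connection y = x + F(x):
# even when the learned mapping F contributes nothing (derivative ~0),
# the identity path keeps dy/dx near 1, so gradient flow continues.

def residual_block(x, weight):
    return x + weight * x          # F(x) = weight * x

def numeric_grad(f, x, eps=1e-6):
    return (f(x + eps) - f(x - eps)) / (2 * eps)

# A "dead" block with weight 0 still passes gradient 1 through:
g = numeric_grad(lambda x: residual_block(x, 0.0), 3.0)
print(round(g, 3))  # 1.0

# Without the skip connection, the same dead layer blocks the gradient:
g_plain = numeric_grad(lambda x: 0.0 * x, 3.0)
print(round(g_plain, 3))  # 0.0
```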
The ROI Pooling process converts ROIs of different sizes in the input feature map into output feature maps of fixed size by pooling. ROIs of different sizes draw on different feature layers: higher-level features are used when the object is large and lower-level features when it is small. The feature pyramid network assigns ROIs of different scales to different pyramid levels through a coefficient k: an ROI of width w and height h is assigned to FPN level k = ⌊k0 + log2(√(wh)/224)⌋, where w and h are the width and height of the ROI, k0 = 5, and k corresponds to the Pk layer of the FPN. In this embodiment, before the candidate-region feature maps of different sizes pass through the ROI Pooling layer in turn, a non-maximum suppression (NMS) algorithm is applied to remove the duplicate detection boxes of the target detection task and find the best target detection position; during Faster R-CNN training, the large number of generated person and object candidate boxes are post-processed with the NMS algorithm and redundant candidate boxes are removed, which speeds up target detection and improves detection accuracy.
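The NMS post-processing mentioned above follows the standard greedy scheme: keep the highest-scoring box and discard boxes whose IoU with it exceeds a threshold. A minimal sketch (the 0.5 threshold is the usual default, not a value from the patent):

```python
# Greedy non-maximum suppression over axis-aligned boxes (x1, y1, x2, y2).

def iou(a, b):
    """Intersection-over-union of two boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, thresh=0.5):
    """Return indices of kept boxes, highest score first."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) <= thresh]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2]  (box 1 overlaps box 0 too much)
```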
In addition, after the image has gone through many convolution and pooling operations in the convolutional network, abstract semantic feature information has been extracted, and the final prediction is made through several fully connected layers; the prediction of a target generally comprises a classification problem and a bounding-box regression problem. The working principle is as follows: since an input candidate box usually differs only slightly from the real target box, the whole process can be regarded as a linear transformation, and the original candidate box is fine-tuned through this linear change.
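The "linear change" that fine-tunes a candidate box can be sketched with the usual Faster R-CNN delta parameterisation (center shift scaled by the box size, log-scale width/height change); the patent does not state this exact parameterisation, so it is used here as the conventional assumption:

```python
import math

# Apply regression deltas (dx, dy, dw, dh) to a candidate box: the
# center is shifted proportionally to the box size and the width/height
# are rescaled exponentially, the standard Faster R-CNN convention.

def apply_deltas(box, deltas):
    """box = (x1, y1, x2, y2); deltas = (dx, dy, dw, dh)."""
    x1, y1, x2, y2 = box
    w, h = x2 - x1, y2 - y1
    cx, cy = x1 + 0.5 * w, y1 + 0.5 * h
    dx, dy, dw, dh = deltas
    cx, cy = cx + dx * w, cy + dy * h          # shift the center
    w, h = w * math.exp(dw), h * math.exp(dh)  # rescale width/height
    return (cx - 0.5 * w, cy - 0.5 * h, cx + 0.5 * w, cy + 0.5 * h)

# Zero deltas leave the candidate box unchanged:
print(apply_deltas((10, 10, 30, 50), (0, 0, 0, 0)))  # (10.0, 10.0, 30.0, 50.0)
```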
Specifically, as shown in fig. 2, the anchors of the FPN structure use 5 prediction scales (32×32, 64×64, 128×128, 256×256 and 512×512) and 3 aspect ratios (1:2, 1:1, 2:1), corresponding to the pyramid levels P2, P3, P4, P5 and P6 of the ResNet50, for a total of 15 anchor types, to predict the target objects and background in construction-site person pictures and generate the target candidate boxes of interest.
The reflective clothing identification unit is used for detecting reflective clothing with a color-recognition-based method, according to a user-set color or an automatically identified color, and judging whether each worker is wearing reflective clothing. The color-recognition-based method comprises: obtaining the smallest rectangle that encloses the target color block in the current frame; extracting the RGB image of the target object within that rectangle; traversing the image to obtain the pixel values of each color; setting a threshold for each pixel value; comparing each pixel value against its threshold; and outputting the color of the detection target.
The alarm reminding unit is used for controlling the alarm device to raise an alert when the reflective clothing identification unit identifies a person in the working area who is not wearing reflective clothing; otherwise detection continues. The alarm device comprises a signal receiving module and an audio-visual alarm module. The signal receiving module receives the recognition result sent by the reflective clothing identification unit; when a person is recognized as not wearing reflective clothing, the audio-visual alarm module is controlled to issue voice and light alerts. The audio-visual alarm module comprises a speech synthesis chip, a controller, a loudspeaker and an alarm lamp; the loudspeaker is electrically connected with the controller through the speech synthesis chip, and the alarm lamp issues different modes of light alarm according to the alarm signals output by the controller.
It should be noted that the described embodiments are only preferred modes of implementing the present invention; obvious modifications that remain within the overall concept of the present invention should be considered as falling within its protection scope.

Claims (7)

1. A reflective clothing detection system based on deep and shallow feature fusion, characterized by comprising: an image acquisition unit, a personnel detection unit, a reflective clothing identification unit and an alarm reminding unit;
the image acquisition unit is used for acquiring a surveillance video image of the working area and preprocessing it, the preprocessing comprising: continuously acquiring images according to a user instruction, splitting the received surveillance video image into frames and converting them to grayscale, and inputting them into the personnel detection unit for personnel detection;
the personnel detection unit is used for analyzing the preprocessed image information, identifying the persons in the image based on the analysis result, and tracking and detecting personnel;
the reflective clothing identification unit is used for detecting reflective clothing with a color-recognition-based method, according to a user-set color or an automatically identified color, and judging whether each worker is wearing reflective clothing;
the alarm reminding unit is used for controlling the alarm device to raise an alert when the reflective clothing identification unit identifies a person in the working area who is not wearing reflective clothing; otherwise detection continues;
before the preprocessed image information is analyzed, a personnel detection model based on the Faster R-CNN method must be constructed: a dataset of person pictures from construction sites is obtained, split proportionally into a training set and a test set, and used to train the personnel detection model;
training the personnel detection model comprises: passing the processed training-set image information to the convolution layers and extracting features of the input construction-site person images with a Faster R-CNN-based feature extraction network; and classifying and recognizing the extracted feature maps to obtain the recognition result image;
the feature extraction network extracts image features with a 50-layer residual network (ResNet50): the processed image is fed into the backbone ResNet network to obtain feature maps; an FPN-based region proposal network generates candidate regions, combining the FPN with the obtained feature maps to produce candidate-region feature maps for each picture; the candidate-region feature maps of different sizes then pass through the ROI Pooling layer in turn to obtain output feature maps of fixed size; the output feature maps are processed by two fully connected layers into feature vectors, which are fed to two parallel output layers: a classification layer that judges whether the target is a person, and a bounding-box regression layer that fine-tunes the position and size of the ROI (region of interest) box, outputting the category and exact position of each candidate region.
2. The reflective clothing detection system based on deep and shallow feature fusion according to claim 1, wherein the image acquisition unit comprises a plurality of high-definition cameras and a shooting-parameter control module, the high-definition cameras being used for acquiring image information in the monitored working area and preprocessing it, and the shooting-parameter control module being used for adjusting the angle and focal-length parameters of the cameras, compensating for lighting, controlling the cameras' power consumption and on/off switches, and performing equipment fault detection.
3. The reflective clothing detection system based on deep and shallow feature fusion according to claim 1, wherein the color-recognition-based method comprises obtaining the smallest rectangle that encloses the detection target in the current frame image, extracting the R, G, B three-channel image of the target object within the smallest rectangle, traversing the picture to obtain the pixel values of each color, setting a threshold for each pixel value, comparing each pixel value against its threshold, and outputting the color of the detection target.
4. The reflective clothing detection system based on deep and shallow feature fusion according to claim 1, wherein the dataset is processed with data enhancement techniques, the pictures are manually annotated, and the annotation results are finally packaged in the pascal_voc dataset format, the data enhancement techniques including random-angle rotation, vertical flipping, random cropping, Gaussian noise and mirroring.
5. The reflective clothing detection system based on deep and shallow feature fusion according to claim 1, wherein the ROI Pooling process converts ROIs of different sizes in the input feature map into output feature maps of fixed size by pooling, ROIs of different sizes adopt different feature layers, and the feature pyramid network assigns ROIs of different scales to different pyramid levels through a coefficient k: an ROI of width w and height h is assigned to FPN level k = ⌊k0 + log2(√(wh)/224)⌋, where w and h are the width and height of the ROI, k0 = 5, and k corresponds to the Pk layer of the FPN.
6. The reflective clothing detection system based on deep and shallow feature fusion according to claim 5, wherein the anchors of the FPN structure use 5 prediction scales (32×32, 64×64, 128×128, 256×256 and 512×512) and 3 aspect ratios (1:2, 1:1, 2:1), corresponding to the pyramid levels P2, P3, P4, P5 and P6 of the ResNet50, with 15 anchor types in total used to predict the target objects and background in the construction-site person pictures and generate the target candidate boxes of interest.
7. The reflective clothing detection system based on deep and shallow feature fusion according to claim 1, wherein the alarm device comprises a signal-receiving module and an audible-visual alarm module; the signal-receiving module is used for receiving the recognition result sent by the reflective clothing recognition unit and, when a person is not wearing reflective clothing, controlling the audible-visual alarm module to issue voice and light alarm reminders; the audible-visual alarm module comprises a speech synthesis chip, a controller, a loudspeaker and an alarm lamp, the loudspeaker is electrically connected to the controller through the speech synthesis chip, and the alarm lamp performs different modes of light alarm according to the alarm signals output by the controller.
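The control flow of claim 7 reduces to a simple rule: trigger both alarm channels only when a person is detected without reflective clothing. A minimal sketch of that logic; the command strings standing in for the speech-synthesis chip and alarm lamp are illustrative assumptions, not the patent's hardware interface.

```python
def handle_result(wearing_reflective, commands):
    """Sketch of the alarm flow: the signal-receiving module passes
    the recognition result to the controller, which drives the
    loudspeaker (via the speech synthesis chip) and the alarm lamp
    only when reflective clothing is missing."""
    if not wearing_reflective:
        commands.append("speech_chip:play_warning")  # voice reminder
        commands.append("alarm_lamp:flash")          # light reminder
    return commands

print(handle_result(True, []))   # []
print(handle_result(False, []))  # ['speech_chip:play_warning', 'alarm_lamp:flash']
```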
CN202110627024.XA 2021-06-04 2021-06-04 Reflective clothing detecting system based on depth layer feature fusion Active CN113343846B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110627024.XA CN113343846B (en) 2021-06-04 2021-06-04 Reflective clothing detecting system based on depth layer feature fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110627024.XA CN113343846B (en) 2021-06-04 2021-06-04 Reflective clothing detecting system based on depth layer feature fusion

Publications (2)

Publication Number Publication Date
CN113343846A CN113343846A (en) 2021-09-03
CN113343846B true CN113343846B (en) 2024-03-15

Family

ID=77475336

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110627024.XA Active CN113343846B (en) 2021-06-04 2021-06-04 Reflective clothing detecting system based on depth layer feature fusion

Country Status (1)

Country Link
CN (1) CN113343846B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108830205A (en) * 2018-06-04 2018-11-16 江南大学 Multi-scale perception pedestrian detection method based on improved fully convolutional network
CN109117827A (en) * 2018-09-05 2019-01-01 武汉市蓝领英才科技有限公司 Video-based automatic recognition method and alarm system for work clothes and helmet wearing state
WO2020019673A1 (en) * 2018-07-25 2020-01-30 深圳云天励飞技术有限公司 Construction site monitoring method and device based on image analysis, and readable storage medium
CN111091110A (en) * 2019-12-24 2020-05-01 山东仁功智能科技有限公司 Wearing identification method of reflective vest based on artificial intelligence
CN111126325A (en) * 2019-12-30 2020-05-08 哈尔滨工程大学 Intelligent personnel security identification statistical method based on video
CN111383429A (en) * 2020-03-04 2020-07-07 西安咏圣达电子科技有限公司 Method, system, device and storage medium for detecting dress of workers in construction site
CN112149514A (en) * 2020-08-28 2020-12-29 中国地质大学(武汉) Method and system for detecting safety dressing of construction worker
CN112183472A (en) * 2020-10-28 2021-01-05 西安交通大学 Method for detecting whether test field personnel wear work clothes or not based on improved RetinaNet

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110334740A (en) * 2019-06-05 2019-10-15 武汉大学 Electrical equipment fault detection and localization method based on artificial intelligence reasoning fusion


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Robust feature learning for adversarial defense via hierarchical feature alignment"; Zhang Xiaoqin; Information Sciences; 2020-12-20 (No. 560); 256-270 *
Power scene object detection method based on improved Mask R-CNN model; Kong Yinghui; Wang Weiwei; Zhang Ke; Qi Yincheng; Science Technology and Engineering; 2020-03-18 (No. 08); full text *

Also Published As

Publication number Publication date
CN113343846A (en) 2021-09-03

Similar Documents

Publication Publication Date Title
CN111126325B (en) Intelligent personnel security identification statistical method based on video
US7916904B2 (en) Face region detecting device, method, and computer readable recording medium
CN103442209B (en) Video monitoring method of electric transmission line
CN108062349A (en) Video frequency monitoring method and system based on video structural data and deep learning
CN108053427A (en) A kind of modified multi-object tracking method, system and device based on KCF and Kalman
CN112216049A (en) Construction warning area monitoring and early warning system and method based on image recognition
CN110909690A (en) Method for detecting occluded face image based on region generation
CN108052859A (en) A kind of anomaly detection method, system and device based on cluster Optical-flow Feature
WO2009123354A1 (en) Method, apparatus, and program for detecting object
CN111325133B (en) Image processing system based on artificial intelligent recognition
CN112183472A (en) Method for detecting whether test field personnel wear work clothes or not based on improved RetinaNet
CN111091110A (en) Wearing identification method of reflective vest based on artificial intelligence
CN109218667B (en) Public place safety early warning system and method
CN102542246A (en) Abnormal face detection method for ATM (Automatic Teller Machine)
CN112364778A (en) Power plant safety behavior information automatic detection method based on deep learning
CN105022999A (en) Man code company real-time acquisition system
CN111401310B (en) Kitchen sanitation safety supervision and management method based on artificial intelligence
CN110110755A (en) Based on the pedestrian of PTGAN Regional disparity and multiple branches weight recognition detection algorithm and device
CN114202646A (en) Infrared image smoking detection method and system based on deep learning
CN109034038B (en) Fire identification device based on multi-feature fusion
CN108108740B (en) Active millimeter wave human body image gender identification method
CN114894337B (en) Temperature measurement method and device for outdoor face recognition
CN103049748A (en) Behavior-monitoring method and behavior-monitoring system
CN106372566A (en) Digital signage-based emergency evacuation system and method
CN109934143A (en) A kind of method and apparatus of the detection of iris image Sino-U.S. pupil

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant