CN112560755A - Target detection method for identifying urban exposed garbage - Google Patents

Target detection method for identifying urban exposed garbage

Info

Publication number
CN112560755A
CN112560755A (application CN202011546774.6A)
Authority
CN
China
Prior art keywords
garbage
exposed
data set
zcs
exposed garbage
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011546774.6A
Other languages
Chinese (zh)
Other versions
CN112560755B (en)
Inventor
孙德亮 (Sun Deliang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Re Cloud Technology Co ltd
Original Assignee
China Re Cloud Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Re Cloud Technology Co ltd filed Critical China Re Cloud Technology Co ltd
Priority to CN202011546774.6A priority Critical patent/CN112560755B/en
Publication of CN112560755A publication Critical patent/CN112560755A/en
Application granted granted Critical
Publication of CN112560755B publication Critical patent/CN112560755B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/35 - Categorising the entire scene, e.g. birthday party or wedding scene
    • G06V 20/38 - Outdoor scenes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02W - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO WASTEWATER TREATMENT OR WASTE MANAGEMENT
    • Y02W 90/00 - Enabling technologies or technologies with a potential or indirect contribution to greenhouse gas [GHG] emissions mitigation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a target detection method for identifying urban exposed garbage, which mainly comprises the following steps: S1, photographing exposed garbage distributed across a city with a mobile phone and organizing the photos as a source data set S; S2, marking the position and category of the exposed garbage; S3, combining the xml data, converting the source data set S into a data set CS in COCO format; S4, zero-mean normalizing the CS data set, recorded as the ZCS data set; S5, on the data set ZCS, selecting part of the data as a test set tes_ZCS, selecting part of the remaining data as a validation set val_ZCS, and using the rest as a training set tra_ZCS; S6, training an exposed-garbage recognition model; and S7, using the exposed-garbage recognition model to judge whether exposed garbage is present. The invention detects and identifies exposed garbage through target detection, so that urban exposed garbage is automatically identified and promptly disposed of, less garbage is exposed to public view, and the appearance of the city is effectively improved, a further step toward an intelligent, civilized city.

Description

Target detection method for identifying urban exposed garbage
Technical Field
The invention mainly relates to the field of artificial-intelligence image recognition, belongs to the field of environmental protection in smart cities, and particularly relates to a target detection method.
Background
Target detection is one of the core technologies of computer vision and digital image processing, with broad application prospects in navigation, monitoring, inspection, identification and other fields. By replacing manual inspection with computer vision, target detection reduces the dependence on human resources, which is of great significance to modern science and society. Many scholars and researchers have therefore devoted themselves to target detection, and research in the area continues to advance. With the wide application of deep learning, target detection algorithms have developed rapidly in recent years; current deep-learning-based detectors fall into two mainstream families, called one-stage and two-stage target detection algorithms according to their structure. The PP-YOLO used herein is a one-stage detector, which outperforms two-stage detectors in training and inference speed.
China has a large population and generates a large amount of garbage every day, so garbage management is a problem that must be taken seriously. Urban populations are dense and produce garbage quickly and in quantity; if garbage is not managed properly and disposed of in time, it seriously hinders urban development. At present, many places in some Chinese cities exhibit irregularly stacked garbage, garbage left uncollected for long periods, and garbage cans blocked by overflow; these phenomena seriously affect the appearance of the city and pose a threat to citizens' health. Research on urban exposed garbage is therefore of great significance.
In the prior art, exposed garbage is identified manually, which is inefficient and costly and cannot meet the needs of urban management.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: in the prior art, exposed garbage is identified manually, which is inefficient and costly and cannot meet the needs of urban management.
The invention provides a method for automatically and effectively identifying urban exposed garbage, which reduces the human resources required for identifying exposed garbage.
The invention provides a method for identifying urban exposed garbage, which comprises the following steps:
S1, photographing exposed garbage distributed across a city with a mobile phone and organizing the photos as a source data set S;
S2, marking the positions and categories of the exposed garbage: marking the position of the exposed garbage in each photo with the labelImg data annotation tool and labeling its category, generating xml data;
labelImg is a professional prior-art image annotation tool.
S3, combining the xml data, converting the source data set S into a data set CS in COCO format;
S4, zero-mean normalizing the CS data set, recorded as the ZCS data set;
S5, on the data set ZCS, selecting part of the data as a test set tes_ZCS, selecting part of the remaining data as a validation set val_ZCS, and using the rest as a training set tra_ZCS;
S6, training an exposed-garbage recognition model: learning to recognize exposed garbage on the data set tra_ZCS based on PP-YOLO, selecting the trained model ranked first by test mAP on the test set tes_ZCS, pruning and optimizing it, and converting it into an inference model to obtain the exposed-garbage recognition model;
and S7, acquiring pictures of urban walls, corners, the ground and garbage cans, and judging with the exposed-garbage recognition model whether garbage is piled, scattered, or overflowing from cans.
Further, the urban exposed garbage comprises: exposed garbage stacked near walls, corners, the ground and garbage cans, unpackaged scattered garbage or bagged garbage, and garbage exposed because a garbage can is overflowing.
In the photos shot with the mobile phone, the exposed garbage is wholly or mostly within the frame.
Further, when labeling data with labelImg, every piece of exposed garbage in the photo is enclosed in a labeling box, and a category is assigned to each labeling box.
Further, in step S5, the ZCS data set obtained after data conversion and zero-mean preprocessing is divided in the ratio 1:1:3 into the data sets tes_ZCS, val_ZCS and tra_ZCS.
Further, in step S6, the backbone network of PP-YOLO is ResNet50.
PP-YOLO is a prior-art target detector with a relatively balanced trade-off between effectiveness and efficiency that can be applied directly in practical scenarios; it can be obtained from a website provided by Baidu.
Anchors for the urban exposed garbage are computed on the data set CS with the k-means algorithm and used as the set of selectable box sizes when drawing boxes during the modeling of step S6;
The k-means algorithm takes as input the number of clusters k and a database containing n data objects, and outputs k clusters satisfying a minimum-variance criterion: the n data objects are partitioned into k clusters such that objects within the same cluster are highly similar, while objects in different clusters are dissimilar.
After the model is trained, the model whose mAP on the test set exceeds a preset value is selected, pruned with PaddleSlim, and finally exported as an inference model, completing the target detection, recognition and modeling of urban exposed garbage.
PaddleSlim is a prior-art model compression tool library that includes a series of model compression strategies such as model pruning, fixed-point quantization, knowledge distillation, hyperparameter search and model structure search.
The invention has the advantage that exposed garbage is detected and identified through target detection, so that urban exposed garbage is automatically identified and promptly disposed of, less garbage is exposed to public view, and the appearance of the city is effectively improved.
Drawings
FIG. 1 is a PP-YOLO network framework diagram of a target detection method for urban exposed garbage identification according to the present invention.
FIG. 2 is a network structure diagram of a backbone network ResNet50 of a PP-YOLO target detection method for urban exposed garbage identification according to the present invention.
FIG. 3 is a PP-YOLO network structure diagram of a target detection method for urban exposed garbage identification according to the present invention.
FIG. 4 is a flow chart of the present invention.
Detailed Description
In order to illustrate the specific identification method of the present invention more clearly, the following description is given with reference to the accompanying drawings, focusing on the key points involved in step S6.
As shown in fig. 4, the present invention provides a method for identifying urban exposed garbage, comprising the following steps:
S1, preparing the source data set S of exposed garbage: staff are dispatched to visit places where garbage is easily discarded, such as streets, road intersections and garbage-can placement points of the city, photograph the exposed garbage with mobile phones, and collect and organize the photos as the source data set for exposed-garbage identification.
S2, marking the positions and categories of the exposed garbage: the source data set is annotated with the labelImg data annotation tool, framing the position of the exposed garbage in each photo and labeling its category.
S3, converting the data representation: combining the additional xml data generated by annotation, the staff convert the source data set S into a data set CS in COCO format.
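The xml-to-COCO conversion of step S3 can be sketched as below, assuming labelImg's standard Pascal VOC XML layout (`filename`, `size`, `object/bndbox` fields). The function name and category list are illustrative, and a production converter would add error handling and write the result to a JSON file.

```python
import xml.etree.ElementTree as ET

def voc_to_coco(xml_strings, categories):
    """Convert labelImg (Pascal VOC) XML annotations to a COCO-style dict."""
    coco = {"images": [], "annotations": [],
            "categories": [{"id": i + 1, "name": c}
                           for i, c in enumerate(categories)]}
    cat_id = {c: i + 1 for i, c in enumerate(categories)}
    ann_id = 1
    for img_id, xml_text in enumerate(xml_strings, start=1):
        root = ET.fromstring(xml_text)
        size = root.find("size")
        coco["images"].append({
            "id": img_id,
            "file_name": root.findtext("filename"),
            "width": int(size.findtext("width")),
            "height": int(size.findtext("height")),
        })
        for obj in root.iter("object"):
            b = obj.find("bndbox")
            x1, y1 = float(b.findtext("xmin")), float(b.findtext("ymin"))
            x2, y2 = float(b.findtext("xmax")), float(b.findtext("ymax"))
            # COCO boxes are stored as [x, y, width, height]
            coco["annotations"].append({
                "id": ann_id, "image_id": img_id,
                "category_id": cat_id[obj.findtext("name")],
                "bbox": [x1, y1, x2 - x1, y2 - y1],
                "area": (x2 - x1) * (y2 - y1), "iscrowd": 0,
            })
            ann_id += 1
    return coco
```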
S4, preprocessing the data: the CS data set is normalized by zero-mean normalization and recorded as the ZCS data set.
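The zero-mean normalization of step S4 can be sketched as follows. The patent does not state which statistics are used, so per-channel mean and standard deviation computed over the whole data set are an assumption here, and the function name is illustrative.

```python
import numpy as np

def zero_mean_normalize(images):
    """Zero-mean normalize an (N, H, W, C) image array.

    Subtracts the per-channel mean over the whole data set and divides
    by the per-channel standard deviation (guarded against zero).
    """
    images = np.asarray(images, dtype=np.float64)
    mean = images.mean(axis=(0, 1, 2), keepdims=True)
    std = images.std(axis=(0, 1, 2), keepdims=True)
    return (images - mean) / np.maximum(std, 1e-8)
```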
S5, dividing the data set ZCS proportionally into three parts: a test set for testing, a validation set for validation, and a training set for modeling and learning. With the split ratio set to 1:1:3, 20% of the whole data set ZCS is selected as the test set tes_ZCS, 20% as the validation set val_ZCS, and the remaining 60% as the training set tra_ZCS.
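The 1:1:3 split of step S5 can be sketched as below; shuffling before slicing and the fixed seed are assumptions, since the patent only fixes the proportions.

```python
import random

def split_zcs(samples, seed=0):
    """Split the ZCS data set 1:1:3 into tes_ZCS, val_ZCS and tra_ZCS
    (20% test, 20% validation, 60% training), shuffling first."""
    items = list(samples)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_tes = n_val = n // 5          # 20% each for test and validation
    tes_zcs = items[:n_tes]
    val_zcs = items[n_tes:n_tes + n_val]
    tra_zcs = items[n_tes + n_val:]  # remaining 60% for training
    return tes_zcs, val_zcs, tra_zcs
```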
S6, training an exposed garbage recognition model, learning and recognizing exposed garbage on a training data set tra _ ZCS based on PP-YOLO, testing the learning effect on a test set tes _ ZCS by using the learned recognition characteristics, selecting a model established by the characteristics with the best recognition effect according to the test effect evaluation value mAP value, cutting, optimizing, and converting into an inference model to obtain the exposed garbage recognition model.
S7, using the constructed recognition model to identify exposed garbage in scenes such as urban walls, corners, the ground and garbage cans, judging whether garbage is piled, scattered, or overflowing from cans.
During implementation of the invention, the complete structure of the PP-YOLO network is shown in FIG. 3 and mainly comprises two parts: a backbone network and a multi-scale feature network FPN (Feature Pyramid Network). In the method of the present invention, the backbone is a ResNet50 network whose structure is shown in FIG. 2; excluding the final average-pooling layer and fully connected layer fc, its 50 layers comprise 5 groups of convolutional layers (conv1, conv2_x, conv3_x, conv4_x, conv5_x) and a max-pooling layer. The outputs c2, c1 and c0, taken after conv3_x, conv4_x and conv5_x respectively, are fed into the multi-scale feature network FPN, forming the PP-YOLO network with recognition at 3 scales.
Accordingly, the PP-YOLO network outputs predicted recognition results at 3 scales, and the prediction at each scale comprises 4 kinds of information: the center position (x, y) of the exposed garbage, its extent (w, h), its class, and the confidence of the identification.
Finally, according to the predicted recognition results, boxes with coordinates (x, y), size (w, h), class and confidence are drawn on the original photo, achieving target detection of the exposed garbage.
The PP-YOLO feature-learning process, which outputs correct prediction results, is described in detail below.
On the training set tra_ZCS, the positions and categories of the exposed garbage have been marked manually, i.e. the x, y, w, h, class and confidence values of the exposed garbage in the training photos are known. The PP-YOLO network compresses the photo information through a convolutional network and extracts a feature map; according to the size of the extracted feature map, suitable box sizes are selected from the anchors, and boxes are drawn only for the grid cells in which the centers of the labeled boxes fall. Then, according to the IoU value measuring the overlap between each drawn box and the manually labeled box, the objectness of the drawn box with the largest IoU is set to 1 and that of the other drawn boxes to 0. If the position of a drawn box were taken directly as the final prediction of PP-YOLO, the recognition accuracy would be very low, because the probability that a drawn box coincides with the labeled box is negligibly small; the drawn box therefore needs further fine-tuning to increase its IoU with the labeled box.
Here, the anchors are the result set obtained by clustering the sizes of the labeled boxes with a clustering algorithm. The intersection over union (IoU) is the ratio of the intersection and the union of the predicted box and the real box. Let the predicted box cover the region S_A and the real box the region S_B; then IoU is calculated as:

IoU = area(S_A ∩ S_B) / area(S_A ∪ S_B)
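The IoU ratio above can be computed directly from corner coordinates; this is a standard sketch with boxes given as (x1, y1, x2, y2).

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```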
In a training sample of one embodiment of the invention, the PP-YOLO network extracts a feature map divided into an S×S feature grid. The center of the target to be detected lies in one feature grid cell, whose offset from the top-left corner of the feature grid is (c_x, c_y); the offset of the target center from the top-left corner of that cell is (σ(t_x), σ(t_y)). The width and height of the corresponding anchor box are (p_w, p_h), and (X, Y, W, H) is the position of the target's real box relative to the feature grid. The drawn box is moved toward the real box with the following formulas:

X = σ(t_x) + c_x
Y = σ(t_y) + c_y
W = p_w · e^{t_w}
H = p_h · e^{t_h}
Pr(object) × IoU(b, object) = σ(t_o)

σ(t_o) is the confidence, Pr(object) is the probability that the drawn box contains an object to be detected, and IoU(b, object) is the IoU between the drawn box and the object's true position. e is the base of the natural exponential; X and Y represent the deviation of the box center from the border of its grid cell, and W and H represent the true width and height of the box as proportions of the whole image. t_x, t_y, t_w, t_h are the outputs the model must predict; through the above formulas, the relative position (X', Y', W', H') of the predicted box and its confidence are obtained, so that the target box can be drawn on the original image, achieving target detection and recognition.
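The decoding formulas above can be sketched as a small function; the symbol names follow the text, and the scalar, single-box form is a simplification of the network's tensor output.

```python
import math

def sigmoid(t):
    """Logistic sigmoid, the σ(·) used in the decoding formulas."""
    return 1.0 / (1.0 + math.exp(-t))

def decode_box(tx, ty, tw, th, cx, cy, pw, ph):
    """Decode raw outputs into a box on the feature grid:
        X = σ(tx) + cx,  Y = σ(ty) + cy
        W = pw * e^tw,   H = ph * e^th
    (cx, cy) is the cell's top-left grid offset and (pw, ph) the
    anchor (prior) width and height."""
    return (sigmoid(tx) + cx, sigmoid(ty) + cy,
            pw * math.exp(tw), ph * math.exp(th))
```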
Above, the mean average precision (mAP) value serves as the evaluation index of model detection; its calculation comprises 6 steps:
1) Calculate IoU. The IoU between the predicted result and the ground truth is calculated for each test sample.
2) Calculate TP, FP and FN. TP is the number of test samples judged positive that are actually positive; FP is the number judged positive that are actually negative; FN is the number judged negative that are actually positive. The sample type is determined from the IoU value: test samples whose IoU exceeds the threshold are judged positive and the rest negative; the threshold is typically 0.7.
3) Precision P calculation. The formula is:

P = TP / (TP + FP)
4) Recall R calculation. The formula is:

R = TP / (TP + FN)
5) AP (average precision) calculation. A PR curve is drawn per category, and the area under the curve is that category's AP value. For COCO-format data, the AP is computed with 101 interpolation points: on the recall axis, one point is taken every 0.01 for a total of 101 points, so the AP is:

AP = (1/101) · Σ_{i=0}^{100} p_i

where p_i denotes the precision at the i-th point and R_i the recall at the i-th point.
6) Calculate the mAP value. mAP is the mean of the AP values over all categories; with C detected categories in total and AP(i) the AP of the i-th category:

mAP = (1/C) · Σ_{i=1}^{C} AP(i)
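Steps 5) and 6) can be sketched as follows, assuming a precision/recall curve is already available per category. Taking the maximum precision at each sampled recall level or beyond is the usual interpolation rule; the function names are illustrative, and a full COCO evaluator additionally builds the PR curve by sorting detections by confidence.

```python
import numpy as np

def interpolated_ap(recalls, precisions):
    """101-point interpolated AP: sample recall at 0.00, 0.01, ..., 1.00
    and average the best precision achievable at that recall or beyond."""
    recalls = np.asarray(recalls, dtype=float)
    precisions = np.asarray(precisions, dtype=float)
    ap = 0.0
    for r in np.linspace(0.0, 1.0, 101):
        mask = recalls >= r
        ap += precisions[mask].max() if mask.any() else 0.0
    return ap / 101.0

def mean_ap(ap_per_class):
    """mAP is simply the mean of the per-class AP values."""
    return sum(ap_per_class) / len(ap_per_class)
```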
In an embodiment of the present invention, the loss computed in the PP-YOLO network is divided into four parts according to the prediction result: position, confidence with target, confidence without target, and classification. The position loss is a simple squared error, while the confidence and class-probability losses are binary cross-entropies. The total loss function is expressed as:

Loss = β_coord · Σ_{i=0}^{S×S} Σ_{j=0}^{k} 1_{ij}^{obj} · [(t_x − t'_x)² + (t_y − t'_y)²]
     + β_coord · Σ_{i=0}^{S×S} Σ_{j=0}^{k} 1_{ij}^{obj} · [(t_w − t'_w)² + (t_h − t'_h)²]
     − Σ_{i=0}^{S×S} Σ_{j=0}^{k} 1_{ij}^{obj} · [c_i·log(c'_i) + (1 − c_i)·log(1 − c'_i)]
     − β_noobj · Σ_{i=0}^{S×S} Σ_{j=0}^{k} 1_{ij}^{noobj} · [c_i·log(c'_i) + (1 − c_i)·log(1 − c'_i)]
     − Σ_{i=0}^{S×S} 1_i^{obj} · Σ_{c∈class} [p_i(c)·log(p'_i(c)) + (1 − p_i(c))·log(1 − p'_i(c))]

The first term is the error of the bounding-box center coordinates, the second the error of the bounding-box width and height, the third the confidence error of boxes containing a target, the fourth the confidence error of boxes not containing a target, and the fifth the classification error of cells containing a target.
β_coord and β_noobj are weighting coefficients: the coordinate prediction error is usually given a larger weight, β_coord = 5, the confidence error of boxes without a target a smaller weight, β_noobj = 0.5, and all other weights are set to 1. S×S is the size of the grid and k is the number of boxes drawn per grid cell. 1_i^{obj} indicates that the i-th cell of the feature grid contains a target, 1_{ij}^{obj} indicates that the j-th box of that cell is responsible for predicting the target, and 1_{ij}^{noobj} indicates that the i-th cell contains no target and the j-th box is responsible for predicting that cell as background. c belongs to the class set class and denotes one of the categories; p_i(c) is the probability that the i-th cell is of category c, and p'_i(c) the predicted probability. c_i is the confidence that the i-th cell is of category c (1 if the cell belongs to category c, otherwise 0), and c'_i the predicted confidence. t_x, t_y, t_w, t_h are the four parameters of the true box position, and t'_x, t'_y, t'_w, t'_h the four parameters of the predicted box position.
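Under the definitions above, a toy version of the five-term loss over a flat list of drawn boxes might look like this. The dictionary layout is purely illustrative, and a real implementation operates on whole prediction tensors at once.

```python
import math

def bce(y, p, eps=1e-7):
    """Binary cross-entropy for a single target y and prediction p."""
    p = min(max(p, eps), 1 - eps)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def yolo_loss(entries, beta_coord=5.0, beta_noobj=0.5):
    """Sketch of the five-term loss described above. Each entry holds:
      obj       -- 1 if this drawn box is responsible for a target, else 0
      t, t_pred -- true / predicted (tx, ty, tw, th)
      c, c_pred -- true / predicted confidence
      p, p_pred -- true / predicted per-class probability lists
    Coordinate terms use squared error; confidence and class terms use
    binary cross-entropy, matching the description in the text."""
    loss = 0.0
    for e in entries:
        if e["obj"]:
            tx, ty, tw, th = e["t"]
            px, py, pw, ph = e["t_pred"]
            loss += beta_coord * ((tx - px) ** 2 + (ty - py) ** 2)
            loss += beta_coord * ((tw - pw) ** 2 + (th - ph) ** 2)
            loss += bce(e["c"], e["c_pred"])
            loss += sum(bce(y, p) for y, p in zip(e["p"], e["p_pred"]))
        else:
            # background box: only the down-weighted confidence term
            loss += beta_noobj * bce(e["c"], e["c_pred"])
    return loss
```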
In an embodiment of the present invention, there is only one category, exposed garbage, and the number of anchors is 9, so the final 3-scale output of the PP-YOLO network consists of the tensors P0 (13 × 13 × 3 × (5+1)), P1 (26 × 26 × 3 × (5+1)) and P2 (52 × 52 × 3 × (5+1)). Here 13 × 13, 26 × 26 and 52 × 52 are the three scales of the S×S grid; 3 corresponds to the 3 boxes per grid cell, whose sizes are selected from the anchors according to S×S; 5 corresponds to the 5 predicted values (t_x, t_y, t_w, t_h, t_o); and 1 corresponds to the probability of the exposed-garbage category. To simplify the output, the high-confidence prediction boxes are kept and the low-confidence ones suppressed by the Non-Maximum Suppression (NMS) algorithm, and the position of the exposed garbage is finally output and drawn, completing target detection of the exposed garbage.
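The NMS step can be sketched as follows: keep the box with the highest confidence, discard remaining boxes that overlap it beyond a threshold, and repeat with the next best survivor. The greedy list-based form below is a simplification of batched implementations, and the threshold value is illustrative.

```python
def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression.

    Boxes are (x1, y1, x2, y2); returns indices of kept boxes,
    in decreasing order of confidence."""
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union > 0 else 0.0

    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        # suppress every remaining box that overlaps the kept one too much
        order = [i for i in order if iou(boxes[best], boxes[i]) <= iou_thresh]
    return keep
```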
The invention has the advantage that exposed garbage is detected and identified through target detection, so that urban exposed garbage is automatically identified and promptly disposed of, less garbage is exposed to public view, and the appearance of the city is effectively improved.

Claims (5)

1. A target detection method for identifying urban exposed garbage is characterized by comprising the following steps:
S1, photographing exposed garbage distributed across a city with a mobile phone and organizing the photos as a source data set S;
S2, marking the positions and categories of the exposed garbage: marking the position of the exposed garbage in each photo with the labelImg data annotation tool and labeling its category, generating xml data;
S3, combining the xml data, converting the source data set S into a data set CS in COCO format;
S4, zero-mean normalizing the CS data set, recorded as the ZCS data set;
S5, on the data set ZCS, selecting part of the data as a test set tes_ZCS, selecting part of the remaining data as a validation set val_ZCS, and using the rest as a training set tra_ZCS;
S6, training an exposed-garbage recognition model: learning to recognize exposed garbage on the data set tra_ZCS based on PP-YOLO, selecting the trained model ranked first by test mAP value on the test set tes_ZCS, pruning and optimizing the model, and converting it into an inference model to obtain the exposed-garbage recognition model;
and S7, acquiring pictures of urban walls, corners, the ground and garbage cans, and judging with the exposed-garbage recognition model whether garbage is piled, scattered, or overflowing from cans.
2. The object detection method of claim 1, wherein the urban exposed garbage comprises: exposed garbage stacked near walls, corners, the ground and garbage cans, unpackaged scattered garbage or bagged garbage, and garbage exposed because a garbage can is overflowing.
In the photos shot with the mobile phone, the exposed garbage is wholly or mostly within the frame.
3. The object detection method for identifying urban exposed garbage according to claim 1, wherein in step S2, labelImg is used to label data, and all exposed garbage on the photo is placed in a label box during labeling, and each label box is labeled with a category.
4. The target detection method for identifying urban exposed garbage according to claim 1, wherein in step S5, the ZCS data set obtained after data conversion and zero-mean preprocessing is divided in the ratio 1:1:3 into the data sets tes_ZCS, val_ZCS and tra_ZCS.
5. The object detection method for identifying urban exposed garbage according to claim 1, wherein in step S6, the backbone network of PP-YOLO is ResNet50.
Anchors for the urban exposed garbage are computed on the data set CS with the k-means algorithm and used as the set of selectable box sizes when drawing boxes during the modeling of step S6;
after the model is trained, the model whose mAP on the test set exceeds a preset value is selected, pruned with PaddleSlim, and finally exported as an inference model, completing the target detection, recognition and modeling of urban exposed garbage.
CN202011546774.6A 2020-12-24 2020-12-24 Target detection method for identifying urban exposed garbage Active CN112560755B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011546774.6A CN112560755B (en) 2020-12-24 2020-12-24 Target detection method for identifying urban exposed garbage


Publications (2)

Publication Number Publication Date
CN112560755A true CN112560755A (en) 2021-03-26
CN112560755B CN112560755B (en) 2022-08-19

Family

ID=75032350

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011546774.6A Active CN112560755B (en) 2020-12-24 2020-12-24 Target detection method for identifying urban exposed garbage

Country Status (1)

Country Link
CN (1) CN112560755B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106203498A (en) * 2016-07-07 2016-12-07 中国科学院深圳先进技术研究院 A kind of City scenarios rubbish detection method and system
CN110796186A (en) * 2019-10-22 2020-02-14 华中科技大学无锡研究院 Dry and wet garbage identification and classification method based on improved YOLOv3 network
US20200082224A1 (en) * 2018-09-10 2020-03-12 Sri International Weakly supervised learning for classifying images
CN111368895A (en) * 2020-02-28 2020-07-03 上海海事大学 Garbage bag target detection method and detection system in wet garbage
CN111914815A (en) * 2020-09-05 2020-11-10 广东鲲鹏智能机器设备有限公司 Machine vision intelligent recognition system and method for garbage target


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Xiang Long, Kaipeng Deng, et al.: "PP-YOLO: An Effective and Efficient Implementation of Object Detector", arXiv *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113255533A (en) * 2021-05-31 2021-08-13 中再云图技术有限公司 Method for identifying forbidden zone intrusion behavior, storage device and server
CN113468976A (en) * 2021-06-10 2021-10-01 浙江大华技术股份有限公司 Garbage detection method, garbage detection system and computer readable storage medium
CN113420673A (en) * 2021-06-24 2021-09-21 苏州科达科技股份有限公司 Garbage classification method, device, equipment and storage medium
CN113420673B (en) * 2021-06-24 2022-08-02 苏州科达科技股份有限公司 Garbage classification method, device, equipment and storage medium
CN114119959A (en) * 2021-11-09 2022-03-01 盛视科技股份有限公司 Vision-based garbage can overflow detection method and device
CN114155467A (en) * 2021-12-02 2022-03-08 上海皓维电子股份有限公司 Garbage can overflow detection method and device and electronic equipment
CN113903006A (en) * 2021-12-09 2022-01-07 北京云迹科技有限公司 Robot monitoring sanitation method and device, electronic equipment and storage medium
CN116189099A (en) * 2023-04-25 2023-05-30 南京华苏科技有限公司 Method for detecting exposed and stacked garbage based on improved YOLOv8
CN116189099B (en) * 2023-04-25 2023-10-10 南京华苏科技有限公司 Method for detecting exposed and stacked garbage based on improved YOLOv8

Also Published As

Publication number Publication date
CN112560755B (en) 2022-08-19

Similar Documents

Publication Publication Date Title
CN112560755B (en) Target detection method for identifying urban exposed garbage
CN110059554B (en) Multi-branch target detection method based on traffic scene
CN109508360B (en) Geographical multivariate stream data space-time autocorrelation analysis method based on cellular automaton
CN102956023B (en) Method for fusing traditional meteorological data and perception data based on Bayesian classification
CN112541532B (en) Target detection method based on dense connection structure
CN112287018A (en) Method and system for evaluating damage risk of 10kV tower under typhoon disaster
CN102867183A (en) Method and device for detecting objects littered from vehicles, and intelligent traffic monitoring system
CN113032613B (en) Three-dimensional model retrieval method based on interactive attention convolution neural network
CN111026870A (en) ICT system fault analysis method integrating text classification and image recognition
CN114119110A (en) Project cost list collection system and method thereof
CN113989487A (en) Fault defect detection method and system for live-action scheduling
CN113673839B (en) Intelligent base event gridding automatic dispatch method and base event processing system
Wang et al. Small-target detection algorithm based on improved YOLOv3
CN116630787A (en) Light-weight detection method and device for overflow of garbage can and storage device
CN116541944A (en) Carbon emission calculation method based on comprehensive oblique photography modeling model of transformer substation
CN110765900A (en) DSSD-based automatic illegal building detection method and system
CN115456238A (en) Urban trip demand prediction method based on dynamic multi-view coupling graph convolution
Sun et al. Automatic building age prediction from street view images
CN114283323A (en) Marine target recognition system based on image deep learning
CN114818849A (en) Anti-electricity-stealing method based on a big-data convolutional neural network and a genetic algorithm
Xin et al. A new remote sensing image retrieval method based on CNN and YOLO
Greenwell et al. Implicit land use mapping using social media imagery
CN113190537A (en) Data characterization method for emergency repair site in monitoring area
Zhou et al. An overload behavior detection system for engineering transport vehicles based on deep learning
Yin et al. A Face Mask Detection Algorithm Based on YOLOv5

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant