CN116189099A - Method for detecting and stacking exposed garbage based on improved yolov8 - Google Patents


Info

Publication number
CN116189099A
Authority
CN
China
Prior art keywords
garbage
exposed
model
stacking
detection
Prior art date
Legal status
Granted
Application number
CN202310451024.8A
Other languages
Chinese (zh)
Other versions
CN116189099B (en)
Inventor
李鹏博
陈晓芳
孟维
Current Assignee
Nanjing Howso Technology Co ltd
Original Assignee
Nanjing Howso Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Nanjing Howso Technology Co., Ltd.
Priority to CN202310451024.8A
Publication of CN116189099A
Application granted
Publication of CN116189099B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02W CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO WASTEWATER TREATMENT OR WASTE MANAGEMENT
    • Y02W30/00 Technologies for solid waste management
    • Y02W30/10 Waste collection, transportation, transfer or storage, e.g. segregated refuse collecting, electric or hybrid propulsion

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Alarm Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for detecting exposed garbage and monitoring its stacking based on an improved yolov8, comprising the following steps. S1, data collection and dataset construction: collect image data of the area to be monitored, then annotate the exposed garbage in the images to build a dataset. S2, network construction and model training: build the network and train a target detection model to obtain an exposed-garbage detection model. S3, model inference: connect the exposed-garbage detection model to a real-time video stream for streaming inference, judge whether exposed garbage is present in the monitored area, and process the frames according to the detection results. S4, result analysis: perform logical analysis on the inference results from step S3, judge the amount of stacked exposed garbage, and record the stacking time. S5, secondary alarm. The method improves the accuracy with which the target detection model identifies specific garbage and the accuracy of reported alarm information in open scenes, while reducing the model's false-detection rate.

Description

Method for detecting and stacking exposed garbage based on improved yolov8
Technical Field
The invention relates to the technical field of visual positioning, and in particular to a method for detecting exposed garbage and monitoring its stacking based on an improved yolov8.
Background
With the rapid development of AI technology, smart cities have become one of China's strategic development goals. The smart city is the product of the ongoing integration of China's new urbanization with modern science and technology against a background of continuous social innovation, and is an effective means of advancing new urbanization in an orderly way and achieving healthy, sustainable urban development. Urban garbage management and control inevitably affect the appearance of a smart city. Exposed urban garbage is mainly distributed along roads and in residential communities: road garbage is concentrated in motor-vehicle lanes, non-motor-vehicle lanes, sidewalks, green belts, and similar areas, while community garbage accumulates around garbage bins or garbage collection rooms. Most cities still rely mainly on manual inspection to identify garbage that is not collected on time, is kept too long, or is exposed and piled haphazardly. This approach is inefficient, costly, and affected by traffic and weather; it cannot provide all-weather urban garbage detection and cannot meet the requirements of smart city management. A method that identifies haphazardly stacked exposed garbage in a timely manner and notifies the relevant personnel to clean it up is therefore necessary.
The current common approach performs region detection on a captured image to obtain a target region, then performs target detection on that region to obtain the position and type of the garbage. However, exposed garbage comes in many varieties with few and indistinct features, so a single target detection algorithm struggles to identify roadside garbage stably and easily produces missed detections. Garbage detection areas also involve complex scenes in which diverse objects are piled together, which easily produces false detections. With insufficient sample data the model detects inaccurately, and false detections are even more likely in open scenes, inflating the alarm rate.
Chinese patent document CN112560755A discloses a target detection method for identifying urban exposed garbage, which mainly comprises: S1: photographing exposed garbage distributed around the city with a mobile phone and organizing the photographs into a source dataset S; S2: annotating the position and category of the exposed garbage; S3: converting the source dataset S into a COCO-format dataset CS using the xml annotations; S4: zero-mean normalizing the CS dataset to obtain dataset ZCS; S5: on ZCS, selecting part of the data as test set tes_ZCS, part of the remainder as validation set val_ZCS, and the rest as training set tra_ZCS; S6: training an exposed-garbage recognition model; S7: using the model to judge whether garbage is exposed. This scheme detects and identifies exposed garbage by target detection so that urban exposed garbage is automatically identified and handled in time, reducing garbage exposure in public view and effectively improving the city's appearance; however, it struggles to identify roadside exposed garbage stably and easily produces missed detections.
Chinese patent document CN111458721A discloses a method for identifying and locating exposed garbage, comprising: acquiring a target image containing exposed garbage through a camera; identifying the exposed garbage in the target image with a target detection algorithm; acquiring spatial position information through a lidar; determining the positional relationship between the camera coordinate system and the lidar coordinate system; computing the exposed-garbage region in the lidar coordinate system from that relationship and the identified garbage in the target image; and precisely locating the distribution of the exposed garbage and mapping those positions onto a map. The scheme also discloses a corresponding identification and positioning device and system. It can effectively obtain the positions of exposed garbage and reduce labor costs; however, its target detection algorithm is a single model and easily produces missed detections.
Chinese patent document CN106203498A discloses a method and system for detecting urban scene garbage, in which the method comprises: selecting the visual object classes (VOC) dataset as the base dataset for garbage detection; acquiring city images and annotating the garbage regions; fusing the city images with the VOC dataset to expand and enrich it; building a deep learning platform based on deep learning technology; selecting a pre-trained model on the platform; and, after setting the model's prior parameters, performing garbage detection on newly acquired city images and automatically outputting the detection results. Because garbage detection areas involve complex scenes with diverse objects piled together, this scheme easily produces false detections.
Chinese patent document CN114782681A discloses a method for detecting exposed garbage and overflowing garbage based on deep learning, comprising: training a first target detection model for detecting garbage and feeding the target image into it to obtain a first detection result; when garbage is detected, determining from the positional relationship between the garbage and the garbage bin whether it is exposed garbage; when the garbage is determined not to be exposed, feeding the target sub-image into a pre-trained second target detection model and determining from the second detection result whether the garbage is overflowing. By training a first model for detecting garbage and a second model for detecting overflow, the scheme improves the accuracy of garbage-type identification through stacking two models; however, in practical deployments loading two models consumes more hardware resources, the hardware requirements are high, detection is slow, and real-time performance is poor.
Chinese patent document CN114818868A discloses a method and device for constructing a feature extraction model and detecting garbage, in which the construction method comprises: building an exposed-garbage target detection dataset from acquired first garbage image data; training a first preset model on this dataset to obtain a garbage target detection model; detecting second garbage image data with this model to obtain garbage targets and non-garbage targets; classifying both kinds of targets and merging the results into a classification dataset; and training a preset classification model on this dataset to obtain a feature extraction model that extracts feature vectors of the garbage and non-garbage targets. This method is limited first by the first garbage image data: if its quality is poor, the garbage target detection model may train poorly. Second, the model's accuracy also depends on the quality of the incoming images: if the second garbage image data is of poor quality, the detection results may likewise be inaccurate.
Chinese patent document CN113468976A discloses a garbage detection method, system, and computer-readable storage medium, the method comprising: acquiring multiple video frames and obtaining from each frame a tracking result for the target to be detected in the target detection area, where the tracking result includes the target's category; when the target's category is garbage bin and its overflow confidence exceeds a first preset threshold, or when the category is exposed garbage and its joint detection-classification confidence exceeds a second preset threshold, treating the target as an alarm target and accumulating the time over all frames containing alarm targets to obtain a first accumulated time; and when the first accumulated time exceeds a third preset threshold, clearing it and reporting all alarm targets for alarm display. This method tracks poorly against dynamic backgrounds, so its recognition rate is low, alarms are affected, and the likelihood of false alarms increases.
Chinese patent document CN106203498A also discloses a garbage detection method with related devices and apparatus, comprising: acquiring an original image of the scene to be detected; performing region detection on the original image to obtain a target region corresponding to the garbage region in the scene; and performing target detection on the target region to obtain the position and/or type information of the garbage in the original image. However, this scheme's target detection algorithm is again a single model and easily produces missed detections.
Chinese patent document CN113989626A discloses a multi-class garbage scene discrimination method based on a target detection model, comprising: obtaining images to be detected of the target environment scene, performing garbage recognition on the input images, and marking the regions identified as garbage with detection boxes; obtaining the confidence score of each detection box; identifying the garbage-scene class to which the garbage in all the detection boxes belongs and marking it as the main garbage treatment target; and outputting the class name of that garbage scene together with the positions of the corresponding detection boxes. This invention addresses the low degree of intelligence and the usage limitations of prior garbage recognition models, which could not distinguish whether an image taken by a person with a terminal actually required garbage feedback.
Disclosure of Invention
The technical problem the invention aims to solve is to provide a method for exposed-garbage detection and stacking monitoring based on an improved yolov8, which can accurately identify garbage exposure in the monitored area, monitor the exposed garbage and raise alarms, improve the accuracy with which the target detection model identifies specific garbage and the accuracy of reported alarm information in open scenes, and reduce the model's false-detection rate.
To solve this technical problem, the invention adopts the following technical scheme. The method for exposed-garbage detection and stacking monitoring based on the improved yolov8 specifically comprises the following steps:
S1, data collection and dataset construction: collect image data of the area to be monitored, then annotate the exposed garbage in the collected images to build a dataset;
S2, network construction and model training: build the network and train a target detection model on the dataset to obtain an exposed-garbage detection model;
S3, model inference: connect the exposed-garbage detection model to a real-time video stream for streaming inference, judge whether exposed garbage is present in the monitored area, and process the frames according to the detection results to obtain inference results;
S4, result analysis: perform logical analysis on the inference results from step S3, judge the amount of stacked exposed garbage, and record the stacking time;
S5, secondary alarm: respond with the corresponding secondary alarm according to the stacking amount and stacking time of the exposed garbage.
With this scheme, a dataset is built by collecting data; a suitable target recognition model is selected, an improved network model is constructed, and an exposed-garbage detection model is trained on the dataset; detection is then performed, the detection results are analyzed, and alarms are raised according to the different outcomes. This improves the accuracy with which the target detection model identifies specific garbage and the accuracy of reported alarm information, while reducing the model's false-detection rate.
Preferably, in step S1, collecting image data of the area to be monitored includes crawling publicly available datasets from the web using web-crawler techniques, acquiring images of the garbage bins or garbage collection stations around the monitored area through external cameras, and manually photographing exposed garbage in residential communities and streets. In an open scene, publicly available datasets are crawled from the web, images of exposed garbage near garbage bins are captured by the surrounding cameras, and images of exposed garbage in nearby communities and surrounding streets are photographed manually, yielding a large volume of image data from which the dataset is then built.
Preferably, step S1 further includes amplifying the collected image data through sample-transformation data enhancement, specifically single-sample data enhancement, multi-sample data enhancement, and deep-learning-based data enhancement. The exposed garbage detected in this scheme is mainly household garbage, including garbage bags or plastic bags of various colors filled with garbage, plastic beverage bottles, old clothes and shoes, white foam boxes, cardboard, and decoration-material waste. Because some of these categories have few samples, enhancement prevents sample imbalance from degrading the model's detection performance and robustness. The root effect of sample imbalance is that the model learns the prior class proportions of the training set, so its predictions are biased toward the majority classes (accuracy is better on majority classes and worse on minority classes). Therefore, to bring the proportion of each class of samples close to 1:1:1:1:1:1, the collected image data must be enhanced. Single-sample enhancement mainly generates new samples through geometric operations, color transformations, random erasing, noise injection, and the like. Multi-sample enhancement combines and transforms multiple known samples to construct neighborhood samples in feature space, mainly through methods such as SMOTE, SamplePairing, and mixup. Deep-learning-based enhancement uses generative models such as variational autoencoders and generative adversarial networks to synthesize samples, and the generated samples are more diverse. Combining these three enhancement modes brings the proportions of the different sample classes in the training set into balance.
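As an illustration of the multi-sample enhancement mentioned above, the following is a minimal mixup sketch in plain Python. The patent gives no code, so the nested-list image representation, the Beta parameter, and the one-hot label layout are all assumptions for illustration:

```python
import random

def mixup(img_a, img_b, label_a, label_b, alpha=0.2):
    """Blend two images and their one-hot labels with a random mixing weight.

    Images are nested lists of pixel intensities; labels are one-hot lists.
    The mixing weight lam is drawn from a Beta(alpha, alpha) distribution.
    """
    lam = random.betavariate(alpha, alpha)
    mixed_img = [
        [lam * pa + (1 - lam) * pb for pa, pb in zip(row_a, row_b)]
        for row_a, row_b in zip(img_a, img_b)
    ]
    mixed_label = [lam * la + (1 - lam) * lb for la, lb in zip(label_a, label_b)]
    return mixed_img, mixed_label

# Tiny 2x2 "images" from two classes of a six-class problem
a = [[1.0, 1.0], [1.0, 1.0]]
b = [[0.0, 0.0], [0.0, 0.0]]
img, lab = mixup(a, b, [1, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0])
```

In practice the blended pair counts as one extra sample for an under-represented class, which is how such methods push the class proportions toward balance.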
Preferably, the specific steps of step S4 are as follows:
S41: first, access the monitoring video stream from step S1 to obtain multiple frames of streaming image data;
S42: feed one frame of streaming image data into the exposed-garbage detection model trained in step S2, detect exposed garbage, and judge whether a target, i.e. exposed garbage, is detected in the monitored area; if so, go to step S43; if not, return and read the next frame of streaming image data until all frames in the video stream have been processed;
S43: count the exposed garbage and compute the ratio of the total garbage area to the monitored area; if the ratio exceeds a set threshold, jump to step S5 and trigger the exposed-garbage stacking alarm; otherwise trigger the exposed-garbage alarm and start a timer to obtain the stacking time; when the stacking time exceeds a preset time threshold, issue a secondary reminder alarm.
Preferably, the specific step of step S5 is: if exposed garbage is detected in the monitored area and the computed area ratio is below the set threshold, time the exposed garbage to obtain the stacking time, and when the stacking time exceeds the preset time threshold, issue a secondary reminder alarm.
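The alarm logic of steps S4 and S5 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the threshold values, the `(x1, y1, x2, y2)` box format, and the simplification of summing box areas without subtracting overlaps are all assumptions:

```python
def analyze_frame(boxes, monitored_area, area_thresh=0.1,
                  stack_start=None, now=0.0, time_thresh=600.0):
    """Classify one frame's detections into alarm levels.

    boxes          -- detected exposed-garbage boxes as (x1, y1, x2, y2)
    monitored_area -- area of the monitored region in the same pixel units
    Returns (alarm, stack_start) where alarm is one of
    'none', 'exposed', 'stacking', 'secondary'.
    """
    if not boxes:
        return "none", None                      # no garbage: reset the timer
    total = sum((x2 - x1) * (y2 - y1) for x1, y1, x2, y2 in boxes)
    if total / monitored_area > area_thresh:
        return "stacking", stack_start           # S43: stacking alarm
    start = now if stack_start is None else stack_start
    if now - start > time_thresh:
        return "secondary", start                # S5: secondary reminder
    return "exposed", start                      # ordinary exposed-garbage alarm

alarm, t0 = analyze_frame([(0, 0, 10, 10)], monitored_area=10000, now=0.0)
alarm2, _ = analyze_frame([(0, 0, 10, 10)], monitored_area=10000,
                          stack_start=t0, now=700.0)
```

The first call triggers the ordinary exposed-garbage alarm and starts the timer; the second, 700 seconds later with the same small pile still present, escalates to the secondary reminder.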
Preferably, in step S2, the yolo-series single-stage target detection model yolov8 is selected as the target detection model, and the exposed-garbage detection model is trained as follows:
S21, data preparation: divide the dataset built in step S1 into a training set, a validation set, and a test set, and ensure the annotated image data contains the object's category and bounding-box coordinates;
S22, model configuration: modify the model configuration file in the source code of the single-stage target detection model yolov8 and specify the model's parameters;
S23, model training: train the model with the training set and the model configuration file, adjusting the learning rate, the optimizer, and the loss-function parameters during training so that training outputs an optimal exposed-garbage detection model. On the backbone, the single-stage detector YOLOv8 replaces YOLOv5's C3 module with the C2f module; for the classification and regression losses it adopts the DFL idea in Anchor-Free form, using BCE Loss as the classification loss and DFL Loss plus CIoU Loss as the regression loss; it uses the Task-Aligned Assigner for positive/negative sample matching; and it disables Mosaic augmentation for the last 10 epochs. These measures effectively improve precision.
Preferably, in step S2, while the exposed-garbage detection model performs the target detection task, the Loss function Loss WIoUv3 is added to the network of the target detection model yolov8, and an ablation experiment is run over the Loss function Loss CIoU, the Loss function Loss SIoU, and the Loss function Loss WIoUv3 to compare their influence on the accuracy of the resulting exposed-garbage detection model. Target detection is a central problem in computer vision, and detection performance depends on the design of the loss function. In the target detection task, IoU measures the degree of overlap between the anchor box and the target box; expressed as a ratio, it effectively shields the loss from interference by bounding-box size. When the anchor box coincides well with the target box, a good loss function should attenuate the penalty of the geometric factors, and less training intervention gives the model better generalization. In this scheme, an ablation experiment over the three loss functions yields the optimal model structure.
Preferably, the Loss function Loss IoU is formulated as:

$$L_{IoU} = 1 - IoU = 1 - \frac{W_i H_i}{w h + w_{gt} h_{gt} - W_i H_i}$$

where $w, h$ and $w_{gt}, h_{gt}$ are the widths and heights of the anchor box and the target box, and $W_i, H_i$ are the width and height of their overlap region. When the bounding boxes do not overlap, the derivative of $L_{IoU}$ with respect to the overlap width $W_i$ is equal to 0, namely:

$$\frac{\partial L_{IoU}}{\partial W_i} = 0$$

At this point the back-propagated gradient of $L_{IoU}$ vanishes and the width $W_i$ of the overlap region cannot be updated during training.

Therefore, the Loss function Loss IoU is optimized to obtain GIoU, where the formula is:

$$GIoU = IoU - \frac{|C \setminus (B \cup B_{gt})|}{|C|}$$

where $C$ is the smallest box enclosing the anchor box $B$ and the target box $B_{gt}$. The formula for the Loss function GIoU Loss is thus:

$$L_{GIoU} = 1 - GIoU$$

Optimizing the Loss function GIoU Loss yields CIoU, whose formula is:

$$L_{CIoU} = 1 - IoU + \frac{\rho^2(b, b_{gt})}{c^2} + \alpha v$$

wherein:

$$v = \frac{4}{\pi^2}\left(\arctan\frac{w_{gt}}{h_{gt}} - \arctan\frac{w}{h}\right)^2, \qquad \alpha = \frac{v}{(1 - IoU) + v}$$

where $\rho(b, b_{gt})$ is the distance between the center points of the predicted box and the target box, $c$ is the diagonal length of their smallest enclosing box, $v$ measures the consistency of the width-height ratios of the predicted and target boxes through the arctangent of the angle, and $\alpha$ is a balance parameter weighted by the IoU value: the larger the IoU of the predicted box and the target box, the larger the balance parameter $\alpha$.
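The CIoU loss above can be sketched in plain Python as follows. This is an illustrative implementation under the formulas just given; the corner-coordinate `(x1, y1, x2, y2)` box format is an assumption:

```python
import math

def ciou_loss(box, gt):
    """CIoU loss for axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    gx1, gy1, gx2, gy2 = gt
    # Overlap width/height, intersection, and IoU
    wi = max(0.0, min(x2, gx2) - max(x1, gx1))
    hi = max(0.0, min(y2, gy2) - max(y1, gy1))
    inter = wi * hi
    iou = inter / ((x2 - x1) * (y2 - y1) + (gx2 - gx1) * (gy2 - gy1) - inter)
    # Squared center distance over squared enclosing-box diagonal
    rho2 = ((x1 + x2 - gx1 - gx2) ** 2 + (y1 + y2 - gy1 - gy2) ** 2) / 4
    c2 = (max(x2, gx2) - min(x1, gx1)) ** 2 + (max(y2, gy2) - min(y1, gy1)) ** 2
    # Aspect-ratio consistency v and balance parameter alpha
    v = (4 / math.pi ** 2) * (
        math.atan((gx2 - gx1) / (gy2 - gy1)) - math.atan((x2 - x1) / (y2 - y1))
    ) ** 2
    alpha = v / ((1 - iou) + v) if v > 0 else 0.0
    return 1 - iou + rho2 / c2 + alpha * v

loss_same = ciou_loss((0, 0, 10, 10), (0, 0, 10, 10))
```

For identical boxes every term vanishes and the loss is 0; for disjoint boxes the center-distance term still provides a gradient, which is exactly the improvement of CIoU over plain IoU loss.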
Preferably, the Loss function Loss SIoU is an SIoU constructed from an angle cost, a distance cost, and a shape cost. The angle cost describes the minimum angle between the line connecting the centers of the bounding boxes and the x or y axis:

$$\Lambda = 1 - 2\sin^2\left(\arcsin(x) - \frac{\pi}{4}\right), \qquad x = \frac{c_h}{\sigma}$$

where $\sigma$ is the distance between the center points of the two boxes and $c_h$ is the difference in height between those center points.

The distance cost describes the normalized distance between the center points of the two bounding boxes along the x and y axes, with the strength of its penalty term positively correlated with the angle cost; thus, the distance cost is defined as:

$$\Delta = \sum_{t=x,y}\left(1 - e^{-\gamma \rho_t}\right), \qquad \gamma = 2 - \Lambda$$

$$\rho_x = \left(\frac{b^{gt}_{c_x} - b_{c_x}}{c_w}\right)^2, \qquad \rho_y = \left(\frac{b^{gt}_{c_y} - b_{c_y}}{c_h}\right)^2$$

where $c_w$ and $c_h$ here denote the width and height of the smallest enclosing box.

The shape cost describes the shape difference between the two bounding boxes, i.e. it is nonzero when the two boxes differ in size; the shape cost is then defined as:

$$\Omega = \sum_{t=w,h}\left(1 - e^{-\omega_t}\right)^{\theta}$$

$$\omega_w = \frac{|w - w_{gt}|}{\max(w, w_{gt})}, \qquad \omega_h = \frac{|h - h_{gt}|}{\max(h, h_{gt})}$$

The penalty terms $R_{SIoU}$ and $R_{CIoU}$ are both composed of a distance cost and a shape cost; the formula is:

$$R_{SIoU} = \frac{\Delta + \Omega}{2}$$

The loss function $L_{box}$ is then defined as:

$$L_{box} = 1 - IoU + \frac{\Delta + \Omega}{2}$$
Preferably, the formula defining the Loss function Loss WIoU is:

$$L_{WIoUv1} = R_{WIoU} \cdot L_{IoU}$$

$$R_{WIoU} = \exp\left(\frac{(x - x_{gt})^2 + (y - y_{gt})^2}{\left(W_g^2 + H_g^2\right)^{*}}\right)$$

wherein $(x, y)$ and $(x_{gt}, y_{gt})$ are the center points of the anchor box and the target box, and $W_g$ and $H_g$ respectively denote the width and height of the smallest enclosing box; the superscript $*$ indicates that $W_g$ and $H_g$ are detached from the computational graph. To prevent the penalty term $R_{WIoU}$ from producing gradients that impede convergence, $W_g$ and $H_g$ are both detached from the computational graph; this effectively eliminates the factors that hinder convergence, so no new metric, such as the aspect ratio, is introduced.
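A plain-Python sketch of WIoU v1 as defined above, for illustration only: the corner-coordinate `(x1, y1, x2, y2)` box format is an assumption, and detachment from the computational graph is mimicked by treating $W_g$ and $H_g$ as plain constants (no autograd is involved here):

```python
import math

def wiou_v1_loss(box, gt):
    """WIoU v1 loss: R_WIoU * L_IoU for boxes given as (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    gx1, gy1, gx2, gy2 = gt
    # IoU loss
    wi = max(0.0, min(x2, gx2) - max(x1, gx1))
    hi = max(0.0, min(y2, gy2) - max(y1, gy1))
    inter = wi * hi
    union = (x2 - x1) * (y2 - y1) + (gx2 - gx1) * (gy2 - gy1) - inter
    l_iou = 1 - inter / union
    # Centers of the two boxes
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    gx, gy = (gx1 + gx2) / 2, (gy1 + gy2) / 2
    # Smallest enclosing box; W_g, H_g act as detached constants
    wg = max(x2, gx2) - min(x1, gx1)
    hg = max(y2, gy2) - min(y1, gy1)
    r_wiou = math.exp(((cx - gx) ** 2 + (cy - gy) ** 2) / (wg ** 2 + hg ** 2))
    return r_wiou * l_iou

loss_identical = wiou_v1_loss((0, 0, 10, 10), (0, 0, 10, 10))
```

When the boxes coincide the exponent is 0, so $R_{WIoU} = 1$ and the loss reduces to $L_{IoU} = 0$; when the centers diverge, $R_{WIoU} > 1$ amplifies the IoU loss.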
Compared with the prior art, the invention has the following beneficial effects. The yolo-series single-stage target detection model is selected as the target detection and recognition model, the original yolov8 model structure is improved, and corresponding ablation experiments yield an exposed-garbage detection model with higher precision and a lower false-detection rate. The optimizations and improvements are specifically: 1) the C2f module replaces YOLOv5's C3 module, achieving a lighter model without loss of precision; 2) a DFL module is used; 3) the traditional Anchor-Based form is abandoned in favor of the Anchor-Free form; 4) BCE Loss is used as the classification loss and DFL Loss + CIoU Loss as the regression loss; 5) YOLOv8 abandons traditional IoU matching and unilateral proportion allocation in favor of the Task-Aligned Assigner matching scheme; 6) the model disables Mosaic augmentation for the last 10 epochs, which effectively improves precision. These measures solve the problem of inaccurate model detection, so garbage exposure in the monitored area can be accurately identified and exposed garbage monitored and alarmed; in open scenes the accuracy with which the target detection model identifies specific garbage and the accuracy of reported alarm information are improved, while the model's false-detection rate is reduced.
Drawings
FIG. 1 is a flow chart of the method for exposed garbage detection and stacking monitoring based on improved yolov8 of the present invention;
FIG. 2 is a flow chart of step S4 of the method;
FIG. 3a is a graph of test results of the yolov8n_CIoU model in a cluttered scene;
FIG. 3b is a graph of test results of the yolov8n_SIoU model in a cluttered scene;
FIG. 3c is a graph of test results of the yolov8n_WIoUv3 model in a cluttered scene;
FIG. 4a is a graph of test results of the yolov8n_CIoU model in a distant scene;
FIG. 4b is a graph of test results of the yolov8n_SIoU model in a distant scene;
FIG. 4c is a graph of test results of the yolov8n_WIoUv3 model in a distant scene;
FIG. 5a is a graph of test results of the yolov8n_CIoU model in a dynamic scene;
FIG. 5b is a graph of test results of the yolov8n_SIoU model in a dynamic scene;
FIG. 5c is a graph of test results of the yolov8n_WIoUv3 model in a dynamic scene;
FIG. 6 is the network structure of the single-stage object detection model yolov8 used in the method.
Description of the embodiments
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings of the embodiments.
Examples: as shown in fig. 1, the method for exposed garbage detection and stacking monitoring based on improved yolov8 specifically comprises the following steps:
s1, collecting data and manufacturing a data set: collecting image data of the area to be detected and monitored, then annotating exposed garbage in the obtained image data to produce a data set. In step S1, collecting the image data of the area to be detected and monitored includes crawling publicly available data sets through web-crawler technology, acquiring image data of garbage cans or garbage recycling stations around the area through external cameras, and manually photographing exposed garbage in residential districts and streets; in this way a large amount of image data is obtained in an open scene and the data set is prepared. Step S1 further comprises amplifying the collected image data through sample-transformation data enhancement, specifically single-sample data enhancement, multi-sample data enhancement and deep-learning-based data enhancement;
the exposed garbage detected in this embodiment is mainly household garbage, including garbage bags or plastic bags of various colors filled with garbage, plastic beverage bottles, old clothes and shoes, white foam boxes, cardboard, and decoration-material waste. Because the number of samples in some of these categories is small, sample imbalance would degrade the detection effect and harm the robustness of the model; the root cause of the harm from sample imbalance is that the model learns the prior class proportions of the training set, so its predictions are biased toward the majority classes (majority-class accuracy improves while minority-class accuracy deteriorates). Therefore, to bring the proportion of each class of samples close to 1:1:1:1:1:1, enhancement processing must be performed on the acquired image data. Single-sample data enhancement mainly generates new samples through geometric operations, color transformations, random erasing, noise addition and the like; multi-sample enhancement constructs neighborhood samples of known samples in the feature space by combining and transforming several samples, mainly including methods such as SMOTE, SamplePairing and Mixup; deep-learning-based data enhancement uses generative models such as variational autoencoders and generative adversarial networks to synthesize more varied samples. Combining these three data-enhancement modes brings the proportions of the different classes of training samples into balance;
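As a minimal sketch of the single-sample geometric enhancement described above (the function names and the YOLO-style normalized label format `(class, x_center, y_center, w, h)` are illustrative assumptions, not taken from the patent):

```python
def hflip_image(image):
    """Horizontally flip an image given as rows of pixel values."""
    return [list(reversed(row)) for row in image]

def hflip_bboxes(bboxes):
    """Mirror YOLO-format boxes (class, x_center, y_center, w, h) whose
    coordinates are normalized to [0, 1]: only x_center changes."""
    return [(c, 1.0 - x, y, w, h) for (c, x, y, w, h) in bboxes]

img = [[1, 2, 3],
       [4, 5, 6]]
boxes = [(0, 0.25, 0.5, 0.5, 1.0)]  # one hypothetical "garbage bag" box

assert hflip_image(img) == [[3, 2, 1], [6, 5, 4]]
assert hflip_bboxes(boxes) == [(0, 0.75, 0.5, 0.5, 1.0)]
```

The key point such augmentations must respect is that the annotation is transformed together with the pixels, so the label stays consistent with the image.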
s2, building a network and training a model: constructing a network and training a target detection model with the data set to obtain an exposed garbage detection model. In step S2 the yolo-series single-stage target detection model YOLOv8 is selected as the target detection model. YOLOv8 replaces the C3 module of YOLOv5 with the C2f module in the backbone network, achieving a lighter model without loss of precision; it adopts the DFL idea and an Anchor-Free form in the classification and regression losses, uses BCE Loss as the classification loss and DFL Loss + CIoU Loss as the regression loss, uses the Task-Aligned Assigner positive/negative sample matching strategy, and closes Mosaic augmentation for the last 10 epochs, which effectively improves precision. The network structure of the single-stage target detection model yolov8 is shown in fig. 6 and mainly comprises three blocks: the backbone network Backbone, the detection head and the matching mechanism. In FIG. 6, Backbone represents the backbone network portion of the model; Head represents the detection-head portion of the model; Detect represents the detection output portion of the model; the CBS module represents the convolution (Conv) + Batch Normalization (BN) + activation function (SiLU) operation; the C2f module represents a combined convolution + split + merge structure with richer gradient flow;
Concat represents channel-wise concatenation; Upsample represents the upsampling module; SPPF represents the fast spatial pyramid pooling module, whose function is to realize an adaptive-size output; BCE represents the binary cross-entropy loss module used for classification; DFL, in full Distribution Focal Loss, represents a loss function module that optimizes, in cross-entropy form, the probabilities of the two positions closest to the label y;
the training steps of the exposed garbage detection model are as follows:
s21, data preparation: dividing the data set manufactured in the step S1 into a training set, a verification set and a test set, and ensuring that the marked image data contains the category of the object of the target and the coordinate information of the boundary frame;
s22, model configuration: modifying the model configuration file in the source code of the single-stage target detection model yolov8 and specifying the parameters of the model (including input size, anchor boxes, number of categories, data paths, and the like); for the improved model, the files implementing the bounding-box loss functions in the yolov8 source code must also be modified and invoked accordingly;
s23, training a model: training a model by using the training set and the model configuration file, and adjusting the learning rate, the optimizer and the parameter setting of the loss function in the training process so that the training outputs an optimal exposed garbage detection model;
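The data-preparation step S21 above can be sketched as a simple reproducible split of the annotated samples (the 8:1:1 ratio and the fixed seed are illustrative assumptions; the patent does not specify them):

```python
import random

def split_dataset(samples, train=0.8, val=0.1, seed=42):
    """Shuffle annotated samples and split them into train/val/test subsets."""
    items = list(samples)
    random.Random(seed).shuffle(items)  # fixed seed -> reproducible split
    n_train = int(len(items) * train)
    n_val = int(len(items) * val)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

paths = [f"img_{i:04d}.jpg" for i in range(100)]  # hypothetical file names
tr, va, te = split_dataset(paths)

assert len(tr) == 80 and len(va) == 10 and len(te) == 10
assert set(tr) | set(va) | set(te) == set(paths)  # no sample lost or duplicated
```

Each entry would in practice pair an image with its annotation file carrying the class and bounding-box coordinates, as step S21 requires.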
in step S2, for the target detection task performed by the exposed garbage detection model, the Loss function Loss WIoUv3 is added to the network of the target detection model yolov8, and an ablation experiment is performed on the Loss function Loss CIoU, the Loss function Loss SIoU and the Loss function Loss WIoUv3 to compare their influence on the accuracy of the output exposed garbage detection model. In the target detection task, IoU is used to measure the degree of overlap between the anchor box and the target box; because it is a ratio, it effectively shields the interference of bounding-box size;
the Loss function Loss IoU is given by:
$$\mathcal{L}_{IoU}=1-IoU=1-\frac{W_{i}H_{i}}{S_{u}},\qquad S_{u}=wh+w_{gt}h_{gt}-W_{i}H_{i}$$
whereW i andH i are the width and height of the overlap region. When the bounding boxes do not overlap (the overlap heightH i is 0), the derivative ofL IoU with respect to the overlap widthW i equals 0, namely:
$$\left.\frac{\partial \mathcal{L}_{IoU}}{\partial W_{i}}\right|_{H_{i}=0}=0$$
at this time the back-propagated gradient ofL IoU vanishes, and the widthW i of the overlap region cannot be updated during training;
however, IoU loss has two major drawbacks:
(1) When the predicted box and the real box do not intersect, the calculated IoU is 0 and the loss is 1 regardless of how far apart the boxes are; the distance information is lost, even though the loss ought to be smaller when the relative positions of the predicted and real boxes are close;
(2) When different predicted boxes have the same intersection-over-union with the real box but different positions, the calculated losses are identical, so it is impossible to judge which prediction is closer to the truth;
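Both drawbacks can be reproduced in a few lines of plain Python (the corner-point box format `(x1, y1, x2, y2)` is an assumption for illustration):

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

gt = (0, 0, 2, 2)

# Drawback (1): both non-overlapping predictions give IoU = 0 (loss = 1),
# no matter how far from the ground truth they are.
assert iou(gt, (3, 0, 5, 2)) == 0.0 and iou(gt, (100, 0, 102, 2)) == 0.0

# Drawback (2): two differently placed predictions with equal overlap
# produce exactly the same IoU, so the loss cannot rank them.
assert iou(gt, (0, 0, 1, 2)) == iou(gt, (1, 0, 2, 2)) == 0.5
```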
thus, optimizing on the basis of IoU yields the GIoU:
$$GIoU=IoU-\frac{\left|C\setminus(A\cup B)\right|}{|C|}$$
where C is the smallest enclosing box of the predicted box A and the real box B; the formula for the Loss function GIoU Loss is thus:
$$\mathcal{L}_{GIoU}=1-GIoU$$
however, the drawback of the GIoU loss is that when the predicted box and the real box have the same width and are aligned on the same vertical line, the smallest enclosing box coincides with their union and GIoU degrades to IoU. Furthermore, GIoU and IoU share two disadvantages: slow convergence and insufficiently accurate regression;
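A minimal sketch showing how GIoU restores the distance information that plain IoU loses for non-overlapping boxes (same illustrative `(x1, y1, x2, y2)` box format as above):

```python
def giou(a, b):
    """GIoU = IoU - |C \\ (A U B)| / |C|, with C the smallest enclosing box."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    # smallest enclosing box C of both boxes
    cw = max(a[2], b[2]) - min(a[0], b[0])
    ch = max(a[3], b[3]) - min(a[1], b[1])
    c_area = cw * ch
    return inter / union - (c_area - union) / c_area

gt, near, far = (0, 0, 2, 2), (3, 0, 5, 2), (100, 0, 102, 2)
# plain IoU scores both 0; GIoU still ranks the nearer prediction higher
assert giou(gt, near) > giou(gt, far)
assert giou(gt, gt) == 1.0  # perfect match
```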
therefore, the Loss function GIoU Loss is optimized again to obtain CIoU, with the formula:
$$\mathcal{L}_{CIoU}=1-IoU+\frac{\rho^{2}(b,b_{gt})}{c^{2}}+\alpha v$$
wherein:
$$v=\frac{4}{\pi^{2}}\left(\arctan\frac{w_{gt}}{h_{gt}}-\arctan\frac{w}{h}\right)^{2},\qquad \alpha=\frac{v}{(1-IoU)+v}$$
where ρ(b, b gt ) is the distance between the center points of the predicted box and the target box, c is the diagonal length of the smallest enclosing box, and v measures the aspect-ratio consistency of the predicted box and the target box through the arctangent of the aspect ratio;
α is a balance parameter (this coefficient does not participate in gradient computation) that is assigned priority according to the IoU value: the larger the IoU of the predicted box and the target box, the larger the balance parameter α;
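The CIoU loss just defined can be sketched as follows (illustrative only; in a real training loop α is excluded from gradient computation, which plain Python numbers trivially satisfy):

```python
import math

def ciou_loss(pred, gt):
    """CIoU loss for (x1, y1, x2, y2) boxes: 1 - IoU + rho^2/c^2 + alpha*v."""
    ix1, iy1 = max(pred[0], gt[0]), max(pred[1], gt[1])
    ix2, iy2 = min(pred[2], gt[2]), min(pred[3], gt[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_p = (pred[2] - pred[0]) * (pred[3] - pred[1])
    area_g = (gt[2] - gt[0]) * (gt[3] - gt[1])
    iou = inter / (area_p + area_g - inter)
    # squared center distance rho^2 and enclosing-box diagonal c^2
    rho2 = (((pred[0] + pred[2]) - (gt[0] + gt[2])) / 2) ** 2 \
         + (((pred[1] + pred[3]) - (gt[1] + gt[3])) / 2) ** 2
    cw = max(pred[2], gt[2]) - min(pred[0], gt[0])
    ch = max(pred[3], gt[3]) - min(pred[1], gt[1])
    c2 = cw ** 2 + ch ** 2
    # aspect-ratio consistency v and balance parameter alpha
    wp, hp = pred[2] - pred[0], pred[3] - pred[1]
    wg, hg = gt[2] - gt[0], gt[3] - gt[1]
    v = (4 / math.pi ** 2) * (math.atan(wg / hg) - math.atan(wp / hp)) ** 2
    alpha = v / ((1 - iou) + v) if (1 - iou) + v > 0 else 0.0
    return 1 - iou + rho2 / c2 + alpha * v

assert ciou_loss((0, 0, 2, 2), (0, 0, 2, 2)) == 0.0  # perfect match: zero loss
assert ciou_loss((1, 0, 3, 2), (0, 0, 2, 2)) > 0.5   # offset box is penalized
```

Unlike plain IoU loss, the center-distance term keeps a useful gradient even when the boxes do not overlap.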
the Loss function Loss SIoU is the SIoU loss constructed from an angle cost, a distance cost and a shape cost; the angle cost describes the minimum angle between the line connecting the centers of the two bounding boxes and the x- or y-axis, which gives SIoU a faster convergence speed:
$$\Lambda=1-2\sin^{2}\!\left(\arcsin(x)-\frac{\pi}{4}\right),\qquad x=\frac{c_{h}}{\sigma}=\sin\alpha$$
the distance cost describes the normalized distance between the center points of the two bounding boxes along the x-axis and the y-axis, with the strength of the penalty term positively correlated with the angle cost; thus the distance cost is defined as:
$$\Delta=\sum_{t=x,y}\left(1-e^{-\gamma\rho_{t}}\right),\qquad \gamma=2-\Lambda$$
$$\rho_{x}=\left(\frac{b_{c_{x}}^{gt}-b_{c_{x}}}{c_{w}}\right)^{2},\qquad \rho_{y}=\left(\frac{b_{c_{y}}^{gt}-b_{c_{y}}}{c_{h}}\right)^{2}$$
the shape cost describes the shape difference of the two bounding boxes, i.e. it is non-zero when the two bounding boxes differ in size; the shape cost is defined as:
$$\Omega=\sum_{t=w,h}\left(1-e^{-\omega_{t}}\right)^{\theta},\qquad \omega_{w}=\frac{|w-w_{gt}|}{\max(w,w_{gt})},\quad \omega_{h}=\frac{|h-h_{gt}|}{\max(h,h_{gt})}$$
the penalty termR SIoU, likeR CIoU, consists of a distance cost and a shape cost:
$$R_{SIoU}=\frac{\Delta+\Omega}{2}$$
the loss functionL box is then defined as:
$$\mathcal{L}_{box}=1-IoU+\frac{\Delta+\Omega}{2}$$
the formula defining the Loss function Loss WIoU is:
$$\mathcal{L}_{WIoUv1}=R_{WIoU}\,\mathcal{L}_{IoU}$$
$$R_{WIoU}=\exp\!\left(\frac{(x-x_{gt})^{2}+(y-y_{gt})^{2}}{\left(W_{g}^{2}+H_{g}^{2}\right)^{*}}\right)$$
wherein,W g andH g respectively denote the width and height of the smallest enclosing box; the superscript * indicates thatW g andH g are detached from the computational graph. To prevent the penalty termR WIoU from producing gradients that hinder convergence,W g andH g are both detached from the computational graph; this effectively removes the factors that hinder convergence, so no new metric (such as aspect ratio) needs to be introduced.
R WIoU ∈ [1, e), which significantly amplifies theL IoU of ordinary-quality anchor boxes;L IoU ∈ [0, 1], which significantly reduces theR WIoU of high-quality anchor boxes, so that when the anchor box coincides well with the target box the loss focuses on the distance between center points. Since the training data inevitably contains low-quality examples, geometric factors (such as distance and aspect ratio) aggravate the penalty on low-quality examples and thereby reduce the generalization performance of the model; when the anchor box coincides well with the target box, a good loss function should attenuate the penalty of the geometric factors, and fewer training interventions allow the model to obtain better generalization capability. Therefore, in this embodiment three loss functions are compared in an ablation experiment to obtain the optimal model structure; an ablation experiment explores the performance contribution of a component of a complex deep neural network by modifying a small part of the network structure. In this embodiment the improvement is realized mainly by replacing the loss function of yolov8: the default CIoU loss function of yolov8 is changed to the WIoU function, and the detection effect of the SIoU loss function is also tested, to highlight the effect of the loss function in detecting exposed garbage;
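A sketch of the WIoU v1 computation described above (with plain numbers, "detaching" W_g and H_g from the computational graph is a no-op; in a framework such as PyTorch it would correspond to calling `.detach()` on that denominator):

```python
import math

def wiou_v1_loss(pred, gt):
    """WIoU v1 for (x1, y1, x2, y2) boxes: L = R_WIoU * L_IoU, with
    R_WIoU = exp(((x - x_gt)^2 + (y - y_gt)^2) / (Wg^2 + Hg^2)*)."""
    ix1, iy1 = max(pred[0], gt[0]), max(pred[1], gt[1])
    ix2, iy2 = min(pred[2], gt[2]), min(pred[3], gt[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_p = (pred[2] - pred[0]) * (pred[3] - pred[1])
    area_g = (gt[2] - gt[0]) * (gt[3] - gt[1])
    l_iou = 1 - inter / (area_p + area_g - inter)
    # squared distance between the two center points
    rho2 = (((pred[0] + pred[2]) - (gt[0] + gt[2])) / 2) ** 2 \
         + (((pred[1] + pred[3]) - (gt[1] + gt[3])) / 2) ** 2
    # Wg, Hg: width/height of the smallest enclosing box; the superscript *
    # (detach from the computational graph) is a no-op on plain numbers
    wg = max(pred[2], gt[2]) - min(pred[0], gt[0])
    hg = max(pred[3], gt[3]) - min(pred[1], gt[1])
    r_wiou = math.exp(rho2 / (wg ** 2 + hg ** 2))  # R_WIoU in [1, e)
    return r_wiou * l_iou

assert wiou_v1_loss((0, 0, 2, 2), (0, 0, 2, 2)) == 0.0  # R = 1, L_IoU = 0
# the focusing factor amplifies the IoU loss of an offset box beyond 1 - IoU
assert wiou_v1_loss((1, 0, 3, 2), (0, 0, 2, 2)) > 2 / 3
```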
the results of the ablation experiments are shown in table 1. In this embodiment, by comparing the average detection precision (mAP) of the three models (yolov8n_CIoU, yolov8n_SIoU, yolov8n_WIoUv3), the detection precision of the yolov8n_WIoUv3 model reaches 70.69%, the best detection performance of the three. As shown in figs. 3a-3c (detection results of the three models in a cluttered scene), figs. 4a-4c (detection results in a distant scene) and figs. 5a-5c (detection results in a dynamic scene), where the label garbage in the figures denotes a garbage bag and cardboard denotes cardboard, comparison of the detection results of the yolov8n_CIoU, yolov8n_SIoU and yolov8n_WIoUv3 models on part of the test set shows that the yolov8n_WIoUv3 model outperforms the other two detection models, reducing the false detection rate and improving the generalization capability of the model;
table 1 ablation experimental results
[Table 1 appears in the original publication as an image and is not reproduced here; per the text above, the yolov8n_WIoUv3 model attains the highest average detection precision of the three models, 70.69%.]
S3, model reasoning: accessing the obtained exposed garbage detection model to a real-time video stream for real-time stream reasoning, judging whether exposed garbage exists in the monitored area, and processing the detection result to obtain a reasoning result; in this embodiment, the exposed garbage detection model trained in step S2 is connected to a Hikvision dome network camera for real-time detection;
s4, analysis of results: carrying out logic analysis on the reasoning result obtained in the step S3, judging the stacking amount of the exposed garbage and recording the stacking time of the exposed garbage;
as shown in fig. 2, the specific steps of the step S4 are as follows:
s41: firstly, accessing a monitoring video stream through the step S1 to obtain a plurality of frames of stream image data;
s42: inputting one frame of stream image data into the exposed garbage detection model trained in step S2 to detect exposed garbage, and judging whether the target, namely exposed garbage, is detected in the monitored area; if so, proceeding to step S43; if not, returning to read the next frame of stream image data until all frames in the video stream have been detected;
s43: calculating the quantity of exposed garbage and the ratio of the total garbage area to the area of the monitored region; if the area ratio exceeds a set threshold (0.2 in this embodiment), jumping to step S5 and triggering the exposed garbage stacking alarm; otherwise triggering the exposed garbage alarm and starting timing to obtain the stacking time, and starting a secondary reminding alarm when the stacking time exceeds a preset time threshold.
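Steps S42-S43 and the secondary alarm of step S5 can be sketched as frame-level logic (only the 0.2 area-ratio threshold comes from the embodiment; the one-hour time threshold and the function shape are illustrative assumptions):

```python
AREA_RATIO_THRESHOLD = 0.2     # set threshold from the embodiment
STACK_TIME_THRESHOLD = 3600.0  # illustrative: 1 hour, not specified in the text

def analyze_frame(detections, roi_area, first_seen, now):
    """Return the alarm level for one frame.
    detections: list of (w, h) exposed-garbage boxes inside the monitored ROI;
    first_seen: timestamp when exposed garbage was first detected there."""
    if not detections:
        return "none"                      # S42: no target, read next frame
    total_area = sum(w * h for w, h in detections)
    if total_area / roi_area > AREA_RATIO_THRESHOLD:
        return "stacking_alarm"            # S43: jump to step S5
    if now - first_seen > STACK_TIME_THRESHOLD:
        return "secondary_alarm"           # timed secondary reminder (S5)
    return "exposed_alarm"                 # below threshold: keep timing

assert analyze_frame([], 100.0, 0.0, 10.0) == "none"
assert analyze_frame([(5, 5)], 100.0, 0.0, 10.0) == "stacking_alarm"
assert analyze_frame([(2, 2)], 100.0, 0.0, 10.0) == "exposed_alarm"
assert analyze_frame([(2, 2)], 100.0, 0.0, 7200.0) == "secondary_alarm"
```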
S5, secondary alarming: responding to the corresponding secondary alarm according to the stacking amount and the stacking time of the exposed garbage; the specific steps of the step S5 are as follows: if the exposed garbage is detected in the detected and monitored area and the calculated area ratio is smaller than the set threshold, the exposed garbage is subjected to timing treatment to obtain the stacking time, and when the stacking time exceeds the preset time threshold, a secondary reminding alarm is started.
The foregoing describes only preferred embodiments of the invention and is not intended to limit it; any modification, equivalent replacement, improvement or the like made within the spirit and principles of the invention shall fall within its scope of protection.

Claims (10)

1. A method for detecting exposed garbage and monitoring stacking based on improved yolov8, characterized by comprising the following steps:
s1, collecting data and manufacturing a data set: collecting image data of a region to be detected and monitored, and then carrying out exposure garbage image annotation on the obtained image data to manufacture a data set;
s2, building a network and training a model: constructing a network and training a target detection model by utilizing a data set to obtain an exposed garbage detection model;
s3, model reasoning: accessing the obtained exposed garbage detection model into a real-time video stream to perform real-time stream reasoning, judging whether exposed garbage exists in a monitoring area, and processing according to a detection result to obtain a reasoning result;
s4, analysis of results: carrying out logic analysis on the reasoning result obtained in the step S3, judging the stacking amount of the exposed garbage and recording the stacking time of the exposed garbage;
s5, secondary alarming: and responding to the corresponding secondary alarm according to the stacking quantity and the stacking time of the exposed garbage.
2. The method for detecting exposed garbage and monitoring stacking based on improved yolov8 according to claim 1, wherein in step S1, collecting the image data of the area to be detected and monitored includes crawling publicly available data sets through web-crawler technology, acquiring image data of garbage cans or garbage recycling stations around the area through external cameras, and manually photographing exposed garbage in residential districts and streets.
3. The method for detecting and monitoring the garbage exposure and stacking based on the improved yolov8 according to claim 2, wherein the step S1 further comprises amplifying the collected image data by a data enhancement mode of sample transformation, in particular, single sample data enhancement, multiple sample data enhancement and data enhancement based on deep learning.
4. The method for detecting and monitoring the stacking of the exposed garbage based on the improved yolov8 according to claim 2, wherein the specific steps of the step S4 are as follows:
s41: firstly, accessing a monitoring video stream through the step S1 to obtain a plurality of frames of stream image data;
s42: inputting one frame of stream image data into the exposed garbage detection model trained in step S2 to detect exposed garbage, and judging whether the target, namely exposed garbage, is detected in the monitored area; if so, proceeding to step S43; if not, returning to read the next frame of stream image data until all frames in the video stream have been detected;
s43: calculating the quantity of exposed garbage and the ratio of the total garbage area to the area of the monitored region; if the area ratio exceeds a set threshold, jumping to step S5 and triggering the exposed garbage stacking alarm; otherwise triggering the exposed garbage alarm and starting timing to obtain the stacking time, and starting a secondary reminding alarm when the stacking time exceeds a preset time threshold.
5. The method for detecting and monitoring the stacking of the exposed garbage based on the improved yolov8 according to claim 4, wherein the specific steps of the step S5 are as follows: if the exposed garbage is detected in the detected and monitored area and the calculated area ratio is smaller than the set threshold, the exposed garbage is subjected to timing treatment to obtain the stacking time, and when the stacking time exceeds the preset time threshold, a secondary reminding alarm is started.
6. The method for detecting exposed garbage and monitoring stacking based on improved yolov8 according to claim 4, wherein in step S2 the yolo-series single-stage object detection model yolov8 is selected as the object detection model, and the training steps of the exposed garbage detection model are as follows:
s21, data preparation: dividing the data set manufactured in the step S1 into a training set, a verification set and a test set, and ensuring that the marked image data contains the category of the object of the target and the coordinate information of the boundary frame;
s22, model configuration: modifying a model configuration file in the source code of the single-stage target detection model yolov8, and designating parameters of the model;
s23, training a model: and training the model by using the training set and the model configuration file, and adjusting the learning rate, the optimizer and the parameter setting of the loss function in the training process so that the training outputs an optimal exposed garbage detection model.
7. The method for detecting exposed garbage and monitoring stacking based on improved yolov8 according to claim 6, wherein in step S2, for the target detection task performed by the exposed garbage detection model, the Loss function Loss WIoUv3 is added to the network of the target detection model yolov8, and an ablation experiment is performed on the Loss function Loss CIoU, the Loss function Loss SIoU and the Loss function Loss WIoUv3 to compare their influence on the accuracy of the output exposed garbage detection model.
8. The method for detecting exposed garbage and monitoring stacking based on improved yolov8 according to claim 7, wherein the Loss function Loss IoU is formulated as:
$$\mathcal{L}_{IoU}=1-IoU=1-\frac{W_{i}H_{i}}{S_{u}},\qquad S_{u}=wh+w_{gt}h_{gt}-W_{i}H_{i}$$
when the bounding boxes do not overlap (the overlap heightH i is 0), the derivative ofL IoU with respect to the overlap widthW i equals 0, namely:
$$\left.\frac{\partial \mathcal{L}_{IoU}}{\partial W_{i}}\right|_{H_{i}=0}=0$$
at this time the back-propagated gradient ofL IoU vanishes and the widthW i of the overlap region cannot be updated during training;
therefore, the Loss function Loss IoU is optimized to obtain GIoU, where the formula is:
$$GIoU=IoU-\frac{\left|C\setminus(A\cup B)\right|}{|C|}$$
the formula for the Loss function GIoU Loss is thus:
$$\mathcal{L}_{GIoU}=1-GIoU$$
and optimizing the Loss function GIoU Loss yields the CIoU, wherein the formula is as follows:
$$\mathcal{L}_{CIoU}=1-IoU+\frac{\rho^{2}(b,b_{gt})}{c^{2}}+\alpha v$$
wherein:
$$v=\frac{4}{\pi^{2}}\left(\arctan\frac{w_{gt}}{h_{gt}}-\arctan\frac{w}{h}\right)^{2},\qquad \alpha=\frac{v}{(1-IoU)+v}$$
wherein v is used to measure the aspect-ratio consistency of the predicted box and the target box through the arctangent of the aspect ratio, and α is a balance parameter that is assigned priority according to the IoU value: the larger the IoU of the predicted box and the target box, the larger the balance parameter α.
9. The method for detecting exposed garbage and monitoring stacking based on improved yolov8 according to claim 7, wherein the Loss function Loss SIoU is the SIoU loss constructed from an angle cost, a distance cost and a shape cost; wherein the angle cost describes the minimum angle between the line connecting the centers of the two bounding boxes and the x- or y-axis:
$$\Lambda=1-2\sin^{2}\!\left(\arcsin(x)-\frac{\pi}{4}\right),\qquad x=\frac{c_{h}}{\sigma}=\sin\alpha$$
the distance cost describes the normalized distance between the center points of the two bounding boxes along the x-axis and the y-axis, with the strength of the penalty term positively correlated with the angle cost; thus the distance cost is defined as:
$$\Delta=\sum_{t=x,y}\left(1-e^{-\gamma\rho_{t}}\right),\qquad \gamma=2-\Lambda$$
$$\rho_{x}=\left(\frac{b_{c_{x}}^{gt}-b_{c_{x}}}{c_{w}}\right)^{2},\qquad \rho_{y}=\left(\frac{b_{c_{y}}^{gt}-b_{c_{y}}}{c_{h}}\right)^{2}$$
the shape cost describes the shape difference of the two bounding boxes, i.e. it is non-zero when the two bounding boxes differ in size; the shape cost is defined as:
$$\Omega=\sum_{t=w,h}\left(1-e^{-\omega_{t}}\right)^{\theta},\qquad \omega_{w}=\frac{|w-w_{gt}|}{\max(w,w_{gt})},\quad \omega_{h}=\frac{|h-h_{gt}|}{\max(h,h_{gt})}$$
the penalty termR SIoU, likeR CIoU, consists of a distance cost and a shape cost, with the formula:
$$R_{SIoU}=\frac{\Delta+\Omega}{2}$$
the loss functionL box is then defined as:
$$\mathcal{L}_{box}=1-IoU+\frac{\Delta+\Omega}{2}$$
10. The method for detecting exposed garbage and monitoring stacking based on improved yolov8 according to claim 7, wherein the formula defining the Loss function Loss WIoU is:
$$\mathcal{L}_{WIoUv1}=R_{WIoU}\,\mathcal{L}_{IoU}$$
$$R_{WIoU}=\exp\!\left(\frac{(x-x_{gt})^{2}+(y-y_{gt})^{2}}{\left(W_{g}^{2}+H_{g}^{2}\right)^{*}}\right)$$
whereinW g andH g respectively denote the width and height of the smallest enclosing box, and the superscript * indicates thatW g andH g are detached from the computational graph.
CN202310451024.8A 2023-04-25 2023-04-25 Method for detecting and stacking exposed garbage based on improved yolov8 Active CN116189099B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310451024.8A CN116189099B (en) 2023-04-25 2023-04-25 Method for detecting and stacking exposed garbage based on improved yolov8

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310451024.8A CN116189099B (en) 2023-04-25 2023-04-25 Method for detecting and stacking exposed garbage based on improved yolov8

Publications (2)

Publication Number Publication Date
CN116189099A true CN116189099A (en) 2023-05-30
CN116189099B CN116189099B (en) 2023-10-10

Family

ID=86450922

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310451024.8A Active CN116189099B (en) 2023-04-25 2023-04-25 Method for detecting and stacking exposed garbage based on improved yolov8

Country Status (1)

Country Link
CN (1) CN116189099B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116342895A (en) * 2023-05-31 2023-06-27 浙江联运知慧科技有限公司 Method and system for improving sorting efficiency of renewable resources based on AI (advanced technology attachment) processing
CN116452590A (en) * 2023-06-16 2023-07-18 中国人民解放军战略支援部队航天工程大学 Optimization method and system for boundary box intersection-parallel ratio and electronic equipment
CN116665080A (en) * 2023-07-26 2023-08-29 国网江西省电力有限公司电力科学研究院 Unmanned aerial vehicle deteriorated insulator detection method and system based on target recognition
CN116958688A (en) * 2023-07-28 2023-10-27 南京信息工程大学 Target detection method and system based on YOLOv8 network
CN117079327A (en) * 2023-08-18 2023-11-17 广东保伦电子股份有限公司 Face recognition method and device based on real-time target detection and storage medium
CN117197728A (en) * 2023-11-07 2023-12-08 成都千嘉科技股份有限公司 Method for identifying real-time gas diffusing operation through wearable camera equipment

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN205114216U (en) * 2015-11-20 2016-03-30 浙江联运知慧科技有限公司 Intelligence trash classification recycling case with rubbish overflows to expire and reports to police
CN106203498A (en) * 2016-07-07 2016-12-07 中国科学院深圳先进技术研究院 A kind of City scenarios rubbish detection method and system
CN111458721A (en) * 2020-03-31 2020-07-28 江苏集萃华科智能装备科技有限公司 Exposed garbage identification and positioning method, device and system
WO2021022475A1 (en) * 2019-08-06 2021-02-11 中国长城科技集团股份有限公司 Refuse disposal method and apparatus, and terminal device
CN112560755A (en) * 2020-12-24 2021-03-26 中再云图技术有限公司 Target detection method for identifying urban exposed garbage
US20210178432A1 (en) * 2017-06-30 2021-06-17 Boe Technology Group Co., Ltd. Trash sorting and recycling method, trash sorting device, and trash sorting and recycling system
CN113468976A (en) * 2021-06-10 2021-10-01 浙江大华技术股份有限公司 Garbage detection method, garbage detection system and computer readable storage medium
CN113834451A (en) * 2021-08-26 2021-12-24 贵阳市环境卫生管理服务中心 Automatic garbage exposure area monitoring method for domestic garbage landfill operation area
JP2022014437A (en) * 2020-07-06 2022-01-19 株式会社タクマ Device for warning falling into refuse pit, method for warning falling into refuse pit and program for warning falling into refuse pit
CN114299364A (en) * 2021-12-31 2022-04-08 郑州信大先进技术研究院 Data expansion method and system for urban exposed garbage sample image
CN114898309A (en) * 2022-03-11 2022-08-12 苏州市伏泰信息科技股份有限公司 City intelligent inspection vehicle system and inspection method based on visual AI technology
CN114955289A (en) * 2022-03-24 2022-08-30 中国人民解放军陆军军事交通学院 Intelligent garbage classification recycling and management method and intelligent classification garbage can
CN115035474A (en) * 2022-06-21 2022-09-09 武汉市万睿数字运营有限公司 Scene attention-based garbage detection method and device and related medium
CN115100527A (en) * 2022-07-08 2022-09-23 姚淞瀚 Garbage detection method of neural network model based on YOLOv5
CN115937655A (en) * 2023-02-24 2023-04-07 城云科技(中国)有限公司 Target detection model of multi-order feature interaction, and construction method, device and application thereof
CN115995119A (en) * 2023-03-23 2023-04-21 山东特联信息科技有限公司 Gas cylinder filling link illegal behavior identification method and system based on Internet of things


Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
COMPUTER VISION: "YOLOv8 vs. YOLOv5: Choosing the Best Object Detection Model", HTTPS://WWW.AUGMENTEDSTARTUPS.COM/BLOG/YOLOV8-VS-YOLOV5-CHOOSING-THE-BEST-OBJECT-DETECTION-MODEL, pages 1 - 4 *
Z. WU 等: "Using YOLOv5 for Garbage Classification", 2021 4TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE (PRAI), pages 35 - 38 *
ZANJIA TONG等: "Wise-IoU: Bounding Box Regression Loss with Dynamic Focusing Mechanism", HTTPS://ARXIV.ORG/ABS/2301.10051, pages 1 - 8 *
加勒比海带66: "Object detection algorithms: improving YOLOv5/v7/v8 with the accuracy-boosting Wise-IoU trick (surpassing CIoU/SIoU)", HTTPS://BLOG.CSDN.NET/M0_53578855/ARTICLE/DETAILS/129762616, pages 1 - 3 *
朱祥祥: "Research on object detection algorithms for urban management based on deep learning", China Masters' Theses Full-text Database (Information Science and Technology), no. 2023, pages 138 - 298 *
董子源; 韩卫光: "Garbage image classification algorithm based on convolutional neural networks", 计算机系统应用 (Computer Systems &amp; Applications), no. 08, pages 203 - 208 *
魏书法; 程章林: "Image-based automatic garbage detection in urban scenes", 集成技术 (Journal of Integration Technology), no. 01, pages 41 - 54 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116342895A (en) * 2023-05-31 2023-06-27 浙江联运知慧科技有限公司 Method and system for improving sorting efficiency of renewable resources based on AI processing
CN116342895B (en) * 2023-05-31 2023-08-11 浙江联运知慧科技有限公司 Method and system for improving sorting efficiency of renewable resources based on AI processing
CN116452590A (en) * 2023-06-16 2023-07-18 中国人民解放军战略支援部队航天工程大学 Optimization method and system for bounding-box intersection over union (IoU) and electronic equipment
CN116665080A (en) * 2023-07-26 2023-08-29 国网江西省电力有限公司电力科学研究院 Unmanned aerial vehicle deteriorated insulator detection method and system based on target recognition
CN116665080B (en) * 2023-07-26 2023-11-07 国网江西省电力有限公司电力科学研究院 Unmanned aerial vehicle deteriorated insulator detection method and system based on target recognition
CN116958688A (en) * 2023-07-28 2023-10-27 南京信息工程大学 Target detection method and system based on YOLOv8 network
CN117079327A (en) * 2023-08-18 2023-11-17 广东保伦电子股份有限公司 Face recognition method and device based on real-time target detection and storage medium
CN117197728A (en) * 2023-11-07 2023-12-08 成都千嘉科技股份有限公司 Method for identifying real-time gas venting operations via wearable camera equipment
CN117197728B (en) * 2023-11-07 2024-01-23 成都千嘉科技股份有限公司 Method for identifying real-time gas venting operations via wearable camera equipment

Also Published As

Publication number Publication date
CN116189099B (en) 2023-10-10

Similar Documents

Publication Publication Date Title
CN116189099B (en) Method for detecting and stacking exposed garbage based on improved yolov8
Nie et al. Pavement crack detection based on YOLO v3
CN104050481B (en) Multi-template infrared image real-time pedestrian detection method combining contour feature and gray level
CN109816024A Real-time vehicle logo detection method based on multi-scale feature fusion and DCNN
CN104361351B Synthetic aperture radar image classification method based on range-statistics similarity
CN111611970B (en) Urban management monitoring video-based random garbage throwing behavior detection method
CN109859468A Multi-lane traffic volume counting and vehicle tracking method based on YOLOv3
CN112257799A (en) Method, system and device for detecting household garbage target
CN112084869A (en) Compact quadrilateral representation-based building target detection method
CN112001411B (en) Dam crack detection algorithm based on FPN structure
CN102496001A (en) Method of video monitor object automatic detection and system thereof
CN111914634A Automatic manhole cover type detection method and system robust to complex-scene interference
CN111626277A (en) Vehicle tracking method and device based on over-station inter-modulation index analysis
CN113468976B (en) Garbage detection method, garbage detection system, and computer-readable storage medium
CN115346177A (en) Novel system and method for detecting target under road side view angle
CN107689158A Intelligent traffic control method based on image processing
CN113221804B (en) Disordered material detection method and device based on monitoring video and application
CN107194393A Method and device for detecting temporary license plates
CN114627437B (en) Traffic target identification method and system
CN112707058B (en) Detection method, system, device and medium for standard actions of kitchen waste
CN110490150A Automatic auditing system and method for traffic-violation images based on vehicle retrieval
CN113095301A (en) Road occupation operation monitoring method, system and server
CN113469097A (en) SSD (solid State disk) network-based real-time detection method for water surface floating object multiple cameras
CN111217062A (en) Garbage can garbage identification method based on edge calculation and deep learning
Jardosh et al. SEGRO: key towards modern waste management

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant