CN110796186A - Dry and wet garbage identification and classification method based on improved YOLOv3 network - Google Patents
Dry and wet garbage identification and classification method based on improved YOLOv3 network
- Publication number
- CN110796186A (application number CN201911005605.9A)
- Authority
- CN
- China
- Prior art keywords
- network
- dry
- garbage
- target frame
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Probability & Statistics with Applications (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
The invention provides a dry and wet garbage recognition and classification method based on an improved YOLOv3 network, in which the original recognition model comprises a YOLOv3 network. The method comprises the following steps: collecting pictures of mixed dry and wet garbage in real throwing scenes and establishing a sample data set; preprocessing the data for image enhancement and expanding the sample data set; marking the dry garbage positions in the pictures; performing cluster analysis on the marked real target frames to obtain initial target frames; setting the learning rate parameters of the improved YOLOv3 network and improving the network structure of the original recognition model; setting the loss function and optimization algorithm of the improved YOLOv3 network to complete training of the network in the recognition model; and testing with the trained network parameters. The invention realizes positioning detection and recognition classification of dry and wet garbage in complex scenes.
Description
Technical Field
The invention belongs to the field of application research of deep learning target detection algorithms, and particularly relates to a dry and wet garbage identification and classification method based on an improved YOLOv3 network.
Background
In recent years, the environmental protection industry has faced an increasingly severe situation, and the state has continuously introduced new environmental protection policies and measures; garbage classification is an important means of promoting environmental protection. At present, many key cities in China have built facilities and systems for the throwing, collection, transportation and treatment of domestic garbage. To advance garbage classification, the most important task is to raise residents' awareness of garbage classification at the source and to improve the purity of classified garbage.
Object detection is one of the most important technologies in computer vision. Before deep neural networks were developed, traditional object detection techniques extracted features from gradients and feature points and then classified the extracted features with machine learning methods such as decision trees, random forests and SVMs. However, these methods have poor representativeness and robustness, and cannot detect objects with no fixed form or characteristics, such as garbage. In real-life scenes, the background of dry and wet garbage is complex and multiple pieces of wet and dry garbage are often mixed together; a model trained with a traditional target detection algorithm has weak generalization ability and cannot realize positioning detection. Deep learning, as a multilayer neural network, has developed well in the fields of target detection and image segmentation and has strong capability in feature extraction and discrimination. From 2014 to the present, increasingly fast and accurate target detection methods such as R-CNN, Fast R-CNN, YOLO, YOLOv3 and SSD have been developed in succession. Unlike the R-CNN series, SSD, YOLO and YOLOv3 belong to the one-stage category; the earlier R-CNN series detect by generating candidate frames, and although their detection accuracy is high, their running speed is slow. The idea pioneered by the YOLO series is to merge the two stages of candidate region selection and detection into one and treat the target detection problem directly as a regression problem. From YOLOv1 to YOLOv2 to YOLOv3, the network structure has been improved continuously over successive generations; while keeping its speed advantage, it draws on the strengths of other excellent target detection algorithms and can meet high requirements on accuracy and speed at the same time. At present, researchers have also explored garbage target detection with deep learning. Mittal et al., in the paper "SpotGarbage: smartphone app to detect garbage using deep learning" (ACM International Joint Conference on Pervasive and Ubiquitous Computing), classify garbage and non-garbage pictures by constructing the GarbNet deep convolutional network. Wei et al. propose a garbage detection method based on a ZF-Net high-speed region convolutional neural network, following the idea of Faster R-CNN, but it still cannot achieve both accuracy and speed. Szegedy et al., in the paper "Going deeper with convolutions", train the GoogLeNet model to detect target objects by applying ideas from the OverFeat model, and finally realize the detection of objects such as cigarette butts from a height of three meters against the background of other types of garbage.
Disclosure of Invention
In order to solve the problems of positioning detection and identification classification of dry and wet garbage in a complex scene, the invention provides a dry and wet garbage identification and classification method based on an improved YOLOv3 network. The technical scheme adopted by the invention is as follows:
a dry and wet garbage recognition and classification method based on an improved YOLOv3 network comprises an original recognition model comprising a YOLOv3 network; the method comprises the following steps:
step S1, collecting pictures of mixed dry and wet garbage in real throwing scenes and establishing a sample data set; the data are first preprocessed for image enhancement and the sample data set is expanded; the dry garbage positions in the pictures are marked;
step S2, performing clustering analysis on the marked real target frame to obtain an initial target frame;
step S3, setting the learning rate parameter of the improved YOLOv3 network and improving the network structure of the original recognition model;
step S4, setting a loss function and an optimization algorithm of the improved YOLOv3 network to complete the training of the network in the recognition model;
and step S5, testing by using the trained network parameters, and evaluating the trained network by at least using the detection accuracy mAP as the evaluation index of the network.
Further, in step S2, optimizing a clustering method in the original recognition model by using a Canopy algorithm in cooperation with a k-means method;
firstly, the Canopy algorithm is used to perform coarse clustering on the marked real target frames of the dry garbage, giving k initial cluster centers for k-means clustering; k-means is then initialized with these cluster centers to perform fine clustering on the real target frames of the dry garbage; the area intersection over union (IOU) is used as the clustering index, and when the IOU is not lower than a set threshold value, the target frames predicted at that point are taken as the initial target frames.
Further, in step S3, the learning rate of the improved YOLOv3 network is gradually decreased as the iteration continues.
Further, in step S3, improving the network structure of the original recognition model includes:
selecting darknet-53 as a basic network for feature extraction, calculating to obtain a convolution feature map, and then performing sliding window operation on the convolution feature map, wherein each grid in the convolution feature map can predict k target frames with different sizes, and the target frames are called anchor frames;
performing multi-scale feature map fusion, so that each grid can predict more anchor points, and predicting the position information, confidence coefficient and C category probabilities of each target, wherein C is greater than 1;
adopting a non-maximum suppression algorithm to remove redundant target frames with low confidence, comprising: sorting the candidate target frames by their probability scores before screening, then computing the area intersection over union (denoted IOU2) between the highest-scoring frame and each of the remaining frames; a target frame whose IOU2 is smaller than a preset threshold is regarded as pointing to a different target and is retained, while a target frame whose IOU2 is greater than or equal to the preset threshold is regarded as pointing to the same target as the highest-scoring frame and is suppressed; this judgment is repeated in a loop until the IOU2 values of all remaining frames are smaller than the preset threshold.
Further, in step S4, the loss function is obtained by combining the positioning loss, the area intersection over union (confidence) error and the classification loss; the loss function is calculated as:

$$
\begin{aligned}
Loss =\; & \lambda_{coord}\sum_{i=0}^{N^{2}}\sum_{j=0}^{B} \mathbb{1}_{ij}^{obj}\Big[(x_i-\hat{x}_i)^2+(y_i-\hat{y}_i)^2\Big] \\
& + \lambda_{coord}\sum_{i=0}^{N^{2}}\sum_{j=0}^{B} \mathbb{1}_{ij}^{obj}\Big[\big(\sqrt{\omega_i}-\sqrt{\hat{\omega}_i}\big)^2+\big(\sqrt{h_i}-\sqrt{\hat{h}_i}\big)^2\Big] \\
& + \sum_{i=0}^{N^{2}}\sum_{j=0}^{B} \mathbb{1}_{ij}^{obj}\,(c_i-\hat{c}_i)^2 + \lambda_{noobj}\sum_{i=0}^{N^{2}}\sum_{j=0}^{B} \mathbb{1}_{ij}^{noobj}\,(c_i-\hat{c}_i)^2 \\
& + \sum_{i=0}^{N^{2}} \mathbb{1}_{i}^{obj}\sum_{c\,\in\,classes}\big(p_i(c)-\hat{p}_i(c)\big)^2
\end{aligned}
$$

wherein: $\mathbb{1}_{i}^{obj}$ indicates that the center of an object appears in the i-th grid, and $\mathbb{1}_{ij}^{obj}$ indicates that the j-th anchor frame in the i-th grid is responsible for predicting that object at the current time; N denotes the side length of the feature map, and B denotes the number of target frames predicted for each grid; $(x_i, y_i), \omega_i, h_i$ denote the center coordinates, width and height of the predicted target frame of the dry garbage in the i-th grid, and $(\hat{x}_i, \hat{y}_i), \hat{\omega}_i, \hat{h}_i$ denote the center coordinates, width and height of the marked real target frame; $c_i$ and $\hat{c}_i$ respectively denote the predicted confidence and the true confidence that dry garbage exists in the i-th grid; $p_i(c)$ and $\hat{p}_i(c)$ respectively denote the predicted probability and the true probability that the dry garbage in the i-th grid belongs to a certain category c, and classes denotes the total number of categories;
in the training process, through iterative computation, when the loss function value is reduced to the minimum, the optimal parameters of the network are computed.
Furthermore, in order to accelerate the convergence of the loss function, the Adam optimization algorithm, which combines the momentum gradient descent method and the RMSProp algorithm, is adopted to optimize the network.
Further, in step S5, a test picture is input, a picture carrying the predicted target frames and classification confidences computed by the network is output, and the detection precision of the improved network on the test set is obtained from the following formulas:

$$P = \frac{TP}{TP + FP}, \qquad R = \frac{TP}{TP + FN}$$

wherein TP is the number of correctly detected positive samples, i.e. samples marked as dry garbage and detected as dry garbage; FP is the number of false positives, i.e. samples not marked as dry garbage but detected as dry garbage; FN is the number of false negatives, i.e. samples marked as dry garbage but not detected as dry garbage;

the index $F_1 = 2PR/(P+R)$, obtained by fusing precision and recall, together with the mAP, is used to judge global performance.
The invention has the following advantages. Traditional detection methods extract features through gradients and feature points, so the extracted features have poor representativeness and are difficult to define accurately. The improved YOLOv3-based dry and wet garbage recognition and classification method solves the problem of feature extraction for complex scenes and objects with complex characteristics, fully exploits the feature extraction advantage of deep learning, and can learn simple features from a large data set and then gradually learn more complex and abstract deep features without relying on hand-crafted feature engineering. The method of the invention shows good detection performance in the recognition and classification of dry and wet garbage. With appropriate optimization and adaptation, the method is also suitable for recognizing other irregular small target objects in complex scenes.
Drawings
FIG. 1 is a flow chart of the present invention.
FIG. 2 is an exemplary diagram of the detection results of the present invention.
Detailed Description
The invention is further illustrated by the following specific figures and examples.
Step S1, collecting pictures of mixed dry and wet garbage in real throwing scenes and establishing a sample data set; the data are first preprocessed for image enhancement and the sample data set is expanded; the dry garbage positions in the pictures are marked;
in this step, image enhancement can be completed using various transformations such as flipping, translation, scaling, rotation, random cropping, color jittering, contrast transformation and noise perturbation, thereby expanding the sample data set; the expanded data set enhances the robustness of the recognition model. The dry garbage positions in the pictures are then labeled with the LabelImg labeling tool, and after labeling the sample data set is divided into a training set and a test set at a ratio of 8:2;
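For illustration, the following is a minimal sketch of this kind of augmentation and 8:2 split, written with OpenCV and NumPy; the directory path, function names and augmentation parameters are assumptions rather than values from the patent, and the labeled target frames would have to be transformed alongside any geometric variant.

```python
import glob
import random
import cv2
import numpy as np

def augment(image):
    """Return a few augmented variants of one image: flip, translation, noise, contrast."""
    h, w = image.shape[:2]
    flipped = cv2.flip(image, 1)                                    # horizontal flip
    shift = np.float32([[1, 0, 0.1 * w], [0, 1, 0.1 * h]])          # translate by 10% of width/height
    translated = cv2.warpAffine(image, shift, (w, h))
    noisy = np.clip(image + np.random.normal(0, 10, image.shape), 0, 255).astype(np.uint8)
    contrast = cv2.convertScaleAbs(image, alpha=1.3, beta=0)        # simple contrast stretch
    return [flipped, translated, noisy, contrast]

paths = glob.glob("dataset/images/*.jpg")    # hypothetical dataset location
random.shuffle(paths)
split = int(0.8 * len(paths))
train_paths, test_paths = paths[:split], paths[split:]              # 8:2 train/test split

# Example: augment every training image (the matching LabelImg boxes must be
# shifted/flipped alongside the geometric variants; that bookkeeping is omitted here).
for p in train_paths:
    image = cv2.imread(p)
    variants = augment(image)
```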
step S2, performing clustering analysis on the marked real target frame to obtain an initial target frame;
the method uses the Canopy algorithm together with the k-means method to optimize the clustering method in the original recognition model (an unmodified YOLOv3 network) and to accelerate the clustering process; there are two main stages;
in the first stage, the Canopy algorithm performs coarse clustering on the marked real target frames of the dry garbage to obtain the k initial cluster centers for k-means clustering (k = 9 in this method); k-means is then initialized with these cluster centers to perform fine clustering on the real target frames of the dry garbage. The area intersection over union (IOU) is used as the clustering index, and when the IOU is not lower than 0.5 the target frames predicted at that point are taken as the initial target frames. The IOU is calculated as:

$$IOU = \frac{area(box_{pred} \cap box_{truth})}{area(box_{pred} \cup box_{truth})}$$

where $box_{pred}$ denotes the predicted target frame and $box_{truth}$ denotes the marked real target frame;
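As an illustration of this coarse-then-fine clustering, the sketch below runs a simplified Canopy pass to pick initial centers and then k-means with 1 − IOU as the distance over the labeled box (width, height) pairs. The single IOU cutoff standing in for Canopy's usual T1/T2 thresholds, the function names and the placeholder data are assumptions; in practice the cutoff would be tuned so that 9 centers emerge.

```python
import numpy as np

def iou_wh(box, centers):
    """IOU between one (w, h) box and an array of (w, h) centers, all anchored at the origin."""
    inter = np.minimum(box[0], centers[:, 0]) * np.minimum(box[1], centers[:, 1])
    union = box[0] * box[1] + centers[:, 0] * centers[:, 1] - inter
    return inter / union

def canopy_centers(boxes, t1=0.5):
    """Coarse pass: greedily keep a box as a new center if its IOU to every existing center is below t1."""
    centers = [boxes[0]]
    for b in boxes[1:]:
        if np.all(iou_wh(b, np.array(centers)) < t1):
            centers.append(b)
    return np.array(centers)

def kmeans_iou(boxes, centers, iters=100):
    """Fine pass: k-means using 1 - IOU as the distance, refining the Canopy centers."""
    for _ in range(iters):
        assign = np.array([np.argmax(iou_wh(b, centers)) for b in boxes])  # nearest = highest IOU
        for k in range(len(centers)):
            members = boxes[assign == k]
            if len(members):
                centers[k] = members.mean(axis=0)
    return centers

# boxes: N x 2 array of labeled box (width, height) pairs, e.g. parsed from the LabelImg XML files;
# random values are used here purely as placeholder data.
boxes = np.random.rand(500, 2)
anchors = kmeans_iou(boxes, canopy_centers(boxes).copy())
```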
using the Canopy algorithm together with the k-means method in this way is regarded as the improved clustering method and is compared with the clustering method in the original recognition model (an unmodified YOLOv3 network); the initial target frames selected by the two clustering methods are applied to the improved YOLOv3 network respectively and the prediction effect of the network is observed, with precision and recall adopted as the evaluation indexes of the prediction effect;
step S3, setting the learning rate parameter of the improved YOLOv3 network and improving the network structure of the original recognition model;
the learning rate parameters of the improved YOLOv3 network are set as follows: the initial learning rate is set to 0.001, and this relatively large learning rate accelerates the search for a good solution; after 30,000 iterations the learning rate is reduced to 0.1 times the initial value, and it continues to decrease gradually as iteration proceeds; preferably, after 50,000 iterations the learning rate is reduced to 0.01 times the initial value, so that the recognition model stabilizes in the later stage of training;
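A minimal sketch of this step learning-rate schedule, assuming it is queried once per training iteration; the function name and the way it would be hooked into a training loop are illustrative.

```python
def learning_rate(iteration, base_lr=1e-3):
    """Step schedule described above: 0.001, then x0.1 after 30k iterations, x0.01 after 50k."""
    if iteration >= 50000:
        return base_lr * 0.01
    if iteration >= 30000:
        return base_lr * 0.1
    return base_lr

# e.g. learning_rate(10000) -> 0.001, learning_rate(35000) -> 0.0001, learning_rate(60000) -> 0.00001
```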
improving the network structure of the original recognition model: darknet-53 is selected as the basic network for feature extraction and a convolution feature map is computed, after which a sliding-window operation is performed on the convolution feature map; each grid in the convolution feature map predicts k target frames of different sizes, called anchor frames. Multi-scale feature map fusion is also carried out in the network structure, so that each grid can predict more anchors, and the position information, confidence and C category probabilities (C > 1) are predicted for each target frame. In the testing stage the network inevitably predicts many target frames and confidences; many of these frames are redundant, one target may be detected by several frames, and if all of them were output the targets could not be located accurately and the visualization of the results would suffer. To solve this, a non-maximum suppression algorithm is adopted: the candidate target frames are sorted by their probability scores before screening, then the area intersection over union (IOU2) between the highest-scoring frame and each of the remaining frames is computed; a target frame whose IOU2 is smaller than the set threshold of 0.5 is regarded as pointing to a different target and is retained, while a target frame whose IOU2 is greater than or equal to 0.5 is regarded as pointing to the same target as the highest-scoring frame and is suppressed; the judgment is repeated in a loop until the IOU2 values of all remaining frames are smaller than the set threshold. Through this process, redundant target frames with low confidence are successfully removed.
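The following is a minimal sketch of the non-maximum suppression procedure just described (the quantity computed against the highest-scoring frame corresponds to IOU2 in the text); boxes are assumed to be NumPy arrays in (x1, y1, x2, y2) form, and the 0.5 threshold matches the value given above.

```python
import numpy as np

def iou(box, boxes):
    """IOU between one box and an array of boxes, all in (x1, y1, x2, y2) form."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(np.asarray(box)) + area(boxes) - inter)

def non_max_suppression(boxes, scores, threshold=0.5):
    """Keep the highest-scoring box, suppress remaining boxes whose IOU with it >= threshold, repeat."""
    order = np.argsort(scores)[::-1]          # sort by probability score, highest first
    keep = []
    while order.size > 0:
        best = order[0]
        keep.append(best)
        if order.size == 1:
            break
        overlaps = iou(boxes[best], boxes[order[1:]])   # "IOU2" against the best box
        order = order[1:][overlaps < threshold]         # keep only frames pointing to other targets
    return keep

boxes = np.array([[10, 10, 60, 60], [12, 12, 62, 62], [100, 100, 150, 150]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(non_max_suppression(boxes, scores))     # -> [0, 2]: the second box overlaps the first and is suppressed
```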
Step S4, setting a loss function and an optimization algorithm of the improved YOLOv3 network to complete the training of the network in the recognition model;
the recognition model obtains the loss function by combining the positioning loss, the area intersection over union (confidence) error and the classification loss, and the network parameters, including the weights and biases, are continuously updated by iterating the optimization algorithm; the loss function is calculated as:

$$
\begin{aligned}
Loss =\; & \lambda_{coord}\sum_{i=0}^{N^{2}}\sum_{j=0}^{B} \mathbb{1}_{ij}^{obj}\Big[(x_i-\hat{x}_i)^2+(y_i-\hat{y}_i)^2\Big] \\
& + \lambda_{coord}\sum_{i=0}^{N^{2}}\sum_{j=0}^{B} \mathbb{1}_{ij}^{obj}\Big[\big(\sqrt{\omega_i}-\sqrt{\hat{\omega}_i}\big)^2+\big(\sqrt{h_i}-\sqrt{\hat{h}_i}\big)^2\Big] \\
& + \sum_{i=0}^{N^{2}}\sum_{j=0}^{B} \mathbb{1}_{ij}^{obj}\,(c_i-\hat{c}_i)^2 + \lambda_{noobj}\sum_{i=0}^{N^{2}}\sum_{j=0}^{B} \mathbb{1}_{ij}^{noobj}\,(c_i-\hat{c}_i)^2 \\
& + \sum_{i=0}^{N^{2}} \mathbb{1}_{i}^{obj}\sum_{c\,\in\,classes}\big(p_i(c)-\hat{p}_i(c)\big)^2
\end{aligned}
$$

wherein: $\mathbb{1}_{i}^{obj}$ indicates that the center of an object appears in the i-th grid, and $\mathbb{1}_{ij}^{obj}$ indicates that the j-th anchor frame in the i-th grid is responsible for predicting that object at the current time; N denotes the side length of the feature map, and B denotes the number of target frames predicted for each grid; $(x_i, y_i), \omega_i, h_i$ denote the center coordinates, width and height of the predicted target frame of the dry garbage in the i-th grid, and $(\hat{x}_i, \hat{y}_i), \hat{\omega}_i, \hat{h}_i$ denote the center coordinates, width and height of the marked real target frame; $c_i$ and $\hat{c}_i$ respectively denote the predicted confidence and the true confidence that dry garbage exists in the i-th grid; $p_i(c)$ and $\hat{p}_i(c)$ respectively denote the predicted probability and the true probability that the dry garbage in the i-th grid belongs to a certain category c, and classes denotes the total number of categories; to prevent the loss function from being dominated by the confidence error of grids that contain no object, the coefficients are set as $\lambda_{coord} = 5$ and $\lambda_{noobj} = 0.5$;
In the training process, through iterative computation, when the loss function value is reduced to the minimum, the optimal parameters of the network are computed;
preferably, in order to accelerate the convergence of the loss function, the Adam optimization algorithm, which combines the momentum gradient descent method and the RMSProp algorithm, is adopted to optimize the network; Adam computes efficiently, handles large-scale data and parameter optimization well, and its hyperparameters have intuitive interpretations.
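As a sketch of how Adam combines the two ideas, the update below keeps a momentum-style first moment and an RMSProp-style second moment with bias correction; the scalar example and the default hyperparameters (0.9, 0.999, 1e-8) are the commonly used ones and are not values taken from the patent.

```python
import numpy as np

def adam_step(param, grad, state, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: momentum (first moment) plus RMSProp-style scaling (second moment)."""
    state["t"] += 1
    state["m"] = beta1 * state["m"] + (1 - beta1) * grad           # momentum term
    state["v"] = beta2 * state["v"] + (1 - beta2) * grad ** 2      # RMSProp term
    m_hat = state["m"] / (1 - beta1 ** state["t"])                 # bias correction
    v_hat = state["v"] / (1 - beta2 ** state["t"])
    return param - lr * m_hat / (np.sqrt(v_hat) + eps)

state = {"t": 0, "m": 0.0, "v": 0.0}
w = 1.0
w = adam_step(w, grad=0.2, state=state)                            # illustrative scalar parameter update
```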
And step S5, testing by using the trained network parameters, and evaluating the trained network by using the detection precision mAP as the evaluation index of the network.
A test picture is input, a picture carrying the predicted target frames and classification confidences computed by the network is output, and the detection precision of the improved network on the test set is obtained from the following formulas:

$$P = \frac{TP}{TP + FP}, \qquad R = \frac{TP}{TP + FN}$$

wherein TP is the number of correctly detected positive samples, i.e. samples marked as dry garbage and detected as dry garbage; FP is the number of false positives, i.e. samples not marked as dry garbage but detected as dry garbage; FN is the number of false negatives, i.e. samples marked as dry garbage but not detected as dry garbage. Since precision and recall each reflect only part of the model's performance, the index $F_1 = 2PR/(P+R)$, obtained by fusing precision and recall, and the mAP are further used to judge global performance;
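A minimal sketch of computing precision, recall and F1 from the TP/FP/FN counts defined above; the example counts are purely illustrative.

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall and their harmonic mean F1 from detection counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# e.g. 90 correct detections, 6 false alarms, 4 missed dry-garbage objects (illustrative numbers)
print(precision_recall_f1(90, 6, 4))
```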
to verify the advantages of the network, the detection performance of the network provided by the invention is compared with an unmodified YOLOv3 network, SSD and Faster R-CNN on the same test set data. The final results show that the model provided by the invention has the highest detection precision, reaching 90% on the test set. This result benefits mainly from combining the Canopy algorithm with k-means clustering, which improves the network precision by 2.5 percentage points and thereby the detection accuracy. In addition, using darknet-53 as the basic network strengthens the feature extraction capability, and using the loss function that combines the three losses enables the recognition network of the invention to adapt to the recognition and classification of dry and wet garbage in complex scenes; in particular, the ability to recognize small target objects is significantly enhanced and missed detections are reduced.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention has been described in detail with reference to examples, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention, which should be covered by the claims of the present invention.
Claims (7)
1. A dry and wet garbage recognition and classification method based on an improved YOLOv3 network comprises an original recognition model comprising a YOLO v3 network; the method is characterized by comprising the following steps:
step S1, collecting pictures of mixed dry and wet garbage in real throwing scenes and establishing a sample data set; the data are first preprocessed for image enhancement and the sample data set is expanded; the dry garbage positions in the pictures are marked;
step S2, performing clustering analysis on the marked real target frame to obtain an initial target frame;
step S3, setting the learning rate parameter of the improved YOLOv3 network and improving the network structure of the original recognition model;
step S4, setting a loss function and an optimization algorithm of the improved YOLOv3 network to complete the training of the network in the recognition model;
and step S5, testing by using the trained network parameters, and evaluating the trained network by at least using the detection accuracy mAP as the evaluation index of the network.
2. The improved YOLOv3 network-based dry and wet garbage recognition and classification method as claimed in claim 1,
in the step S2, optimizing a clustering method in the original recognition model by adopting a Canopy algorithm and matching with a k-means method;
firstly, the Canopy algorithm is used to perform coarse clustering on the marked real target frames of the dry garbage, giving k initial cluster centers for k-means clustering; k-means is then initialized with these cluster centers to perform fine clustering on the real target frames of the dry garbage; the area intersection over union (IOU) is used as the clustering index, and when the IOU is not lower than a set threshold value, the target frames predicted at that point are taken as the initial target frames.
3. The improved YOLOv3 network-based dry and wet garbage recognition and classification method as claimed in claim 2,
in step S3, the learning rate of the improved YOLOv3 network is gradually decreased as iteration continues.
4. The improved YOLOv3 network-based dry and wet garbage recognition and classification method as claimed in claim 2,
in step S3, the method for improving the network structure of the original recognition model includes:
selecting darknet-53 as a basic network for feature extraction, calculating to obtain a convolution feature map, and then performing sliding window operation on the convolution feature map, wherein each grid in the convolution feature map can predict k target frames with different sizes, and the target frames are called anchor frames;
performing multi-scale feature map fusion, so that each grid can predict more anchor points, and predicting the position information, confidence coefficient and C category probabilities of each target, wherein C is greater than 1;
adopting a non-maximum suppression algorithm to remove redundant target frames with low confidence, comprising: sorting the candidate target frames by their probability scores before screening, then computing the area intersection over union (denoted IOU2) between the highest-scoring frame and each of the remaining frames; a target frame whose IOU2 is smaller than a preset threshold is regarded as pointing to a different target and is retained, while a target frame whose IOU2 is greater than or equal to the preset threshold is regarded as pointing to the same target as the highest-scoring frame and is suppressed; this judgment is repeated in a loop until the IOU2 values of all remaining frames are smaller than the preset threshold.
5. The improved YOLOv3 network-based dry and wet garbage recognition and classification method according to claim 4,
in step S4, the loss function is obtained by combining the positioning loss, the area intersection over union (confidence) error and the classification loss; the loss function is calculated as:

$$
\begin{aligned}
Loss =\; & \lambda_{coord}\sum_{i=0}^{N^{2}}\sum_{j=0}^{B} \mathbb{1}_{ij}^{obj}\Big[(x_i-\hat{x}_i)^2+(y_i-\hat{y}_i)^2\Big] \\
& + \lambda_{coord}\sum_{i=0}^{N^{2}}\sum_{j=0}^{B} \mathbb{1}_{ij}^{obj}\Big[\big(\sqrt{\omega_i}-\sqrt{\hat{\omega}_i}\big)^2+\big(\sqrt{h_i}-\sqrt{\hat{h}_i}\big)^2\Big] \\
& + \sum_{i=0}^{N^{2}}\sum_{j=0}^{B} \mathbb{1}_{ij}^{obj}\,(c_i-\hat{c}_i)^2 + \lambda_{noobj}\sum_{i=0}^{N^{2}}\sum_{j=0}^{B} \mathbb{1}_{ij}^{noobj}\,(c_i-\hat{c}_i)^2 \\
& + \sum_{i=0}^{N^{2}} \mathbb{1}_{i}^{obj}\sum_{c\,\in\,classes}\big(p_i(c)-\hat{p}_i(c)\big)^2
\end{aligned}
$$

wherein: $\mathbb{1}_{i}^{obj}$ indicates that the center of an object appears in the i-th grid, and $\mathbb{1}_{ij}^{obj}$ indicates that the j-th anchor frame in the i-th grid is responsible for predicting that object at the current time; N denotes the side length of the feature map, and B denotes the number of target frames predicted for each grid; $(x_i, y_i), \omega_i, h_i$ denote the center coordinates, width and height of the predicted target frame of the dry garbage in the i-th grid, and $(\hat{x}_i, \hat{y}_i), \hat{\omega}_i, \hat{h}_i$ denote the center coordinates, width and height of the marked real target frame; $c_i$ and $\hat{c}_i$ respectively denote the predicted confidence and the true confidence that dry garbage exists in the i-th grid; $p_i(c)$ and $\hat{p}_i(c)$ respectively denote the predicted probability and the true probability that the dry garbage in the i-th grid belongs to a certain category c, and classes denotes the total number of categories;
in the training process, through iterative computation, when the loss function value is reduced to the minimum, the optimal parameters of the network are computed.
6. The improved YOLOv3 network-based dry and wet garbage recognition and classification method according to claim 5,
in order to accelerate the convergence of the loss function, an optimization algorithm based on Adam is adopted, and a momentum gradient descent method and a RMSProp algorithm are combined to optimize the network.
7. The method for recognizing and classifying dry and wet garbage based on improved YOLOv3 network as claimed in claim 5 or 6,
in step S5, a test picture is input, a picture carrying the predicted target frames and classification confidences computed by the network is output, and the detection precision of the improved network on the test set is obtained from the following formulas:

$$P = \frac{TP}{TP + FP}, \qquad R = \frac{TP}{TP + FN}$$

wherein TP is the number of correctly detected positive samples, i.e. samples marked as dry garbage and detected as dry garbage; FP is the number of false positives, i.e. samples not marked as dry garbage but detected as dry garbage; FN is the number of false negatives, i.e. samples marked as dry garbage but not detected as dry garbage;

the index $F_1 = 2PR/(P+R)$, obtained by fusing precision and recall, together with the mAP, is used to judge global performance.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911005605.9A CN110796186A (en) | 2019-10-22 | 2019-10-22 | Dry and wet garbage identification and classification method based on improved YOLOv3 network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911005605.9A CN110796186A (en) | 2019-10-22 | 2019-10-22 | Dry and wet garbage identification and classification method based on improved YOLOv3 network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110796186A true CN110796186A (en) | 2020-02-14 |
Family
ID=69440547
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911005605.9A Pending CN110796186A (en) | 2019-10-22 | 2019-10-22 | Dry and wet garbage identification and classification method based on improved YOLOv3 network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110796186A (en) |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105095266A (en) * | 2014-05-08 | 2015-11-25 | 中国科学院声学研究所 | Method and system for clustering optimization based on Canopy algorithm |
CN107610087A (en) * | 2017-05-15 | 2018-01-19 | 华南理工大学 | A kind of tongue fur automatic division method based on deep learning |
CN108647655A (en) * | 2018-05-16 | 2018-10-12 | 北京工业大学 | Low latitude aerial images power line foreign matter detecting method based on light-duty convolutional neural networks |
CN109447034A (en) * | 2018-11-14 | 2019-03-08 | 北京信息科技大学 | Traffic mark detection method in automatic Pilot based on YOLOv3 network |
CN109815886A (en) * | 2019-01-21 | 2019-05-28 | 南京邮电大学 | A kind of pedestrian and vehicle checking method and system based on improvement YOLOv3 |
CN109859202A (en) * | 2019-02-18 | 2019-06-07 | 哈尔滨工程大学 | A kind of deep learning detection method based on the tracking of USV water surface optical target |
CN109928107A (en) * | 2019-04-08 | 2019-06-25 | 江西理工大学 | A kind of automatic classification system |
Non-Patent Citations (1)
Title |
---|
Joseph Redmon et al.: "YOLOv3: An Incremental Improvement", arXiv * |
Cited By (37)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111368895B (en) * | 2020-02-28 | 2023-04-07 | 上海海事大学 | Garbage bag target detection method and detection system in wet garbage |
CN111368895A (en) * | 2020-02-28 | 2020-07-03 | 上海海事大学 | Garbage bag target detection method and detection system in wet garbage |
CN111950357A (en) * | 2020-06-30 | 2020-11-17 | 北京航天控制仪器研究所 | Marine water surface garbage rapid identification method based on multi-feature YOLOV3 |
CN111797758A (en) * | 2020-07-03 | 2020-10-20 | 成都理工大学 | Identification and positioning technology for plastic bottles |
CN111833322A (en) * | 2020-07-08 | 2020-10-27 | 昆明理工大学 | Garbage multi-target detection method based on improved YOLOv3 |
CN111833322B (en) * | 2020-07-08 | 2022-05-20 | 昆明理工大学 | Garbage multi-target detection method based on improved YOLOv3 |
CN111862038B (en) * | 2020-07-17 | 2024-05-14 | 中国医学科学院阜外医院 | Plaque detection method, plaque detection device, plaque detection equipment and plaque detection medium |
CN111862038A (en) * | 2020-07-17 | 2020-10-30 | 中国医学科学院阜外医院 | Plaque detection method, device, equipment and medium |
CN112170233A (en) * | 2020-09-01 | 2021-01-05 | 燕山大学 | Small part sorting method and system based on deep learning |
CN111914815A (en) * | 2020-09-05 | 2020-11-10 | 广东鲲鹏智能机器设备有限公司 | Machine vision intelligent recognition system and method for garbage target |
CN112329768A (en) * | 2020-10-23 | 2021-02-05 | 上善智城(苏州)信息科技有限公司 | Improved YOLO-based method for identifying fuel-discharging stop sign of gas station |
CN112329605A (en) * | 2020-11-03 | 2021-02-05 | 中再云图技术有限公司 | City appearance random pasting and random drawing behavior identification method, storage device and server |
CN112329605B (en) * | 2020-11-03 | 2022-05-17 | 中再云图技术有限公司 | City appearance random pasting and random drawing behavior identification method, storage device and server |
CN112560576A (en) * | 2020-11-09 | 2021-03-26 | 华南农业大学 | AI map recognition garbage classification and intelligent recovery method |
CN112560576B (en) * | 2020-11-09 | 2022-09-16 | 华南农业大学 | AI map recognition garbage classification and intelligent recovery method |
CN112507929B (en) * | 2020-12-16 | 2022-05-13 | 武汉理工大学 | Vehicle body spot welding slag accurate detection method based on improved YOLOv3 network |
CN112507929A (en) * | 2020-12-16 | 2021-03-16 | 武汉理工大学 | Vehicle body spot welding slag accurate detection method based on improved YOLOv3 network |
CN112633174A (en) * | 2020-12-23 | 2021-04-09 | 电子科技大学 | Improved YOLOv4 high-dome-based fire detection method and storage medium |
CN112633174B (en) * | 2020-12-23 | 2022-08-02 | 电子科技大学 | Improved YOLOv4 high-dome-based fire detection method and storage medium |
CN112560755A (en) * | 2020-12-24 | 2021-03-26 | 中再云图技术有限公司 | Target detection method for identifying urban exposed garbage |
CN113052005A (en) * | 2021-02-08 | 2021-06-29 | 湖南工业大学 | Garbage sorting method and garbage sorting device for home service |
CN113052005B (en) * | 2021-02-08 | 2024-02-02 | 湖南工业大学 | Garbage sorting method and garbage sorting device for household service |
CN113011465A (en) * | 2021-02-25 | 2021-06-22 | 浙江净禾智慧科技有限公司 | Household garbage throwing intelligent supervision method based on grouping multi-stage fusion |
CN113076992A (en) * | 2021-03-31 | 2021-07-06 | 武汉理工大学 | Household garbage detection method and device |
CN113139476A (en) * | 2021-04-27 | 2021-07-20 | 山东英信计算机技术有限公司 | Data center-oriented human behavior attribute real-time detection method and system |
CN113627481A (en) * | 2021-07-09 | 2021-11-09 | 南京邮电大学 | Multi-model combined unmanned aerial vehicle garbage classification method for smart gardens |
CN113537106B (en) * | 2021-07-23 | 2023-06-02 | 仲恺农业工程学院 | Fish ingestion behavior identification method based on YOLOv5 |
CN113537106A (en) * | 2021-07-23 | 2021-10-22 | 仲恺农业工程学院 | Fish feeding behavior identification method based on YOLOv5 |
CN113902044A (en) * | 2021-12-09 | 2022-01-07 | 江苏游隼微电子有限公司 | Image target extraction method based on lightweight YOLOV3 |
CN114241425A (en) * | 2022-02-21 | 2022-03-25 | 南京甄视智能科技有限公司 | Training method and device of garbage detection model, storage medium and equipment |
CN114913438A (en) * | 2022-03-28 | 2022-08-16 | 南京邮电大学 | Yolov5 garden abnormal target identification method based on anchor frame optimal clustering |
CN115147348A (en) * | 2022-05-05 | 2022-10-04 | 合肥工业大学 | Improved YOLOv 3-based tire defect detection method and system |
CN115187870A (en) * | 2022-09-13 | 2022-10-14 | 浙江蓝景科技有限公司杭州分公司 | Marine plastic waste material identification method and system, electronic equipment and storage medium |
CN116681660A (en) * | 2023-05-18 | 2023-09-01 | 中国长江三峡集团有限公司 | Target object defect detection method and device, electronic equipment and storage medium |
CN116681660B (en) * | 2023-05-18 | 2024-04-19 | 中国长江三峡集团有限公司 | Target object defect detection method and device, electronic equipment and storage medium |
CN116777843A (en) * | 2023-05-26 | 2023-09-19 | 湖南大学 | Kitchen waste detection method and system based on dynamic non-maximum suppression |
CN116777843B (en) * | 2023-05-26 | 2024-02-27 | 湖南大学 | Kitchen waste detection method and system based on dynamic non-maximum suppression |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110796186A (en) | Dry and wet garbage identification and classification method based on improved YOLOv3 network | |
CN109190442B (en) | Rapid face detection method based on deep cascade convolution neural network | |
CN112507996B (en) | Face detection method of main sample attention mechanism | |
CN111444821A (en) | Automatic identification method for urban road signs | |
CN111275688A (en) | Small target detection method based on context feature fusion screening of attention mechanism | |
CN112528845B (en) | Physical circuit diagram identification method based on deep learning and application thereof | |
CN112836639A (en) | Pedestrian multi-target tracking video identification method based on improved YOLOv3 model | |
CN111738258A (en) | Pointer instrument reading identification method based on robot inspection | |
CN109671102A (en) | A kind of composite type method for tracking target based on depth characteristic fusion convolutional neural networks | |
CN110543906B (en) | Automatic skin recognition method based on Mask R-CNN model | |
CN103902960A (en) | Real-time face recognition system and method thereof | |
CN109284779A (en) | Object detection method based on deep full convolution network | |
CN112738470B (en) | Method for detecting parking in highway tunnel | |
CN112183435A (en) | Two-stage hand target detection method | |
CN110008899B (en) | Method for extracting and classifying candidate targets of visible light remote sensing image | |
CN111753682A (en) | Hoisting area dynamic monitoring method based on target detection algorithm | |
CN113487610A (en) | Herpes image recognition method and device, computer equipment and storage medium | |
CN117152746B (en) | Method for acquiring cervical cell classification parameters based on YOLOV5 network | |
CN112053354A (en) | Track slab crack detection method | |
CN113221956A (en) | Target identification method and device based on improved multi-scale depth model | |
CN102708367A (en) | Image identification method based on target contour features | |
Gajjar et al. | Intersection over Union based analysis of Image detection/segmentation using CNN model | |
Wang et al. | Automatic identification and location of tunnel lining cracks | |
CN113505120B (en) | Double-stage noise cleaning method for large-scale face data set | |
CN113869412B (en) | Image target detection method combining lightweight attention mechanism and YOLOv network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20200214 |