CN113592002A - Real-time garbage monitoring method and system - Google Patents
- Publication number
- CN113592002A CN113592002A CN202110890183.9A CN202110890183A CN113592002A CN 113592002 A CN113592002 A CN 113592002A CN 202110890183 A CN202110890183 A CN 202110890183A CN 113592002 A CN113592002 A CN 113592002A
- Authority
- CN
- China
- Prior art keywords
- image
- value
- garbage
- feature vector
- real
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The invention discloses a real-time garbage monitoring method and system. The monitoring method comprises the following steps: acquiring an image; inputting the feature information of the image into a backbone network model for training and extracting the feature information of the image, including normalizing the image pixels to obtain image feature vector values with a non-uniform feature distribution; weighting the normalized image feature vector values to obtain new values whose features gather within an interval; inputting the weighted feature vector values into a set activation function for training, so that the new feature vector values gather again within a smaller interval; and entering a neck layer for further feature extraction on the new feature vector values, while performing a Concat operation on the extracted feature vector values, to obtain a model that then enters a head layer for prediction. The method judges more accurately whether a picture contains garbage.
Description
Technical Field
The invention relates to the field of image recognition, in particular to a real-time garbage monitoring method and system.
Background
With social development, national requirements on environmental protection grow ever higher and garbage treatment is regulated ever more strictly, in order to build beautiful villages and cities. The garbage situation in various places therefore needs to be monitored in real time, to ensure that garbage is not dumped improperly. In the traditional method, pictures are taken and then screened manually, which wastes time and labor.
It is therefore necessary to develop an intelligent recognition system and method that monitors garbage in real time and determines whether a monitored object is garbage. Because garbage is so varied, the accuracy of existing monitoring systems is low, approximately 60%, which falls far short of users' requirements.
Chinese patent application No. CN202010127360.3 discloses a method and system for detecting garbage-bag targets in wet garbage, comprising: collecting wet-garbage images containing garbage bags to form a wet-garbage image library; marking the position and category information of the garbage bags in the images and dividing them into a training set, a verification set and a test set; building a deep-learning neural network for training; adjusting the parameters of the network model to optimize it; and inputting the test-set images into the trained network for testing, storing the model locally if the accuracy and miss rates meet the thresholds of the usage scene. That invention uses a deep-learning network to detect and identify garbage bags in wet-garbage images, realizes automatic identification of garbage bags in large quantities of wet garbage, improves identification efficiency, and improves the garbage reuse rate.
Since a neural network is used for training, the choice of network model is very important: different models give different recognition accuracy, and how to improve the accuracy of garbage recognition is the problem to be solved.
Therefore, there is a need to provide a new real-time garbage monitoring method and system to improve the accuracy of garbage identification.
Disclosure of Invention
In view of the above problems, the present invention provides a method and a system for real-time monitoring garbage with high accuracy.
In order to achieve the purpose, the invention adopts the following technical scheme: a real-time garbage monitoring method comprising the following steps. Step 1: acquiring an image. Step 2: inputting the feature information of the image into a neural network that has finished learning, for training; this comprises inputting the feature information of the image into a backbone network model for training and extracting the feature information of the image, by the following steps: (1) normalizing the image pixels to obtain image feature vector values with a non-uniform feature distribution; (2) weighting the normalized image feature vector values to obtain new values whose features gather within an interval; (3) inputting the weighted feature vector values into a set activation function for training, so that the new feature vector values gather again within a smaller interval; and entering a neck layer for further feature extraction on the new feature vector values, while performing a Concat operation on the extracted feature vector values, to obtain a model that then enters a head layer for prediction.
When the feature vector value is input into an activation function for training, when the weighted feature vector value is larger than 0, the weighted image feature vector value is compared with an XOY coordinate 0 point, and the maximum value is taken; and when the weighted feature vector value is smaller than 0, comparing the weighted image feature vector value with the XOY coordinate 0 point, and taking the minimum value.
The neural network learning step includes: acquiring the pixel values of samples, extracting the feature vector values of garbage images, and setting the threshold for judging a garbage image; enhancing the garbage image data; naming the acquired and data-enhanced garbage images; labeling the named garbage images; putting the preprocessed XML files of the garbage images into a GPU for training; performing image data enhancement with the Mosaic method; inputting into the backbone network model for training, entering the neck layer for further feature extraction on the new feature vector values, entering the head layer for prediction, storing the result of each successfully trained image, and retraining on that result.
In order to achieve the purpose, the invention also adopts the following technical scheme: a real-time garbage monitoring system, comprising: an image acquisition system for acquiring an image; and a neural network for training on the images, which gathers the interval range of the image feature vector values through normalization, weights the image feature vector values, and then performs feature extraction and prediction on the gathered image feature vector values.
Compared with the prior art, this high-accuracy real-time garbage monitoring method and system have the following beneficial effects: (1) in application, the obtained pictures need not be numbered, labeled or image-enhanced; their feature information can be input directly into the backbone network model for training, which improves the efficiency of garbage monitoring and recognition; (2) in backbone-network training, the activation function gathers the new feature vector values again within a smaller interval, so the back end judges more accurately whether a picture shows garbage.
Drawings
Fig. 1 is a flow chart of the real-time monitoring method for garbage according to the present invention.
FIG. 2 is a flow chart of the present invention for training images.
FIG. 3 is a diagram of a comparison analysis of the real frame and the predicted frame of the image according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are clearly and completely described below through the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, it is a flow chart of the real-time monitoring method for garbage.
The invention provides a real-time garbage monitoring method, which comprises the following steps:
step 1: an image is acquired.
The front end obtains N images of the corresponding area by calling the cameras of that area, and the N images are transmitted to the cloud server encoded in Base64 format.
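The Base64 transfer step can be sketched as below, using only the Python standard library; the function names are illustrative, not from the source, and the surrounding camera/transport code is assumed.

```python
import base64

def encode_image_for_upload(image_bytes: bytes) -> str:
    # Base64-encode raw image bytes so they can travel inside a JSON payload.
    return base64.b64encode(image_bytes).decode("ascii")

def decode_uploaded_image(payload: str) -> bytes:
    # Cloud-server side: recover the original image bytes.
    return base64.b64decode(payload)

frame = b"\x89PNG\r\n\x1a\n...stand-in image bytes..."
payload = encode_image_for_upload(frame)
assert decode_uploaded_image(payload) == frame
```

Note that Base64 is an encoding, not encryption; any confidentiality would have to come from the transport layer (e.g. TLS).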
Step 2: inputting the characteristic information of the image into the neural network which finishes learning for training, and outputting the image predicted to be garbage.
In the invention a YOLO v5 neural network model is adopted; before it is applied to garbage monitoring, the network model must first be trained in order to establish the feature vector values of garbage.
201. Acquire the pixel values of the samples, extract the feature vector values of the garbage images, and set the threshold for judging a garbage image.
The number of samples is calculated based on the expected 90% confidence in identifying garbage images: N = Z² · P · (1 − P) / E², where N is the sample size, Z is the z-statistic for the chosen confidence level, P is the estimated proportion, and E is the margin of error.
When the confidence is 95%, Z = 1.96; when the confidence is 90%, Z = 1.64. The values used here are Z = 1.64, E = 3% and P = 0.5, giving N = 747, so the number of first-stage training samples is determined as 747 with a margin of error of ±3%. Accordingly, 747 garbage images were collected in advance to train the initial model.
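The sample-size arithmetic can be checked with a short sketch (the function name is illustrative, not from the source):

```python
def sample_size(z: float, p: float, e: float) -> int:
    # N = Z^2 * P * (1 - P) / E^2, rounded to the nearest whole sample.
    return round(z * z * p * (1.0 - p) / (e * e))

# Values quoted in the text: 90% confidence (Z = 1.64), P = 0.5, E = 3%.
n = sample_size(1.64, 0.5, 0.03)
print(n)  # -> 747
```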
Step 202: and enhancing the garbage image data.
First, mirror processing is carried out on the garbage image: the pixels are swapped left and right about the image's vertical centre line to obtain a new garbage image.
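Mirroring about the vertical centre line amounts to reversing each pixel row; a minimal sketch on a nested-list image:

```python
def mirror_horizontal(image):
    # image: list of pixel rows; reversing each row swaps pixels
    # left-right about the vertical centre line.
    return [row[::-1] for row in image]

img = [[1, 2, 3],
       [4, 5, 6]]
assert mirror_horizontal(img) == [[3, 2, 1], [6, 5, 4]]
```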
Second, mixup data enhancement blends two images into a new one according to the formula: x̃ = λ·x_i + (1 − λ)·x_j, ỹ = λ·y_i + (1 − λ)·y_j,
where x_i and x_j are the pixel values of the two images; y_i and y_j are their one-hot encoded labels, which expand the discrete features into Euclidean space; and λ is drawn from a Beta distribution.
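The mixup blend can be sketched as below, drawing λ from a Beta distribution as the formula requires (α = 0.2 is a common choice, an assumption not specified in the source):

```python
import random

def mixup(x1, y1, x2, y2, alpha=0.2):
    # lam ~ Beta(alpha, alpha); blend pixel values and one-hot labels alike.
    lam = random.betavariate(alpha, alpha)
    x = [lam * a + (1.0 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1.0 - lam) * b for a, b in zip(y1, y2)]
    return x, y, lam

x, y, lam = mixup([255.0, 0.0], [1.0, 0.0], [0.0, 255.0], [0.0, 1.0])
```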
Then the grey-level distribution of the image is adjusted by histogram equalization so that the 0-255 grey levels are used more evenly, improving the contrast of the image and thereby its subjective visual quality. Histogram equalization is well suited to enhancing detail in low-contrast images.
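Histogram equalization on a greyscale image follows the standard cumulative-distribution remapping; a sketch for a flat list of 0-255 intensities:

```python
def equalize_histogram(gray):
    # gray: flat list of 0-255 intensities. Build the cumulative
    # distribution and remap so the grey levels spread over 0-255.
    hist = [0] * 256
    for v in gray:
        hist[v] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(gray)
    if n == cdf_min:          # constant image: nothing to equalize
        return list(gray)
    return [round((cdf[v] - cdf_min) * 255 / (n - cdf_min)) for v in gray]

# A low-contrast image is stretched to the full range:
print(equalize_histogram([100, 100, 100, 200]))  # [0, 0, 0, 255]
```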
Step 203: and naming the acquired and data-enhanced junk images according to the increasing sequence of numbers from 1 to n, so that the later-stage image labeling and calibration are facilitated.
Because labeling is done by different people, the path values in the XML files generated by labeling are inconsistent, so all labeled files can be assigned a uniform path based on the final storage location of the files.
Step 204: and labeling the named garbage image.
In this embodiment, LabelImg is used for labeling: rectangular boxes are drawn on the data-enhanced images and the labeled results are written into XML files, so that a deep-learning network can later learn the features of the images.
Step 205: and putting the preprocessed XML file of the garbage image into a GPU for training.
Specifically, setting the training batch_size to a suitable value (e.g., 32) speeds up convergence and reduces oscillation; setting epochs to a suitable value (e.g., 201) prevents overfitting when the epochs value is too high, and prevents training from ending before the model has converged when it is too low.
Step 206: and image data enhancement is carried out by adopting a Mosaic neural network model.
Specifically, several images are spliced into one image through flipping, scaling, colour-gamut changes within a region, and so on, which greatly improves processing efficiency.
Next, adaptive image scaling fixes the size of the images to be trained. Assuming a target size of 608 × 608 for an image of size x × y, the image is scaled by the ratio 608 / max(x, y); if the scaled height (or width) is then less than 608, black borders are added above and below (or left and right) to bring the image to the final 608 × 608, the width padding being calculated in the same way as the height padding.
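The scaling-and-padding arithmetic can be sketched as follows (dimension bookkeeping only; the actual pixel resampling is left out, and the function name is illustrative):

```python
def letterbox_dims(w, h, target=608):
    # Scale so the longer side reaches `target`, keeping aspect ratio,
    # then pad the shorter side with black borders up to `target`.
    scale = target / max(w, h)
    new_w, new_h = round(w * scale), round(h * scale)
    pad_x = target - new_w   # total left+right padding
    pad_y = target - new_h   # total top+bottom padding
    return new_w, new_h, pad_x, pad_y

print(letterbox_dims(1920, 1080))  # (608, 342, 0, 266)
```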
The processed image is then sliced by the Focus module, and 32 convolution kernels are applied to the resulting feature map, finally yielding a 304 × 304 × 32 feature map. These operations re-expand the original data, preparing it for step 207.
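The Focus slicing step can be sketched with NumPy: every second pixel is taken at four phase offsets and the slices are stacked along the channel axis, halving the spatial size and quadrupling the channels (the subsequent 32-kernel convolution is omitted):

```python
import numpy as np

def focus_slice(x):
    # x: (H, W, C) image. Returns (H/2, W/2, 4C) by interleaved slicing.
    return np.concatenate(
        [x[0::2, 0::2], x[1::2, 0::2], x[0::2, 1::2], x[1::2, 1::2]],
        axis=-1,
    )

x = np.zeros((608, 608, 3))
print(focus_slice(x).shape)  # (304, 304, 12)
```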
Step 207: and training the obtained feature graph in a backbone network model.
This network aims to extract the feature information of the image. A BN (batch normalization) algorithm is used to speed up the search for the optimal solution by gradient descent and to improve precision as far as possible, and the training data set can be shuffled to prevent bias.
Fig. 2 is a flowchart of a process of training images according to the present invention.
The specific steps are as follows: (1) normalize the expanded image pixels to obtain image feature vector values with a non-uniform feature distribution, possibly lying in any quadrant of the planar coordinate system XOY, so that the boundary for garbage identification is unclear.
Specifically, the normalization uses the Z-score method: X' = (X − μ) / σ, where μ is the mean of the feature vectors of the input image pixels, X is the feature vector of the input image pixels, σ is their standard deviation, and X' is the normalized feature vector value of the input image. The main aim is to pull the distribution of the input values back to a standard normal distribution with mean 0 and variance 1, so that most feature values gather in a narrow interval and convergence is easier during back-propagation in later training.
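The Z-score computation can be sketched as:

```python
import math

def z_score_normalize(values):
    # X' = (X - mean) / std: the result has mean 0 and unit variance.
    mean = sum(values) / len(values)
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))
    return [(v - mean) / std for v in values]

normed = z_score_normalize([10.0, 20.0, 30.0])
# normed is symmetric about 0 with unit variance
```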
(2) Weight the normalized image feature vector values to obtain new values whose features gather within an interval: Y = W·X' + b, where Y is the feature vector obtained for the weighted image, W is the weight value, X' is the normalized picture feature vector value, and b is an additional offset value.
Initially, W and b are assigned random values; they are updated during later back-propagation, so the initial values have little influence on later calculation.
(3) Input the weighted feature vector into the set activation function for training, so that the new feature vector values gather again within a smaller interval; this facilitates later back-propagation in determining the weight value W and offset b set for the image feature vector.
The invention trains the weighted feature vector with an ELU-type activation function of the form: f(x) = max(0, x) + negative_slope · min(0, x).
Namely: when the weighted feature vector value is larger than 0, comparing the weighted image feature vector value with an XOY coordinate 0 point, and taking the maximum value; and when the weighted feature vector value is smaller than 0, comparing the weighted image feature vector value with the XOY coordinate 0 point, and taking the minimum value.
With this function, the feature vector values are gathered more tightly, which gives the best technical effect.
Back-propagation is then performed according to the selected feature vector values, the original parameters are updated, and through iteration the error of the garbage/non-garbage judgement is controlled within the corresponding threshold.
Here x is the feature vector value of the weighted image; negative_slope is a set weight value; max takes the larger of the weighted feature vector value and 0; min takes the smaller of the weighted feature vector value and 0.
That is, the output after the activation is f(x) = max(0, x) + negative_slope · min(0, x); the function therefore enables more accurate garbage identification.
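The activation as written (the max/min form with negative_slope, which matches a leaky-ReLU shape despite the "ELU" name in the text) can be sketched as:

```python
def activation(x, negative_slope=0.01):
    # f(x) = max(0, x) + negative_slope * min(0, x)
    # Positive inputs pass through; negative inputs are scaled down.
    return max(0.0, x) + negative_slope * min(0.0, x)

assert activation(2.0) == 2.0
assert activation(-2.0) == -0.02
```

The default slope of 0.01 is an assumption; the source only says the slope is a set weight value.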
This training mode not only inherits the garbage-recognition precision of prior-art models, but also saturates the image feature vector values on one side of the coordinate system, letting the backbone network learn the image features better through the function. The feature-vector curve obtained after training is smooth: near z = 0 the gradient decreases gently and there is no large jumping back and forth around z = 0, so convergence is better and the accuracy of garbage recognition is improved.
Forward propagation yields a prediction. The variance (squared error) between the trained output and the expected result (e.g., 1 for garbage, 0 otherwise) is computed, and back-propagation uses this error to determine how much each training weight contributes to it and to update the corresponding weights. Iteration continues in this way until the output is close to or the same as the expected value.
The backbone network may use ResNet units to counter the decay of the gradient as the number of layers increases.
Step 208: and after the backsbone layer is processed, the result enters a sock layer, the data of the backsbone layer is further subjected to feature extraction through an SPP (spin-point) module, an FPN (field programmable gate array) and a PAN (PAN) network structure, and meanwhile, the Concat function operation is carried out on feature graphs of different scales, so that a model is obtained and then the model enters a head layer for prediction.
Step 209: and after entering the head layer, the head layer adopts a CIOU _ loss function as a loss function of the image frame. In deep learning, the loss function plays a crucial role. And by minimizing the loss function, the model reaches a convergence state, and the error of the predicted value of the model is reduced. Therefore, different loss functions, the impact on the model is significant. The effect is as follows: when the prediction frame and the real frame are overlapped, the center alignment relation is judged, and meanwhile, the problem of different widths and heights between the prediction frame and the real frame is optimized. The CIOU _ loss function calculation method comprises the following steps:
CIOU_loss = 1 − IoU + ρ²(B, Bgt)/c² + α·v, where IoU = |B ∩ Bgt| / |B ∪ Bgt|; B is the spatial position of the real box of the image and Bgt is the spatial position of the box predicted by image detection; |B ∩ Bgt| is the intersection between the real box and the predicted box, and |B ∪ Bgt| is all the space occupied by the real box and the predicted box together; ρ²(B, Bgt) represents the squared distance between the centre points of the predicted box and the real box; c represents the length of the diagonal of the circumscribed rectangle enclosing the two; v = (4/π²)·(arctan(wgt/hgt) − arctan(w/h))² measures aspect-ratio consistency, where w is the width and h is the height of a box; and α = v / ((1 − IoU) + v).
Fig. 3 is a comparative analysis diagram of the real frame and the predicted frame of the image. The predicted frame of the image may deviate from the actual real frame.
If the positional difference between the two boxes is too great, the loss value approaches 2; conversely, when the two boxes are infinitely close, or even one contains the other, the value is computed from the aspect ratios of the two boxes, and when the two boxes overlap in perfect agreement the loss is minimal, yielding maximum accuracy and the most desirable result.
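The loss above can be sketched for axis-aligned boxes given as (x1, y1, x2, y2) corners; this is the standard CIoU formulation, since the exact box parameterization is not given in the source:

```python
import math

def ciou_loss(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection over union.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union
    # Squared centre distance over squared enclosing-box diagonal.
    rho2 = ((ax1 + ax2 - bx1 - bx2) / 2) ** 2 + ((ay1 + ay2 - by1 - by2) / 2) ** 2
    c2 = (max(ax2, bx2) - min(ax1, bx1)) ** 2 + (max(ay2, by2) - min(ay1, by1)) ** 2
    # Aspect-ratio consistency term.
    v = (4 / math.pi ** 2) * (
        math.atan((bx2 - bx1) / (by2 - by1)) - math.atan((ax2 - ax1) / (ay2 - ay1))
    ) ** 2
    alpha = v / ((1.0 - iou) + v) if iou < 1.0 else 0.0
    return 1.0 - iou + rho2 / c2 + alpha * v

print(ciou_loss((0, 0, 10, 10), (0, 0, 10, 10)))  # 0.0 for identical boxes
```

Distant, non-overlapping boxes drive the loss above 1, consistent with the "close to 2" behaviour described in the text.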
And step 3: and outputting the recognition result.
If recognition succeeds, the result is returned to the front end as a JSON value for use there; the result of the successfully trained image is stored, the stored content including the image storage address, the position of the image's labeled box and the label name, and is used to automatically generate a new XML file for model retraining.
The retrained model updates and replaces the previous-stage model, so that the confidence of model recognition is strengthened iteratively.
For a successfully identified image, after receiving the success instruction the front end performs systematic processing: it not only sorts the position information and judges the size of the garbage concerned, but also passes the information to the relevant unit so that the garbage in the region is cleaned up, speeding up cleaning in the region.
In addition, images whose training failed can be identified and judged manually; if garbage appears but is not marked, it must be labeled by hand, after which step 2 is repeated. A failed image means an image that contains garbage but carries no label, an image that is labeled incorrectly, or an image recognized with low confidence.
The invention also discloses a real-time garbage monitoring system, which comprises:
an image acquisition system for acquiring an image;
and the neural network is used for training the images, gathering the interval range of the image characteristic vector values through normalization processing, weighting the image characteristic vector values, and then extracting the characteristics of the gathered image characteristic vector values and predicting.
The neural network includes:
the backbone layer is used for extracting the characteristic information of the image;
the neck layer is used for further extracting the features of the new feature vector values and gathering the extracted feature vector values in a smaller interval range;
and the head layer is used for predicting the garbage image by adopting the characteristic vector value.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that various changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the appended claims.
Claims (7)
1. A real-time garbage monitoring method is characterized by comprising the following steps:
acquiring an image;
inputting the characteristic information of the image into a neural network which finishes learning for training, and the method comprises the following steps:
inputting the feature information of the image into a backbone network model for training, and extracting the feature information of the image, wherein the steps comprise:
(1) carrying out normalization processing on image pixels; obtaining image characteristic vector values with non-uniform characteristic distribution;
(2) weighting the normalized image feature vector value to obtain a new image feature vector value gathering features in an interval range;
(3) inputting the weighted feature vector value into a set activation function for training, so that new feature vector values are gathered in a smaller interval range again;
and entering a neck layer to further perform feature extraction on the new feature vector value, and simultaneously performing a Concat operation on the extracted feature vector value to obtain a model so as to enter a head layer for prediction.
2. The real-time garbage monitoring method according to claim 1, wherein when the eigenvector value is input into the activation function for training, when the weighted eigenvector value is larger than 0, the weighted image eigenvector value is compared with the XOY coordinate 0 point, and the maximum value is taken; and when the weighted feature vector value is smaller than 0, comparing the weighted image feature vector value with the XOY coordinate 0 point, and taking the minimum value.
3. The real-time garbage monitoring method of claim 2, wherein said activation function is: f(x) = max(0, x) + negative_slope · min(0, x), wherein x is the weighted image feature vector value and negative_slope is a set weight value; max takes the larger of the weighted feature vector value and 0; min takes the smaller of the weighted feature vector value and 0.
4. The real-time garbage monitoring method according to claim 1, wherein said neural network learning step comprises:
acquiring a pixel value of a sample, extracting a characteristic vector value of a garbage image, and setting a threshold value for judging the garbage image;
enhancing the garbage image data;
naming the acquired and data-enhanced garbage images;
labeling the named garbage image;
putting the preprocessed XML file of the garbage image into a GPU for training;
performing image data enhancement with the Mosaic method;
inputting a backbone network model for training, entering a neck layer to further perform feature extraction on a new feature vector value, entering a head layer to predict, storing the result of the successfully trained image, and re-training the result of the image.
5. The real-time garbage monitoring method of claim 2, wherein after entering the head layer, the head layer adopts the CIOU_loss function as the loss function of the image bounding box, calculated as follows:
CIOU_loss = 1 − IoU + ρ²(B, Bgt)/c² + α·v, wherein IoU = |B ∩ Bgt| / |B ∪ Bgt|; B is the spatial position of the real box of the image; Bgt is the spatial position of the box predicted by image detection; |B ∩ Bgt| is the intersection between the real box and the predicted box; |B ∪ Bgt| is all the space occupied by the real box and the predicted box together; ρ²(B, Bgt) represents the squared distance between the centre points of the predicted box and the real box; c represents the length of the diagonal of the circumscribed rectangle enclosing the two; v = (4/π²)·(arctan(wgt/hgt) − arctan(w/h))², where w is the width and h is the height of a box; and α = v / ((1 − IoU) + v).
6. A real-time garbage monitoring system is characterized by comprising:
an image acquisition system for acquiring an image;
and the neural network is used for training the images, gathering the interval range of the image characteristic vector values through normalization processing, weighting the image characteristic vector values, and then extracting the characteristics of the gathered image characteristic vector values and predicting.
7. The real-time spam monitoring system of claim 6, wherein said neural network comprises:
a backbone layer for extracting the feature information of the image;
a neck layer for further extracting features from the new feature vector values and gathering the extracted feature vector values into a smaller interval range;
and a head layer for predicting the garbage image using the feature vector values.
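The normalization and weighting performed by the claimed neural network could be sketched as below. The claim does not specify a normalization scheme, so min-max scaling and the `normalize_and_weight` helper are assumptions used purely for illustration.

```python
import numpy as np

def normalize_and_weight(features, weights):
    """Gather feature values into the [0, 1] interval by min-max
    normalization (per column), then apply per-feature weights."""
    f = np.asarray(features, dtype=float)
    fmin, fmax = f.min(axis=0), f.max(axis=0)
    # avoid division by zero for constant columns
    span = np.where(fmax > fmin, fmax - fmin, 1.0)
    normalized = (f - fmin) / span
    return normalized * np.asarray(weights, dtype=float)
```

After this step every feature lies in a small, comparable interval, which is one plausible reading of "gathering the interval range of the image feature vector values" before the neck layer extracts features from them.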
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110890183.9A CN113592002A (en) | 2021-08-04 | 2021-08-04 | Real-time garbage monitoring method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110890183.9A CN113592002A (en) | 2021-08-04 | 2021-08-04 | Real-time garbage monitoring method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113592002A true CN113592002A (en) | 2021-11-02 |
Family
ID=78254845
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110890183.9A Pending CN113592002A (en) | 2021-08-04 | 2021-08-04 | Real-time garbage monitoring method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113592002A (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109559302A (en) * | 2018-11-23 | 2019-04-02 | 北京市新技术应用研究所 | Pipe video defect inspection method based on convolutional neural networks |
CN110598709A (en) * | 2019-08-12 | 2019-12-20 | 北京智芯原动科技有限公司 | Convolutional neural network training method and license plate recognition method and device |
CN110727665A (en) * | 2019-09-23 | 2020-01-24 | 江河瑞通(北京)技术有限公司 | Internet of things equipment reported data quality analysis method and system |
CN111368895A (en) * | 2020-02-28 | 2020-07-03 | 上海海事大学 | Garbage bag target detection method and detection system in wet garbage |
CN111814750A (en) * | 2020-08-14 | 2020-10-23 | 深延科技(北京)有限公司 | Intelligent garbage classification method and system based on deep learning target detection and image recognition |
CN112686172A (en) * | 2020-12-31 | 2021-04-20 | 上海微波技术研究所(中国电子科技集团公司第五十研究所) | Method and device for detecting foreign matters on airport runway and storage medium |
CN112926405A (en) * | 2021-02-01 | 2021-06-08 | 西安建筑科技大学 | Method, system, equipment and storage medium for detecting wearing of safety helmet |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Jia et al. | Detection and segmentation of overlapped fruits based on optimized mask R-CNN application in apple harvesting robot | |
CN111444821B (en) | Automatic identification method for urban road signs | |
CN106960195B (en) | Crowd counting method and device based on deep learning | |
CN111178197B (en) | Mass R-CNN and Soft-NMS fusion based group-fed adherent pig example segmentation method | |
CN109934115B (en) | Face recognition model construction method, face recognition method and electronic equipment | |
CN105426870B (en) | A kind of face key independent positioning method and device | |
CN109409365A (en) | It is a kind of that method is identified and positioned to fruit-picking based on depth targets detection | |
CN110991435A (en) | Express waybill key information positioning method and device based on deep learning | |
CN111178120B (en) | Pest image detection method based on crop identification cascading technology | |
CN110765865B (en) | Underwater target detection method based on improved YOLO algorithm | |
CN109460754A (en) | A kind of water surface foreign matter detecting method, device, equipment and storage medium | |
CN114663346A (en) | Strip steel surface defect detection method based on improved YOLOv5 network | |
CN109635634A (en) | A kind of pedestrian based on stochastic linear interpolation identifies data enhancement methods again | |
CN112990392A (en) | New material floor defect target detection system based on improved YOLOv5 algorithm | |
CN110827312A (en) | Learning method based on cooperative visual attention neural network | |
CN111401374A (en) | Model training method based on multiple tasks, character recognition method and device | |
CN112926652B (en) | Fish fine granularity image recognition method based on deep learning | |
CN114693661A (en) | Rapid sorting method based on deep learning | |
CN108710893A (en) | A kind of digital image cameras source model sorting technique of feature based fusion | |
CN109242826B (en) | Mobile equipment end stick-shaped object root counting method and system based on target detection | |
CN111680705A (en) | MB-SSD method and MB-SSD feature extraction network suitable for target detection | |
CN111382766A (en) | Equipment fault detection method based on fast R-CNN | |
CN115082922A (en) | Water meter digital picture processing method and system based on deep learning | |
CN116385374A (en) | Cell counting method based on convolutional neural network | |
CN114663769A (en) | Fruit identification method based on YOLO v5 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
WD01 | Invention patent application deemed withdrawn after publication ||
Application publication date: 2021-11-02