CN115272778A - Recyclable garbage classification method and system based on RPA and computer vision - Google Patents
- Publication number
- CN115272778A CN115272778A CN202211186439.9A CN202211186439A CN115272778A CN 115272778 A CN115272778 A CN 115272778A CN 202211186439 A CN202211186439 A CN 202211186439A CN 115272778 A CN115272778 A CN 115272778A
- Authority
- CN
- China
- Prior art keywords
- pixel
- pixel point
- distance
- point
- seed
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 239000010813 municipal solid waste Substances 0.000 title claims abstract description 129
- 238000000034 method Methods 0.000 title claims abstract description 55
- 230000011218 segmentation Effects 0.000 claims abstract description 24
- 239000011159 matrix material Substances 0.000 claims description 25
- 238000012545 processing Methods 0.000 claims description 7
- 230000005484 gravity Effects 0.000 claims description 6
- 239000010819 recyclable waste Substances 0.000 claims description 4
- 238000007405 data analysis Methods 0.000 claims description 3
- 238000013528 artificial neural network Methods 0.000 claims description 2
- 239000000126 substance Substances 0.000 claims description 2
- 238000005259 measurement Methods 0.000 abstract description 3
- 238000004364 calculation method Methods 0.000 description 7
- 230000000694 effects Effects 0.000 description 7
- 239000000463 material Substances 0.000 description 3
- 238000010606 normalization Methods 0.000 description 3
- 238000013473 artificial intelligence Methods 0.000 description 2
- 238000012549 training Methods 0.000 description 2
- 238000004458 analytical method Methods 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 239000002131 composite material Substances 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 230000018109 developmental process Effects 0.000 description 1
- 230000007613 environmental effect Effects 0.000 description 1
- 230000003203 everyday effect Effects 0.000 description 1
- 239000004744 fabric Substances 0.000 description 1
- 239000011521 glass Substances 0.000 description 1
- 238000003709 image segmentation Methods 0.000 description 1
- 230000001788 irregular Effects 0.000 description 1
- 238000003064 k means clustering Methods 0.000 description 1
- 238000002372 labelling Methods 0.000 description 1
- 239000002184 metal Substances 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 239000010893 paper waste Substances 0.000 description 1
- 229920003023 plastic Polymers 0.000 description 1
- 239000004033 plastic Substances 0.000 description 1
- 238000012805 post-processing Methods 0.000 description 1
- 238000004801 process automation Methods 0.000 description 1
- 238000006467 substitution reaction Methods 0.000 description 1
- 230000000007 visual effect Effects 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/54—Extraction of image or video features relating to texture
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/35—Categorising the entire scene, e.g. birthday party or wedding scene
- G06V20/38—Outdoor scenes
- G06V20/39—Urban scenes
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Databases & Information Systems (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to the technical field of data identification, and in particular to a recyclable garbage classification method and system based on RPA and computer vision. The method comprises the following steps: acquiring a recyclable garbage image through RPA-based identification, and calculating the edge probability of each pixel point according to the characteristics of the pixel points; calculating the seed excellence of each pixel point in combination with its edge probability, and selecting initial seed points; calculating the spatial distance and the color distance between a pixel point and an initial seed point, and setting distance-metric weights in combination with the edge probability to obtain the comprehensive distance between the pixel point and the initial seed point; carrying out superpixel segmentation on the image according to the comprehensive distance to obtain a superpixel image; and classifying and identifying the superpixel image, and sorting the recyclable garbage according to the identification result. The method segments recyclable garbage images with high precision, so the accuracy of classification and identification is high and the sorting efficiency of recyclable garbage is improved.
Description
Technical Field
The invention relates to the technical field of data identification, in particular to a recyclable garbage classification method and system based on RPA and computer vision.
Background
Robotic Process Automation (RPA) uses specific "robot software" to simulate human operations on a computer and to execute process tasks automatically according to preset rules. Artificial Intelligence (AI) is a technical science that studies and develops theories, methods, techniques and application systems for simulating, extending and expanding human intelligence. Applications based on artificial intelligence and computer vision are now increasingly common.
With the spread of environmental protection concepts, more and more cities are promoting garbage classification. Recyclable garbage still needs further manual sorting at the garbage yard: recyclable materials are separated by type and sent to different processing centers, where they are processed and then reused. However, because recyclable materials are diverse and a huge quantity of recyclable garbage is generated every day, manual sorting is inefficient. In the prior art, recyclable garbage is sorted after the different types of recyclable objects in a recyclable garbage image are segmented and identified. Common image-segmentation methods, however, segment recyclable garbage images containing different types of recyclable objects poorly, which degrades the accuracy of the subsequent recognition results and leads to problems such as low sorting efficiency and mis-sorting.
Disclosure of Invention
In order to solve the above technical problems, the present invention aims to provide a method for classifying recyclable garbage based on RPA and computer vision, which adopts the following technical scheme:
identifying based on the RPA to obtain a recoverable garbage color image and a recoverable garbage gray image; calculating a gray level co-occurrence matrix corresponding to each pixel point on the recyclable garbage gray level image, and calculating the edge probability of the pixel points according to the probability of gray level combination in the gray level co-occurrence matrix;
aiming at any pixel point on the recyclable garbage gray image, acquiring, on the straight line through the pixel point in each set direction, the pixel points corresponding to the local maximum values of the edge probability on the two sides of the pixel point, and marking them as a first mark pixel point and a second mark pixel point; calculating the seed excellence of the pixel point according to the distances from the first mark pixel point and the second mark pixel point to the pixel point and its edge probability;
calculating seed excellence of all pixel points on the recyclable garbage gray image, acquiring pixel point positions corresponding to local maximum values of the seed excellence, and taking the pixel points on the recyclable garbage gray image corresponding to the pixel point positions on the recyclable garbage color image as initial seed points;
respectively calculating the space distance and the color distance between the pixel point and the initial seed point according to the coordinates and the color characteristics of the pixel point and the initial seed point on the recyclable garbage color image; carrying out weighted summation on the space distance and the color distance to obtain the comprehensive distance from the pixel point to the initial seed point; dividing super pixels according to the comprehensive distance to obtain a super pixel image;
and classifying the types of the recyclable garbage according to the super pixel images, and finishing the sorting work of the recyclable garbage according to the classification result.
Preferably, the method for obtaining the edge probability of the pixel point specifically comprises the following steps:
taking each pixel point on the recyclable garbage gray image as a center, constructing a window with the size of n multiplied by n, and calculating a gray level co-occurrence matrix of each window in each set direction to obtain a gray level co-occurrence matrix of the pixel point in the center of the window;
calculating the edge probability of the central pixel point of the window under the gray level co-occurrence matrix in the set direction, expressed by the formula:

$$P_{\theta}=\mathrm{norm}\!\left((n-1)\sum_{i=1}^{k}\sum_{j=1}^{k}\lvert i-j\rvert\,G_{\theta}(i,j)\right)$$

wherein $P_{\theta}$ indicates the edge probability of the pixel point when the set direction is $\theta$; $G_{\theta}(i,j)$, the value in row $i$ and column $j$ of the gray level co-occurrence matrix, is the probability that the combination of the $i$-th and the $j$-th gray level occurs when the set direction is $\theta$; $\lvert i-j\rvert$ represents the difference of the gray levels; $k$ represents the number of kinds of gray levels; and $n-1$ represents the normalization coefficient;
and calculating the edge probability of the central pixel point of the window under the gray level co-occurrence matrixes in all the set directions, and acquiring the maximum value of the edge probability as the edge probability of the pixel point.
Preferably, the method for obtaining the seed excellence specifically comprises the following steps:
$$B_{u}=\left(1-P_{u}\right)\cdot\exp\!\left(-\sum_{\theta\in S}\frac{\lvert d_{u,\theta}^{1}-d_{u,\theta}^{2}\rvert}{\left(d_{u,\theta}^{1}+d_{u,\theta}^{2}\right)/2}\right)$$

wherein $B_{u}$ represents the seed excellence of the $u$-th pixel point; $P_{u}$ represents the edge probability of the $u$-th pixel point; $d_{u,\theta}^{1}$ represents the distance from the first mark pixel point to the $u$-th pixel point when the set direction is $\theta$; $d_{u,\theta}^{2}$ represents the distance from the second mark pixel point to the $u$-th pixel point when the set direction is $\theta$; and $S$ is the sequence of all set directions.
Preferably, the method for acquiring the spatial distance specifically includes:
$$d_{v,g}^{space}=\sqrt{\left(x_{v}-x_{g}\right)^{2}+\left(y_{v}-y_{g}\right)^{2}}$$

wherein $d_{v,g}^{space}$ represents the spatial distance between the $v$-th pixel point and the $g$-th initial seed point on the recyclable garbage color image; $\left(x_{v},y_{v}\right)$ are the coordinates of the $v$-th pixel point; and $\left(x_{g},y_{g}\right)$ are the coordinates of the $g$-th initial seed point.
Preferably, the method for acquiring the color distance specifically includes:
$$d_{v,g}^{color}=\sqrt{\left(L_{v}-L_{g}\right)^{2}+\left(A_{v}-A_{g}\right)^{2}+\left(B_{v}-B_{g}\right)^{2}}$$

wherein $d_{v,g}^{color}$ represents the color distance between the $v$-th pixel point and the $g$-th initial seed point on the recyclable garbage color image; $\left(L_{v},A_{v},B_{v}\right)$ is the LAB color feature of the $v$-th pixel point; and $\left(L_{g},A_{g},B_{g}\right)$ is the LAB color feature of the $g$-th initial seed point.
Preferably, the weighted summation of the spatial distance and the color distance specifically includes:
the method for acquiring the weight corresponding to the spatial distance specifically comprises the following steps:
marking pixel points which do not belong to the initial seed points on the recyclable garbage color image as non-seed pixel points; aiming at any non-seed pixel point, acquiring an initial seed point which is closest to the non-seed pixel point in all set directions of the non-seed pixel point; forming a pixel distance set by the distance from each initial seed point to the non-seed pixel point;
judging whether the edge probability of any other non-seed pixel point between each initial seed point and the non-seed pixel point is greater than that of the non-seed pixel point; if so, calling the pixel point with the maximum edge probability between that initial seed point and the non-seed pixel point an interception point, replacing the distance between the non-seed pixel point and the corresponding initial seed point in the pixel distance set with the distance between the non-seed pixel point and the interception point to obtain an updated pixel distance set, and taking the value of the maximum element in the updated pixel distance set as the weight corresponding to the spatial distance; if not, taking the value of the maximum element in the pixel distance set as the weight corresponding to the spatial distance;
acquiring the number of types of gray levels corresponding to gray values of all pixel points on the recyclable garbage gray image, and taking the ratio of 255 to the number of types as the weight corresponding to the color distance; and carrying out weighted summation on the spatial distance and the color distance by using the weight corresponding to the spatial distance and the weight corresponding to the color distance.
Preferably, the method for obtaining the comprehensive distance of the initial seed points specifically comprises:
$$D_{v,g}=\frac{d_{v,g}^{color}}{w_{c}}+\frac{d_{v,g}^{space}}{w_{s}}$$

wherein $D_{v,g}$ represents the comprehensive distance between the $v$-th pixel point and the $g$-th initial seed point on the recyclable garbage color image; $d_{v,g}^{space}$ represents the spatial distance between the $v$-th pixel point and the $g$-th initial seed point; $d_{v,g}^{color}$ represents the color distance between the $v$-th pixel point and the $g$-th initial seed point; $w_{s}$ is the weight corresponding to the spatial distance; and $w_{c}$ is the weight corresponding to the color distance.
Preferably, the obtaining of the super-pixel image by dividing the super-pixels according to the comprehensive distance specifically comprises:
calculating the comprehensive distance from each pixel point on the recyclable garbage color image to each initial seed point, and assigning each pixel point to the initial seed point with the minimum comprehensive distance as its clustering center, each cluster being one superpixel; acquiring the center of gravity of each superpixel from the coordinates of its pixel points, taking the center of gravity as the position of the new seed point of the superpixel, and iterating continuously until the error converges, finally obtaining the superpixel image.
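The gravity-center update of this iteration can be sketched as follows (a hypothetical helper, not the patent's verbatim procedure; in a full implementation the assignment step and this update would alternate inside a loop until the error converges):

```python
import numpy as np

def update_seeds(labels, n_seeds):
    """Move every seed to the center of gravity (mean pixel coordinate)
    of the superpixel currently assigned to it."""
    ys, xs = np.indices(labels.shape)
    seeds = []
    for g in range(n_seeds):
        mask = labels == g                      # pixels of superpixel g
        seeds.append((float(ys[mask].mean()), float(xs[mask].mean())))
    return seeds

# two superpixels: left half of a 4x6 label map is 0, right half is 1
labels = np.zeros((4, 6), dtype=int)
labels[:, 3:] = 1
new_seeds = update_seeds(labels, 2)
```

Each returned pair is the (row, column) gravity center of one superpixel, used as that superpixel's new seed position in the next iteration.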
Preferably, the classifying the recyclable garbage type according to the super pixel image specifically includes: and classifying recoverable substance types in the super-pixel image by utilizing a DNN semantic segmentation neural network.
The invention also provides a recoverable garbage classification system based on RPA and computer vision, which comprises:
the data acquisition module is used for identifying and obtaining a recyclable garbage color image and a recyclable garbage gray image based on the RPA; calculating a gray level co-occurrence matrix corresponding to each pixel point on the recyclable garbage gray level image, and calculating the edge probability of the pixel points according to the probability of gray level combination in the gray level co-occurrence matrix;
the data processing module is used for acquiring pixel points corresponding to local maximum values of edge probabilities at two sides of each pixel point on a straight line passing through a set direction of the pixel point aiming at any pixel point on the recyclable garbage gray image, and marking the pixel points as a first mark pixel point and a second mark pixel point; calculating the seed excellence of any pixel point according to the distance from the first mark pixel point and the second mark pixel point to any pixel point and the marginal probability;
the data analysis module is used for calculating seed excellence of all pixel points on the recyclable garbage gray image, acquiring pixel point positions corresponding to local maximum values of the seed excellence, and taking the pixel points corresponding to the pixel point positions on the recyclable garbage gray image on the recyclable garbage color image as initial seed points;
respectively calculating the space distance and the color distance from the pixel point to the initial seed point according to the coordinates and the color characteristics of the pixel point and the initial seed point on the recyclable garbage color image; carrying out weighted summation on the space distance and the color distance to obtain the comprehensive distance from the pixel point to the initial seed point; dividing the super pixels according to the comprehensive distance to obtain a super pixel image; and classifying the types of the recyclable garbage according to the super pixel images, and finishing the sorting work of the recyclable garbage according to the classification result.
The embodiment of the invention at least has the following beneficial effects:
the method comprises the steps of identifying based on RPA to obtain an image of recyclable garbage, processing data based on the image, analyzing characteristics of pixel points, calculating marginal probability of each pixel point, calculating excellence of pixel point seeds by combining the marginal probability of the pixel points, and selecting initial seed points. And setting distance measurement weight in combination with edge probability to perform superpixel segmentation. The method selects the local most excellent seed points according to the characteristics, can reduce the iteration times of the super-pixel segmentation, and can avoid the influence of the seed points on the edge on the segmentation effect. The super pixels obtained by the method are different in size, and the method is well suitable for the size of the recyclable object target, so that the accuracy is higher when the recyclable garbage image is classified and identified, and the sorting efficiency of the recyclable garbage is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions and advantages of the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flow chart of a method of the present invention for classifying recyclable garbage based on RPA and computer vision.
Detailed Description
To further illustrate the technical means adopted by the present invention to achieve the predetermined objects and their effects, the following gives a detailed description of the specific implementation, structure, features and effects of the recyclable garbage classification method and system based on RPA and computer vision according to the present invention, with reference to the accompanying drawings and preferred embodiments. In the following description, "one embodiment" and "another embodiment" do not necessarily refer to the same embodiment. Furthermore, particular features, structures or characteristics in one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following describes a specific scheme of the recyclable garbage classification method and system based on the RPA and the computer vision in detail with reference to the accompanying drawings.
The specific scenario addressed by the invention is as follows: the types of recyclable objects in a recyclable garbage image are identified and sorted by computer vision, so that the recyclable objects can be recycled.
Example 1:
referring to fig. 1, a flowchart of a method for classifying recyclable garbage based on RPA and computer vision according to an embodiment of the present invention is shown, where the method includes the following steps:
identifying based on RPA to obtain a recoverable garbage color image and a recoverable garbage gray image; and calculating a gray level co-occurrence matrix corresponding to each pixel point on the recyclable garbage gray level image, and calculating the edge probability of the pixel points according to the probability of gray level combination in the gray level co-occurrence matrix.
Specifically, identification is performed based on RPA, and the acquisition operation is executed automatically to obtain a recyclable garbage color image and a recyclable garbage gray image. In this embodiment, the operation of acquiring the recyclable garbage RGB image of the conveyor belt shot by the camera is executed automatically based on RPA; the RGB image is converted into an LAB image and recorded as the recyclable garbage color image, and meanwhile, to facilitate the analysis of features in the image, the RGB image is grayed to obtain the recyclable garbage gray image.
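As a minimal sketch of this preprocessing step (the LAB conversion is typically delegated to a library such as OpenCV; here only the graying of a hypothetical RGB frame is shown, using the standard ITU-R BT.601 luma weights):

```python
import numpy as np

def to_gray(rgb):
    """Gray an (H, W, 3) uint8 RGB frame with the BT.601 luma weights.

    `rgb` stands in for the camera capture that the RPA robot fetches
    from the conveyor belt.
    """
    weights = np.array([0.299, 0.587, 0.114])
    return (rgb.astype(np.float64) @ weights).astype(np.uint8)

frame = np.full((8, 8, 3), 100, dtype=np.uint8)   # synthetic stand-in frame
gray = to_gray(frame)
```

A uniform gray input maps to itself, since the three weights sum to 1.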
And dividing gray levels of all pixel points on the recyclable garbage gray image, and acquiring the variety and the number of the gray levels. Specifically, the gray value is used as sample data, the gray value is divided into a plurality of categories by using a K-means clustering algorithm, and each category corresponds to one gray level. The number of categories is k, that is, the number of types of gray scales is k. Meanwhile, the implementer can select other more suitable methods to divide the gray scale according to the actual situation.
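The gray-level division can be sketched with a small one-dimensional K-means (Lloyd's algorithm); `quantize_gray_levels` is a hypothetical helper under that assumption, not the patent's exact procedure:

```python
import numpy as np

def quantize_gray_levels(gray, k, iters=20, seed=0):
    """Cluster the gray values into k gray levels with 1-D k-means.

    Returns (label image with entries 0..k-1, cluster centers), with
    levels sorted so that level 0 is the darkest cluster.
    """
    vals = gray.reshape(-1).astype(np.float64)
    rng = np.random.default_rng(seed)
    centers = rng.choice(np.unique(vals), size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(np.abs(vals[:, None] - centers[None, :]), axis=1)
        for c in range(k):
            if np.any(labels == c):            # keep empty clusters in place
                centers[c] = vals[labels == c].mean()
    order = np.argsort(centers)                # darkest level first
    remap = np.empty(k, dtype=int)
    remap[order] = np.arange(k)
    return remap[labels].reshape(gray.shape), centers[order]
```

On an image with two well-separated gray populations and k = 2, the dark pixels receive level 0 and the bright pixels level 1.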
With each pixel point on the recyclable garbage gray image as the center, a window of size $n\times n$ is constructed, and the gray level co-occurrence matrix of each window in each set direction is calculated to obtain the gray level co-occurrence matrix of the central pixel point of the window. In this embodiment, n is 5, and the set directions are the 0°, 45°, 90° and 135° directions. Combining the gray level co-occurrence matrix of the window in a set direction, the edge probability of the central pixel point of the window under that matrix is calculated, expressed by the formula:

$$P_{\theta}=\mathrm{norm}\!\left((n-1)\sum_{i=1}^{k}\sum_{j=1}^{k}\lvert i-j\rvert\,G_{\theta}(i,j)\right)$$

wherein $P_{\theta}$ indicates the edge probability of the pixel point when the set direction is $\theta$; $G_{\theta}(i,j)$, the value in row $i$ and column $j$ of the gray level co-occurrence matrix, is the probability that the combination of the $i$-th and the $j$-th gray level occurs when the set direction is $\theta$; $\lvert i-j\rvert$ represents the difference of the gray levels; $k$ represents the number of kinds of gray levels; $n-1$ represents the normalization coefficient; and $\mathrm{norm}(\cdot)$ is a normalization function. $\theta$ takes the values 0°, 45°, 90° and 135°. When a gray-level combination with a large gray-level difference occurs with a high probability, many pixel points in the window change gray level, so the probability that an edge of recyclable garbage passes through the window is high, that is, the edge probability of the central pixel point of the window is high.

In this embodiment the normalization coefficient is taken as $n-1$. If the central pixel point of the window is an edge pixel point of a recyclable object, then, because the selected window is small, the edge can be regarded as approximately a straight line within the window; in that case there is a straight line through the central pixel point of the window on which all n pixel points are edge pixel points of the recyclable object. If the edge is not obvious, the gray levels of two adjacent pixel points across the edge differ only slightly; this embodiment takes the minimum gray-level difference on the edge line to be 1. The probability that two pixel points with a gray-level difference of 1 are adjacent in the 0° direction is then $\frac{1}{n-1}$, so $n-1$ is used as the normalization coefficient. The larger the edge probability, the greater the probability that the central pixel point of the window is an edge.
The edge probability of the central pixel point of the window is calculated under the gray level co-occurrence matrices in all set directions, and the maximum of these values is taken as the edge probability of the pixel point.
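Assuming the formula above is the normalized expected gray-level difference of the co-occurrence matrix (with normalization coefficient n−1 and results clipped to [0, 1]), the per-window computation can be sketched as:

```python
import numpy as np

def glcm(levels, offset, k):
    """Normalized gray-level co-occurrence matrix of a window of gray-level
    labels for a pixel offset (dy, dx); entry (i, j) is the probability
    that levels i and j co-occur at that offset inside the window."""
    dy, dx = offset
    h, w = levels.shape
    m = np.zeros((k, k))
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                m[levels[y, x], levels[y2, x2]] += 1
    total = m.sum()
    return m / total if total else m

def edge_probability(levels, offset, k, n):
    """Edge probability of the window's central pixel in one set direction:
    (n - 1) times the expected gray-level difference, clipped to [0, 1]."""
    i, j = np.indices((k, k))
    return min((n - 1) * np.sum(np.abs(i - j) * glcm(levels, offset, k)), 1.0)
```

For a 5 x 5 window split by one vertical edge (three columns at level 0, two at level 1), the 0° direction (offset (0, 1)) yields an expected gray-level difference of 0.25, so the edge probability saturates at 1.0; a uniform window yields 0.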
In the present embodiment, an image is processed by superpixel division, where a superpixel is an irregular pixel block having a certain visual significance, which is formed by adjacent pixels having similar texture, color, brightness, and other characteristics. The super-pixel segmentation divides pixels into groups by utilizing the similarity of features between the pixels, and a small amount of super-pixels are used for replacing a large amount of pixel points to express picture features, so that the complexity of image post-processing can be reduced to a great extent.
Therefore, a certain number of seed points need to be selected first; if the seed points are chosen badly, the subsequent segmentation effect is poor. The method first considers the edge probability of the pixel points, i.e., the probability that a pixel point is an edge pixel point of a recyclable object: the smaller the edge probability of a selected seed point, the better, so pixel points near the center of the region formed by a recyclable object are preferred. If an initial seed point lies close to the edge line of a recyclable object, superpixel segmentation based on that seed point easily divides parts belonging to different recyclable-object regions into the same superpixel, which degrades the segmentation effect.
Step two: for any pixel point on the recyclable garbage gray image, on the straight line through the pixel point in each set direction, the pixel points corresponding to the local maxima of the edge probability on the two sides of the pixel point are acquired and marked as a first mark pixel point and a second mark pixel point; the seed excellence of the pixel point is calculated according to the distances from the first mark pixel point and the second mark pixel point to the pixel point and its edge probability.
Specifically, in order to select suitable seed points for superpixel segmentation, the seed excellence of each pixel point is calculated in combination with its edge probability, expressed by the formula:

$$B_{u}=\left(1-P_{u}\right)\cdot\exp\!\left(-\sum_{\theta\in S}\frac{\lvert d_{u,\theta}^{1}-d_{u,\theta}^{2}\rvert}{\left(d_{u,\theta}^{1}+d_{u,\theta}^{2}\right)/2}\right)$$

wherein $B_{u}$ represents the seed excellence of the $u$-th pixel point; $P_{u}$ represents the edge probability of the $u$-th pixel point; $d_{u,\theta}^{1}$ represents the distance from the first mark pixel point to the $u$-th pixel point when the set direction is $\theta$; $d_{u,\theta}^{2}$ represents the distance from the second mark pixel point to the $u$-th pixel point when the set direction is $\theta$; and $S$ is the sequence formed by all set directions, in this embodiment $S=\{0°,45°,90°,135°\}$.
The greater the edge probability of a pixel point, the more likely the pixel point lies on the edge line of a recyclable object; if a pixel point with a high edge probability is selected as a seed point, superpixel segmentation will probably divide regions belonging to different recyclable objects into the same superpixel. Therefore, the smaller the edge probability of a pixel point, the greater its seed excellence.

On the straight line in the horizontal direction through the u-th pixel point, the pixel points on the left and right corresponding to the nearest local maxima of the edge probability are acquired and marked respectively as the first mark pixel point and the second mark pixel point, and the distances from the first and second mark pixel points to the u-th pixel point are calculated and recorded as $d_{u,0°}^{1}$ and $d_{u,0°}^{2}$. The straight line in the horizontal direction is the straight line corresponding to the set direction of 0°. The method for obtaining local maxima is a known technique and is not described in detail here. The distance between pixel points on the image can be calculated by any method the implementer selects according to the actual situation.
And according to the same method, acquiring a first mark pixel point and a second mark pixel point corresponding to the pixel points in all directions. In this embodiment, the u-th pixel point is obtained on straight lines in all directions, where the angles corresponding to the directions are 0 °, 45 °, 90 ° and 135 °, which means that the included angles between the straight line passing through the pixel point and the horizontal line are 0 °, 45 °, 90 ° and 135 °, and the straight line in one direction corresponds to a first flag pixel point and a second flag pixel point, so as to calculate the corresponding distance.
If a pixel point is close to the boundary of the region where a recyclable object is located, the superpixel segmentation process may group portions not belonging to that region into the same superpixel; therefore a seed point should be far from the boundary of the region where the recyclable object is located in all directions.
The smaller the ratio of the difference between d1 and d2 to their mean, the closer the u-th pixel point is to the middle position between the first mark pixel point and the second mark pixel point, i.e., far from both of them. Conversely, when the ratio is larger, the u-th pixel point is biased toward one of the two mark pixel points and may be closer to an edge line of the region where a recyclable object is located. The smaller this ratio, the greater the seed excellence of the pixel point.
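An illustrative sketch of the seed-excellence idea follows. The patent combines the pixel's edge probability with, per set direction, the ratio of |d1 − d2| to the mean of d1 and d2; the exact combination used below is an assumption for demonstration, not the patent's formula.

```python
# Assumed combination: low edge probability and a centred position between the
# two mark pixel points in every direction give a large seed excellence.

def seed_excellence(edge_prob_u, dists):
    """edge_prob_u: edge probability of pixel u, in [0, 1].
    dists: one (d1, d2) pair per set direction (0°, 45°, 90°, 135°), the
    distances to the first and second mark pixel points in that direction."""
    total = 0.0
    for d1, d2 in dists:
        mean = (d1 + d2) / 2.0
        total += 1.0 - abs(d1 - d2) / mean    # contributes 1 when u is centred
    return (1.0 - edge_prob_u) * total

# A centred pixel with low edge probability scores close to the maximum:
print(seed_excellence(0.1, [(5, 5), (4, 4), (6, 6), (5, 5)]))
```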
Calculate the seed excellence of all pixel points on the recyclable garbage gray image, acquire the pixel point positions corresponding to the local maximum values of the seed excellence, and take the pixel points at the corresponding positions on the recyclable garbage color image as the initial seed points. According to the coordinates and color features of the pixel points and the initial seed points on the recyclable garbage color image, calculate the spatial distance and the color distance from each pixel point to the initial seed points; perform weighted summation of the spatial distance and the color distance to obtain the integrated distance from the pixel point to the initial seed point; and divide superpixels according to the integrated distance to obtain the superpixel image.
Firstly, calculate the seed excellence of all pixel points on the recyclable garbage gray image according to the method in step two, acquire the pixel point positions corresponding to the local maxima of the seed excellence, and take those pixel points as the initial seed points for superpixel segmentation. Selecting seed points by seed excellence can accelerate the error convergence of the subsequent superpixel segmentation, reduce the number of iterations, and improve segmentation efficiency. It also prevents seed points from falling near the edges of regions where recyclable objects are located, which would degrade the segmentation effect.
To obtain an accurate segmentation result, superpixel segmentation needs to be performed on the recyclable garbage color image. Since the seed points in the above step were obtained on the recyclable garbage gray image, the pixel points at the corresponding positions on the recyclable garbage color image are taken as the initial seed points, and superpixel segmentation is performed on the color image.
Then, mark the pixel points on the recyclable garbage color image that do not belong to the initial seed points as non-seed pixel points, and calculate the spatial distance from each non-seed pixel point to the initial seed points, expressed by the formula:

SD(v, g) = √((x_v − x_g)² + (y_v − y_g)²)
wherein SD(v, g) represents the spatial distance between the v-th pixel point and the g-th initial seed point on the recyclable garbage color image, i.e., the spatial distance between the v-th non-seed pixel point and the g-th initial seed point; (x_v, y_v) are the coordinates of the v-th pixel point, i.e., of the v-th non-seed pixel point; and (x_g, y_g) are the coordinates of the g-th initial seed point.
Calculate the color distance from each non-seed pixel point to the initial seed points, expressed by the formula:

CD(v, g) = √((L_v − L_g)² + (A_v − A_g)² + (B_v − B_g)²)
wherein CD(v, g) represents the color distance between the v-th pixel point and the g-th initial seed point on the recyclable garbage color image, i.e., the color distance between the v-th non-seed pixel point and the g-th initial seed point; (L_v, A_v, B_v) are the LAB color features of the v-th pixel point, i.e., of the v-th non-seed pixel point; and (L_g, A_g, B_g) are the LAB color features of the g-th initial seed point.
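Assuming the standard Euclidean forms described above (the formula images are reconstructed, so the exact definitions are assumptions), the two distance terms can be sketched as:

```python
import math

def spatial_distance(p, seed):
    """Euclidean distance between two (x, y) pixel coordinates."""
    (x1, y1), (x2, y2) = p, seed
    return math.sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2)

def color_distance(lab_p, lab_seed):
    """Euclidean distance between two CIELAB triples (L, A, B)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab_p, lab_seed)))

print(spatial_distance((0, 0), (3, 4)))          # 5.0
print(color_distance((50, 0, 0), (50, 3, 4)))    # 5.0
```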
Furthermore, in the distance measurement of superpixel segmentation, different weights are set for the spatial distance and the color distance of the non-seed pixel points, and a suitable initial seed point is selected as the clustering center of each non-seed pixel point by combining its spatial distance and color distance to the initial seed points. The weight corresponding to the spatial distance is the same for all non-seed pixel points, as is the weight corresponding to the color distance.
Calculate the integrated distance from each non-seed pixel point to the initial seed points, expressed by the formula:

D(v, g) = w_s · SD(v, g) + w_c · CD(v, g)
wherein D(v, g) represents the integrated distance between the v-th pixel point and the g-th initial seed point on the recyclable garbage color image, i.e., the integrated distance between the v-th non-seed pixel point and the g-th initial seed point; SD(v, g) represents the spatial distance between the v-th pixel point and the g-th initial seed point; CD(v, g) represents the color distance between the v-th pixel point and the g-th initial seed point; w_s is the weight corresponding to the spatial distance; and w_c is the weight corresponding to the color distance.
The method for obtaining the weight corresponding to the color distance comprises: acquiring the number of kinds of gray levels corresponding to the gray values of all pixel points on the recyclable garbage gray image, and taking the ratio of 255 to that number as the weight corresponding to the color distance. Specifically, the gray levels of all pixel points on the recyclable garbage gray image are divided in step one, obtaining the number of kinds of gray levels k; in the ideal case the range of each kind of gray level is 255/k, so 255/k is taken as the weight w_c corresponding to the color distance. When the color distance is less than 255/k, it contributes little to the calculation of the integrated distance and the spatial distance is emphasized; when the color distance is greater than 255/k, it contributes significantly to the calculation of the integrated distance.
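The color-distance weight described above reduces to counting distinct gray levels after gray-level division. A minimal sketch (the list-of-lists image representation is an illustrative assumption):

```python
def color_weight(gray_image):
    """Weight for the color distance: 255 divided by the number of kinds
    of gray levels k present in the (already gray-level-divided) image."""
    levels = {g for row in gray_image for g in row}
    k = len(levels)
    return 255.0 / k

gray = [[0, 51, 102], [153, 204, 255]]   # 6 distinct gray levels
print(color_weight(gray))                 # 42.5
```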
The method for obtaining the weight corresponding to the spatial distance comprises: for any non-seed pixel point, acquiring the initial seed point closest to the non-seed pixel point in each of its set directions; forming a pixel distance set from the distances from these initial seed points to the non-seed pixel point; and judging whether the edge probability of any other non-seed pixel point between each initial seed point and the non-seed pixel point is greater than that of the non-seed pixel point. If so, the pixel point with the maximum edge probability between the initial seed point and the non-seed pixel point is called a truncation point; the distance between the non-seed pixel point and the corresponding initial seed point in the pixel distance set is replaced by the distance between the non-seed pixel point and the truncation point to obtain an updated pixel distance set, and the value of the maximum element in the updated pixel distance set is taken as the weight corresponding to the spatial distance. If not, the value of the maximum element in the pixel distance set is taken as the weight corresponding to the spatial distance.
Specifically, for the v-th non-seed pixel point, acquire the initial seed point closest to it in each of the set directions, i.e., one nearest initial seed point per direction, obtaining t initial seed points in total. Calculate the distances from the v-th non-seed pixel point to these t initial seed points and construct the pixel distance set {l_1, l_2, …, l_t}.
Judge whether any other non-seed pixel point lying between the v-th non-seed pixel point and the t initial seed points has an edge probability greater than that of the v-th non-seed pixel point. If not, take the value of the maximum element in the pixel distance set as the weight corresponding to the spatial distance, i.e., w_s = max{l_1, l_2, …, l_t}.
If such a pixel point exists, the pixel point with the maximum edge probability between the v-th non-seed pixel point and the corresponding initial seed point is called a truncation point, indicating that a boundary may exist between the v-th non-seed pixel point and that initial seed point. Replace the distance between the v-th non-seed pixel point and the corresponding initial seed point in the pixel distance set with the distance between the non-seed pixel point and the truncation point, obtaining an updated pixel distance set {l'_1, l'_2, …, l'_t}; the weight corresponding to the spatial distance from the v-th non-seed pixel point to the initial seed points is then w_s = max{l'_1, l'_2, …, l'_t}.
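A sketch of the spatial-weight computation for one non-seed pixel point follows. The data layout of `neighbours` and all names are assumptions for illustration; the patent does not fix a data structure.

```python
# `neighbours` holds, per set direction, the nearest initial seed point as a
# pair (distance_to_seed, between), where `between` lists the pixels strictly
# between the non-seed pixel and that seed as (distance_from_v, edge_prob).

def spatial_weight(edge_prob_v, neighbours):
    dists = []
    for dist_to_seed, between in neighbours:
        # In-between pixels whose edge probability exceeds that of v:
        cut = [(d, p) for d, p in between if p > edge_prob_v]
        if cut:
            # Truncation point: in-between pixel with the largest edge
            # probability; its distance replaces the seed distance.
            d_cut, _ = max(cut, key=lambda t: t[1])
            dists.append(d_cut)
        else:
            dists.append(dist_to_seed)
    return max(dists)   # maximum element of the (updated) pixel distance set

neighbours = [
    (10, [(3, 0.9), (6, 0.4)]),   # strong edge between v and this seed
    (7,  [(2, 0.1)]),             # no stronger edge: keep the full distance
]
print(spatial_weight(0.5, neighbours))   # 7
```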
When the spatial distance is less than the weight w_s, it contributes little to the calculation of the integrated distance and the color distance is emphasized; when the spatial distance is greater than the weight, it contributes significantly to the calculation of the integrated distance. Because recyclable objects differ in size, setting the weights corresponding to the spatial distance and the color distance lets the size of the final superpixels adapt to recyclable objects of different sizes, preventing smaller recyclable objects from being grouped into the same superpixel as other recyclables or the background.
Finally, calculate the integrated distance from each non-seed pixel point to each initial seed point on the recyclable garbage color image, and select the initial seed point with the minimum integrated distance as the clustering center of the non-seed pixel point, each cluster being one superpixel. Obtain the center of gravity of each superpixel from the coordinates of its pixel points, take it as the position of the superpixel's new seed point, and iterate continuously until the error converges, finally completing the superpixel segmentation of the recyclable garbage color image and obtaining the superpixel image.
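The assignment/update loop above can be sketched as a minimal SLIC-style clustering. Fixed example weights stand in for the adaptively chosen w_s and w_c, and pixels are modelled as (x, y, L, A, B) tuples; both are illustrative assumptions.

```python
import math

def integrated(p, s, w_s=1.0, w_c=1.0):
    """Weighted sum of spatial and color distance for (x, y, L, A, B) tuples."""
    return w_s * math.dist(p[:2], s[:2]) + w_c * math.dist(p[2:], s[2:])

def superpixels(pixels, seeds, iters=10):
    """Assign each pixel to its nearest seed, move seeds to cluster centroids
    (centers of gravity), and repeat until the seeds stop moving."""
    for _ in range(iters):
        clusters = {i: [] for i in range(len(seeds))}
        for p in pixels:
            g = min(range(len(seeds)), key=lambda i: integrated(p, seeds[i]))
            clusters[g].append(p)
        new_seeds = []
        for i, members in clusters.items():
            if members:
                new_seeds.append(tuple(sum(c) / len(members)
                                       for c in zip(*members)))
            else:
                new_seeds.append(seeds[i])
        if new_seeds == seeds:    # error converged
            break
        seeds = new_seeds
    return clusters

pixels = [(0, 0, 10, 0, 0), (1, 0, 12, 0, 0), (8, 8, 200, 5, 5), (9, 8, 198, 5, 5)]
labels = superpixels(pixels, [(0, 0, 10, 0, 0), (9, 8, 198, 5, 5)])
print(sorted(len(v) for v in labels.values()))   # [2, 2]
```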
And step four, classifying the types of the recyclable garbage according to the super-pixel images, and finishing the sorting work of the recyclable garbage according to the classification result.
Specifically, the recyclables in the image are classified and identified according to the superpixel image; this embodiment identifies the targets in the superpixel image by means of DNN semantic segmentation. Using the superpixel image reduces the amount of label annotation required in the training set and the complexity of the network, speeds up the classification and identification of recyclables, and improves its accuracy.
The related contents of the DNN network comprise:
The input of the network is the superpixel image, and the output is the type of recyclable corresponding to each superpixel. The data set used is a superpixel image data set. The superpixels to be classified fall into 6 classes in total, and the labels of the training set are annotated as follows: a single-channel semantic label in which a superpixel belonging to waste-paper recyclables is marked 1, plastic recyclables 2, glass recyclables 3, metal recyclables 4, cloth recyclables 5, and garbage and background not belonging to recyclables 0. The task of the network is classification, and the loss function used is the cross-entropy loss function.
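The single-channel label scheme above can be written down directly (class names translated from the patent text; the dictionary itself is merely a convenience, not part of the patent):

```python
# Semantic label values for the 6 superpixel classes described above.
LABELS = {
    0: "garbage / background not belonging to recyclables",
    1: "waste paper",
    2: "plastic",
    3: "glass",
    4: "metal",
    5: "cloth",
}
print(len(LABELS))   # 6
```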
According to the classification results of the network, the recyclables are sorted onto the conveyor belts of the corresponding categories based on the RPA, so that different types of recyclables can subsequently be transported to different processing centers and reused after processing.
Example 2:
the present embodiment provides a recyclable garbage classification system based on RPA and computer vision, the system comprising:
the data acquisition module is used for identifying and obtaining a recoverable garbage color image and a recoverable garbage gray image based on the RPA; calculating a gray level co-occurrence matrix corresponding to each pixel point on the recyclable garbage gray level image, and calculating the edge probability of the pixel points according to the probability of gray level combination in the gray level co-occurrence matrix;
the data processing module is used for: for any pixel point on the recyclable garbage gray image, acquiring the pixel points corresponding to the local maximum values of the edge probability on both sides of the pixel point on a straight line passing through the pixel point in a set direction, and marking them as a first mark pixel point and a second mark pixel point; and calculating the seed excellence of the pixel point according to the distances from the first mark pixel point and the second mark pixel point to the pixel point and the edge probability;
the data analysis module is used for calculating the seed excellence of all pixel points on the recyclable garbage gray image, acquiring the pixel point positions corresponding to the local maximum values of the seed excellence, and taking the pixel points at the corresponding positions on the recyclable garbage color image as the initial seed points;
respectively calculating the space distance and the color distance from the pixel point to the initial seed point according to the coordinates and the color characteristics of the pixel point and the initial seed point on the recyclable garbage color image; carrying out weighted summation on the space distance and the color distance to obtain the comprehensive distance from the pixel point to the initial seed point; dividing the super pixels according to the comprehensive distance to obtain a super pixel image; and classifying the types of the recyclable garbage according to the super pixel images, and finishing the sorting work of the recyclable garbage according to the classification result.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.
Claims (10)
1. A method for classifying recyclable garbage based on RPA and computer vision, the method comprising the steps of:
identifying based on the RPA to obtain a recoverable garbage color image and a recoverable garbage gray image; calculating a gray level co-occurrence matrix corresponding to each pixel point on the recyclable garbage gray level image, and calculating the edge probability of the pixel points according to the probability of gray level combination in the gray level co-occurrence matrix;
for any pixel point on the recyclable garbage gray image, acquiring the pixel points corresponding to the local maximum values of the edge probability on both sides of the pixel point on a straight line passing through the pixel point in a set direction, and marking them as a first mark pixel point and a second mark pixel point; calculating the seed excellence of the pixel point according to the distances from the first mark pixel point and the second mark pixel point to the pixel point and the edge probability;
calculating the seed excellence of all pixel points on the recyclable garbage gray image, acquiring the pixel point positions corresponding to the local maximum values of the seed excellence, and taking the pixel points at the corresponding positions on the recyclable garbage color image as the initial seed points;
respectively calculating the space distance and the color distance between the pixel point and the initial seed point according to the coordinates and the color characteristics of the pixel point and the initial seed point on the recyclable garbage color image; carrying out weighted summation on the space distance and the color distance to obtain the comprehensive distance from the pixel point to the initial seed point; dividing super pixels according to the comprehensive distance to obtain a super pixel image;
and classifying the types of the recyclable garbage according to the super pixel images, and finishing the sorting work of the recyclable garbage according to the classification result.
2. The method for classifying recyclable garbage based on the RPA and the computer vision according to claim 1, wherein the method for obtaining the edge probability of the pixel point specifically comprises:
taking each pixel point on the recyclable garbage gray image as a center, constructing a window with the size of n multiplied by n, and calculating a gray level co-occurrence matrix of each window in each set direction to obtain a gray level co-occurrence matrix of the pixel point in the center of the window;
calculating the edge probability of the central pixel point of the window under the gray level co-occurrence matrix in a set direction θ, expressed by the formula:

P_θ = (1/Z) · Σ_{i=1..k} Σ_{j=1..k} |i − j| · p_θ(i, j)

wherein P_θ represents the edge probability of the pixel point when the set direction is θ; p_θ(i, j) is the value in the i-th row and j-th column of the gray level co-occurrence matrix, i.e., the probability that the combination of the i-th gray level and the j-th gray level occurs when the set direction is θ; |i − j| represents the difference between gray levels; k represents the number of kinds of gray levels; and Z represents a normalization coefficient;
and calculating the edge probability of the central pixel point of the window under the gray level co-occurrence matrixes in all the set directions, and acquiring the maximum value of the edge probability as the edge probability of the pixel point.
3. The RPA and computer vision based recyclable waste classification method according to claim 1, wherein the seed excellence obtaining method is specifically:

F_u = (1 − P_u) · Σ_{s∈S} [1 − |d1_{u,s} − d2_{u,s}| / ((d1_{u,s} + d2_{u,s})/2)]

wherein F_u represents the seed excellence of the u-th pixel point; P_u represents the edge probability of the u-th pixel point; d1_{u,s} represents the distance from the first mark pixel point to the u-th pixel point when the set direction is s; d2_{u,s} represents the distance from the second mark pixel point to the u-th pixel point when the set direction is s; and S is the sequence of all set directions.
4. The RPA and computer vision based recyclable garbage classification method according to claim 1, wherein the spatial distance obtaining method is specifically:

SD(v, g) = √((x_v − x_g)² + (y_v − y_g)²)

wherein SD(v, g) represents the spatial distance between the v-th pixel point and the g-th initial seed point on the recyclable garbage color image; (x_v, y_v) are the coordinates of the v-th pixel point; and (x_g, y_g) are the coordinates of the g-th initial seed point.
5. The method for classifying recyclable garbage based on RPA and computer vision according to claim 1, wherein the method for obtaining the color distance is specifically:

CD(v, g) = √((L_v − L_g)² + (A_v − A_g)² + (B_v − B_g)²)

wherein CD(v, g) represents the color distance between the v-th pixel point and the g-th initial seed point on the recyclable garbage color image; (L_v, A_v, B_v) are the LAB color features of the v-th pixel point; and (L_g, A_g, B_g) are the LAB color features of the g-th initial seed point.
6. The RPA and computer vision based recyclable garbage classification method according to claim 1, wherein the weighted summation of the spatial distance and the color distance is specifically:
the method for acquiring the weight corresponding to the spatial distance specifically comprises the following steps:
marking pixel points which do not belong to the initial seed points on the recyclable garbage color image as non-seed pixel points; aiming at any non-seed pixel point, acquiring an initial seed point which is closest to the non-seed pixel point in all set directions of the non-seed pixel point; forming a pixel distance set by the distance from each initial seed point to the non-seed pixel point;
judging whether the edge probability of other non-seed pixel points between each initial seed point and the non-seed pixel point is greater than that of the non-seed pixel point; if so, calling the pixel point with the maximum edge probability between the initial seed point and the non-seed pixel point a truncation point, replacing the distance between the non-seed pixel point and the corresponding initial seed point in the pixel distance set with the distance between the non-seed pixel point and the truncation point to obtain an updated pixel distance set, and taking the value of the maximum element in the updated pixel distance set as the weight corresponding to the spatial distance; if not, taking the value of the maximum element in the pixel distance set as the weight corresponding to the spatial distance;
acquiring the number of types of gray levels corresponding to gray values of all pixel points on the recyclable garbage gray image, and taking the ratio of 255 to the number of types as the weight corresponding to the color distance;
and carrying out weighted summation on the spatial distance and the color distance by using the weight corresponding to the spatial distance and the weight corresponding to the color distance.
7. The method for classifying recyclable garbage based on RPA and computer vision according to claim 1, wherein the method for obtaining the integrated distance to the initial seed points comprises:

D(v, g) = w_s · SD(v, g) + w_c · CD(v, g)

wherein D(v, g) represents the integrated distance between the v-th pixel point and the g-th initial seed point on the recyclable garbage color image; SD(v, g) represents the spatial distance between the v-th pixel point and the g-th initial seed point; CD(v, g) represents the color distance between the v-th pixel point and the g-th initial seed point; w_s is the weight corresponding to the spatial distance; and w_c is the weight corresponding to the color distance.
8. The method according to claim 1, wherein the step of dividing the superpixels according to the synthetic distance to obtain the superpixel image is specifically as follows:
calculating the comprehensive distance from each pixel point on the recyclable garbage color image to each initial seed point, and selecting the initial seed point with the minimum comprehensive distance as a clustering center of the pixel point, wherein each cluster is a super pixel; and acquiring the gravity center of each super pixel according to the coordinates of the pixel points, taking the gravity center as the position of a new seed point of the super pixel, and continuously iterating until the error is converged to finally obtain the super pixel image.
9. The RPA and computer vision based recyclable garbage classification method according to claim 1, wherein the classification of recyclable garbage categories according to superpixel images is specifically: and classifying recoverable substance types in the super-pixel image by utilizing a DNN semantic segmentation neural network.
10. A recyclable waste classification system based on RPA and computer vision, the system comprising:
the data acquisition module is used for identifying and obtaining a recoverable garbage color image and a recoverable garbage gray image based on the RPA; calculating a gray level co-occurrence matrix corresponding to each pixel point on the recyclable garbage gray level image, and calculating the edge probability of the pixel points according to the probability of gray level combination in the gray level co-occurrence matrix;
the data processing module is used for: for any pixel point on the recyclable garbage gray image, acquiring the pixel points corresponding to the local maximum values of the edge probability on both sides of the pixel point on a straight line passing through the pixel point in a set direction, and marking them as a first mark pixel point and a second mark pixel point; and calculating the seed excellence of the pixel point according to the distances from the first mark pixel point and the second mark pixel point to the pixel point and the edge probability;
the data analysis module is used for calculating the seed excellence of all pixel points on the recyclable garbage gray image, acquiring the pixel point positions corresponding to the local maximum values of the seed excellence, and taking the pixel points at the corresponding positions on the recyclable garbage color image as the initial seed points;
respectively calculating the space distance and the color distance from the pixel point to the initial seed point according to the coordinates and the color characteristics of the pixel point and the initial seed point on the recyclable garbage color image; carrying out weighted summation on the space distance and the color distance to obtain the comprehensive distance from the pixel point to the initial seed point; dividing super pixels according to the comprehensive distance to obtain a super pixel image; and classifying the types of the recyclable garbage according to the super pixel images, and finishing the sorting work of the recyclable garbage according to the classification result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211186439.9A CN115272778A (en) | 2022-09-28 | 2022-09-28 | Recyclable garbage classification method and system based on RPA and computer vision |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115272778A true CN115272778A (en) | 2022-11-01 |
Family
ID=83757492
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211186439.9A Pending CN115272778A (en) | 2022-09-28 | 2022-09-28 | Recyclable garbage classification method and system based on RPA and computer vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115272778A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115641327A (en) * | 2022-11-09 | 2023-01-24 | 浙江天律工程管理有限公司 | Building engineering quality supervision and early warning system based on big data |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109635809A (en) * | 2018-11-02 | 2019-04-16 | 浙江工业大学 | A kind of superpixel segmentation method towards vision degraded image |
CN113658132A (en) * | 2021-08-16 | 2021-11-16 | 沭阳九鼎钢铁有限公司 | Computer vision-based structural part weld joint detection method |
CN114708464A (en) * | 2022-06-01 | 2022-07-05 | 广东艺林绿化工程有限公司 | Municipal sanitation cleaning garbage truck cleaning method based on road garbage classification |
Non-Patent Citations (1)
Title |
---|
Wu Jian et al.: "Research on Analysis and Recognition of Waste Garbage Based on Computer Vision", Information Technology and Informatization |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |