CN115661646A - Intelligent garbage classification method based on computer vision - Google Patents

Intelligent garbage classification method based on computer vision

Info

Publication number: CN115661646A
Authority: CN (China)
Prior art keywords: pixel point, garbage, pixel, gray, obtaining
Legal status (an assumption, not a legal conclusion; Google has not performed a legal analysis): Pending
Application number: CN202211296686.4A
Other languages: Chinese (zh)
Inventors: 翟慧, 房静, 翟煜锦, 李纪云, 赵大鹏
Current Assignee (list may be inaccurate; Google has not performed a legal analysis): Henan Polytechnic Institute
Original Assignee: Henan Polytechnic Institute
Application filed by Henan Polytechnic Institute
Priority to CN202211296686.4A
Publication of CN115661646A
Legal status: Pending


Landscapes

  • Image Processing (AREA)

Abstract

The invention relates to the field of garbage classification and recycling, and in particular to an intelligent garbage classification method based on computer vision. The method collects a grayscale image of garbage that has been fully spread out; obtains the uniformity of each pixel; obtains the adhesion degree of each pixel from the average distance between the pixel and other pixels with the same gradient magnitude, together with the pixel's uniformity; derives a per-pixel filter window size and filters each pixel of the grayscale garbage image to obtain a filtered grayscale image; and uses a neural network to identify the plastic bag in each connected domain so that the plastic bags can be separated from the garbage. By selecting an adaptive filter window for the grayscale garbage image before identifying plastic bags, the invention separates plastic bags from garbage accurately and effectively improves garbage classification efficiency.

Description

Intelligent garbage classification method based on computer vision
Technical Field
The invention relates to the field of garbage classification and recycling, and in particular to an intelligent garbage classification method based on computer vision.
Background
With the rapid development of society, new products emerge continuously and are made from an ever wider variety of materials, so more and more waste is generated, bringing serious environmental pollution. People recognized the importance of waste sorting as early as the last century, and waste sorting has become increasingly intelligent with the progress of science and technology. At present, primary sorting is mainly performed by people according to conventional awareness, but out of habit, or because some waste needs to be wrapped, plastic bags remain in the waste after primary sorting. When the waste is transported to a treatment station for processing, secondary sorting is therefore required: the plastic bags wrapping the waste must be broken open so that the bags separate from the waste, and the plastic bags in the waste must first be picked out.
In the prior art, plastic bags are separated from garbage by air separation, but complete separation requires several passes, the process spreads the odor of the garbage widely, and the efficiency and the level of intelligence are low. Alternatively, plastic bags can be identified in garbage by computer vision; however, the environment around the bags and the garbage is complex, so the images acquired for computer vision contain a large amount of noise. If a neural network identifies plastic bags directly after edge detection on such images, garbage may be misidentified as plastic bags, identification is not accurate enough, garbage that resembles a plastic bag may be grabbed during separation, and the sorting result is poor.
Disclosure of Invention
To solve the prior-art problem that images obtained through computer vision contain a large amount of noise, making subsequent plastic-bag identification inaccurate, the invention provides a computer-vision-based intelligent garbage classification method comprising the following steps: collecting a grayscale image of garbage that has been fully spread out; obtaining the uniformity of each pixel; obtaining the adhesion degree of each pixel from the average distance between the pixel and other pixels with the same gradient magnitude, together with the pixel's uniformity; obtaining a per-pixel filter window size and filtering each pixel of the grayscale garbage image to obtain a filtered grayscale image; and identifying the plastic bag in each connected domain with a neural network so that the plastic bags can be separated from the garbage. By selecting an adaptive filter window for the grayscale garbage image before identifying plastic bags, the invention separates plastic bags from garbage accurately and effectively improves garbage classification efficiency.
The invention adopts the following technical scheme. The intelligent garbage classification method based on computer vision comprises the following steps:
breaking the plastic bags wrapped around the garbage with a roller device, spreading the garbage and the broken plastic bags flat on a conveyor belt, and collecting a grayscale image of the garbage fully spread on the belt;
obtaining the uniformity of each pixel from the gradient magnitude of the pixel and of the pixels in its eight-neighborhood in the grayscale garbage image;
obtaining the distance between every two pixels with the same gradient magnitude in the grayscale garbage image, and obtaining the adhesion degree of each pixel from the average distance between the pixel and the other pixels with the same gradient magnitude, together with the pixel's uniformity;
obtaining the filter window size of each pixel from its adhesion degree, and filtering each pixel of the grayscale garbage image with that window to obtain a filtered grayscale image;
performing edge detection on the filtered grayscale image to obtain a plurality of connected domains, taking the filtered image as the input of a neural network whose output is the plastic bag in each connected domain, and separating the plastic bag from the garbage in each connected domain.
Further, the method for obtaining the uniformity of each pixel comprises the following steps:
obtaining the gradient magnitude of each pixel in the grayscale garbage image;
obtaining the information entropy of each pixel from its gradient magnitude and the gradient magnitudes of the pixels in its eight-neighborhood;
obtaining the mean gradient-magnitude difference between each pixel and its eight neighbors;
obtaining the uniformity of each pixel from this mean difference and the pixel's information entropy.
Further, the method for obtaining the adhesion degree of each pixel comprises the following steps:
obtaining the average distance between each pixel in the grayscale garbage image and the pixels with the same gradient magnitude;
obtaining the ratio between the gradient magnitude of each pixel and this average distance;
obtaining the adhesion degree of each pixel from this ratio and the pixel's uniformity.
Further, the expression for the adhesion degree of each pixel is:

F_i = J_i · e^(−d_i·G_i)

where F_i denotes the adhesion degree of the i-th pixel, d_i the average distance between the i-th pixel and the pixels with the same gradient magnitude, G_i the gradient magnitude of the i-th pixel, e the base of the exponential function, and J_i the uniformity of the i-th pixel.
Further, the method for obtaining the filter window size of each pixel is as follows:
taking the adhesion degree of each pixel as the exponent of an exponential function;
multiplying the exponential function by a set constant and rounding up to obtain the filter window size of each pixel.
Further, the method of taking the filtered grayscale garbage image as the input of a neural network whose output is the plastic bag in each connected domain comprises the following steps:
acquiring grayscale images of plastic bags of various colors as a data set, and training a neural network on this data set;
performing target recognition on the connected domains of the filtered grayscale garbage image with the trained network to obtain the plastic bags in the image.
Further, the method for filtering each pixel of the grayscale garbage image comprises the following steps:
taking each pixel of the grayscale garbage image as a center point, and obtaining the Gaussian kernel of the filter window around each center point from the gray values of all pixels within that window;
performing Gaussian filtering on each center point with the Gaussian kernel of its filter window.
The invention has the following beneficial effects. The uniformity of each pixel reflects the distribution of pixels in the image and serves as one index for distinguishing image content from noise. Combined with the average distance between each pixel and the pixels of equal gradient magnitude, it yields the adhesion degree of each pixel, which reflects both the gray-level variation of the region around the pixel and the degree to which pixels aggregate there. The filter window of each pixel is then obtained adaptively from its adhesion degree, ensuring a good filtering effect on the image, which effectively improves the accuracy of subsequent plastic-bag identification and facilitates intelligent garbage sorting.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic structural diagram of an intelligent garbage classification method based on computer vision according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
As shown in fig. 1, a schematic structural diagram of an intelligent garbage classification method based on computer vision according to an embodiment of the present invention is provided, including:
101. and collecting the garbage gray level image after being completely tiled.
The invention addresses the following scenario: when socially pre-sorted kitchen waste arrives at a recycling station for secondary treatment, the kitchen waste on the station's conveyor belt must be sorted a second time and the plastic bags separated out. The premise of the invention is therefore, first, that a roller device fitted with serrated or steel-blade-shaped cutters is installed at the feed port of the conveyor belt, so that the plastic bags in the waste are torn open and the waste inside is tipped out; and second, that a shaking device inside the conveyor shakes the kitchen waste apart into a flat, unstacked state. A high-definition camera mounted above the belt then collects images of the flattened garbage. While the belt is running, a garbage image must be collected at regular intervals. If the belt moves uniformly in a straight line with speed v, the belt width is l, and the aspect ratio of the collected images is 4, the sampling interval t is:
t = 4l / v
in order to prevent the same object from being captured in different frames when capturing images, it is necessary to sample t' = t- α at a sampling time in a small range. In order to reduce the influence of light on subsequent image processing, a light source used for image acquisition needs to uniformly hit the surface of the conveyor belt from top to bottom, and images are shot through a high-power camera.
The acquired image is then converted to grayscale to obtain the corresponding grayscale garbage image; the specific graying method is prior art and is not limited by the invention.
Since the detected edges of kitchen waste contain a large amount of edge noise, noise reduction must be performed so that noise does not produce spurious edges when the edges are extracted.
Kitchen waste mainly comprises discarded vegetable leaves, leftovers, fruit peel, eggshells, tea leaves, bones and the like. A plastic bag differs markedly in its features from this other kitchen waste, but garbage takes many forms and types, which creates interference during recognition; larger vegetable leaves or fruit peel, for example, can affect the recognition of plastic bags.
102. Obtain the uniformity of each pixel from the gradient magnitude of the pixel and of the pixels in its eight-neighborhood in the grayscale garbage image.
When denoising the image, the degree to which noise would affect the edge pixels during edge detection must be considered. Noise pixels are distributed chaotically, while kitchen waste, despite its many shapes, has gray-level features that can be regarded as equal or similar within a very small range; noise is therefore distinguished from original pixels by computing pixel uniformity within a small neighborhood.
The method for obtaining the uniformity of each pixel is as follows.
First obtain the gradient magnitude of each pixel in the grayscale garbage image. The invention computes the gradient of each pixel in order to measure its uniformity; the gradient magnitude and direction of each pixel are calculated as:
G(x, y) = sqrt(g_x² + g_y²)

θ = arctan(g_y / g_x)

where G(x, y) denotes the gradient magnitude, θ the gradient direction, g_x the partial derivative of the binary function f(x, y) with respect to x, and g_y the partial derivative of f(x, y) with respect to y. The gradient direction is taken as an absolute value, giving an angular range of [0°, 180°]. At each pixel the gradient has a magnitude and a direction; the x-direction gradient emphasizes vertical edge features and the y-direction gradient emphasizes horizontal edge features.
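A minimal central-difference sketch of the per-pixel gradient magnitude and direction described above (the function name and the choice of central differences are assumptions; the patent does not specify the derivative operator):

```python
import math

def gradient_field(gray):
    """Gradient magnitude and direction for each interior pixel.

    gx, gy: central-difference partial derivatives of the gray image;
    magnitude = sqrt(gx^2 + gy^2); direction folded into [0, 180) degrees,
    matching the patent's absolute-value angle range.
    """
    h, w = len(gray), len(gray[0])
    mag = [[0.0] * w for _ in range(h)]
    ang = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (gray[y][x + 1] - gray[y][x - 1]) / 2.0
            gy = (gray[y + 1][x] - gray[y - 1][x]) / 2.0
            mag[y][x] = math.hypot(gx, gy)
            ang[y][x] = math.degrees(math.atan2(gy, gx)) % 180.0
    return mag, ang
```

On a vertical step edge the gradient points along x, so the folded direction is 0° and the magnitude equals half the step height (because of the central difference).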
Next obtain the information entropy of each pixel from its gradient magnitude and the gradient magnitudes of the pixels in its eight-neighborhood, and the mean gradient-magnitude difference between the pixel and its eight neighbors. The uniformity of each pixel is then obtained from this mean difference and the pixel's information entropy, with the expression:
J_i = e^(−M_i · H_i),  M_i = (1/8) Σ_{j=1}^{8} |G_j(x, y) − G(x, y)|,  H_i = −Σ_{n=1}^{8} p_in · ln p_in

where J_i denotes the uniformity of the i-th pixel, G_j(x, y) the gradient magnitude of the j-th pixel in the eight-neighborhood of the i-th pixel at coordinates (x, y), G(x, y) the gradient magnitude of the i-th (center) pixel, and p_in the share of the gradient magnitude of the n-th pixel in the eight-neighborhood of the i-th pixel. H_i is the information entropy of the eight-neighborhood of the i-th pixel and expresses the disorder of the gradients there.
The gradient of each pixel represents the change in its gray value, and the average gradient over the eight neighboring pixels reflects the overall gradient around the pixel. The information entropy of the gradient expresses how much the gradient varies within the template: the larger the gradient entropy, the more violently the gray values in the template change, and the lower the uniformity.
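The uniformity of one pixel can be sketched from its eight-neighborhood gradient magnitudes as below. The exact combination in the patent survives only as a formula-image reference, so the form J = exp(−mean_diff · entropy) is an assumption that merely matches the stated behaviour (more violent and more disordered gradients give lower uniformity):

```python
import math

def uniformity(center_mag, neighbor_mags):
    """Uniformity of one pixel from its 8-neighbour gradient magnitudes.

    p_n:       each neighbour's share of the total neighbour gradient;
    entropy:   disorder of those shares (H_i in the text);
    mean_diff: mean absolute difference to the centre magnitude (M_i).
    J = exp(-mean_diff * entropy) -> 1 for flat regions, -> 0 for noisy ones.
    """
    total = sum(neighbor_mags)
    if total == 0:
        return 1.0  # perfectly flat neighbourhood
    shares = [g / total for g in neighbor_mags if g > 0]
    entropy = -sum(p * math.log(p) for p in shares)
    mean_diff = sum(abs(g - center_mag) for g in neighbor_mags) / len(neighbor_mags)
    return math.exp(-mean_diff * entropy)
```

A neighbourhood whose gradients all equal the centre's gives uniformity 1; any mismatch between centre and neighbours pushes it toward 0.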
103. Obtain the distance between every two pixels with the same gradient magnitude in the grayscale garbage image, and obtain the adhesion degree of each pixel from the average distance between the pixel and the other pixels with the same gradient magnitude, together with the pixel's uniformity.
During denoising of the grayscale garbage image, a noise point is an unwanted pixel relative to the garbage pixels, and noise points are mostly scattered: compared with the pixels of a garbage region, their degree of dispersion is larger. Since the adhesion degree represents the aggregation of pixels in a region and the change of gray level there, noise pixels have a correspondingly small adhesion degree.
The method for obtaining the adhesion degree of each pixel is as follows. Obtain the average distance between each pixel in the grayscale garbage image and the pixels with the same gradient magnitude; then obtain the adhesion degree of each pixel from its uniformity, its gradient magnitude, and this average distance.
The expression for the adhesion degree of each pixel is:

F_i = J_i · e^(−d_i·G_i)

where F_i denotes the adhesion degree of the i-th pixel, d_i the average distance between the i-th pixel and the pixels with the same gradient magnitude, G_i the gradient magnitude of the i-th pixel, e the base of the exponential function, and J_i the uniformity of the i-th pixel.
Because the gray value of a noise pixel differs from that of the original image and noise is distributed chaotically, the feature value of a pixel is characterized by its gradient magnitude and the average distance to the pixels of equal gradient. The larger the gradient magnitude and the farther the average distance to other pixels of the same gradient, the larger the gray change at the pixel and the more likely it is noise; likewise, the smaller the uniformity of the pixel, the more complicated the gray change in its neighborhood and the more likely the pixel is noise. Therefore, the smaller the adhesion degree of a pixel, the more likely it is noise, and the filter window used for filtering is selected according to the adhesion degree of each pixel.
104. Obtain the filter window size of each pixel from its adhesion degree, and filter each pixel of the grayscale garbage image with that window to obtain the filtered grayscale image.
The method for obtaining the filter window size of each pixel is as follows: take the adhesion degree of each pixel as the exponent of an exponential function, multiply the exponential function by a set constant, and round up. The expression is:

K_i = ⌈10 · e^(−F_i)⌉

where K_i denotes the filter window size of the i-th pixel, e the base of the exponential function, and F_i the adhesion degree of the i-th pixel. Because the window size should be a positive integer, the exponential function is multiplied by the set constant 10 and the product is rounded up to obtain the exact window size.
The smaller the adhesion degree, the more likely the pixel is noise, and the larger the filter window used on it; during Gaussian filtering this keeps the weight of the center pixel small and gives a better denoising effect.
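The window-size rule can be sketched as follows. The negative sign in the exponent is inferred from the statement that lower adhesion gives a larger window, and forcing the result odd (for a well-defined centre pixel) is our addition, not the patent's:

```python
import math

def filter_window_size(adhesion, scale=10):
    """K = ceil(scale * e^(-F)): adhesion F near 0 (noise-like) gives the
    largest window (~scale); large F shrinks the window toward 1.
    The result is nudged to the next odd integer so the window has a
    single centre pixel for the Gaussian kernel."""
    k = math.ceil(scale * math.exp(-adhesion))
    return k if k % 2 == 1 else k + 1
```

With the patent's constant 10, a strongly noise-like pixel (F ≈ 0) gets an 11×11 window, while a well-attached pixel gets a trivial 1×1 window.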
The method for filtering each pixel of the grayscale garbage image is as follows: take each pixel as a center point, obtain the Gaussian kernel of the filter window around each center point from the gray values of all pixels within that window, and perform Gaussian filtering on each center point with that kernel.
A Gaussian kernel is generated with a Gaussian function according to the filter window of each pixel and the gray values within it:

G(x, y) = (1 / (2πσ²)) · e^(−(x² + y²) / (2σ²))

where (x, y) are coordinates relative to the window center within the K_i × K_i filter window of the i-th pixel, f(x, y) denotes the gray value of pixel (x, y), and σ is the standard deviation. The kernel standard deviation σ is obtained from the local variance; the variance within a region of the image is calculated as:
D(x, y) = (1 / N_{x,y}) Σ_{(m,n)∈S_{x,y}} (f(m, n) − f̄_{x,y})²

where

f̄_{x,y} = (1 / N_{x,y}) Σ_{(m,n)∈S_{x,y}} f(m, n)

denotes the mean gray value of the pixels in the convolution window of the region around the center point, S_{x,y} the convolution window of the region around the center point (x, y), f(m, n) the gray value of the pixel at coordinates (m, n) in the window, and N_{x,y} the number of pixels in the window. The larger the variance D(x, y) in S_{x,y}, the greater the dispersion of the pixel values in the region, so a smaller σ is selected; the generated Gaussian kernel then concentrates more weight at the center and has less smoothing influence on the region.
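The variance-driven choice of σ and the kernel generation can be sketched as below. The two σ values and the variance threshold are illustrative assumptions; the patent only states that higher local variance should yield a smaller σ:

```python
import math

def local_sigma(window_vals, hi_var_sigma=0.8, lo_var_sigma=2.0, var_threshold=100.0):
    """Pick the kernel std-dev from the local gray-value variance D:
    a high-variance (busy) window gets a smaller sigma so the centre
    keeps more weight and the region is smoothed less."""
    n = len(window_vals)
    mean = sum(window_vals) / n
    var = sum((v - mean) ** 2 for v in window_vals) / n
    return hi_var_sigma if var > var_threshold else lo_var_sigma

def gaussian_kernel(size, sigma):
    """size x size Gaussian kernel G(x, y), normalised to sum to 1."""
    c = size // 2
    k = [[math.exp(-((x - c) ** 2 + (y - c) ** 2) / (2 * sigma ** 2))
          for x in range(size)] for y in range(size)]
    s = sum(sum(row) for row in k)
    return [[v / s for v in row] for row in k]
```

Filtering then replaces each centre pixel by the kernel-weighted average of its window, as described next.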
The specific Gaussian filtering operation in the invention is as follows: each pixel in the image is scanned with a template (also called a convolution kernel or mask), and the weighted average gray value of the pixels in the neighborhood determined by the template replaces the value of the template's center pixel. This filtering process is prior-art Gaussian filtering and is not described further here.
105. Perform edge detection on the filtered grayscale image to obtain a plurality of connected domains, identify the plastic bag in each connected domain with a neural network, and separate out the plastic bags.
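Splitting the edge map into candidate regions is standard connected-component labelling; a minimal flood-fill sketch (4-connectivity is our assumption; the patent does not specify the connectivity):

```python
def connected_domains(mask):
    """4-connected component labelling of a binary mask via flood fill.

    Returns (count, labels): labels[y][x] is 0 for background and the
    1-based component id for foreground pixels.  Each labelled component
    is one candidate connected domain to feed to the recognition network.
    """
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and not labels[sy][sx]:
                count += 1
                stack = [(sy, sx)]
                labels[sy][sx] = count
                while stack:
                    y, x = stack.pop()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = count
                            stack.append((ny, nx))
    return count, labels
```

Each resulting domain can then be cropped and passed to the trained network described below.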
The method for identifying the plastic bag in each connected domain with a neural network is as follows: acquire grayscale images of plastic bags of various colors as a data set and train a neural network on it; then perform target recognition on the connected domains of the filtered grayscale garbage image with the trained network to obtain the plastic bags in the image.
Because garbage bags are made of plastic and their colors are uniform and easy to recognize, a CNN is trained on bags of various colors, and the trained CNN identifies the plastic bags in the denoised grayscale garbage image. The plastic bags on the conveyor belt are then grabbed out, removing their influence on the secondary sorting, and the remaining garbage on the belt is further screened by type, completing the intelligent classification of the garbage.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (7)

1. A computer-vision-based intelligent garbage classification method, characterized by comprising the following steps:
breaking the plastic bags wrapped around the garbage with a roller device, spreading the garbage and the broken plastic bags flat on a conveyor belt, and collecting a grayscale image of the garbage fully spread on the belt;
obtaining the uniformity of each pixel from the gradient magnitude of the pixel and of the pixels in its eight-neighborhood in the grayscale garbage image;
obtaining the distance between every two pixels with the same gradient magnitude in the grayscale garbage image, and obtaining the adhesion degree of each pixel from the average distance between the pixel and the other pixels with the same gradient magnitude, together with the pixel's uniformity;
obtaining the filter window size of each pixel from its adhesion degree, and filtering each pixel of the grayscale garbage image with that window to obtain a filtered grayscale image;
performing edge detection on the filtered grayscale image to obtain a plurality of connected domains, taking the filtered image as the input of a neural network whose output is the plastic bag in each connected domain, and separating the plastic bag from the garbage in each connected domain.
2. The computer-vision-based intelligent garbage classification method according to claim 1, characterized in that the method for obtaining the uniformity of each pixel comprises the following steps:
obtaining the gradient magnitude of each pixel in the grayscale garbage image;
obtaining the information entropy of each pixel from its gradient magnitude and the gradient magnitudes of the pixels in its eight-neighborhood;
obtaining the mean gradient-magnitude difference between each pixel and its eight neighbors;
obtaining the uniformity of each pixel from this mean difference and the pixel's information entropy.
3. The computer-vision-based intelligent garbage classification method according to claim 1, characterized in that the method for obtaining the adhesion degree of each pixel comprises the following steps:
obtaining the average distance between each pixel in the grayscale garbage image and the pixels with the same gradient magnitude;
obtaining the ratio between the gradient magnitude of each pixel and this average distance;
obtaining the adhesion degree of each pixel from this ratio and the pixel's uniformity.
4. The intelligent garbage classification method based on computer vision according to claim 3, wherein the expression for obtaining the adhesion degree of each pixel point is as follows:
F_i = (G_i / d_i) · e^(J_i)
wherein F_i represents the adhesion degree of the i-th pixel point, d_i represents the average distance between the i-th pixel point and the pixel points with the same gradient amplitude, G_i represents the gradient amplitude of the i-th pixel point, e represents an exponential function with e as the base, and J_i represents the uniformity of the i-th pixel point.
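Reading claims 3 and 4 as the gradient-to-distance ratio weighted by an exponential of the uniformity (the original formula image is not reproduced in this text, so the exact expression is an assumption reconstructed from the variable list), a minimal sketch is:

```python
import numpy as np

def adhesion(G: np.ndarray, d: np.ndarray, J: np.ndarray) -> np.ndarray:
    """F_i = (G_i / d_i) * exp(J_i): gradient amplitude over the average
    distance to same-gradient pixels, weighted by an exponential of the
    uniformity. The expression is an assumed reconstruction of claim 4."""
    return (G / d) * np.exp(J)
```

A pixel with a strong gradient that lies close to many same-gradient pixels (small d_i) and sits in a non-uniform neighbourhood thus receives a high adhesion degree.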
5. The intelligent garbage classification method based on computer vision according to claim 1, wherein the method for obtaining the size of the filtering window of each pixel point is as follows:
taking the adhesion degree of each pixel point as the exponent of an exponential function;
and multiplying the exponential function by a set constant and rounding up to obtain the size of the filtering window of each pixel point.
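Claim 5 can be sketched as below; the set constant C = 3.0 is an illustrative value (the claim does not fix it), and forcing an odd size so the window has a centre pixel is an added convenience not stated in the claim:

```python
import math

def window_size(F: float, C: float = 3.0) -> int:
    """Filter window side length: ceil(C * e^F), where F is the pixel's
    adhesion degree and C is a set constant (3.0 here is illustrative)."""
    k = math.ceil(C * math.exp(F))
    return k if k % 2 == 1 else k + 1   # force odd so the window has a centre
```

Pixels with a higher adhesion degree therefore get a larger smoothing window.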
6. The intelligent garbage classification method based on computer vision according to claim 1, wherein the method of taking the filtered garbage gray image as the input of the neural network and outputting the plastic bags in each connected domain comprises the following steps:
acquiring gray images of plastic bags of various colors as a data set, and training a neural network with the data set;
and performing target recognition on the plurality of connected domains in the filtered garbage gray image by using the trained neural network to obtain the plastic bags in the filtered garbage gray image.
7. The intelligent garbage classification method based on computer vision according to claim 1, wherein the method for filtering each pixel point in the garbage gray image is as follows:
taking each pixel point in the garbage gray image as a central point, and obtaining the Gaussian kernel of the filtering window where each central point is located according to the gray values of all the pixel points in that window;
and performing Gaussian filtering on each central point according to the Gaussian kernel of its filtering window.
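Claim 7 can be sketched as follows; taking the kernel width from the standard deviation of the gray values in the window is one plausible reading of "according to the gray values", and the reflection padding at image borders is also an assumption:

```python
import numpy as np

def gaussian_kernel(k: int, sigma: float) -> np.ndarray:
    """Normalised k-by-k Gaussian kernel."""
    ax = np.arange(k) - k // 2
    xx, yy = np.meshgrid(ax, ax)
    ker = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return ker / ker.sum()

def adaptive_gaussian_filter(img: np.ndarray, sizes: np.ndarray) -> np.ndarray:
    """Filter each pixel with a Gaussian over its own (odd) window size."""
    h, w = img.shape
    out = np.empty((h, w), dtype=float)
    pmax = int(max(sizes.max() // 2, 1))
    pad = np.pad(img.astype(float), pmax, mode="reflect")  # border handling is assumed
    for y in range(h):
        for x in range(w):
            k = int(sizes[y, x])          # per-pixel window size from the adhesion degree
            r = k // 2
            win = pad[y + pmax - r: y + pmax + r + 1,
                      x + pmax - r: x + pmax + r + 1]
            sigma = max(float(win.std()), 1.0)  # width from the window's gray values (assumed)
            out[y, x] = (win * gaussian_kernel(k, sigma)).sum()
    return out
```

Because each kernel is normalised, a constant image passes through the filter unchanged regardless of the window sizes.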
CN202211296686.4A 2022-10-21 2022-10-21 Intelligent garbage classification method based on computer vision Pending CN115661646A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211296686.4A CN115661646A (en) 2022-10-21 2022-10-21 Intelligent garbage classification method based on computer vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211296686.4A CN115661646A (en) 2022-10-21 2022-10-21 Intelligent garbage classification method based on computer vision

Publications (1)

Publication Number Publication Date
CN115661646A true CN115661646A (en) 2023-01-31

Family

ID=84989966

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211296686.4A Pending CN115661646A (en) 2022-10-21 2022-10-21 Intelligent garbage classification method based on computer vision

Country Status (1)

Country Link
CN (1) CN115661646A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116453029A (en) * 2023-06-16 2023-07-18 济南东庆软件技术有限公司 Building fire environment detection method based on image data
CN116453029B (en) * 2023-06-16 2023-08-29 济南东庆软件技术有限公司 Building fire environment detection method based on image data
CN117876402A (en) * 2024-03-13 2024-04-12 中国人民解放军总医院第一医学中心 Intelligent segmentation method for temporomandibular joint disorder image
CN117876402B (en) * 2024-03-13 2024-05-07 中国人民解放军总医院第一医学中心 Intelligent segmentation method for temporomandibular joint disorder image

Similar Documents

Publication Publication Date Title
CN115661646A (en) Intelligent garbage classification method based on computer vision
Priyadumkol et al. Crack detection on unwashed eggs using image processing
CN106651872B (en) Pavement crack identification method and system based on Prewitt operator
Liming et al. Automated strawberry grading system based on image processing
CN106530297B (en) Grasping body area positioning method based on point cloud registering
CN108971190B (en) Machine vision-based household garbage sorting method
WO2019000653A1 (en) Image target identification method and apparatus
CN101984346A (en) Method of detecting fruit surface defect based on low pass filter
CN101059425A (en) Method and device for identifying different variety green tea based on multiple spectrum image texture analysis
CN103593670A (en) Copper sheet and strip surface defect detection method based on-line sequential extreme learning machine
CN102842032A (en) Method for recognizing pornography images on mobile Internet based on multi-mode combinational strategy
CN108416814B (en) Method and system for quickly positioning and identifying pineapple head
CN106483135A (en) Based on iblet detection identifying device and method under the complex background of machine vision
CN109961070A (en) The method of mist body concentration is distinguished in a kind of power transmission line intelligent image monitoring
CN102855641A (en) Fruit level classification system based on external quality
CN116596924B (en) Stevioside quality detection method and system based on machine vision
CN109115775A (en) A kind of betel nut level detection method based on machine vision
CN114029943A (en) Target grabbing and positioning method and system based on image data processing
CN102680488B (en) Device and method for identifying massive agricultural product on line on basis of PCA (Principal Component Analysis)
CN112614139B (en) Conveyor belt ore briquette screening method based on depth map
Prem Kumar et al. Quality grading of the fruits and vegetables using image processing techniques and machine learning: a review
Zhao et al. An effective binarization method for disturbed camera-captured document images
Mohanapriya et al. Recognition of unhealthy plant leaves using naive bayes classifier
CN108960094A (en) A kind of driver's smoking motion detection algorithm based on histograms of oriented gradients
CN111257339B (en) Preserved egg crack online detection method and detection device based on machine vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination