CN117152175A - Machine vision-based waste plastic material position and orientation recognition method - Google Patents
Machine vision-based waste plastic material position and orientation recognition method
- Publication number
- CN117152175A (application number CN202311158655.7A)
- Authority
- CN
- China
- Prior art keywords
- waste plastic
- image
- point
- outline
- contour
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 229920003023 plastic Polymers 0.000 title claims abstract description 76
- 239000004033 plastic Substances 0.000 title claims abstract description 76
- 239000002699 waste material Substances 0.000 title claims abstract description 72
- 238000000034 method Methods 0.000 title claims abstract description 45
- 239000000463 material Substances 0.000 title claims abstract description 14
- 230000011218 segmentation Effects 0.000 claims abstract description 29
- 238000004364 calculation method Methods 0.000 claims abstract description 11
- 238000012545 processing Methods 0.000 claims abstract description 9
- 238000000605 extraction Methods 0.000 claims description 6
- 230000000007 visual effect Effects 0.000 claims description 3
- 230000036544 posture Effects 0.000 description 6
- 230000000694 effects Effects 0.000 description 4
- 238000004891 communication Methods 0.000 description 2
- 238000004064 recycling Methods 0.000 description 2
- 238000009825 accumulation Methods 0.000 description 1
- 238000004458 analytical method Methods 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 235000013361 beverage Nutrition 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 230000018109 developmental process Effects 0.000 description 1
- 238000010586 diagram Methods 0.000 description 1
- 238000002474 experimental method Methods 0.000 description 1
- 238000011065 in-situ storage Methods 0.000 description 1
- 230000009191 jumping Effects 0.000 description 1
- 238000004519 manufacturing process Methods 0.000 description 1
- 238000013507 mapping Methods 0.000 description 1
- 238000004806 packaging method and process Methods 0.000 description 1
- 239000003208 petroleum Substances 0.000 description 1
- 230000000379 polymerizing effect Effects 0.000 description 1
- 238000011160 research Methods 0.000 description 1
- 239000007921 spray Substances 0.000 description 1
- 238000012360 testing method Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/66—Analysis of geometric attributes of image moments or centre of gravity
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Geometry (AREA)
- Quality & Reliability (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a machine vision-based method for recognizing the position and orientation of waste plastic material, comprising the following steps: acquiring an image of the waste plastic with an industrial color camera; converting the acquired RGB image to grayscale according to a graying formula; dividing the image transversely into a plurality of regions and performing threshold segmentation on each region within a limited, empirically determined gray-level range; extracting the waste-plastic contour of each region, calculating the centroid of each region, and finally calculating the centroid and angle of the whole piece of waste plastic from the regional centroid coordinates to obtain its pose. The method adds angle information to the calculation, which raises the success rate when sorting waste plastic lying at a large angle; it also narrows the traversal range used to find the optimal segmentation threshold, reducing the computational cost of threshold segmentation and the time consumed by image recognition.
Description
Technical Field
The invention relates to the field of machine vision, in particular to a waste plastic material position and orientation identification method based on machine vision.
Background
The beverage industry uses lightweight plastic bottles as packaging containers on a large scale. These bottles are polymerized from non-renewable petroleum and degrade very slowly in the natural environment. If waste plastic is not recycled, the resulting garbage is landfilled, incinerated or discarded at random, causing great harm to the ecological environment as well as a large waste of resources; sorting and recycling waste plastic is therefore key to solving the problem. Waste plastic arriving for recycling differs in color, quality and size and must be sorted. At present, waste plastic sorting at home and abroad relies mainly on manual sorting, which involves heavy labor, low sorting efficiency and a poor working environment, so fast and stable automatic sorting equipment is needed to replace it.
A machine vision system can capture, process and analyze images at very high speed, improving both throughput and processing capacity; it can cope with waste plastic of different types and shapes; and it can be connected seamlessly to other equipment such as conveyor belts and pneumatic devices to form a fully automatic waste plastic sorting line. In general, the nozzles of the pneumatic sorting section at the end of a waste plastic sorting system are discrete, and the centroid of a piece of plastic rarely coincides with the position of a nozzle. When a piece of waste plastic with a large size or a large orientation angle is sorted, the air pressure and flow of a single nozzle are insufficient and sorting fails. Several nozzles may therefore need to fire at staggered times, which requires sensing not only the centroid position of the waste plastic but also its orientation. The following are some research developments on position and orientation recognition algorithms.
The image recognition method disclosed in patent CN109190493A first converts the image to grayscale, then performs a multi-threshold search on the grayscale image to obtain an optimal threshold solution set for segmentation, and can accurately locate the position of an apple.
Patent CN115082560a discloses a method for recognizing the position and posture of a part: an original image containing the part is acquired, the minimum bounding rectangle of the part in the image is generated, the position data and the included angle of the main direction are obtained from that rectangle, and the position data and rotation angle are mapped into the world coordinate system to give the part's position and posture. Patent CN113269835a provides a method for recognizing the pose of an industrial part based on contour features: an image of the part is acquired, the edge pixel coordinates of the part are extracted from the image, the coordinates of the four corner points of its minimum bounding rectangle are obtained from the edge pixels, and finally the center of the minimum bounding rectangle is taken as the position coordinate of the part, from which its pose angle is further calculated. Both patents use the minimum-bounding-rectangle method to obtain the position and angle of the part, but experiments show that the threshold segmentation used with this method must evaluate the information entropy of all 256 gray levels to traverse the optimal threshold, which places a heavy demand on the computer hardware.
Disclosure of Invention
To address these problems, the invention provides a machine vision-based method for recognizing the position and posture of waste plastic material. The method quickly identifies the position and posture of waste plastic on a conveyor belt, calculates its position and angle information, reduces the time consumed by the algorithm, and improves the sorting of waste plastic lying at a large angle.
The aim of the invention is realized by the following technical scheme:
a waste plastic material position and orientation identification method based on machine vision comprises the following steps:
step S1: the industrial color camera is arranged above the conveyor belt, and when the waste plastic is in the visual range of the industrial color camera, the industrial color camera acquires the image of the waste plastic in real time;
step S2: carrying out graying treatment on the image obtained in the step S1 by using a weighted average method, and distinguishing waste plastics from the background in the original image;
step S3: dividing the grayed image obtained in step S2 into a plurality of transverse regions corresponding to the camera field of view by a transverse average cutting method, setting a fixed-range threshold for each region, and performing empirical threshold segmentation to obtain a binarized image;
step S4: extracting the contours of the plurality of regions from the binarized image obtained by the threshold segmentation of step S3 using the Suzuki algorithm, calculating the centroid of each region from the contour moments, then calculating the overall centroid and angle of the waste plastic from the regional centroids, and finally recognizing the pose of the waste plastic; an illustrative end-to-end sketch of these steps follows.
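The following Python sketch only strings the four steps together to make the data flow concrete. It assumes OpenCV and NumPy are available and relies on helper functions (to_gray, segment_by_regions, overall_pose) that are sketched step by step in the detailed description below; the per-strip background means and the region count N = 12 are taken from the embodiment and are illustrative rather than prescribed by the claims.

```python
import cv2
import numpy as np

def recognize_pose(frame, background_means, n_regions=12):
    """Sketch of steps S1-S4: graying, per-region thresholding, contour
    extraction, and pose calculation from the per-region centroids."""
    gray = to_gray(frame)                                                      # step S2
    binary = segment_by_regions(gray, background_means, n_regions=n_regions)  # step S3
    h = binary.shape[0]
    step = h // n_regions
    centroids = []
    for k in range(n_regions):                                   # step S4, strip by strip
        top = k * step
        bottom = h if k == n_regions - 1 else (k + 1) * step
        strip = binary[top:bottom]
        contours, _ = cv2.findContours(strip, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
        if not contours:
            continue                                             # strip contains only background
        largest = max(contours, key=cv2.contourArea)             # keep the largest contour
        m = cv2.moments(largest)
        if m["m00"] > 0:
            # shift the strip-local y centroid back into full-image coordinates
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"] + top))
    return overall_pose(centroids) if len(centroids) >= 2 else None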
The beneficial effects of the invention are as follows: the region-wise empirical threshold segmentation cuts the image transversely into N regions corresponding to the camera field of view and then performs empirical threshold segmentation on each region; the angle of the whole piece of waste plastic is calculated from the centroid position of each region, so the number of nozzles used and the firing time of each nozzle can be controlled accurately when sorting large-angle waste plastic, while the computation time of the algorithm is also reduced, improving both the efficiency and the success rate of waste plastic sorting.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
FIG. 2 is a flow chart of an empirical threshold segmentation calculation for a split region.
Fig. 3 is an original image of waste plastic.
Fig. 4 is the grayscale image of the waste plastic.
Fig. 5 is the grayscale image of the waste plastic after division into regions.
Fig. 6 is a topological graph of contour boundaries.
Fig. 7 is a graph showing the result of contour extraction of waste plastics by region.
Detailed Description
The objects and effects of the present invention will become more apparent from the following detailed description of the present invention with reference to the accompanying drawings and examples, which are given by way of illustration only and not limitation of the present invention.
As shown in fig. 1 and 2, the invention relates to a pose recognition method for sorting waste plastics in real time, which specifically comprises the following steps:
step S1: the industrial color camera is arranged above the conveyor belt, and when the waste plastic appears in the visual range of the industrial color camera, the industrial color camera obtains images of the waste plastic in real time and transmits the images to the computer;
step S2: carrying out graying treatment on the image obtained in the step S1 by using a weighted average method, and distinguishing waste plastics from the background in the original image;
step S3: dividing the grayed image obtained in step S2 into N transverse regions corresponding to the camera field of view by a transverse average cutting method, setting a fixed-range threshold for each region, and performing empirical threshold segmentation to obtain a binarized image;
step S4: performing contour extraction on N areas by using a Suzuki algorithm on the binarized image obtained after the segmentation of the regional experience threshold in the step S3, calculating the positions of the centroids of the N areas by using contour moments, calculating the integral centroids and angles of the waste plastics according to the centroid positions of each area through a formula, and finally realizing pose recognition of the waste plastics;
the method for identifying the waste plastic material position and the appearance based on machine vision comprises the following specific steps of: the method comprises the steps of multiplying color values of three RGB channels by different weights according to the importance of the three RGB channels of an image obtained by an industrial color camera to obtain a gray value, and finally converting an RGB image into a gray image, wherein a specific calculation formula is shown in a formula (1):
Grey = αR + βG + γB    (1)
where α, β and γ are weights determined by the luminance contribution of the three color channels; experimental comparison shows that graying works best with α = 0.07, β = 0.72 and γ = 0.21. Fig. 3 and Fig. 4 show, respectively, the original color image of the waste plastic and the result after graying.
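A minimal sketch of formula (1), assuming OpenCV/NumPy and OpenCV's B, G, R channel order; the weights are the ones quoted above, and the file name in the usage comment is only illustrative.

```python
import cv2
import numpy as np

def to_gray(bgr_image, alpha=0.07, beta=0.72, gamma=0.21):
    """Weighted-average graying, formula (1): Grey = alpha*R + beta*G + gamma*B."""
    b, g, r = cv2.split(bgr_image.astype(np.float32))   # OpenCV stores channels as B, G, R
    grey = alpha * r + beta * g + gamma * b
    return np.clip(grey, 0, 255).astype(np.uint8)

# Usage (file name illustrative):
# frame = cv2.imread("waste_plastic.png")
# gray = to_gray(frame)
```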
In the machine vision-based waste plastic pose recognition method, the transverse average cutting and empirical threshold segmentation of step S3 proceed as follows. To shorten the time spent traversing candidate thresholds during segmentation, the segmentation threshold is constrained in advance rather than searched over the full range for every frame; such a fixed-range threshold is better suited to scenes with a single background and little fluctuation of the segmentation threshold, which matches the working scene of the invention. The dark gray conveyor-belt background used in this work shows little local change in pixel value during motion, but the difference between different areas of the belt is larger: on the 0-255 gray scale the gray values differ by about 50, so applying one identical fixed threshold to all areas would produce a large error. The image captured by the camera is therefore partitioned: the number of regions is set to N, and each region is thresholded within its own fixed empirical range. The height of each region is calculated from the height of the image, and the image is then cut horizontally along the predefined rows. For each of the N strip-shaped regions, the empirical average gray value of the background can be counted and used as the basis of that region's fixed threshold range.
Taking N = 12 as an example, Fig. 5 shows the gray image after division into regions, and part of the data obtained in testing is listed in Table 1 below; each value in Table 1 is the average gray value of a region containing only background, and a blank entry means that plastic to be identified was present at that data point, so it is excluded from the calculation.
Table 1. Statistics of the average gray value of each region
The empirical background gray value of each region was calculated from the data in Table 1 with N = 12, and the results are shown in Table 2 below. A fixed adjustment value, empirically set to 50, is added to each empirical gray mean in Table 2; the interval between the mean and the mean plus 50 is the traversal range of the maximum-entropy threshold used to distinguish background from foreground.
Table 2. Statistical mean gray value of each region
When an image is thresholded, the whole image is first cut into N transverse regions and a separate maximum-entropy threshold is then found for each region. Let x be the statistical mean background gray value of a region; the traversal range of its maximum-entropy threshold is not [0, 255] but [x, x + 50]. If the image entropy reaches its maximum somewhere between x and x + 50, that value is taken as the segmentation threshold; if the entropy increases monotonically over [x, x + 50], then x + 50 is chosen as the segmentation threshold. Such a threshold is not the maximum-entropy threshold over the full 0-255 gray range, but it is sufficient to separate the plastic in the image from the conveyor-belt background, so there is no need to keep traversing in search of the global maximum-entropy threshold. The traversal range of each region's segmentation threshold is thus reduced from 256 gray levels to about 50, which greatly shortens the image processing time.
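A sketch of the restricted search, assuming NumPy; the per-region background means (the x values from Tables 1 and 2, whose data are not reproduced here) are passed in as a list, and the function names are illustrative.

```python
import numpy as np

def max_entropy_threshold(region, lo, hi):
    """Search the maximum-entropy threshold only in [lo, hi] instead of [0, 255]."""
    hist = np.bincount(region.ravel(), minlength=256).astype(np.float64)
    prob = hist / max(hist.sum(), 1.0)
    best_t, best_h = lo, -np.inf
    for t in range(lo, hi + 1):
        p_bg, p_fg = prob[: t + 1], prob[t + 1 :]
        w_bg, w_fg = p_bg.sum(), p_fg.sum()
        if w_bg == 0 or w_fg == 0:
            continue
        p_bg = p_bg[p_bg > 0] / w_bg                   # normalized background distribution
        p_fg = p_fg[p_fg > 0] / w_fg                   # normalized foreground distribution
        h = -(p_bg * np.log(p_bg)).sum() - (p_fg * np.log(p_fg)).sum()
        if h > best_h:                                 # keep the threshold of maximum entropy
            best_t, best_h = t, h
    return best_t

def segment_by_regions(gray, background_means, window=50, n_regions=12):
    """Cut the image into n_regions horizontal strips and threshold each strip
    within [x, x + window], where x is that strip's empirical background mean."""
    h = gray.shape[0]
    step = h // n_regions
    binary = np.zeros_like(gray)
    for k in range(n_regions):
        top = k * step
        bottom = h if k == n_regions - 1 else (k + 1) * step
        strip = gray[top:bottom]
        x = int(background_means[k])
        t = max_entropy_threshold(strip, x, min(x + window, 255))
        binary[top:bottom] = np.where(strip > t, 255, 0)
    return binary
```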
In the machine vision-based waste plastic pose recognition method, the contour extraction of step S4 proceeds as follows: contours are extracted from the binarized image produced by the region-wise empirical threshold segmentation using the Suzuki algorithm, which performs a topological analysis of the digital binary image and describes its contours through outer boundaries, hole boundaries and the hierarchical relationship between them. Outer and hole boundaries are defined as follows: given a 1-connected component S1 and a 0-connected component S2, if S2 directly surrounds S1, the border between S2 and S1 is called an outer boundary; if S1 directly surrounds S2, the border between S2 and S1 is called a hole boundary; both kinds of boundary consist of pixels of value 1. Here "surrounding" between connected components means that, for two adjacent connected components S1 and S2, S2 surrounds S1 if S2 can be reached from any point of S1 in each of the four directions. The hierarchical relationship between contours is defined as follows: suppose there are 1-connected components S1 and S3 and a 0-connected component S2, with S2 directly surrounding S1 and S3 directly surrounding S2; let B1 be the boundary between S1 and S2 and B2 the boundary between S2 and S3; then B2 is the parent boundary of B1, and if S2 is the background, the parent boundary of B1 is the image frame. The topological relationship between an image and its boundaries is illustrated in Fig. 6. The Suzuki algorithm extracts contours in raster-scan fashion: the image pixels are scanned from left to right and from top to bottom, a boundary-following procedure yields the individual contours, and each contour is assigned a unique number; during scanning, NBD denotes the number of the boundary currently being tracked and LNBD the number of the most recently stored boundary.
The Suzuki algorithm extracts contours as follows. Let the input image be F = {f(i, j)}, where i and j are, respectively, the abscissa and ordinate of the pixel point. Set the initial NBD to 1 and regard the frame of the image as the first contour boundary. Scan the image; whenever a pixel with f(i, j) ≠ 0 is encountered, execute the following steps:
(1) Determine which of the following cases applies:
(a) If f(i, j) = 1 and f(i, j-1) = 0, then (i, j) is the starting point of an outer boundary; set NBD = NBD + 1 and store (i2, j2) = (i, j-1).
(b) If f(i, j) > 1 and f(i, j+1) = 0, then (i, j) is the starting point of a hole boundary; set NBD = NBD + 1 and, at the same time, store (i2, j2) = (i, j+1).
(c) Otherwise, go to step (4).
(2) Look up Table 3 to determine the parent boundary of the current boundary from the most recently stored boundary and the boundary just encountered, where B' is the previous boundary and B is the new boundary;
Table 3. Parent boundary of the current boundary
(3) Track the boundary from the boundary starting point (i, j) according to the following procedure:
(3.1) Centered on (i, j) and starting from (i2, j2), search the eight-neighborhood of (i, j) in the clockwise direction for a non-zero pixel. If one is found, let (i1, j1) be the first non-zero pixel encountered in the clockwise direction and go to (3.2); if none is found, let f(i, j) = -NBD and go to (4);
(3.2) Update (i2, j2) = (i1, j1) and (i3, j3) = (i, j);
(3.3) With (i3, j3) as the center and starting from (i2, j2), search the eight-neighborhood of (i3, j3) in the counter-clockwise direction for a non-zero pixel, and let (i4, j4) be the first non-zero pixel encountered. Here the eight-neighborhood of a pixel means the eight surrounding pixels: for a given pixel (i, j) these positions are (i-1, j-1), (i-1, j), (i-1, j+1), (i, j-1), (i, j+1), (i+1, j-1), (i+1, j) and (i+1, j+1). By contrast, if only the four neighbors (up, down, left, right) were considered, diagonal boundary information would be missed and the extracted contour could be incomplete or inaccurate;
(3.4) If (i3, j3+1) is a zero pixel examined in (3.3), let f(i3, j3) = -NBD; if (i3, j3+1) is not a zero pixel examined in (3.3) and f(i3, j3) = 1, let f(i3, j3) = NBD;
(3.5) If (i4, j4) = (i, j) and (i3, j3) = (i2, j2), the tracking has returned to the starting point; go to (4). Otherwise update (i2, j2) = (i3, j3) and (i3, j3) = (i4, j4), and go to (3.3);
(4) If f(i, j) ≠ 1, set LNBD = |f(i, j)|. Resume scanning from the point (i, j+1); the algorithm ends when the bottom-right pixel of the image has been scanned.
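For reference, OpenCV's findContours implements this Suzuki border-following procedure, so the per-region extraction can in practice be a single call; a minimal sketch, assuming a binarized strip image is already available in the variable binary_strip:

```python
import cv2

# RETR_TREE reproduces the outer/hole boundary hierarchy described above; for this
# application only outer boundaries are needed, so RETR_EXTERNAL would also do.
contours, hierarchy = cv2.findContours(binary_strip, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)

# hierarchy[0][k] = [next, previous, first_child, parent]: the "parent" entry plays
# the role of the parent boundary looked up in Table 3.
```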
The invention improves and simplifies the original Suzuki algorithm flow as follows. Let f(i, j) denote the pixel value at point (i, j). The binarized waste plastic image is scanned row by row until a point (m, n) of a connected component is found with f(m-1, n) = 0 and f(m, n) = 255; this is the first point on the contour. The eight-neighborhood of (m, n) is then traversed counter-clockwise starting from (m-1, n); the first non-zero point encountered is the next contour point, and its eight-neighborhood is in turn traversed counter-clockwise starting from the point visited just before it. Proceeding in this way, the contour eventually closes completely. Pixels on the boundary of the connected component are marked and the boundary is assigned a unique identifier. When a new connected component is found, the identifier is incremented by 1 and its contour is tracked by the same method. After all contours of the waste plastic have been obtained, the largest contour is selected and output.
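A sketch of this simplified tracing, assuming NumPy, a 0/255 binary image, and a (row, column) indexing convention in which the point examined just before (m, n) in the raster scan is its left neighbor; the stopping rule (return to the starting point) follows the text and is kept deliberately simple.

```python
import numpy as np

# Eight-neighborhood offsets (row, col) in a fixed counter-clockwise order.
NEIGHBORS = [(0, -1), (1, -1), (1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1)]

def trace_contour(binary):
    """Trace the contour of the first connected component met by a raster scan."""
    h, w = binary.shape
    start = None
    for m in range(h):                       # row-by-row scan for the first contour point
        for n in range(1, w):
            if binary[m, n] == 255 and binary[m, n - 1] == 0:
                start = (m, n)
                break
        if start:
            break
    if start is None:
        return []                            # no foreground at all

    contour = [start]
    prev = (start[0], start[1] - 1)          # background point the scan came from
    cur = start
    while True:
        k = NEIGHBORS.index((prev[0] - cur[0], prev[1] - cur[1]))
        nxt = None
        for step in range(1, 9):             # walk the eight-neighborhood counter-clockwise
            dm, dn = NEIGHBORS[(k + step) % 8]
            m, n = cur[0] + dm, cur[1] + dn
            if 0 <= m < h and 0 <= n < w and binary[m, n] == 255:
                nxt = (m, n)
                dm2, dn2 = NEIGHBORS[(k + step - 1) % 8]
                prev = (cur[0] + dm2, cur[1] + dn2)   # point visited just before the hit
                break
        if nxt is None or nxt == start:      # isolated pixel, or the contour has closed
            break
        contour.append(nxt)
        cur = nxt
    return contour
```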
After the contour of each region of the waste plastic has been obtained, the centroid of each region's contour is found from the contour moments. The (p+q)-order moment m(pq)_i of the contour of the i-th region is defined, the sum being taken over the pixel points of that contour, as:

m(pq)_i = Σ x^p y^q I(x, y)    (2)
where x and y are the abscissa and ordinate of a pixel point on the contour, I(x, y) is the pixel value (0 or 1) at the coordinate (x, y), and n_i is the number of pixel points on the contour of the i-th region. The centroid is solved using the first-order moments, i.e. the coefficients p and q take the values 0 or 1 with p + q = 1.
In the contour image the pixels on the contour all have value 1, so the zero-order moment m(00)_i equals the number of points on the contour of the i-th region, while the first-order contour moments m(10)_i and m(01)_i are, respectively, the sums of the x coordinates and of the y coordinates of the pixel points on that contour. The centroid position (x_i, y_i) of the i-th region's contour can then be calculated by equation (3):

x_i = m(10)_i / m(00)_i,   y_i = m(01)_i / m(00)_i    (3)
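A sketch of equations (2) and (3) for one region, assuming NumPy and a contour given as a list of (x, y) points; OpenCV's cv2.moments offers an equivalent route, though it computes area moments rather than sums over contour points.

```python
import numpy as np

def region_centroid(contour):
    """Centroid (x_i, y_i) of one region's contour from its low-order moments."""
    pts = np.asarray(contour, dtype=np.float64)
    m00 = len(pts)            # zero-order moment: number of contour points (pixel value 1)
    m10 = pts[:, 0].sum()     # first-order moment: sum of x coordinates
    m01 = pts[:, 1].sum()     # first-order moment: sum of y coordinates
    return m10 / m00, m01 / m00
```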
The contour of the binarized image was extracted and the centroid positions were found by the method described above; the results are shown in Fig. 7. Once the centroid coordinates of each region have been calculated, the overall centroid (x, y) of the waste plastic and its angle θ are calculated by the following formulas,
where n is the number of connected domains and (x_i, y_i) is the centroid of the contour of the i-th connected domain.
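Because the images of the centroid and angle formulas are not reproduced in this text, the sketch below takes one plausible reading and should be read as an assumption rather than the patented formula: the overall centroid is taken as the mean of the region centroids, and θ as the orientation of the principal axis through those centroids.

```python
import numpy as np

def overall_pose(centroids):
    """Overall centroid (x, y) and angle theta from the per-region centroids."""
    pts = np.asarray(centroids, dtype=np.float64)
    x, y = pts.mean(axis=0)                  # assumed: mean of the n region centroids
    dx, dy = pts[:, 0] - x, pts[:, 1] - y
    # Assumed: theta as the principal-axis orientation of the centroid set, which
    # stays well defined even when the plastic lies vertically in the image.
    theta = 0.5 * np.degrees(np.arctan2(2.0 * (dx * dy).sum(),
                                        (dx * dx).sum() - (dy * dy).sum()))
    return (x, y), theta
```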
The centroid, angle and computation time (N = 12) calculated for the waste plastic bottle in Fig. 7 using the above formulas are shown in Table 4 below.
Table 4 calculation results of the position and the posture of the waste plastic material
As the table shows, the region-wise empirical threshold segmentation effectively reduces the computational cost of thresholding: the computation time falls to 23 ms, greatly reducing the time consumed by image recognition.
While the preferred embodiments of the present invention have been described in detail, the present invention is not limited to the above embodiments, and various changes can be made without departing from the spirit of the present invention within the knowledge of those skilled in the art.
Claims (5)
1. The waste plastic material position and posture identification method based on machine vision is characterized by comprising the following steps of:
step S1: the industrial color camera is arranged above the conveyor belt, and when the waste plastic is in the visual range of the industrial color camera, the industrial color camera acquires the image of the waste plastic in real time;
step S2: carrying out graying treatment on the image obtained in the step S1 by using a weighted average method, and distinguishing waste plastics from the background in the original image;
step S3: dividing the grayed image obtained in step S2 into a plurality of transverse regions corresponding to the camera field of view by a transverse average cutting method, setting a fixed-range threshold for each region, and performing empirical threshold segmentation to obtain a binarized image;
step S4: and (3) carrying out contour extraction on the plurality of areas by using a Suzuki algorithm on the binarized image obtained by threshold segmentation in the step (S3), calculating the positions of centroids of the plurality of areas by using contour moments, calculating the integral centroid and angle of the waste plastic according to the centroid position of each area through a formula, and finally realizing pose recognition of the waste plastic.
2. The machine vision-based waste plastic material position and orientation recognition method according to claim 1, characterized in that the weighted-average graying of step S2 specifically comprises: multiplying the color value of each of the three RGB channels by a different weight according to that channel's importance, summing the results to obtain a gray value, and finally converting the RGB image into a grayscale image.
3. The machine vision-based waste plastic material position and orientation recognition method according to claim 1, characterized in that the transverse average cutting and empirical threshold segmentation of step S3 specifically comprise: first defining the cutting lines by calculating the height of each region from the height of the image; then cutting the image horizontally into a plurality of regions along the predefined rows; counting, for each region, the average gray value of the conveyor-belt background; setting from that average gray value a threshold traversal range for distinguishing background from foreground; and finally performing independent threshold segmentation on each region to generate the corresponding binarized image.
4. The machine vision-based waste plastic material position and orientation recognition method according to claim 1, characterized in that the contour extraction with the Suzuki algorithm in step S4 specifically comprises: initializing a mark image of the same size as the binary image with all pixels set to 0; with f(i, j) denoting the pixel value at point (i, j), scanning the binarized waste plastic image row by row until a point (m, n) of a connected component is encountered with f(m-1, n) = 0 and f(m, n) = 255, this point being the first point on the contour; traversing the eight-neighborhood of (m, n) counter-clockwise starting from (m-1, n), taking the first non-zero point as the next contour point, and traversing that point's eight-neighborhood counter-clockwise in the same way; continuing in this manner until the contour is completely closed; marking the pixels on the boundary of the connected component and assigning the boundary a unique identifier; when a new connected component is found, incrementing the identifier by 1 and tracking its contour by the same method; and after all contours of the waste plastic have been obtained, selecting and outputting the largest contour, finally obtaining a complete contour image.
5. The machine vision-based waste plastic position and orientation recognition method according to claim 1, wherein the calculation formulas of the mass center and the angle of the whole waste plastic are respectively as follows:
wherein (x, y) is the centroid coordinate of the whole piece of waste plastic, n is the number of regions, (x_i, y_i) is the centroid of the contour in the i-th region, and θ is the angle of the whole piece of waste plastic;
the centroid calculation formula of the contour in the i-th region is:

x_i = m(10)_i / m(00)_i,   y_i = m(01)_i / m(00)_i
wherein m(00)_i is the number of points on the contour of the i-th region, and m(10)_i and m(01)_i are, respectively, the sums of the x coordinates and of the y coordinates of the pixel points on the contour of the i-th region.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311158655.7A CN117152175A (en) | 2023-09-08 | 2023-09-08 | Machine vision-based waste plastic material position and orientation recognition method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311158655.7A CN117152175A (en) | 2023-09-08 | 2023-09-08 | Machine vision-based waste plastic material position and orientation recognition method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117152175A true CN117152175A (en) | 2023-12-01 |
Family
ID=88898563
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311158655.7A Pending CN117152175A (en) | 2023-09-08 | 2023-09-08 | Machine vision-based waste plastic material position and orientation recognition method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117152175A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117671497A (en) * | 2023-12-04 | 2024-03-08 | 广东筠诚建筑科技有限公司 | Engineering construction waste classification method and device based on digital images |
- 2023-09-08: CN application CN202311158655.7A filed; published as CN117152175A (status: pending)
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117671497A (en) * | 2023-12-04 | 2024-03-08 | 广东筠诚建筑科技有限公司 | Engineering construction waste classification method and device based on digital images |
CN117671497B (en) * | 2023-12-04 | 2024-05-28 | 广东筠诚建筑科技有限公司 | Engineering construction waste classification method and device based on digital images |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109409366B (en) | Distorted image correction method and device based on angular point detection | |
CN109753953B (en) | Method and device for positioning text in image, electronic equipment and storage medium | |
CN102742977B (en) | Method for controlling gluing path on basis of image processing | |
Valizadeh et al. | Binarization of degraded document image based on feature space partitioning and classification | |
CN117152175A (en) | Machine vision-based waste plastic material position and orientation recognition method | |
CN102156868A (en) | Image binaryzation method and device | |
CN112085024A (en) | Tank surface character recognition method | |
CN108133216B (en) | Nixie tube reading identification method capable of realizing decimal point reading based on machine vision | |
CN110443205A (en) | A kind of hand images dividing method and device | |
US12017368B2 (en) | Mix-size depalletizing | |
CN101577005A (en) | Target tracking method and device | |
CN112883881B (en) | Unordered sorting method and unordered sorting device for strip-shaped agricultural products | |
CN110598698A (en) | Natural scene text detection method and system based on adaptive regional suggestion network | |
CN104331695A (en) | Robust round identifier shape quality detection method | |
CN111914818B (en) | Method for detecting forest fire smoke root nodes based on multi-frame discrete confidence | |
CN110598708A (en) | Streetscape text target identification and detection method | |
CN104966072B (en) | It is a kind of based on shape without colour code machine fish pose recognizer | |
CN114419006A (en) | Method and system for removing watermark of gray level video characters changing along with background | |
CN112381844A (en) | Self-adaptive ORB feature extraction method based on image blocking | |
CN111611783A (en) | Positioning and dividing method and device for graphic table | |
CN107688812B (en) | Food production date ink-jet font repairing method based on machine vision | |
CN1217292C (en) | Bill image face identification method | |
CN115410184A (en) | Target detection license plate recognition method based on deep neural network | |
CN112288372B (en) | Express bill identification method capable of simultaneously identifying one-dimensional bar code and three-segment code characters | |
US12112499B2 (en) | Algorithm for mix-size depalletizing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination |