CN117152175A - Machine vision-based waste plastic material position and orientation recognition method - Google Patents

Machine vision-based waste plastic material position and orientation recognition method

Info

Publication number
CN117152175A
CN117152175A (application CN202311158655.7A)
Authority
CN
China
Prior art keywords
waste plastic
image
point
contour
threshold
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311158655.7A
Other languages
Chinese (zh)
Inventor
彭斌彬
毛嘉炜
郭亚坤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN202311158655.7A priority Critical patent/CN117152175A/en
Publication of CN117152175A publication Critical patent/CN117152175A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/66Analysis of geometric attributes of image moments or centre of gravity
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a machine vision-based method for recognizing the position and orientation (pose) of waste plastics. The method comprises: acquiring an image of the waste plastic with an industrial color camera; converting the acquired RGB image to grayscale according to a weighting formula; dividing the image into multiple horizontal regions and performing threshold segmentation on each region over a limited, empirically determined range of gray levels; extracting the waste-plastic contour in each region and computing the region centroid; and finally computing the overall centroid and angle of the waste plastic from the centroid coordinates of the regions, yielding its pose. By adding the computed angle information, the method raises the success rate when sorting waste plastics lying at large angles; by narrowing the range traversed in search of the optimal segmentation threshold, it reduces the computational cost of threshold segmentation and the time consumed by image recognition.

Description

A machine vision-based method for recognizing the position and orientation of waste plastics

Technical Field

The invention relates to the field of machine vision, and in particular to a machine vision-based method for recognizing the position and orientation of waste plastics.

Background Art

The beverage industry uses lightweight plastic bottles as packaging containers on a large scale. These bottles are produced by polymerizing non-renewable petroleum feedstock and degrade poorly under natural conditions. If waste plastics are not recycled, the resulting garbage is landfilled, incinerated, or discarded at will, causing great harm to the ecological environment and wasting large amounts of resources; sorting and recycling waste plastics is the key to solving this problem. Waste plastics arriving for recycling differ in color, quality, and size and must be sorted. At present, waste-plastic sorting at home and abroad is mainly manual, which is labor-intensive and inefficient and takes place in a poor working environment, so fast and stable automatic sorting equipment is urgently needed to replace manual sorting.

A machine vision system can capture, process, and analyze images at very high speed, improving throughput and processing capability; it can handle waste plastics of different types and shapes; and it can connect seamlessly with conveyor belts, pneumatic devices, and other equipment to form a fully automatic waste-plastic sorting line. Generally, the nozzles of the pneumatic sorting stage at the end of a waste-plastic sorting system are discrete, and the centroid of a plastic item rarely coincides with a nozzle position. When sorting waste plastics that are large in shape or lie at a large angle, the air pressure and flow of a single nozzle are insufficient and sorting fails. Multiple nozzles may therefore need to fire at staggered times, which requires sensing not only the centroid of the waste plastic but also its orientation. Some related work on position and orientation recognition is reviewed below.

Patent CN109190493A discloses an image recognition method that grayscales the image and then performs a multi-threshold search on the grayscale image, obtaining an optimal harmony-search threshold set for segmentation and locating apples accurately. Although this method locates the apple precisely by threshold segmentation, it does not compute the apple's orientation and is therefore unsuitable for sorting waste plastics lying at large angles.

Patent CN115082560A discloses a material pose recognition method that acquires an original image containing a part, generates the part's minimum bounding rectangle in the image, obtains the part's position and principal-direction angle from that rectangle, and maps the position and rotation angle into the world coordinate system to obtain the part's pose. Patent CN113269835A provides a contour-feature-based pose recognition method for industrial parts: it acquires an image of the part, extracts the edge-pixel coordinates from the image, obtains the coordinates of the four corners of the part's minimum bounding rectangle from the edge pixels, takes the center of the rectangle as the part's position, and then computes the part's attitude angle. Both patents use the minimum-bounding-rectangle method to obtain position and angle information, but experiments show that traversing for the optimal threshold during segmentation requires computing the information entropy of all 256 gray levels, which places high demands on the computer hardware.

Summary of the Invention

In view of the above problems, the invention proposes a machine vision-based method for recognizing the pose of waste plastics. The method quickly recognizes the pose of waste plastics on a conveyor belt and computes both the position and the angle of the waste plastic, reducing the time consumed by the algorithm while improving the sorting of waste plastics lying at large angles.

The object of the invention is achieved by the following technical solution:

A machine vision-based method for recognizing the pose of waste plastics comprises the following steps:

Step S1: mount an industrial color camera above the conveyor belt; when waste plastic enters the camera's field of view, the camera captures its image in real time.

Step S2: grayscale the image from step S1 using the weighted-average method, separating the waste plastic from the background of the original image.

Step S3: divide the grayscale image from step S2 into multiple horizontal regions corresponding to the camera's field of view using uniform horizontal cuts, then apply empirical threshold segmentation to each region with a threshold restricted to a fixed range, producing a binarized image.

Step S4: extract the contours of the regions from the binarized image of step S3 using the Suzuki algorithm, compute the positions of the region centroids from contour moments, and finally compute the overall centroid and angle of the waste plastic from the per-region centroids, completing the pose recognition.

Beneficial effects of the invention: the region-wise empirical threshold segmentation cuts the image horizontally into N regions corresponding to the camera's field of view before thresholding, and the overall angle of the waste plastic is computed from the centroid of each region. This makes it possible to control precisely how many nozzles fire and when each nozzle fires while sorting waste plastics at large angles, and it also shortens the computation, improving both the efficiency and the success rate of waste-plastic sorting.

Brief Description of the Drawings

Figure 1 is the flow chart of the method of the invention.

Figure 2 is the flow chart of the region-wise empirical threshold segmentation.

Figure 3 is the original image of the waste plastic.

Figure 4 is the grayscale image of the waste plastic.

Figure 5 is the grayscale image of the waste plastic after division into regions.

Figure 6 is the topological relationship diagram of contour boundaries.

Figure 7 shows the result of region-wise contour extraction of the waste plastic.

Detailed Description of the Embodiments

The invention is described in detail below with reference to the drawings and examples, from which its objects and effects will become clearer. The specific embodiments described here merely explain the invention and do not limit it.

As shown in Figures 1 and 2, the invention is a pose recognition method for real-time sorting of waste plastics, comprising the following steps:

Step S1: mount an industrial color camera above the conveyor belt; when waste plastic enters the camera's field of view, the camera acquires its image in real time and transmits it to the computer.

Step S2: grayscale the image from step S1 using the weighted-average method, separating the waste plastic from the background of the original image.

Step S3: divide the grayscale image from step S2 into N horizontal regions corresponding to the camera's field of view using uniform horizontal cuts, then apply empirical threshold segmentation to each region with a threshold restricted to a fixed range, producing a binarized image.

Step S4: extract the contours of the N regions from the binarized image of step S3 using the Suzuki algorithm, compute the positions of the N region centroids from contour moments, and finally compute the overall centroid and angle of the waste plastic from the per-region centroids, completing the pose recognition.

In step S2, the image acquired by the industrial color camera is converted to grayscale by multiplying the values of the R, G, and B channels by different weights, chosen according to the importance of the three color channels, and summing them into a single gray value, thereby converting the RGB image into a grayscale image. The calculation is given by formula (1):

Grey = αR + βG + γB    (1)

where α, β, and γ are weights reflecting the brightness contribution of the three color channels. Experimental comparison showed that α = 0.07, β = 0.72, γ = 0.21 gives the best grayscale result. Figures 3 and 4 show the original color image of the waste plastic and the result after grayscale conversion, respectively.
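For illustration, a minimal Python/NumPy sketch of formula (1) follows; the function name and the clipping to uint8 are our additions, not part of the patent:

    import numpy as np

    # Weights from the patent's experiments (formula (1)):
    # alpha = 0.07 (R), beta = 0.72 (G), gamma = 0.21 (B).
    ALPHA, BETA, GAMMA = 0.07, 0.72, 0.21

    def to_gray(rgb):
        """Convert an HxWx3 RGB uint8 image to grayscale via Grey = aR + bG + cB."""
        r, g, b = (rgb[..., k].astype(np.float32) for k in range(3))
        gray = ALPHA * r + BETA * g + GAMMA * b
        return np.clip(gray, 0, 255).astype(np.uint8)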

In step S3, to shorten the time spent traversing candidate segmentation thresholds, the segmentation threshold is determined in advance instead of being searched anew for each frame. Such fixed-threshold segmentation suits scenes with a uniform background and little threshold fluctuation, which matches the working scenario of the invention. The local pixel values of the dark-gray conveyor-belt background used in this work change little during motion, but the color differs considerably between areas: in 0-255 grayscale mode the gray values differ by about 50, so applying one equal fixed threshold to every area would introduce large errors. The image captured by the camera is therefore partitioned into N regions, each assigned an empirical threshold restricted to a fixed range. The height of each region is computed from the image height, and the image is then cut horizontally according to the predefined number of rows. For each of the N strip regions, the empirical mean gray value is collected as the basis for that region's fixed threshold.

Taking N = 12 as an example, Figure 5 shows the grayscale image after division. Part of the test data is listed in Table 1 below, where each value is the mean gray level of a region containing only background; blank cells mark data points where plastic to be recognized was present, which are excluded from the computation.

Table 1. Mean gray value of each region

From the data in Table 1, the empirical background gray value of each region at N = 12 is computed; the results are given in Table 2. Adding a specific adjustment value to the empirical mean in Table 2, with the adjustment empirically set to 50, yields the traversal range of the maximum-entropy threshold that separates background from foreground.

Table 2. Empirical mean gray value of each region

When thresholding an image, the whole image is first divided into N horizontal regions, and maximum-entropy thresholding is then applied to each region separately. If the statistical mean of the background gray level of a region is x, the traversal range of the maximum-entropy threshold is not [0, 255] but [x, x+50]. If the threshold maximizing the image entropy lies between x and x+50, that threshold is taken as the maximum-entropy segmentation threshold; if the entropy increases monotonically over [x, x+50], then x+50 is chosen as the segmentation threshold. Although the latter is not the maximum-entropy threshold over the full gray range [0, 255], it suffices to separate the plastic from the conveyor-belt background, so there is no need to keep traversing. In this way the traversal range of each region's threshold is cut from the 256 values of 0-255 to at most 50, greatly compressing the processing time.
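The restricted-range search can be sketched as follows. This is a Python/NumPy sketch under our assumptions: the function names are ours, the entropy criterion is Kapur's maximum-entropy formulation, and bg_means stands for the per-region background means of Table 2, which are not reproduced here:

    import numpy as np

    def max_entropy_threshold(region, lo, hi):
        """Kapur's maximum-entropy threshold, searched only over [lo, hi]
        instead of the full 0-255 range. If the entropy keeps rising across
        the window, the upper end wins, matching the x+50 fallback above."""
        hist = np.bincount(region.ravel(), minlength=256).astype(np.float64)
        p = hist / hist.sum()
        c = np.cumsum(p)                       # cumulative probability
        best_t, best_h = lo, -np.inf
        for t in range(lo, min(hi, 254) + 1):
            w0, w1 = c[t], 1.0 - c[t]
            if w0 <= 0.0 or w1 <= 0.0:
                continue
            p0, p1 = p[: t + 1] / w0, p[t + 1 :] / w1
            h = -(p0[p0 > 0] * np.log(p0[p0 > 0])).sum() \
                - (p1[p1 > 0] * np.log(p1[p1 > 0])).sum()
            if h > best_h:
                best_t, best_h = t, h
        return best_t

    def segment_by_regions(gray, bg_means, n=12, span=50):
        """Cut the image into n horizontal strips and threshold each strip
        over [bg_mean, bg_mean + span] only, producing a binarized image."""
        h = gray.shape[0]
        out = np.zeros_like(gray)
        step = h // n
        for i in range(n):
            rows = slice(i * step, h if i == n - 1 else (i + 1) * step)
            t = max_entropy_threshold(gray[rows], bg_means[i], bg_means[i] + span)
            out[rows] = np.where(gray[rows] > t, 255, 0)
        return out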

In step S4, contours are extracted from the binarized image produced by the region-wise empirical threshold segmentation using the Suzuki algorithm, which performs a topological analysis of a digital binary image and, innovatively, describes the contours of the image in terms of outer boundaries, hole boundaries, and their hierarchy. Outer and hole boundaries are defined as follows: given a 1-connected component S1 and a 0-connected component S2, if S2 directly surrounds S1, the border between S2 and S1 is called an outer boundary; if S1 directly surrounds S2, the border between S2 and S1 is called a hole boundary; both kinds of boundary consist of 1-pixels. "Surrounds" here means that, for two adjacent connected components S1 and S2, S2 surrounds S1 if S2 can be reached from any point of S1 along each of the four directions. The hierarchy between contours is defined as follows: suppose 1-connected components S1 and S3 and a 0-connected component S2 are given, with S2 directly surrounding S1 and S3 directly surrounding S2; let B1 be the border between S1 and S2 and B2 the border between S2 and S3. Then B2 is the parent boundary of B1, and if S2 is the background, the parent boundary of B1 is the image frame. The topological relationship between the image and its frame is shown in Figure 6.

The Suzuki algorithm extracts contours in a raster-scan-like manner, scanning the image pixels from left to right and top to bottom; a border-following procedure yields the contours of the image, each assigned a unique number. During the scan, NBD denotes the number of the border currently being followed and LNBD the number of the previously saved border.

The original Suzuki contour extraction proceeds as follows. Let the input image be F = {f(i, j)}, where i and j are the horizontal and vertical coordinates of a pixel. Set the initial NBD to 1 and treat the image frame as the first contour boundary. Scan the image, and when a pixel with f(i, j) ≠ 0 is reached, perform the following steps:

(1) Determine which of the following cases holds:

(a) If f(i, j) = 1 and f(i, j-1) = 0, then (i, j) is the starting point of an outer boundary; set NBD = NBD + 1 and save (i2, j2) = (i, j-1).

(b) If f(i, j) ≥ 1 and f(i, j+1) = 0, then (i, j) is the starting point of a hole boundary; set NBD = NBD + 1 and save (i2, j2) = (i, j+1).

(c) Otherwise, go to (4).

(2) From the previously saved border and the border just encountered, look up the parent of the current border in Table 3, where B' is the previous border and B the new border.

Table 3. Parent boundary of the current boundary

(3) Starting from the border start point (i, j), follow the border as below:

(3.1) With (i, j) as the center and (i2, j2) as the starting position, search the eight-neighborhood of (i, j) clockwise for a non-zero pixel. If one is found, let the first non-zero pixel in the clockwise direction be (i1, j1) and go to (3.2); otherwise set f(i, j) = -NBD and go to (4).

(3.2) Update (i2, j2) to (i1, j1) and (i3, j3) to (i, j).

(3.3) With (i3, j3) as the center, starting from (i2, j2), search the eight-neighborhood of (i3, j3) counterclockwise for a non-zero pixel, and let (i4, j4) be the first one encountered. The eight-neighborhood of a pixel (i, j) consists of the eight adjacent pixels (i-1, j-1), (i-1, j), (i-1, j+1), (i, j-1), (i, j+1), (i+1, j-1), (i+1, j), (i+1, j+1). If only the four neighbors (up, down, left, right) were considered, boundary information along the diagonals could be missed, leaving the extracted contours incomplete or inaccurate.

(3.4) If (i3, j3+1) is a zero pixel examined in (3.3), set f(i3, j3) = -NBD; if (i3, j3+1) is not an examined zero pixel and f(i3, j3) = 1, set f(i3, j3) = NBD.

(3.5) If (i4, j4) = (i, j) and (i3, j3) = (i2, j2), the follow has returned to the starting point; go to (4). Otherwise update (i2, j2) to (i3, j3) and (i3, j3) to (i4, j4), and go to (3.3).

(4) If f(i, j) ≠ 1, set LNBD = |f(i, j)| and resume scanning from point (i, j+1). The algorithm terminates when the scan reaches the bottom-right corner of the image.

The invention improves and simplifies the original Suzuki procedure as follows. Let f(i, j) denote the pixel value at point (i, j). Scan the binarized waste-plastic image row by row until a point (m, n) of a connected component satisfies f(m-1, n) = 0 and f(m, n) = 255; this point is the first point of a contour. With (m-1, n) as the starting position, traverse the eight-neighborhood of (m, n) counterclockwise; then, taking the first non-zero point found as the new starting position, traverse the eight-neighborhood of (m, n) counterclockwise again — the first non-zero point found is the next point of the contour. Continue in this way until the contour closes completely. Mark the pixels on this connected component's boundary and assign the boundary a unique identifier. When a new connected component is found, increment the identifier by 1 and follow its contour in the same way. After all contours of the waste plastic have been obtained, the largest one is selected and output.
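In practice the border following need not be reimplemented: OpenCV's findContours is based on the Suzuki border-following algorithm, so the per-region extraction can be sketched as follows (the function name is ours):

    import cv2

    def largest_contour(binary):
        """Return the largest outer contour of a binarized strip, or None.
        cv2.findContours implements Suzuki-style border following; keeping
        only the largest contour mirrors the final selection step above."""
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_NONE)
        return max(contours, key=cv2.contourArea) if contours else None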

After the contour of each region of the waste plastic is obtained, the invention uses contour moments to find the centroid of each region's contour. The (p+q)-order contour moment m(pq)_i of the i-th region is defined as

    m(pq)_i = Σ_{j=1}^{n_i} x_j^p · y_j^q · I(x_j, y_j)    (2)

where x and y are the horizontal and vertical coordinates of a pixel on the contour, I(x, y) is the pixel value at coordinate (x, y) (0 or 1), and n_i is the number of pixels on the i-th region's contour. First-order contour moments are used to find the centroid, i.e. the coefficients p and q each take the value 0 or 1 with p + q = 1.

In the contour image all contour pixels have the value 1, so the zero-order moment m(00)_i is the number of points on the i-th region's contour, and the first-order moments m(10)_i and m(01)_i are the sums of the x-coordinates and y-coordinates, respectively, of the pixels on that contour. The centroid (x_i, y_i) of the i-th region's contour is computed by formula (3):

    x_i = m(10)_i / m(00)_i,  y_i = m(01)_i / m(00)_i    (3)

Extracting the contours of the binarized image and finding their centroids as above gives the result shown in Figure 7. After the centroid coordinates of every region are computed, the overall centroid (x, y) of the waste plastic and the angle θ of the waste plastic are computed from the region centroids; the overall centroid is

    x = (1/n) Σ_{i=1}^{n} x_i,  y = (1/n) Σ_{i=1}^{n} y_i

where n is the number of connected regions and (x_i, y_i) is the contour centroid of the i-th connected region.
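A sketch of the pose computation follows. Formulas (2) and (3) are implemented directly; since the angle formula is not reproduced above, the orientation used here is our assumption — the principal-axis angle of the cloud of region centroids — and should be read as illustrative only:

    import numpy as np

    def contour_centroid(contour_pts):
        """Formulas (2)-(3): m00 = number of contour pixels,
        m10/m01 = sums of x/y coordinates; centroid = (m10/m00, m01/m00)."""
        pts = np.asarray(contour_pts, dtype=np.float64).reshape(-1, 2)
        m00 = len(pts)
        return pts[:, 0].sum() / m00, pts[:, 1].sum() / m00

    def overall_pose(centroids):
        """Overall centroid = mean of the n region centroids; the angle is
        our assumed principal-axis fit through those centroids (degrees)."""
        pts = np.asarray(centroids, dtype=np.float64)
        x, y = pts.mean(axis=0)
        dx, dy = pts[:, 0] - x, pts[:, 1] - y
        theta = 0.5 * np.arctan2(2.0 * (dx * dy).sum(),
                                 (dx * dx - dy * dy).sum())
        return x, y, np.degrees(theta)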

The centroid, angle, and computation time (at N = 12) obtained with the above formulas for the waste plastic bottle in Figure 7 are listed in Table 4 below.

Table 4. Computed pose of the waste plastic

The table shows that the region-wise empirical threshold segmentation effectively reduces the computational load of thresholding: the computation time drops to 23 ms, greatly reducing the time consumed by image recognition.

The preferred embodiments of the invention have been described in detail above, but the invention is not limited to them; within the scope of knowledge possessed by one of ordinary skill in the art, various changes may be made without departing from the spirit of the invention.

Claims (5)

1. A machine vision-based method for recognizing the position and orientation of waste plastics, characterized by comprising the following steps:
step S1: mounting an industrial color camera above a conveyor belt, the camera acquiring an image of the waste plastic in real time when the waste plastic is within its field of view;
step S2: graying the image obtained in step S1 by a weighted-average method to distinguish the waste plastic from the background in the original image;
step S3: dividing the grayed image obtained in step S2 into a plurality of horizontal regions corresponding to the camera field of view by uniform horizontal cutting, setting a threshold restricted to a fixed range for each region, and performing empirical threshold segmentation to obtain a binarized image;
step S4: extracting the contours of the plurality of regions from the binarized image obtained in step S3 using the Suzuki algorithm, computing the positions of the centroids of the plurality of regions from contour moments, computing the overall centroid and angle of the waste plastic from the centroid of each region, and thereby recognizing the pose of the waste plastic.
2. The machine vision-based waste plastic position and orientation recognition method according to claim 1, characterized in that the weighted-average graying of step S2 comprises multiplying the color values of the three RGB channels by different weights according to the importance of the three channels and summing them into a gray value, thereby converting the RGB image into a grayscale image.
3. The machine vision-based waste plastic position and orientation recognition method according to claim 1, characterized in that the horizontal cutting and empirical threshold segmentation of step S3 comprise: defining cutting lines and computing the height of each region from the image height; cutting the image horizontally into a plurality of regions according to a predefined number of rows; for each region, collecting the mean gray value of the conveyor-belt background; setting, from this mean gray value, the traversal range of the threshold distinguishing background from foreground; and thresholding each region independently to generate the corresponding binarized image.
4. The machine vision-based waste plastic position and orientation recognition method according to claim 1, characterized in that the contour extraction of step S4 using the Suzuki algorithm comprises: initializing a marker image of the same size as the binary image with all pixels set to 0; letting f(i, j) denote the pixel value at point (i, j), scanning the binarized waste-plastic image row by row until a point (m, n) of a connected component satisfies f(m-1, n) = 0 and f(m, n) = 255, this point being the first point of the contour; traversing the eight-neighborhood of (m, n) counterclockwise starting from (m-1, n), then traversing the eight-neighborhood counterclockwise from the first non-zero point found, the first non-zero point so found being the next contour point; continuing in this way until the contour closes completely; marking the pixels on the connected component's boundary and assigning the boundary a unique identifier; upon finding a new connected component, incrementing the identifier by 1 and following its contour in the same way; and, after all contours of the waste plastic have been obtained, selecting and outputting the largest contour to obtain the complete contour image.
5. The machine vision-based waste plastic position and orientation recognition method according to claim 1, characterized in that the overall centroid of the waste plastic is computed as

    x = (1/n) Σ_{i=1}^{n} x_i,  y = (1/n) Σ_{i=1}^{n} y_i

where (x, y) are the centroid coordinates of the whole waste plastic, n is the number of regions, (x_i, y_i) is the centroid of the contour in the i-th region, and θ is the angle of the whole waste plastic;
the centroid of the contour in the i-th region is computed as

    x_i = m(10)_i / m(00)_i,  y_i = m(01)_i / m(00)_i

where m(00)_i is the number of points on the contour of the i-th region, and m(10)_i and m(01)_i are the accumulated x-coordinate values and y-coordinate values, respectively, of the pixels on the contour of the i-th region.
CN202311158655.7A 2023-09-08 2023-09-08 Machine vision-based waste plastic material position and orientation recognition method Pending CN117152175A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311158655.7A CN117152175A (en) 2023-09-08 2023-09-08 Machine vision-based waste plastic material position and orientation recognition method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311158655.7A CN117152175A (en) 2023-09-08 2023-09-08 Machine vision-based waste plastic material position and orientation recognition method

Publications (1)

Publication Number Publication Date
CN117152175A true CN117152175A (en) 2023-12-01

Family

ID=88898563

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311158655.7A Pending CN117152175A (en) 2023-09-08 2023-09-08 Machine vision-based waste plastic material position and orientation recognition method

Country Status (1)

Country Link
CN (1) CN117152175A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117671497A (en) * 2023-12-04 2024-03-08 广东筠诚建筑科技有限公司 Engineering construction waste classification method and device based on digital images
CN117671497B (en) * 2023-12-04 2024-05-28 广东筠诚建筑科技有限公司 Engineering construction waste classification method and device based on digital images

Similar Documents

Publication Publication Date Title
CN108596066B (en) A Character Recognition Method Based on Convolutional Neural Network
CN110517283B (en) Gesture tracking method, gesture tracking device and computer readable storage medium
CN109389121B (en) Nameplate identification method and system based on deep learning
CN104992449B (en) Information identification and surface defect online test method based on machine vision
CN102742977B (en) Method for controlling gluing path on basis of image processing
CN106446894B (en) A method of based on outline identification ball-type target object location
KR101298024B1 (en) Method and interface of recognizing user's dynamic organ gesture, and electric-using apparatus using the interface
CN112085024A (en) A method for character recognition on the surface of a tank
CN107688779A (en) A kind of robot gesture interaction method and apparatus based on RGBD camera depth images
US12017368B2 (en) Mix-size depalletizing
CN106529556B (en) A visual inspection system for instrument indicator lights
TWI448987B (en) Method and interface of recognizing user's dynamic organ gesture and electric-using apparatus using the interface
CN117152175A (en) Machine vision-based waste plastic material position and orientation recognition method
CN112883881B (en) A method and device for disorderly sorting of strip-shaped agricultural products
CN110009615A (en) Image corner detection method and detection device
CN114863492A (en) Method and device for repairing low-quality fingerprint image
Yang et al. Tracking and recognition algorithm for a robot harvesting oscillating apples
CN112686872A (en) Wood counting method based on deep learning
CN109271882B (en) A color-distinguishing method for extracting handwritten Chinese characters
CN115410184A (en) Target detection license plate recognition method based on deep neural network
CN112288372B (en) Express bill identification method capable of simultaneously identifying one-dimensional bar code and three-segment code characters
CN109978916A (en) Vibe moving target detecting method based on gray level image characteristic matching
Huang et al. A real-time algorithm for aluminum surface defect extraction on non-uniform image from CCD camera
CN112381844A (en) Self-adaptive ORB feature extraction method based on image blocking
CN115876786B (en) Wedge-shaped welding spot detection method and motion control device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination