WO2023134251A1 - A clustering-based light stripe extraction method and device - Google Patents

A clustering-based light stripe extraction method and device

Info

Publication number
WO2023134251A1
WO2023134251A1 (PCT/CN2022/126593)
Authority
WO
WIPO (PCT)
Prior art keywords
light
coordinates
point
row
column
Prior art date
Application number
PCT/CN2022/126593
Other languages
English (en)
French (fr)
Inventor
苏德全
钟治魁
柳龙杰
王平江
黄剑峰
谢一首
柯榕彬
陈文奇
罗文贵
赖晓彬
黄达森
Original Assignee
泉州华中科技大学智能制造研究院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 泉州华中科技大学智能制造研究院 filed Critical 泉州华中科技大学智能制造研究院
Publication of WO2023134251A1 publication Critical patent/WO2023134251A1/zh


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06T5/70
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20021Dividing image into blocks, subimages or windows

Definitions

  • the invention relates to a method and device for extracting light bars based on clustering.
  • the principle of laser triangulation is to project a laser stripe of a certain wavelength onto the measured workpiece with a laser, let the workpiece modulate the stripe, and then capture the modulated light stripe pattern with a CCD camera; combined with the system's conversion relationship, this yields the three-dimensional shape information of the measured workpiece. Therefore, in a line-structured-light 3D measurement system based on triangulation, quickly and accurately obtaining the center coordinates of the light stripe from the light stripe image is the key to real-time precision measurement.
  • the energy center method mainly includes light strip extraction methods such as direction template method, gray scale center of gravity method, and Hessian matrix method.
  • Literature [Wu Qingyang, Su Xianyu, Li Jingzhen, et al. A new algorithm for extracting the center of the line-structured light stripe [J]. Journal of Sichuan University (Engineering Science Edition), 2007, 39(4): 151-155]
  • convolves the image with templates of fixed directions and takes the extreme points of the result as the light stripe center; although this method can suppress some noise, it is time-consuming and can only be used for light stripe patterns of a fixed direction.
  • Literature [Chen Nian, Guo Yangkuan, Zhang Xiaoqing. Line-structured-light stripe center extraction based on the Hessian matrix [J]. Digital Technology and Application, 2019, 37(03): 126-127] uses the Hessian matrix method, which achieves sub-pixel light stripe extraction but is computationally too expensive to meet real-time online inspection requirements. The ordinary gray-scale centroid method scans in only two directions, cannot handle images with complex light stripe shapes, and is easily affected by noise.
  • the purpose of the present invention is to address the deficiencies of the prior art by proposing a clustering-based light stripe extraction method and device that cluster single rows or single columns to achieve denoising and can process data of multiple rows or columns simultaneously, which not only guarantees the accuracy of light stripe extraction but also improves its speed, with stronger noise resistance and robustness.
  • a method for extracting light bars based on clustering comprising the steps of:
  • A. Obtain the grayscale image of the initial light strip image, and perform filtering and denoising processing on the grayscale image to obtain the light strip image;
  • step B, use the Otsu algorithm to segment the light stripe image obtained in step A to obtain the light stripe area and the background area;
  • the clustering operation on each row or column specifically includes the following steps:
  • step C1, define P arry as the set of pixel points with pixel value 255 in one row or one column of the segmented light stripe area, with P start and P end the starting point and end point of that set; define P vector as a two-dimensional set and P temp as a one-dimensional set;
  • step C2, put the starting point P start into the one-dimensional set P temp as a seed point;
  • step C3, judge whether the difference of the row coordinates or column coordinates of two adjacent points in the set P arry satisfies p i+1 - p i = 1; if so, go to step C4, otherwise go to step C5; the initial value of p i is the seed point in the one-dimensional set P temp , and p i+1 is the point adjacent to p i ; the two adjacent points share the same column coordinate or the same row coordinate;
  • step C4, when p i+1 is P end , go to step C6; otherwise, put p i+1 into the set P temp and return to step C3;
  • step C5, when p i+1 is P end , go to step C6; otherwise, put the points of the set P temp into the set P vector as a sub-set of P vector , clear the set P temp , put p i+1 into the emptied P temp as a new seed point, and return to step C3;
  • step C6, take the largest sub-set P vector max of the set P vector as an effective area of the light stripe area;
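The row/column clustering of steps C1-C6 is, in effect, a run-finding pass: group the coordinates of the white (255) pixels of one line into runs of consecutive values and keep the longest run. A minimal Python sketch of that reading (function and variable names are ours, not from the patent):

```python
def cluster_line(p_arry):
    """Group the sorted coordinates of white (255) pixels in one row/column
    into runs of consecutive values (steps C1-C5), then return the largest
    run as the effective area (step C6)."""
    if not p_arry:
        return []
    p_vector = []          # two-dimensional set: list of completed runs
    p_temp = [p_arry[0]]   # one-dimensional set, seeded with P_start
    for p_next in p_arry[1:]:
        if p_next - p_temp[-1] == 1:   # adjacent: extend the current run (C4)
            p_temp.append(p_next)
        else:                          # gap: close the run, reseed (C5)
            p_vector.append(p_temp)
            p_temp = [p_next]
    p_vector.append(p_temp)            # close the final run at P_end
    return max(p_vector, key=len)      # largest sub-set P_vector_max (C6)

# Example: one column with a 4-pixel stripe plus two isolated noise pixels.
column = [3, 10, 11, 12, 13, 27]
print(cluster_line(column))  # -> [10, 11, 12, 13]
```

Note how the two noise pixels form runs of length one and are discarded automatically, which is exactly the denoising effect the patent attributes to per-line clustering.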
  • step D, calculate the center coordinates of each effective area obtained in step C, and use the center coordinates as the light stripe center point of that effective area;
  • step E, the set of all light stripe center points is the light stripe coordinates of the initial light stripe image.
  • step D includes the following steps:
  • step D1, judge whether the number of sub-sets P vector max equals 1 or is greater than 1; if it equals 1, go to step D2, otherwise go to step D3;
  • step D2, sum the row coordinates or column coordinates of every point in the sub-set P vector max and divide by the number of points in that sub-set to obtain the row coordinate or column coordinate of the light stripe center point of the corresponding effective area; the other coordinate of the center point is the same as that shared by the points of the sub-set P vector max ;
  • step D3, calculate, according to step D2, the light stripe center point coordinates P c of the effective area of each sub-set P vector max , form all center point coordinates P c into a new set P cvector , and go to step D4;
  • step D4, for each point of the set P cvector , calculate the absolute differences between its row coordinate or column coordinate and those of the other points in the set, divide the sum of all absolute differences by the number of points in the set to obtain the average difference, and go to step D5;
  • step D5, compare the row coordinate or column coordinate of each point in the set P cvector with the average difference obtained in step D4; when a point's row coordinate or column coordinate exceeds the average difference, remove that point from its corresponding sub-set P vector max and go to step D6;
  • step D6, form the sub-sets P vector max remaining after step D5 into a new set P final , divide the sum of the row coordinates or column coordinates of the points of all sub-sets P vector max in P final by the total number of points in those sub-sets, and use the resulting value as the row coordinate or column coordinate of the light stripe center point of the effective area; the other coordinate of the center point is the same as that of the points of each sub-set P vector max .
  • for step E, the extracted light stripe coordinates are [(x 1 ,y 1 ),(x 2 ,y 2 )…(x m ,y m )], where m is the number of rows or columns of the light stripe area obtained after the light stripe image is segmented in step B.
  • the step A includes: let the pixel at row i, column j of the initial light stripe image have coordinates (x i ,y j ) and pixel value f′(x i ,y j ); mean filtering this pixel according to the formula f(x i ,y j ) = (1/w)·Σ (row,col) f′(x row ,y col ) yields the corresponding gray value f(x i ,y j ), where w is the size of the filtering neighborhood, (row, col) ranges over the pixel coordinates in the neighborhood, and f′(x row ,y col ) is the pixel value at such a coordinate.
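The filtering formula of step A is a plain neighborhood average. A small pure-Python sketch (our own naming; border windows are clipped, so the divisor w is the number of pixels actually averaged):

```python
def mean_filter(img, k=3):
    """Neighborhood averaging: replace each pixel f'(x_i, y_j) by the mean
    of the k x k window around it. The window is clipped at the image
    border, so the divisor is the count of pixels actually in the window."""
    h, w = len(img), len(img[0])
    pad = k // 2
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [img[r][c]
                    for r in range(max(0, i - pad), min(h, i + pad + 1))
                    for c in range(max(0, j - pad), min(w, j + pad + 1))]
            out[i][j] = sum(vals) / len(vals)
    return out

noisy = [[0, 0, 0],
         [0, 9, 0],
         [0, 0, 0]]
print(mean_filter(noisy)[1][1])  # -> 1.0  (the spike 9 spread over 3x3)
```

An isolated bright spike is flattened toward its neighborhood mean, which is why this step precedes segmentation: it keeps single noisy pixels from surviving the threshold.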
  • step B includes:
  • a red laser with a wavelength of 650nm is vertically projected onto the workpiece to be measured, and then the camera captures an image including the intersection line between the red laser and the workpiece to obtain the initial light strip image .
  • the set P vector is traversed in a loop to obtain its largest sub-set P vector max .
  • a cluster-based light bar extraction device comprising:
  • Preprocessing module used to obtain the grayscale image of the initial light strip image, and perform filtering and denoising processing on the grayscale image to obtain the light strip image;
  • Segmentation module used to segment the light strip image using the Otsu algorithm to obtain the light strip area and the background area;
  • Effective area determination module: performs the clustering operation separately on each row or each column of the light stripe area to obtain multiple effective areas, the clustering operation on each row or column specifically including:
  • P arry as a set of pixel points with a pixel value of 255 in a row or a column in the divided light strip area
  • P start and P end are the starting point and end point of the set respectively
  • P vector as a two-dimensional set
  • P temp is a one-dimensional set
  • putting the starting point P start into the one-dimensional set P temp as a seed point; judging whether the difference of the row coordinates or column coordinates of two adjacent points in the set P arry satisfies p i+1 - p i = 1; if it does, putting p i+1 into the set P temp when p i+1 is not the end point P end , and repeating the process until p i+1 is the end point P end ; if it does not, putting the points of the set P temp into the set P vector as a sub-set of P vector when p i+1 is not the end point P end , clearing the set P temp , putting p i+1 into the emptied P temp as a new seed point, and repeating the process until p i+1 is the end point P end ; finally taking the largest sub-set P vector max of the set P vector as the effective area of the light stripe area;
  • Light bar center point determination module used to calculate the center coordinates of each effective area, and use the center coordinates as the light bar center point of the effective area;
  • Light bar acquisition module used to form a set of center points of all light bars, and the set is the light bar coordinates of the initial light bar image.
  • the present invention first filters and denoises the light stripe image and then uses the Otsu algorithm to segment it adaptively, obtaining a rough light stripe area. Single rows or single columns are then clustered to remove scattered noise and large spot noise (when the light stripe area is a horizontal pattern, single columns are clustered; when it is a vertical pattern, single rows are clustered), so that multiple effective areas of the light stripe area are obtained quickly and accurately; the light stripe center point of each effective area is then calculated, and the collection of these center points gives the light stripe coordinates to be extracted. This guarantees the accuracy of light stripe extraction while improving its speed, with stronger noise resistance and robustness. Because clustering is performed on single columns or single rows, each clustering process is independent, so data of multiple rows or columns can be processed simultaneously, greatly increasing the speed of data processing without affecting the denoising effect. Moreover, since the present invention clusters single rows or single columns, it still works for curved light stripe images; that is, it can be applied to light stripe images of any shape and adapts well to light stripe images.
  • the optimal global segmentation threshold T in step B of the present invention is obtained from the between-class variance of the foreground and background of the image being processed, so a different optimal global threshold T is obtained adaptively for each image, allowing light stripe images to be segmented adaptively.
  • the coordinates of the light stripe center points of the effective areas are calculated by taking means, so the computation is simple, the computational load is small, and the calculation is fast; moreover, when the measured object has a reflective surface and is easily affected by ambient light, coordinate calculation errors can occur, and the method of the present invention greatly reduces such problems.
  • Fig. 1 is a flowchart of the present invention.
  • Fig. 2 is a schematic diagram of line structured light measurement based on triangulation.
  • Figure 3 is the initial light strip image captured by the camera.
  • Fig. 4 is an effect diagram of light strip extraction in the present invention.
  • Figure 5 is the effect diagram of the extraction of the ordinary gray-scale center of gravity method.
  • Figure 6 is an effect diagram of light strip extraction based on Hessian matrix.
  • the method for extracting light bars based on clustering includes the following steps:
  • A. Obtain the grayscale image of the initial light strip image, and perform filtering and denoising processing on the grayscale image to obtain the light strip image;
  • the process of obtaining the initial light stripe image is as follows: a red laser with a wavelength of 650 nm is projected vertically by the laser transmitter 2 onto the surface of the workpiece 3 to be measured, forming an intersection line 4 between the red laser and the workpiece 3; the image captured by camera 1 that includes the intersection line 4 is the initial light stripe image, as shown in Figure 3. The workpiece 3 under test is a white rubber sole model, and the parameters of camera 1 are: model MER-131-210U3M, resolution 1280×1024, CMOS image sensor, full-frame frame rate up to 210 FPS, 1/2″ optical size, weight 57 g, lens model Computar M0814-MP2, focal length 8 mm, aperture range F1.4-F16C, field of view 12.5°×9.3°;
  • mean filtering, also called linear filtering, mainly uses the neighborhood averaging method; its basic principle is to replace each pixel value of the original image with the neighborhood mean, specifically:
  • let the pixel at row i, column j of the initial light stripe image have coordinates (x i ,y j ) and pixel value f′(x i ,y j ); mean filtering this pixel according to f(x i ,y j ) = (1/w)·Σ (row,col) f′(x row ,y col ) yields the corresponding gray value f(x i ,y j ), where w is the size of the filtering neighborhood, (row, col) ranges over the pixel coordinates in the neighborhood, and f′(x row ,y col ) is the pixel value at such a coordinate;
  • step B, use the Otsu algorithm to segment the light stripe image obtained in step A to obtain the light stripe area and the background area; specifically:
  • step B1, segment the light stripe image obtained in step A into foreground and background according to a segmentation threshold T′, and define the between-class variance δ of foreground and background: δ 2 = ω 0 ω 1 (μ 0 - μ 1 ) 2 , where ω 0 and ω 1 are the occurrence probabilities of the foreground and background and μ 0 and μ 1 are their gray-level means;
  • step B2, take the segmentation threshold T′ that maximizes the between-class variance δ as the optimal global segmentation threshold T, and segment the light stripe image obtained in step A as g(x i ,y j ) = 255 if f(x i ,y j ) > T, otherwise 0, to obtain the light stripe area and the background area, where g(x i ,y j ) is the pixel value of the segmented light stripe image at coordinates (x i ,y j );
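Steps B1-B2 are the standard Otsu procedure: sweep the threshold T′, score each split by the between-class variance δ 2 = ω 0 ω 1 (μ 0 - μ 1 ) 2 , and binarize with the best T. A brute-force Python sketch (our own naming; the patent's binarization formula is in an image we cannot see, so sending values > T to the foreground is an assumption):

```python
def otsu_threshold(gray_values):
    """Otsu's method: pick the threshold T' that maximizes the between-class
    variance w0 * w1 * (mu0 - mu1)^2 of foreground vs. background."""
    n = len(gray_values)
    best_t, best_var = 0, -1.0
    for t in range(256):
        bg = [v for v in gray_values if v <= t]
        fg = [v for v in gray_values if v > t]
        if not bg or not fg:
            continue
        w0, w1 = len(bg) / n, len(fg) / n
        mu0, mu1 = sum(bg) / len(bg), sum(fg) / len(fg)
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

def segment(img, t):
    """Step B2: binarize with the optimal global threshold T
    (255 = light stripe area, 0 = background)."""
    return [[255 if v > t else 0 for v in row] for row in img]

pixels = [10, 12, 11, 13, 200, 210, 205]   # dark background + bright stripe
t = otsu_threshold(pixels)
print(segment([pixels], t)[0])  # -> [0, 0, 0, 0, 255, 255, 255]
```

A histogram-based implementation would replace the O(256·n) sweep with a single histogram pass, but the brute-force form mirrors the B1/B2 wording most directly.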
  • step C1, define P arry as the set of pixel points with pixel value 255 in each column of the segmented light stripe area, with P start and P end the starting point and end point of that set; define P vector as a two-dimensional set and P temp as a one-dimensional set;
  • step C2, put the starting point P start into the one-dimensional set P temp as a seed point;
  • step C3, judge whether the difference of the column coordinates of two adjacent points in the set P arry satisfies p i+1 - p i = 1; if so, go to step C4, otherwise go to step C5;
  • step C4, when p i+1 is P end , go to step C6; otherwise, put p i+1 into the set P temp and return to step C3;
  • step C5, when p i+1 is P end , go to step C6; otherwise, put the points of the set P temp into the set P vector as a sub-set of P vector , clear the set P temp , put p i+1 into the emptied P temp as a new seed point, and return to step C3;
  • step C6, traverse the set P vector in a loop to obtain its largest sub-set P vector max and take it as the effective area of the light stripe area;
  • step D, calculate the center coordinates of each effective area obtained in step C6, and use the center coordinates as the light stripe center point of that effective area; specifically including the following steps:
  • step D1, judge whether the number of sub-sets P vector max equals 1 or is greater than 1; if it equals 1, go to step D2, otherwise go to step D3;
  • step D2, sum the column coordinates of every point in the sub-set P vector max and divide by the number of points in that sub-set to obtain the column coordinate of the light stripe center point of the corresponding effective area; the row coordinate of the center point is the same as that of the points of the sub-set P vector max ;
  • step D3, calculate, according to step D2, the light stripe center point coordinates P c of the effective area of each sub-set P vector max , form all center point coordinates P c into a new set P cvector , and go to step D4;
  • step D4, for each point of the set P cvector , calculate the absolute differences between its column coordinate and those of the other points in the set, divide the sum of all absolute differences by the number of points in the set to obtain the average difference, and go to step D5;
  • step D5, compare the column coordinate of each point in the set P cvector with the average difference obtained in step D4; when a point's column coordinate exceeds the average difference, remove that point from its corresponding sub-set P vector max and go to step D6;
  • step D6, form the sub-sets P vector max remaining after step D5 into a new set P final , divide the sum of the column coordinates of the points of all sub-sets P vector max in P final by the total number of points in those sub-sets, and use the resulting value as the column coordinate of the light stripe center point of the effective area; the row coordinate of the center point is the same as that of the points of each sub-set P vector max ;
  • step E, the set of all light stripe center points is the light stripe coordinates of the initial light stripe image; as shown in Figure 4, the light stripe coordinates are [(x 1 ,y 1 ),(x 2 ,y 2 )…(x m ,y m )], where m is the number of columns of the light stripe area obtained after the light stripe image is segmented in step B.
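Putting steps C-E together for a horizontal stripe (cluster each column, keep the longest run, take its mean row), the whole back end can be sketched as follows (our own naming; steps A and B, filtering and Otsu segmentation, are assumed already done):

```python
def extract_stripe(binary_img):
    """End-to-end sketch of steps C-E for a horizontal stripe: cluster each
    column of a binarized image (values 0/255), keep the longest run of
    white pixels per column, and take its mean row as the stripe center."""
    centers = []
    n_rows = len(binary_img)
    for col in range(len(binary_img[0])):
        whites = [r for r in range(n_rows) if binary_img[r][col] == 255]
        if not whites:
            continue                       # no stripe crosses this column
        runs, cur = [], [whites[0]]
        for r in whites[1:]:
            if r - cur[-1] == 1:           # consecutive: extend the run
                cur.append(r)
            else:                          # gap: close run, start another
                runs.append(cur)
                cur = [r]
        runs.append(cur)
        best = max(runs, key=len)          # effective area (step C6)
        centers.append((sum(best) / len(best), col))  # (row, col) center
    return centers

img = [[0,   0],
       [255, 0],
       [255, 255],
       [255, 255],
       [0,   0],
       [0,   255]]   # column 1 has an isolated noise pixel at row 5
print(extract_stripe(img))  # -> [(2.0, 0), (2.5, 1)]
```

Each column is processed independently, so this loop parallelizes trivially over columns, matching the simultaneous multi-column processing the text claims.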
  • a cluster-based light bar extraction device comprising:
  • Preprocessing module used to obtain the grayscale image of the initial light strip image, and perform filtering and denoising processing on the grayscale image to obtain the light strip image;
  • Segmentation module used to segment the light strip image using the Otsu algorithm to obtain the light strip area and the background area;
  • Effective area determination module: performs the clustering operation separately on each column of the light stripe area to obtain multiple effective areas, the clustering operation on each column specifically including:
  • Light bar center point determination module used to calculate the center coordinates of each effective area, and use the center coordinates as the light bar center point of the effective area;
  • Light bar acquisition module used to form a set of center points of all light bars, and the set is the light bar coordinates of the initial light bar image.
  • Fig. 4 is the light stripe result obtained with the light stripe extraction method of the present invention, while Fig. 5 and Fig. 6 are the results obtained with the ordinary gray-scale centroid method and the Hessian matrix method; it can be seen that the light stripe in Fig. 4 is extracted best.
  • the clustering-based light stripe extraction method and device of the present invention achieve denoising by clustering single rows or single columns and can process data of multiple rows or columns simultaneously, which not only guarantees the accuracy of light stripe extraction but also improves its speed, with stronger noise resistance and robustness.
  • the invention can be applied to light strip images of arbitrary shapes, has strong adaptability to light strip images, and has good industrial applicability.

Abstract

Provided are a clustering-based light stripe extraction method and device. The method comprises the following steps: A. acquiring a light stripe image; B. segmenting the light stripe image obtained in step A by means of the Otsu algorithm to obtain a light stripe area and a background area; C. performing a clustering operation separately on each row or each column of the light stripe area to obtain multiple effective areas; D. calculating the center coordinates of each effective area obtained in step C, and using the center coordinates as the light stripe center point of the effective area; E. the set of all light stripe center points being the light stripe coordinates of the initial light stripe image. Clustering single rows or single columns achieves denoising, and data of multiple rows or columns can be processed simultaneously, which guarantees the accuracy of light stripe extraction, improves its speed, and gives stronger noise resistance and robustness.

Description

A clustering-based light stripe extraction method and device
Technical Field
The present invention relates to a clustering-based light stripe extraction method and device.
Background Art
The principle of laser triangulation is to project a laser stripe of a certain wavelength onto the measured workpiece with a laser, let the workpiece modulate the stripe, and then capture the modulated light stripe pattern with a CCD camera; combined with the system's conversion relationship, this yields the three-dimensional shape information of the measured workpiece. Therefore, in a line-structured-light 3D measurement system based on triangulation, quickly and accurately obtaining the light stripe center coordinates from the light stripe image is the key to real-time precision measurement.
The survey [Yang Jianhua, Yang Xuerong, Cheng Siyuan, et al. Overview of light stripe center extraction in line-structured-light 3D vision measurement [J]. Journal of Guangdong University of Technology, 2014, (1): 74-78] divides existing light stripe extraction methods into geometric-center methods and energy-center methods. Geometric-center methods include edge methods, skeleton thinning methods, and the like; they give good extraction results when the cross-sectional gray distribution of the stripe is ideally Gaussian and are fast, but their noise resistance is poor and they are unsuitable for images with a low signal-to-noise ratio. Energy-center methods mainly include the direction template method, the gray-scale centroid method, and the Hessian matrix method. The paper [Wu Qingyang, Su Xianyu, Li Jingzhen, et al. A new algorithm for extracting the center of the line-structured light stripe [J]. Journal of Sichuan University (Engineering Science Edition), 2007, 39(4): 151-155] convolves the image with fixed-direction templates and takes the extreme points as the stripe center; this suppresses some noise but is time-consuming and only handles stripes of a fixed direction. The paper [Chen Nian, Guo Yangkuan, Zhang Xiaoqing. Line-structured-light stripe center extraction based on the Hessian matrix [J]. Digital Technology and Application, 2019, 37(03): 126-127] uses the Hessian matrix method, which achieves sub-pixel extraction but is computationally too expensive for real-time online inspection. The ordinary gray-scale centroid method scans in only two directions, cannot handle images of complex stripe shapes, and is easily affected by noise.
Summary of the Invention
The purpose of the present invention is to address the deficiencies of the prior art by proposing a clustering-based light stripe extraction method and device that cluster single rows or single columns to achieve denoising and can process data of multiple rows or columns simultaneously, which guarantees the accuracy of light stripe extraction, improves its speed, and gives stronger noise resistance and robustness.
The present invention is realized through the following technical solution:
A clustering-based light stripe extraction method, comprising the following steps:
A. acquiring the grayscale image of an initial light stripe image, and filtering and denoising the grayscale image to obtain a light stripe image;
B. segmenting the light stripe image obtained in step A by means of the Otsu algorithm to obtain a light stripe area and a background area;
C. performing a clustering operation separately on each row or each column of the light stripe area to obtain multiple effective areas, the clustering operation on each row or column specifically comprising the following steps:
C1. defining P arry as the set of pixel points with pixel value 255 in one row or one column of the segmented light stripe area, with P start and P end the starting point and end point of that set, and defining P vector as a two-dimensional set and P temp as a one-dimensional set;
C2. putting the starting point P start into the one-dimensional set P temp as a seed point;
C3. judging whether the difference of the row coordinates or column coordinates of two adjacent points in the set P arry satisfies p i+1 - p i = 1; if so, going to step C4, otherwise going to step C5; the initial value of p i is the seed point in the one-dimensional set P temp , and p i+1 is the point adjacent to p i ; the two adjacent points share the same column coordinate or the same row coordinate;
C4. when p i+1 is P end , going to step C6; otherwise, putting p i+1 into the set P temp and returning to step C3;
C5. when p i+1 is P end , going to step C6; otherwise, putting the points of the set P temp into the set P vector as a sub-set of P vector , clearing the set P temp , putting p i+1 into the emptied P temp as a new seed point, and returning to step C3;
C6. taking the largest sub-set P vector max of the set P vector as an effective area of the light stripe area;
D. calculating the center coordinates of each effective area obtained in step C, and using the center coordinates as the light stripe center point of that effective area;
E. the set of all light stripe center points being the light stripe coordinates of the initial light stripe image.
Further, the step D comprises the following steps:
D1. judging whether the number of sub-sets P vector max equals 1 or is greater than 1; if it equals 1, going to step D2, otherwise going to step D3;
D2. summing the row coordinates or column coordinates of every point in the sub-set P vector max and dividing by the number of points in that sub-set to obtain the row coordinate or column coordinate of the light stripe center point of the corresponding effective area, the other coordinate of the center point being the same as that of the points of the sub-set P vector max ;
D3. calculating, according to step D2, the light stripe center point coordinates P c of the effective area of each sub-set P vector max , forming all center point coordinates P c into a new set P cvector , and going to step D4;
D4. for each point of the set P cvector , calculating the absolute differences between its row coordinate or column coordinate and those of the other points in the set, dividing the sum of all absolute differences by the number of points in the set to obtain the average difference, and going to step D5;
D5. comparing the row coordinate or column coordinate of each point in the set P cvector with the average difference obtained in step D4; when a point's row coordinate or column coordinate exceeds the average difference, removing that point from its corresponding sub-set P vector max and going to step D6;
D6. forming the sub-sets P vector max remaining after step D5 into a new set P final , dividing the sum of the row coordinates or column coordinates of the points of all sub-sets P vector max in P final by the total number of points in those sub-sets, and using the resulting value as the row coordinate or column coordinate of the light stripe center point of the effective area, the other coordinate of the center point being the same as that of the points of each sub-set P vector max .
Further, for step E, the extracted light stripe coordinates are [(x 1 ,y 1 ),(x 2 ,y 2 )…(x m ,y m )], where m is the number of rows or columns of the light stripe area obtained after the light stripe image is segmented in step B.
Further, the step A comprises: letting the pixel at row i, column j of the initial light stripe image have coordinates (x i ,y j ) and pixel value f′(x i ,y j ), and mean filtering this pixel according to the following formula to obtain the corresponding gray value f(x i ,y j ):
f(x i ,y j ) = (1/w)·Σ (row,col) f′(x row ,y col )
where w is the size of the filtering neighborhood, (row,col) ranges over the pixel coordinates in the neighborhood, and f′(x row ,y col ) is the pixel value at such a coordinate.
Further, the step B comprises:
B1. segmenting the light stripe image obtained in step A into foreground and background according to a segmentation threshold T′, and defining the between-class variance δ of foreground and background: δ 2 = ω 0 ω 1 (μ 0 - μ 1 ) 2 , where ω 0 and ω 1 are the occurrence probabilities of the foreground and background and μ 0 and μ 1 are their gray-level means;
B2. taking the segmentation threshold T′ that maximizes the between-class variance δ as the optimal global segmentation threshold T, and segmenting the light stripe image obtained in step A according to T as follows to obtain the light stripe area and the background area:
g(x i ,y j ) = 255 if f(x i ,y j ) > T, otherwise 0
where g(x i ,y j ) is the pixel value of the segmented light stripe image at coordinates (x i ,y j ).
Further, in the step A, a red laser with a wavelength of 650 nm is projected vertically onto the workpiece to be measured, and the camera then captures an image including the intersection line between the red laser and the workpiece to obtain the initial light stripe image.
Further, in the step C6, the set P vector is traversed in a loop to obtain its largest sub-set P vector max .
The present invention is also realized through the following technical solution:
A clustering-based light stripe extraction device, comprising:
a preprocessing module: for acquiring the grayscale image of an initial light stripe image and filtering and denoising the grayscale image to obtain a light stripe image;
a segmentation module: for segmenting the light stripe image by means of the Otsu algorithm to obtain a light stripe area and a background area;
an effective area determination module: for performing a clustering operation separately on each row or each column of the light stripe area to obtain multiple effective areas, the clustering operation on each row or column specifically comprising:
defining P arry as the set of pixel points with pixel value 255 in one row or one column of the segmented light stripe area, with P start and P end the starting point and end point of that set, and defining P vector as a two-dimensional set and P temp as a one-dimensional set; putting the starting point P start into the one-dimensional set P temp as a seed point; judging whether the difference of the row coordinates or column coordinates of two adjacent points in the set P arry satisfies p i+1 - p i = 1; if it does, putting p i+1 into the set P temp when p i+1 is not the end point P end , and repeating the process until p i+1 is the end point P end ; if it does not, putting the points of the set P temp into the set P vector as a sub-set of P vector when p i+1 is not the end point P end , clearing the set P temp , putting p i+1 into the emptied P temp as a new seed point, and repeating the process until p i+1 is the end point P end ; and finally taking the largest sub-set P vector max of the set P vector as the effective area of the light stripe area;
a light stripe center point determination module: for calculating the center coordinates of each effective area and using the center coordinates as the light stripe center point of that effective area;
a light stripe acquisition module: for forming the center points of all light stripes into a set, the set being the light stripe coordinates of the initial light stripe image.
The present invention has the following beneficial effects:
1. The present invention first filters and denoises the light stripe image and then segments it adaptively with the Otsu algorithm to obtain a rough light stripe area. In processing the light stripe area, single rows or single columns are clustered to remove scattered noise and large spot noise (when the light stripe area is a horizontal pattern, single columns are clustered; when it is a vertical pattern, single rows are clustered), so that multiple effective areas of the light stripe area are obtained quickly and accurately; the light stripe center point of each effective area is then calculated, and collecting these center points gives the light stripe coordinates to be extracted. This guarantees the accuracy of light stripe extraction while improving its speed, with stronger noise resistance and robustness. Moreover, because clustering is performed on single columns or single rows, each clustering process is independent, so data of multiple rows or columns can be processed simultaneously, greatly increasing the processing speed without affecting the denoising effect. Furthermore, since the present invention clusters single rows or single columns separately, it still works for curved light stripe images; that is, it can be applied to light stripe images of any shape and adapts well to light stripe images.
2. The optimal global segmentation threshold T in step B of the present invention is obtained from the between-class variance of the foreground and background of the image being processed, so a different optimal global threshold T can be obtained adaptively for each image, allowing light stripe images to be segmented adaptively.
3. In step D of the present invention, the coordinates of the light stripe center points of the effective areas are calculated by taking means, so the computation is simple, the computational load is small, and the calculation is fast; moreover, when the measured object has a reflective surface and is easily affected by ambient light, coordinate calculation errors can occur, and the method of the present invention greatly reduces such problems.
Brief Description of the Drawings
The present invention is described in further detail below with reference to the accompanying drawings.
Fig. 1 is a flowchart of the present invention.
Fig. 2 is a schematic diagram of line-structured-light measurement based on triangulation.
Fig. 3 is the initial light stripe image captured by the camera.
Fig. 4 is an effect diagram of light stripe extraction by the present invention.
Fig. 5 is an effect diagram of extraction by the ordinary gray-scale centroid method.
Fig. 6 is an effect diagram of light stripe extraction based on the Hessian matrix.
In the figures: 1, camera; 2, laser transmitter; 3, workpiece under test; 4, intersection line.
Detailed Description of the Embodiments
As shown in Fig. 1, the clustering-based light stripe extraction method comprises the following steps:
A. acquiring the grayscale image of an initial light stripe image, and filtering and denoising the grayscale image to obtain a light stripe image;
As shown in Fig. 2, the initial light stripe image is obtained as follows: a red laser with a wavelength of 650 nm is projected vertically by the laser transmitter 2 onto the surface of the workpiece 3 to be measured, forming an intersection line 4 between the red laser and the workpiece 3; the image captured by camera 1 that includes the intersection line 4 is the initial light stripe image, as shown in Fig. 3. The workpiece 3 under test is a white rubber sole model, and the parameters of camera 1 are: model MER-131-210U3M, resolution 1280×1024, CMOS image sensor, full-frame frame rate up to 210 FPS, 1/2″ optical size, weight 57 g, lens model Computar M0814-MP2, focal length 8 mm, aperture range F1.4-F16C, field of view 12.5°×9.3°;
After the initial light stripe image is acquired, it is filtered and denoised by mean filtering;
Mean filtering, also called linear filtering, mainly uses the neighborhood averaging method; its basic principle is to replace each pixel value of the original image with the neighborhood mean, specifically:
letting the pixel at row i, column j of the initial light stripe image have coordinates (x i ,y j ) and pixel value f′(x i ,y j ), and mean filtering this pixel according to the following formula to obtain the corresponding gray value f(x i ,y j ):
f(x i ,y j ) = (1/w)·Σ (row,col) f′(x row ,y col )
where w is the size of the filtering neighborhood, (row,col) ranges over the pixel coordinates in the neighborhood, and f′(x row ,y col ) is the pixel value at such a coordinate;
B. segmenting the light stripe image obtained in step A by means of the Otsu algorithm to obtain a light stripe area and a background area; specifically:
B1. segmenting the light stripe image obtained in step A into foreground and background according to a segmentation threshold T′, and defining the between-class variance δ of foreground and background: δ 2 = ω 0 ω 1 (μ 0 - μ 1 ) 2 , where ω 0 and ω 1 are the occurrence probabilities of the foreground and background and μ 0 and μ 1 are their gray-level means;
B2. taking the segmentation threshold corresponding to the maximum between-class variance δ as the optimal global segmentation threshold T, and segmenting the light stripe image obtained in step A according to T as follows to obtain the light stripe area and the background area:
g(x i ,y j ) = 255 if f(x i ,y j ) > T, otherwise 0
where g(x i ,y j ) is the pixel value of the segmented light stripe image at coordinates (x i ,y j );
C. performing a clustering operation separately on each row or each column of the light stripe area to obtain multiple effective areas; in this embodiment, as shown in Fig. 3, the initial light stripe image is a horizontal image, so the clustering operation is performed on each column, specifically comprising the following steps:
C1. defining P arry as the set of pixel points with pixel value 255 in each column of the segmented light stripe area, with P start and P end the starting point and end point of that set, and defining P vector as a two-dimensional set and P temp as a one-dimensional set;
C2. putting the starting point P start into the one-dimensional set P temp as a seed point;
C3. judging whether the difference of the column coordinates of two adjacent points in the set P arry satisfies p i+1 - p i = 1; if so, going to step C4, otherwise going to step C5; the initial value of p i is the seed point in the one-dimensional set P temp , and p i+1 is the point adjacent to p i ; the two adjacent points share the same row coordinate;
C4. when p i+1 is P end , going to step C6; otherwise, putting p i+1 into the set P temp and returning to step C3;
C5. when p i+1 is P end , going to step C6; otherwise, putting the points of the set P temp into the set P vector as a sub-set of P vector , clearing the set P temp , putting p i+1 into the emptied P temp as a new seed point, and returning to step C3;
C6. traversing the set P vector in a loop to obtain its largest sub-set P vector max , and taking that largest sub-set P vector max as the effective area of the light stripe area;
D. calculating the center coordinates of each effective area obtained in step C6, and using the center coordinates as the light stripe center point of that effective area; specifically comprising the following steps:
D1. judging whether the number of sub-sets P vector max equals 1 or is greater than 1; if it equals 1, going to step D2, otherwise going to step D3;
D2. summing the column coordinates of every point in the sub-set P vector max and dividing by the number of points in that sub-set to obtain the column coordinate of the light stripe center point of the corresponding effective area, the row coordinate of the center point being the same as that of the points of the sub-set P vector max ;
D3. calculating, according to step D2, the light stripe center point coordinates P c of the effective area of each sub-set P vector max , forming all center point coordinates P c into a new set P cvector , and going to step D4;
D4. for each point of the set P cvector , calculating the absolute differences between its column coordinate and those of the other points in the set, dividing the sum of all absolute differences by the number of points in the set to obtain the average difference, and going to step D5;
D5. comparing the column coordinate of each point in the set P cvector with the average difference obtained in step D4; when a point's column coordinate exceeds the average difference, removing that point from its corresponding sub-set P vector max and going to step D6;
D6. forming the sub-sets P vector max remaining after step D5 into a new set P final , dividing the sum of the column coordinates of the points of all sub-sets P vector max in P final by the total number of points in those sub-sets, and using the resulting value as the column coordinate of the light stripe center point of the effective area, the row coordinate of the center point being the same as that of the points of each sub-set P vector max ;
E. the set of all light stripe center points being the light stripe coordinates of the initial light stripe image; as shown in Fig. 4, the light stripe coordinates are [(x 1 ,y 1 ),(x 2 ,y 2 )…(x m ,y m )], where m is the number of columns of the light stripe area obtained after the light stripe image is segmented in step B.
A clustering-based light stripe extraction device comprises:
a preprocessing module: for acquiring the grayscale image of an initial light stripe image and filtering and denoising the grayscale image to obtain a light stripe image;
a segmentation module: for segmenting the light stripe image by means of the Otsu algorithm to obtain a light stripe area and a background area;
an effective area determination module: for performing a clustering operation separately on each column of the light stripe area to obtain multiple effective areas, the clustering operation on each column specifically comprising:
defining P arry as the set of pixel points with pixel value 255 in one column of the segmented light stripe area, with P start and P end the starting point and end point of that set, and defining P vector as a two-dimensional set and P temp as a one-dimensional set; putting the starting point P start into the one-dimensional set P temp as a seed point; judging whether the difference of the column coordinates of two adjacent points in the set P arry satisfies p i+1 - p i = 1; if it does, putting p i+1 into the set P temp when p i+1 is not the end point P end , and repeating the process until p i+1 is the end point P end ; if it does not, putting the points of the set P temp into the set P vector as a sub-set of P vector when p i+1 is not the end point P end , clearing the set P temp , putting p i+1 into the emptied P temp as a new seed point, and repeating the process until p i+1 is the end point P end ; and finally taking the largest sub-set P vector max of the set P vector as the effective area of the light stripe area;
a light stripe center point determination module: for calculating the center coordinates of each effective area and using the center coordinates as the light stripe center point of that effective area;
a light stripe acquisition module: for forming the center points of all light stripes into a set, the set being the light stripe coordinates of the initial light stripe image.
Fig. 4 is the light stripe result obtained with the light stripe extraction method of the present invention, while Fig. 5 and Fig. 6 are the results obtained with the ordinary gray-scale centroid method and the Hessian matrix method; it can be seen that the light stripe in Fig. 4 is extracted best.
The above is only a preferred embodiment of the present invention and therefore does not limit the scope of implementation of the present invention; equivalent changes and modifications made according to the scope of the patent application and the content of the specification shall remain within the scope covered by the patent of the present invention.
Industrial Applicability
The clustering-based light stripe extraction method and device of the present invention achieve denoising by clustering single rows or single columns and can process data of multiple rows or columns simultaneously, which guarantees the accuracy of light stripe extraction, improves its speed, and gives stronger noise resistance and robustness. The present invention can be applied to light stripe images of any shape, adapts well to light stripe images, and has good industrial applicability.

Claims (10)

  1. A clustering-based light stripe extraction method, characterized by comprising the following steps:
    A. acquiring the grayscale image of an initial light stripe image, and filtering and denoising the grayscale image to obtain a light stripe image;
    B. segmenting the light stripe image obtained in step A by means of the Otsu algorithm to obtain a light stripe area and a background area;
    C. performing a clustering operation separately on each row or each column of the light stripe area to obtain multiple effective areas, the clustering operation on each row or column specifically comprising the following steps:
    C1. defining P arry as the set of pixel points with pixel value 255 in one row or one column of the segmented light stripe area, with P start and P end the starting point and end point of that set, and defining P vector as a two-dimensional set and P temp as a one-dimensional set;
    C2. putting the starting point P start into the one-dimensional set P temp as a seed point;
    C3. judging whether the difference of the row coordinates or column coordinates of two adjacent points in the set P arry satisfies p i+1 - p i = 1; if so, going to step C4, otherwise going to step C5; the initial value of p i is the seed point in the one-dimensional set P temp , and p i+1 is the point adjacent to p i ; the two adjacent points share the same column coordinate or the same row coordinate;
    C4. when p i+1 is P end , going to step C6; otherwise, putting p i+1 into the set P temp and returning to step C3;
    C5. when p i+1 is P end , going to step C6; otherwise, putting the points of the set P temp into the set P vector as a sub-set of P vector , clearing the set P temp , putting p i+1 into the emptied P temp as a new seed point, and returning to step C3;
    C6. taking the largest sub-set P vector max of the set P vector as an effective area of the light stripe area;
    D. calculating the center coordinates of each effective area obtained in step C, and using the center coordinates as the light stripe center point of that effective area;
    E. the set of all light stripe center points being the light stripe coordinates of the initial light stripe image.
  2. The clustering-based light stripe extraction method according to claim 1, characterized in that the step D comprises the following steps:
    D1. judging whether the number of sub-sets P vector max equals 1 or is greater than 1; if it equals 1, going to step D2, otherwise going to step D3;
    D2. summing the row coordinates or column coordinates of every point in the sub-set P vector max and dividing by the number of points in that sub-set to obtain the row coordinate or column coordinate of the light stripe center point of the corresponding effective area, the other coordinate of the center point being the same as that of the points of the sub-set P vector max ;
    D3. calculating, according to step D2, the light stripe center point coordinates P c of the effective area of each sub-set P vector max , forming all center point coordinates P c into a new set P cvector , and going to step D4;
    D4. for each point of the set P cvector , calculating the absolute differences between its row coordinate or column coordinate and those of the other points in the set, dividing the sum of all absolute differences by the number of points in the set to obtain the average difference, and going to step D5;
    D5. comparing the row coordinate or column coordinate of each point in the set P cvector with the average difference obtained in step D4; when a point's row coordinate or column coordinate exceeds the average difference, removing that point from its corresponding sub-set P vector max and going to step D6;
    D6. forming the sub-sets P vector max remaining after step D5 into a new set P final , dividing the sum of the row coordinates or column coordinates of the points of all sub-sets P vector max in P final by the total number of points in those sub-sets, and using the resulting value as the row coordinate or column coordinate of the light stripe center point of the effective area, the other coordinate of the center point being the same as that of the points of each sub-set P vector max .
  3. The clustering-based light stripe extraction method according to claim 1, characterized in that, for step E, the extracted light stripe coordinates are [(x 1 ,y 1 ),(x 2 ,y 2 )…(x m ,y m )], where m is the number of rows or columns of the light stripe area obtained after the light stripe image is segmented in step B.
  4. The clustering-based light stripe extraction method according to claim 1, characterized in that the step A comprises: letting the pixel at row i, column j of the initial light stripe image have coordinates (x i ,y j ) and pixel value f′(x i ,y j ), and mean filtering this pixel according to the following formula to obtain the corresponding gray value f(x i ,y j ):
    f(x i ,y j ) = (1/w)·Σ (row,col) f′(x row ,y col )
    where w is the size of the filtering neighborhood, (row,col) ranges over the pixel coordinates in the neighborhood, and f′(x row ,y col ) is the pixel value at such a coordinate.
  5. The clustering-based light stripe extraction method according to claim 1, 2 or 3, characterized in that the step B comprises:
    B1. segmenting the light stripe image obtained in step A into foreground and background according to a segmentation threshold T′, and defining the between-class variance δ of foreground and background: δ 2 = ω 0 ω 1 (μ 0 - μ 1 ) 2 , where ω 0 and ω 1 are the occurrence probabilities of the foreground and background and μ 0 and μ 1 are their gray-level means;
    B2. taking the segmentation threshold T′ that maximizes the between-class variance δ as the optimal global segmentation threshold T, and segmenting the light stripe image obtained in step A according to T as follows to obtain the light stripe area and the background area:
    g(x i ,y j ) = 255 if f(x i ,y j ) > T, otherwise 0
    where g(x i ,y j ) is the pixel value of the segmented light stripe image at coordinates (x i ,y j ).
  6. The clustering-based light stripe extraction method according to claim 1, 2 or 3, characterized in that, in the step A, a red laser with a wavelength of 650 nm is projected vertically onto the workpiece to be measured, forming an intersection line between the red laser and the workpiece, and the camera then captures an image including the intersection line to obtain the initial light stripe image.
  7. The clustering-based light stripe extraction method according to claim 1, 2 or 3, characterized in that, in the step C6, the set P vector is traversed in a loop to obtain its largest sub-set P vector max .
  8. The clustering-based light stripe extraction method according to claim 6, characterized in that the workpiece to be measured is a rubber sole model.
  9. A clustering-based light stripe extraction device, characterized by comprising:
    a preprocessing module: for acquiring the grayscale image of an initial light stripe image and filtering and denoising the grayscale image to obtain a light stripe image;
    a segmentation module: for segmenting the light stripe image by means of the Otsu algorithm to obtain a light stripe area and a background area;
    an effective area determination module: for performing a clustering operation separately on each row or each column of the light stripe area to obtain multiple effective areas, the clustering operation on each row or column specifically comprising:
    defining P arry as the set of pixel points with pixel value 255 in one row or one column of the segmented light stripe area, with P start and P end the starting point and end point of that set, and defining P vector as a two-dimensional set and P temp as a one-dimensional set; putting the starting point P start into the one-dimensional set P temp as a seed point; judging whether the difference of the row coordinates or column coordinates of two adjacent points in the set P arry satisfies p i+1 - p i = 1; if it does, putting p i+1 into the set P temp when p i+1 is not the end point P end , and repeating the process until p i+1 is the end point P end ; if it does not, putting the points of the set P temp into the set P vector as a sub-set of P vector when p i+1 is not the end point P end , clearing the set P temp , putting p i+1 into the emptied P temp as a new seed point, and repeating the process until p i+1 is the end point P end ; and finally taking the largest sub-set P vector max of the set P vector as the effective area of the light stripe area;
    a light stripe center point determination module: for calculating the center coordinates of each effective area and using the center coordinates as the light stripe center point of that effective area;
    a light stripe acquisition module: for forming the center points of all light stripes into a set, the set being the light stripe coordinates of the initial light stripe image.
  10. The clustering-based light stripe extraction device according to claim 9, characterized in that the preprocessing module projects a red laser with a wavelength of 650 nm vertically onto the workpiece to be measured, forming an intersection line between the red laser and the workpiece, and the camera then captures an image including the intersection line to obtain the initial light stripe image.
PCT/CN2022/126593 2022-01-14 2022-10-21 A clustering-based light stripe extraction method and device WO2023134251A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210043195.2A CN114494165A (zh) 2022-01-14 2022-01-14 A clustering-based light stripe extraction method and device
CN202210043195.2 2022-01-14

Publications (1)

Publication Number Publication Date
WO2023134251A1 true WO2023134251A1 (zh) 2023-07-20

Family

ID=81511831

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/126593 WO2023134251A1 (zh) 2022-01-14 2022-10-21 A clustering-based light stripe extraction method and device

Country Status (2)

Country Link
CN (1) CN114494165A (zh)
WO (1) WO2023134251A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114494165A (zh) * 2022-01-14 2022-05-13 泉州华中科技大学智能制造研究院 A clustering-based light stripe extraction method and device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050111700A1 (en) * 2003-10-03 2005-05-26 O'boyle Michael E. Occupant detection system
CN101673412A (zh) * 2009-09-29 2010-03-17 浙江工业大学 Light template matching method for a structured-light three-dimensional vision system
CN108510544A (zh) * 2018-03-30 2018-09-07 大连理工大学 Light stripe localization method based on feature clustering
CN111325831A (zh) * 2020-03-04 2020-06-23 中国空气动力研究与发展中心超高速空气动力研究所 Color structured-light stripe detection method based on hierarchical clustering and belief propagation
CN112669438A (zh) * 2020-12-31 2021-04-16 杭州海康机器人技术有限公司 Image reconstruction method, apparatus and device
CN113554697A (зh) * 2020-04-23 2021-10-26 苏州北美国际高级中学 Accurate cabin-section profile measurement method based on line laser
CN114494165A (zh) * 2022-01-14 2022-05-13 泉州华中科技大学智能制造研究院 A clustering-based light stripe extraction method and device


Also Published As

Publication number Publication date
CN114494165A (zh) 2022-05-13


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22919885

Country of ref document: EP

Kind code of ref document: A1