WO2024011851A1 - A vehicle body door seam detection method based on edge detection - Google Patents

A vehicle body door seam detection method based on edge detection

Info

Publication number
WO2024011851A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
distance
edge
gradient
body door
Prior art date
Application number
PCT/CN2022/141041
Other languages
English (en)
French (fr)
Inventor
陆晓佳
周旻昊
余儒
Original Assignee
南京思飞捷软件科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 南京思飞捷软件科技有限公司 filed Critical 南京思飞捷软件科技有限公司
Publication of WO2024011851A1 publication Critical patent/WO2024011851A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/181Segmentation; Edge detection involving edge growing; involving edge linking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757Matching configurations of points or features
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Definitions

  • the invention relates to the field of vehicle door detection, and in particular to a vehicle body door seam detection method based on edge detection.
  • As one of the most basic components of a vehicle, the body can be called its basic skeleton: almost all parts of the vehicle are mounted on it, so body quality has a direct impact on overall vehicle performance. The car body is formed by joining a large number of stamped parts or metal components through welding, riveting and other mechanical means into a complex shell structure; owing to its advantages over other joining methods, welding has become the main method of body assembly.
  • the door is one of the important parts of the entire car body.
  • the size of the door gap and the degree of misalignment after the door is closed are two very important indicators for ensuring that the door closes properly and that the car operates normally and safely. If either indicator deviates, the closing of the door is affected, degrading the car's sound insulation and waterproofing and compromising personal safety. Convenient, fast and accurate inspection of whether the door gap width is qualified has therefore become an important part of the vehicle production process.
  • to address the problem of body door gap width inspection, the present invention proposes a vehicle body door seam detection method based on edge detection and designs a machine vision inspection system for car body door seams, which rapidly collects door seam image data and quantitatively measures the door seam width through edge detection.
  • the present invention proposes a vehicle body door seam detection method based on edge detection.
  • the specific steps are as follows:
  • Step 1 Build body door seam detection equipment to detect body door seam images.
  • the detection equipment includes: light source controller, CCD camera, laser pointer, network port communication, computer processing system, etc.;
  • Step 2 Filter the image. First, grayscale the detected body door seam, and then filter the grayscaled image;
  • Step 3 Use the first-order partial derivative of the Gaussian function to smooth the filtered image, then use the finite difference of the first-order partial derivative to calculate the gradient and direction of the image to form a gradient image, and use the Roberts operator as the edge detection operator;
  • Step 4 Perform non-maximum suppression on the gradient amplitude in the gradient image, retain only local maximum points, and obtain discrete points of the edge contour gradient map;
  • Step 5 Given a low threshold and a high threshold, record pixels whose gradient value exceeds the high threshold as strong edge pixels and use them as anchor points in the image; set pixels below the low threshold to 0, and connect the anchor points to form an edge image;
  • Step 6 Extract the feature points of the edge image, match the laser-marked image window, find the position perpendicular to the gap, and compute Manhattan distances within the window. Take the maximum Manhattan distance in the window as the Manhattan width of the body door gap to determine the door gap width, and take the minimum over the widths found in the multi-frame images.
  • further, the process of setting up the body door seam detection equipment in step 1 is as follows:
  • the CCD camera is placed directly in front of the door of the vehicle to be inspected, and the laser pointer emits a bar-shaped beam perpendicular to the door gap;
  • the camera is rotated by a certain angle about the beam at the location to be measured, so that in the captured image the laser forms breakpoints across the door seam;
  • the camera also moves along the direction of the door seam to obtain multiple frames of images of the car body door seam, and the image data are uploaded to the computer processing system through network communication.
  • Step 2.1 use the grayscale images of two consecutive frames as the input image and the guide image respectively, and filter through the following formula to obtain the filtered output image:
  • O_i = a_k G_i + b_k, i ∈ ω_k  (1)
  • O is the output image
  • G is the guide image
  • O_i is the output image within the filter window
  • G_i is the guide image within the filter window
  • a_k and b_k are the filter coefficients
  • ω_k is the filter window
  • k is the image location
  • Step 2.2 define the cost function of the image filtering:
  • E(a_k, b_k) = Σ_{i∈ω_k} [ (a_k G_i + b_k − I_i)² + ε a_k² ]  (2)
  • I is the input image
  • I_i is the input image within the filter window
  • ε is the regularization parameter
  • Step 2.3 take the partial derivatives with respect to a_k and b_k and set them to zero, which gives:
  • a_k = Cov(G, I) / (Var(G) + ε)  (3)
  • b_k = mean(I) − a_k · mean(G)  (4)
  • mean denotes averaging
  • Cov is the covariance
  • Var is the variance
  • Step 2.4 after determining a_k and b_k, return to equation (1) to filter the input image and obtain the filtered image.
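Steps 2.1–2.4 describe the guided image filter. The following sketch, assuming the closed-form coefficients of equations (3)–(4) and illustrative values for the window radius and ε, shows one way the filter can be implemented:

```python
import numpy as np

def box_mean(x, r):
    """Mean over a (2r+1)x(2r+1) window (the filter window ω_k),
    same output size as the input, edge-padded at the borders."""
    k = 2 * r + 1
    p = np.pad(x, r, mode="edge")
    s = np.cumsum(np.cumsum(p, axis=0), axis=1)
    s = np.pad(s, ((1, 0), (1, 0)))  # leading zero row/column for the integral image
    return (s[k:, k:] - s[:-k, k:] - s[k:, :-k] + s[:-k, :-k]) / (k * k)

def guided_filter(I, G, r=8, eps=1e-3):
    """Guided image filter: I is the input frame, G the guide frame (step 2.1)."""
    I, G = I.astype(float), G.astype(float)
    mean_I, mean_G = box_mean(I, r), box_mean(G, r)
    cov_GI = box_mean(G * I, r) - mean_G * mean_I   # Cov(G, I)
    var_G = box_mean(G * G, r) - mean_G ** 2        # Var(G)
    a = cov_GI / (var_G + eps)                      # equation (3)
    b = mean_I - a * mean_G                         # equation (4)
    # Average a and b over all windows covering each pixel, then apply eq. (1):
    # O_i = a_k * G_i + b_k.
    return box_mean(a, r) * G + box_mean(b, r)
```

On a constant image the filter is an identity, since Var(G) = 0 forces a = 0 and b = mean(I).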
  • further, the process of producing the gradient image in step 3 is as follows; the filtered image is converted into a gradient image by:
  • g_x = f(x+1, y+1) − f(x, y)  (5)
  • g_y = f(x+1, y) − f(x, y+1)  (6)
  • g_x is the gradient value in the x-axis direction
  • g_y is the gradient value in the y-axis direction
  • the x-axis represents the horizontal direction
  • the y-axis represents the vertical direction
  • f(x, y) is the gray value at position (x, y) in the filtered grayscale image.
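Equations (5)–(6) are the two diagonal differences of the Roberts cross. A minimal sketch (treating the first array axis as x):

```python
import numpy as np

def roberts_gradient(f):
    """Gradients of a grayscale image f per equations (5)-(6); the output
    is one pixel smaller than the input along each axis."""
    f = f.astype(float)
    gx = f[1:, 1:] - f[:-1, :-1]    # g_x = f(x+1, y+1) - f(x, y)
    gy = f[1:, :-1] - f[:-1, 1:]    # g_y = f(x+1, y)   - f(x, y+1)
    magnitude = np.hypot(gx, gy)    # gradient amplitude
    direction = np.arctan2(gy, gx)  # gradient direction
    return magnitude, direction
```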
  • further, the process of obtaining the edge image in step 5 is as follows:
  • Step 5.1 first calculate the Manhattan distance between anchor points, using the distance formula:
  • d = |x_1 − x_2| + |y_1 − y_2|  (7)
  • d is the distance between the two anchor points (x_1, y_1) and (x_2, y_2); x_1 and x_2 are the horizontal coordinates of the anchor points, and y_1 and y_2 are their vertical coordinates;
  • Step 5.2 Screen out anchor points whose Manhattan distance to the surrounding anchor points is small, decompose the distances along different directions to obtain data for different analysis requirements, select for each anchor point the anchor with the smallest Manhattan distance to it, and connect the two anchors to obtain the edge image.
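A minimal sketch of the anchor linking in steps 5.1–5.2 (a greedy, illustrative version; the patent does not specify the exact neighborhood or tie-breaking rules): each anchor is linked to the anchor nearest to it under the Manhattan distance of equation (7).

```python
import numpy as np

def link_anchors(anchors):
    """For each anchor (x, y), find the other anchor with the smallest
    Manhattan distance d = |x1-x2| + |y1-y2| and record the pair as an
    edge segment (greedy sketch of step 5.2)."""
    pts = np.asarray(anchors, dtype=float)
    segments = []
    for i in range(len(pts)):
        d = np.abs(pts - pts[i]).sum(axis=1)  # Manhattan distances, eq. (7)
        d[i] = np.inf                         # exclude the anchor itself
        segments.append((i, int(np.argmin(d))))
    return segments
```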
  • further, the process of obtaining the minimum door gap width in step 6 is as follows:
  • Step 6.1 extract the edge image feature points using the Hessian response:
  • L_Hessian = σ² (L_xx L_yy − L_xy²)  (8)
  • σ is the scale parameter of the current layer
  • L_xx and L_yy are the second-order differential values of the image in x and y respectively
  • L_xy is the second-order cross partial derivative
  • Step 6.2 Determine whether two feature points match by computing the Euclidean distance between their feature vectors and applying the nearest-neighbor to next-nearest-neighbor ratio test; if the ratio is smaller than the set threshold, take that point as a laser mark point;
  • Step 6.3 the camera, using the distance sensor, photographs a board of standard size, and the size corresponding to each pixel is calculated through equation (7); the distance between the camera and the standard board together with the size per pixel forms one set of calibration data;
  • Step 6.4 the camera is measured at different distances from the standard board, and the size corresponding to each camera pixel at each distance is calculated;
  • Step 6.5 calculate the divergence angle of the camera from the data of step 6.4; when measuring the actual object, keep the camera at the same distance from the reference laser line on the surface of the measured object;
  • Step 6.6 if the distance differs from that of step 6.5, convert via the angle and distance computed in step 6.5 to obtain the size represented by each pixel at the measured distance; multiplying by the Manhattan pixel count gives the gap length measured at that distance;
  • Step 6.7 the required size data in different directions are then obtained by decomposing the measured minimum distance along those directions.
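Steps 6.3–6.6 describe a distance-to-scale calibration. One possible model (an assumption on our part; the patent gives no explicit formula) treats each pixel as subtending a fixed divergence angle θ, so the size covered by one pixel at distance D is s = 2·D·tan(θ/2):

```python
import math

def divergence_angle(distances, mm_per_pixel):
    """Estimate the per-pixel divergence angle from calibration pairs
    (steps 6.3-6.5): theta = 2*atan(s / (2*D)), averaged over the pairs."""
    angles = [2.0 * math.atan(s / (2.0 * d))
              for d, s in zip(distances, mm_per_pixel)]
    return sum(angles) / len(angles)

def gap_length_mm(pixel_count, distance, theta):
    """Step 6.6: size represented by one pixel at the measured distance,
    multiplied by the Manhattan pixel count of the gap."""
    return pixel_count * 2.0 * distance * math.tan(theta / 2.0)
```

For example, calibrating at 1000 mm with 0.1 mm per pixel and then measuring a 50-pixel-wide gap at the same distance yields a width of about 5 mm.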
  • the present invention is a vehicle body door seam detection method based on edge detection.
  • the beneficial effects are:
  • the present invention proposes a vehicle body door seam detection method based on edge detection to realize automatic detection of the body door seam width;
  • the filtering process of the present invention has a fast and non-approximate linear time algorithm, which can effectively defog and feather the image;
  • the present invention uses Manhattan distance as the distance for edge detection anchor point screening, which can effectively reduce the phenomenon of over-segmentation and wrong segmentation;
  • This invention can quickly locate the car body door gap window by matching the edge image feature points with the laser mark;
  • the present invention is also applicable to measuring solid objects, as the inverse of gap measurement.
  • Figure 1 is a flow chart of the present invention
  • Figure 2 is an edge detection flow chart of the present invention
  • Figure 3 is a diagram of the shooting mode of the present invention.
  • Figure 4 is a photo of the standard size plate of the present invention.
  • Figure 5 is a schematic diagram of the present invention.
  • the present invention proposes a vehicle body door seam detection method based on edge detection. Using the image data obtained by the CCD camera and combining image filtering, edge detection, feature point matching and other steps, it measures the vehicle body door seam width. The image filtering can effectively defog and feather the image, and using the Manhattan distance for screening edge-detection anchor points effectively reduces over-segmentation and mis-segmentation.
  • Figure 1 is the system structure diagram of the present invention, and Figure 5 is its schematic diagram. The steps of the present invention are introduced in detail below with reference to Figures 1 and 5.
  • Step 1 Build body door seam detection equipment to detect body door seam images (body door seam image data).
  • the detection equipment includes: light source controller, CCD camera, laser pointer, network port communication, computer processing system, etc.
  • Step 2 Filter the image. First, grayscale the detected body door seam, and then filter the grayscaled image; that is, image filtering.
  • Step 2.1 use the grayscale images of two consecutive frames as the input image and the guide image respectively, and filter through the following formula to obtain the filtered output image:
  • O_i = a_k G_i + b_k, i ∈ ω_k  (1)
  • O is the output image
  • G is the guide image
  • O_i is the output image within the filter window
  • G_i is the guide image within the filter window
  • a_k and b_k are the filter coefficients
  • ω_k is the filter window
  • k is the image location.
  • Step 2.2 define the cost function of the image filtering:
  • E(a_k, b_k) = Σ_{i∈ω_k} [ (a_k G_i + b_k − I_i)² + ε a_k² ]  (2)
  • I is the input image
  • I_i is the input image within the filter window
  • ε is the regularization parameter.
  • Step 2.3 take the partial derivatives with respect to a_k and b_k and set them to zero, which gives:
  • a_k = Cov(G, I) / (Var(G) + ε)  (3)
  • b_k = mean(I) − a_k · mean(G)  (4)
  • mean denotes averaging
  • Cov is the covariance
  • Var is the variance
  • Step 2.4 after determining a_k and b_k, return to equation (1) to filter the input image and obtain the filtered image.
  • Step 3 Use the first-order partial derivative of the Gaussian function to smooth the filtered image, then use finite differences of the first-order partial derivatives to compute the gradient and direction of the image, forming a gradient image, with the Roberts operator as the edge detection operator; that is, perform edge detection on the filtered image. The edge detection flow is shown in Figure 2.
  • the Roberts operator is used to compute the gradient amplitude and direction; non-maximum suppression is then applied; Manhattan distances are then computed to select the anchor points; finally, the anchor points are connected to obtain the edge image. The filtered image is converted into a gradient image by:
  • g_x = f(x+1, y+1) − f(x, y)  (5)
  • g_y = f(x+1, y) − f(x, y+1)  (6)
  • g_x is the gradient value in the x-axis direction
  • g_y is the gradient value in the y-axis direction
  • the x-axis represents the horizontal direction
  • the y-axis represents the vertical direction
  • f(x, y) is the gray value at position (x, y) in the filtered grayscale image.
  • Step 4 Perform non-maximum suppression on the gradient amplitude in the gradient image, retain only local maximum points, and obtain discrete points of the edge contour gradient map.
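The non-maximum suppression of step 4 can be sketched as follows (an illustrative, unoptimized version that quantizes the gradient direction to the four neighbor axes):

```python
import numpy as np

def non_maximum_suppression(mag, direction):
    """Keep a pixel only if its gradient magnitude is a local maximum along
    the quantized gradient direction; all other pixels are set to 0."""
    out = np.zeros_like(mag)
    angle = (np.rad2deg(direction) + 180.0) % 180.0
    offsets = {0: (0, 1), 45: (-1, 1), 90: (-1, 0), 135: (-1, -1)}
    for i in range(1, mag.shape[0] - 1):
        for j in range(1, mag.shape[1] - 1):
            # Nearest of the four axis directions (circular distance mod 180).
            q = min(offsets, key=lambda a: min(abs(angle[i, j] - a),
                                               180.0 - abs(angle[i, j] - a)))
            di, dj = offsets[q]
            if mag[i, j] >= mag[i + di, j + dj] and mag[i, j] >= mag[i - di, j - dj]:
                out[i, j] = mag[i, j]
    return out
```

Only the local maxima along each gradient line survive, yielding the discrete points of the edge contour gradient map.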
  • Step 5 Given a low threshold and a high threshold, record pixels whose gradient value exceeds the high threshold as strong edge pixels and use them as anchor points in the image; set pixels below the low threshold to 0, and connect the anchor points to form an edge image.
  • Step 5.1 first calculate the Manhattan distance between anchor points, using the distance formula:
  • d = |x_1 − x_2| + |y_1 − y_2|  (7)
  • d is the distance between the two anchor points (x_1, y_1) and (x_2, y_2); x_1 and x_2 are the horizontal coordinates of the anchor points, and y_1 and y_2 are their vertical coordinates.
  • Step 5.2 Screen out anchor points whose Manhattan distance to the surrounding anchor points is small, decompose the distances along different directions to obtain data for different analysis requirements, select for each anchor point the anchor with the smallest Manhattan distance to it, and connect the two anchors to obtain the edge image.
  • Step 6 Extract the feature points of the edge image, match the laser-marked image window, find the position perpendicular to the gap, and compute Manhattan distances within the window. Take the maximum Manhattan distance in the window as the Manhattan width of the body door gap to determine the door gap width, and take the minimum over the widths found in the multi-frame images (calculate the door gap width).
  • Step 6.1 extract the edge image feature points using the Hessian response:
  • L_Hessian = σ² (L_xx L_yy − L_xy²)  (8)
  • σ is the scale parameter of the current layer
  • L_xx and L_yy are the second-order differential values of the image in x and y respectively
  • L_xy is the second-order cross partial derivative.
  • Step 6.2 Determine whether two feature points match by computing the Euclidean distance between their feature vectors and applying the nearest-neighbor to next-nearest-neighbor ratio test; if the ratio is smaller than the set threshold, take that point as a laser mark point.
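The nearest-neighbor / next-nearest-neighbor ratio test of step 6.2 can be sketched as follows (the descriptor contents and the 0.7 threshold are illustrative assumptions, not values from the patent):

```python
import numpy as np

def ratio_match(desc_a, desc_b, ratio=0.7):
    """Match feature descriptors by Euclidean distance, keeping a match only
    when the nearest distance is clearly smaller than the next nearest."""
    matches = []
    desc_b = np.asarray(desc_b, dtype=float)
    for i, d in enumerate(np.asarray(desc_a, dtype=float)):
        dist = np.linalg.norm(desc_b - d, axis=1)    # Euclidean distances
        order = np.argsort(dist)
        if dist[order[0]] < ratio * dist[order[1]]:  # ratio test
            matches.append((i, int(order[0])))
    return matches
```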
  • Step 6.3 the camera, using the distance sensor, photographs a board of standard size; the shooting setup is shown in Figure 3 and the photograph of the standard-size board in Figure 4. The size corresponding to each pixel is calculated through equation (7), and the distance between the camera and the standard board together with the size per pixel forms one set of calibration data.
  • Step 6.4 the camera is measured at different distances from the standard board, and the size corresponding to each camera pixel at each distance is calculated.
  • Step 6.5 Calculate the divergence angle of the camera from the data of step 6.4; when measuring the actual object, keep the camera at the same distance from the reference laser line on the surface of the measured object.
  • Step 6.6 if the distance differs from that of step 6.5, convert via the angle and distance computed in step 6.5 to obtain the size represented by each pixel at the measured distance; multiplying by the Manhattan pixel count gives the gap length measured at that distance.
  • Step 6.7 the required size data in different directions are then obtained by decomposing the measured minimum distance along those directions.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Quality & Reliability (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A vehicle body door seam detection method based on edge detection: 1) build body door seam detection equipment and capture body door seam images; 2) filter the images; 3) smooth the filtered image with the first-order partial derivative of a Gaussian function, then compute the image gradient and direction with finite differences of the first-order partial derivatives to form a gradient image; 4) apply non-maximum suppression to the gradient magnitudes in the gradient image, keeping only local maxima, to obtain the discrete points of the edge contour gradient map; 5) given a low threshold and a high threshold, record gradient values above the high threshold as strong edge pixels and use them as anchor points in the image, set pixels below the low threshold to 0, and connect the anchor points to form an edge image; 6) extract the feature points of the edge image, match the laser-marked image window, find the position perpendicular to the gap, compute Manhattan distances in that window, and obtain the minimum door seam width. The invention uses edge detection and the Manhattan distance to improve the robustness and accuracy of the detection model.

Description

A vehicle body door seam detection method based on edge detection
This application claims priority to the Chinese patent application filed with the China National Intellectual Property Administration on 14 July 2022 under application number 202210825110.6 and entitled "A vehicle body door seam detection method based on edge detection", the entire contents of which are incorporated herein by reference.
Technical field
The invention relates to the field of vehicle door inspection, and in particular to a vehicle body door seam detection method based on edge detection.
Background
As one of the most basic components of a vehicle, the body can be called its basic skeleton: almost all parts of the vehicle are mounted on it, so body quality has a direct impact on overall vehicle performance. The car body is formed by joining a large number of stamped parts or metal components through welding, riveting and other mechanical means into a complex shell structure; owing to its advantages over other joining methods, welding has become the main method of body assembly.
The door is one of the important parts of the car body. The size of the door gap and the degree of misalignment after the door is closed are two very important indicators for ensuring that the door closes properly and that the car operates normally and safely. If either indicator deviates, the closing of the door is affected, degrading the car's sound insulation and waterproofing and compromising personal safety. Convenient, fast and accurate inspection of whether the door gap width is qualified has therefore become an important part of the vehicle production process. To address the problem of body door gap width inspection, the present invention proposes a vehicle body door seam detection method based on edge detection and designs a machine vision inspection system for car body door seams, which rapidly collects door seam image data and quantitatively measures the door seam width through edge detection.
Summary of the invention
To solve the above problems, the present invention proposes a vehicle body door seam detection method based on edge detection, with the following specific steps:
Step 1: build body door seam detection equipment and capture body door seam images; the detection equipment includes a light source controller, a CCD camera, a laser pointer, network port communication, a computer processing system, and so on;
Step 2: filter the images: first convert the captured body door seam images to grayscale, then filter the grayscale images;
Step 3: smooth the filtered image with the first-order partial derivative of a Gaussian function, then compute the image gradient and direction with finite differences of the first-order partial derivatives to form a gradient image, using the Roberts operator as the edge detection operator;
Step 4: apply non-maximum suppression to the gradient magnitudes in the gradient image, keeping only local maxima, to obtain the discrete points of the edge contour gradient map;
Step 5: given a low threshold and a high threshold, record gradient values above the high threshold as strong edge pixels and use them as anchor points in the image, set pixels below the low threshold to 0, and connect the anchor points to form an edge image;
Step 6: extract the feature points of the edge image, match the laser-marked image window, find the position perpendicular to the gap, compute Manhattan distances in that window, take the maximum Manhattan distance in the window as the Manhattan width of the body door seam to determine the door seam width, and take the minimum over the widths found in the multi-frame images.
Further, the process of setting up the body door seam detection equipment in step 1 is as follows:
The CCD camera is placed directly in front of the door of the vehicle to be inspected; the laser pointer emits a bar-shaped beam perpendicular to the door gap; the camera is rotated by a certain angle about the beam at the location to be measured, so that in the captured image the laser forms breakpoints across the door seam; the camera also moves along the direction of the door seam to obtain multiple frames of images of the car body door seam, and the image data are uploaded to the computer processing system through network communication.
Further, the image filtering process in step 2 is as follows:
Step 2.1: use the grayscale images of two consecutive frames as the input image and the guide image respectively, and filter through the following formula to obtain the filtered output image:
O_i = a_k G_i + b_k, i ∈ ω_k  (1)
where O is the output image, G is the guide image, O_i is the output image within the filter window, G_i is the guide image within the filter window, a_k and b_k are filter coefficients, ω_k is the filter window, and k is the image location;
Step 2.2: define the cost function of the image filtering:
E(a_k, b_k) = Σ_{i∈ω_k} [ (a_k G_i + b_k − I_i)² + ε a_k² ]  (2)
where I is the input image, I_i is the input image within the filter window, and ε is the regularization parameter;
Step 2.3: take the partial derivatives with respect to a_k and b_k and set them to zero, which gives:
a_k = Cov(G, I) / (Var(G) + ε)  (3)
b_k = mean(I) − a_k · mean(G)  (4)
where mean denotes averaging, Cov the covariance, and Var the variance;
Step 2.4: with a_k and b_k determined, return to equation (1) to filter the input image and obtain the filtered image.
Further, the process of producing the gradient image in step 3 is as follows. The filtered image is converted into a gradient image by:
g_x = f(x+1, y+1) − f(x, y)  (5)
g_y = f(x+1, y) − f(x, y+1)  (6)
where g_x is the gradient value along the x axis, g_y is the gradient value along the y axis, the x axis is horizontal, the y axis is vertical, and f(x, y) is the gray value at position (x, y) in the filtered grayscale image.
Further, the process of obtaining the edge image in step 5 is as follows:
Step 5.1: first calculate the Manhattan distance between anchor points, using the distance formula:
d = |x_1 − x_2| + |y_1 − y_2|  (7)
where d is the distance between the two anchor points (x_1, y_1) and (x_2, y_2); x_1 and x_2 are the horizontal coordinates of the anchor points, and y_1 and y_2 are their vertical coordinates;
Step 5.2: screen out anchor points whose Manhattan distance to the surrounding anchor points is small, decompose the distances along different directions to obtain data for different analysis requirements, select for each anchor point the anchor with the smallest Manhattan distance to it, and connect the two anchors to obtain the edge image.
Further, the process of obtaining the minimum door seam width in step 6 is as follows:
Step 6.1: extract the edge image feature points using the Hessian response:
L_Hessian = σ² (L_xx L_yy − L_xy²)  (8)
where σ is the scale parameter of the current layer, L_xx and L_yy are the second-order differential values of the image in x and y respectively, and L_xy is the second-order cross partial derivative;
Step 6.2: determine whether two feature points match by computing the Euclidean distance between their feature vectors and applying the nearest-neighbor to next-nearest-neighbor ratio test, comparing the ratio with a set threshold; if the ratio is smaller, take that point as a laser mark point;
Step 6.3: the camera, using the distance sensor, photographs a board of standard size, and the size corresponding to each pixel is calculated through equation (7); the distance between the camera and the standard board together with the size per pixel forms one set of calibration data;
Step 6.4: the camera is measured at different distances from the standard board, and the size corresponding to each camera pixel at each distance is calculated;
Step 6.5: calculate the divergence angle of the camera from the data of step 6.4; when measuring the actual object, keep the camera at the same distance from the reference laser line on the surface of the measured object;
Step 6.6: if the distance differs from that of step 6.5, convert via the angle and distance computed in step 6.5 to obtain the size represented by each pixel at the measured distance; multiplying by the Manhattan pixel count gives the gap length measured at that distance;
Step 6.7: the required size data in different directions are then obtained by decomposing the measured minimum distance along those directions.
The beneficial effects of this vehicle body door seam detection method based on edge detection are:
1. The invention proposes a vehicle body door seam detection method based on edge detection, realizing automatic detection of the body door seam width;
2. The filtering stage has a fast, non-approximate linear-time algorithm that can effectively defog and feather the image;
3. Using the Manhattan distance for screening edge-detection anchor points effectively reduces over-segmentation and mis-segmentation;
4. Matching the edge image feature points against the laser marks allows the body door seam window to be located quickly;
5. The invention is also applicable to measuring solid objects, as the inverse of gap measurement.
Brief description of the drawings
Figure 1 is a flow chart of the invention;
Figure 2 is the edge detection flow chart of the invention;
Figure 3 shows the shooting setup of the invention;
Figure 4 is a photograph of the standard-size board of the invention;
Figure 5 is a schematic diagram of the invention.
Detailed description of the embodiments
The invention is described in further detail below with reference to the drawings and specific embodiments.
The present invention proposes a vehicle body door seam detection method based on edge detection. Using the image data obtained by the CCD camera and combining image filtering, edge detection, feature point matching and other steps, it measures the vehicle body door seam width. The image filtering can effectively defog and feather the image, and the Manhattan distance is used for screening edge-detection anchor points, effectively reducing over-segmentation and mis-segmentation. Figure 1 is the system structure diagram of the invention and Figure 5 is its schematic diagram. The steps of the invention are introduced in detail below with reference to Figures 1 and 5.
Step 1: build body door seam detection equipment and capture body door seam images (body door seam image data); the detection equipment includes a light source controller, a CCD camera, a laser pointer, network port communication, a computer processing system, and so on.
The CCD camera is placed directly in front of the door of the vehicle to be inspected, and the laser pointer shines perpendicular to the door gap; the laser forms breakpoints across the door seam; the camera also moves along the direction of the door seam to obtain multiple frames of images of the car body door seam, and the image data are uploaded to the computer processing system through network communication.
Step 2: filter the images: first convert the captured body door seam images to grayscale, then filter the grayscale images; that is, image filtering.
Step 2.1: use the grayscale images of two consecutive frames as the input image and the guide image respectively, and filter through the following formula to obtain the filtered output image:
O_i = a_k G_i + b_k, i ∈ ω_k  (1)
where O is the output image, G is the guide image, O_i is the output image within the filter window, G_i is the guide image within the filter window, a_k and b_k are filter coefficients, ω_k is the filter window, and k is the image location.
Step 2.2: define the cost function of the image filtering:
E(a_k, b_k) = Σ_{i∈ω_k} [ (a_k G_i + b_k − I_i)² + ε a_k² ]  (2)
where I is the input image, I_i is the input image within the filter window, and ε is the regularization parameter.
Step 2.3: take the partial derivatives with respect to a_k and b_k and set them to zero, which gives:
a_k = Cov(G, I) / (Var(G) + ε)  (3)
b_k = mean(I) − a_k · mean(G)  (4)
where mean denotes averaging, Cov the covariance, and Var the variance.
Step 2.4: with a_k and b_k determined, return to equation (1) to filter the input image and obtain the filtered image.
Step 3: smooth the filtered image with the first-order partial derivative of a Gaussian function, then compute the image gradient and direction with finite differences of the first-order partial derivatives to form a gradient image, using the Roberts operator as the edge detection operator; that is, perform edge detection on the filtered image. The edge detection flow is shown in Figure 2: first compute the gradient amplitude and direction of the filtered image with the Roberts operator, then apply non-maximum suppression, then compute Manhattan distances to select anchor points, and finally connect the anchor points to obtain the edge image.
The filtered image is converted into a gradient image by:
g_x = f(x+1, y+1) − f(x, y)  (5)
g_y = f(x+1, y) − f(x, y+1)  (6)
where g_x is the gradient value along the x axis, g_y is the gradient value along the y axis, the x axis is horizontal, the y axis is vertical, and f(x, y) is the gray value at position (x, y) in the filtered grayscale image.
Step 4: apply non-maximum suppression to the gradient magnitudes in the gradient image, keeping only local maxima, to obtain the discrete points of the edge contour gradient map.
Step 5: given a low threshold and a high threshold, record gradient values above the high threshold as strong edge pixels and use them as anchor points in the image, set pixels below the low threshold to 0, and connect the anchor points to form an edge image.
Step 5.1: first calculate the Manhattan distance between anchor points, using the distance formula:
d = |x_1 − x_2| + |y_1 − y_2|  (7)
where d is the distance between the two anchor points (x_1, y_1) and (x_2, y_2); x_1 and x_2 are the horizontal coordinates of the anchor points, and y_1 and y_2 are their vertical coordinates.
Step 5.2: screen out anchor points whose Manhattan distance to the surrounding anchor points is small, decompose the distances along different directions to obtain data for different analysis requirements, select for each anchor point the anchor with the smallest Manhattan distance to it, and connect the two anchors to obtain the edge image.
Step 6: extract the feature points of the edge image, match the laser-marked image window, find the position perpendicular to the gap, compute Manhattan distances in that window, take the maximum Manhattan distance in the window as the Manhattan width of the body door seam to determine the door seam width, and take the minimum over the widths found in the multi-frame images (calculate the door seam width).
Step 6.1: extract the edge image feature points using the Hessian response:
L_Hessian = σ² (L_xx L_yy − L_xy²)  (8)
where σ is the scale parameter of the current layer, L_xx and L_yy are the second-order differential values of the image in x and y respectively, and L_xy is the second-order cross partial derivative.
Step 6.2: determine whether two feature points match by computing the Euclidean distance between their feature vectors and applying the nearest-neighbor to next-nearest-neighbor ratio test, comparing the ratio with a set threshold; if the ratio is smaller, take that point as a laser mark point.
Step 6.3: the camera, using the distance sensor, photographs a board of standard size; the shooting setup is shown in Figure 3 and the photograph of the standard-size board in Figure 4; the size corresponding to each pixel is calculated through equation (7), and the distance between the camera and the standard board together with the size per pixel forms one set of calibration data.
Step 6.4: the camera is measured at different distances from the standard board, and the size corresponding to each camera pixel at each distance is calculated.
Step 6.5: calculate the divergence angle of the camera from the data of step 6.4; when measuring the actual object, keep the camera at the same distance from the reference laser line on the surface of the measured object.
Step 6.6: if the distance differs from that of step 6.5, convert via the angle and distance computed in step 6.5 to obtain the size represented by each pixel at the measured distance; multiplying by the Manhattan pixel count gives the gap length measured at that distance.
Step 6.7: the required size data in different directions are then obtained by decomposing the measured minimum distance along those directions.
The above is only a preferred embodiment of the invention and is not a limitation of the invention in any other form; any modification or equivalent variation made according to the technical essence of the invention still falls within the scope of protection claimed by the invention.

Claims (5)

  1. A vehicle body door seam detection method based on edge detection, characterized by the following specific steps:
    Step 1: build body door seam detection equipment and capture body door seam images; the detection equipment includes a light source controller, a CCD camera, a laser pointer, network port communication and a computer processing system;
    Step 2: filter the images: first convert the captured body door seam images to grayscale, then filter the grayscale images;
    Step 3: smooth the filtered image with the first-order partial derivative of a Gaussian function, then compute the image gradient and direction with finite differences of the first-order partial derivatives to form a gradient image, using the Roberts operator as the edge detection operator;
    Step 4: apply non-maximum suppression to the gradient magnitudes in the gradient image, keeping only local maxima, to obtain the discrete points of the edge contour gradient map;
    Step 5: given a low threshold and a high threshold, record gradient values above the high threshold as strong edge pixels and use them as anchor points in the image, set pixels below the low threshold to 0, and connect the anchor points to form an edge image;
    Step 6: extract the feature points of the edge image, match the laser-marked image window, find the position perpendicular to the gap, compute Manhattan distances in that window, take the maximum Manhattan distance in the window as the Manhattan width of the body door seam to determine the door seam width, and take the minimum over the widths found in the multi-frame images.
  2. The vehicle body door seam detection method based on edge detection according to claim 1, characterized in that the process of setting up the body door seam detection equipment in step 1 is as follows:
    The CCD camera is placed directly in front of the door of the vehicle to be inspected, and the laser pointer shines perpendicular to the door gap; the laser forms breakpoints across the door seam; the camera also moves along the direction of the door seam to obtain multiple frames of images of the car body door seam, and the image data are uploaded to the computer processing system through network communication.
  3. The vehicle body door seam detection method based on edge detection according to claim 1, characterized in that the image filtering process in step 2 is as follows:
    Step 2.1: use the grayscale images of two consecutive frames as the input image and the guide image respectively, and filter through the following formula to obtain the filtered output image:
    O_i = a_k G_i + b_k, i ∈ ω_k  (1)
    where O is the output image, G is the guide image, O_i is the output image within the filter window, G_i is the guide image within the filter window, a_k and b_k are filter coefficients, ω_k is the filter window, and k is the image location;
    Step 2.2: define the cost function of the image filtering:
    E(a_k, b_k) = Σ_{i∈ω_k} [ (a_k G_i + b_k − I_i)² + ε a_k² ]  (2)
    where I is the input image, I_i is the input image within the filter window, and ε is the regularization parameter;
    Step 2.3: take the partial derivatives with respect to a_k and b_k and set them to zero, which gives:
    a_k = Cov(G, I) / (Var(G) + ε)  (3)
    b_k = mean(I) − a_k · mean(G)  (4)
    where mean denotes averaging, Cov the covariance, and Var the variance;
    Step 2.4: with a_k and b_k determined, return to equation (1) to filter the input image and obtain the filtered image.
  4. The vehicle body door seam detection method based on edge detection according to claim 1, characterized in that the process of obtaining the edge image in step 5 is as follows:
    Step 5.1: first calculate the Manhattan distance between anchor points, using the distance formula:
    d = |x_1 − x_2| + |y_1 − y_2|  (7)
    where d is the distance between the two anchor points (x_1, y_1) and (x_2, y_2); x_1 and x_2 are the horizontal coordinates of the anchor points, and y_1 and y_2 are their vertical coordinates;
    Step 5.2: screen out anchor points whose Manhattan distance to the surrounding anchor points is small, decompose the distances along different directions to obtain data for different analysis requirements, select for each anchor point the anchor with the smallest Manhattan distance to it, and connect the two anchors to obtain the edge image.
  5. The vehicle body door seam detection method based on edge detection according to claim 1, characterized in that the process of obtaining the minimum door seam width in step 6 is as follows:
    Step 6.1: extract the edge image feature points using the Hessian response:
    L_Hessian = σ² (L_xx L_yy − L_xy²)  (8)
    where σ is the scale parameter of the current layer, L_xx and L_yy are the second-order differential values of the image in x and y respectively, and L_xy is the second-order cross partial derivative;
    Step 6.2: determine whether two feature points match by computing the Euclidean distance between their feature vectors and applying the nearest-neighbor to next-nearest-neighbor ratio test, comparing the ratio with a set threshold; if the ratio is smaller, take that point as a laser mark point;
    Step 6.3: the camera, using the distance sensor, photographs a board of standard size, and the size corresponding to each pixel is calculated through equation (7); the distance between the camera and the standard board together with the size per pixel forms one set of calibration data;
    Step 6.4: the camera is measured at different distances from the standard board, and the size corresponding to each camera pixel at each distance is calculated;
    Step 6.5: calculate the divergence angle of the camera from the data of step 6.4; when measuring the actual object, keep the camera at the same distance from the reference laser line on the surface of the measured object;
    Step 6.6: if the distance differs from that of step 6.5, convert via the angle and distance computed in step 6.5 to obtain the size represented by each pixel at the measured distance; multiplying by the Manhattan pixel count gives the gap length measured at that distance;
    Step 6.7: the required size data in different directions are then obtained by decomposing the measured minimum distance along those directions.
PCT/CN2022/141041 2022-07-14 2022-12-22 一种基于边缘检测的车身门缝检测方法 WO2024011851A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210825110.6 2022-07-14
CN202210825110.6A CN115187552B (zh) 2022-07-14 2022-07-14 一种基于边缘检测的车身门缝检测方法

Publications (1)

Publication Number Publication Date
WO2024011851A1 true WO2024011851A1 (zh) 2024-01-18

Family

ID=83519271

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/141041 WO2024011851A1 (zh) 2022-07-14 2022-12-22 一种基于边缘检测的车身门缝检测方法

Country Status (2)

Country Link
CN (1) CN115187552B (zh)
WO (1) WO2024011851A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118037734A (zh) * 2024-04-12 2024-05-14 济宁丰玺生物材料有限公司 一种全降解塑料制品质量检测方法

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115187552B (zh) * 2022-07-14 2023-04-25 南京思飞捷软件科技有限公司 一种基于边缘检测的车身门缝检测方法
CN115984314B (zh) * 2022-11-25 2023-06-23 哈尔滨理工大学 一种基于计算全息二阶微分的图像边缘检测方法及系统

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090238488A1 (en) * 2008-03-20 2009-09-24 Chung-Ang University Industry-Academy Cooperation Foundation Apparatus and method for image interpolation based on low pass filtering
CN108470347A (zh) * 2017-02-23 2018-08-31 南宁市富久信息技术有限公司 一种彩色图像边缘检测方法
CN113838077A (zh) * 2021-08-17 2021-12-24 淮阴工学院 一种基于改进的Canny算子的亚像素边缘提取方法
CN113850786A (zh) * 2021-09-27 2021-12-28 深圳市深视智能科技有限公司 一种检测车辆车门缝隙参数的方法、装置及测量设备
CN114399522A (zh) * 2022-01-14 2022-04-26 东南大学 一种基于高低阈值的Canny算子的边缘检测方法
CN115187552A (zh) * 2022-07-14 2022-10-14 南京思飞捷软件科技有限公司 一种基于边缘检测的车身门缝检测方法

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103075973A (zh) * 2012-12-31 2013-05-01 吉林大学 车身缝隙尺寸非接触在线检测方法
CN111084606A (zh) * 2019-10-12 2020-05-01 深圳壹账通智能科技有限公司 基于图像识别的视力检测方法、装置、及计算机设备
CN113570654A (zh) * 2021-06-16 2021-10-29 上海工程技术大学 基于最小外接矩形的汽车表面缝隙尺寸检测方法及其应用

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090238488A1 (en) * 2008-03-20 2009-09-24 Chung-Ang University Industry-Academy Cooperation Foundation Apparatus and method for image interpolation based on low pass filtering
CN108470347A (zh) * 2017-02-23 2018-08-31 南宁市富久信息技术有限公司 一种彩色图像边缘检测方法
CN113838077A (zh) * 2021-08-17 2021-12-24 淮阴工学院 一种基于改进的Canny算子的亚像素边缘提取方法
CN113850786A (zh) * 2021-09-27 2021-12-28 深圳市深视智能科技有限公司 一种检测车辆车门缝隙参数的方法、装置及测量设备
CN114399522A (zh) * 2022-01-14 2022-04-26 东南大学 一种基于高低阈值的Canny算子的边缘检测方法
CN115187552A (zh) * 2022-07-14 2022-10-14 南京思飞捷软件科技有限公司 一种基于边缘检测的车身门缝检测方法

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
KAIMING HE, JIAN SUN, XIAOOU TANG: "Guided Image Filtering", IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, IEEE COMPUTER SOCIETY., USA, vol. 35, no. 6, 1 June 2013 (2013-06-01), USA , pages 1397 - 1409, XP055256301, ISSN: 0162-8828, DOI: 10.1109/TPAMI.2012.213 *
LI SHIXIONG; CAO GUANGZHONG; LI QING;PENG YEPING; LV JIEYIN: "Research on Anchor Points based Edge Detection Optimization", JOURNAL OF ELECTRONIC MEASUREMENT AND INSTRUMENT, vol. 32, no. 11, 15 November 2018 (2018-11-15), pages 9 - 16, XP009551927, ISSN: 1000-7105 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118037734A (zh) * 2024-04-12 2024-05-14 济宁丰玺生物材料有限公司 一种全降解塑料制品质量检测方法

Also Published As

Publication number Publication date
CN115187552A (zh) 2022-10-14
CN115187552B (zh) 2023-04-25

Similar Documents

Publication Publication Date Title
WO2024011851A1 (zh) 一种基于边缘检测的车身门缝检测方法
WO2022007431A1 (zh) 一种Micro QR二维码的定位方法
CN105158257B (zh) 滑板测量方法及装置
CN109752392B (zh) 一种pcb板缺陷类型检测系统和方法
CN106996748A (zh) 一种基于双目视觉的轮径测量方法
CN110796728A (zh) 一种基于扫描式激光雷达的非合作航天器三维重建方法
CN102175700A (zh) 数字x射线图像焊缝分割和缺陷检测方法
CN107622480B (zh) 一种Kinect深度图像增强方法
WO2018072333A1 (zh) 一种元件错件检测方法和装置
US11132806B2 (en) Image processing apparatus and image processing method
CN108491810A (zh) 基于背景建模和双目视觉的车辆限高方法及系统
CN110189375A (zh) 一种基于单目视觉测量的图像目标识别方法
CN114627080B (zh) 基于计算机视觉的车辆冲压配件缺陷检测方法
US20130176300A1 (en) Disparity maps in uniform areas
CN113155839A (zh) 一种基于机器视觉的钢板外表面缺陷在线检测方法
CN103700082B (zh) 基于对偶四元数相对定向的图像拼接方法
CN113607058B (zh) 一种基于机器视觉的直刃刀尺寸检测方法及系统
CN113705564B (zh) 一种指针式仪表识别读数方法
CN109671084B (zh) 一种工件形状的测量方法
CN110580697B (zh) 一种从超声视频图像中测量胎儿颈背透明物厚度的视频图像处理方法及系统
WO2023035404A1 (zh) 一种基于单目视觉的相机平面和目标平面夹角估算方法
KR20140033597A (ko) 영상정보를 활용한 흘수 측정방법
CN114663882A (zh) 基于深度学习的电动汽车底盘划痕三维检测方法
CN113963012A (zh) 一种焊缝边缘检测方法
Chen et al. Design and Implementation of Crack Width Detection System Based on Android

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22950958

Country of ref document: EP

Kind code of ref document: A1