WO2023060927A1 - 3D grating detection method and apparatus, computer device, and readable storage medium - Google Patents

3D grating detection method and apparatus, computer device, and readable storage medium

Info

Publication number
WO2023060927A1
WO2023060927A1 (PCT/CN2022/099914)
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
cloud information
workpiece
axis
point
Prior art date
Application number
PCT/CN2022/099914
Other languages
English (en)
French (fr)
Inventor
崔岩
刘强
Original Assignee
五邑大学
广东四维看看智能设备有限公司
中德(珠海)人工智能研究院有限公司
珠海市四维时代网络科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 五邑大学, 广东四维看看智能设备有限公司, 中德(珠海)人工智能研究院有限公司, 珠海市四维时代网络科技有限公司
Publication of WO2023060927A1 publication Critical patent/WO2023060927A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/002Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates
    • G06T5/70
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/337Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85Stereo camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • G06T2207/10012Stereo images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20024Filtering details
    • G06T2207/20028Bilateral filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30164Workpiece; Machine component

Definitions

  • The invention relates to the technical field of workpiece detection, and in particular to a 3D grating detection method and apparatus, a computer device, and a readable storage medium.
  • 2D visual inspection systems are an indispensable part of automated machines because they provide non-contact, real-time inspection.
  • At present, 2D detection with contour-based pattern matching is used to identify the position, size, and orientation of a workpiece and to check whether the target workpiece is installed correctly.
  • However, 2D detection does not support shape-related measurement, is easily affected by ambient lighting conditions, and is sensitive to motion of the inspected object; these limitations degrade the measurement results.
  • In view of the above, the inventor believes that existing workpiece detection methods suffer from low detection accuracy.
  • To improve workpiece detection accuracy, the present invention provides a 3D grating detection method and apparatus, a computer device, and a readable storage medium.
  • The present invention provides a 3D grating detection method characterized by improved workpiece detection accuracy.
  • A 3D grating detection method comprises the following steps:
  • a grating scanner is embedded between the lenses of the binocular camera to provide a light source for the binocular camera, and the binocular camera is calibrated;
  • 3D modeling is performed based on the bilaterally filtered point cloud information to obtain a 3D model;
  • the workpiece is detected based on the 3D model, dimensional data of the workpiece is acquired along the four dimensions of X axis, Y axis, Z axis, and pose, and it is judged whether the workpiece is installed in place.
  • The present invention may be further configured such that the step of judging whether the workpiece is installed in place includes the steps below.
  • The present invention may be further configured such that, after the step of smoothing the point cloud information with bilateral filtering, the following steps are further included:
  • the screened point cloud information is used as the processed point cloud information.
  • The present invention may be further configured such that the step of smoothing the point cloud information with bilateral filtering includes the steps below.
  • The product of the two weights is convolved with the image matrix.
  • The present invention may be further configured such that calibrating the binocular camera specifically includes the steps below.
  • The present invention may be further configured such that, after the step of obtaining point cloud information including three-dimensional coordinates, the following step is further included:
  • a preset low dynamic range image corresponding to each exposure time is selected, and a high dynamic range image is synthesized to optimize the point cloud information.
  • The grating scanner is embedded between the lenses of the binocular camera, so that a grating light source sits between the two lenses and the grating scanner illuminates the scene for the binocular camera, improving the quality of the photos it takes. The binocular camera is calibrated to obtain its measurement parameters, from which the target depth value is computed so that three-dimensional information is recovered from the two-dimensional images; the depth between the workpiece under test and the binocular camera is obtained, and the spatial position of the workpiece is thereby determined. The binocular camera photographs the workpiece, images of the workpiece are collected, and point cloud information including three-dimensional coordinates is obtained, yielding the three-dimensional information of the workpiece under test. The point cloud information is smoothed with bilateral filtering while boundary information is retained, so that both spatial proximity and color similarity are taken into account.
  • For each point cloud point, the average distance from it to all of its neighboring points is computed; assuming the resulting distribution is Gaussian with a mean and a standard deviation, all points whose average distance lies outside the threshold defined by the global distance mean and standard deviation can be regarded as outliers and removed from the data set. Removing outlier points optimizes the quality of the 3D model and further improves the detection accuracy of the workpiece.
  • Bilateral filtering smooths the point cloud information: the spatial-proximity weight of each point cloud point to the center point is multiplied by its pixel-value-similarity weight, and the product is convolved with the image matrix, effectively performing the convolution of a two-dimensional Gaussian normal distribution over the image matrix, optimizing the quality of the collected point cloud information and achieving edge-preserving denoising.
  • A high dynamic range image is synthesized from low dynamic range images, improving the image quality of the collected point cloud information and thus its overall quality, which helps further improve the detection accuracy of the workpiece.
  • The present invention provides a 3D grating detection apparatus characterized by improved workpiece detection accuracy.
  • A 3D grating detection apparatus comprises:
  • a light source module configured to embed the grating scanner between the lenses of the binocular camera to provide a light source for the binocular camera;
  • a calibration module configured to calibrate the binocular camera;
  • a point cloud information module configured to cause the binocular camera to photograph the workpiece, collect images of the workpiece, and obtain point cloud information including three-dimensional coordinates;
  • a processing module configured to smooth the point cloud information with bilateral filtering while retaining boundary information;
  • a modeling module configured to perform 3D modeling based on the bilaterally filtered point cloud information to obtain a 3D model;
  • a detection module configured to detect the workpiece based on the 3D model, acquire dimensional data of the workpiece along the four dimensions of X axis, Y axis, Z axis, and pose, and judge whether the workpiece is installed in place.
  • The present invention may be further configured to further comprise:
  • a screening module configured to calculate, based on a Gaussian distribution, the average distance from each point cloud point obtained by the point cloud information module to all of its neighboring points, and to remove all point cloud information whose average distance falls outside the threshold set from the global distance mean and standard deviation, so as to screen the point cloud information.
  • The present invention provides a computer device characterized by improved workpiece detection accuracy.
  • A computer device comprises a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the above 3D grating detection method when executing the computer program.
  • The present invention provides a computer-readable storage medium characterized by improved workpiece detection accuracy.
  • A computer-readable storage medium stores a computer program which, when executed by a processor, implements the steps of the above 3D grating detection method.
  • The present invention includes at least one of the following beneficial technical effects:
  • FIG. 1 is a schematic flowchart of a 3D grating detection method according to an embodiment of the present invention.
  • FIG. 2 is a schematic flowchart of smoothing point cloud information with bilateral filtering.
  • FIG. 3 is a schematic flowchart of judging whether a workpiece is installed in place.
  • FIG. 4 is a structural block diagram of a 3D grating detection apparatus according to an embodiment of the present invention.
  • An embodiment of the present invention provides a 3D grating detection method, the main steps of which are described as follows.
  • S3: photograph the workpiece with the binocular camera, collect images of the workpiece, and obtain point cloud information including three-dimensional coordinates;
  • S8: detect the workpiece based on the 3D model, acquire dimensional data of the workpiece along the four dimensions of X axis, Y axis, Z axis, and pose, and judge whether the workpiece is installed in place.
  • S1: embed the grating scanner between the lenses of the binocular camera, so that a grating light source sits between the two lenses and the grating scanner illuminates the scene for the binocular camera by frequency-band scanning.
  • A sine-wave scanning pattern may be used, so that the photos taken by the binocular camera are of better quality.
  • The step of calibrating the binocular camera specifically includes: based on a calibration-board method, photographing a set of calibration-board images, presetting corner points, detecting the corner coordinates of the calibration board, and calibrating the camera with Zhang Zhengyou's calibration algorithm.
  • This yields the intrinsic and extrinsic parameters of the binocular camera, the homography matrix, the camera focal length f, and the baseline b of the left and right cameras, i.e. the measurement parameters of the binocular camera.
  • Zhang Zhengyou's calibration algorithm is a camera calibration method based on a single planar checkerboard. The method is simple and practical and requires no additional equipment beyond a printed checkerboard; calibration is easy, the camera and calibration board can be placed arbitrarily, and the calibration accuracy is high.
  • The original images are rectified according to the measurement parameters, so that the two rectified images lie in the same plane and are parallel to each other.
  • From the disparity, the target depth value z between the workpiece and the binocular camera is calculated, so that three-dimensional information is obtained from the two-dimensional images, the depth between the workpiece under test and the binocular camera is determined, and hence the spatial position of the workpiece; the position of the binocular camera is calibrated accordingly.
  • S3: photograph the workpiece with the binocular camera, collect images of the workpiece, and obtain point cloud information including three-dimensional coordinates, so as to acquire the three-dimensional information of the workpiece under test.
  • S5: the step of smoothing the point cloud information with bilateral filtering includes:
  • S51: preset a center point, and from the spatial proximity of each point cloud point to the center point, calculate a spatial-proximity weight for each point cloud point;
  • Bilateral filtering is a type of nonlinear filtering. It combines the spatial proximity of the image with the similarity of pixel values, using a combination of two Gaussian filters: one Gaussian filter computes the spatial-proximity weight (the principle of the ordinary Gaussian filter), while the other computes the pixel-value-similarity weight. When filtering, bilateral filtering considers spatial proximity and color similarity at the same time; it filters out noise and smooths the image while preserving edges.
  • In S51 the center point is preset: for a pixel in the image, a square neighborhood of a preset size is defined, the origin (0,0) of the neighborhood is taken as the center point, and the pixel's coordinates are (x,y).
  • Every pixel in the image is scanned with the preset image matrix.
  • The image matrix is a mathematical matrix of fixed size composed of numerical parameters, and the entries of the matrix are weights: the closer to the center point, the larger the weight.
  • The weighted average yields the value of the pixel, i.e. its spatial proximity to the center point, and the spatial proximity of each point cloud point to the center point is computed in this way.
  • The pixel-value similarities of the point cloud points are weighted and averaged, and the pixel-value-similarity weight of each point cloud point is calculated.
  • Bilateral filtering smooths the point cloud information while retaining boundary information, taking spatial proximity and color similarity into account; it filters out noise and smooths the image while preserving edges, making the three-dimensional information of the workpiece under test more accurate and complete.
  • S7: perform 3D modeling based on the bilaterally filtered point cloud information to obtain a 3D model.
  • S8: detect the workpiece based on the 3D model, acquire dimensional data of the workpiece along the four dimensions of X axis, Y axis, Z axis, and pose, and judge whether the workpiece is installed in place.
  • The 3D Hough and GrabCut algorithms are also involved when the workpiece is detected based on the 3D model.
  • The GrabCut algorithm segments the workpiece target from the background, with fast segmentation and good results; the 3D Hough algorithm applies the classic Hough voting idea to 3D scene object recognition, taking the position with the most votes as the position of the target object's centroid in the scene, so as to obtain an accurate target position.
  • S8: the step of judging whether the workpiece is installed in place includes:
  • S81: for the four dimensions of X axis, Y axis, Z axis, and pose, preset a standard value and a corresponding tolerance value for each dimension;
  • S82: compute the difference between each of the X axis, Y axis, Z axis, and pose and its corresponding standard value, and compare the difference with the corresponding tolerance value to obtain a result for that dimension;
  • S61: based on a Gaussian distribution, compute the average distance from each point cloud point to all of its neighboring points;
  • S62: remove all point cloud information whose average distance falls outside the threshold set from the global distance mean and standard deviation, so as to screen the point cloud information;
  • The boundary of the function is determined by the global distance mean and standard deviation; after the point cloud information is substituted into the function, the points whose actual value exceeds the computed value are removed.
  • The 3D grating detection method inspects the workpiece under test in four dimensions. If the workpiece's error in any dimension exceeds the allowed range, it is judged unqualified, i.e. the workpiece is not installed in place, so the detection accuracy of the workpiece is higher, breaking through the limitations of 2D detection.
  • A high dynamic range image is synthesized from low dynamic range images, improving the image quality of the collected point cloud information and thus its overall quality, which helps further improve the detection accuracy of the workpiece.
  • The point cloud information is smoothed with bilateral filtering: the spatial-proximity weight of each point cloud point to the center point is multiplied by its pixel-value-similarity weight, and the product is convolved with the image matrix, effectively performing the convolution of a two-dimensional Gaussian normal distribution over the image matrix and achieving edge-preserving denoising, optimizing the quality of the collected point cloud information so that workpiece detection accuracy is higher.
  • Removing outlier point cloud information and optimizing the quality of the 3D model further improve the detection accuracy of the workpiece.
  • An embodiment of the present invention further provides a 3D grating detection apparatus, which corresponds one-to-one to the 3D grating detection method of the above embodiment.
  • The 3D grating detection apparatus comprises:
  • a light source module configured to embed the grating scanner between the lenses of the binocular camera, so that the grating scanner provides a light source for the binocular camera;
  • a calibration module configured to calibrate the binocular camera;
  • a point cloud information module configured to cause the binocular camera to photograph the workpiece, collect images of the workpiece, and obtain point cloud information including three-dimensional coordinates;
  • an optimization module configured to, from low dynamic range images at different exposure times, select the preset low dynamic range image corresponding to each exposure time and synthesize a high dynamic range image, optimizing the point cloud information;
  • a processing module configured to smooth the point cloud information with bilateral filtering while retaining boundary information;
  • a screening module configured to calculate, based on a Gaussian distribution, the average distance from each point cloud point obtained by the point cloud information module to all of its neighboring points, and to remove all point cloud information whose average distance falls outside the threshold set from the global distance mean and standard deviation, so as to screen the point cloud information;
  • a modeling module configured to perform 3D modeling based on the processed point cloud information to obtain a 3D model;
  • a detection module configured to detect the workpiece based on the 3D model, acquire dimensional data of the workpiece along the four dimensions of X axis, Y axis, Z axis, and pose, and judge whether the workpiece is installed in place.
  • Each module of the above 3D grating detection apparatus may be implemented in whole or in part by software, hardware, or a combination thereof.
  • The above modules may be embedded in or independent of a processor in a computer device in hardware form, or stored in the memory of a computer device in software form, so that the processor can invoke and execute the operations corresponding to each module.
  • A computer device is provided, which may be a server.
  • the computer device includes a processor, memory, network interface and database connected by a system bus.
  • the processor of the computer device is used to provide calculation and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system, computer programs and databases.
  • the internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage medium.
  • The network interface of the computer device is used to communicate with external terminals via a network connection. When the computer program is executed by the processor, a 3D grating detection apparatus is realized.
  • A computer-readable storage medium is provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor.
  • When the processor executes the computer program, the following steps are implemented:
  • S3: photograph the workpiece with the binocular camera, collect images of the workpiece, and obtain point cloud information including three-dimensional coordinates;
  • S8: detect the workpiece based on the 3D model, acquire dimensional data of the workpiece along the four dimensions of X axis, Y axis, Z axis, and pose, and judge whether the workpiece is installed in place.
  • Any reference to memory, storage, a database, or other media used in the embodiments provided by the present invention may include non-volatile and/or volatile memory.

Abstract

The present invention relates to a 3D grating detection method and apparatus, a computer device, and a readable storage medium. The method comprises: after a grating scanner is embedded between the lenses of a binocular camera to provide it with a light source and the binocular camera has been calibrated, photographing a workpiece with the binocular camera, collecting images of the workpiece, and obtaining point cloud information including three-dimensional coordinates; smoothing the point cloud information with bilateral filtering while retaining boundary information; performing 3D modeling based on the bilaterally filtered point cloud information to obtain a 3D model; and detecting the workpiece based on the 3D model, acquiring dimensional data of the workpiece along the four dimensions of X axis, Y axis, Z axis, and pose, and judging whether the workpiece is installed in place. This solves the problem of low detection accuracy in existing workpiece detection methods. The present invention has the effect of improving workpiece detection accuracy.

Description

3D grating detection method and apparatus, computer device, and readable storage medium
Technical Field
The present invention relates to the technical field of workpiece detection, and in particular to a 3D grating detection method and apparatus, a computer device, and a readable storage medium.
Background Art
With advances in science and the computer industry, automated machinery has been widely adopted in manufacturing; in particular, 2D visual inspection systems, which provide non-contact real-time inspection, have become an indispensable part of automated machines.
At present, 2D detection with contour-based pattern matching is used to identify the position, size, and orientation of a workpiece and to check whether the target workpiece is installed correctly.
However, 2D detection does not support shape-related measurement, is easily affected by ambient lighting, and is sensitive to motion of the inspected object; these limitations degrade the measurement results.
With respect to the above related art, the inventor believes that existing workpiece detection methods suffer from low detection accuracy.
Summary of the Invention
To improve the accuracy of workpiece detection, the present invention provides a 3D grating detection method and apparatus, a computer device, and a readable storage medium.
In a first aspect, the present invention provides a 3D grating detection method characterized by improved workpiece detection accuracy.
The present invention is achieved through the following technical solution:
A 3D grating detection method comprises the following steps:
after a grating scanner is embedded between the lenses of a binocular camera to provide a light source for the binocular camera, and the binocular camera has been calibrated,
photographing a workpiece with the binocular camera, collecting images of the workpiece, and obtaining point cloud information including three-dimensional coordinates;
smoothing the point cloud information with bilateral filtering while retaining boundary information;
performing 3D modeling based on the bilaterally filtered point cloud information to obtain a 3D model;
detecting the workpiece based on the 3D model, acquiring dimensional data of the workpiece along the four dimensions of X axis, Y axis, Z axis, and pose, and judging whether the workpiece is installed in place.
In a preferred example, the present invention may be further configured such that the step of judging whether the workpiece is installed in place includes:
for the four dimensions of X axis, Y axis, Z axis, and pose, presetting a standard value and a corresponding tolerance value for each dimension;
computing the difference between each of the X axis, Y axis, Z axis, and pose and its corresponding standard value, and comparing the difference with the corresponding tolerance value to obtain a result for that dimension;
when the result for any one of the X axis, Y axis, Z axis, and pose falls outside the range of the corresponding tolerance value, judging that the workpiece is not installed in place.
In a preferred example, the present invention may be further configured such that, after the step of smoothing the point cloud information with bilateral filtering, the method further includes the following steps:
based on a Gaussian distribution, computing the average distance from each point cloud point to all of its neighboring points;
removing all point cloud information whose average distance falls outside a threshold set from the global distance mean and standard deviation, so as to screen the point cloud information;
using the screened point cloud information as the processed point cloud information.
In a preferred example, the present invention may be further configured such that the step of smoothing the point cloud information with bilateral filtering includes:
presetting a center point, and calculating, from the spatial proximity of each point cloud point to the center point, a spatial-proximity weight for each point cloud point;
calculating, from the pixel-value similarity of each point cloud point, a pixel-value-similarity weight for each point cloud point;
multiplying the spatial-proximity weight and the pixel-value-similarity weight of each point cloud point, and convolving the product with the image matrix.
In a preferred example, the present invention may be further configured such that calibrating the binocular camera specifically includes:
based on a calibration-board method, photographing a set of calibration-board images, presetting corner points, detecting the corner coordinates of the calibration board, and calibrating the intrinsic and extrinsic parameters of the camera using Zhang Zhengyou's calibration algorithm.
In a preferred example, the present invention may be further configured such that, after the step of obtaining point cloud information including three-dimensional coordinates, the method further includes the following step:
from low dynamic range images taken at different exposure times, selecting a preset low dynamic range image corresponding to each exposure time and synthesizing a high dynamic range image, so as to optimize the point cloud information.
By adopting the above technical solution, the grating scanner is embedded between the two lenses of the binocular camera, so that a grating light source sits between them and provides illumination, improving the quality of the photos taken by the binocular camera. The binocular camera is calibrated to obtain its measurement parameters, from which the target depth value is computed, so that three-dimensional information is recovered from the two-dimensional images, the depth between the workpiece under test and the binocular camera is obtained, and the spatial position of the workpiece is determined. The binocular camera photographs the workpiece, images of the workpiece are collected, and point cloud information including three-dimensional coordinates is obtained, yielding the three-dimensional information of the workpiece under test. Bilateral filtering smooths the point cloud information while retaining boundary information, taking both spatial proximity and color similarity into account, so that noise is filtered out and the image is smoothed while edges are preserved, making the three-dimensional information of the workpiece more accurate and complete. 3D modeling is performed on the processed point cloud information to obtain a 3D model, the workpiece is detected based on the 3D model, dimensional data of the workpiece is acquired along the four dimensions of X axis, Y axis, Z axis, and pose, and it is judged whether the workpiece is installed in place. Since the workpiece under test is inspected in four dimensions, an error in any one dimension that exceeds the allowed range causes the workpiece to be judged unqualified, i.e. not installed in place; the detection accuracy of the workpiece is therefore higher, breaking through the limitations of 2D detection.
Further, for each point cloud point the average distance from it to all of its neighboring points is computed; assuming the resulting distribution is Gaussian with a mean and a standard deviation, all points whose average distance lies outside the threshold defined by the global distance mean and standard deviation can be regarded as outliers and removed from the data set. Removing outlier points optimizes the quality of the 3D model and helps further improve the detection accuracy of the workpiece.
Further, bilateral filtering is used to smooth the point cloud information: the spatial-proximity weight and the pixel-value-similarity weight of each point cloud point are multiplied and the product is convolved with the image matrix, so that a two-dimensional Gaussian normal distribution is effectively applied to the image matrix by convolution, optimizing the quality of the collected point cloud information and achieving edge-preserving denoising.
Further, a high dynamic range image is synthesized from low dynamic range images, improving the image quality of the collected point cloud information and thus its overall quality, which helps further improve the detection accuracy of the workpiece.
In a second aspect, the present invention provides a 3D grating detection apparatus characterized by improved workpiece detection accuracy.
The present invention is achieved through the following technical solution:
A 3D grating detection apparatus comprises:
a light source module configured to embed the grating scanner between the lenses of the binocular camera to provide a light source for the binocular camera;
a calibration module configured to calibrate the binocular camera;
a point cloud information module configured to cause the binocular camera to photograph the workpiece, collect images of the workpiece, and obtain point cloud information including three-dimensional coordinates;
a processing module configured to smooth the point cloud information with bilateral filtering while retaining boundary information;
a modeling module configured to perform 3D modeling based on the bilaterally filtered point cloud information to obtain a 3D model;
a detection module configured to detect the workpiece based on the 3D model, acquire dimensional data of the workpiece along the four dimensions of X axis, Y axis, Z axis, and pose, and judge whether the workpiece is installed in place.
In a preferred example, the present invention may be further configured to comprise:
a screening module configured to calculate, based on a Gaussian distribution, the average distance from each point cloud point obtained by the point cloud information module to all of its neighboring points, and to remove all point cloud information whose average distance falls outside a threshold set from the global distance mean and standard deviation, so as to screen the point cloud information.
In a third aspect, the present invention provides a computer device characterized by improved workpiece detection accuracy.
The present invention is achieved through the following technical solution:
A computer device comprises a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the above 3D grating detection method when executing the computer program.
In a fourth aspect, the present invention provides a computer-readable storage medium characterized by improved workpiece detection accuracy.
The present invention is achieved through the following technical solution:
A computer-readable storage medium stores a computer program which, when executed by a processor, implements the steps of the above 3D grating detection method.
In summary, the present invention includes at least one of the following beneficial technical effects:
1. The workpiece under test is inspected in four dimensions; an error in any one dimension that exceeds the allowed range causes the workpiece to be judged as not installed in place, so detection accuracy is higher and the limitations of 2D detection are overcome.
2. The image quality of the collected point cloud information is optimized, which helps further improve the detection accuracy of the workpiece.
3. Outlier point clouds are removed from the data set to optimize the quality of the 3D model, which helps further improve the detection accuracy of the workpiece.
4. Bilateral filtering smooths the point cloud information, effectively applying a two-dimensional Gaussian normal distribution to the image matrix by convolution, achieving edge-preserving denoising and optimizing the quality of the collected point cloud information, so that workpiece detection accuracy is higher.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of a 3D grating detection method according to an embodiment of the present invention.
FIG. 2 is a schematic flowchart of smoothing point cloud information with bilateral filtering.
FIG. 3 is a schematic flowchart of judging whether a workpiece is installed in place.
FIG. 4 is a structural block diagram of a 3D grating detection apparatus according to an embodiment of the present invention.
Detailed Description of the Embodiments
The specific embodiments are merely explanatory of the present invention and do not limit it. After reading this specification, those skilled in the art may make modifications to the embodiments that involve no inventive contribution as needed; such modifications are protected by patent law as long as they fall within the scope of the claims of the present invention.
To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
In addition, the term "and/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone. Unless otherwise specified, the character "/" herein generally indicates an "or" relationship between the associated objects.
The embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
Referring to FIG. 1, an embodiment of the present invention provides a 3D grating detection method, the main steps of which are described as follows.
S1: embed the grating scanner between the lenses of the binocular camera so that it provides a light source for the binocular camera;
S2: calibrate the binocular camera;
S3: photograph the workpiece with the binocular camera, collect images of the workpiece, and obtain point cloud information including three-dimensional coordinates;
S5: smooth the point cloud information with bilateral filtering while retaining boundary information;
S7: perform 3D modeling based on the bilaterally filtered point cloud information to obtain a 3D model;
S8: detect the workpiece based on the 3D model, acquire dimensional data of the workpiece along the four dimensions of X axis, Y axis, Z axis, and pose, and judge whether the workpiece is installed in place.
Specifically, in S1 the grating scanner is embedded between the lenses of the binocular camera, so that a grating light source sits between the two lenses and the grating scanner illuminates the scene by frequency-band scanning; in this embodiment a sine-wave scanning pattern may be used, so that the photos taken by the binocular camera are of better quality.
Further, S2, the step of calibrating the binocular camera, specifically includes: based on a calibration-board method, photographing a set of calibration-board images, presetting corner points, detecting the corner coordinates of the calibration board, and calibrating the intrinsic and extrinsic parameters of the camera with Zhang Zhengyou's calibration algorithm, thereby obtaining the intrinsic and extrinsic parameters of the binocular camera, the homography matrix, the camera focal length f, and the baseline b of the left and right cameras, i.e. the measurement parameters of the binocular camera. These measurement parameters are then used to compute the target depth value between the workpiece and the binocular camera.
Zhang Zhengyou's calibration algorithm is a camera calibration method based on a single planar checkerboard. The method is simple and practical and requires no additional equipment beyond a printed checkerboard; calibration is easy, the camera and calibration board can be placed arbitrarily, and the calibration accuracy is high.
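A central step of Zhang's method is estimating, per calibration-board photo, the homography that maps board-plane corners to detected image corners; the intrinsic and extrinsic parameters are then solved from constraints on those homographies. That homography-estimation step can be sketched with a direct linear transform in NumPy. This is a simplified illustration, not the patent's implementation: corner detection and the subsequent intrinsic-parameter solve are omitted, and the function names are ours.

```python
import numpy as np

def estimate_homography(board_pts, img_pts):
    """Estimate the 3x3 homography H with img ~ H @ board via DLT.

    board_pts, img_pts: (N, 2) arrays of corresponding points, N >= 4.
    """
    A = []
    for (X, Y), (u, v) in zip(board_pts, img_pts):
        # Each correspondence contributes two linear constraints on h.
        A.append([-X, -Y, -1, 0, 0, 0, u * X, u * Y, u])
        A.append([0, 0, 0, -X, -Y, -1, v * X, v * Y, v])
    # The solution is the right singular vector for the smallest
    # singular value of A.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]                 # normalize so H[2, 2] == 1

def apply_homography(H, pts):
    """Map (N, 2) points through H, returning (N, 2) points."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```

A quick sanity check is a round trip: project known board points through a known homography, re-estimate it from the correspondences, and confirm the matrices agree.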
The original images are then rectified according to the measurement parameters, so that the two rectified images lie in the same plane and are parallel to each other.
Pixel matching is performed on the two rectified images to obtain matching results.
From the matching results the depth of every pixel is computed; the depth values of the pixels in the captured images are measured to obtain a depth map, converting the image into a depth map.
Based on the per-pixel depth cues produced by the binocular camera, i.e. the disparity, the target depth value z between the workpiece and the binocular camera is calculated according to the formula z = f·b/d, where f is the camera focal length, b is the baseline of the left and right cameras, and d is the disparity. Three-dimensional information is thus obtained from the two-dimensional images, the depth between the workpiece under test and the binocular camera is determined, and hence the spatial position of the workpiece; the position of the binocular camera is calibrated accordingly.
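The formula z = f·b/d can be applied directly to a disparity map; a minimal sketch follows (the focal length, baseline, and disparity values in the test below are illustrative, not taken from the patent):

```python
import numpy as np

def depth_from_disparity(d, f, b):
    """z = f * b / d: depth from disparity d (pixels), focal length f
    (pixels), and baseline b (same unit as the returned depth).
    Pixels with zero or negative disparity are treated as unmatched
    points at infinity."""
    d = np.asarray(d, dtype=float)
    z = np.full_like(d, np.inf)
    valid = d > 0
    z[valid] = f * b / d[valid]
    return z
```

For example, with f = 1400 px and b = 0.06 m, a disparity of 21 px corresponds to a depth of 1400 × 0.06 / 21 = 4 m.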
S3: photograph the workpiece with the binocular camera, collect images of the workpiece, and obtain point cloud information including three-dimensional coordinates, so as to acquire the three-dimensional information of the workpiece under test.
Referring to FIG. 2, further, S5, the step of smoothing the point cloud information with bilateral filtering, includes:
S51: preset a center point, and from the spatial proximity of each point cloud point to the center point, calculate a spatial-proximity weight for each point cloud point;
S52: from the pixel-value similarity of each point cloud point, calculate a pixel-value-similarity weight for each point cloud point;
S53: multiply the spatial-proximity weight and the pixel-value-similarity weight of each point cloud point, and convolve the product with the image matrix.
Bilateral filtering is a type of nonlinear filtering. It combines the spatial proximity of the image with the similarity of pixel values, using a combination of two Gaussian filters: one Gaussian filter computes the spatial-proximity weight (the principle of the ordinary Gaussian filter), while the other computes the pixel-value-similarity weight. When filtering, bilateral filtering thus considers spatial proximity and color similarity at the same time; it filters out noise and smooths the image while preserving edges.
In S51, the center point is preset: for a pixel in the image, a square neighborhood of a preset size is defined; the origin (0,0) of the neighborhood is taken as the center point, and the pixel's coordinates are (x,y).
Since the value of each pixel in the image is obtained as a weighted average of its own value and the values of the other pixels in its neighborhood, every pixel in the image is scanned with a preset image matrix. The image matrix is a mathematical matrix of fixed size composed of numerical parameters, and the entries of the matrix are weights: the closer to the center point, the larger the weight. Taking the weighted average of the center point with the other pixel values in the square neighborhood yields the value of that pixel, i.e. its spatial proximity to the center point; the spatial proximity of each point cloud point to the center point is computed in this way.
The spatial-proximity weight of each point cloud point to the center point, i.e. the weighted average, is then calculated.
S52: the pixel-value similarity of each point cloud point is computed by taking the square of the absolute difference between the pixel's value and the values of the other pixels in the square neighborhood as that pixel's pixel-value similarity; the pixel-value similarity of each point cloud point is computed in this way.
The pixel-value similarities of the point cloud points are then weighted and averaged to obtain the pixel-value-similarity weight of each point cloud point.
S53: the spatial-proximity weight of each point cloud point is multiplied by its pixel-value-similarity weight to obtain an optimized weight, and the optimized weights are convolved with the preset image matrix, effectively performing the convolution of a two-dimensional Gaussian normal distribution over the image matrix and thereby achieving edge-preserving denoising.
Bilateral filtering thus smooths the point cloud information while retaining boundary information, taking spatial proximity and color similarity into account; noise is filtered out and the image smoothed while edges are preserved, making the three-dimensional information of the workpiece under test more accurate and complete.
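The two-weight scheme of S51-S53 can be sketched as a small single-channel bilateral filter in NumPy. This is a simplified illustration of standard bilateral filtering on a grayscale array, not the patent's exact implementation; the radius and sigma values are illustrative defaults.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=25.0):
    """Edge-preserving smoothing of a 2-D grayscale array.

    For each pixel, neighbor weights are the product of a spatial
    Gaussian (proximity to the center point) and a range Gaussian
    (pixel-value similarity); the weighted average over the square
    neighborhood gives the output pixel.
    """
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    # Spatial weights depend only on the offset, so precompute them.
    ax = np.arange(-radius, radius + 1)
    dx, dy = np.meshgrid(ax, ax)
    w_space = np.exp(-(dx**2 + dy**2) / (2 * sigma_s**2))
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range weight: similarity of each neighbor to the center.
            w_range = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            wgt = w_space * w_range
            out[i, j] = np.sum(wgt * patch) / np.sum(wgt)
    return out
```

Because the range weight collapses for neighbors with very different values, a sharp step edge survives the filter while same-side noise is averaged out.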
S7: perform 3D modeling based on the bilaterally filtered point cloud information to obtain a 3D model. S8: detect the workpiece based on the 3D model, acquiring dimensional data of the workpiece along the four dimensions of X axis, Y axis, Z axis, and pose, and judging whether the workpiece is installed in place. In this embodiment, the 3D Hough and GrabCut algorithms are also involved in detecting the workpiece based on the 3D model. First, the GrabCut algorithm segments the workpiece target from the background, with fast segmentation and good results; then, the 3D Hough algorithm applies the classic Hough voting idea to 3D scene object recognition, taking the position with the most votes as the position of the target object's centroid in the scene, so as to obtain an accurate target position.
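The Hough-voting idea described here, each matched feature casting a vote for the object centroid and the fullest accumulator bin winning, might be sketched as follows. This is a simplified illustration under our own assumptions about how votes are formed (each model feature stores an offset to the model centroid); a full 3D Hough pipeline also involves feature description, matching, and local reference frames, which are omitted.

```python
import numpy as np

def hough_vote_centroid(scene_pts, offsets, bin_size=0.05):
    """Locate an object's centroid in a scene by Hough voting.

    scene_pts: (N, 3) matched scene feature points.
    offsets:   (N, 3) centroid offsets learned from the model, one per
               matched feature (scene point + offset = centroid vote).
    Votes are quantized into cubic bins of side `bin_size`; the bin
    with the most votes wins, and the returned estimate is the mean
    of the votes that fell into it.
    """
    votes = np.asarray(scene_pts, float) + np.asarray(offsets, float)
    bins = np.floor(votes / bin_size).astype(int)
    # Count votes per occupied bin and pick the fullest one.
    uniq, counts = np.unique(bins, axis=0, return_counts=True)
    best_bin = uniq[np.argmax(counts)]
    in_best = np.all(bins == best_bin, axis=1)
    return votes[in_best].mean(axis=0)
```

Correct feature matches vote consistently for one bin, while mismatches scatter their votes, so the peak of the accumulator is robust to a moderate fraction of bad matches.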
S8, the step of judging whether the workpiece is installed in place, includes:
S81: for the four dimensions of X axis, Y axis, Z axis, and pose, preset a standard value and a corresponding tolerance value for each dimension;
S82: compute the difference between each of the X axis, Y axis, Z axis, and pose and its corresponding standard value, and compare the difference with the corresponding tolerance value to obtain a result for that dimension;
S83: when the result for any one of the X axis, Y axis, Z axis, and pose falls outside the range of the corresponding tolerance value, judge that the workpiece is not installed in place.
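Steps S81-S83 amount to a per-dimension tolerance check; a minimal sketch follows (the dimension names, standard values, and tolerances in the usage below are illustrative, not from the patent):

```python
def check_installation(measured, standard, tolerance):
    """Judge whether a workpiece is installed in place.

    measured, standard, tolerance: dicts keyed by dimension name
    (e.g. "X", "Y", "Z", "pose"). A dimension passes when the
    absolute deviation from its standard value is within tolerance;
    the workpiece passes only if every dimension passes.
    Returns (installed_in_place, per-dimension pass/fail results).
    """
    results = {}
    for dim, value in measured.items():
        deviation = abs(value - standard[dim])
        results[dim] = deviation <= tolerance[dim]
    return all(results.values()), results
```

Usage: with standards {"X": 10.0, "Y": 5.0, "Z": 2.0, "pose": 0.0} and tolerances {"X": 0.1, "Y": 0.1, "Z": 0.05, "pose": 1.0}, a measurement of Z = 2.2 fails the Z dimension alone, which is enough to judge the workpiece as not installed in place.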
Further, before S5 (smoothing the point cloud information with bilateral filtering), the method also includes the following step:
S4: from low-dynamic-range (LDR) images taken at different exposure times, select the preset LDR image corresponding to each exposure time and synthesize a high-dynamic-range (HDR) image, so that the LDR image with the best detail at each exposure time is used to synthesize the final HDR image, optimizing the point cloud information.
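The HDR synthesis of S4 can be illustrated with a simplified weighted merge of LDR exposures, a hat-weight average in the spirit of classic HDR radiance recovery. This is an illustration under our own assumptions (linear sensor response, 8-bit values), not the patent's exact procedure.

```python
import numpy as np

def merge_hdr(ldr_images, exposure_times):
    """Merge LDR exposures (values in [0, 255]) into an HDR radiance map.

    Each pixel's radiance estimate is value / exposure_time, weighted
    by a triangular "hat" function that trusts mid-range pixels most
    and gives zero weight to fully under-/over-exposed ones.
    """
    num = np.zeros(np.asarray(ldr_images[0]).shape, dtype=float)
    den = np.zeros_like(num)
    for img, t in zip(ldr_images, exposure_times):
        img = np.asarray(img, dtype=float)
        w = 1.0 - np.abs(img - 127.5) / 127.5   # 0 at 0 and 255, 1 at 127.5
        num += w * img / t
        den += w
    return num / np.maximum(den, 1e-12)
```

With a linear response, a scene radiance of 100 recorded at 0.5 s (pixel 50) and at 1.0 s (pixel 100) merges back to 100; a pixel saturated at 255 in the long exposure gets zero weight, so the short exposure alone determines its radiance.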
After S5, the step of smoothing the point cloud information with bilateral filtering, the method also includes the following steps:
S61: based on a Gaussian distribution, compute the average distance from each point cloud point to all of its neighboring points;
S62: remove all point cloud information whose average distance falls outside the threshold set from the global distance mean and standard deviation, so as to screen the point cloud information;
S63: use the screened point cloud information as the processed point cloud information.
In the Gaussian-distribution-based function, the boundary of the function is determined by the global distance mean and standard deviation; after the point cloud information is substituted into the function, the points whose actual value exceeds the computed value are removed.
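Steps S61-S63 correspond to what point cloud libraries call statistical outlier removal; a minimal NumPy sketch follows (the neighbor count k and the std_ratio multiplier are illustrative parameters, not from the patent):

```python
import numpy as np

def remove_outliers(points, k=8, std_ratio=1.0):
    """Statistical outlier removal for a point cloud.

    For each point, compute its mean distance to its k nearest
    neighbors; assuming those mean distances follow a Gaussian,
    drop every point whose mean distance exceeds
        global_mean + std_ratio * global_std.
    Returns (kept_points, keep_mask).
    """
    pts = np.asarray(points, dtype=float)
    # Full pairwise distance matrix (fine for small clouds; a KD-tree
    # would be used for large ones).
    diff = pts[:, None, :] - pts[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(dist, np.inf)          # exclude the point itself
    knn = np.sort(dist, axis=1)[:, :k]
    mean_d = knn.mean(axis=1)
    thresh = mean_d.mean() + std_ratio * mean_d.std()
    keep = mean_d <= thresh
    return pts[keep], keep
```

A point far from the cluster has a mean neighbor distance far above the global threshold and is dropped, while points inside the cluster survive.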
The 3D grating detection method thus inspects the workpiece under test in four dimensions; if the workpiece's error in any dimension exceeds the allowed range it is judged unqualified, i.e. the workpiece is not installed in place, so that the detection accuracy of the workpiece is higher, breaking through the limitations of 2D detection.
Synthesizing a high dynamic range image from low dynamic range images improves the image quality of the collected point cloud information and thus its overall quality, which helps further improve the detection accuracy of the workpiece.
Bilateral filtering smooths the point cloud information by multiplying the spatial-proximity weight and the pixel-value-similarity weight of each point cloud point and convolving the product with the image matrix, effectively performing the convolution of a two-dimensional Gaussian normal distribution over the image matrix, achieving edge-preserving denoising, optimizing the quality of the collected point cloud information, and making the detection accuracy of the workpiece higher.
Removing outlier point cloud information optimizes the quality of the 3D model and helps further improve the detection accuracy of the workpiece.
It should be understood that the numbering of the steps in the above embodiment does not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not limit the implementation of the embodiments of the present invention in any way.
Referring to FIG. 4, an embodiment of the present invention further provides a 3D grating detection apparatus, which corresponds one-to-one to the 3D grating detection method of the above embodiment. The 3D grating detection apparatus comprises:
a light source module configured to embed the grating scanner between the lenses of the binocular camera so that the grating scanner provides a light source for the binocular camera;
a calibration module configured to calibrate the binocular camera;
a point cloud information module configured to cause the binocular camera to photograph the workpiece, collect images of the workpiece, and obtain point cloud information including three-dimensional coordinates;
an optimization module configured to, from low dynamic range images at different exposure times, select the preset low dynamic range image corresponding to each exposure time and synthesize a high dynamic range image, optimizing the point cloud information;
a processing module configured to smooth the point cloud information with bilateral filtering while retaining boundary information;
a screening module configured to calculate, based on a Gaussian distribution, the average distance from each point cloud point obtained by the point cloud information module to all of its neighboring points, and to remove all point cloud information whose average distance falls outside the threshold set from the global distance mean and standard deviation, so as to screen the point cloud information;
a modeling module configured to perform 3D modeling based on the processed point cloud information to obtain a 3D model;
a detection module configured to detect the workpiece based on the 3D model, acquire dimensional data of the workpiece along the four dimensions of X axis, Y axis, Z axis, and pose, and judge whether the workpiece is installed in place.
For specific limitations on the 3D grating detection apparatus, reference may be made to the limitations on the 3D grating detection method above, which are not repeated here. Each module of the above 3D grating detection apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in or independent of a processor in a computer device in hardware form, or stored in the memory of a computer device in software form, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, which may be a server. The computer device includes a processor, a memory, a network interface, and a database connected via a system bus. The processor of the computer device provides computing and control capability. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for running the operating system and the computer program stored in the non-volatile storage medium. The network interface of the computer device is used to communicate with external terminals over a network. When executed by the processor, the computer program implements the 3D grating detection method.
In one embodiment, a computer-readable storage medium is provided, storing a computer program which, when executed by a processor, implements the following steps:
S1: Embed a grating scanner between binocular cameras so that the grating scanner provides a light source for the binocular cameras;
S2: Calibrate the binocular cameras;
S3: Cause the binocular cameras to photograph a workpiece, capture images of the workpiece, and obtain point cloud information including three-dimensional coordinates;
S5: Smooth the point cloud information with bilateral filtering while preserving boundary information;
S7: Perform 3D modeling based on the bilaterally filtered point cloud information to obtain a 3D model;
S8: Inspect the workpiece based on the 3D model, acquire the workpiece's data in the four dimensions of the X axis, Y axis, Z axis, and pose, and determine whether the workpiece is installed in place.
Those of ordinary skill in the art will understand that all or part of the processes in the methods of the above embodiments may be accomplished by a computer program instructing the relevant hardware; the computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the above method embodiments. Any reference to memory, storage, a database, or other media used in the embodiments provided by the present invention may include non-volatile and/or volatile memory.
Those skilled in the art will clearly appreciate that, for convenience and brevity of description, the division into the functional units and modules described above is given merely as an example; in practical applications, the above functions may be allocated to different functional units and modules as needed, i.e., the internal structure of the system may be divided into different functional units or modules to accomplish all or part of the functions described above.

Claims (10)

  1. A 3D grating detection method, characterized by comprising the following steps:
    after a grating scanner is embedded between binocular cameras to provide a light source for the binocular cameras, and the binocular cameras are calibrated,
    causing the binocular cameras to photograph a workpiece, capturing images of the workpiece, and obtaining point cloud information including three-dimensional coordinates;
    smoothing the point cloud information with bilateral filtering while preserving boundary information;
    performing 3D modeling based on the bilaterally filtered point cloud information to obtain a 3D model;
    inspecting the workpiece based on the 3D model, acquiring the workpiece's data in the four dimensions of the X axis, Y axis, Z axis, and pose, and determining whether the workpiece is installed in place.
  2. The 3D grating detection method according to claim 1, characterized in that the step of determining whether the workpiece is installed in place comprises:
    for the four dimensions of the X axis, Y axis, Z axis, and pose, presetting a standard value and a corresponding tolerance value for each dimension;
    computing the differences between the X-axis, Y-axis, Z-axis, and pose data and the corresponding standard values, and comparing each difference with the corresponding tolerance value to obtain a result for that dimension;
    when the result for any one of the X-axis, Y-axis, Z-axis, and pose dimensions falls outside the range of the corresponding tolerance value, determining that the workpiece is not installed in place.
  3. The 3D grating detection method according to claim 1, characterized in that, after the step of smoothing the point cloud information with bilateral filtering, the method further comprises the following steps:
    based on a Gaussian distribution, computing the average distance from each point in the point cloud to all of its neighboring points;
    removing all points whose average distance falls outside a threshold set from the global distance mean and standard deviation, thereby screening the point cloud information;
    using the screened point cloud information as the processed point cloud information.
  4. The 3D grating detection method according to claim 1, characterized in that the step of smoothing the point cloud information with bilateral filtering comprises:
    presetting a center point, and computing, from the spatial proximity of each piece of point cloud information to the center point, the weight of that spatial proximity;
    computing, based on the pixel-value similarity of each piece of point cloud information, the weight of that pixel-value similarity;
    multiplying the weight of the spatial proximity of each piece of point cloud information to the center point by the weight of its pixel-value similarity, and convolving the product with an image matrix.
  5. The 3D grating detection method according to claim 1, characterized in that calibrating the binocular cameras specifically comprises:
    based on the calibration-board method, photographing a set of calibration-board images, presetting corner points, detecting the coordinates of the corner points of the calibration board, and calibrating the intrinsic and extrinsic parameters of the cameras using Zhang Zhengyou's calibration algorithm.
  6. The 3D grating detection method according to any one of claims 1-5, characterized in that, after the step of obtaining the point cloud information including three-dimensional coordinates, the method further comprises the following step:
    based on low-dynamic-range images taken with different exposure times, selecting the preset low-dynamic-range image corresponding to each exposure time and synthesizing a high-dynamic-range image, thereby optimizing the point cloud information.
  7. A 3D grating detection apparatus, characterized by comprising:
    a light source module, configured to embed a grating scanner between binocular cameras to provide a light source for the binocular cameras;
    a calibration module, configured to calibrate the binocular cameras;
    a point cloud information module, configured to cause the binocular cameras to photograph a workpiece, capture images of the workpiece, and obtain point cloud information including three-dimensional coordinates;
    a processing module, configured to smooth the point cloud information with bilateral filtering while preserving boundary information;
    a modeling module, configured to perform 3D modeling based on the bilaterally filtered point cloud information to obtain a 3D model;
    a detection module, configured to inspect the workpiece based on the 3D model, acquire the workpiece's data in the four dimensions of the X axis, Y axis, Z axis, and pose, and determine whether the workpiece is installed in place.
  8. The 3D grating detection apparatus according to claim 7, characterized by further comprising:
    a screening module, configured to compute, based on a Gaussian distribution, the average distance from each point obtained by the point cloud information module to all of its neighboring points, and to remove all points whose average distance falls outside a threshold set from the global distance mean and standard deviation, thereby screening the point cloud information.
  9. A computer device, characterized by comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the 3D grating detection method according to any one of claims 1-6.
  10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the steps of the 3D grating detection method according to any one of claims 1-6.
PCT/CN2022/099914 2021-10-14 2022-06-20 3D grating detection method and apparatus, computer device, and readable storage medium WO2023060927A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111199654.8 2021-10-14
CN202111199654.8A CN114049304A (zh) 2021-10-14 2021-10-14 3D grating detection method and apparatus, computer device, and readable storage medium

Publications (1)

Publication Number Publication Date
WO2023060927A1 true WO2023060927A1 (zh) 2023-04-20

Family

ID=80204484

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/099914 WO2023060927A1 (zh) 2021-10-14 2022-06-20 一种3d光栅检测方法、装置、计算机设备及可读存储介质

Country Status (2)

Country Link
CN (1) CN114049304A (zh)
WO (1) WO2023060927A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114049304A (zh) * 2021-10-14 2022-02-15 Wuyi University 3D grating detection method and apparatus, computer device, and readable storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170150129A1 (en) * 2015-11-23 2017-05-25 Chicago Measurement, L.L.C. Dimensioning Apparatus and Method
CN110378967A * 2019-06-20 2019-10-25 Jiangsu University of Technology Virtual target calibration method combining grating projection and stereo vision
CN112013792A * 2020-10-19 2020-12-01 Nanjing Zhipu Photoelectric Technology Co., Ltd. Robotic surface-scanning 3D reconstruction method for large complex components
CN113034600A * 2021-04-23 2021-06-25 Shanghai Jiao Tong University Template-matching-based recognition and 6D pose estimation method for textureless planar-structure industrial parts
CN113192179A * 2021-04-28 2021-07-30 Shenyang University of Technology 3D reconstruction method based on binocular stereo vision
CN114049304A * 2021-10-14 2022-02-15 Wuyi University 3D grating detection method and apparatus, computer device, and readable storage medium


Also Published As

Publication number Publication date
CN114049304A (zh) 2022-02-15


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22879875

Country of ref document: EP

Kind code of ref document: A1