CN112819935A - Method for realizing three-dimensional reconstruction of workpiece based on binocular stereo vision


Info

Publication number
CN112819935A
CN112819935A (application CN202011242913.6A)
Authority
CN
China
Prior art keywords
workpiece
image
realizing
scale
coordinate system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011242913.6A
Other languages
Chinese (zh)
Inventor
汪贵华
罗晓杰
张印
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology
Priority to CN202011242913.6A
Publication of CN112819935A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50 Lighting effects
    • G06T15/60 Shadow generation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85 Stereo camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20024 Filtering details
    • G06T2207/20032 Median filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20092 Interactive image processing based on input by user
    • G06T2207/20104 Interactive definition of region of interest [ROI]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a method for realizing three-dimensional reconstruction of a workpiece based on binocular stereo vision. The method comprises: constructing a workpiece image acquisition system; each time the workpiece three-dimensional rotating device rotates by one angle from its initial position, the binocular camera hardware system collects one frame of workpiece image and the inclination angle of the rotating device is measured; the collected images are processed; the SIFT algorithm extracts feature points from the left and right contour maps and performs stereo matching; the pixel coordinates of the feature points are converted into actual coordinates; and curve fitting of the world coordinates of the image feature points yields the workpiece contour map. The invention can complete its work under different conditions, with high efficiency and high detection accuracy.

Description

Method for realizing three-dimensional reconstruction of workpiece based on binocular stereo vision
Technical Field
The invention belongs to the technical field of metering detection, and particularly relates to a method for realizing three-dimensional reconstruction of a workpiece based on binocular stereo vision.
Background
As computer vision applications attract growing attention across industries, their range of application keeps expanding. Computer vision technology creates opportunities but also presents many challenges. Three-dimensional reconstruction, one of its most important research subjects, is now widely used in research and daily life. In industry, three-dimensional reconstruction can be applied to projects such as workpiece welding and die making. However, the industrial environment is complex: on one hand, contact measurement is difficult to perform; on the other hand, non-contact measurement, hampered by insufficient illumination and the many cavities in workpieces, is easily disturbed by the environment and struggles to produce effective measurements.
Disclosure of Invention
The invention aims to provide a method for realizing three-dimensional reconstruction of a workpiece based on binocular stereo vision.
The technical scheme for realizing the purpose of the invention is as follows: a method for realizing three-dimensional reconstruction of a workpiece based on binocular stereo vision comprises the following steps:
step 1: constructing a workpiece image acquisition system, wherein the workpiece image acquisition system comprises a workpiece three-dimensional rotating device and a binocular camera hardware measurement system;
step 2: each time the workpiece three-dimensional rotating device rotates by one angle from the initial position, the binocular camera hardware system collects one frame of workpiece image, and the inclination angle of the workpiece three-dimensional rotating device is measured;
step 3: performing gray-scale processing, ROI region selection and adaptive median filtering on the collected image to obtain a binarized image, and extracting contours from the binarized image with the Canny edge extraction algorithm;
step 4: extracting feature points from the left and right contour maps with the SIFT algorithm and performing stereo matching;
step 5: converting the pixel coordinates of the feature points into coordinates in the world coordinate system according to the calibration result obtained in step 1 and the distance measured by the laser radar;
step 6: performing curve fitting on the world coordinates of the obtained image feature points to obtain a workpiece contour map.
Preferably, the binocular camera hardware measurement system comprises two cameras and a laser radar, the centers of the two cameras and the workpiece rotating device are located on the same horizontal line, and the laser radar is located between the two cameras.
Preferably, an inclination sensor is arranged on the workpiece three-dimensional rotating device and used for measuring a rotating angle.
Preferably, the specific steps of extracting the feature points of the left and right contour maps by adopting the SIFT algorithm are as follows:
searching image locations at all scales, and identifying interest points invariant to scale and rotation by a difference-of-Gaussian function;
at the position of each interest point, the position and the scale of the feature point are determined by a fitting model.
Preferably, the specific method for determining the positions and the dimensions of the feature points through the fitting model is as follows:
performing curve fitting by using the Taylor expansion of the DoG function in scale space, which is:

$$D(X) = D + \frac{\partial D^{T}}{\partial X}X + \frac{1}{2}X^{T}\frac{\partial^{2}D}{\partial X^{2}}X$$

wherein D(X) is the Gaussian difference operator, X(x, y, σ) denotes the pixel coordinates at a given scale, σ is the scale factor, (x, y) are the coordinates of any pixel in the image pixel coordinate system, and X₀(x₀, y₀, σ₀) is the origin coordinate of the image pixel coordinate system at the original scale;
differentiating the Taylor expansion and setting the derivative to zero gives the offset of the extreme point:

$$\hat{X} = -\left(\frac{\partial^{2}D}{\partial X^{2}}\right)^{-1}\frac{\partial D}{\partial X}$$

the value of the corresponding extreme point equation is:

$$D(\hat{X}) = D + \frac{1}{2}\frac{\partial D^{T}}{\partial X}\hat{X}$$
Preferably, the conversion relationship between the pixel coordinates and the world coordinates of the feature points is:

$$Z_{C}\begin{bmatrix}u\\ v\\ 1\end{bmatrix}=\begin{bmatrix}\frac{1}{dx}&0&u_{0}\\ 0&\frac{1}{dy}&v_{0}\\ 0&0&1\end{bmatrix}\begin{bmatrix}f&0&0&0\\ 0&f&0&0\\ 0&0&1&0\end{bmatrix}\begin{bmatrix}R&T\\ 0^{T}&1\end{bmatrix}\begin{bmatrix}X_{W}\\ Y_{W}\\ Z_{W}\\ 1\end{bmatrix}$$

wherein (u, v) are the coordinates of the feature point in the pixel coordinate system, dx and dy are the sizes of a feature-point pixel in the x and y directions of the physical coordinate system, f is the camera focal length, R is the third-order rotation matrix, T is the translation column vector, and (X_W, Y_W, Z_W) indicates the position of the point in the world coordinate system.
Compared with the prior art, the invention has the following remarkable advantages:
(1) the method is simple to operate, fast in processing, has low requirements on the environment, and is suitable for workpiece measurement in different environments;
(2) the method solves the problem of shadow areas appearing in images due to insufficient illumination in industrial environments: images of the workpiece at different angles are obtained by rotating the workpiece, which effectively avoids the influence of shadow areas on subsequent three-dimensional reconstruction;
(3) the laser radar measures the distance between the workpiece and the cameras, and the complete three-dimensional coordinates of the workpiece are obtained in combination with image coordinate conversion;
(4) the invention adopts the ADXL345 inclination sensor to measure the rotation angle of the workpiece, and feature matching is performed on the left and right images across multiple angles;
(5) the invention ports OpenCV directly to an ARM development board, calls the relevant core algorithms of the computer vision library, and performs acquisition and a series of image preprocessing steps on the workpiece image, so that the ROI region can be selected and identified.
The present invention is described in further detail below with reference to the attached drawing figures.
Drawings
Fig. 1 is a flow chart of a method for realizing three-dimensional reconstruction of a workpiece based on binocular stereo vision.
Fig. 2 is a schematic diagram of a method for realizing three-dimensional reconstruction of a workpiece based on binocular stereo vision.
Fig. 3 is a schematic diagram of the rotation of a workpiece according to the present invention.
Detailed Description
As shown in fig. 1 to 3, a method for realizing three-dimensional reconstruction of a workpiece based on binocular stereo vision includes:
step 1: constructing a workpiece image acquisition system, wherein the workpiece image acquisition system comprises a workpiece three-dimensional rotating device and a binocular camera hardware measurement system, and an inclination sensor is arranged on the workpiece three-dimensional rotating device and used for measuring a rotating angle; the binocular camera hardware measurement system comprises two cameras and a laser radar, the centers of the two cameras and the workpiece rotating device are located on the same horizontal line, and the laser radar is located between the two cameras; the distance between the binocular camera hardware measurement system and the workpiece three-dimensional rotating device is measured, and the two cameras can move to achieve measurement of different distances.
Calibrating the left camera and the right camera to obtain internal parameters and relative attitude parameters of the two cameras; measuring the Z coordinate of the characteristic point of the workpiece by using the laser radar;
in some embodiments, the binocular camera hardware measurement system and the workpiece three-dimensional rotating device center are located on the same horizontal line, and the calculation amount of space conversion can be reduced.
When workpiece images are acquired in an industrial environment, insufficient illumination produces large shadow areas. The influence of these shadow areas therefore needs to be reduced; the invention rotates the workpiece to acquire workpiece images at different angles, thereby reducing the influence of the shadow areas.
Step 2: starting from an initial position, when the workpiece three-dimensional rotating device rotates by an angle, a binocular camera hardware system collects a frame of image, and an inclination angle is measured by an inclination sensor;
the video Capture in OpenCV is used to open the camera, which is used to process the video file or the video stream of the camera, and can control the opening and closing of the camera, and the video stream can be read into the hardware platform and stored in the matrix frame by using the cap > frame, so as to process each frame image in the video.
Step 3: analyzing and processing the acquired image, including gray-scale processing, ROI region selection and adaptive median filtering to obtain a binarized image, and extracting contours from the binarized image with the Canny edge extraction algorithm;
Because the video collected by the camera is in color, it is converted to a gray-scale image for processing; in a gray-scale image the three components R, G and B of the RGB format are equal, and equal to the gray value. In OpenCV, the C-API function that converts the RGB color space to gray scale is declared as cvCvtColor(const CvArr* src, CvArr* dst, int code), i.e. the original image src is converted to dst, where code is the color-space conversion parameter; this function performs the gray-scale conversion on each frame of the color image. The concrete call used is cvtColor(frame, edges, CV_BGR2GRAY), where frame is the original image and edges is the gray-scale image.
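The conversion above can be sketched in NumPy; this is an illustrative reimplementation rather than the patent's code (which calls OpenCV directly), and the 0.299/0.587/0.114 weights are the standard BGR2GRAY luminance coefficients, not values stated in the patent:

```python
import numpy as np

def bgr_to_gray(frame: np.ndarray) -> np.ndarray:
    """Convert a BGR image of shape (H, W, 3) to grayscale using the
    standard luminance weighting (as in cvtColor with CV_BGR2GRAY)."""
    b, g, r = frame[..., 0], frame[..., 1], frame[..., 2]
    gray = 0.114 * b + 0.587 * g + 0.299 * r
    return np.clip(np.rint(gray), 0, 255).astype(np.uint8)

# A pure-white pixel maps to 255 and a pure-black pixel to 0.
frame = np.zeros((2, 2, 3), np.uint8)
frame[0, 0] = (255, 255, 255)
gray = bgr_to_gray(frame)
```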
Image denoising is a common step in image preprocessing; commonly used denoising algorithms include adaptive median filtering, Gaussian filtering and the like. Image noise mainly arises during image acquisition and transmission; common types include additive noise, multiplicative noise, quantization noise and salt-and-pepper noise. Adaptive median filtering is particularly well suited to salt-and-pepper noise, which appears as abrupt white or black spots, so the invention employs adaptive median filtering to eliminate noise.
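The adaptive median filtering step can be sketched as follows. This is a simplified textbook variant (the window grows until its median is not itself an impulse), offered as an illustration under those assumptions, not the patent's exact implementation:

```python
import numpy as np

def adaptive_median_filter(img: np.ndarray, max_win: int = 7) -> np.ndarray:
    """Simplified adaptive median filter for salt-and-pepper noise.
    For each pixel the window grows (3, 5, ..., max_win); once the window
    median is not an extreme value, the centre pixel is replaced only if
    it is itself an extreme value. If no window qualifies, fall back to
    the median of the largest window."""
    pad = max_win // 2
    padded = np.pad(img, pad, mode="edge")
    out = img.copy()
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            for win in range(3, max_win + 1, 2):
                r = win // 2
                patch = padded[i + pad - r:i + pad + r + 1,
                               j + pad - r:j + pad + r + 1]
                med, mn, mx = np.median(patch), patch.min(), patch.max()
                if mn < med < mx:                    # median is not an impulse
                    if not (mn < img[i, j] < mx):    # centre is an impulse
                        out[i, j] = med
                    break
            else:                                    # no window qualified
                out[i, j] = np.median(padded[i:i + max_win, j:j + max_win])
    return out

# A salt impulse in a flat region is removed; clean pixels are preserved.
img = np.full((5, 5), 100, np.uint8)
img[2, 2] = 255                                      # salt impulse
den = adaptive_median_filter(img)
```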
Canny edge detection is carried out on the binary image, the edge of the image is detected, and a workpiece edge contour map is obtained;
Step 4: extracting feature points from the left and right contour maps with the SIFT algorithm and performing stereo matching.
The SIFT algorithm is a local feature descriptor used in the field of image processing that can detect key points in an image. It mainly consists of scale-space extremum detection, key point localization and key point feature description.
Detection of extreme values in scale space: image locations at all scales are searched, and potential scale- and rotation-invariant interest points are identified by a difference-of-Gaussian function. The scale-space image is described as:

$$L(x,y,\sigma)=G(x,y,\sigma)*I(x,y),\qquad G(x,y,\sigma)=\frac{1}{2\pi\sigma^{2}}e^{-\frac{x^{2}+y^{2}}{2\sigma^{2}}}$$

where L(x, y, σ) is the image in scale space, I(x, y) is the input image, G(x, y, σ) is a two-dimensional Gaussian kernel of variable scale, (x, y) are the coordinates of a pixel point, and σ is the scale factor.
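The scale-space formula above can be illustrated with a small NumPy sketch that builds a normalised Gaussian kernel G(x, y, σ) and convolves it with the image; the 3σ kernel truncation radius is an assumption for illustration, not something the patent specifies:

```python
import numpy as np

def gaussian_kernel(sigma: float, radius: int) -> np.ndarray:
    """Sampled 2-D Gaussian G(x, y, sigma), normalised to sum to 1."""
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return g / g.sum()

def scale_space_image(img: np.ndarray, sigma: float) -> np.ndarray:
    """L(x, y, sigma) = G(x, y, sigma) * I(x, y) by direct convolution."""
    k = gaussian_kernel(sigma, radius=int(3 * sigma) + 1)
    r = k.shape[0] // 2
    padded = np.pad(img.astype(float), r, mode="edge")
    h, w = img.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + 2 * r + 1, j:j + 2 * r + 1] * k)
    return out

# Because the kernel sums to 1, a flat image is left unchanged.
img = np.full((8, 8), 50.0)
L = scale_space_image(img, sigma=1.6)
```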
Key point positioning: at the location of each interest point, the position and scale are determined by a fine fitting model. In some embodiments, curve fitting is performed using the Taylor expansion of the DoG function in scale space, which is:

$$D(X) = D + \frac{\partial D^{T}}{\partial X}X + \frac{1}{2}X^{T}\frac{\partial^{2}D}{\partial X^{2}}X$$

wherein D(X) is the Gaussian difference operator, X(x, y, σ) denotes the pixel coordinates at a given scale, σ is the scale factor, (x, y) are the coordinates of any pixel in the image pixel coordinate system, and X₀(x₀, y₀, σ₀) is the origin coordinate of the image pixel coordinate system at the original scale;
differentiating the expansion and setting the derivative to zero gives the offset of the extreme point:

$$\hat{X} = -\left(\frac{\partial^{2}D}{\partial X^{2}}\right)^{-1}\frac{\partial D}{\partial X}$$

the value of the corresponding extreme point equation is:

$$D(\hat{X}) = D + \frac{1}{2}\frac{\partial D^{T}}{\partial X}\hat{X}$$
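The extremum refinement described above can be sketched numerically: the gradient and Hessian of D are estimated from a 3×3×3 DoG neighbourhood by finite differences, giving the offset X̂ = −H⁻¹∇D and the interpolated value D(X̂). The synthetic quadratic patch below is illustrative only:

```python
import numpy as np

def refine_extremum(cube: np.ndarray):
    """Fit the scale-space Taylor expansion on a 3x3x3 DoG neighbourhood
    (indexed cube[dx+1, dy+1, ds+1]) and return the sub-pixel offset
    X_hat = -H^{-1} grad(D) and the interpolated value D(X_hat)."""
    c = cube
    grad = 0.5 * np.array([c[2, 1, 1] - c[0, 1, 1],
                           c[1, 2, 1] - c[1, 0, 1],
                           c[1, 1, 2] - c[1, 1, 0]])
    H = np.empty((3, 3))
    H[0, 0] = c[2, 1, 1] - 2 * c[1, 1, 1] + c[0, 1, 1]
    H[1, 1] = c[1, 2, 1] - 2 * c[1, 1, 1] + c[1, 0, 1]
    H[2, 2] = c[1, 1, 2] - 2 * c[1, 1, 1] + c[1, 1, 0]
    H[0, 1] = H[1, 0] = 0.25 * (c[2, 2, 1] - c[2, 0, 1] - c[0, 2, 1] + c[0, 0, 1])
    H[0, 2] = H[2, 0] = 0.25 * (c[2, 1, 2] - c[2, 1, 0] - c[0, 1, 2] + c[0, 1, 0])
    H[1, 2] = H[2, 1] = 0.25 * (c[1, 2, 2] - c[1, 0, 2] - c[1, 2, 0] + c[1, 0, 0])
    offset = -np.linalg.solve(H, grad)
    value = c[1, 1, 1] + 0.5 * grad @ offset
    return offset, value

# Synthetic quadratic DoG patch with a known extremum at (0.2, -0.1, 0.3):
# finite differences are exact for a quadratic, so the offset is recovered.
a = np.array([0.2, -0.1, 0.3])
g = np.arange(-1.0, 2.0)
dx, dy, ds = np.meshgrid(g, g, g, indexing="ij")
cube = 5.0 - ((dx - a[0])**2 + (dy - a[1])**2 + (ds - a[2])**2)
offset, value = refine_extremum(cube)
```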
and matching the characteristic points obtained from the left image and the right image.
Step 5: converting the coordinates of the feature points in the image pixel coordinate system into coordinates in the world coordinate system according to the calibration result obtained in step 1 and the distance measured by the laser radar;
step 5.1: converting the pixel coordinates of the image feature points into image physical coordinates;
for the feature point p, its coordinates are (u, v) in the pixel coordinate system and (x, y) in the physical coordinate system. Given that the dimensions of a single pixel in the x and y directions in the physical coordinate system are dx and dy, respectively, the following equations hold:
Figure BDA0002768968300000053
Figure BDA0002768968300000054
the arrangement into the form of its secondary transformation matrix is as follows:
Figure BDA0002768968300000055
in the formula (u)0,v0) Coordinates representing the origin of the physical coordinate system of the image
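The pixel-to-physical conversion above can be sketched as follows; the pixel size (5 µm) and principal point (320, 240) are assumed example values, not calibration results from the patent:

```python
import numpy as np

def pixel_to_physical(u, v, dx, dy, u0, v0):
    """Invert u = x/dx + u0, v = y/dy + v0 to recover the image-plane
    physical coordinates (x, y) of pixel (u, v)."""
    return (u - u0) * dx, (v - v0) * dy

# Assumed example values: 0.005 mm square pixels, principal point (320, 240).
x, y = pixel_to_physical(420, 300, dx=0.005, dy=0.005, u0=320, v0=240)

# The homogeneous matrix form maps (x, y, 1) back to (u, v, 1).
M = np.array([[1 / 0.005, 0, 320],
              [0, 1 / 0.005, 240],
              [0, 0, 1.0]])
uv1 = M @ np.array([x, y, 1.0])
```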
Step 5.2: camera coordinates that convert the physical coordinates of the image feature points.
The camera coordinate system is a three-dimensional space coordinate system whose origin is the optical centre of the camera lens, with the Z axis perpendicular to the image physical coordinate system. By the similar-triangle principle the conversion matrix is:

$$Z_{C}\begin{bmatrix}x\\ y\\ 1\end{bmatrix}=\begin{bmatrix}f&0&0&0\\ 0&f&0&0\\ 0&0&1&0\end{bmatrix}\begin{bmatrix}X_{C}\\ Y_{C}\\ Z_{C}\\ 1\end{bmatrix}$$

where (X_C, Y_C, Z_C) are the coordinates in the camera coordinate system and f is the focal length of the camera.
Step 5.3: converting the camera coordinates of the image feature points into world coordinates;
Finally, the conversion relationship between the world coordinate system and the pixel coordinate system is obtained:

$$Z_{C}\begin{bmatrix}u\\ v\\ 1\end{bmatrix}=\begin{bmatrix}\frac{1}{dx}&0&u_{0}\\ 0&\frac{1}{dy}&v_{0}\\ 0&0&1\end{bmatrix}\begin{bmatrix}f&0&0&0\\ 0&f&0&0\\ 0&0&1&0\end{bmatrix}\begin{bmatrix}R&T\\ 0^{T}&1\end{bmatrix}\begin{bmatrix}X_{W}\\ Y_{W}\\ Z_{W}\\ 1\end{bmatrix}$$

where f denotes the focal length of the camera, R denotes the third-order rotation matrix, T denotes the translation column vector, and (X_W, Y_W, Z_W) are the coordinates in the world coordinate system.
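Combining the intrinsic matrix with a known camera-frame depth (as measured by the laser radar), the pixel-to-world conversion of step 5 can be sketched as a back-projection. K, R and T below are assumed example values for illustration, not calibration results from the patent:

```python
import numpy as np

def pixel_to_world(u, v, z_c, K, R, T):
    """Back-project pixel (u, v) at known camera-frame depth z_c
    (e.g. from the lidar), inverting Z_C [u, v, 1]^T = K (R X_W + T)."""
    cam = z_c * np.linalg.inv(K) @ np.array([u, v, 1.0])  # camera frame
    return np.linalg.solve(R, cam - T)                     # world frame

# Round trip with assumed intrinsics/extrinsics: project a known world
# point to a pixel, then back-project it with its true depth.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
R, T = np.eye(3), np.array([0.0, 0.0, 0.5])
Xw = np.array([0.1, -0.05, 2.0])
cam = R @ Xw + T
u, v, _ = (K @ cam) / cam[2]
Xw_rec = pixel_to_world(u, v, cam[2], K, R, T)
```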
Step 6: performing curve fitting on the world coordinates of the obtained image feature points to obtain the workpiece contour map.
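The curve-fitting step can be illustrated with a polynomial least-squares fit over one profile slice; the cubic model and the synthetic sample points are assumptions for illustration, since the patent does not specify the fitting function:

```python
import numpy as np

# Hypothetical world-coordinate contour points from step 5: fit one profile
# slice z = f(x) with a cubic polynomial and evaluate the fitted contour.
x = np.linspace(-1.0, 1.0, 21)
z = 0.5 * x**3 - 0.2 * x + 1.0          # noise-free synthetic profile
coeffs = np.polyfit(x, z, deg=3)        # least-squares polynomial fit
z_fit = np.polyval(coeffs, x)
```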
The invention realizes three-dimensional reconstruction of the workpiece by multi-angle fusion: the workpiece is rotated by a fixed angle at a time, the angle data are recorded, and the left and right cameras each collect one frame of image. Image preprocessing, feature matching and related image algorithms are then applied to each frame to extract its feature points and obtain their pixel coordinates, after which coordinate conversion yields the actual physical coordinates of the workpiece feature points. The method is simple and convenient to operate, can effectively realize three-dimensional reconstruction of small workpieces, and effectively reduces the influence of image shadow areas.

Claims (6)

1. A method for realizing three-dimensional reconstruction of a workpiece based on binocular stereo vision, characterized by comprising:
step 1: building a workpiece image acquisition system, the workpiece image acquisition system comprising a workpiece three-dimensional rotating device and a binocular camera hardware measurement system;
step 2: each time the workpiece three-dimensional rotating device rotates by one angle from the initial position, the binocular camera hardware system collects one frame of workpiece image, and the inclination angle of the workpiece three-dimensional rotating device is measured;
step 3: performing gray-scale processing, ROI region selection and adaptive median filtering on the collected image to obtain a binarized image, and extracting contours from the binarized image with the Canny edge extraction algorithm;
step 4: extracting feature points from the left and right contour maps with the SIFT algorithm and performing stereo matching;
step 5: converting the pixel coordinates of the feature points into coordinates in the world coordinate system according to the calibration result obtained in step 1 and the distance measured by the laser radar;
step 6: performing curve fitting on the world coordinates of the obtained image feature points to obtain a workpiece contour map.

2. The method for realizing three-dimensional reconstruction of a workpiece based on binocular stereo vision according to claim 1, characterized in that the binocular camera hardware measurement system comprises two cameras and a laser radar, the centers of the two cameras and the workpiece rotating device are on the same horizontal line, and the laser radar is located between the two cameras.

3. The method for realizing three-dimensional reconstruction of a workpiece based on binocular stereo vision according to claim 1, characterized in that the workpiece three-dimensional rotating device is provided with an inclination sensor, and the inclination sensor is used to measure the rotation angle.

4. The method for realizing three-dimensional reconstruction of a workpiece based on binocular stereo vision according to claim 1, characterized in that the specific steps of extracting feature points from the left and right contour maps with the SIFT algorithm are:
searching image locations at all scales, and identifying interest points invariant to scale and rotation by a difference-of-Gaussian function;
at the location of each interest point, determining the feature point position and scale by a fitting model.

5. The method for realizing three-dimensional reconstruction of a workpiece based on binocular stereo vision according to claim 4, characterized in that the specific method of determining the feature point position and scale by the fitting model is:
performing curve fitting using the Taylor expansion of the DoG function in scale space, which is:

$$D(X) = D + \frac{\partial D^{T}}{\partial X}X + \frac{1}{2}X^{T}\frac{\partial^{2}D}{\partial X^{2}}X$$

wherein D(X) is the Gaussian difference operator, X(x, y, σ) denotes the pixel coordinates at a given scale, σ is the scale factor, (x, y) are the coordinates of any pixel in the image pixel coordinate system, and X₀(x₀, y₀, σ₀) is the origin coordinate of the image pixel coordinate system at the original scale;
differentiating the Taylor expansion and setting the derivative to zero gives the offset of the extreme point:

$$\hat{X} = -\left(\frac{\partial^{2}D}{\partial X^{2}}\right)^{-1}\frac{\partial D}{\partial X}$$

the value of the corresponding extreme point equation is:

$$D(\hat{X}) = D + \frac{1}{2}\frac{\partial D^{T}}{\partial X}\hat{X}$$

6. The method for realizing three-dimensional reconstruction of a workpiece based on binocular stereo vision according to claim 1, characterized in that the conversion relationship between the pixel coordinates and the world coordinates of the feature points is:

$$Z_{C}\begin{bmatrix}u\\ v\\ 1\end{bmatrix}=\begin{bmatrix}\frac{1}{dx}&0&u_{0}\\ 0&\frac{1}{dy}&v_{0}\\ 0&0&1\end{bmatrix}\begin{bmatrix}f&0&0&0\\ 0&f&0&0\\ 0&0&1&0\end{bmatrix}\begin{bmatrix}R&T\\ 0^{T}&1\end{bmatrix}\begin{bmatrix}X_{W}\\ Y_{W}\\ Z_{W}\\ 1\end{bmatrix}$$

wherein (u, v) are the coordinates of the feature point in the pixel coordinate system, dx and dy are the sizes of a feature-point pixel in the x and y directions of the physical coordinate system, f is the camera focal length, R is the third-order rotation matrix, T is the translation column vector, and (X_W, Y_W, Z_W) indicates the position of the point in the world coordinate system.
CN202011242913.6A 2020-11-09 2020-11-09 Method for realizing three-dimensional reconstruction of workpiece based on binocular stereo vision Pending CN112819935A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011242913.6A CN112819935A (en) 2020-11-09 2020-11-09 Method for realizing three-dimensional reconstruction of workpiece based on binocular stereo vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011242913.6A CN112819935A (en) 2020-11-09 2020-11-09 Method for realizing three-dimensional reconstruction of workpiece based on binocular stereo vision

Publications (1)

Publication Number Publication Date
CN112819935A true CN112819935A (en) 2021-05-18

Family

ID=75853361

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011242913.6A Pending CN112819935A (en) 2020-11-09 2020-11-09 Method for realizing three-dimensional reconstruction of workpiece based on binocular stereo vision

Country Status (1)

Country Link
CN (1) CN112819935A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113390344A (en) * 2021-07-06 2021-09-14 桂林电子科技大学 Method for rapidly detecting dimension and geometric tolerance of stepped shaft
CN113762544A (en) * 2021-08-26 2021-12-07 深圳证券通信有限公司 Intelligent machine room equipment position inspection and management method based on computer vision
CN114519674A (en) * 2022-01-18 2022-05-20 贵州省质安交通工程监控检测中心有限责任公司 Slope stability analysis system and method based on machine vision
CN117218301A (en) * 2023-11-09 2023-12-12 常熟理工学院 Elevator traction sheave groove reconstruction method and system based on multi-channel stereoscopic vision

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
FAN JINGLI: "Research on High-Precision Zoom Binocular Stereo Vision Measurement Technology", China Master's Theses Full-text Database, Information Science and Technology *


Similar Documents

Publication Publication Date Title
CN112819935A (en) Method for realizing three-dimensional reconstruction of workpiece based on binocular stereo vision
CN109215063B (en) Registration method of event trigger camera and three-dimensional laser radar
CN111476841B (en) A method and system for recognition and positioning based on point cloud and image
CN106651752B (en) Three-dimensional point cloud data registration method and splicing method
CN103093191B (en) A kind of three dimensional point cloud is in conjunction with the object identification method of digital image data
JP6899189B2 (en) Systems and methods for efficiently scoring probes in images with a vision system
CN109801333B (en) Volume measurement method, device and system and computing equipment
CN107084680B (en) Target depth measuring method based on machine monocular vision
CN114029946A (en) Method, device and equipment for guiding robot to position and grab based on 3D grating
CN107452030B (en) Image registration method based on contour detection and feature matching
CN108007388A (en) A kind of turntable angle high precision online measuring method based on machine vision
Tran et al. Non-contact gap and flush measurement using monocular structured multi-line light vision for vehicle assembly
CN111798453B (en) Point cloud registration method and system for unmanned driving assisted positioning
CN112164117A (en) V-SLAM pose estimation method based on Kinect camera
CN116129037B (en) Visual touch sensor, three-dimensional reconstruction method, system, equipment and storage medium thereof
TWI659390B (en) Data fusion method for camera and laser rangefinder applied to object detection
CN112184811A (en) Monocular space structured light system structure calibration method and device
CN109410272B (en) A transformer nut identification and positioning device and method
JP2003216931A (en) Specific pattern recognizing method, specific pattern recognizing program, specific pattern recognizing program storage medium and specific pattern recognizing device
CN114963981A (en) Monocular vision-based cylindrical part butt joint non-contact measurement method
CN117710469B (en) An online dense reconstruction method and system based on RGB-D sensor
CN111915670A (en) Vacuum cup size measuring method based on vision
CN117253059A (en) Part identification method based on improved template matching method
CN112862868B (en) Motion sea wave image registration fusion method based on linear transformation and wavelet analysis
CN109815966A (en) An implementation method of mobile robot visual odometry based on improved SIFT algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210518