CN112819935A - Method for realizing three-dimensional reconstruction of workpiece based on binocular stereo vision - Google Patents
- Publication number
- CN112819935A CN112819935A CN202011242913.6A CN202011242913A CN112819935A CN 112819935 A CN112819935 A CN 112819935A CN 202011242913 A CN202011242913 A CN 202011242913A CN 112819935 A CN112819935 A CN 112819935A
- Authority
- CN
- China
- Prior art keywords
- workpiece
- image
- coordinates
- coordinate system
- pixel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T15/005 — General purpose rendering architectures
- G06T15/60 — Shadow generation
- G06T7/13 — Edge detection
- G06T7/85 — Stereo camera calibration
- G06T7/90 — Determination of colour characteristics
- G06T2207/20032 — Median filtering
- G06T2207/20104 — Interactive definition of region of interest [ROI]
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computer Graphics (AREA)
- Length Measuring Devices By Optical Means (AREA)
Abstract
The invention discloses a method for realizing three-dimensional reconstruction of a workpiece based on binocular stereo vision. The method comprises: constructing a workpiece image acquisition system; each time the workpiece three-dimensional rotating device rotates by an angle from its initial position, collecting one frame of workpiece image with the binocular camera hardware system and measuring the inclination angle of the rotating device; processing the collected images; extracting feature points from the left and right contour maps with the SIFT algorithm and performing stereo matching; converting the pixel coordinates of the feature points into actual coordinates; and performing curve fitting on the world coordinates of the obtained image feature points to obtain a workpiece contour map. The invention works under varied conditions with high efficiency and high detection precision.
Description
Technical Field
The invention belongs to the technical field of metering detection, and particularly relates to a method for realizing three-dimensional reconstruction of a workpiece based on binocular stereo vision.
Background
As computer vision has drawn increasing attention across industries, its range of application keeps expanding. Computer vision technology creates opportunities but also presents many challenges. Three-dimensional reconstruction, one of its most important research subjects, is widely used in research and daily life; in industry it serves projects such as workpiece welding and die making. However, industrial environments are complex: on one hand, contact measurement is difficult to perform; on the other hand, insufficient illumination and the many cavities in workpieces make non-contact measurement prone to environmental interference and thus hard to carry out effectively.
Disclosure of Invention
The invention aims to provide a method for realizing three-dimensional reconstruction of a workpiece based on binocular stereo vision.
The technical scheme for realizing the purpose of the invention is as follows: a method for realizing three-dimensional reconstruction of a workpiece based on binocular stereo vision comprises the following steps:
step 1: constructing a workpiece image acquisition system, wherein the workpiece image acquisition system comprises a workpiece three-dimensional rotating device and a binocular camera hardware measurement system;
step 2: each time the workpiece three-dimensional rotating device rotates by an angle from its initial position, collecting one frame of workpiece image with the binocular camera hardware system and measuring the inclination angle of the rotating device;
step 3: performing gray-scale processing, ROI region selection and adaptive median filtering on the collected image to obtain a binary image, and extracting contours from the binary image with the Canny edge extraction algorithm;
step 4: extracting feature points from the left and right contour maps with the SIFT algorithm, and performing stereo matching;
step 5: converting the pixel coordinates of the feature points into coordinates in the world coordinate system according to the calibration result obtained in step 1 and the distance measured by the laser radar;
step 6: performing curve fitting on the world coordinates of the obtained image feature points to obtain the workpiece contour map.
Preferably, the binocular camera hardware measurement system comprises two cameras and a laser radar, the centers of the two cameras and the workpiece rotating device are located on the same horizontal line, and the laser radar is located between the two cameras.
Preferably, an inclination sensor is arranged on the workpiece three-dimensional rotating device and used for measuring a rotating angle.
Preferably, the specific steps of extracting the feature points of the left and right contour maps by adopting the SIFT algorithm are as follows:
searching image positions over all scales, and identifying interest points that are invariant to scale and rotation by means of a difference-of-Gaussian function;
at the position of each interest point, determining the position and scale of the feature point with a fitted model.
Preferably, the specific method for determining the positions and the dimensions of the feature points through the fitting model is as follows:
performing curve fitting by using the Taylor expansion of the DoG function in scale space:

D(X) = D(X_0) + \frac{\partial D^T}{\partial X} X + \frac{1}{2} X^T \frac{\partial^2 D}{\partial X^2} X

wherein D(X) is the Gaussian difference (DoG) operator, X = (x, y, σ) denotes the pixel coordinates at a given scale, σ is the scale factor, (x, y) are the coordinates of any pixel point in the image pixel coordinate system, and X_0 = (x_0, y_0, σ_0) is the origin coordinate of the image pixel coordinate system at the original scale;

taking the derivative of the Taylor expansion and setting it to zero gives the offset of the extreme point:

\hat{X} = -\left( \frac{\partial^2 D}{\partial X^2} \right)^{-1} \frac{\partial D}{\partial X}

and the value of the DoG function at the corresponding extreme point:

D(\hat{X}) = D(X_0) + \frac{1}{2} \frac{\partial D^T}{\partial X} \hat{X}
preferably, the conversion relationship between the pixel coordinates and the world coordinates of the feature points is:

Z_C \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/dx & 0 & u_0 \\ 0 & 1/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & T \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix}

wherein (u, v) are the coordinates of the feature point in the pixel coordinate system, dx and dy are the sizes of a single pixel in the x and y directions of the physical coordinate system, f is the focal length of the camera, R is the third-order rotation matrix, T is the translation column vector, and (X_W, Y_W, Z_W) is the position of the point in the world coordinate system.
Compared with the prior art, the invention has the following remarkable advantages:
(1) the method is simple to operate, fast in processing, undemanding of its environment, and suitable for workpiece measurement in different environments;
(2) the method addresses the shadow regions that insufficient illumination produces on images in industrial environments: by rotating the workpiece, images at different angles are obtained, effectively avoiding the influence of shadow regions on the subsequent three-dimensional reconstruction work;
(3) the laser radar measures the distance between the workpiece and the camera, and, combined with image coordinate conversion, the complete three-dimensional coordinates of the workpiece are obtained;
(4) the invention uses the ADXL345 inclination sensor to measure the rotation angle of the workpiece, and feature matching is performed on the left and right images over multiple angles;
(5) the invention ports OpenCV directly to an ARM development board, calls the relevant core functions of the computer vision library, and performs a series of preprocessing operations on the acquired workpiece images, from which the ROI region is selected and identified.
The present invention is described in further detail below with reference to the attached drawing figures.
Drawings
Fig. 1 is a flow chart of a method for realizing three-dimensional reconstruction of a workpiece based on binocular stereo vision.
Fig. 2 is a schematic diagram of a method for realizing three-dimensional reconstruction of a workpiece based on binocular stereo vision.
Fig. 3 is a schematic diagram of the rotation of a workpiece according to the present invention.
Detailed Description
As shown in fig. 1 to 3, a method for realizing three-dimensional reconstruction of a workpiece based on binocular stereo vision includes:
step 1: constructing a workpiece image acquisition system, wherein the workpiece image acquisition system comprises a workpiece three-dimensional rotating device and a binocular camera hardware measurement system, and an inclination sensor is arranged on the workpiece three-dimensional rotating device and used for measuring a rotating angle; the binocular camera hardware measurement system comprises two cameras and a laser radar, the centers of the two cameras and the workpiece rotating device are located on the same horizontal line, and the laser radar is located between the two cameras; the distance between the binocular camera hardware measurement system and the workpiece three-dimensional rotating device is measured, and the two cameras can move to achieve measurement of different distances.
Calibrating the left camera and the right camera to obtain internal parameters and relative attitude parameters of the two cameras; measuring the Z coordinate of the characteristic point of the workpiece by using the laser radar;
in some embodiments, the binocular camera hardware measurement system and the center of the workpiece three-dimensional rotating device are located on the same horizontal line, which reduces the amount of computation in the spatial conversion.
When workpiece images are acquired in an industrial environment, insufficient illumination produces large shadow areas. To reduce their influence, the invention rotates the workpiece and acquires workpiece images at different angles.
Step 2: starting from an initial position, when the workpiece three-dimensional rotating device rotates by an angle, a binocular camera hardware system collects a frame of image, and an inclination angle is measured by an inclination sensor;
the video Capture in OpenCV is used to open the camera, which is used to process the video file or the video stream of the camera, and can control the opening and closing of the camera, and the video stream can be read into the hardware platform and stored in the matrix frame by using the cap > frame, so as to process each frame image in the video.
step 3: analyzing and processing the acquired image, including gray-scale processing, ROI region selection and adaptive median filtering to obtain a binary image, and extracting contours from the binary image with the Canny edge extraction algorithm;
because the video collected by the camera is colorful, the video is processed into a gray level image when being processed, and three components of the gray level image R, G, B in the RGB format are equal and equal to the gray level value. In OpenCV, the functional declaration that enables the conversion of RGB color space to grayscale is: cvcvcvtcolor (const CvArr src, CvArr dst, int code), i.e. converting the original image src to dst, code representing the color space conversion parameter, and using this function to perform the gray-scale conversion on each frame of color image. The specific function is implemented as cvtColor (frame, edges, CV _ BGR2GRAY), where frame is the original image and edges is the grayscale image.
Image denoising is a common step in image preprocessing; common denoising algorithms include adaptive median filtering and Gaussian filtering. Image noise arises mainly during acquisition and transmission; common types are additive noise, multiplicative noise, quantization noise and salt-and-pepper noise. Adaptive median filtering is particularly well suited to salt-and-pepper noise, which appears as abrupt white or black spots, so the invention employs adaptive median filtering to eliminate noise.
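A simplified adaptive median filter can be sketched in pure Python (a stage-A/stage-B scheme with a growing window; the function name and window limit are illustrative and not taken from the patent):

```python
from statistics import median

def adaptive_median(img, s_max=5):
    """Simplified adaptive median filter for salt-and-pepper noise on a
    2D list of gray values. Stage A grows the window until its median is
    not itself an impulse (strictly between the window min and max);
    stage B then keeps the pixel if it is not an impulse, otherwise
    replaces it with the window median."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            zmed = img[y][x]
            s = 1
            while s <= s_max // 2:
                win = [img[j][i]
                       for j in range(max(0, y - s), min(h, y + s + 1))
                       for i in range(max(0, x - s), min(w, x + s + 1))]
                zmin, zmed, zmax = min(win), median(win), max(win)
                if zmin < zmed < zmax:                    # stage A passed
                    if not (zmin < img[y][x] < zmax):     # stage B: impulse
                        out[y][x] = zmed
                    break
                s += 1                                    # grow the window
            else:
                out[y][x] = zmed   # window hit its limit: output the median
    return out
```

Unlike a plain median filter, pixels that are not impulses are passed through unchanged, which preserves edges better — the reason adaptive median filtering suits the contour-extraction step that follows.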
Canny edge detection is performed on the binary image to detect the image edges and obtain the workpiece edge contour map;
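Full Canny combines Gaussian smoothing, Sobel gradients, non-maximum suppression and hysteresis thresholding. As a minimal stand-in for the gradient step it builds on, a pure-Python Sobel gradient-magnitude sketch (a hypothetical helper, not the patent's implementation) might look like:

```python
import math

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def gradient_magnitude(img):
    """Sobel gradient magnitude for the interior pixels of a 2D
    grayscale list -- the gradient step Canny edge detection builds on
    (border pixels are left at zero for simplicity)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * img[y - 1 + j][x - 1 + i]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * img[y - 1 + j][x - 1 + i]
                     for j in range(3) for i in range(3))
            out[y][x] = math.hypot(gx, gy)
    return out
```

Pixels with a large magnitude lie on edges; Canny additionally thins and links them, which is why the patent relies on OpenCV's implementation rather than this bare gradient.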
and 4, extracting feature points from the left and right contour maps by adopting an SIFT algorithm, and performing stereo matching.
The SIFT algorithm is a local feature descriptor used in the field of image processing; it detects key points in an image. The algorithm divides mainly into scale-space extreme value detection, key point localization, and key point feature description.
Scale-space extreme value detection: image locations are searched over all scales, and potential scale- and rotation-invariant interest points are identified with a difference-of-Gaussian function. The scale-space image is described as:

L(x, y, σ) = G(x, y, σ) * I(x, y), \qquad G(x, y, σ) = \frac{1}{2\pi\sigma^2} e^{-(x^2 + y^2)/(2\sigma^2)}

where L(x, y, σ) is the image in scale space, I(x, y) is the input image, G(x, y, σ) is a two-dimensional Gaussian kernel of variable scale, (x, y) are the pixel coordinates, and σ is the scale factor.
Key point localization: at the location of each interest point, the position and scale are determined with a fitted model. In some embodiments, curve fitting uses the Taylor expansion of the DoG function in scale space:

D(X) = D(X_0) + \frac{\partial D^T}{\partial X} X + \frac{1}{2} X^T \frac{\partial^2 D}{\partial X^2} X

wherein D(X) is the Gaussian difference (DoG) operator, X = (x, y, σ) denotes the pixel coordinates at a given scale, σ is the scale factor, (x, y) are the coordinates of any pixel point in the image pixel coordinate system, and X_0 = (x_0, y_0, σ_0) is the origin coordinate of the image pixel coordinate system at the original scale.

Taking the derivative of the Taylor expansion and setting it to zero gives the offset of the extreme point:

\hat{X} = -\left( \frac{\partial^2 D}{\partial X^2} \right)^{-1} \frac{\partial D}{\partial X}

and the value of the DoG function at the corresponding extreme point:

D(\hat{X}) = D(X_0) + \frac{1}{2} \frac{\partial D^T}{\partial X} \hat{X}
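Numerically, the extreme-point offset amounts to solving the 3x3 linear system H · X̂ = −g, where g is the gradient and H the Hessian of D at the sample point. A self-contained pure-Python sketch using Cramer's rule (illustrative names, not the patent's code):

```python
def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def refine_offset(grad, hess):
    """Sub-pixel offset of a DoG extremum: solve hess @ offset = -grad
    by Cramer's rule, i.e. X_hat = -(d2D/dX2)^-1 (dD/dX)."""
    d = det3(hess)
    offset = []
    for k in range(3):                      # replace column k with -grad
        m = [row[:] for row in hess]
        for j in range(3):
            m[j][k] = -grad[j]
        offset.append(det3(m) / d)
    return offset
```

In SIFT, candidates whose offset exceeds 0.5 in any dimension are re-fitted at the neighboring sample, and points with a small |D(X̂)| are rejected as low-contrast.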
and matching the characteristic points obtained from the left image and the right image.
Step 5, converting the coordinates of the characteristic points under the image pixel coordinate system into the coordinates under the world coordinate system according to the calibration result obtained in the step 1 and the distance measured by the laser radar;
step 5.1: converting the pixel coordinates of the image feature points into image physical coordinates;
for the feature point p, its coordinates are (u, v) in the pixel coordinate system and (x, y) in the physical coordinate system. Given that the dimensions of a single pixel in the x and y directions in the physical coordinate system are dx and dy, respectively, the following equations hold:
the arrangement into the form of its secondary transformation matrix is as follows:
in the formula (u)0,v0) Coordinates representing the origin of the physical coordinate system of the image
Step 5.2: converting the physical coordinates of the image feature points into camera coordinates.
The camera coordinate system is a three-dimensional spatial coordinate system whose origin is the optical center of the camera lens, with the Z axis perpendicular to the image physical coordinate plane. By the similar-triangle principle the conversion matrix is:

Z_C \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} X_C \\ Y_C \\ Z_C \\ 1 \end{bmatrix}

where (X_C, Y_C, Z_C) are the camera coordinate system coordinates and f is the focal length of the camera.
Step 5.3, converting the camera coordinates of the image feature points into world coordinates;
finally, the conversion relation between the world coordinate system and the pixel coordinate system is obtained as:

Z_C \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/dx & 0 & u_0 \\ 0 & 1/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & T \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix}

where f is the focal length of the camera, R is the third-order rotation matrix, T is the translation column vector, and (X_W, Y_W, Z_W) are the world coordinate system coordinates.
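The three conversions of step 5 can be chained in a few lines. The following pure-Python sketch (illustrative parameter names, not the patent's code) projects a world point to pixel coordinates for a known calibration R, T, f, dx, dy, u0, v0:

```python
def mat_vec(M, v):
    """Multiply a 3x3 matrix (nested lists) by a 3-vector."""
    return [sum(M[i][k] * v[k] for k in range(3)) for i in range(3)]

def world_to_pixel(Pw, R, T, f, dx, dy, u0, v0):
    """Chain the three conversions of step 5 in the projection direction:
    world -> camera (rotation R, translation T),
    camera -> image plane (pinhole projection, focal length f),
    image plane -> pixel (pixel pitch dx, dy; principal point u0, v0)."""
    Xc, Yc, Zc = [c + t for c, t in zip(mat_vec(R, Pw), T)]
    x, y = f * Xc / Zc, f * Yc / Zc      # similar-triangle projection
    return x / dx + u0, y / dy + v0      # physical -> pixel coordinates

# Identity pose: a point on the optical axis lands on the principal point.
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```

Inverting this chain for a matched feature point requires the depth Z_C, which is why the method combines it with the distance measured by the laser radar.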
step 6: performing curve fitting on the world coordinates of the obtained image feature points to obtain the workpiece contour map.
The invention realizes three-dimensional reconstruction of the workpiece by multi-angle fusion: the workpiece is rotated by a certain angle, the angle data are recorded, and the left and right cameras each collect one frame of image. Image preprocessing, feature matching and other image algorithms are then applied to each frame to extract its feature points and obtain their pixel coordinates, and coordinate conversion of these pixel coordinates yields the actual physical coordinates of the workpiece feature points. The method is simple and convenient to operate, effectively realizes three-dimensional reconstruction of small workpieces, and effectively reduces the influence of image shadow areas.
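Step 6's curve fitting can be illustrated with an ordinary least-squares quadratic fit solved via the normal equations (a pure-Python sketch with illustrative names; the patent does not specify the fitting model):

```python
def fit_quadratic(pts):
    """Least-squares fit of y = a + b*x + c*x**2 through (x, y) points,
    solving the 3x3 normal equations with Cramer's rule."""
    sx = [sum(x ** k for x, _ in pts) for k in range(5)]     # sums of x^0..x^4
    b = [sum(y * x ** k for x, y in pts) for k in range(3)]  # right-hand side
    A = [[sx[i + j] for j in range(3)] for i in range(3)]    # normal matrix

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det3(A)
    coeffs = []
    for k in range(3):                  # Cramer: replace column k with b
        m = [row[:] for row in A]
        for i in range(3):
            m[i][k] = b[i]
        coeffs.append(det3(m) / d)
    return coeffs                       # [a, b, c]
```

Applied slice by slice to the world coordinates of the matched feature points, such fits join the sparse points into continuous contour curves of the workpiece.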
Claims (6)
1. A method for realizing three-dimensional reconstruction of a workpiece based on binocular stereo vision is characterized by comprising the following steps:
step 1: constructing a workpiece image acquisition system, wherein the workpiece image acquisition system comprises a workpiece three-dimensional rotating device and a binocular camera hardware measurement system;
step 2: the binocular camera hardware system collects a frame of workpiece image every time the workpiece three-dimensional rotating device rotates by an angle from the initial position, and the inclination angle of the workpiece three-dimensional rotating device is measured;
step 3: performing gray-scale processing, ROI region selection and adaptive median filtering on the collected image to obtain a binary image, and extracting contours from the binary image with the Canny edge extraction algorithm;
step 4: extracting feature points from the left and right contour maps with the SIFT algorithm, and performing stereo matching;
step 5: converting the pixel coordinates of the feature points into coordinates in the world coordinate system according to the calibration result obtained in step 1 and the distance measured by the laser radar;
step 6: performing curve fitting on the world coordinates of the obtained image feature points to obtain the workpiece contour map.
2. The method for achieving three-dimensional reconstruction of a workpiece based on binocular stereo vision according to claim 1, wherein the binocular camera hardware measurement system comprises two cameras and a laser radar, centers of the two cameras and the workpiece rotating device are located on the same horizontal line, and the laser radar is located between the two cameras.
3. The method for achieving three-dimensional reconstruction of a workpiece based on binocular stereo vision according to claim 1, wherein a tilt sensor is arranged on the workpiece three-dimensional rotating device, and the tilt sensor is used for measuring a rotating angle.
4. The method for realizing three-dimensional reconstruction of a workpiece based on binocular stereo vision according to claim 1, wherein the specific steps of extracting feature points from the left and right contour maps by adopting an SIFT algorithm are as follows:
searching image positions over all scales, and identifying interest points that are invariant to scale and rotation by means of a difference-of-Gaussian function;
at the position of each interest point, determining the position and scale of the feature point with a fitted model.
5. The method for achieving three-dimensional reconstruction of a workpiece based on binocular stereo vision according to claim 4, wherein the specific method for determining the positions and the dimensions of the feature points through the fitting model comprises the following steps:
performing curve fitting by using the Taylor expansion of the DoG function in scale space:

D(X) = D(X_0) + \frac{\partial D^T}{\partial X} X + \frac{1}{2} X^T \frac{\partial^2 D}{\partial X^2} X

wherein D(X) is the Gaussian difference (DoG) operator, X = (x, y, σ) denotes the pixel coordinates at a given scale, σ is the scale factor, (x, y) are the coordinates of any pixel point in the image pixel coordinate system, and X_0 = (x_0, y_0, σ_0) is the origin coordinate of the image pixel coordinate system at the original scale;

taking the derivative of the Taylor expansion and setting it to zero gives the offset of the extreme point:

\hat{X} = -\left( \frac{\partial^2 D}{\partial X^2} \right)^{-1} \frac{\partial D}{\partial X}

and the value of the DoG function at the corresponding extreme point:

D(\hat{X}) = D(X_0) + \frac{1}{2} \frac{\partial D^T}{\partial X} \hat{X}
6. The method for achieving three-dimensional reconstruction of a workpiece based on binocular stereo vision according to claim 1, wherein the conversion relationship between the pixel coordinates and the world coordinates of the feature points is:

Z_C \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/dx & 0 & u_0 \\ 0 & 1/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & T \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix}

wherein (u, v) are the coordinates of the feature point in the pixel coordinate system, dx and dy are the sizes of a single pixel in the x and y directions of the physical coordinate system, f is the focal length of the camera, R is the third-order rotation matrix, T is the translation column vector, and (X_W, Y_W, Z_W) is the position of the point in the world coordinate system.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011242913.6A CN112819935A (en) | 2020-11-09 | 2020-11-09 | Method for realizing three-dimensional reconstruction of workpiece based on binocular stereo vision |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011242913.6A CN112819935A (en) | 2020-11-09 | 2020-11-09 | Method for realizing three-dimensional reconstruction of workpiece based on binocular stereo vision |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112819935A (en) | 2021-05-18
Family
ID=75853361
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011242913.6A Pending CN112819935A (en) | 2020-11-09 | 2020-11-09 | Method for realizing three-dimensional reconstruction of workpiece based on binocular stereo vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112819935A (en) |
Non-Patent Citations (1)
Title |
---|
Fan Jingli, "Research on High-Precision Binocular Stereo Vision Measurement Technology with Zoom", China Master's Theses Full-text Database, Information Science and Technology Series *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113390344A (en) * | 2021-07-06 | 2021-09-14 | 桂林电子科技大学 | Method for rapidly detecting dimension and geometric tolerance of stepped shaft |
CN113762544A (en) * | 2021-08-26 | 2021-12-07 | 深圳证券通信有限公司 | Intelligent machine room equipment position inspection and management method based on computer vision |
CN117218301A (en) * | 2023-11-09 | 2023-12-12 | 常熟理工学院 | Elevator traction sheave groove reconstruction method and system based on multi-channel stereoscopic vision |
CN117218301B (en) * | 2023-11-09 | 2024-02-09 | 常熟理工学院 | Elevator traction sheave groove reconstruction method and system based on multi-channel stereoscopic vision |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109215063B (en) | Registration method of event trigger camera and three-dimensional laser radar | |
CN112819935A (en) | Method for realizing three-dimensional reconstruction of workpiece based on binocular stereo vision | |
CN109883533B (en) | Low-frequency vibration measurement method based on machine vision | |
CN107729893B (en) | Visual positioning method and system of die spotting machine and storage medium | |
CN111476841B (en) | Point cloud and image-based identification and positioning method and system | |
CN109801333B (en) | Volume measurement method, device and system and computing equipment | |
CN107452030B (en) | Image registration method based on contour detection and feature matching | |
CN107084680B (en) | Target depth measuring method based on machine monocular vision | |
JP6899189B2 (en) | Systems and methods for efficiently scoring probes in images with a vision system | |
CN108007388A (en) | A kind of turntable angle high precision online measuring method based on machine vision | |
CN112200203B (en) | Matching method of weak correlation speckle images in oblique field of view | |
CN106897995B (en) | A kind of components automatic identifying method towards Automatic manual transmission process | |
CN114029946A (en) | Method, device and equipment for guiding robot to position and grab based on 3D grating | |
Tran et al. | Non-contact gap and flush measurement using monocular structured multi-line light vision for vehicle assembly | |
CN111612765A (en) | Method for identifying and positioning circular transparent lens | |
CN108447092B (en) | Method and device for visually positioning marker | |
CN116129037A (en) | Visual touch sensor, three-dimensional reconstruction method, system, equipment and storage medium thereof | |
CN115937203A (en) | Visual detection method, device, equipment and medium based on template matching | |
CN113689365B (en) | Target tracking and positioning method based on Azure Kinect | |
JP2003216931A (en) | Specific pattern recognizing method, specific pattern recognizing program, specific pattern recognizing program storage medium and specific pattern recognizing device | |
CN114485433A (en) | Three-dimensional measurement system, method and device based on pseudo-random speckles | |
TWI659390B (en) | Data fusion method for camera and laser rangefinder applied to object detection | |
CN117496401A (en) | Full-automatic identification and tracking method for oval target points of video measurement image sequences | |
CN111243006A (en) | Method for measuring liquid drop contact angle and size based on image processing | |
CN116823708A (en) | PC component side mold identification and positioning research based on machine vision |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20210518 |