CN113327246A - Three-dimensional visual inspection method for rivet forming quality based on fringe projection and image texture constraint - Google Patents


Info

Publication number
CN113327246A
CN113327246A
Authority
CN
China
Prior art keywords
dimensional
point
point cloud
texture
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110746882.6A
Other languages
Chinese (zh)
Inventor
赵慧洁
王云帆
梁莹
姜宏志
李旭东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Shanghai Space Precision Machinery Research Institute
Original Assignee
Beihang University
Shanghai Space Precision Machinery Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University, Shanghai Space Precision Machinery Research Institute filed Critical Beihang University
Priority to CN202110746882.6A priority Critical patent/CN113327246A/en
Publication of CN113327246A publication Critical patent/CN113327246A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G06T 5/90
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/75 Determining position or orientation of objects or cameras using feature-based methods involving models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20172 Image enhancement details
    • G06T 2207/20208 High dynamic range [HDR] image processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30108 Industrial image inspection
    • G06T 2207/30164 Workpiece; Machine component
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30244 Camera pose

Abstract

A three-dimensional visual inspection method for rivet forming quality, based on fringe projection and image texture constraint, acquires the three-dimensional topography of the riveted part surface and evaluates the forming quality of each rivet point. The method is built on a binocular fringe projection sensor and reconstructs dense three-dimensional points on the measured surface using multi-brightness high-dynamic-range projection and four-step phase-shift stereo vision. To handle occlusion and the step edge at the upset head, a high-dynamic-range two-dimensional texture map is synthesized from the multi-brightness fringe images; the upset-head contour is extracted from this texture map and used for head identification and for segmenting the point cloud into upper- and lower-surface parts. Edge points are then reconstructed from the fitted plane and the camera geometry, outliers are removed, a spatial ellipse is fitted, and its diameter and centre are estimated. The upset-head height is computed from the fitted lower surface and the extracted centre. The invention enables high-precision three-dimensional measurement of the riveted part and its upset heads, extracts rivet forming quality parameters in a non-contact, digital, and automatic manner, and improves detection accuracy and efficiency.

Description

Three-dimensional visual inspection method for rivet forming quality based on fringe projection and image texture constraint
Technical Field
The invention relates to a three-dimensional visual inspection method for rivet forming quality based on fringe projection and image texture constraint, which can be used for three-dimensional scanning of the riveted part surface and automatic acquisition of the diameter and height of the rivet upset head. The invention belongs to the field of machine vision.
Background
A launch-vehicle riveted cabin section contains hundreds of thousands of rivets, and their forming quality directly affects the connection strength of the rocket body structure; the effect is even more pronounced for reusable structural cabin sections. On the one hand, existing riveting quality inspection relies mainly on non-destructive means such as electromagnetic induction and ultrasonic testing, which cannot acquire the geometric topography of the upset head and the riveted surface. On the other hand, the upset head and stringer surfaces differ in reflectivity and the step edge is discontinuous, which severely disturbs conventional three-dimensional optical measurement: accuracy cannot be guaranteed and measurement may even fail. At present, rivet forming quality is still ensured mainly by operator experience; manual measurement after riveting is impractical, the workload is enormous, visual fatigue arises easily, and the validity and reliability of the riveted joints cannot be dependably guaranteed.
An automatic rivet forming quality inspection technique based on visual recognition can therefore achieve non-contact, digital inspection and automatic judgment of riveting quality, accurately find and locate unqualified rivets so that they can be replaced in time, and enable comprehensive management of riveting quality data for a rocket cabin section. This is of great significance for improving the efficiency and quality of cabin-section riveting inspection.
Disclosure of Invention
To solve problems such as failure of three-dimensional measurement at the riveted part surface and the upset-head edge, caused by the stepped upset-head edge, non-uniform surface reflectivity, and occlusion by protrusions, a three-dimensional scanning technique combining fringe projection and texture constraint is designed. The system uses a binocular fringe projection sensor; during measurement, fringe images are acquired and preliminary three-dimensional data are reconstructed by the grating phase method. For non-uniform reflectivity, multi-brightness projection is used for high-dynamic-range measurement. For the loss of edge accuracy, texture images are first computed from the fringe images and synthesized into a multi-brightness high-dynamic-range two-dimensional texture map; ellipse features are then extracted from this texture map; finally, edge refinement reconstruction is performed in combination with the preliminary three-dimensional data to calculate the upset-head surface diameter. The texture image is also used to segment the three-dimensional data of the upset-head upper surface and the surrounding bottom surface, on which plane fitting and height calculation are performed respectively.
The technical problem to be solved by the invention is to design a measurement technique that performs high-precision three-dimensional reconstruction of the riveted part surface and its upset heads and automatically extracts the upset-head height and diameter. On the one hand, non-uniform upset-head reflectivity, detector saturation, and phase errors at the step edge give the traditional binocular fringe projection scanning technique poor local accuracy at the upset head, preventing high-precision extraction of forming quality parameters such as the diameter. On the other hand, feature extraction directly on unordered point clouds is slow and can hardly deliver high-quality, high-precision features for large numbers of upset heads.
The technical solution of the invention, a three-dimensional visual inspection scheme for rivet forming quality based on fringe projection and image texture constraint, comprises the following:
(1) The main body of the riveted-part surface three-dimensional data is obtained by a multi-brightness binocular grating phase method. For edge points, the system extracts upset-head edge pixels from a high-dynamic-range two-dimensional texture image synthesized from the fringe patterns, and solves accurate three-dimensional edge coordinates using the grating-phase three-dimensional data together with the pinhole camera model.
(2) To solve measurement failure caused by non-uniform reflectivity, the method goes beyond the traditional multi-brightness projection grating method by introducing multi-brightness texture synthesis: texture maps are computed separately for the fringe images collected at each brightness level, and a high-dynamic-range image synthesis algorithm merges them into a high-dynamic-range two-dimensional texture image used for upset-head segmentation and edge reconstruction.
(3) Rivet upset-head extraction is realized by a combined 2D-3D method: the upset head is identified by its ellipse feature in the two-dimensional texture image, the upper surface and the surrounding bottom surface are segmented, the corresponding point cloud data are looked up from the segmented image, and parameters such as height and diameter are extracted automatically.
Compared with the prior art, the invention has the advantages that:
(1) The invention optimally designs structural parameters such as the field of view and baseline distance according to the accuracy requirements for inspecting upset-head forming quality. Compared with the traditional phase-based grating projection method, it additionally derives texture images from the fringe images and refines edge-point reconstruction with them, improving the step-edge reconstruction accuracy at the rivet head over the traditional grating phase method.
(2) The invention uses the synthesized high-dynamic-range two-dimensional texture image to finely segment the three-dimensional point cloud and extract features such as the upset-head surface diameter and height. Compared with traditional feature extraction on unordered point clouds, the method fully exploits the fringe projection sensor's reconstruction process, achieves point-cloud segmentation with the help of the high-dynamic-range texture image, and improves the efficiency of automatic forming-quality feature extraction.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
FIG. 2 is a schematic view of the measurement system. In the figure, 1 is the measured riveted part surface, 2 is a rivet-point upset head, and 3 is the binocular fringe projection sensor.
FIG. 3 is a schematic diagram of sub-pixel extraction. In the figure, 1 is the inner normal of a pixel, 2 is the outer normal of a pixel, and 3 is an edge contour pixel to be detected.
FIG. 4 is a schematic diagram of edge reconstruction according to the present invention. In the figure, 1 is the fitting plane of the upset-head upper surface, 2 is the riveted part, 3 is a three-dimensional point reconstructed on the upset-head edge, 4 is the upset-head contour in the left-camera texture map, and 5 is the contour and contour points extracted from the right-camera texture map.
Detailed Description
For a better understanding of the present invention, the technical solutions of the present invention will be described in detail below with reference to the accompanying drawings and examples.
Referring to FIG. 1, the three-dimensional visual inspection method for rivet forming quality based on fringe projection and image texture constraint comprises the following steps:
1. Before inspecting the riveted part, preparation is required: debugging the measurement software, calibrating the binocular fringe projection sensor, determining the multi-brightness parameters, and so on.
2. Scan the riveted part surface three-dimensionally with the binocular fringe projection sensor to obtain fringe image data. Vary the projector brightness and exposure time to capture fringe images at different brightness levels. Perform three-dimensional reconstruction with the multi-brightness fringe projection method and the four-step phase-shift grating method to obtain initial three-dimensional data.
In this method, a high-dynamic-range fringe pattern is synthesized according to the maximum fringe modulation principle. First, the mask Mask_n(x, y) is calculated:

$$\mathrm{Mask}_n(x,y) = \begin{cases} 1, & B_n(x,y) = \max_k B_k(x,y) \ \text{and the pixel is not saturated,} \\ 0, & \text{otherwise,} \end{cases}$$

where B_n is the fringe modulation at the n-th brightness level. After obtaining the masks, each is multiplied with the corresponding fringe images and the results are summed to obtain the high-dynamic-range fringe pattern F_i(x, y):

$$F_i(x,y) = \sum_n \mathrm{Mask}_n(x,y)\, I_{in}(x,y),$$

where I_in(x, y) is the i-th phase-shift fringe image at the n-th brightness level. The wrapped phase is then obtained by the four-step phase-shift method, unwrapped with the multi-frequency heterodyne method, and three-dimensional reconstruction is performed according to the binocular stereo vision principle.
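The multi-brightness merge and four-step phase computation above can be sketched in Python with NumPy. This is a minimal illustration, not the patented implementation: the phase-shift convention (0, π/2, π, 3π/2), the saturation threshold `sat`, and all function names are assumptions.

```python
import numpy as np

def four_step_modulation(I1, I2, I3, I4):
    # Fringe modulation B for four-step phase shifts of 0, pi/2, pi, 3*pi/2.
    return 0.5 * np.sqrt((I1 - I3) ** 2 + (I2 - I4) ** 2)

def hdr_fringe(stacks, sat=250.0):
    """Merge fringe images captured at several brightness levels.

    stacks: list over brightness levels; each entry holds the four
    phase-shifted images [I1, I2, I3, I4] as 2-D float arrays.
    For every pixel, the brightness level with the highest unsaturated
    modulation wins (the Mask_n selection described in the text).
    Returns the merged fringe images and the wrapped phase.
    """
    B = []
    for level in stacks:
        b = four_step_modulation(*level)
        b[np.max(level, axis=0) >= sat] = -1.0  # invalidate saturated pixels
        B.append(b)
    best = np.argmax(np.stack(B), axis=0)       # per-pixel winning level
    merged = []
    for m in range(4):
        level_m = np.stack([level[m] for level in stacks])
        merged.append(np.take_along_axis(level_m, best[None], axis=0)[0])
    I1, I2, I3, I4 = merged
    phase = np.arctan2(I4 - I2, I1 - I3)        # wrapped phase in (-pi, pi]
    return merged, phase
```

The patent text sums the masked images per brightness level; the `argmax` selection above is the equivalent one-winner-per-pixel formulation when the masks are disjoint.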
3. Synthesize the high-dynamic-range two-dimensional texture map from the fringe images. First, for the collected fringe patterns, the texture image at each brightness level is calculated as

$$I_{0n}(x,y) = \frac{1}{4}\sum_{m=1}^{4} I_{mn}(x,y),$$

where I_mn(x, y) is the m-th phase-shift fringe image at the n-th brightness level and I_0n(x, y) is the texture image at the n-th brightness level.

Then the high-dynamic-range two-dimensional texture image is synthesized with the Debevec high-dynamic-range imaging technique: the product of projection brightness and exposure time is taken as an equivalent exposure time, an equivalent camera response curve is estimated, and this curve is used to merge the texture images I_0n(x, y) of the different brightness levels into a high-dynamic-range image.
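A hedged sketch of step 3: the texture at each brightness level is the mean of the four phase-shifted images, and the levels are merged radiometrically. For brevity the sketch assumes a linear camera response with a hat-shaped weight instead of estimating the full Debevec response curve; the thresholds `sat` and `dark` and all names are illustrative.

```python
import numpy as np

def texture_from_fringes(fringes):
    """Texture image as the mean of the four phase-shifted fringe images
    (the sinusoidal terms cancel, leaving the background intensity)."""
    return np.mean(fringes, axis=0)

def merge_hdr(textures, exposures, sat=250.0, dark=5.0):
    """Hat-weighted radiance merge of textures taken at equivalent exposures
    (projection brightness x exposure time), assuming a linear response."""
    num = np.zeros_like(textures[0], dtype=float)
    den = np.zeros_like(num)
    for I, t in zip(textures, exposures):
        # trust mid-range pixels most; nearly ignore saturated/dark ones
        w = np.where((I > dark) & (I < sat),
                     1.0 - np.abs(I - 127.5) / 127.5 + 1e-3, 1e-6)
        num += w * (I / t)   # per-pixel radiance estimate for this exposure
        den += w
    return num / den
```

In practice the response-curve estimation could be delegated to an HDR library; the structure above only shows how the per-level textures feed the merge.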
4. Extract the whole-pixel upset-head edge on the high-dynamic-range two-dimensional texture image. The upset-head region is first located with the Hough transform, and a region of interest (ROI) of 1.5 times the diameter is defined for block-wise processing. For each ROI, the Canny operator extracts edges. Each edge contour is then fitted with an ellipse and screened by mean gray level, major-to-minor axis ratio, and area threshold; edges with an excessive axis ratio, small area, or low mean gray level are removed.
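The screening stage of step 4 could look like the following sketch. The detection itself (Hough transform, Canny, ellipse fitting) is routinely done with OpenCV (`cv2.HoughCircles`, `cv2.Canny`, `cv2.fitEllipse`); here only the threshold screening is shown, and the dictionary keys and threshold values are illustrative assumptions, not values from the patent.

```python
import math

def screen_ellipses(candidates, max_axis_ratio=1.5, min_area=200.0, min_gray=80.0):
    """Keep only contours whose fitted ellipse plausibly outlines an upset head.

    candidates: list of dicts with the fitted ellipse's major/minor axis
    lengths and the contour's mean gray level (keys are illustrative).
    """
    kept = []
    for c in candidates:
        ratio = c["major"] / c["minor"]
        area = math.pi * (c["major"] / 2.0) * (c["minor"] / 2.0)
        if ratio <= max_axis_ratio and area >= min_area and c["mean_gray"] >= min_gray:
            kept.append(c)  # passes axis-ratio, area, and gray-level tests
    return kept
```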
5. Extract the sub-pixel upset-head edge on the high-dynamic-range two-dimensional texture image. Shadow false contours are first removed from the preliminarily screened edges: for each contour pixel, take the inner normal pointing to the centre of the contour-fitting ellipse and compute the mean gray level of four points along the inner and outer normals respectively; the pixel is retained only if the inner-normal mean gray level is greater than the outer-normal mean gray level. The Steger operator is then applied to the ROI gray image to obtain sub-pixel edge coordinates.
6. Segment the local point clouds of the upset-head upper and lower surfaces. According to the contour extracted from the high-dynamic-range texture image, the three-dimensional points corresponding to pixels inside the closed upset-head contour are taken as the upper-surface point cloud, and those outside the contour as the lower-surface (riveted bottom surface) point cloud. The inside of the upset head is determined by the inner normals computed in step 5. Because the two-dimensional image coordinates of the binocular system correspond one-to-one with the three-dimensional points, the corresponding three-dimensional points are looked up from the image coordinates, realizing the upper/lower surface segmentation.
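Because each valid pixel of the binocular reconstruction maps one-to-one to a 3-D point, the segmentation of step 6 reduces to boolean masking. A sketch (the array layout and names are assumptions; the inside-contour mask could come from e.g. `cv2.fillPoly` on the extracted edge):

```python
import numpy as np

def split_point_cloud(points, valid, inside_mask):
    """Split the reconstructed cloud into upper/lower surface points.

    points:      (H, W, 3) array, one 3-D point per image pixel.
    valid:       (H, W) bool, True where reconstruction succeeded.
    inside_mask: (H, W) bool, True for pixels inside the closed
                 upset-head contour.
    """
    upper = points[valid & inside_mask]    # upset-head upper surface
    lower = points[valid & ~inside_mask]   # surrounding riveted bottom surface
    return upper, lower
```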
7. Reconstruct the edge point coordinates. From the segmentation result above, take the three-dimensional points reconstructed by the fringe projection sensor on the upset-head upper surface and fit a plane. For the upset-head contours extracted in the left and right cameras, the matching correspondence is found using the epipolar constraint. Each extracted edge point is then obtained by the monocular reconstruction principle, intersecting the spatial ray through the image point with the upset-head upper surface:

$$s_i\, p_i = M_i X, \qquad n^{\mathsf{T}} X + d = 0,$$

where n is the normal of the upset-head upper-surface plane, i denotes the left or right camera, M_i is the camera calibration (projection) matrix, p_i is the image point in homogeneous coordinates, s_i is a scale factor, and X is the spatial three-dimensional point.
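The monocular edge reconstruction of step 7 intersects the camera ray through each contour pixel with the fitted upper-surface plane. A sketch under the pinhole model, with illustrative names:

```python
import numpy as np

def back_project_edge_point(M, p, n, d):
    """Intersect the camera ray through pixel p with the plane n.X + d = 0.

    M: 3x4 projection matrix (K [R|t]); p: (u, v) image point;
    n, d: fitted upper-surface plane. Returns the 3-D edge point X.
    """
    A = M[:, :3]
    b = M[:, 3]
    ph = np.array([p[0], p[1], 1.0])
    Ainv = np.linalg.inv(A)
    origin = -Ainv @ b        # camera centre
    direction = Ainv @ ph     # ray direction through the pixel
    # Solve n.(origin + s*direction) + d = 0 for the ray parameter s.
    s = -(d + n @ origin) / (n @ direction)
    return origin + s * direction
```

With `M = [I | 0]` the camera sits at the origin; real use would pass the calibrated 3x4 projection matrix of the left or right camera.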
8. Remove outliers. Side outliers are removed first: fit the lower-surface plane from the segmented point cloud, compute each ROI point's distance to the upper and lower surfaces, keep points within the threshold as plane points, and delete the rest as outliers. Erroneous edge reconstruction points are then removed: project the reconstructed edge points of step 7 together with the in-plane points onto a common plane, count the 8-neighbourhood pixels of each point, and reject edge points whose neighbour count is below the threshold, yielding an accurate edge.
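The first (plane-distance) outlier test of step 8 is a one-liner given the fitted plane. A sketch assuming a unit plane normal and an illustrative tolerance:

```python
import numpy as np

def remove_plane_outliers(points, n, d, tol=0.05):
    """Keep points whose distance to the plane n.X + d = 0 is within tol.

    points: (N, 3) array; n: unit plane normal; d: plane offset.
    """
    dist = np.abs(points @ n + d)   # point-to-plane distances
    return points[dist <= tol]
```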
9. Extract the forming quality parameters (diameter and height). Take the union of the reconstructed three-dimensional ellipse points, refit a spatial ellipse, and take the mean of the major and minor axes as the upper-surface diameter. Apply a constant compensation to obtain the mid-section diameter of the upset head; the compensation is calibrated in advance by contact measurement of riveted upset heads, taking the average deviation as the compensation value. Extract the three-dimensional centre point and compute its perpendicular distance to the lower surface as the upset-head height. Defects are also identified on the acquired high-dynamic-range texture image. If any of these characteristics deviates from the preset standard, the rivet point is judged unqualified.
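The parameter extraction of step 9 reduces to two small formulas once the spatial ellipse and the lower-surface plane are fitted. A sketch (the constant compensation is a calibration input, assumed given; the plane normal is assumed unit-norm):

```python
import numpy as np

def upset_head_height(center, n, d):
    """Perpendicular distance from the upper-surface ellipse centre to the
    fitted lower-surface plane n.X + d = 0."""
    return abs(float(np.dot(n, center) + d))

def upset_head_diameter(major_axis, minor_axis, compensation=0.0):
    """Upper-surface diameter as the mean of the ellipse axes, plus the
    pre-calibrated constant compensation giving the mid-section diameter."""
    return 0.5 * (major_axis + minor_axis) + compensation
```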

Claims (5)

1. A three-dimensional visual inspection method for rivet forming quality based on fringe projection and image texture constraint, characterized by comprising the following steps:
(1) computing texture images from the acquired fringe images and synthesizing a high-dynamic-range two-dimensional texture map using multi-brightness projection;
(2) segmenting the upset-head region according to the contour extracted from the high-dynamic-range texture image, and splitting the scanned point cloud into upset-head upper-surface and lower-surface point clouds accordingly;
(3) fitting a plane to the upper-surface point cloud, reconstructing the three-dimensional upset-head edge points according to the pinhole model and the calibration parameters, and removing outliers;
(4) fitting the upper-surface edge points to extract the diameter and centre of the upper-surface three-dimensional contour, and applying a constant compensation to the upper-surface diameter to obtain the mid-section diameter of the upset head.
2. The method according to claim 1, characterized in that: in step (1), the fringe images acquired at multiple brightness settings (projection brightness and exposure time) of the fringe projection sensor are used to synthesize the high-dynamic-range two-dimensional texture image, so that both high-dynamic-range three-dimensional points and high-dynamic-range texture information of the reflective riveted part surface are obtained.
3. The method according to claim 1, characterized in that: in step (2), in the binocular fringe projection sensor, the high-dynamic-range two-dimensional texture image assists local upset-head point cloud segmentation: the corresponding three-dimensional point cloud is looked up from the sub-pixel coordinates of the rivet upset-head contour extracted in the texture image, taking the three-dimensional points inside the closed contour as the upper-surface point cloud and those outside as the lower-surface point cloud.
4. The method according to claim 1, characterized in that: in step (3), according to the camera calibration parameters and the pinhole model, the intersection of the ray through each upset-head contour point with the fitted upper-surface plane is computed to solve the three-dimensional edge coordinates. A reasonable threshold is set from the edge point information to remove side outliers. The upper-surface point cloud reconstructed by binocular fringe projection is projected, together with the edge reconstruction points, onto a two-dimensional plane, and a reasonable edge threshold is set to remove erroneous texture-reconstructed edge points.
5. The method according to claim 1, characterized in that: in step (4), a spatial ellipse is first fitted to the extracted three-dimensional points, the upper-surface diameter is estimated as the mean of the major and minor axes, and the mid-section diameter is calculated with a pre-calibrated constant compensation and output as the final upset-head diameter.
CN202110746882.6A 2021-07-01 2021-07-01 Three-dimensional visual inspection technology for rivet forming quality based on stripe projection and image texture constraint Pending CN113327246A (en)

Publications (1)

Publication Number Publication Date
CN113327246A true CN113327246A (en) 2021-08-31

Family

ID=77425350

Country Status (1)

Country Link
CN (1) CN113327246A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108020172A (en) * 2016-11-01 2018-05-11 中国科学院沈阳自动化研究所 A kind of aircraft surface workmanship detection method based on 3D data
US20200166333A1 (en) * 2016-12-07 2020-05-28 Ki 'an Chishine Optoelectronics Technology Co., Ltd. Hybrid light measurement method for measuring three-dimensional profile
CN112101130A (en) * 2020-08-21 2020-12-18 上海航天精密机械研究所 Rivet forming quality detection and judgment system and method based on visual identification technology


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YUNFAN WANG et al.: "High-Accuracy 3-D Sensor for Rivet Inspection Using Fringe Projection Profilometry with Texture Constraint", Sensors *
LI Hongwei (李红卫): "Research on a rivet contour extraction algorithm for aircraft airframe structures based on three-dimensional point clouds", Journal of Mechanical & Electrical Engineering (机电工程) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113781500A (en) * 2021-09-10 2021-12-10 中国科学院自动化研究所 Method and device for segmenting cabin segment image instance, electronic equipment and storage medium
CN113781500B (en) * 2021-09-10 2024-04-05 中国科学院自动化研究所 Method, device, electronic equipment and storage medium for segmenting cabin image instance
CN116772746A (en) * 2023-08-17 2023-09-19 湖南视比特机器人有限公司 Flatness profile measuring method using spot light pattern detection and storage medium

Similar Documents

Publication Publication Date Title
CN106839977B (en) Shield dregs volume method for real-time measurement based on optical grating projection binocular imaging technology
CN109658398B (en) Part surface defect identification and evaluation method based on three-dimensional measurement point cloud
CN110017773B (en) Package volume measuring method based on machine vision
CN106709947B (en) Three-dimensional human body rapid modeling system based on RGBD camera
CN104930985B (en) Binocular vision 3 D topography measurement method based on space-time restriction
CN110443836A (en) A kind of point cloud data autoegistration method and device based on plane characteristic
CN102135417B (en) Full-automatic three-dimension characteristic extracting method
CN108109139B (en) Airborne LIDAR three-dimensional building detection method based on gray voxel model
CN113327246A (en) Three-dimensional visual inspection technology for rivet forming quality based on stripe projection and image texture constraint
CN102496161B (en) Method for extracting contour of image of printed circuit board (PCB)
CN106526593B (en) Sub-pixel-level corner reflector automatic positioning method based on the tight imaging model of SAR
CN105783786B (en) Part chamfering measuring method and device based on structured light vision
US10572987B2 (en) Determination of localised quality measurements from a volumetric image record
CN109580630A (en) A kind of visible detection method of component of machine defect
CN104359403A (en) Plane part size measurement method based on sub-pixel edge algorithm
CN104567758B (en) Stereo imaging system and its method
CN104748683A (en) Device and method for online and automatic measuring numerical control machine tool workpieces
CN111028221B (en) Airplane skin butt-joint measurement method based on linear feature detection
CN113324478A (en) Center extraction method of line structured light and three-dimensional measurement method of forge piece
CN102798349A (en) Three-dimensional surface extraction method based on equal-gray line search
CN111179335A (en) Standing tree measuring method based on binocular vision
CN103884294B (en) The method and its device of a kind of infrared light measuring three-dimensional morphology of wide visual field
CN115096206A (en) Part size high-precision measurement method based on machine vision
CN113390357B (en) Rivet levelness measuring method based on binocular multi-line structured light
Zhang et al. Accurate profile measurement method for industrial stereo-vision systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210831