CN111008602B - Scribing feature extraction method combining two-dimensional vision and three-dimensional vision for small-curvature thin-wall part - Google Patents

Scribing feature extraction method combining two-dimensional vision and three-dimensional vision for small-curvature thin-wall part

Info

Publication number
CN111008602B
CN111008602B (application CN201911244323.4A)
Authority
CN
China
Prior art keywords: dimensional, scribing, measured, images, camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911244323.4A
Other languages
Chinese (zh)
Other versions
CN111008602A (en)
Inventor
李文龙 (Li Wenlong)
陈栋 (Chen Dong)
王振忠 (Wang Zhenzhong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Haizhichen Industrial Equipment Co ltd
Original Assignee
Qingdao Haizhichen Industrial Equipment Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Haizhichen Industrial Equipment Co ltd filed Critical Qingdao Haizhichen Industrial Equipment Co ltd
Priority to CN201911244323.4A priority Critical patent/CN111008602B/en
Publication of CN111008602A publication Critical patent/CN111008602A/en
Application granted granted Critical
Publication of CN111008602B publication Critical patent/CN111008602B/en

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/64 Three-dimensional objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85 Stereo camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757 Matching configurations of points or features
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Abstract

The invention discloses a scribing feature extraction method combining two-dimensional and three-dimensional vision for a small-curvature thin-wall part, belonging to the field of machine vision. The method comprises a multi-camera measurement system calibration step S1, a marking point arrangement step S2, a step S3 of acquiring images of the scribed part to be measured, and a step S4 of extracting scribing features from a plurality of images acquired in a single acquisition. Feature point matching is performed on the scribing features in the plurality of images, the correspondence of the scribing features across the images is established, and three-dimensional reconstruction is performed to obtain three-dimensional point cloud data of the scribing features of the part to be measured; the pose is then changed multiple times, the three-dimensional point cloud data of the scribing features measured at each pose are extracted, and the point cloud data of the multiple measurements are fused through the marking points. The invention solves the technical problem that robotic trimming of aircraft skin is difficult to process according to a design model.

Description

Scribing feature extraction method combining two-dimensional vision and three-dimensional vision for small-curvature thin-wall part
Technical Field
The invention belongs to the field of three-dimensional vision, and particularly relates to a scribing feature extraction method combining two-dimensional vision and three-dimensional vision for a small-curvature thin-wall part.
Background
Determining the machining allowance of a part by comparison scribing is a common means in existing aircraft skin repair: because errors accumulate during the manufacture and assembly of the aircraft skin, part of the skin must be assembled with a certain allowance reserved for repair. As shown in fig. 1, the aircraft skin is identified by the numeral 100, the manual score line by the numeral 101, and the part boundary of the aircraft skin 100 by the numeral 103; the machining allowance 105 of the aircraft skin lies between the manual score line 101 and the part boundary 103.
Traditionally, the aircraft skin is mostly repaired manually (i.e. by hand grinding) after comparison scribing, a mode in which machining quality is difficult to guarantee and consistency is poor. Using a robot as the executing body of the manufacturing equipment, integrated with intelligent sensors, to replace the human hand or a numerical-control machine tool for small-allowance milling, grinding, hole making, drilling and riveting of large complex parts has therefore become one of the leading research directions in the field of intelligent manufacturing. Robot machining can effectively improve the machining efficiency and quality of large-size small-curvature thin-wall parts, and can effectively improve the machining consistency of small- and medium-size thin-wall parts.
For a robot to trim the aircraft skin along a specified path, the path must be generated from the machining allowance of the part. The most common approach at present generates the machining path from the design model, similar to the generation of numerical-control machining trajectories. For aircraft skins, however, the actual machining allowance does not exactly match the design model: because manufacturing and assembly errors accumulate, the allowance must be determined from the actual assembly conditions for repair, so it is difficult to estimate the allowance and generate the machining trajectory from the design model alone.
A digital measurement method combining two-dimensional and three-dimensional vision can acquire, in place, the three-dimensional information of the scribing on the part to be machined. It can obtain the actual boundary of the blank to be machined, from which the machining allowance relative to the actual assembly area is computed and a machining path is generated; compared with traditional manual grinding, it better guarantees machining precision, efficiency and consistency.
Disclosure of Invention
In view of the above drawbacks of the prior art, it would be advantageous to provide a scribing feature extraction method for small-curvature thin-wall parts that combines two-dimensional and three-dimensional vision.
The general design concept of the invention is as follows: two-dimensional vision is used to extract the pixel information of the scribing feature; a binocular or multi-view three-dimensional reconstruction method establishes, through three-dimensional vision, the matching relation of that pixel information, realizing three-dimensional measurement of the scribing feature on the part to be measured; and the measurement data of a plurality of poses are spliced and fused based on the marking points arranged on the part to be measured, yielding the complete scribing feature of the part to be measured.
In order to achieve the above purpose, the invention provides a scribing feature extraction method combining two-dimensional vision and three-dimensional vision for a small-curvature thin-wall part, which is characterized by comprising the following steps:
S1: calibrating a multi-camera measurement system consisting of at least two CCD/CMOS cameras;
S2: arranging marking points around the part to be measured;
S3: performing image acquisition on the scribed part to be measured with the multi-camera measurement system calibrated in step S1;
S4: extracting the center line sub-pixel coordinates of the scribing features from the plurality of images acquired in a single acquisition in step S3, using a two-step gray-level gravity center method;
S5: establishing an epipolar constraint relation from the relative spatial pose relation between the CCD/CMOS cameras obtained by the calibration of the multi-camera measurement system in step S1, and matching feature points of the scribing features in the plurality of images extracted in step S4 through epipolar lines;
S6: establishing the correspondence of the scribing features across the plurality of images extracted in step S4 through the matched feature points obtained in step S5;
S7: performing three-dimensional reconstruction on the scribing features of the plurality of images whose correspondence was established in step S6, to obtain three-dimensional point cloud data of the scribing features;
S8: measuring the scribing features of the part to be measured multiple times by changing the spatial pose of the multi-camera measurement system calibrated in step S1, and extracting, through steps S4 to S7, the three-dimensional point cloud data of the scribing features measured at each single pose from the acquired data;
S9: performing data fusion on the three-dimensional point cloud data of the scribing features of the part to be measured obtained in step S8 through the marking points arranged around the part to be measured in step S2, so as to obtain the complete scribing features of the part to be measured; that is, multi-pose measurement data fusion based on the marking points arranged around the part to be measured yields the complete scribing features.
Further, in step S1, "calibrating a multi-camera measurement system consisting of at least two CCD/CMOS cameras" is specifically: calibrating the internal parameters of each CCD/CMOS camera in the multi-camera measurement system through a planar target in space, and calibrating the spatial pose relation of each CCD/CMOS camera in the measurement system.
Still further, in step S3, "performing image acquisition on the scribed part to be measured with the calibrated multi-camera measurement system" is specifically: the relative spatial pose relation of each CCD/CMOS camera in the calibrated multi-camera measurement system is kept unchanged, and image acquisition is performed on the part to be measured from different viewing angles simultaneously.
Still further, in step S4, "extracting the center line sub-pixel coordinates of the scribing features from the plurality of images acquired in a single acquisition by the multi-camera measurement system in step S3, using a two-step gray-level gravity center method" is specifically: for an image I(x, y) among the plurality of images, first, the coarse-extraction center line coordinates C_x(x, y_j) or C_y(x_i, y) are computed along the x-axis or the y-axis of the image I(x, y):

$$C_x(x, y_j) = \frac{\sum_i x_i\, I(x_i, y_j)}{\sum_i I(x_i, y_j)} \qquad\text{or}\qquad C_y(x_i, y) = \frac{\sum_j y_j\, I(x_i, y_j)}{\sum_j I(x_i, y_j)}$$

Then, based on the coarse-extraction center line coordinates C_x(x, y_j) or C_y(x_i, y), the coordinates C(x′, y′) of each point on the fine-extraction center line are computed as the gray-level centroid of the S points sampled along the local normal of the coarse center line:

$$C(x', y') = \left( \frac{\sum_{s=1}^{S} x_s\, I(x_s, y_s)}{\sum_{s=1}^{S} I(x_s, y_s)},\ \frac{\sum_{s=1}^{S} y_s\, I(x_s, y_s)}{\sum_{s=1}^{S} I(x_s, y_s)} \right)$$

where I(x_i, y_j) denotes the pixel value at pixel coordinates (x_i, y_j), and S is the number of points taken along the normal direction at (x_i, y_j).
Furthermore, in step S5, "establishing an epipolar constraint relation from the relative spatial pose relation between the CCD/CMOS cameras calibrated in step S1, and matching feature points of the scribing features in the plurality of images extracted in step S4 through epipolar lines" is specifically: for a center line coordinate C_A(x′, y′) in image A among the plurality of images, an epipolar line l_A is established from C_A(x′, y′) and the epipole e_A; according to the epipolar geometry principle, the corresponding epipolar line l_B is established in image B of the plurality of images through the epipole e_B, thereby determining the center line coordinate C_B(x′, y′) in image B corresponding to C_A(x′, y′) in image A.
Still further, in step S7, "performing three-dimensional reconstruction on the scribing features of the plurality of images whose correspondence was established in step S6, to obtain three-dimensional point cloud data of the scribing features" is specifically: according to the feature point matching relation of the scribing features in the plurality of images,

$$C_B^{\mathsf{T}}\, F\, C_A = 0$$

the three-dimensional coordinates C(x, y, z) of the feature points of the scribing features are solved, where T denotes the matrix transpose and F denotes the fundamental matrix, i.e. the transformation from plane A to plane B.
Still further, in step S8, "measuring the scribing features of the part to be measured multiple times by changing the spatial pose of the multi-camera measurement system calibrated in step S1, and extracting from the collected data, through steps S4 to S7, the three-dimensional point cloud data of the scribing features measured at each single pose" is specifically: according to the size of the part to be measured, the calibrated multi-camera measurement system measures at a plurality of spatial poses; the spatial poses at measurement are planned, based on the view-frustum principle, according to the depth of field, field of view, etc. of the multi-camera measurement system, so as to obtain complete measurement data of the part to be measured.
Further, in step S9, "performing data fusion on the three-dimensional point cloud data of the scribing features of the part to be measured obtained in step S8 through the marking points arranged around the part to be measured in step S2" is specifically: at least three marking points are randomly arranged on the part to be measured, and it is ensured that at least three marking points are common to the earlier and later measurements (i.e. the preceding and following measurements); the three-dimensional coordinates of the marking points are obtained by measurement, and a rotation-translation transformation matrix T is calculated based on the ICP algorithm, realizing fusion of the multiple measurement data.
Compared with the prior art, the invention has the following advantages: it overcomes the difficulty of extracting scribing features from the dense reconstruction produced by traditional three-dimensional measurement methods, provides an effective digital measurement means for aircraft skin repair, and effectively solves the technical problem that robotic trimming of aircraft skin is difficult to process according to a design model.
These and other aspects of the invention will be apparent from and elucidated with reference to the embodiments described hereinafter.
Drawings
The construction and further objects and advantages of the present invention will be better understood from the following description taken in conjunction with the accompanying drawings, wherein like reference numerals identify like elements:
FIG. 1 schematically illustrates the part boundary of an aircraft skin, a manual score line, and the machining allowance therebetween;
FIG. 2 is a flow chart of a scribe feature extraction method for a small curvature thin-walled part with a combination of two-dimensional and three-dimensional vision in accordance with one embodiment of the present invention;
FIG. 3 schematically illustrates scribe features in a two-dimensional image;
FIG. 4 is a schematic diagram of the coarse extraction and fine extraction of the two-step gray-level gravity center method;
FIG. 5 is a schematic illustration of the epipolar constraint between the two camera spaces.
Detailed Description
Specific embodiments of the present invention will be described below with reference to the accompanying drawings.
In general, in the invention, the scribing characteristics of the part to be detected are collected through a CCD/CMOS camera in a multi-camera measurement system, the scribing characteristics in the image are extracted through a two-step gray level gravity center method, the scribing characteristics in the extracted image are matched based on epipolar geometry, and three-dimensional reconstruction is carried out on the scribing characteristics in the image by adopting a stereoscopic vision principle. In addition, the thin-walled part with small curvature in the present embodiment is the aircraft skin 100 shown in fig. 1.
As shown in fig. 2, and with reference to figs. 3, 4 and 5, the scribing feature extraction method combining two-dimensional and three-dimensional vision for a small-curvature thin-wall part according to an embodiment of the present invention includes steps S1 to S9. FIG. 3 is a schematic view of the scribing features of the part under test in a two-dimensional image; with reference to FIG. 1, numeral 101 still identifies the manual score line of the aircraft skin 100, numeral 103 still identifies the part boundary of the aircraft skin 100, and numeral 105 still identifies the machining allowance between the manual score line 101 and the part boundary 103. In fig. 4, numeral 1 identifies the coarse extraction and numeral 3 identifies the fine extraction. In fig. 5, C and D represent two scribing features on the part under test, i.e. the aircraft skin 100; C_A and C_B correspond to the images of scribing feature C in camera A and camera B respectively, D_A and D_B correspond to the images of scribing feature D in camera A and camera B respectively, and l_A and l_B represent the corresponding epipolar lines.
S1: calibrating a multi-camera measurement system consisting of at least two CCD/CMOS cameras;
specifically, in this embodiment, the internal parameters of each CCD/CMOS camera in the multi-camera measurement system are calibrated by the planar targets in the space, and the spatial pose relationship of each CCD/CMOS camera is calibrated at the same time.
S2: randomly arranging marking points around the part to be tested;
specifically, in this embodiment, marking points are randomly arranged on the surface of the part to be measured, for splicing the multi-space pose measurement data.
S3: image acquisition is performed on the scribed part to be measured (i.e. on the scribing features of the part to be measured) by the multi-camera measurement system calibrated in step S1;
specifically, in this embodiment, the relative spatial pose relationship of each CCD/CMOS camera in the calibrated multi-camera measurement system is kept unchanged, and image acquisition is performed on the part to be measured at different viewing angles at the same time.
S4: extracting the center line sub-pixel coordinates of the scribing features from the plurality of images acquired in a single acquisition in step S3, using a two-step gray-level gravity center method;
specifically, in the present embodiment, for an image among a plurality of imagesI(x,y) First, for the image edgexShafts oryShaft calculation coarse extraction center line coordinatesC x (x,y j ) Or (b)C y (x i ,y),The calculation formula is as follows:
or alternatively
Then, based on the rough extraction of the center line coordinatesC x (x,y j ) Or (b)C y (x i ,y) Calculating coordinates of each point on the fine extraction center lineC(x ,y ) The calculation formula is as follows
Wherein, the liquid crystal display device comprises a liquid crystal display device,I(x i ,y j ) Representing pixel coordinates as%x i ,y j ) S is the pixel value of the pixel point @ sx i ,y j ) Normal fetch point count of (c).
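A minimal NumPy sketch of this two-step extraction is given below, assuming the scribing feature runs roughly along the y-axis so that the coarse centroid C_x(x, y_j) is taken per image row; the normal-sampling half-width and the function names are illustrative assumptions, not the patented routine itself.

```python
# Sketch of the two-step gray-level gravity center extraction of step S4.
import numpy as np

def coarse_centerline(img):
    """Coarse step: gray-level centroid of each row y_j along the x-axis."""
    h, w = img.shape
    xs = np.arange(w, dtype=np.float64)
    centers = []
    for j in range(h):
        row = img[j].astype(np.float64)
        s = row.sum()
        if s > 0:
            centers.append((float(xs @ row / s), float(j)))  # C_x(x, y_j)
    return np.asarray(centers)

def fine_centerline(img, coarse, half_width=5):
    """Fine step: centroid over S = 2*half_width + 1 samples taken along the
    local normal of the coarse center line."""
    refined = []
    for k in range(1, len(coarse) - 1):
        tangent = coarse[k + 1] - coarse[k - 1]
        normal = np.array([-tangent[1], tangent[0]])
        normal /= np.linalg.norm(normal) + 1e-12
        pts = np.array([coarse[k] + s * normal
                        for s in range(-half_width, half_width + 1)])
        inside = (pts[:, 0] >= 0) & (pts[:, 0] < img.shape[1]) \
               & (pts[:, 1] >= 0) & (pts[:, 1] < img.shape[0])
        pts = pts[inside]
        if len(pts) == 0:
            continue
        vals = img[pts[:, 1].astype(int), pts[:, 0].astype(int)].astype(np.float64)
        if vals.sum() > 0:
            refined.append(vals @ pts / vals.sum())           # C(x', y')
    return np.asarray(refined)
```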
S5: establishing an epipolar constraint relation between the CCD/CMOS cameras according to the spatial pose relation calibrated for the CCD/CMOS cameras of the multi-camera measurement system in step S1, and matching feature points of the scribing features in the plurality of images extracted in step S4 through epipolar lines;
Specifically, in the present embodiment, for a center line coordinate C_A(x′, y′) in image A among the plurality of images, an epipolar line l_A is established from C_A(x′, y′) and the epipole e_A; according to the epipolar geometry principle, the corresponding epipolar line l_B is established in image B of the plurality of images through the epipole e_B, thereby determining the center line coordinate C_B(x′, y′) in image B corresponding to C_A(x′, y′) in image A.
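This matching can be sketched as follows, assuming the fundamental matrix F from the calibration of step S1; selecting the centerline point of image B closest to the epipolar line F·x_A (and the tolerance `tol`) is an illustrative rule, since the embodiment only states that correspondence is established through epipolar lines.

```python
# Sketch of the epipolar matching of step S5.
import numpy as np

def match_by_epipolar(pts_a, pts_b, F, tol=1.0):
    """pts_a (N,2), pts_b (M,2): sub-pixel centerline points of images A and B;
    F: 3x3 fundamental matrix mapping image-A points to epipolar lines in B."""
    hom_b = np.hstack([pts_b, np.ones((len(pts_b), 1))])
    matches = []
    for pa in pts_a:
        l_b = F @ np.array([pa[0], pa[1], 1.0])    # epipolar line l_B in image B
        dist = np.abs(hom_b @ l_b) / np.hypot(l_b[0], l_b[1])
        j = int(np.argmin(dist))
        if dist[j] < tol:                          # accept the nearest point on l_B
            matches.append((pa, pts_b[j]))
    return matches
```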
S6: establishing a corresponding relation of scribing characteristics of the plurality of images extracted in the step S4 through the matched characteristic points obtained in the step S5;
S7: performing three-dimensional reconstruction on the scribing features of the plurality of images whose correspondence was established in step S6, to obtain three-dimensional point cloud data of the scribing features of the part to be measured;
Specifically, in the present embodiment, the three-dimensional coordinates C(x, y, z) of the feature points of the scribing features are solved according to the feature point matching relation of the scribing features in the plurality of images:

$$C_B^{\mathsf{T}}\, F\, C_A = 0$$
S8: the scribing features of the part to be measured are measured multiple times by changing the spatial pose of the multi-camera measurement system calibrated in step S1, and the three-dimensional point cloud data of the scribing features measured at each single pose are extracted from the collected data through steps S4 to S7;
Specifically, in this embodiment, according to the size of the part to be measured, the calibrated multi-camera measurement system measures at a plurality of spatial poses; the spatial poses at measurement are planned, based on the view-frustum principle, according to the depth of field, field of view, etc. of the multi-camera measurement system, so as to obtain complete measurement data of the part to be measured.
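As an illustration of the view-frustum criterion used in this pose planning, the sketch below scores a candidate pose by the fraction of sampled part points that project inside the image and lie within the depth-of-field band; the pose sampling strategy and all thresholds are assumptions made for illustration.

```python
# Illustrative view-frustum coverage score for the pose planning of step S8.
import numpy as np

def frustum_coverage(points_cam, K, img_size, z_near, z_far):
    """points_cam: (N,3) part points expressed in the candidate camera frame;
    returns the visible fraction (points behind the camera fail the z test)."""
    z = points_cam[:, 2]
    proj = (K @ points_cam.T).T
    uv = proj[:, :2] / np.where(np.abs(proj[:, 2:3]) > 1e-9, proj[:, 2:3], 1e-9)
    ok = (z > z_near) & (z < z_far) \
       & (uv[:, 0] >= 0) & (uv[:, 0] < img_size[0]) \
       & (uv[:, 1] >= 0) & (uv[:, 1] < img_size[1])
    return float(ok.mean())
```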
S9: data fusion is performed on the three-dimensional point cloud data of the scribing features of the part to be measured, measured multiple times and obtained in step S8, through the marking points arranged around the part to be measured in step S2, so as to obtain the complete scribing features of the part to be measured.
Specifically, in the present embodiment, at least three marking points are randomly arranged on the part to be measured, and it is ensured that at least three marking points are common to the earlier and later measurements; the three-dimensional coordinates of the marking points are obtained by measurement, and a rotation-translation transformation matrix T is calculated based on the ICP algorithm, realizing fusion of the multiple measurement data.
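A minimal sketch of this marker-based fusion: with at least three marker points common to two measurements, the rigid transform minimizing the least-squares alignment error has the closed-form SVD (Kabsch) solution, which is also the core step of each ICP iteration; the correspondences between common markers are assumed known here.

```python
# Sketch of the marker-based fusion of step S9 (closed-form rigid alignment).
import numpy as np

def rigid_transform(src, dst):
    """R, t minimizing ||R @ src_i + t - dst_i|| over >= 3 matched markers,
    each array of shape (N, 3)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

# Usage: fuse a new scan into the reference frame via the common markers.
# R, t = rigid_transform(markers_new, markers_ref)
# cloud_fused = (R @ cloud_new.T).T + t
```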
The method is suitable for extracting the scribing features of small-curvature thin-wall parts; that is, the combined two-dimensional and three-dimensional vision approach is applicable to scribing feature extraction for a variety of small-curvature thin-wall parts.
While the foregoing is directed to the embodiments of the present invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.

Claims (8)

1. A scribing feature extraction method combining two-dimensional vision and three-dimensional vision for a small-curvature thin-wall part comprises the following steps:
s1: calibrating a multi-camera measurement system consisting of at least two CCD/CMOS cameras;
s2: arranging marking points around the part to be tested;
s3: the scribed part to be measured is subjected to image acquisition through the multi-camera measurement system calibrated in step S1;
s4: extracting the center line coordinates of the scribing features from the plurality of images acquired in a single acquisition in step S3, by adopting a two-step gray-level gravity center method;
s5: establishing an epipolar constraint relation from the relative spatial pose relation between the CCD/CMOS cameras obtained by the calibration of the multi-camera measurement system in step S1, and performing feature point matching on the scribing features in the plurality of images extracted in step S4 through epipolar lines;
s6: establishing a corresponding relation of scribing features in the plurality of images extracted in the step S4 through the matched feature points obtained in the step S5;
s7: performing three-dimensional reconstruction on the scribing characteristics of the plurality of images with the corresponding relation established in the step S6 to obtain three-dimensional point cloud data of the scribing characteristics of the part to be detected;
s8: performing multiple measurements of the scribing features of the part to be measured by changing the spatial pose of the multi-camera measurement system calibrated in step S1, and extracting, through steps S4 to S7, the three-dimensional point cloud data of the scribing features measured at each single pose from the collected data;
s9: performing data fusion on the three-dimensional point cloud data of the scribing features of the part to be measured, measured multiple times and obtained in step S8, through the marking points arranged around the part to be measured in step S2, so as to obtain the complete scribing features of the part to be measured.
2. The method for extracting scribing features for a small-curvature thin-wall part by combining two-dimensional and three-dimensional vision as claimed in claim 1, wherein in the step S1, the calibration comprises: calibrating the internal parameters of each CCD/CMOS camera in the multi-camera measurement system through a planar target in space, and calibrating the spatial pose relation of each CCD/CMOS camera.
3. The method for extracting scribing features combining two-dimensional and three-dimensional vision for small-curvature thin-walled parts according to claim 2, wherein in the step S3, the image acquisition is performed on the part to be measured at different viewing angles at the same time under the condition that the relative spatial pose relationship of each CCD/CMOS camera in the calibrated multi-camera measurement system remains unchanged.
4. The method for extracting scribing features combining two-dimensional and three-dimensional vision for thin-wall parts with small curvature as claimed in claim 3, wherein in said step S4, for an image I(x, y) of said plurality of images acquired at one time, first, the coarse-extraction center line coordinates C_x(x, y_j) or C_y(x_i, y) are computed along the x-axis or the y-axis of the image I(x, y):

$$C_x(x, y_j) = \frac{\sum_i x_i\, I(x_i, y_j)}{\sum_i I(x_i, y_j)} \qquad\text{or}\qquad C_y(x_i, y) = \frac{\sum_j y_j\, I(x_i, y_j)}{\sum_j I(x_i, y_j)}$$

Next, based on the coarse-extraction center line coordinates C_x(x, y_j) or C_y(x_i, y), the coordinates C(x′, y′) of each point on the fine-extraction center line are computed:

$$C(x', y') = \left( \frac{\sum_{s=1}^{S} x_s\, I(x_s, y_s)}{\sum_{s=1}^{S} I(x_s, y_s)},\ \frac{\sum_{s=1}^{S} y_s\, I(x_s, y_s)}{\sum_{s=1}^{S} I(x_s, y_s)} \right)$$

wherein I(x_i, y_j) denotes the pixel value at pixel coordinates (x_i, y_j), and S is the number of points taken along the normal direction at (x_i, y_j).
5. The method for extracting scribing features combining two-dimensional and three-dimensional vision for thin-wall parts with small curvature as claimed in claim 4, wherein in said step S5, for a center line coordinate C_A(x′, y′) of image A among said plurality of images, an epipolar line l_A is established from C_A(x′, y′) and the epipole e_A; according to the epipolar geometry principle, the corresponding epipolar line l_B is established in image B of the plurality of images through the epipole e_B, thereby determining the center line coordinate C_B(x′, y′) of image B corresponding to C_A(x′, y′) of image A.
6. The method for extracting scribing features combining two-dimensional and three-dimensional vision for a small-curvature thin-wall part as claimed in claim 5, wherein in the step S7, the three-dimensional coordinates C(x, y, z) of the feature points of the scribing features are solved according to the feature point matching relation of the scribing features in the plurality of images:

$$C_B^{\mathsf{T}}\, F\, C_A = 0$$

where T denotes the matrix transpose and F denotes the fundamental matrix, i.e. the transformation relation from plane A to plane B.
7. The method for extracting scribing features combining two-dimensional and three-dimensional vision for a small-curvature thin-wall part as claimed in claim 6, wherein in the step S8, measurement is performed at a plurality of spatial poses by the calibrated multi-camera measurement system according to the size of the part to be measured, and the spatial poses are planned, based on the view-frustum principle, according to the depth of field and field of view of the multi-camera measurement system, so as to obtain complete measurement data of the part to be measured.
8. The method for extracting scribing features combining two-dimensional and three-dimensional vision for a small-curvature thin-wall part as claimed in claim 7, wherein in the step S9, at least three marking points are randomly arranged on the part to be measured, it is ensured that at least three marking points are common to the earlier and later measurements, the three-dimensional coordinates of the marking points are obtained by measurement, a rotation-translation transformation matrix T is calculated based on the ICP algorithm, and fusion of the multiple measurement data is realized.
CN201911244323.4A 2019-12-06 2019-12-06 Scribing feature extraction method combining two-dimensional vision and three-dimensional vision for small-curvature thin-wall part Active CN111008602B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911244323.4A CN111008602B (en) 2019-12-06 2019-12-06 Scribing feature extraction method combining two-dimensional vision and three-dimensional vision for small-curvature thin-wall part

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911244323.4A CN111008602B (en) 2019-12-06 2019-12-06 Scribing feature extraction method combining two-dimensional vision and three-dimensional vision for small-curvature thin-wall part

Publications (2)

Publication Number Publication Date
CN111008602A CN111008602A (en) 2020-04-14
CN111008602B (en) 2023-07-25

Family

ID=70113899

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911244323.4A Active CN111008602B (en) 2019-12-06 2019-12-06 Scribing feature extraction method combining two-dimensional vision and three-dimensional vision for small-curvature thin-wall part

Country Status (1)

Country Link
CN (1) CN111008602B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113537237B (en) * 2021-06-25 2024-01-16 西安交通大学 Multi-feature part quality information intelligent sensing method, system and device
CN113427488A (en) * 2021-07-13 2021-09-24 西安交通大学 Digital marking method, system and device based on geometric feature recognition
CN114115123B (en) * 2021-11-16 2024-04-09 上海交通大学 Parameterized numerical control machining method and system for aviation large thin-wall non-rigid part

Citations (4)

Publication number Priority date Publication date Assignee Title
CN106841206A (en) * 2016-12-19 2017-06-13 大连理工大学 Untouched online inspection method is cut in heavy parts chemical milling
CN107977997A (en) * 2017-11-29 2018-05-01 北京航空航天大学 A kind of Camera Self-Calibration method of combination laser radar three dimensional point cloud
WO2018103152A1 (en) * 2016-12-05 2018-06-14 杭州先临三维科技股份有限公司 Three-dimensional digital imaging sensor, and three-dimensional scanning system and scanning method thereof
WO2018152929A1 (en) * 2017-02-24 2018-08-30 先临三维科技股份有限公司 Three-dimensional scanning system and scanning method thereof

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
CN101276415A (en) * 2008-03-03 2008-10-01 北京航空航天大学 Apparatus and method for realizing multi-resolutions image acquisition with multi-focusing video camera
CN103913131B (en) * 2014-04-14 2017-04-12 大连理工大学 Free curve method vector measurement method based on binocular vision
CN104657587B (en) * 2015-01-08 2017-07-18 华中科技大学 A kind of center line extraction method of laser stripe
CN105894574B (en) * 2016-03-30 2018-09-25 清华大学深圳研究生院 A kind of binocular three-dimensional reconstruction method
CN106767527B (en) * 2016-12-07 2019-06-04 西安知象光电科技有限公司 A kind of optics mixing detection method of three-D profile
CN107767442B (en) * 2017-10-16 2020-12-25 浙江工业大学 Foot type three-dimensional reconstruction and measurement method based on Kinect and binocular vision
EP3503030A1 (en) * 2017-12-22 2019-06-26 The Provost, Fellows, Foundation Scholars, & the other members of Board, of the College of the Holy & Undiv. Trinity of Queen Elizabeth, Method and apparatus for generating a three-dimensional model

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
WO2018103152A1 (en) * 2016-12-05 2018-06-14 杭州先临三维科技股份有限公司 Three-dimensional digital imaging sensor, and three-dimensional scanning system and scanning method thereof
CN106841206A (en) * 2016-12-19 2017-06-13 大连理工大学 Untouched online inspection method is cut in heavy parts chemical milling
WO2018152929A1 (en) * 2017-02-24 2018-08-30 先临三维科技股份有限公司 Three-dimensional scanning system and scanning method thereof
CN107977997A (en) * 2017-11-29 2018-05-01 北京航空航天大学 A kind of Camera Self-Calibration method of combination laser radar three dimensional point cloud

Non-Patent Citations (1)

Title
Three-dimensional reconstruction of the morphology of parts to be machined by a laser remanufacturing robot; 张海明 (Zhang Haiming); 杨洗陈 (Yang Xichen); 高贵 (Gao Gui); 中国激光 (Chinese Journal of Lasers) (11); full text *

Also Published As

Publication number Publication date
CN111008602A (en) 2020-04-14

Similar Documents

Publication Publication Date Title
CN110370286B (en) Method for identifying rigid body space position of dead axle motion based on industrial robot and monocular camera
CN111008602B (en) Scribing feature extraction method combining two-dimensional vision and three-dimensional vision for small-curvature thin-wall part
CN107214703B (en) Robot self-calibration method based on vision-assisted positioning
CN103913131B (en) Free curve method vector measurement method based on binocular vision
CN103471531B (en) The online non-contact measurement method of axial workpiece linearity
CN102135417B (en) Full-automatic three-dimension characteristic extracting method
CN109000557B (en) A kind of nuclear fuel rod pose automatic identifying method
CN107133983B (en) Bundled round steel end face binocular vision system and space orientation and method of counting
CN109579695B (en) Part measuring method based on heterogeneous stereoscopic vision
CN112614098B (en) Blank positioning and machining allowance analysis method based on augmented reality
CN103175485A (en) Method for visually calibrating aircraft turbine engine blade repair robot
CN111531407B (en) Workpiece attitude rapid measurement method based on image processing
CN108472706B (en) Deformation processing support system and deformation processing support method
CN107121967A (en) A kind of laser is in machine centering and inter process measurement apparatus
CN110044374A (en) A kind of method and odometer of the monocular vision measurement mileage based on characteristics of image
CN114170284B (en) Multi-view point cloud registration method based on active landmark point projection assistance
CN104614372B (en) Detection method of solar silicon wafer
CN106447729A (en) 2 dimensional digital image related compensation method based on transformation of coordinates and 2 dimensional optical digital image related extensometer
Deng et al. 3D reconstruction of rotating objects based on line structured-light scanning
CN115326835B (en) Cylinder inner surface detection method, visualization method and detection system
CN109373901B (en) Method for calculating center position of hole on plane
CN112734842B (en) Auxiliary positioning method and system for centering installation of large ship equipment
CN115641326A (en) Sub-pixel size detection method and system for ceramic antenna PIN needle image
CN111325802B (en) Circular mark point identification and matching method in helicopter wind tunnel test
CN106123808A (en) A kind of method measured for the deflection of automobile rearview mirror specular angle degree

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A Method for Extracting Lined Features of Small Curvature Thin-walled Parts Using a Combination of 2D and 3D Vision

Effective date of registration: 20231011

Granted publication date: 20230725

Pledgee: Weihai commercial bank Limited by Share Ltd. Qingdao branch

Pledgor: QINGDAO HAIZHICHEN INDUSTRIAL EQUIPMENT Co.,Ltd.

Registration number: Y2023980060713

PE01 Entry into force of the registration of the contract for pledge of patent right