CN108596929A - Light-section data modeling and reconstruction method fusing plane-grid depth calculation - Google Patents
Light-section data modeling and reconstruction method fusing plane-grid depth calculation
- Publication number
- CN108596929A CN108596929A CN201810319307.6A CN201810319307A CN108596929A CN 108596929 A CN108596929 A CN 108596929A CN 201810319307 A CN201810319307 A CN 201810319307A CN 108596929 A CN108596929 A CN 108596929A
- Authority
- CN
- China
- Prior art keywords
- image
- point
- data
- space
- grid
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Image Processing (AREA)
Abstract
The present invention relates to a light-section data modeling and reconstruction method fusing plane-grid depth calculation. The method mainly comprises the following steps: acquisition of the modeling calibration-board image sequence; obtaining the image-space grid data; scaling correction of the image-space physical plane-grid data matrix; establishing the mapping relation between object-space grid data and image-space grid data; establishing the mesh subdivision rule; outputting the model parameters; and outputting the lateral and longitudinal depth coordinates. The invention combines monocular depth measurement with the light-section method and is mainly used to establish a depth-data measurement model in a light-section measuring environment for macroscopic objects, focusing on two-dimensional light-section measurement of macroscopic objects. It is advantageous in the simplicity of its application environment, measuring principle, hardware configuration, and cost, and it also avoids the difficulties of edge detection and matching in binocular stereo vision.
Description
Technical field
The present invention relates to a light-section data modeling and reconstruction method fusing plane-grid depth calculation. The method calibrates the camera and establishes a location model with a stepping translation device: a computer motion-control card drives the stepping translation stage to move at equal intervals while images of marker points at known object-space coordinates are synchronously acquired, and the changing marker-point positions are tracked, identified, and located. The sequence images are input to the computer, and the marker-point images are saved together with the corresponding object-space coordinates. The system mainly comprises a turntable, a stepping translation stage, and a CCD camera. A light-section basic-data-matrix modeling method realizes accurate measurement of the lateral and longitudinal depth information of feature points; the basic data matrix establishes the correspondence between object space and image space, and the world coordinates of the target object surface are output.
Background technology
In recent years, with the development of society and the continuous improvement of industrial production levels, the demand of the automation industry for measuring the three-dimensional information of objects has been increasing, and traditional measuring equipment can no longer meet the needs of production; three-dimensional measuring equipment has therefore developed rapidly.
Structured-light three-dimensional measurement projects light onto the object surface through a structured-light projection system. Structured-light triangulation is one of the most widely applied and relatively mature non-contact surface measuring techniques. Linear structured-light triangulation is also known as the light-section method: the structured light plane can be pictured vividly as a cutting tool projected onto the object surface, forming a laser stripe there. The stripe is deformed by the modulation of the object surface and thus reflects the irregularity of the surface, and the stripe image is then acquired by a camera. Especially for macroscopic objects with smooth surfaces, low contrast, or complex textures, projecting a line laser produces clearly contrasted feature points on the object surface, which facilitates later computer image processing to extract the feature points; the light-section triangulation model then analyzes the images to obtain the three-dimensional topography of the object surface.
Existing three-dimensional measurement models for spatial coordinate calculation by the light-section method are mainly of two kinds: optical-pattern-based and camera-based. Optical-pattern methods, such as the structured-light, Moiré-fringe, and phase-based projection measurement models, require grating projection, including accurate localization of the fringes; calibrating the many parameters is tedious, and grating-projection equipment is relatively expensive. Camera-based three-dimensional pinhole measurement models, such as monocular and binocular stereo vision with a projection model, first establish an idealized projection model and then consider the distortion of the optical system, correcting the positioning error of the ideal model by distortion analysis and relying on optimization methods to calibrate the parameters. Calibration of such methods is cumbersome, and the optimization process often suffers singularity problems, which easily make the reconstructed data unstable.
Aiming at the problems of existing light-section scanning systems, the present invention proposes a light-section data modeling and reconstruction method fusing plane-grid depth calculation. The system is improved so that the stepping translation stage, turntable, calibration model, line laser, and industrial CCD camera move in coordination under control to acquire sequence images; image analysis then establishes the correspondence between object-space two-dimensional data and image-space data and calculates the lateral and longitudinal depth information of the object surface. The modeling method establishes the mapping between the spatial coordinates of image space and the object-space coordinates from grid data, avoiding the data-optimization step through distortion correction used in traditional measurement: the grid data are fitted and compared to obtain a series of parameters, yielding a quantified model of the light-section system. Our research group previously filed a patent of invention, "Basic data matrix modeling method for SLM micro-operation high-accuracy positioning", which proposed data reconstruction for an optical stereo-microscope system using a basic data matrix, its main features being: (1) a microscope imaging model for the binocular stereo-vision system of an optical stereo microscope; (2) application to a high-precision three-dimensional micro-measurement environment; (3) image mapping relations in the microscopy environment established with a three-dimensional spatial grid. The present application differs substantially from "Basic data matrix modeling method for SLM micro-operation high-accuracy positioning", mainly in that: (1) the invention establishes a location model applied to the light-section environment, whereas the earlier patent is unrelated to light sectioning; (2) the invention applies to a macroscopic measurement environment, and the modeling conditions and parameters considered by the two differ clearly; (3) the invention builds its model with a monocular camera, whereas the stereo-microscope structure is a binocular stereo-vision environment; (4) a macroscopic single-camera vision system exhibits an obvious scaling problem as the object distance changes, which must be considered in the model, while no such problem exists in modeling within an optical stereo-microscope environment. The invention combines monocular depth measurement with the light-section method and is mainly used to establish a depth-data measurement model in a light-section measuring environment for macroscopic objects, focusing on two-dimensional light-section measurement of macroscopic objects. It is advantageous in application environment, measuring principle, hardware configuration, and system cost, and it also avoids the difficulties of edge detection and matching in binocular stereo vision.
Summary of the invention
The present invention calculates the lateral and longitudinal depth information of the object surface and performs three-dimensional data reconstruction through the correspondence between object-space plane-grid data and image-space plane-grid data: coordinate conversion is applied to the image coordinates of a feature point in the image-space plane grid to determine the object-space depth coordinate information. The invention mainly comprises six modules (Fig. 1): control module (M1), image acquisition module (M2), image processing module (M3), grid-matrix output module (M4), model encapsulation and depth-coordinate output module (M5), and light-section three-dimensional reconstruction module (M6). The implementation flow of the invention (Fig. 2) is as follows. The control module (S1) turns on the line-laser projection system so that the light-section plane is projected perpendicular to the calibration-board plane (Fig. 4), and controls the stepping translation stage to carry the calibration model along the light-section plane at equal intervals while the image acquisition module (S2) synchronously completes acquisition of the modeling calibration-board image sequence (S3). Image processing then extracts the pixel coordinates of the feature points on each image (S4). From the feature-point coordinates of the image sequence, the image-space physical plane-grid data matrix is established (S5). Because of the irregularity of the image-space physical plane-grid data matrix, scaling correction is applied to it to obtain the corrected image-space plane-grid data matrix (S6). Combining the equal-interval motion records of the control-module stage with the calibration-board specification data, the object-space plane-grid data matrix is established (S7). From the corrected image-space plane-grid data matrix (S6) and the object-space plane-grid data matrix (S7), the mapping relation between object-space and image-space data is established (S8). The quantitative relation between object-space data and image-space data is analyzed to establish the mesh subdivision rule (S9), which determines the model parameters (S10). The plane-grid lateral and longitudinal depth computation model is established from the model parameters (S11). A feature-point image coordinate is then input (S12), and the model outputs the lateral and longitudinal depth data of the corresponding surface feature point (S13). Finally, the control module drives the rotating motor to complete light-section three-dimensional reconstruction (S14).
The light-section data modeling and reconstruction method fusing plane-grid depth calculation according to the present invention acquires a calibration image sequence with the light-section scanning system, saves the corresponding object-space coordinates, and establishes the image-space plane-grid data model. Once the image of an object-space feature point is known, the lateral and longitudinal depth data of the corresponding spatial point can be calculated from the pixel coordinates of the image-space feature point. The plane-grid depth-calculation light-section data method comprises the following steps:
1. Acquisition of the modeling calibration-board image sequence
The main body of the light-section 3-D scanning vision system (Fig. 3) mainly comprises: an optical table 1, a stepping translation stage 2 fixed on the optical table, and a turntable 3 fixed on the stepping translation stage. The calibration model is fixed on the central axis of the rotating-motor stage 3. The stepping translation stage 2 is moved at equal intervals under the control module, while the line laser 5, fixed on the laser mount 6, projects a light-section plane 7 that is always perpendicular to the calibration-board plane and passes through the turntable's central axis. The industrial CCD camera 8 is mounted horizontally on the camera mount 9. The whole system is controlled and displayed by computer 10; camera 8 connects to computer 10 through a USB 2.0 interface, and the stepping translation stage 2 and stepping turntable 3 both communicate with computer 10 by serial communication. The calibration board 4 is first placed on the stepping turntable 3, and this position is defined as the first depth plane, i.e. the light-section plane corresponding to the initial rotation angle (0°). The control and display computer 10 then outputs pulse signals to the stepping translation stage 2 through the serial port, so that the translation stage 2 moves laterally by the fixed interval a while camera 8 synchronously acquires the calibration-board feature-point image sequence (Fig. 5-(a)) I1, I2, I3, I4 ... IN, which is saved in the control-system memory of computer 10.
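The move-and-grab loop described above (advance the stage by the fixed interval a, capture one frame per stop) can be sketched as a minimal Python simulation. The real stage is driven by serial pulse signals, so `StageStub`, `acquire_sequence`, and the `grab` callback below are illustrative stand-ins, not the patent's software:

```python
class StageStub:
    """Stand-in for the serial-driven stepping translation stage
    (hypothetical API; the real stage 2 receives pulse signals
    from computer 10 over a serial port)."""
    def __init__(self):
        self.pos = 0.0  # lateral stage position

    def move(self, dist):
        self.pos += dist

def acquire_sequence(stage, grab, a, n):
    """Acquire the calibration-board sequence I1..IN: grab one
    frame at each stop, then advance the stage by the fixed
    interval a, n stops in total."""
    frames = []
    for _ in range(n):
        frames.append(grab(stage.pos))  # camera capture stand-in
        stage.move(a)
    return frames
```

With a capture callback that records the stage position, four stops at interval 2.0 yield frames tagged 0.0, 2.0, 4.0, 6.0 and leave the stage at 8.0, matching the equal-interval protocol of the text.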
2. Obtaining the image-space grid data
The calibration-board sequence feature points acquired in the previous step are extracted by image processing: the marker-point image coordinates on each image are extracted to create the matrices (Fig. 5-(b)) Ic1, Ic2, Ic3, Ic4 ... IcN, which are merged into the same image coordinate system ZcYc to establish the image-space physical plane-grid data matrix Ic (Fig. 7). The data are analyzed by fitting the matched feature points of each corresponding column and comparing them to obtain the quantitative relations present in the data. In the object-space coordinate system ZwOYw, the object-space plane-grid data matrix Iw (Fig. 6) is obtained from the known calibration-model specifications and the set interval a.
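Extracting a marker-point image coordinate at this step amounts to finding the centroid of each marker blob. A minimal pure-Python sketch on an already-binarized image follows (flood fill over 4-connected pixels; a real system would first threshold the stripe image, and `mark_centroids` is an illustrative name, not the patent's routine):

```python
from collections import deque

def mark_centroids(binary):
    """Centroids (x, y) of connected white regions in a binary
    image given as a list of rows of 0/1, in scan order."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    out = []
    for r in range(h):
        for c in range(w):
            if binary[r][c] and not seen[r][c]:
                q, pix = deque([(r, c)]), []
                seen[r][c] = True
                while q:  # flood fill one marker blob
                    y, x = q.popleft()
                    pix.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                out.append((sum(p[1] for p in pix) / len(pix),
                            sum(p[0] for p in pix) / len(pix)))
    return out
```

Each blob's centroid becomes one entry of the coordinate matrices Ic1 ... IcN.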
3. Scaling correction of the image-space physical plane-grid data matrix
Because the correspondence between object space and image space must be established, and the irregular structure of the image-space physical plane-grid data matrix Ic (Fig. 7) makes this difficult, the quantitative relation between the columns of the image-space grid data is analyzed further. By the camera imaging principle, the closer the object is to the lens, the more its image is linearly magnified, so there must exist a scaling point S(Sx, Sy). The scaling intersection point (the scaling point) is obtained by fitting the groups of matched feature points, the quantified scaling relation is established, and scaling reduction with the first-column feature-point data as reference yields the corrected image-space plane-grid data matrix (Fig. 8).
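The scaling point S(Sx, Sy) just described is the least-squares intersection of the lines through matched feature points. Assuming each line is given by a matched point pair (the same feature in two columns of the image-space grid), a sketch of the fit:

```python
def fit_scaling_point(lines):
    """Least-squares intersection of 2D lines. Each line is a pair
    ((z1, y1), (z2, y2)) of matched feature points; solves
    sum_i (I - d_i d_i^T)(S - p_i) = 0 for S, where d_i is the
    unit direction of line i and p_i a point on it."""
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (z1, y1), (z2, y2) in lines:
        dz, dy = z2 - z1, y2 - y1
        norm = (dz * dz + dy * dy) ** 0.5
        dz, dy = dz / norm, dy / norm
        # projection matrix I - d d^T for this line
        m11, m12, m22 = 1 - dz * dz, -dz * dy, 1 - dy * dy
        a11 += m11; a12 += m12; a22 += m22
        b1 += m11 * z1 + m12 * y1
        b2 += m12 * z1 + m22 * y1
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det,
            (a11 * b2 - a12 * b1) / det)
```

For exactly intersecting lines this recovers the common point; with noisy matched points it returns the point minimizing the summed squared distances to all fitted lines, which is the role of S in Fig. 9.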
4. Establishing the mapping relation between object-space grid data and image-space grid data
The object-space grid data matrix is a standard rectangular structure in which adjacent feature points are also equally spaced. Owing to image distortion and other causes, the corrected image-space plane-grid data matrix is an irregular rectangular structure in which the distances between adjacent matched feature points are unequal. The data are therefore analyzed further to find the quantitative relations and thereby establish the mapping between object-space grid data and image-space grid data.
5. Mesh subdivision rule
The data inside the object-space plane grid and the corrected image-space plane grid are in one-to-one correspondence: when the image of a feature point is known, the image coordinates of the point can be determined from its image, and these in turn determine the coordinates of the corresponding object-space point. Analysis of the basic data shows that the distance between neighboring points varies nonlinearly, so the mesh is subdivided and the feature-point depth distances are segmented, which helps reduce measurement error.
6. Outputting the model parameters and coordinates
Steps 1, 2, and 3 yield the object-space grid data matrix (D1), the corrected image-space grid data matrix (D2), and the plane-grid data structure (D3); data processing determines the basic-data model parameters (D4). <GP> denotes locating a known point in the mesh-subdivision rule algorithm; <CTW> denotes calculating the corresponding object-space point coordinates from the coordinates of the known point in the corrected image-space grid; <GC> denotes the correction calculation applied to the known point's image coordinates. The image coordinates of a known point (D5) are input to the data-model operation, and the lateral and longitudinal depth distances in object space (D6) are output.
Description of the drawings
Fig. 1 shows the basic module composition of the light-section system of the present invention
Fig. 2 is the implementation flow chart of the present invention
Fig. 3 shows the light-section three-dimensional vision measurement system of the present invention
Fig. 4 shows the calibration model of the present invention
Fig. 5-(a) shows the feature-point image sequence on the marker-point model of the present invention
Fig. 5-(b) shows the feature-point coordinate matrices on the marker-point model of the present invention
Fig. 6 shows the object-space plane-grid data matrix of the present invention
Fig. 7 shows the image-space physical plane-grid data matrix of the present invention
Fig. 8 shows the corrected image-space plane-grid data matrix of the present invention
Fig. 9 shows the scaling center fitted from the image-space physical plane-grid data of the present invention
Fig. 10 is the flow chart of model establishment and depth-data output of the present invention
Fig. 11 is a schematic diagram of the positioning of a known point P in the different spaces of the present invention
Description of reference signs in the drawings
1, optical table
2, stepping translation stage
3, stepping turntable
4, calibration board
5, laser line generator
6, laser line generator fixed frame
7, optical section
8, industrial CCD camera
9, camera mount
10, control and display computer
M1, control module
M2, image capture module
M3, image processing module
M4, grid matrix output module
M5, model encapsulation and depth coordinate output module
M6, light cut three-dimensionalreconstruction module
S1, motion-control module
S2, acquisition module
S3, modeling scaling board image sequence acquisition
S4, extraction of feature-point coordinates on the image by image processing
S5, establishing the image-space physical plane-grid data matrix
S6, establishing the corrected image-space plane-grid data matrix
S7, establishing the object-space plane-grid data matrix
S8, establishing the mapping relation between object-space and image-space data
S9, establishing the mesh subdivision rule
S10, determination of the model parameters
S11, establishing the plane-grid lateral and longitudinal depth computation model
S12, input of feature-point image coordinates
S13, output of the lateral and longitudinal depth data of surface feature points
S14, completing light-section three-dimensional reconstruction with the rotating motor
a, equal-interval translation distance of the stepping translation motor
b, actual black-and-white width of the marker-point model
Point in row 1, column 1 of the object-space plane grid
Point in row M, column 1 of the object-space plane grid
A1, column 1 points of the object-space plane grid
A2, column 2 points of the object-space plane grid
A3, column 3 points of the object-space plane grid
AN, column N points of the object-space plane grid
I1, image corresponding to the column 1 points of the image-space plane grid
I2, image corresponding to the column 2 points of the image-space plane grid
I3, image corresponding to the column 3 points of the image-space plane grid
IN, image corresponding to the column N points of the image-space plane grid
Ic1, matrix corresponding to the column 1 points of the image-space plane grid
Ic2, matrix corresponding to the column 2 points of the image-space plane grid
Ic3, matrix corresponding to the column 3 points of the image-space plane grid
IcN, matrix corresponding to the column N points of the image-space plane grid
B1, column 1 points of the image-space physical plane-grid basic data
B2, column 2 points of the image-space physical plane-grid basic data
B3, column 3 points of the image-space physical plane-grid basic data
BN, column N points of the image-space physical plane-grid basic data
Point in row 1, column 1 of the image-space physical plane-grid basic data matrix
Point in row 1, column 2 of the image-space physical plane-grid basic data matrix
Point in row 1, column N of the image-space physical plane-grid basic data matrix
Point in row 2, column 1 of the image-space physical plane-grid basic data matrix
Point in row M, column 1 of the image-space physical plane-grid basic data matrix
C1, column 1 points of the corrected image-space plane-grid basic data matrix
C2, column 2 points of the corrected image-space plane-grid basic data matrix
C3, column 3 points of the corrected image-space plane-grid basic data matrix
CN, column N points of the corrected image-space plane-grid basic data matrix
Point in row 1, column 1 of the corrected image-space plane-grid basic data matrix
Point in row 1, column 2 of the corrected image-space plane-grid basic data matrix
Point in row 1, column N of the corrected image-space plane-grid basic data matrix
Point in row 2, column 1 of the corrected image-space plane-grid basic data matrix
Point in row M, column 1 of the corrected image-space plane-grid basic data matrix
S(Sx, Sy), scaling-center coordinates of the column data of the image-space physical plane-grid basic data matrix
P, object-space point
Pc, actual image-space point
Pcc, point in the corrected image space
D1, object-space grid data matrix
D2, corrected image-space grid data matrix
D3, plane-grid data structure
D4, basic-data model parameters
D5, input image coordinates of a known point
D6, object-space lateral and longitudinal depth data
Specific embodiments
The present invention is further elaborated with reference to the drawings. Figs. 1-11 show the flow of the light-section data reconstruction method based on plane-grid depth calculation of the present invention. The plane-grid basic-data-matrix modeling method comprises the following steps.
The following coordinate systems are established: the object-space plane-grid coordinate system ZwOYw (Fig. 6) is established on the drive assembly; the image coordinate system ZcYc is established on the calibration-board image (Fig. 5-(a)) in units of pixels. Because the camera coordinates of the marker points tracked, identified, and located on the calibration board shift within the same image coordinate system ZcYc as the board moves, the coordinate system of the synthesized image-space physical plane-grid basic data matrix is also ZcYc (Fig. 7); the matrix is composed of the marker-point coordinates on the calibration-board image sequence.
1. Acquisition of the modeling calibration-board image sequence
The computer control system drives the stepping translation stage to move at equal intervals, carrying the calibration model laterally along the object-space light-section plane; each time the stage moves by one interval, one calibration-board image is acquired in the projection field, yielding the image sequence (Fig. 5-(a)) I1, I2, I3, I4 ... IN. Each calibration board shows one column of marker points, N columns in total, defined as A1, A2, A3 ... AN, each containing M points: along the Zw axis the points form N columns, and along the Yw axis they form M rows. For each column of points the corresponding object-space image is saved, giving the image sequence (Fig. 5-(a)); each column of points in the field of view is imaged on the image plane, every column of object-space points forms an image, and the image set of the calibration-board column feature points is denoted Ic. The acquisition of the modeling calibration-board image sequence is thus completed.
2. Obtaining the image-space grid data
The marker points in the calibration-board sequence images I1, I2, I3, I4 ... IN acquired in the previous step are extracted by image processing; after the center coordinates are extracted, the marker-point coordinate data matrices (Fig. 5-(b)) Ic1, Ic2, Ic3, Ic4 ... IcN are obtained and merged into the same image-space coordinate system ZcYc. The columns are defined as B1, B2, B3 ... BN, each containing M points: along the Zc axis the points form N columns, and along the Yc axis they form M rows. The image-space physical plane-grid basic data matrix Ic (Fig. 7) is thus established. Analysis of the data shows that the offsets of the matched feature points of corresponding columns obey a linear quantitative relation. In the object-space coordinate system ZwOYw, the object-space plane-grid data matrix Iw (Fig. 6) is obtained from the known calibration model and the interval a.
3. Scaling correction of the image-space physical plane-grid data matrix
In order to establish the correspondence between object space and image space, the experimental data are examined: the image-space physical plane-grid data matrix Ic (Fig. 7) is found to be an irregular data matrix, so before the correspondence between object-space and image-space grid data can be established, the offsets of the matched feature points of each column are analyzed further. Image-space real data show that these offsets obey a quantitative relation, consistent with the scaling phenomenon between images in perspective geometry: the closer the object is to the lens, the larger its image, and there exists a scaling center point S(Sx, Sy) that the data infinitely approach (Fig. 9). Linear fitting of the groups of matched feature points gives the intersection of the fitted lines (the scaling point). From the total travel of the equal-interval moving platform within the camera's field of view, the total pixel counts of the column-B1 feature points in the first image and the column-BN feature points in the N-th image of the acquired sequence determine the scaling factor of each column of feature points. The actual scaling relation is thus built, and scaling reduction with the first-column data as reference yields the corrected image-space plane-grid data matrix Icc (Fig. 8), whose columns are defined as C1, C2, C3 ... CN, each containing M points: along the Zc axis the points form N columns, and along the Yc axis they form M rows.
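The scaling reduction to the first-column reference can be sketched as scaling each column's points about the fitted center S by the inverse of its relative magnification (an assumed convention consistent with the text; `rescale_column` is illustrative):

```python
def rescale_column(points, scale, S):
    """Scale one column of grid points about the fitted scaling
    point S so its magnification matches the reference first
    column. scale is the column's magnification relative to
    column 1, e.g. the ratio of its feature-point pixel count
    to that of column B1. Points are (Zc, Yc) tuples."""
    sz, sy = S
    return [(sz + (z - sz) / scale, sy + (y - sy) / scale)
            for z, y in points]
```

Applying this with the per-column factors derived from the B1 and BN pixel counts produces the corrected columns C1 ... CN of Icc.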
4. Establishing the mapping relation between object-space grid data and image-space grid data
Every point of the object-space grid data, the image-space physical plane-grid data, and the corrected image-space grid data is in one-to-one correspondence: the coordinate axes Zw and Yw correspond to Zc and Yc respectively; each grid data point in the object-space coordinate system corresponds to the actual grid data point in the image-space coordinate system and to the corrected grid data point in the image-space coordinate system; and the grid column data A1, A2, A3 ... AN in the object-space coordinate system correspond respectively to the actual image-space column data B1, B2, B3 ... BN and to the corrected image-space column data C1, C2, C3 ... CN.
5. Mesh subdivision rule
With the first column of object-space data as reference and the object-space columns known to be equally spaced (N columns in total), the object-space and image-space grids are each evenly divided into (N-1) parts with every column of data as a subdivision boundary; in the image-space grid data, the lateral depth distance between matched points of adjacent columns varies nonlinearly. This guarantees the correspondence between each pair of object-space and image-space columns and between the data of adjacent columns, converting the nonlinear regression analysis of image-space depth data into a linear regression analysis of object-space data; the error is thereby confined within a grid cell, and the depth-measurement accuracy is improved.
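Because each grid cell is treated linearly, mapping a corrected image-space coordinate to object space reduces to locating it between grid lines and interpolating per axis. A sketch under the simplifying assumption that the corrected grid is rectilinear, with the column Zc positions and row Yc positions given as 1-D lists (the patent's grid is only approximately so after correction):

```python
def image_to_object(zc, yc, zc_cols, yc_rows, zw_cols, yw_rows):
    """Map a corrected image-space point (zc, yc) to object space
    (zw, yw): find the bracketing grid lines on each axis and
    interpolate linearly within that cell, mirroring the
    cell-wise linear treatment of the subdivision rule."""
    def interp(v, src, dst):
        for i in range(len(src) - 1):
            lo, hi = src[i], src[i + 1]
            if min(lo, hi) <= v <= max(lo, hi):
                t = (v - lo) / (hi - lo)
                return dst[i] + t * (dst[i + 1] - dst[i])
        raise ValueError("point outside grid")
    return interp(zc, zc_cols, zw_cols), interp(yc, yc_rows, yw_rows)
```

Note that the image-space spacings (zc_cols) may be nonuniform while the object-space spacings (zw_cols) are the uniform interval a; the lookup is what confines the nonlinearity to a single cell.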
6. Model parameters and depth-data output
In conjunction with previous step model parameter and algorithm, general formula can be obtained in (such as Figure 10):
(ZC, YC) indicate characteristic point image space cross, ordinate;
(ZW, YW) indicate characteristic point object space cross, longitudinal depth coordinate;
<CTW>Indicate that the coordinate in the image space correction grid of point calculates counterpart space point coordinates;
<GP>Indicate point in mesh generation rule-based algorithm;
<GC>Indicate point image coordinate to progress correction algorithm
Assuming that the point of object space is P (such as Figure 11), it is P in the corresponding point (such as Figure 11-(b)) of image real spacec,
The corresponding point P in image flame detection spacecc.P can be obtained by image procossingcThe image coordinate of point, it is assumed that table at this time
Show the point coordinates (such as Figure 11-(a)) of object spaceCorresponding to point coordinates in image real space isImage flame detection space plane grid corresponding points (such as Figure 11-(c)) coordinateFitting is special
Scaling center S (Sx, the S that sign point straight-line intersection obtainsy), work as PcWhen inside image real space, by tie point P and point S,
It zooms in and out reduction on first row scaling board on the basis of characteristic point fit line and obtains point PccCoordinate.At this point, being carried out to object space
Mesh generation establishes the depth distance that known object space deviates generation at equal intervalsWith data columns n (0≤n≤
N-1 linear relationship)Depth distance between analysis image flame detection space characteristics match pointWith the non-linear relation of data columns n physical presence It can be carried out with matlab
Data quantization relationship is established in fitting.In this way when the point coordinates in known image space, pass throughIt can determine this lateral depth information(It is defaulted as 0), passing through
Vision measurement obtains the length and width of each pixel, can also calculate P point fore-and-aft distancesFinally by motion control electric rotating
Machine angularly rotates recording angular information completion light and cuts three-dimensionalreconstruction, exports P point object space world coordinates.
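The quantization step above (linear object-space depth versus column number, nonlinear image-space spacing fitted in MATLAB) can be sketched in Python, with `numpy.polyfit` standing in for the MATLAB fit. All numeric values below are illustrative assumptions, not calibration data from the patent.

```python
import numpy as np

# Assumed illustrative calibration: N board positions spaced a apart.
a = 5.0                       # mm per equal-interval translation (assumed)
N = 8
n = np.arange(N)              # data column numbers 0..N-1
Zw = a * n                    # linear relation: object-space depth vs. column n

# Synthetic nonlinear image-space depth distances between matched feature
# columns (perspective makes the spacing shrink with depth; model assumed).
d_img = 100.0 * n / (1.0 + 0.02 * n)

# Fit the quantization relation d_img -> Zw (numpy.polyfit stands in for
# the MATLAB fitting mentioned in the text).
coeffs = np.polyfit(d_img, Zw, deg=3)

def depth_from_image_distance(d):
    """Transverse depth Zw for a measured image-space depth distance d."""
    return float(np.polyval(coeffs, d))
```

A point whose measured image distance matches a calibration column maps back close to that column's known depth; e.g. `depth_from_image_distance(d_img[4])` is approximately `Zw[4]`.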
It will be apparent to those skilled in the art that various improvements and variations can be made to the present invention. By analyzing the actual image data to determine the rules by which the feature points shift, and by the grid-division method, the error range is reduced, the interference of distortion on the camera is lessened, and the depth-measurement accuracy is improved. Such modifications and variations of the present invention are covered as long as they fall within the scope of the appended claims and their equivalents.
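The closing rotation step of the method, sweeping each light-section profile point through the recorded turntable angles into world coordinates, can be sketched as follows. The axis convention (rotation about the vertical Yw axis, with Zw acting as the radius) is an assumption for illustration, not specified by the patent.

```python
import math

def profile_point_to_world(zw, yw, theta_deg):
    """Rotate a light-section profile point (zw, yw) about the vertical
    axis by the recorded turntable angle to get world (x, y, z).
    Axis convention assumed: Yw is the rotation axis, zw the radius."""
    t = math.radians(theta_deg)
    return (zw * math.sin(t), yw, zw * math.cos(t))

# Sweeping one profile point through equal angular steps traces a ring,
# which is how the per-angle profiles assemble into the 3D model.
ring = [profile_point_to_world(10.0, 2.0, k * 90.0) for k in range(4)]
```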
Claims (2)
1. A light-section data modeling and reconstruction method fusing plane-grid depth calculation, characterized in that:
the transverse and longitudinal depth information of the object surface is calculated from the correspondence between the object-space plane-grid data and the image-space plane-grid data, and three-dimensional data reconstruction is carried out; coordinate conversion is performed according to the image coordinates of the feature points in the image-space plane grid to determine the object-space depth coordinate information;
the method comprises six modules: a control module, an image acquisition module, an image processing module, a grid-matrix output module, a model-encapsulation and depth-data output module, and a light-section three-dimensional reconstruction module;
the motion control card in the control module outputs pulse signals to the stepping translation stage through serial communication, controlling the translation and rotation of the object on the stage and the on/off of the laser, while the translation and rotation data are recorded;
the laser is turned on and projected onto the calibration-board plane, the stage is controlled to translate the calibration board at equal intervals, the CCD camera in the image acquisition module synchronously acquires the image sequence, and the image files are saved and displayed on the computer;
image processing is performed on the image sequence saved by the image acquisition module, extracting the light-stripe centerline and its intersections with the black-and-white boundary lines on the dot calibration board as feature points, and outputting the feature-point image coordinates;
the object-space grid matrix and the basic data structure are established from the translation data recorded by the control module combined with the dot-calibration-board grid specification, and the image-space grid matrix is established from the feature-point image coordinates output by the image processing module;
from the object-space grid matrix, the basic data structure and the image-space grid matrix obtained by the grid-matrix output module, a rational method and the mapping relations are determined; the grid-division rule is then determined by analyzing the data quantization relationship, the model is encapsulated, and the transverse and depth coordinate data of the feature points are output;
the control motor is rotated in equal angular steps through the object's rotation cycle, the angle information is recorded for the feature-point sequences at different angles on each light stripe, and the light-section three-dimensional reconstruction module completes the light-section three-dimensional reconstruction of the object surface points.
2. The method according to claim 1, characterized by comprising the following steps:
Coordinate systems are established as follows: the object-space plane-grid coordinate system ZwOYw is established on the drive assembly; the image coordinate system ZcYc is established in the calibration-board image, in units of pixels; since the features of the mark-point offsets are tracked, identified, marked and located in the same image coordinate system ZcYc while the camera follows the calibration-board motion, the coordinate system of the synthesized image-space physical plane-grid basic data matrix is also ZcYc, and it is composed of the feature mark-point coordinates on the calibration-board image sequence;
1) Acquisition of the modeling calibration-board image sequence
The computer control system drives the stepping translation stage to move at equal intervals, carrying the calibration model horizontally along the object-space light-section direction; each time it moves one interval distance, one calibration-board image is acquired in the projection field of view, yielding the image sequence I1, I2, I3, I4 … IN. Each calibration board carries one column of mark-point images, N columns in total, the columns being defined as A1, A2, A3 … AN, each showing M points. These points are defined as follows: in the Zw-axis direction the points form N columns, and in the Yw-axis direction they form M rows. Each column of points is preserved in its corresponding object-space image; each column of points within the field of view of the saved image sequence is imaged on the image plane, each object-space column forming one image, and the image sets of the calibration-board column feature points are denoted Ic. The acquisition of the modeling calibration-board image sequence is thus completed;
2) Acquisition of the image-space grid data
The calibration-board sequence images I1, I2, I3, I4 … IN acquired in the previous step are processed to extract the mark-point center coordinates, giving the mark-point coordinate data matrices Ic1, Ic2, Ic3, Ic4 … IcN, which are merged into the same image-space coordinate system ZcYc. The columns are defined as B1, B2, B3 … BN, each showing M points; these points are defined as follows: in the Zc-axis direction the points form N columns, and in the Yc-axis direction they form M rows. The image-space physical plane-grid basic data matrix Ic is thereby established. Analysis of the data shows that the offsets of the feature points matched column by column follow a linear quantitative relationship. In the object-space three-dimensional coordinate system ZwOYw, the object-space plane-grid data matrix Iw is obtained from the known calibration model and the equal interval distance a;
3) Scaling correction of the image-space physical plane-grid data matrix
To establish the correspondence between object space and image space, the experimental data are examined: the image-space physical plane-grid data matrix Ic is found to be an irregular data matrix. Hence, to establish the correspondence between the object-space and image-space grid data, the real image-space data are analyzed further: the offsets of the feature points matched in each column follow a quantization relationship, consistent with the scaling phenomenon between images in perspective geometry; the closer an object is to the lens, the larger its image, and there exists a scaling center point S(Sx, Sy) approached in the limit, which is obtained by fitting lines to multiple groups of matched feature points and taking the intersection of the fitted lines. From the total equal-interval travel of the moving platform, within the camera field of view, the total pixel count of the B1-column feature points in the first image of the sequence and of the BN-column feature points in the N-th image are obtained, determining the scaling ratio of each feature-point column. The actual scaling relationship is thus built, and scaling restoration relative to the first-column data yields the corrected image-space plane-grid data matrix Icc. Its columns are defined as C1, C2, C3 … CN, each showing M points; these points are defined as follows: in the Zc-axis direction the points form N columns, and in the Yc-axis direction they form M rows;
4) Establishing the mapping relations between the object-space and image-space grid data
The points of the object-space grid data, the image-space physical plane-grid data and the corrected image-space grid data correspond one-to-one, i.e. the coordinate axes ZW, YW correspond to ZC, YC respectively; each grid data point in the object-space coordinate system corresponds to the matching actual grid data point in the image-space coordinate system and to the matching grid data point in the corrected image-space coordinate system; and the grid columns A1, A2, A3 … AN in the object-space coordinate system correspond respectively to the actual image-space grid columns B1, B2, B3 … BN and to the corrected image-space grid columns C1, C2, C3 … CN;
5) Grid-division rule
Taking the first column of object-space data as the reference, with the object-space columns known to be equally spaced (N columns in total), both the object space and the image space are divided evenly into (N−1) parts with each data column as a division boundary; in the image-space grid data, the transverse depth distance between matched points in each column varies nonlinearly;
6) Model parameters and output of depth data
Combining the model parameters and algorithms of the previous steps, the general formula is obtained:
(ZC, YC) denote the abscissa and ordinate of a feature point in image space;
(ZW, YW) denote the transverse and longitudinal depth coordinates of the feature point in object space;
<CTW> denotes the algorithm that computes the corresponding object-space point coordinates from a point's coordinates in the corrected image-space grid;
<GP> denotes the algorithm that locates a point under the grid-division rule;
<GC> denotes the algorithm that applies the scaling correction to a point's image coordinates;
assume the object-space point is P; its corresponding point in the real image space is Pc and its corresponding point in the corrected image space is Pcc; the image coordinates of Pc are obtained by image processing; the point coordinates in object space, the corresponding point coordinates in the real image space and the corresponding corrected plane-grid point are related through the scaling center S(Sx, Sy), obtained by intersecting the fitted feature-point lines; when Pc lies inside the real image space, the coordinates of Pcc are obtained by connecting Pc and S and scaling back to the first-column calibration board along the fitted feature-point line; the object space is then divided into a grid: the known equal-interval object-space depth distance has a linear relationship with the data column number n, while the depth distance between matched feature points in the corrected image space has a nonlinear relationship with n that actually exists and is fitted with MATLAB to establish the data quantization relationship; thus, when the image coordinates of a point are known, its transverse depth ZW is determined (YW defaults to 0); the length and width of each pixel, obtained by vision measurement, then also give the longitudinal distance YW of point P; finally, a motion-controlled rotating motor rotates in equal angular steps while the angle information is recorded, completing the light-section three-dimensional reconstruction and outputting the object-space world coordinates of point P.
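The scaling-center estimation in step 3), fitting a line through each matched feature-point track and intersecting the fitted lines, can be sketched as a least-squares line intersection. The synthetic tracks below all pass exactly through an assumed center, so the recovered point is exact; the center coordinates and the number of tracks are illustrative assumptions.

```python
import numpy as np

S_true = np.array([320.0, 240.0])     # assumed scaling center for the demo
rng = np.random.default_rng(0)

# Synthetic feature tracks: unit direction vectors of lines through S_true.
dirs = rng.normal(size=(5, 2))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

# Least-squares intersection of lines p_i + t*d_i:
# minimize sum_i ||(I - d_i d_i^T)(S - p_i)||^2  =>  solve A S = b.
A = np.zeros((2, 2))
b = np.zeros(2)
for d in dirs:
    P = np.eye(2) - np.outer(d, d)    # projector orthogonal to line i
    p = S_true + 3.0 * d              # one sample point lying on line i
    A += P
    b += P @ p
S_est = np.linalg.solve(A, b)
```

With noisy real feature coordinates the same normal equations give the least-squares center rather than an exact intersection, which is why the patent fits multiple groups of matched points before intersecting.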
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810319307.6A CN108596929B (en) | 2018-04-11 | 2018-04-11 | Light-section data modeling reconstruction method integrating plane grid depth calculation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108596929A true CN108596929A (en) | 2018-09-28 |
CN108596929B CN108596929B (en) | 2022-04-19 |
Family
ID=63621540
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810319307.6A Active CN108596929B (en) | 2018-04-11 | 2018-04-11 | Light-section data modeling reconstruction method integrating plane grid depth calculation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108596929B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090238449A1 (en) * | 2005-11-09 | 2009-09-24 | Geometric Informatics, Inc | Method and Apparatus for Absolute-Coordinate Three-Dimensional Surface Imaging |
CN104573180A (en) * | 2014-12-02 | 2015-04-29 | 浙江工业大学 | Real-person shoe type copying device and shoe tree manufacturing method based on single-eye multi-angle-of-view robot vision |
CN106289099A (en) * | 2016-07-28 | 2017-01-04 | 汕头大学 | A kind of single camera vision system and three-dimensional dimension method for fast measuring based on this system |
CN106767433A (en) * | 2016-12-19 | 2017-05-31 | 北京市计算中心 | A kind of method and system for measuring foot sizing |
CN106907988A (en) * | 2017-02-27 | 2017-06-30 | 北京工业大学 | The micro- visual modeling method of basic data matrix |
- 2018-04-11: CN application CN201810319307.6A (patent CN108596929B), status: Active
Non-Patent Citations (3)
Title |
---|
YUEZONG WANG et al.: "Microscopic vision modeling method by direct mapping analysis for micro-gripping system with stereo light microscope", Micron * |
LIU Chong et al.: "Microscopic three-dimensional surface reconstruction", Journal of Mechanical Engineering * |
XU Gang et al.: "A new three-dimensional reconstruction method based on grayscale similarity of grid-point projections", Acta Optica Sinica * |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112857249A (en) * | 2019-11-28 | 2021-05-28 | 株洲中车时代电气股份有限公司 | Calibration method and device for contact net detection equipment |
CN111260727A (en) * | 2020-02-19 | 2020-06-09 | 广州海格星航信息科技有限公司 | Grid positioning method and device based on image processing and storage medium |
CN111260727B (en) * | 2020-02-19 | 2023-06-20 | 广州海格星航信息科技有限公司 | Grid positioning method and device based on image processing and storage medium |
CN113344611A (en) * | 2021-05-19 | 2021-09-03 | 天津旗滨节能玻璃有限公司 | Cost determination method, smart device, computer program product and storage medium |
CN113344611B (en) * | 2021-05-19 | 2023-04-18 | 天津旗滨节能玻璃有限公司 | Cost determination method, smart device, and storage medium |
CN113327229A (en) * | 2021-05-27 | 2021-08-31 | 扬州大学 | Method for quickly positioning image point grid |
CN113327229B (en) * | 2021-05-27 | 2023-09-22 | 扬州大学 | Method for rapidly positioning image point grid |
Also Published As
Publication number | Publication date |
---|---|
CN108596929B (en) | 2022-04-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108921901B (en) | Large-view-field camera calibration method based on precise two-axis turntable and laser tracker | |
CN111473739B (en) | Video monitoring-based surrounding rock deformation real-time monitoring method for tunnel collapse area | |
CN102376089B (en) | Target correction method and system | |
CN108596929A (en) | The light of fusion plane grid depth calculation cuts data modeling reconstructing method | |
WO2016138758A1 (en) | Calibration method of telecentric imaging three-dimensional shape measurement system | |
CN103712555B (en) | Automotive frame pilot hole vision on-line measurement system and method thereof | |
US8803943B2 (en) | Formation apparatus using digital image correlation | |
CN104567727B (en) | Global unified calibration method for linear structured light profile sensor through three-dimensional target | |
CN108198224B (en) | Linear array camera calibration device and calibration method for stereoscopic vision measurement | |
CN102184563B (en) | Three-dimensional scanning method, three-dimensional scanning system and three-dimensional scanning device used for plant organ form | |
CN105931234A (en) | Ground three-dimensional laser scanning point cloud and image fusion and registration method | |
CN102679959B (en) | Omnibearing 3D (Three-Dimensional) modeling system based on initiative omnidirectional vision sensor | |
CN103292695A (en) | Monocular stereoscopic vision measuring method | |
CN107274453A (en) | Video camera three-dimensional measuring apparatus, system and method for a kind of combination demarcation with correction | |
CN108389233B (en) | Laser scanner and camera calibration method based on boundary constraint and mean value approximation | |
CN106683068A (en) | Three-dimensional digital image acquisition method and equipment thereof | |
CN110443879B (en) | Perspective error compensation method based on neural network | |
CN105627948A (en) | Large-scale complex curved surface measurement system and application thereof | |
CN104655011A (en) | Non-contact optical measurement method for volume of irregular convex-surface object | |
CN109141226A (en) | The spatial point coordinate measuring method of one camera multi-angle | |
CN106323286B (en) | A kind of robot coordinate system and the transform method of three-dimensional measurement coordinate system | |
CN110009667A (en) | Multi-viewpoint cloud global registration method based on Douglas Rodríguez transformation | |
CN106289086B (en) | A kind of double camera measurement method for apart from Accurate Calibration between optical indicia point | |
CN112802123B (en) | Binocular linear array camera static calibration method based on stripe virtual target | |
CN205352322U (en) | Large -scale complicated curved surface measurement system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||