CN110246192A - Binocular crag deformation intelligent identification method - Google Patents

Binocular crag deformation intelligent identification method

Info

Publication number
CN110246192A
CN110246192A CN201910537671.4A CN201910537671A
Authority
CN
China
Prior art keywords
point
image
crag
video camera
coordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910537671.4A
Other languages
Chinese (zh)
Inventor
黄河
阎宗岭
徐峰
唐胜传
杨伟
张小松
谭玲
刘中帅
罗溢
袁青海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Merchants Chongqing Communications Research and Design Institute Co Ltd
Original Assignee
China Merchants Chongqing Communications Research and Design Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Merchants Chongqing Communications Research and Design Institute Co Ltd filed Critical China Merchants Chongqing Communications Research and Design Institute Co Ltd
Priority to CN201910537671.4A priority Critical patent/CN110246192A/en
Publication of CN110246192A publication Critical patent/CN110246192A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/16Measuring arrangements characterised by the use of optical techniques for measuring the deformation in a solid, e.g. optical strain gauge
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85Stereo camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • G06T2207/10012Stereo images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30232Surveillance

Abstract

The binocular crag deformation intelligent identification method provided by the invention comprises the following steps. First, left and right camera models with first-order radial distortion are established; a particle swarm algorithm is used to find the optimal combination of all parameters of the left and right cameras that defines the stereo vision system, and the left and right cameras are calibrated. Image matching is then performed on the basis of circular marker points to identify the measurement target. Finally, on the basis of the image matching, the three-dimensional coordinates of the matched marker points are recovered and the coordinates before and after deformation are compared; the in-plane displacement deviation, out-of-plane displacement deviation and strain value of each marker centre are calculated to determine whether the surface of the monitored structure has deformed. The method is simple, non-destructive, non-contact, highly accurate and performs well in real time; it effectively overcomes the insufficient accuracy of traditional object deformation measurement methods and can play an important role in practical engineering applications of slope crag detection.

Description

Binocular crag deformation intelligent identification method
Technical field
The present invention relates to the field of crag (dangerous rock) deformation monitoring, and in particular to a binocular crag deformation intelligent identification method.
Background art
When the stability and other states of a slope are monitored, the surface deformation of the slope is one of the most significant indicators that the state of the slope is changing. A highway high and dangerous slope monitoring method based on twin-camera imaging technology, filed by the Highway Administration of Guangdong Province on September 4, 2012 (application number CN201210324273.2), discloses the following steps. The first step is three-dimensional reconstruction of the slope: cameras are installed and distributed along the slope direction to be monitored, so that each predetermined monitoring point lies within the coverage of two cameras at the same time. The three-dimensional reconstruction specifically includes solving the camera parameters with a linear transformation method or a perspective projection transformation matrix, calculating the orthogonal rotation matrix and the translation matrix of the world coordinate system relative to the camera coordinate system, extracting feature points from the images captured by the two cameras, and matching corresponding feature points with a rotation- and scale-invariant transformation model and a random sampling consensus method, thereby realizing the three-dimensional reconstruction of the slope. The second step is slope monitoring: the spatial coordinates of the monitoring points laid out on the slope are calculated from the images captured by the two cameras to obtain a set of spatial coordinate points, the Euclidean distance between the three-dimensional positions of adjacent monitoring points is calculated periodically, and changes of the slope surface are monitored. Finally, disaster early warning is carried out: the slope displacement or slip values obtained by the monitoring are compared with preset allowable critical values and processed according to preset rules.
A method for observing three-dimensional slope deformation using circular markers by Dalian University of Technology, dated April 8, 2015 (application number CN201410817771.X), discloses the following technical scheme: a binocular vision system is used, and three-dimensional slope deformation is observed through marked circular feature points. A series of regularly arranged circular feature points marked on the slope surface are photographed in real time, the two-dimensional coordinates of all marker points at each moment are obtained, and the two-dimensional coordinates are then matched in pairs by a program to obtain three-dimensional spatial coordinates.
The above technical solutions have the following problems: (1) first-order radial distortion is not taken into account in the linear transformation during the three-dimensional slope reconstruction, so the accuracy of micro-deformation detection of the slope crag is insufficient; (2) the precision of the image edges is not taken into account during the image matching process.
Summary of the invention
In view of the defects in the prior art, the present invention provides a binocular crag deformation intelligent identification method, which can solve the problem of insufficient accuracy of slope crag detection in the prior art.
The binocular crag deformation intelligent identification method provided by the invention comprises the following steps:
S1. Establish left and right camera models with first-order radial distortion, use a particle swarm algorithm to find the optimal combination of all parameters of the left and right cameras that defines the stereo vision system, and calibrate the left and right cameras;
S2. Perform image matching based on circular marker points to identify the measurement target;
S3. On the basis of the image matching, recover the three-dimensional coordinates of the matched marker points, compare the three-dimensional coordinates of the marker points before and after deformation, and finally calculate the in-plane displacement deviation, out-of-plane displacement deviation and strain value of each marker centre, so as to determine whether the surface of the monitored structure has deformed.
Further, the specific method of establishing the left and right camera models with first-order radial distortion is as follows:
S11. Construct the three-dimensional coordinate system of the left and right cameras using a pinhole model with first-order radial distortion;
S12. Construct the image coordinate systems on the imaging planes of the left and right cameras;
S13. Construct the computer image coordinate systems of the left and right cameras;
S14. Transform the three-dimensional coordinates of a point P on the surface of the object under test into the computer image coordinate system, obtaining the mathematical model of the stereo vision system formed by the two CCD cameras;
S15. Find the least-squares solution of the three-dimensional coordinates of point P according to the mathematical model of the stereo vision system formed by the two CCD cameras, and use the particle swarm algorithm to find the optimal combination of all parameters of the left and right cameras.
Further, the specific steps of transforming the three-dimensional coordinates of a point P on the surface of the object under test into the computer image coordinate system in S14 are as follows:
(1) Describe the rigid-body transformation from the three-dimensional coordinate system to the camera coordinate system with homogeneous coordinates;
(2) Transform from the camera coordinate system to the ideal image coordinate system;
(3) Transform from the ideal image coordinate system to the real image coordinate system and establish the distortion model;
(4) Transform from the real image coordinate system to the computer image coordinate system.
Further, the specific steps of using the particle swarm algorithm in S15 to find the optimal combination of all parameters of the left and right cameras are as follows:
(1) Establish the optimization model of the particle swarm algorithm according to the mathematical model of the stereo vision system formed by the two CCD cameras;
(2) Establish the objective function from the residuals between the measured three-dimensional coordinates of a point P on the object surface and the three-dimensional coordinates calculated by the mathematical model of the stereo vision system formed by the two CCD cameras.
Further, the specific method of performing image matching based on circular marker points and identifying the measurement target in S2 is as follows:
S21. Coarsely locate image edges with the Canny algorithm;
S22. Accurately determine image edges with the Zernike moment operator;
S23. Extract the circular marker point features with an ellipse fitting method;
S24. Match the extracted feature points and then determine the object to be measured.
Further, the Canny algorithm used in S21 filters the image by convolving it with the first derivative of a two-dimensional Gaussian function and then finds local maxima in the filtered image, specifically:
(1) Filter the image with a Gaussian function;
(2) Convolve the image with the first derivative of the Gaussian function to obtain the gradient magnitude |G| and gradient direction θ of each pixel;
(3) Divide the gradient directions into four sectors, compare the gradient magnitude of the measured point with the gradient values of its neighbouring pixels along the gradient direction, and mark whether the measured point is an edge point;
(4) Collect statistics of the gradient magnitudes of all pixels, calculate the gradient mean D and variance σ, take the sum of the gradient mean and the variance as the high threshold for edge detection, and take 0.4 times the high threshold as the low threshold;
(5) Perform edge linking to roughly determine the measurement target.
Further, accurately determining the image edges with the Zernike moment operator in S22 specifically comprises:
(1) Determine whether a pixel is an edge point by calculating four parameters of the pixel, the four parameters being: the image background grey level h, the grey-level step height k, the distance l from the centre point to the edge, and the angle between the perpendicular from the centre point to the edge and the x-axis;
(2) Map each point of the neighbourhood of the measured point into the unit circle and calculate the Zernike orthogonal moment Anm of the measured point from the points f(x, y) of the discrete image;
(3) After obtaining the moments A11 and A20 of the edge point, calculate the distance l from the centre point to the edge and the angle between the perpendicular from the centre point to the edge and the x-axis;
(4) Then obtain the sub-pixel coordinates of the edge point.
Further, matching the extracted feature points in S24 specifically comprises:
(1) Epipolar constraint matching: use the epipolar analytic equation to find, in the corresponding right camera imaging plane, the coordinates of the feature point that corresponds to the point P in the normalized plane of the left camera;
(2) Marker point tracking with a Kalman filtering algorithm: first coarsely predict the position of the marker point at the next moment, and then perform a neighbourhood search.
It can be seen from the above technical solutions that the beneficial effects of the present invention are as follows. The binocular crag deformation intelligent identification method comprises the following steps: first, left and right camera models with first-order radial distortion are established, a particle swarm algorithm is used to find the optimal combination of all parameters of the left and right cameras that defines the stereo vision system, and the left and right cameras are calibrated; image matching is then performed on the basis of circular marker points to identify the measurement target; finally, on the basis of the image matching, the three-dimensional coordinates of the matched marker points are recovered, the three-dimensional coordinates before and after deformation are compared, and the in-plane displacement deviation, out-of-plane displacement deviation and strain value of each marker centre are calculated to determine whether the surface of the monitored structure has deformed. The method is simple, non-destructive, non-contact, highly accurate and performs well in real time; it effectively overcomes the insufficient accuracy of traditional object deformation measurement methods and can play an important role in practical engineering applications of slope crag detection.
Detailed description of the invention
In order to explain the specific embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the specific embodiments or the prior art are briefly described below. In all drawings, similar elements or parts are generally identified by similar reference numerals, and the elements or parts are not necessarily drawn to scale.
Fig. 1 is a flow diagram of the binocular crag deformation intelligent identification method of the present invention.
Fig. 2 shows the left and right camera models with first-order radial distortion in an embodiment of the present invention.
Fig. 3 is the planar sub-pixel edge step model diagram in an embodiment of the present invention.
Fig. 4 is the epipolar geometry constraint principle diagram in an embodiment of the present invention.
Specific embodiment
The technical solution of the present invention is described in detail below in conjunction with the drawings and embodiments. The following embodiments are only used to clearly illustrate the technical solution of the present invention; they are therefore only examples and cannot be used to limit the scope of protection of the present invention.
It should be noted that, unless otherwise indicated, the technical and scientific terms used in this application have the ordinary meaning understood by those of ordinary skill in the art to which the present invention belongs.
Fig. 1 is a flow diagram of the binocular crag deformation intelligent identification method provided in this embodiment; the specific steps are as follows:
S1. Establish the left and right camera models with first-order radial distortion as shown in Fig. 2, use the particle swarm algorithm to find the optimal combination of all parameters of the left and right cameras that defines the stereo vision system, and calibrate the left and right cameras.
The specific method of establishing the left and right camera models with first-order radial distortion is as follows:
S11. Construct the three-dimensional coordinate system of the left and right cameras using a pinhole model with first-order radial distortion. The optical centres of the left and right cameras are Ocl and Ocr respectively, and the left and right camera coordinate systems are OclXclYclZcl and OcrXcrYcrZcr, where the Zcl axis and the Zcr axis coincide with the optical axes of the left and right cameras respectively. The coordinates of a point P on the surface of the object under test in the three-dimensional coordinate system are (Xw, Yw, Zw).
S12. Construct the image coordinate systems on the imaging planes of the left and right cameras. Let OilXlYl and OirXrYr be the image coordinate systems on the imaging planes of the left and right cameras respectively, where the image centres Oil and Oir are the intersections of the optical axes Zcl and Zcr with the left and right image planes, the Xcl and Xcr axes are parallel to the Xl and Xr axes, and the Ycl and Ycr axes are parallel to the Yl and Yr axes, respectively.
S13. Construct the computer image coordinate systems of the left and right cameras. Let Olulvl and Orurvr be the computer image coordinate systems of the left and right cameras respectively, with the origin Ol located at the upper-left corner of the left camera imaging plane and u, v indicating the column and row of a pixel in the image array. (Xu, Yu) are the image coordinates of point P under the ideal pinhole model, and (Xd, Yd) are the actual image coordinates of P, which deviate from the ideal image coordinates (Xu, Yu) because of the radial distortion of the lens.
S14. Transform the three-dimensional coordinates (Xw, Yw, Zw) of a point P on the surface of the object under test into the computer image coordinates (u, v). The specific transformation steps are:
(1) Describe the rigid-body transformation from the three-dimensional coordinate system to the camera coordinate system with homogeneous coordinates:
[Xc, Yc, Zc, 1]ᵀ = [R T; 0 1]·[Xw, Yw, Zw, 1]ᵀ (1)
In formula (1), R is the 3 × 3 orthogonal rotation matrix and T is the 3 × 1 translation vector; R and T determine the orientation of the camera relative to the three-dimensional coordinate system. The elements of R can be expressed by the three Euler angles γ, β and α, and
T = [Tx, Ty, Tz]ᵀ (2)
(2) Transform from the camera coordinate system to the ideal image coordinate system by perspective projection:
Xu = f·Xc/Zc, Yu = f·Yc/Zc (3)
where f is the effective focal length of the camera.
(3) Transform from the ideal image coordinate system to the real image coordinate system and establish the distortion model. With first-order radial distortion, the ideal and actual image coordinates are related by
Xu = Xd·(1 + k·r²), Yu = Yd·(1 + k·r²), with r² = Xd² + Yd² (4)
where k is the first-order radial distortion coefficient.
(4) Transform from the real image coordinate system to the computer image coordinate system:
u = sx·Xd/dx + cx, v = Yd/dy + cy (5)
In formula (5), (cx, cy) are the pixel coordinates of the principal point O, dx and dy are the distances between adjacent pixels in the x and y directions of the image plane, and sx is the ratio between them, i.e. the aspect ratio.
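As an illustration of steps (1)-(4), the following Python sketch projects a world point through one camera of this model. The Euler-angle order (Rz·Ry·Rx), the approximate inversion of the radial distortion, and all function and parameter names are assumptions made for the example and are not specified by the patent.

```python
import numpy as np

def project_point(Pw, euler, T, f, k, sx, dx, dy, cx, cy):
    """World point -> computer-image coordinates (u, v) for one camera,
    following steps (1)-(4). The rotation order Rz(gamma)Ry(beta)Rx(alpha)
    is an assumed convention; the patent only names the three Euler angles."""
    a, b, g = euler
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(a), -np.sin(a)],
                   [0, np.sin(a),  np.cos(a)]])
    Ry = np.array([[ np.cos(b), 0, np.sin(b)],
                   [0, 1, 0],
                   [-np.sin(b), 0, np.cos(b)]])
    Rz = np.array([[np.cos(g), -np.sin(g), 0],
                   [np.sin(g),  np.cos(g), 0],
                   [0, 0, 1]])
    Xc, Yc, Zc = Rz @ Ry @ Rx @ np.asarray(Pw, float) + np.asarray(T, float)  # (1)
    Xu, Yu = f * Xc / Zc, f * Yc / Zc                                         # (3)
    # first-order radial distortion (4), inverted approximately: Xd ~ Xu/(1 + k*r^2)
    r2 = Xu ** 2 + Yu ** 2
    Xd, Yd = Xu / (1 + k * r2), Yu / (1 + k * r2)
    u = sx * Xd / dx + cx                                                     # (5)
    v = Yd / dy + cy
    return u, v
```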
Combining formulas (1), (3), (4) and (5) gives the complete mapping from (Xw, Yw, Zw) to (u, v) (formula (6)). According to the imaging principle of CCD cameras, when a spatial object is observed by the left and right cameras, two images are obtained whose computer image coordinates are denoted (ul, vl) and (ur, vr) respectively, and from these the spatial three-dimensional position (Xw, Yw, Zw) of the object can be uniquely determined. Therefore, combining the models of the two cameras yields the mathematical model of the stereo vision system formed by the two CCD cameras, which is written in matrix form (formula (7)) and abbreviated as
B·[Xw, Yw, Zw]ᵀ = D (8)
The least-squares solution of the three-dimensional coordinates (Xw, Yw, Zw) is then obtained from formula (8):
[Xw, Yw, Zw]ᵀ = (BᵀB)⁻¹·Bᵀ·D (9)
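A minimal sketch of this least-squares recovery is given below. It assumes that the first-order distortion has already been compensated, so that each calibrated camera can be summarised by a 3 × 4 projection matrix; the matrices B and D are assembled from the two projection equations, and all names are illustrative rather than taken from the patent.

```python
import numpy as np

def triangulate_point(P_left, P_right, uv_left, uv_right):
    """Least-squares recovery of (Xw, Yw, Zw) from one matched pixel pair.

    P_left, P_right : 3x4 projection matrices of the calibrated cameras
                      (intrinsics combined with [R | T]), distortion removed.
    uv_left, uv_right : (u, v) computer-image coordinates of the same marker.
    Builds the over-determined system B [Xw Yw Zw]^T = D and solves it in the
    least-squares sense, i.e. (B^T B)^-1 B^T D as in formula (9).
    """
    rows, rhs = [], []
    for P, (u, v) in ((P_left, uv_left), (P_right, uv_right)):
        # u = (P[0] . X) / (P[2] . X) and v = (P[1] . X) / (P[2] . X)
        rows.append(u * P[2, :3] - P[0, :3]); rhs.append(P[0, 3] - u * P[2, 3])
        rows.append(v * P[2, :3] - P[1, :3]); rhs.append(P[1, 3] - v * P[2, 3])
    B, D = np.asarray(rows), np.asarray(rhs)
    Xw, *_ = np.linalg.lstsq(B, D, rcond=None)
    return Xw
```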
The specific method of using the particle swarm algorithm to find the optimal combination of all parameters of the left and right cameras is as follows.
The optimization model of the particle swarm algorithm is established according to the mathematical model of the stereo vision system formed by the two CCD cameras. The parameters of the optimization model comprise all the external and internal parameters of the cameras in the stereo vision system model, 24 parameters in total (external parameters αl, βl, γl, Txl, Tyl, Tzl, αr, βr, γr, Txr, Tyr, Tzr and internal parameters fl, sxl, dyl, kl, cxl, cyl, fr, sxr, dyr, kr, cxr, cyr). The constraint conditions, i.e. the value ranges of these 24 parameters, can be obtained by actual measurement or from the product specifications.
The objective function is established from the residuals between the measured three-dimensional coordinates (Xw, Yw, Zw) of the calibration points and the three-dimensional coordinates (X′w, Y′w, Z′w) calculated by the mathematical model of the stereo vision system formed by the two CCD cameras (formula (10)). In formula (10), N represents the number of calibration points and x represents all the parameters of the vision measurement system; then
x = [αl, βl, γl, Txl, Tyl, Tzl, fl, sxl, dyl, kl, cxl, cyl, αr, βr, γr, Txr, Tyr, Tzr, fr, sxr, dyr, kr, cxr, cyr]ᵀ (11)
which is simply denoted as
x = [x1, x2, …, x24]ᵀ (12)
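The patent defines the optimization model and the objective function but does not reproduce the particle swarm iteration itself. The sketch below is a generic bound-constrained particle swarm search over the 24-parameter vector x that minimizes an objective of the form of formula (10); the swarm hyper-parameters and the user-supplied reconstruct function (which applies the stereo model with candidate parameters) are assumptions for illustration.

```python
import numpy as np

def calibration_residual(x, world_pts, left_px, right_px, reconstruct):
    """F(x): sum over the N calibration points of the distance between the
    measured coordinates and the coordinates reconstructed with parameters x.
    `reconstruct` is a hypothetical helper implementing formulas (6)-(9)."""
    est = np.array([reconstruct(x, l, r) for l, r in zip(left_px, right_px)])
    return np.linalg.norm(est - world_pts, axis=1).sum()

def pso_minimize(objective, lower, upper, n_particles=40, n_iter=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer; lower/upper are the per-parameter
    bounds obtained by measurement or from the camera data sheets."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    dim = lower.size
    pos = rng.uniform(lower, upper, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lower, upper)
        vals = np.array([objective(p) for p in pos])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, float(pbest_val.min())
```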
S2. Perform image matching based on circular marker points to identify the measurement target.
Specifically:
S21. Coarsely locate image edges with the Canny algorithm.
The Canny algorithm filters the image by convolving it with the first derivative of a two-dimensional Gaussian function and then finds local maxima in the filtered image, specifically:
(1) Filter the image with a Gaussian function;
(2) Convolve the image with the first derivative of the Gaussian function to obtain the gradient magnitude |G| and gradient direction θ of each pixel.
The two-dimensional Gaussian convolution function is
G(x, y) = 1/(2πσ²)·exp(−(x² + y²)/(2σ²)) (13)
and the gradient magnitude of a pixel is
|G| = √(Gx² + Gy²) (14)
where Gx and Gy are the partial derivatives of the filtered image in the x and y directions, and the gradient direction is θ = arctan(Gy/Gx).
(3) Divide the gradient directions into four sectors and compare the gradient magnitude of the measured point with the gradient values of its neighbouring pixels along the gradient direction, marking whether the measured point is an edge point. Specifically, if the gradient magnitude of the pixel is smaller than that of its neighbours, the point is marked as a non-edge point and its gradient value is set to zero; otherwise it is taken as a candidate edge point.
(4) Collect statistics of the gradient magnitudes of all pixels and calculate the gradient mean D and variance σ; the sum of the gradient mean and the variance is taken as the high threshold for edge detection, Th = D + σ, and 0.4·Th is taken as the low threshold Tl.
(5) Edge linking.
Each pixel is first labelled as a strong or weak edge point using the two thresholds: a strong edge point is a pixel whose gradient magnitude is greater than the high threshold, a weak edge point is a pixel whose gradient magnitude lies between the high and low thresholds, and all other pixels are non-edge points.
Then, edge tracing is performed starting from the strong edge points: if there is a strong edge point in the four-neighbourhood of a weak edge point, the weak edge point is connected and treated as an edge point; otherwise it is considered a non-edge point.
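As a rough illustration of S21 (not the patent's own implementation), the sketch below chooses the thresholds from the gradient statistics exactly as described, Th = D + σ and Tl = 0.4·Th, and then lets OpenCV's Canny perform the non-maximum suppression and hysteresis linking of steps (3)-(5); the Gaussian blur followed by Sobel derivatives approximates the Gaussian first-derivative filtering.

```python
import cv2
import numpy as np

def coarse_edges(gray, sigma=1.4):
    """Coarse edge map for a greyscale image (uint8), in the spirit of S21."""
    blurred = cv2.GaussianBlur(gray, (0, 0), sigma)       # Gaussian filtering
    gx = cv2.Sobel(blurred, cv2.CV_64F, 1, 0, ksize=3)    # gradient components
    gy = cv2.Sobel(blurred, cv2.CV_64F, 0, 1, ksize=3)
    mag = np.hypot(gx, gy)                                # |G|
    th_high = mag.mean() + mag.std()                      # Th = D + sigma
    th_low = 0.4 * th_high                                # Tl = 0.4 * Th
    return cv2.Canny(blurred, th_low, th_high)            # NMS + hysteresis linking
```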
S22. Accurately determine the image edges with the Zernike moment operator.
The Zernike moment operator determines whether a pixel is an edge point by calculating four parameters of the pixel: the image background grey level h, the grey-level step height k, the distance l from the centre point to the edge, and the angle φ between the perpendicular from the centre point to the edge and the x-axis.
Fig. 3 shows the planar sub-pixel edge step model. The Zernike orthogonal moment of the point f(x, y) in the discrete image is
Anm = ((n + 1)/π)·Σ f(x, y)·V*nm(ρ, θ), taken over x² + y² ≤ 1 (15)
As can be seen from formula (15), in order to calculate Anm for the measured point, each point of its neighbourhood must be mapped into the unit circle, where
Vnm(ρ, θ) = Rnm(ρ)·e^(jmθ) (16)
In the concrete implementation of the Zernike edge detection algorithm, of the four edge-point parameters only the distance l from the centre point to the edge and the angle φ between the perpendicular from the centre point to the edge and the x-axis are used, so only two templates need to be derived, namely A11 and A20.
In the calculation, only the 7 × 7 templates need to be convolved with the grey values of the pixel's neighbourhood, so the operation is simple and easy to implement. After the moments A11 and A20 of the edge point are obtained, the distance l from the centre point to the edge and the angle φ between the perpendicular from the centre point to the edge and the x-axis are calculated.
The sub-pixel coordinates of the edge point are then obtained from the pixel coordinates together with l and φ (formulas (17) and (18)).
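A sketch of the sub-pixel refinement under the usual Ghosal-Mehrotra formulation is given below: for an N × N neighbourhood the edge point is shifted by (N/2)·l along the direction φ. The moments A11 and A20 are evaluated numerically by mapping the neighbourhood onto the unit circle instead of using pre-tabulated 7 × 7 templates, and the scaling and sign conventions are assumptions of the example.

```python
import numpy as np

def zernike_subpixel(gray, x, y, n=7):
    """Refine one coarse edge pixel (x = column, y = row) to sub-pixel accuracy
    using the Zernike moments A11 and A20 of its n x n neighbourhood."""
    half = n // 2
    patch = gray[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    coords = (np.arange(n) - half) / half        # map neighbourhood to [-1, 1]
    xx, yy = np.meshgrid(coords, coords)
    inside = xx ** 2 + yy ** 2 <= 1.0            # keep samples inside the unit circle
    v11 = xx - 1j * yy                           # conjugate of V11 = rho * exp(j*theta)
    v20 = 2.0 * (xx ** 2 + yy ** 2) - 1.0        # V20 (real radial polynomial)
    # the constant pixel-area factor is omitted: it cancels in the ratio a20/a11
    a11 = (2.0 / np.pi) * np.sum(patch[inside] * v11[inside])
    a20 = (3.0 / np.pi) * np.sum(patch[inside] * v20[inside])
    phi = np.arctan2(a11.imag, a11.real)         # direction of the edge normal
    a11_rot = a11.real * np.cos(phi) + a11.imag * np.sin(phi)
    l = a20 / a11_rot                            # distance of the edge from the centre
    # back to pixel units: the unit circle spans n/2 pixels in each direction
    return x + 0.5 * n * l * np.cos(phi), y + 0.5 * n * l * np.sin(phi)
```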
S23. Extract the circular marker point features with the ellipse fitting method.
The general expression of an ellipse is
Ax² + Bxy + Cy² + Dx + Ey + F = 0 (19)
Since the edges of the circular marker point image have been accurately located in the preceding steps (S21-S22), the six parameters A, B, C, D, E and F of formula (19) can be obtained by the least-squares method.
The objective function is established as the sum of squared algebraic residuals of formula (19) over all edge points (xi, yi):
G = Σi (Axi² + Bxiyi + Cyi² + Dxi + Eyi + F)² (20)
Taking the partial derivatives of G with respect to the parameters and setting them to zero yields the parameter values for which the objective function G attains its minimum.
The canonical (axis-aligned) form of the ellipse equation is
A′x′² + C′y′² + D′x′ + E′y′ + F = 0
Substituting x = x′cosθ + y′sinθ and y = −x′sinθ + y′cosθ into formula (19) gives
x′²(Acos²θ − Bcosθsinθ + Csin²θ) + x′y′(2Acosθsinθ + B(cos²θ − sin²θ) − 2Ccosθsinθ) + y′²(Asin²θ + Bcosθsinθ + Ccos²θ) + x′(Dcosθ − Esinθ) + y′(Dsinθ + Ecosθ) + F = 0 (24)
Comparing the canonical form with formula (24), the cross term must vanish:
2Acosθsinθ + B(cos²θ − sin²θ) − 2Ccosθsinθ = 0
which gives the rotation angle of the ellipse
θ = 1/2·arctan(B/(C − A)) (25)
where A′ = Acos²θ − Bcosθsinθ + Csin²θ, C′ = Asin²θ + Bcosθsinθ + Ccos²θ, D′ = Dcosθ − Esinθ and E′ = Dsinθ + Ecosθ.
On the basis of the above scheme, after the least-squares ellipse equation has been obtained, a distance threshold can be set for the distance of each edge point to the fitted ellipse and the precision of the fitted ellipse centre can be refined iteratively: 5% of the edge points are rejected in each iteration until the change of the distance criterion falls below the threshold, which effectively controls the ellipse fitting precision.
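The following sketch illustrates this iterative refinement of a marker centre; it relies on OpenCV's least-squares cv2.fitEllipse instead of solving formulas (19)-(25) explicitly, and the radial-deviation measure used to rank edge points is a simplification of the true point-to-ellipse distance. All names are illustrative.

```python
import cv2
import numpy as np

def fit_marker_center(edge_points, reject_frac=0.05, tol=1e-3, max_iter=10):
    """Iteratively fit an ellipse to the sub-pixel edge points of one circular
    marker and return its centre. Each pass drops the 5% of points farthest
    from the current fit (ranked by a radial-deviation proxy) and refits,
    until the centre moves by less than `tol` pixels."""
    pts = np.asarray(edge_points, dtype=np.float32)
    prev = None
    cx = cy = None
    for _ in range(max_iter):
        (cx, cy), (major, minor), _angle = cv2.fitEllipse(pts)
        if prev is not None and np.hypot(cx - prev[0], cy - prev[1]) < tol:
            break
        prev = (cx, cy)
        # proxy for the point-to-ellipse distance: deviation from the mean radius
        radii = np.hypot(pts[:, 0] - cx, pts[:, 1] - cy)
        dev = np.abs(radii - 0.25 * (major + minor))
        keep = dev.argsort()[: int(len(pts) * (1.0 - reject_frac))]
        if len(keep) < 5:                  # fitEllipse needs at least 5 points
            break
        pts = pts[keep]
    return cx, cy
```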
S24. Feature point matching.
(1) Epipolar constraint matching.
Fig. 4 shows the epipolar geometry constraint principle. In Fig. 4 the image points of an object point P on the normalized imaging planes Πl and Πr of the left and right cameras are pl and pr respectively. The plane formed by the optical centres O1 and O2 of the two cameras and the object point P intersects the normalized planes of the two cameras in the left epipolar line el-pl and the right epipolar line er-pr; el and er are the intersections of the baseline O1O2 with the two normalized planes, i.e. the projections of the two optical centres on the normalized planes, and are called the epipoles. It can be seen from Fig. 4 that object points P1, P2 and P3 on the same viewing ray of the left camera correspond to the same point pl in the normalized plane of the left camera but are projected onto three different points in the normalized plane of the right camera, all of which lie on the right epipolar line er-pr. In other words, if there is a feature point in the left image, its projection in the corresponding right camera imaging plane must lie on the corresponding right epipolar line.
Let p = (x, y, 1) and p′ = (x′, y′, 1) be the image points of an object point on the normalized imaging planes of the left and right cameras respectively. The epipolar analytic equation is then
p′ᵀ·E·p = 0 (26)
In formula (26), E is the essential matrix of the epipolar plane, E = [t]×·R, where R is the rotation matrix between the two cameras, t is the translation vector between the two cameras, and [t]× is the antisymmetric matrix defined by the vector t:
[t]× = [0 −tz ty; tz 0 −tx; −ty tx 0] (27)
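A minimal check of this constraint for a candidate match is sketched below; the residual p′ᵀ·E·p should be close to zero for a correct correspondence (in practice it is compared against a small tolerance, which is an implementation choice, not a value given in the patent).

```python
import numpy as np

def skew(t):
    """Antisymmetric matrix [t]x of formula (27)."""
    tx, ty, tz = t
    return np.array([[0.0, -tz,  ty],
                     [ tz, 0.0, -tx],
                     [-ty,  tx, 0.0]])

def epipolar_residual(p_left, p_right, R, t):
    """Residual p'^T E p of formula (26) for a candidate correspondence, with
    E = [t]x R built from the stereo extrinsics; p_left and p_right are
    normalized homogeneous image points (x, y, 1). A near-zero value means the
    pair satisfies the epipolar constraint."""
    E = skew(t) @ np.asarray(R, float)
    return float(np.asarray(p_right, float) @ E @ np.asarray(p_left, float))
```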
(2) Kalman tracking.
Marker point tracking is performed with a Kalman filtering algorithm: first, the position of the marker point at the next moment is coarsely predicted, and then a neighbourhood search is carried out. This reduces the search radius and saves computation time.
Kalman filtering is a special case of Bayesian filtering: when the system state function and the observation function are known and linear, and the process noise, the observation noise and the posterior probability density function obey Gaussian distributions, the optimal posterior probability can be obtained.
The system state equation is: xk = Ak·xk−1 + Bk·uk−1 + wk−1
The observation equation is: sk = Hk·xk + vk
It is assumed that the process noise wk−1 and the observation noise vk are independently and identically distributed, wk ~ N(0, Qk) and vk ~ N(0, Rk), where Qk and Rk are the covariance matrices of the process noise and the observation noise respectively.
The specific steps are:
Prior estimate: x̂⁻k = Ak·x̂k−1 + Bk·uk−1
Prior estimate error covariance: P⁻k = Ak·Pk−1·Akᵀ + Qk
Kalman gain: Kk = P⁻k·Hkᵀ·(Hk·P⁻k·Hkᵀ + Rk)⁻¹
Posterior estimate: x̂k = x̂⁻k + Kk·(sk − Hk·x̂⁻k)
Posterior estimate error covariance: Pk = (I − Kk·Hk)·P⁻k (31)
First, the Kalman filter is initialized. Let the state vector of the target (in the X direction) be
X(t) = [x(t), v(t), a(t)]ᵀ (32)
The state transition matrix is initialized according to formula (33), and the observation matrix is initialized as HK = [1 0 0].
The initial state estimate is established from the observations of the target in the first three image frames, in which the displacement, velocity and acceleration of the target are considered respectively.
The initial posterior estimate error covariance is then given by formula (34), where σxR and σxQ are the observation noise and the process noise respectively.
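A per-axis sketch of such a tracker is given below; the constant-acceleration transition matrix, the time step and the noise levels are assumptions that stand in for formulas (33)-(34), which are not reproduced in the text.

```python
import numpy as np

class MarkerTracker1D:
    """Per-axis constant-acceleration Kalman tracker for one marker point.
    The transition matrix is the usual [x, v, a] model; dt and the noise
    levels must be tuned to the actual frame rate."""

    def __init__(self, x0, dt=1.0, sigma_q=1e-2, sigma_r=1.0):
        self.A = np.array([[1.0, dt, 0.5 * dt * dt],
                           [0.0, 1.0, dt],
                           [0.0, 0.0, 1.0]])
        self.H = np.array([[1.0, 0.0, 0.0]])   # only the position is observed
        self.Q = sigma_q * np.eye(3)           # process noise covariance Qk
        self.R = np.array([[sigma_r]])         # observation noise covariance Rk
        self.x = np.asarray(x0, dtype=float)   # state estimate [x, v, a]
        self.P = np.eye(3)                     # posterior error covariance

    def predict(self):
        """Prior estimate and prior error covariance; returns the coarse
        position used to centre the neighbourhood search."""
        self.x = self.A @ self.x
        self.P = self.A @ self.P @ self.A.T + self.Q
        return self.x[0]

    def update(self, z):
        """Kalman gain, posterior estimate and posterior error covariance."""
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.array([z]) - self.H @ self.x)
        self.P = (np.eye(3) - K @ self.H) @ self.P
        return self.x[0]
```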
S3. On the basis of the image matching, recover the three-dimensional coordinates of the matched marker points and compare the three-dimensional coordinates of the marker points before and after deformation; finally, calculate the in-plane displacement deviation, out-of-plane displacement deviation and strain value of each marker centre, so as to determine whether the surface of the monitored structure has deformed.
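The patent does not spell out how the displacement components and strain are computed from the recovered coordinates; the sketch below shows one plausible way to do it, taking the plane fitted to the initial marker positions as the reference for the in-plane/out-of-plane split and the relative change of inter-marker distance as a nominal strain. All of these choices are assumptions for illustration.

```python
import numpy as np

def deformation_metrics(P_before, P_after):
    """Compare the reconstructed marker centres before and after (N x 3 arrays).
    The reference plane is fitted to the initial markers; each displacement is
    split into an in-plane and an out-of-plane component, and a nominal strain
    is taken as the relative change of distance between every marker pair."""
    P0 = np.asarray(P_before, dtype=float)
    P1 = np.asarray(P_after, dtype=float)
    centroid = P0.mean(axis=0)
    _, _, vt = np.linalg.svd(P0 - centroid)   # plane normal = last right singular vector
    normal = vt[-1]
    disp = P1 - P0
    out_of_plane = disp @ normal              # signed offsets along the plane normal
    in_plane = np.linalg.norm(disp - np.outer(out_of_plane, normal), axis=1)
    strain = {}
    for i in range(len(P0)):
        for j in range(i + 1, len(P0)):
            d0 = np.linalg.norm(P0[j] - P0[i])
            strain[(i, j)] = (np.linalg.norm(P1[j] - P1[i]) - d0) / d0
    return in_plane, out_of_plane, strain
```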
Finally, it should be noted that the above embodiments are only used to illustrate the technical solution of the present invention and not to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that the technical solutions described in the foregoing embodiments can still be modified, or some or all of their technical features can be equivalently replaced; such modifications or replacements do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention, and shall all be covered within the scope of the claims and the description of the invention.

Claims (8)

1. A binocular crag deformation intelligent identification method, characterized by comprising the following steps:
S1. establishing left and right camera models with first-order radial distortion, using a particle swarm algorithm to find the optimal combination of all parameters of the left and right cameras that defines the stereo vision system, and calibrating the left and right cameras;
S2. performing image matching based on circular marker points to identify the measurement target;
S3. on the basis of the image matching, recovering the three-dimensional coordinates of the matched marker points, comparing the three-dimensional coordinates of the marker points before and after deformation, and finally calculating the in-plane displacement deviation, out-of-plane displacement deviation and strain value of each marker centre, so as to determine whether the surface of the monitored structure has deformed.
2. The binocular crag deformation intelligent identification method according to claim 1, characterized in that the specific method of establishing the left and right camera models with first-order radial distortion is:
S11. constructing the three-dimensional coordinate system of the left and right cameras using a pinhole model with first-order radial distortion;
S12. constructing the image coordinate systems on the imaging planes of the left and right cameras;
S13. constructing the computer image coordinate systems of the left and right cameras;
S14. transforming the three-dimensional coordinates of a point P on the surface of the object under test into the computer image coordinate system, obtaining the mathematical model of the stereo vision system formed by the two CCD cameras;
S15. finding the least-squares solution of the three-dimensional coordinates of point P according to the mathematical model of the stereo vision system formed by the two CCD cameras, and using the particle swarm algorithm to find the optimal combination of all parameters of the left and right cameras.
3. The binocular crag deformation intelligent identification method according to claim 2, characterized in that the specific steps of transforming the three-dimensional coordinates of a point P on the surface of the object under test into the computer image coordinate system in S14 are:
(1) describing the rigid-body transformation from the three-dimensional coordinate system to the camera coordinate system with homogeneous coordinates;
(2) transforming from the camera coordinate system to the ideal image coordinate system;
(3) transforming from the ideal image coordinate system to the real image coordinate system and establishing the distortion model;
(4) transforming from the real image coordinate system to the computer image coordinate system.
4. The binocular crag deformation intelligent identification method according to claim 2, characterized in that the specific steps of using the particle swarm algorithm in S15 to find the optimal combination of all parameters of the left and right cameras are:
(1) establishing the optimization model of the particle swarm algorithm according to the mathematical model of the stereo vision system formed by the two CCD cameras;
(2) establishing the objective function from the residuals between the measured three-dimensional coordinates of a point P on the object surface and the three-dimensional coordinates calculated by the mathematical model of the stereo vision system formed by the two CCD cameras.
5. The binocular crag deformation intelligent identification method according to claim 1, characterized in that the specific method of performing image matching based on circular marker points and identifying the measurement target in S2 is:
S21. coarsely locating image edges with the Canny algorithm;
S22. accurately determining image edges with the Zernike moment operator;
S23. extracting the circular marker point features with an ellipse fitting method;
S24. matching the extracted feature points and then determining the object to be measured.
6. The binocular crag deformation intelligent identification method according to claim 5, characterized in that the Canny algorithm used in S21 filters the image by convolving it with the first derivative of a two-dimensional Gaussian function and then finds local maxima in the filtered image, specifically:
(1) filtering the image with a Gaussian function;
(2) convolving the image with the first derivative of the Gaussian function to obtain the gradient magnitude |G| and gradient direction θ of each pixel;
(3) dividing the gradient directions into four sectors, comparing the gradient magnitude of the measured point with the gradient values of its neighbouring pixels along the gradient direction, and marking whether the measured point is an edge point;
(4) collecting statistics of the gradient magnitudes of all pixels, calculating the gradient mean D and variance σ, taking the sum of the gradient mean and the variance as the high threshold for edge detection, and taking 0.4 times the high threshold as the low threshold;
(5) performing edge linking to roughly determine the measurement target.
7. The binocular crag deformation intelligent identification method according to claim 5, characterized in that accurately determining the image edges with the Zernike moment operator in S22 specifically comprises:
(1) determining whether a pixel is an edge point by calculating four parameters of the pixel, the four parameters being: the image background grey level h, the grey-level step height k, the distance l from the centre point to the edge, and the angle between the perpendicular from the centre point to the edge and the x-axis;
(2) mapping each point of the neighbourhood of the measured point into the unit circle and calculating the Zernike orthogonal moment Anm of the measured point from the points f(x, y) of the discrete image;
(3) after obtaining the moments A11 and A20 of the edge point, calculating the distance l from the centre point to the edge and the angle between the perpendicular from the centre point to the edge and the x-axis;
(4) then obtaining the sub-pixel coordinates of the edge point.
8. The binocular crag deformation intelligent identification method according to claim 5, characterized in that matching the extracted feature points in S24 specifically comprises:
(1) epipolar constraint matching: using the epipolar analytic equation to find, in the corresponding right camera imaging plane, the coordinates of the feature point that corresponds to the point P in the normalized plane of the left camera;
(2) marker point tracking with a Kalman filtering algorithm: first coarsely predicting the position of the marker point at the next moment, and then performing a neighbourhood search.
CN201910537671.4A 2019-06-20 2019-06-20 Binocular crag deforms intelligent identification Method Pending CN110246192A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910537671.4A CN110246192A (en) 2019-06-20 2019-06-20 Binocular crag deforms intelligent identification Method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910537671.4A CN110246192A (en) 2019-06-20 2019-06-20 Binocular crag deforms intelligent identification Method

Publications (1)

Publication Number Publication Date
CN110246192A true CN110246192A (en) 2019-09-17

Family

ID=67888390

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910537671.4A Pending CN110246192A (en) 2019-06-20 2019-06-20 Binocular crag deforms intelligent identification Method

Country Status (1)

Country Link
CN (1) CN110246192A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103323209A (en) * 2013-07-02 2013-09-25 清华大学 Structural modal parameter identification system based on binocular stereo vision
CN104501735A (en) * 2014-12-23 2015-04-08 大连理工大学 Method for observing three-dimensional deformation of side slope by utilizing circular marking points
CN106504284A (en) * 2016-10-24 2017-03-15 成都通甲优博科技有限责任公司 A kind of depth picture capturing method combined with structure light based on Stereo matching

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
廉小磊 et al.: "Calibration of a binocular stereo vision system based on particle swarm optimization" (基于粒子群算法的双目立体视觉系统标定), Computer Engineering and Applications (计算机工程与应用) *
申宇: "Research on structural deformation monitoring technology based on binocular stereo vision" (基于双目立体视觉的结构变形监测技术研究), China Master's Theses Full-text Database (Electronic Journal), Information Science and Technology series *

Similar Documents

Publication Publication Date Title
CN103411553B (en) The quick calibrating method of multi-linear structured light vision sensors
CN109035200B (en) Bolt positioning and pose detection method based on single-eye and double-eye vision cooperation
CN111223133B (en) Registration method of heterogeneous images
CN104318548B (en) Rapid image registration implementation method based on space sparsity and SIFT feature extraction
CN105716539B (en) A kind of three-dimentioned shape measurement method of quick high accuracy
CN104063702B (en) Three-dimensional gait recognition based on shielding recovery and partial similarity matching
CN103278138B (en) Method for measuring three-dimensional position and posture of thin component with complex structure
CN109579695B (en) Part measuring method based on heterogeneous stereoscopic vision
CN104121902B (en) Implementation method of indoor robot visual odometer based on Xtion camera
CN106204544A (en) A kind of automatically extract index point position and the method and system of profile in image
CN109859272A (en) A kind of auto-focusing binocular camera scaling method and device
CN106485757A (en) A kind of Camera Calibration of Stereo Vision System platform based on filled circles scaling board and scaling method
CN110763204B (en) Planar coding target and pose measurement method thereof
CN110879080A (en) High-precision intelligent measuring instrument and measuring method for high-temperature forge piece
CN102788572A (en) Method, device and system for measuring attitude of lifting hook of engineering machinery
CN105335699B (en) Read-write scene is read and write intelligent identification and the application thereof of element three-dimensional coordinate
CN108007345A (en) A kind of digger operating device measuring method based on monocular camera
CN107145890A (en) A kind of pointer dashboard automatic reading method under remote various visual angles environment
CN106097430B (en) A kind of laser stripe center line extraction method of more gaussian signal fittings
CN111462198A (en) Multi-mode image registration method with scale, rotation and radiation invariance
CN115861448A (en) System calibration method and system based on angular point detection and characteristic point extraction
CN110245634A (en) Multiposition, multi-angle crag deformation judgement and analysis method
CN107679542A (en) A kind of dual camera stereoscopic vision recognition methods and system
Datondji et al. Rotation and translation estimation for a wide baseline fisheye-stereo at crossroads based on traffic flow analysis
CN103400377B (en) A kind of three-dimensional circular target based on binocular stereo vision detects and determination methods

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190917

RJ01 Rejection of invention patent application after publication