CN107203759A - Branch-recursive road reconstruction algorithm based on two-view geometry - Google Patents

Branch-recursive road reconstruction algorithm based on two-view geometry Download PDF

Info

Publication number
CN107203759A
Authority
CN
China
Prior art keywords
image
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710419548.3A
Other languages
Chinese (zh)
Inventor
陈剑
贾丙西
张凯祥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201710419548.3A priority Critical patent/CN107203759A/en
Publication of CN107203759A publication Critical patent/CN107203759A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 — Scenes; Scene-specific elements
    • G06V20/50 — Context or environment of the image
    • G06V20/56 — Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 — Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/10 — Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 — Arrangements for image or video recognition or understanding
    • G06V10/20 — Image preprocessing
    • G06V10/25 — Determination of region of interest [ROI] or a volume of interest [VOI]

Abstract

The invention discloses a branch-recursive road reconstruction algorithm based on two-view geometry. A two-view geometric model is built, and each row of the images captured by a dual-camera system is processed recursively with 3D reconstruction and road detection to obtain the road region in the image. Specifically: the two-view geometric model is built; 3D reconstruction is performed on each row, iteratively computing the height information of its pixels; road detection is performed on each row using the height information obtained by 3D reconstruction together with the pixel values of the image row; and, proceeding row by row upward from the bottom row of the image, the reconstruction and detection steps are repeated for each row, yielding the road region in the image. The invention proposes a general two-view geometric model that describes the geometric information of the road scene with respect to a reference plane, so that the road region can be geometrically reconstructed with high accuracy, low computational cost, and greater robustness to color variation.

Description

Branch-recursive road reconstruction algorithm based on two-view geometry
Technical field
The invention belongs to the field of computer vision and relates to a branch-recursive road reconstruction algorithm based on two-view geometry, intended for intelligent vehicles equipped with a dual-camera system.
Background technology
In intelligent-vehicle applications, environment perception is an essential component, and detection of the drivable road is a key function within it that has been studied extensively over the past decade and more. Conventional research typically uses range sensors (ultrasonic, laser, etc.) and vision sensors. By comparison, vision sensors cost less and provide richer environmental information. However, developing a reliable road perception system is far from trivial: it must remain flexible and robust across different physical configurations, illumination conditions, road types, and background objects. Previous research and applications commonly use two configurations: monocular and binocular.
A monocular camera system is more flexible to use and has lower cost. Classical methods are based on appearance information such as color (J. Alvarez, T. Gevers, Y. LeCun, A. Lopez. Road scene segmentation from a single image. European Conference on Computer Vision, 2012, 376-389) and texture (P. Wu, C. Chang, C. H. Lin. Lane-mark extraction for automobiles under complex conditions. Pattern Recognition, 2014, 47(8), 2756-2767). Color-based methods typically classify pixels according to a road color model, but because road appearance is affected by many environmental factors, road detection then depends heavily on how well the road model generalizes. In general, road regions carry specific texture information, such as lane lines on structured roads (X. Du, K. K. Tan. Vision-based approach towards lane line detection and vehicle localization. Machine Vision and Applications, 2015, 27(2), 175-191) and edges on unstructured roads (P. Moghadam, J. A. Starzyk, W. S. Wijesoma. Fast vanishing-point detection in unstructured environments. IEEE Transactions on Image Processing, 2012, 21(1), 497-500). Furthermore, the perspective effect in the image can be used to estimate the vanishing point, which indicates the direction of the road. However, the vanishing point is not robust enough to curved roads, congested traffic, and shadows. To reduce the dependence on image appearance, another line of work classifies road and non-road points based on the image homography (C. Lin, S. Jiang, Y. Pu, K. Song. Robust ground plane detection for obstacle avoidance of mobile robots using a monocular camera. IEEE/RSJ International Conference on Intelligent Robots and Systems, 2010, 3706-3711). Assuming the road is planar, its images captured from two poses can be related by a homography. Previous work is generally based on the pixel transfer error between the two images (J. Arrospide, L. Salgado, M. Nieto, R. Mohedano. Homography-based ground plane detection using a single on-board camera. IET Intelligent Transport Systems, 2010, 4(2), 149-160), the mapping error of feature points (D. Conrad, G. N. DeSouza. Homography-based ground plane detection for mobile robot navigation using a modified EM algorithm. IEEE International Conference on Robotics and Automation, 2010, 910-915), or a combination of the two (S. Qu, C. Meng. Statistical classification based fast drivable region detection for indoor mobile robot. International Journal of Humanoid Robotics, 2014, 11(1)), so as to identify the image points lying on the expected plane. However, homography-based methods only work for planar roads, while most outdoor roads are not strictly planar. Moreover, the texture of outdoor roads is generally weak or repetitive, which makes robust feature matching difficult, and the pixel errors of the image are equally ambiguous.
By contrast, a binocular camera system provides more than appearance information. Typical methods are based on the disparity map obtained by binocular stereo matching and are therefore more robust to appearance variation. A common approach segments the road region based on u-disparity and v-disparity maps, so that the road region can easily be separated at depth discontinuities (R. Labayrade, D. Aubert, J. P. Tarel. Real time obstacle detection in stereovision on non flat road geometry through "v-disparity" representation. IEEE Intelligent Vehicle Symposium, 2002, 646-651). In (M. Wu, S. K. Lam, T. Srikanthan. Nonparametric technique based high-speed road surface detection. IEEE Transactions on Intelligent Transportation Systems, 2015, 16(2), 874-884), a segmentation algorithm for planar and non-planar roads based on u-disparity and v-disparity maps was proposed. In (F. Oniga, S. Nedevschi. Processing dense stereo data using elevation maps: Road surface, traffic isle, and obstacle detection. IEEE Transactions on Vehicular Technology, 2010, 59(3), 1172-1182), the image is divided into grid cells, each represented by its maximum height value, and the cells are then classified into road and non-road regions according to the distribution of height. However, the performance of these methods depends on the quality of the stereo matching, and generating an accurate, dense disparity map in real time is difficult, especially in regions of weak or repetitive texture. In practice, a stereo vision system requires precise rectification to ensure that the two cameras are parallel and displaced only horizontally, so that corresponding image points lie on the same row of the two images, which reduces the search region. To improve measurement accuracy a longer baseline is generally required, but a longer baseline enlarges the search space for corresponding points and produces more mismatches, which limits the flexibility of the system.
Summary of the invention
To overcome the shortcomings of the prior art, the present invention provides a branch-recursive road reconstruction algorithm based on two-view geometry.
The technical solution adopted by the present invention is:
For a dual-camera vision system mounted on a vehicle, a two-view geometric model is built, and each row of the images captured by the two cameras is processed recursively with 3D reconstruction and road detection, so as to obtain the road region in the image.
The two-view geometric model of the present invention targets road scenes: a reference plane is established, and the mapping between the image points of a space point in the two camera views is described with respect to this reference plane. This mapping is called the projected disparity, and it reflects the height of the space point above the reference plane.
The 3D reconstruction algorithm of the present invention constructs an objective function comprising an image-similarity term and smoothing terms, and iteratively optimizes this objective function to obtain the height information.
The road detection algorithm of the present invention identifies the road edges from the height information obtained by 3D reconstruction together with the distribution of pixel values along the image row.
The algorithm specifically comprises:
1) building the two-view geometric model;
2) performing 3D reconstruction on each row, iteratively computing the height information of its pixels;
3) for each row, performing road detection using the height information obtained by 3D reconstruction and the pixel values of the image row;
4) processing the image row by row: starting from the bottom row of the image and moving upward, repeating steps 2) and 3) to perform 3D reconstruction and road detection on each row, thereby obtaining the road region in the image.
In the recursive procedure of the present invention, 3D reconstruction and road detection are carried out row by row starting from the bottom row of the image. The road detection result of the previous row is used to construct the probabilistic model of the road edge and the region of interest for the next row, guiding the 3D reconstruction and road detection of that row.
Step 1) specifically comprises:
1.1) As shown in Fig. 2, the dual-camera vision system consists of camera C and camera C′. Define R and x_f as the rotation matrix and the translation vector, respectively, from camera C to camera C′ expressed in the frame of camera C, and define A and A′ as the intrinsic matrices of cameras C and C′. According to the pinhole camera model, the intrinsic matrices A and A′ are expressed as:

$$A = \begin{bmatrix} \alpha_u & -\alpha_v\cot\theta & u_0 \\ 0 & \alpha_v/\sin\theta & v_0 \\ 0 & 0 & 1 \end{bmatrix},\qquad A' = \begin{bmatrix} \alpha'_u & -\alpha'_v\cot\theta' & u'_0 \\ 0 & \alpha'_v/\sin\theta' & v'_0 \\ 0 & 0 & 1 \end{bmatrix}$$

where αu and αv are the focal lengths of camera C in units of the horizontal and vertical pixel dimensions, (u0, v0) are the principal point coordinates of camera C, and θ is the angle between the image coordinate axes of camera C; α′u and α′v are the focal lengths of camera C′ in units of the horizontal and vertical pixel dimensions, (u′0, v′0) are the principal point coordinates of camera C′, and θ′ is the angle between the image coordinate axes of camera C′;
1.2) Camera C captures image I and camera C′ captures image I′; images I and I′ have the same size, each with N columns. A row of pixel points in image I is defined as P = [p1 … pi … pN], where pi = [ui vi 1]^T; the row of pixel points at the same row position in image I′ is defined as P′ = [p′1 … p′i … p′N], where p′i = [u′i v′i 1]^T. Here ui and vi are the horizontal and vertical image coordinates of point pi, u′i and v′i are those of point p′i, and T denotes matrix transposition;
1.3) An arbitrary reference plane π is set up in front of the two cameras C and C′; n is the normal vector of π expressed in the frame of camera C, and d is the distance from the optical centre of camera C to π;
1.4) As shown in Fig. 3, for any point Oi in the space in front of the two cameras C and C′, the corresponding positions of Oi in images I and I′ are the image points pi and p′i, whose coordinates in their respective images are pi = [ui vi 1]^T and p′i = [u′i v′i 1]^T.
The coordinates of p′i are obtained from pi by the following transformation:

$$p'_i = \frac{z_i}{z'_i}\left(G\,p_i + \beta_i\,A'\,\frac{x_f}{d}\right)$$

where G is the projection homography matrix, βi is the projected disparity at image point pi, zi is the coordinate of Oi along the optical axis of camera C, z′i is the coordinate of Oi along the optical axis of camera C′, A′ is the intrinsic matrix of camera C′, and x_f is the translation from camera C to camera C′ expressed in the frame of camera C;
The projection homography matrix G is computed as:

$$G = A'\left(R + x_f\,\frac{n^T}{d}\right)A^{-1}$$

where A is the intrinsic matrix of camera C and R is the rotation from camera C to camera C′ expressed in the frame of camera C.
Step 2) specifically comprises:
performing 3D reconstruction on each row to obtain the height information of its pixels;
2.1) the coordinate transformation of step 1.4) is expressed by a transform function for [u′i v′i]^T:

$$\begin{bmatrix} u'_i \\ v'_i \end{bmatrix} = \begin{bmatrix} \dfrac{g_{11}u_i + g_{12}v_i + g_{13} + \beta_i\left(\alpha'_u(x_{fx} - x_{fy}\cot\theta') + u'_0\,x_{fz}\right)/d}{g_{31}u_i + g_{32}v_i + g_{33} + \beta_i\,x_{fz}/d} \\[2ex] \dfrac{g_{21}u_i + g_{22}v_i + g_{23} + \beta_i\left(\alpha'_v\,x_{fy}/\sin\theta' + v'_0\,x_{fz}\right)/d}{g_{31}u_i + g_{32}v_i + g_{33} + \beta_i\,x_{fz}/d} \end{bmatrix}$$

where gkl is the element in row k, column l of the projection homography matrix G (k, l = 1, 2, 3 being the row and column indices) and x_fx, x_fy, x_fz are the components of the translation x_f;
the functional relation [u′i v′i]^T = w(βi, pi) thus computes, for the image point pi captured by camera C, the corresponding point p′i = [u′i v′i 1]^T in the image captured by camera C′;
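The plane-induced mapping w(βi, pi) above can be sketched numerically. All parameter values below (focal lengths, baseline, plane normal, distance d) are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def intrinsic(au, av, u0, v0, theta):
    """Pinhole intrinsic matrix of step 1.1), with image-axis angle theta."""
    return np.array([[au, -av / np.tan(theta), u0],
                     [0.0, av / np.sin(theta), v0],
                     [0.0, 0.0, 1.0]])

def projection_homography(A, Ap, R, x_f, n, d):
    """G = A' (R + x_f n^T / d) A^{-1}: homography induced by the reference plane."""
    return Ap @ (R + np.outer(x_f, n) / d) @ np.linalg.inv(A)

def warp(G, Ap, x_f, d, beta, p):
    """w(beta_i, p_i): map pixel p = (u, v) of image I to image I'.
    beta is the projected disparity; beta = 0 for points on the reference plane."""
    q = G @ np.array([p[0], p[1], 1.0]) + beta * (Ap @ x_f) / d
    return q[:2] / q[2]

A = intrinsic(800.0, 800.0, 320.0, 240.0, np.pi / 2)   # theta = 90 deg: no axis skew
Ap = A.copy()                                          # assume an identical second camera
R = np.eye(3)                                          # parallel cameras
x_f = np.array([0.3, 0.0, 0.0])                        # 0.3 m horizontal baseline
n = np.array([0.0, -1.0, 0.0])                         # plane normal in the camera frame
d = 1.5                                                # optical-centre height above the plane

G = projection_homography(A, Ap, R, x_f, n, d)
on_plane = warp(G, Ap, x_f, d, 0.0, (320.0, 400.0))    # point lying on the road plane
off_plane = warp(G, Ap, x_f, d, 0.1, (320.0, 400.0))   # positive disparity shifts the match
```

With β = 0 the warp reduces to the pure homography Gp, as the model requires for points on the reference plane.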
2.2) For each image row, the projected disparity βi is parameterized by a cubic B-spline function: the projected disparity βi at each image point pi is represented through the fitting function β = FΦ, which gives the projected disparities FΦ of all image points of the row, so that the row coordinates w(FΦ, P) of the corresponding points of image I in image I′ are obtained;
The fitting function in step 2.2) is expressed as:

β = FΦ

where β = [β1 … βN]^T and N is the total number of columns of image I; Φ = [φ−1 … φM−2]^T is the set of control-point values of the cubic B-spline, M is the total number of control points, and φ−1 is the value of the control point with index −1; F is an N × M matrix whose i-th row is expressed as:

Fi = [0 … 0 f0(ti) f1(ti) f2(ti) f3(ti) 0 … 0]

where the number of leading zero elements of the i-th row of F is ⌊ui/k⌋ (⌊·⌋ denotes rounding down) and the number of trailing zero elements is M − ⌊ui/k⌋ − 4; fl(t), l = 0, 1, 2, 3, are the basis functions of the cubic B-spline, ti = ui/k − ⌊ui/k⌋, k is the distance between two adjacent control points, and i is the column index of the point in the image;
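The β = FΦ parameterization can be sketched with the standard uniform cubic B-spline basis. The patent indexes control points from −1; the sketch below simplifies to 0-based indexing, and the values of N, M, and K are illustrative:

```python
import numpy as np

def bspline_matrix(N, M, K):
    """Build the N x M fitting matrix F so that beta = F @ Phi.
    N: pixels per row, M: control points, K: control-point spacing in pixels."""
    def basis(t):
        # Uniform cubic B-spline basis functions f0..f3 at local parameter t in [0, 1).
        return np.array([(1 - t) ** 3,
                         3 * t ** 3 - 6 * t ** 2 + 4,
                         -3 * t ** 3 + 3 * t ** 2 + 3 * t + 1,
                         t ** 3]) / 6.0
    F = np.zeros((N, M))
    for i in range(N):
        j = i // K            # index of the spline segment containing column i
        t = (i % K) / K       # local parameter within that segment
        F[i, j:j + 4] = basis(t)
    return F

F = bspline_matrix(N=8, M=5, K=4)   # tiny example: 8 columns, 5 control points
beta = F @ np.zeros(5)              # flat road: all control values 0 -> beta = 0
```

A useful sanity check is partition of unity: each row of F sums to 1, so a constant control vector yields a constant disparity row.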
2.3) The row coordinates w(FΦ, P) of step 2.2) are determined by maximizing the objective function:

E(Φ) = c(S, w(FΦ, P)) − λ1 r1(Φ) − λ2 r2(Φ)

where λ1 and λ2 are the first and second weight factors, λ1, λ2 > 0; c is the cross-correlation coefficient; r1 measures the smoothness of the current row; r2 measures the closeness of the current row to the previous row; I′(w(FΦ, P)) denotes the set of pixel values of image I′ at the row coordinates w(FΦ, P); and S denotes the set of pixel values of the image points of the current row of image I corresponding to the row coordinates w(FΦ, P);
In step 2.3):
the cross-correlation coefficient c describes the similarity of two sets of image points and is computed as follows:
with S = [s1 … sN]^T the pixel values of a row of image points in image I, and S′ = [s′1 … s′N]^T the pixel values of the corresponding points of the current row in image I′, computed through I′(w(FΦ, P)) = [I′(w(β1, p1)) … I′(w(βN, pN))]^T, i.e. the pixel values of image I′ at the row coordinates w(βi, pi); δi is the weight factor of the i-th image point: δi = 0 if pi falls outside the image boundary, and δi = 1 otherwise;
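The similarity term c can be illustrated by a weighted normalized cross-correlation of the assumed form below (the patent's exact formula is not reproduced in the extracted text; the δ mask zeroes points that warp outside the image, matching the δi definition above):

```python
import numpy as np

def ncc(S, S_prime, delta):
    """Weighted normalized cross-correlation between two rows of pixel values.
    delta[i] = 0 masks points whose correspondence falls outside the image."""
    w = delta.astype(float)
    mS = (w * S).sum() / w.sum()            # weighted mean of the row in I
    mSp = (w * S_prime).sum() / w.sum()     # weighted mean of the warped row in I'
    num = (w * (S - mS) * (S_prime - mSp)).sum()
    den = np.sqrt((w * (S - mS) ** 2).sum() * (w * (S_prime - mSp) ** 2).sum())
    return num / den

a = np.array([10.0, 20.0, 30.0, 40.0])
b = 2.0 * a + 5.0                            # affine change of brightness/contrast
score = ncc(a, b, np.ones(4))                # NCC is invariant to it: score = 1
b_bad = b.copy()
b_bad[0] = 999.0                             # corrupt a point, then mask it out
masked_score = ncc(a, b_bad, np.array([0.0, 1.0, 1.0, 1.0]))
```

The invariance to affine intensity changes is the reason a correlation coefficient, rather than a plain pixel difference, is a sensible similarity term between two cameras with different exposures.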
The smoothness r1 and the closeness r2 are computed by their respective formulas, in which a weight term accounts for discontinuities at image edges, λ3 is a scale factor, the image gradient magnitude of image I at point pi enters that weight term, and ηi is the probability distribution based on the road identification result of the previous row; in r2(Φ), β− is the disparity of the previous row;
the r2(Φ) term is ignored when the bottom row of the image is detected.
The probability distribution ηi based on the road identification result of the previous row is computed from the road edge point set H obtained after the previous row was detected, where h is a point of H, σ is a known standard deviation, ε is a factor accounting for discontinuous road edges, and ui is the horizontal coordinate of the i-th image point.
The initial value of the probability distribution ηi based on the road identification result of the previous row is 1.
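One plausible form of the prior ηi consistent with the description above is a Gaussian of standard deviation σ around each previous-row edge point h ∈ H, floored by the factor ε for discontinuous edges; the patent's exact expression is an image not preserved in the text, so this is an assumed reading:

```python
import numpy as np

def edge_prior(H, N, sigma, eps):
    """Assumed sketch of eta_i: Gaussian bumps of width sigma around each
    previous-row edge point h in H, floored at eps; eta = 1 for the first row."""
    u = np.arange(1, N + 1, dtype=float)     # horizontal pixel coordinates u_i
    if not H:                                # bottom row: uninformative prior
        return np.ones(N)
    bumps = np.array([np.exp(-(u - h) ** 2 / (2 * sigma ** 2)) for h in H])
    return np.maximum(bumps.max(axis=0), eps)

eta = edge_prior([100, 500], N=640, sigma=5.0, eps=0.1)
```

The floor ε keeps the edge probability non-zero everywhere, so an edge that jumps between rows (e.g. a broken curb) is not entirely suppressed by the prior.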
2.4) The objective function of step 2.3) is iteratively optimized with the Levenberg-Marquardt method until the value converges, yielding the optimal solution of the projected disparity βi at every image point pi of the row;
because the convergence region of the iterative optimization algorithm is limited, the initial value for each row's detection is chosen close to the optimal solution as follows:
at the first computation, the control point set Φ of the spline of the bottom row is initialized to 0;
since the road surface is considered continuous, during the repetition of steps 2) and 3) the optimal solution of the previous row is used as the initial value of the next row.
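Step 2.4) can be sketched with a minimal Levenberg-Marquardt loop on a toy linear fitting problem. The real objective maximizes E(Φ); here an equivalent least-squares residual is minimized, and the warm start from the previous row is imitated by the initial guess (all values are illustrative):

```python
import numpy as np

def levenberg_marquardt(residual, jac, x0, iters=50, lam=1e-3):
    """Minimal LM loop: minimize ||residual(x)||^2 with adaptive damping."""
    x = x0.copy()
    for _ in range(iters):
        r, J = residual(x), jac(x)
        H = J.T @ J + lam * np.eye(len(x))       # damped Gauss-Newton Hessian
        x_new = x - np.linalg.solve(H, J.T @ r)
        if np.sum(residual(x_new) ** 2) < np.sum(r ** 2):
            x, lam = x_new, lam * 0.5            # accept: move toward Gauss-Newton
        else:
            lam *= 10.0                          # reject: move toward gradient descent
    return x

# Toy problem: fit control values Phi so that Fmat @ Phi matches a target disparity row.
Fmat = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
target = np.array([0.2, 0.3, 0.4])
phi0 = np.zeros(2)                               # warm start (previous row's optimum)
phi = levenberg_marquardt(lambda p: Fmat @ p - target, lambda p: Fmat, phi0)
```

The damping schedule is what gives LM its limited but reliable convergence region, which is exactly why the patent warm-starts each row from the previous row's solution.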
2.5) Finally the distance from point Oi to the reference plane, used as the height information, is computed as:

Di = zi βi

where Di is the distance from point Oi to the reference plane and zi is the coordinate of Oi along the optical axis of camera C.
Step 3) specifically comprises:
3.1) As shown in Fig. 5, the distance Di from point Oi to the reference plane, obtained as the height information in step 2), is combined with the pixel values of the image row to compute the edge strength of each pixel along the horizontal and vertical image directions. Here ∂/∂u and ∂/∂v denote partial derivatives along the u and v directions; ∂Di/∂u and ∂Di/∂v are the partial derivatives of the distance Di along the u and v directions; ∂I/∂u and ∂I/∂v are the partial derivatives of the pixel value of the pixel corresponding to Oi in image I along the u and v directions; Cu is the edge strength of that pixel along the horizontal (u) direction of the image, and Cv is its edge strength along the vertical (v) direction;
3.2) Initially the centre point of the bottom row of image I is set to be a road point, a road point being a pixel located in the road region; the labels then spread to adjacent points, and each pixel is judged in the manner below to be a road point or a non-road point.
Step 3.2) specifically comprises:
3.2.1) First, in the bottom row of image I, the labels spread from the centre point outward to both sides, and each pixel is judged in turn:
for each critical point (a road point adjacent to a non-road point or to an undecided point), if the edge strength of the critical point satisfies Cu < τu, where τu is the threshold for lateral spreading, the points adjacent to the critical point on both sides are judged to be road points; otherwise they are judged to be non-road points;
3.2.2) Then, moving upward from the bottom row of image I, each row is judged in turn:
for each pixel of the current row, if the point at the same column position in the previous row is a road point and the edge strength of the current pixel satisfies Cv < τv, where τv is the threshold for vertical spreading, the pixel of the current row is judged to be a road point; otherwise it is left undecided and judged again using step 3.2.1);
3.2.3) The above steps are repeated until every pixel of image I has been judged.
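A minimal sketch of the spreading rules of steps 3.2.1)–3.2.3), assuming precomputed edge-strength maps Cu and Cv; the toy values below are illustrative, and the real maps come from step 3.1):

```python
import numpy as np

def detect_road(Cu, Cv, tau_u, tau_v):
    """Row-by-row road labeling. Seeds at the bottom-row centre, spreads sideways
    while Cu < tau_u, then upward while Cv < tau_v. Returns a boolean road mask."""
    rows, cols = Cu.shape
    road = np.zeros((rows, cols), dtype=bool)
    r, c = rows - 1, cols // 2
    road[r, c] = True
    # Bottom row: spread right, then left, while the critical point's lateral edge is weak.
    for cc in range(c + 1, cols):
        if Cu[r, cc - 1] < tau_u:
            road[r, cc] = True
        else:
            break
    for cc in range(c - 1, -1, -1):
        if Cu[r, cc + 1] < tau_u:
            road[r, cc] = True
        else:
            break
    # Upper rows: a pixel is road if the pixel below it is road and its vertical edge is weak.
    for r in range(rows - 2, -1, -1):
        for cc in range(cols):
            if road[r + 1, cc] and Cv[r, cc] < tau_v:
                road[r, cc] = True
    return road

Cu = np.zeros((3, 5)); Cu[2, 1] = 9.0; Cu[2, 3] = 9.0   # lateral edges on the bottom row
Cv = np.zeros((3, 5)); Cv[1, 3] = 9.0; Cv[0, 0] = 9.0   # vertical edges above
mask = detect_road(Cu, Cv, tau_u=1.0, tau_v=1.0)
```

In this toy case the bottom row stops at both strong lateral edges and the upward spread stops at the strong vertical edge, producing a 7-pixel road mask.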
In real environments road edges are usually continuous. The branch-recursive road detection therefore introduces a region of interest based on the detection result of the previous row, so that the complete image row need not be processed, which greatly reduces the amount of computation.
In step 3.2), when any row other than the bottom row of the image is judged, a region of interest Ω is set according to the road edge point set H of the detection result of the previous row, and only the part of the current row within Ω is judged:

Ω = [max(min(H) − μ, 1), min(max(H) + μ, N)]

where μ is a relaxation factor.
When the bottom row is judged, Ω is set to the full row.
However, in exceptional cases such as a discontinuous road edge, if the detected road edge coincides with the boundary of the region of interest, the region of interest is expanded to the whole image row and the road reconstruction of that row is carried out again.
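The region-of-interest rule can be written directly from the formula above (the edge points and μ below are illustrative values):

```python
def roi(H, mu, N):
    """Omega = [max(min(H) - mu, 1), min(max(H) + mu, N)]: search window for the
    current row, built from the previous row's road edge point set H."""
    return (max(min(H) - mu, 1), min(max(H) + mu, N))

window = roi([120, 180], mu=10, N=640)       # interior edges: plain +/- mu margin
clipped = roi([5, 630], mu=10, N=640)        # edges near the border: clipped to [1, N]
```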
For the dual-camera system of Fig. 3, the image scene is reconstructed row by row based on the two-view geometry, with the ground plane of the vehicle chosen as the reference plane. In road scenes, most of the region of interest is road surface close to the reference plane, so this 3D reconstruction method is highly efficient. The geometric information of a scene point can then be described by its projected disparity relative to the reference plane, which can further be converted into height above the reference plane using the vehicle configuration information.
Compared with prior-art stereo-vision methods, which must first complete the 3D reconstruction of the whole image before segmenting the road, the road reconstruction method proposed by the present invention is branch-recursive, comprising geometric reconstruction, road identification and related processes.
For each row, the height of each pixel relative to the reference plane is first reconstructed from the two-view geometric model, and the road region is then segmented using the distribution of the height information and the gray-level information. This process is performed recursively, starting from the bottom row of the image, until the road width in the current row falls below a threshold, yielding the complete road region. Moreover, geometric 3D reconstruction and road detection interact: the road region is segmented based on the distribution of geometric information, and the road segmentation result determines the region of interest and the prior probability distribution of the road edge for the next row. The detailed procedure is shown in Fig. 1.
The beneficial effects of the invention are:
The present invention proposes a general two-view geometric model that describes the geometric information of the road scene with respect to a reference plane, so that the road region can be geometrically reconstructed with high accuracy, low computational cost, and greater robustness to color variation.
Brief description of the drawings
Fig. 1 is the flow chart of the branch-recursive road reconstruction algorithm.
Fig. 2 is the configuration of the vehicle vision system.
Fig. 3 is a schematic diagram of the two-view geometric model.
Fig. 4 shows the process of 3D reconstruction of a road scene.
Fig. 5 shows a road segmentation result.
Fig. 6 shows the road segmentation result in another scene.
Detailed description of the embodiments
The present invention is described in detail below with reference to the embodiments.
As shown in Fig. 1, the proposed branch-recursive road reconstruction algorithm based on two-view geometry is completed by iterating the following processes: 3D reconstruction, road detection, determination of the region of interest, and construction of the probabilistic model.
An embodiment of the invention is as follows:
The implementation of the invention proceeds branch-recursively, and each step comprises 3D reconstruction, road detection, determination of the region of interest, and construction of the probabilistic model.
Fig. 4 shows the processing of one image row: from the road segmentation result of the previous row, the region of interest and the probabilistic model of the current row are obtained; 3D reconstruction then yields the optimal projected disparity β that makes the corresponding points of the two images match best, so that the road region can be segmented based on the 3D information and the image information.
Fig. 5 shows the road segmentation result for this image, where the white region is the segmented road area. Fig. 6 shows the road segmentation result for another scene. The results show that the road edge is effectively segmented in the appropriate regions.

Claims (8)

1. A branch-recursive road reconstruction algorithm based on two-view geometry, for a dual-camera vision system on a vehicle, characterized in that: a two-view geometric model is built, and each row of the images captured by the two cameras is processed recursively with 3D reconstruction and road detection, so as to obtain the road region in the image.
2. The branch-recursive road reconstruction algorithm based on two-view geometry according to claim 1, characterized in that the algorithm specifically comprises:
1) building the two-view geometric model;
2) performing 3D reconstruction on each row, iteratively computing the height information of its pixels;
3) for each row, performing road detection using the height information obtained by 3D reconstruction and the pixel values of the image row;
4) processing the image row by row: starting from the bottom row of the image and moving upward, repeating steps 2) and 3) to perform 3D reconstruction and road detection on each row, thereby obtaining the road region in the image.
3. The branch-recursive road reconstruction algorithm based on two-view geometry according to claim 1, characterized in that step 1) specifically comprises:
1.1) the dual-camera vision system consists of camera C and camera C′; R and x_f are defined as the rotation matrix and the translation vector, respectively, from camera C to camera C′ expressed in the frame of camera C, and A and A′ as the intrinsic matrices of cameras C and C′; the intrinsic matrices A and A′ are expressed as:
$$A = \begin{bmatrix} \alpha_u & -\alpha_v\cot\theta & u_0 \\ 0 & \alpha_v/\sin\theta & v_0 \\ 0 & 0 & 1 \end{bmatrix}$$

$$A' = \begin{bmatrix} \alpha'_u & -\alpha'_v\cot\theta' & u'_0 \\ 0 & \alpha'_v/\sin\theta' & v'_0 \\ 0 & 0 & 1 \end{bmatrix}$$
where αu and αv are the focal lengths of camera C in units of the horizontal and vertical pixel dimensions, (u0, v0) are the principal point coordinates of camera C, and θ is the angle between the image coordinate axes of camera C; α′u and α′v are the focal lengths of camera C′ in units of the horizontal and vertical pixel dimensions, (u′0, v′0) are the principal point coordinates of camera C′, and θ′ is the angle between the image coordinate axes of camera C′;
1.2) camera C captures image I and camera C′ captures image I′; images I and I′ have the same size, each with N columns; a row of pixel points in image I is defined as P = [p1 … pi … pN], where pi = [ui vi 1]^T; the row of pixel points at the same row position in image I′ is defined as P′ = [p′1 … p′i … p′N], where p′i = [u′i v′i 1]^T; ui and vi are the horizontal and vertical image coordinates of point pi, u′i and v′i are those of point p′i, and T denotes matrix transposition;
1.3) an arbitrary reference plane π is set up in front of the two cameras C and C′; n is the normal vector of π in the frame of camera C, and d is the distance from the optical centre of camera C to π;
1.4) for any point Oi in the space in front of the two cameras C and C′, the corresponding positions of Oi in images I and I′ are the image points pi and p′i respectively;
the coordinates of p′i are obtained from pi by the following transformation:
$$p'_i = \frac{z_i}{z'_i}\left(G p_i + \beta_i A' \cdot \frac{x_f}{d}\right)$$
where G is the projection homography matrix, β_i is the projected disparity at image point p_i, z_i is the coordinate of point O_i along the first camera's optical axis, z'_i is the coordinate of O_i along the second camera's optical axis, A' is the intrinsic matrix of the second camera, and x_f is the translation from the first camera to the second camera, expressed in the second camera's frame;
The projection homography matrix G is computed as:
$$G = A'\left(R + x_f \frac{n^T}{d}\right)A^{-1}$$
where A is the intrinsic matrix of the first camera and R is the rotation from the first camera to the second camera, expressed in the second camera's frame.
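Step 1.4) can be sketched numerically. The snippet below builds G from the formula above and transfers one image point; the intrinsic values, rotation, translation, plane, and depths are illustrative assumptions, not values from the patent.

```python
import numpy as np

def projection_homography(A, A_prime, R, x_f, n, d):
    """G = A' (R + x_f n^T / d) A^{-1}, as in step 1.4)."""
    return A_prime @ (R + np.outer(x_f, n) / d) @ np.linalg.inv(A)

def transfer_point(p, beta, G, A_prime, x_f, d, z, z_prime):
    """p' = (z_i / z'_i) (G p_i + beta_i * A' x_f / d), homogeneous coords."""
    return (z / z_prime) * (G @ p + beta * (A_prime @ x_f) / d)

# Illustrative setup: identical pinhole intrinsics, identity rotation,
# small lateral baseline, reference plane 1.5 m below the first camera.
A = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
x_f = np.array([0.12, 0.0, 0.0])
n = np.array([0.0, -1.0, 0.0])
G = projection_homography(A, A, np.eye(3), x_f, n, d=1.5)

# A point on the plane (beta = 0) maps purely through the homography.
p = np.array([320.0, 240.0, 1.0])
p_prime = transfer_point(p, beta=0.0, G=G, A_prime=A, x_f=x_f,
                         d=1.5, z=2.0, z_prime=2.0)
```

With β_i = 0 the parallax term vanishes and the transfer reduces to p' = (z/z') G p, which is the classical plane-induced homography.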
4. The branch-recursive road reconstruction algorithm based on two-view geometry according to claim 1, characterized in that step 2) is specifically:
Three-dimensional reconstruction is carried out row by row to obtain the height information of each pixel;
2.1) The coordinate transform of step 1.4) is expressed through the transform function for [u'_i v'_i]^T:
$$\begin{bmatrix} u'_i \\ v'_i \end{bmatrix} = \begin{bmatrix} \dfrac{g_{11}u_i + g_{12}v_i + g_{13} + \beta_i\left(\alpha'_u\left(x_{fx} - x_{fy}\cot\theta'\right) + u'_0 x_{fz}\right)/d}{g_{31}u_i + g_{32}v_i + g_{33} + \beta_i x_{fz}/d} \\[2ex] \dfrac{g_{21}u_i + g_{22}v_i + g_{23} + \beta_i\left(\alpha'_v x_{fy}/\sin\theta' + v'_0 x_{fz}\right)/d}{g_{31}u_i + g_{32}v_i + g_{33} + \beta_i x_{fz}/d} \end{bmatrix}$$
where g_kl is the element in row k, column l of the projection homography matrix G, with k, l = 1, 2, 3 indexing the matrix rows and columns, and x_fx, x_fy, x_fz are the components of the translation x_f;
The functional relation [u'_i v'_i]^T = w(β_i, p_i) is thus established, giving for each point p_i in the image captured by the first camera its corresponding point p'_i = [u'_i v'_i 1]^T in the image captured by the second camera;
2.2) For each image row, the projected disparities β_i are parameterized with a cubic B-spline fitting function β = FΦ, which yields the projected disparity FΦ of every image point and hence the coordinates w(FΦ, P) in image I' of the points corresponding to the current row of image I;
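The parameterization β = FΦ of step 2.2) amounts to an N×m design matrix F of cubic B-spline basis functions, so that a whole row of disparities is governed by a few coefficients Φ. A minimal sketch, assuming a clamped knot vector on [0, 1] (the patent does not specify the knot layout or the number of control points m):

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_basis(N, m, k=3):
    """N x m design matrix F: row i holds the m cubic B-spline basis
    functions evaluated at pixel i's normalized abscissa, so beta = F @ Phi."""
    # Clamped knot vector on [0, 1]; length must be m + k + 1.
    t = np.concatenate([np.zeros(k), np.linspace(0.0, 1.0, m - k + 1), np.ones(k)])
    u = np.linspace(0.0, 1.0, N)
    F = np.empty((N, m))
    for j in range(m):
        e = np.zeros(m)
        e[j] = 1.0                      # isolate basis function j
        F[:, j] = BSpline(t, e, k)(u)
    return F

F = bspline_basis(N=640, m=8)   # 640-pixel row, 8 spline coefficients
Phi = np.zeros(8)               # coefficients to be optimized in step 2.4)
beta = F @ Phi                  # projected disparity of each pixel in the row
```

Because the clamped basis forms a partition of unity, each row of F sums to 1, which keeps the fitted disparity inside the convex hull of the coefficients.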
2.3) The row coordinates w(FΦ, P) of step 2.2) are cast as the maximization of the objective function:
$$E(\Phi) = c\left(S,\; I'(w(F\Phi, P))\right) - \lambda_1 r_1(\Phi) - \lambda_2 r_2(\Phi)$$
where λ_1 and λ_2 are the first and second weight factors, with λ_1, λ_2 > 0; c is the cross-correlation coefficient; r_1 measures the smoothness of the current row and r_2 the closeness of the current row to the previous row; I'(w(FΦ, P)) is the set of pixel values of image I' at the row coordinates w(FΦ, P); and S is the set of pixel values of the current row of image I corresponding to the row coordinates w(FΦ, P);
The smoothness r_1 and closeness r_2 are computed as:
$$r_1(\Phi) = \frac{1}{2}\sum_{i=1}^{N}\left(\frac{\partial \beta_i}{\partial u}\right)^2 e^{-\lambda_3 \eta_i \left|\nabla I_i\right|}$$
$$r_2(\Phi) = \frac{1}{2}\sum_{i=1}^{N}\left(\beta_i - \beta_i^-\right)^2$$
where e^{−λ_3 η_i |∇I_i|} is a weight term that accounts for discontinuities at image edges, λ_3 is a scale factor, |∇I_i| is the image gradient magnitude of image I at point p_i, and η_i is the probability derived from the road-detection result of the previous row; in r_2(Φ), β_i^− is the disparity value of the previous row;
2.4) The objective function of step 2.3) is optimized iteratively with the Levenberg–Marquardt method until the computation converges, giving the optimal projected disparity β_i at each point p_i of every row;
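Step 2.4) can be illustrated with a toy least-squares recast: Levenberg–Marquardt minimizes stacked residuals, so the matching term and the two regularizers are written as residual vectors over the spline coefficients Φ. Everything here is a stand-in under stated assumptions: a polynomial basis replaces the B-spline, a synthetic "true" disparity row replaces the photometric term, and the previous-row disparity is taken as known.

```python
import numpy as np
from scipy.optimize import least_squares

N, m = 64, 6
u = np.linspace(0.0, 1.0, N)
F = np.column_stack([u ** j for j in range(m)])   # polynomial stand-in basis
beta_true = 0.5 + 0.3 * u                         # synthetic target disparity row
beta_prev = beta_true                             # previous-row disparity (assumed known)
lam1, lam2 = 1e-3, 1e-3                           # weights of r1 and r2

def residuals(phi):
    beta = F @ phi
    r_match = beta - beta_true                    # stand-in for the photometric term
    r_smooth = np.sqrt(lam1) * np.diff(beta)      # discrete d(beta)/du, as in r1
    r_prev = np.sqrt(lam2) * (beta - beta_prev)   # closeness to previous row, as in r2
    return np.concatenate([r_match, r_smooth, r_prev])

sol = least_squares(residuals, x0=np.zeros(m), method="lm")
beta_hat = F @ sol.x
```

Because the residuals are linear in Φ here, Levenberg–Marquardt converges to the exact regularized least-squares solution in a few iterations; the real objective of the patent is nonlinear through I'(w(FΦ, P)), but the residual structure is the same.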
2.5) Finally, the distance from point O_i to the reference plane, used as the height information, is computed as:
$$D_i = z_i \beta_i$$
where D_i is the distance from point O_i to the reference plane and z_i is the coordinate of O_i along the first camera's optical axis.
5. The branch-recursive road reconstruction algorithm based on two-view geometry according to claim 4, characterized in that in step 2.3) the cross-correlation coefficient c is computed as:
$$c(S, S') = \frac{\sum_{i=1}^{N}\delta_i \left(s_i - \bar{s}\right)\left(s'_i - \bar{s}'\right)}{\sqrt{\sum_{i=1}^{N}\delta_i \left(s_i - \bar{s}\right)^2}\,\sqrt{\sum_{i=1}^{N}\delta_i \left(s'_i - \bar{s}'\right)^2}}$$
where S = [s_1 … s_N]^T is the set of pixel values of a row of image points in image I, and S' = [s'_1 … s'_N]^T is the set of pixel values of the current row in image I' at the row coordinates w(FΦ, P), computed as I'(w(FΦ, P)) = [I'(w(β_1, p_1)) … I'(w(β_N, p_N))]^T, i.e. the pixel values of image I' at the row positions w(β_i, p_i); δ_i is the weight factor of the i-th image point: δ_i = 0 if p_i falls outside the image boundary, and δ_i = 1 otherwise.
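The weighted cross-correlation of claim 5 is straightforward to implement. One assumption is made explicit below: the claim does not say how the means s̄ and s̄' are formed, so the sketch takes them over the valid (δ_i = 1) points only.

```python
import numpy as np

def weighted_ncc(S, S_prime, delta):
    """Weighted normalized cross-correlation c(S, S') of claim 5.
    delta zeroes out points that project outside the image boundary.
    Assumption: means are taken over the valid points only."""
    w = delta.astype(float)
    s_bar = np.sum(w * S) / np.sum(w)
    sp_bar = np.sum(w * S_prime) / np.sum(w)
    num = np.sum(w * (S - s_bar) * (S_prime - sp_bar))
    den = (np.sqrt(np.sum(w * (S - s_bar) ** 2))
           * np.sqrt(np.sum(w * (S_prime - sp_bar) ** 2)))
    return num / den

# A row perfectly matched to itself has correlation 1.
row = np.array([10.0, 20.0, 30.0, 40.0])
c = weighted_ncc(row, row, np.ones(4, dtype=bool))
```

Since c is bounded in [−1, 1], the objective E(Φ) of step 2.3) rewards Φ that makes the warped row of I' photometrically consistent with the current row of I.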
6. The branch-recursive road reconstruction algorithm based on two-view geometry according to claim 1, characterized in that step 3) is specifically:
3.1) Using the distance D_i from point O_i to the reference plane obtained as height information in step 2), together with the pixel values of the image row, the edge strengths of each pixel along the horizontal and vertical image directions are computed as:
$$C_u = \nabla_u D \cdot \nabla_u S, \qquad C_v = \nabla_v D \cdot \nabla_v S$$
where ∇_u and ∇_v denote the partial derivatives along the u and v directions; ∇_u D and ∇_v D are the partial derivatives of the distance D from point O_i to the reference plane along u and v; ∇_u S and ∇_v S are the partial derivatives along u and v of the pixel value at the pixel of image I corresponding to point O_i; C_u and C_v are the edge strengths of that pixel along the horizontal and vertical image directions;
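Step 3.1) combines geometric and photometric discontinuities in one product. A minimal sketch using finite differences for the partial derivatives (the patent does not prescribe a particular derivative operator):

```python
import numpy as np

def edge_strengths(D, S):
    """C_u, C_v of step 3.1): elementwise products of the horizontal and
    vertical partial derivatives of the height map D and the image S.
    u is the column (horizontal) direction, v the row (vertical) direction."""
    dDu, dDv = np.gradient(D, axis=1), np.gradient(D, axis=0)
    dSu, dSv = np.gradient(S, axis=1), np.gradient(S, axis=0)
    return dDu * dSu, dDv * dSv

# Synthetic example: height and intensity both ramp along u, are flat along v,
# so the horizontal edge strength is 1 everywhere and the vertical one is 0.
D = np.outer(np.ones(4), np.arange(5.0))
S = D.copy()
Cu, Cv = edge_strengths(D, S)
```

Because C_u and C_v multiply a height gradient by an image gradient, a point only scores as an edge where the surface geometry and the appearance change together, which suppresses flat road texture.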
3.2) The center point of the bottom row of image I is initially assumed to be a road point, a road point being a pixel located in the road area; the neighboring points are then processed in a specific manner so that each pixel is judged to be either a road point or a non-road point.
7. The branch-recursive road reconstruction algorithm based on two-view geometry according to claim 6, characterized in that step 3.2) is specifically:
3.2.1) First, in the bottom row of image I, starting from the center point of that row, the decision diffuses toward both sides, judging each pixel in turn as follows:
For each critical point, defined as a road point adjacent to a non-road point or to an undecided point: if the edge strength of the critical point satisfies C_u < τ_u, where τ_u is the threshold for horizontal diffusion, the points adjacent to the critical point on both sides are judged road points; otherwise they are judged non-road points;
3.2.2) Then, moving upward from the bottom row of image I, each row is judged in turn as follows:
For each pixel of the current row, if the point at the same column position in the previously judged row is a road point and the edge strength of the current pixel satisfies C_v < τ_v, where τ_v is the threshold for vertical diffusion, the pixel of the current row is judged a road point; otherwise it is left unprocessed and judged again using step 3.2.1);
3.2.3) The above steps are repeated until every pixel of image I has been judged.
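Steps 3.2.1)–3.2.3) describe a seeded region growing. The sketch below is a simplified reading of the claim: it labels only road points (leaving non-road points implicit as unlabelled), seeds at the bottom-row center, diffuses sideways while C_u stays below τ_u, then climbs row by row while C_v stays below τ_v, re-running the sideways pass on each newly labelled row.

```python
import numpy as np

def grow_road(Cu, Cv, tau_u, tau_v):
    """Sketch of the branch-recursive diffusion of steps 3.2.1)-3.2.3).
    Cu, Cv: horizontal/vertical edge strengths (rows x cols).
    Returns a boolean road mask; unlabelled pixels are treated as non-road."""
    H, W = Cu.shape
    road = np.zeros((H, W), dtype=bool)

    def spread_row(r, seeds):
        # Lateral diffusion: from each seed column, walk outward while the
        # horizontal edge strength stays below the threshold.
        for c0 in seeds:
            road[r, c0] = True
            for step in (-1, 1):
                c = c0
                while 0 <= c + step < W and Cu[r, c] < tau_u:
                    c += step
                    road[r, c] = True

    spread_row(H - 1, [W // 2])            # bottom row, center seed
    for r in range(H - 2, -1, -1):         # move upward row by row
        seeds = [c for c in range(W)
                 if road[r + 1, c] and Cv[r, c] < tau_v]
        if seeds:
            spread_row(r, seeds)
    return road

# With uniformly weak edges everywhere, the whole image is labelled road.
mask = grow_road(np.zeros((3, 5)), np.zeros((3, 5)), tau_u=0.1, tau_v=0.1)
```

The vertical pass inherits each row's seeds from the row below it, which is what makes the recursion "branch": every accepted road pixel can spawn both an upward and a lateral continuation.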
8. The branch-recursive road reconstruction algorithm based on two-view geometry according to claim 6, characterized in that in step 3.2), when rows other than the bottom row of the image are judged in real time, a region of interest Ω is set from the road edge point set H of the previous row's detection result, and the current row is then judged only within Ω:
Ω=[max (min (H)-μ, 1), min (max (H)+μ, N)]
where μ is a relaxation factor.
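The interval of claim 8 is a one-line computation; the edge columns and relaxation factor below are hypothetical values for illustration only.

```python
def roi(H, mu, N):
    """Region of interest of claim 8: the previous row's road edge columns H,
    widened by the relaxation factor mu and clipped to the valid range [1, N]."""
    return max(min(H) - mu, 1), min(max(H) + mu, N)

# Hypothetical previous-row road edges at columns 120 and 410, mu = 15,
# image width N = 640.
lo, hi = roi([120, 410], mu=15, N=640)
```

Restricting the search of the current row to [lo, hi] keeps the per-row cost proportional to the road width rather than the image width.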
CN201710419548.3A 2017-06-06 2017-06-06 A kind of branch's recursion road restructing algorithm based on two view geometries Pending CN107203759A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710419548.3A CN107203759A (en) 2017-06-06 2017-06-06 A kind of branch's recursion road restructing algorithm based on two view geometries


Publications (1)

Publication Number Publication Date
CN107203759A true CN107203759A (en) 2017-09-26

Family

ID=59907262


Country Status (1)

Country Link
CN (1) CN107203759A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109636917A (en) * 2018-11-02 2019-04-16 北京微播视界科技有限公司 Method, apparatus and hardware device for generating a three-dimensional model
CN109711352A (en) * 2018-12-28 2019-05-03 中国地质大学(武汉) Perspective perception method for the road environment ahead of a vehicle based on geometric convolutional neural networks
CN110211172A (en) * 2018-02-28 2019-09-06 2236008安大略有限公司 Rapid plane discrimination in stereo images
CN111507339A (en) * 2020-04-16 2020-08-07 北京深测科技有限公司 Method for obtaining a target point cloud based on an intensity image
CN111507339B (en) * 2020-04-16 2023-07-18 北京深测科技有限公司 Method for obtaining a target point cloud based on an intensity image

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103712602A (en) * 2013-12-09 2014-04-09 广西科技大学 Binocular vision based method for automatic detection of road obstacle
CN106446785A (en) * 2016-08-30 2017-02-22 电子科技大学 Passable road detection method based on binocular vision


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
BINGXI JIA et al.: "Drivable Road Reconstruction for Intelligent Vehicles Based on Two-View Geometry", IEEE Transactions on Industrial Electronics *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170926