CN105809687A - Monocular vision ranging method based on edge point information in image - Google Patents

Monocular vision ranging method based on edge point information in image

Info

Publication number
CN105809687A
CN105809687A
Authority
CN
China
Prior art keywords
depth
frame
map
image
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610131438.2A
Other languages
Chinese (zh)
Other versions
CN105809687B (en)
Inventor
程农
杨盛
李清
田振
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN201610131438.2A priority Critical patent/CN105809687B/en
Publication of CN105809687A publication Critical patent/CN105809687A/en
Application granted granted Critical
Publication of CN105809687B publication Critical patent/CN105809687B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20Instruments for performing navigational calculations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30181Earth observation

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a monocular vision ranging method based on edge point information in an image and belongs to the technical field of unmanned aerial vehicle navigation and positioning. The method includes the following steps: two frames are selected from the image sequence captured by a downward-looking monocular camera fixedly connected to the unmanned aerial vehicle and are used to construct an initial map and an initial depth map; the first frame is adopted as the first key frame in the map, the camera coordinate system corresponding to the first frame is adopted as the world coordinate system, and initialization is thereby completed. Three threads, namely a motion estimation thread, a map construction thread and a depth map estimation thread, are then executed in parallel: the motion estimation thread aligns the known map and depth map information with the current frame to obtain a ranging result and optimizes the existing map information according to that result, while the map construction thread and the depth map estimation thread run simultaneously to maintain the map and depth map information. The method makes full use of the multi-core architecture of modern processors and can effectively exploit the edge point information in the image in addition to corner point information; the efficiency of the algorithm is improved and the method has higher adaptability.

Description

Monocular vision ranging method based on edge point information in an image
Technical field
The invention belongs to the technical field of unmanned aerial vehicle (UAV) navigation and positioning, and in particular relates to a monocular vision ranging method that makes use of edge point information in images.
Background technology
In many application scenarios GPS cannot be relied on for accurate positioning, for example indoors, between city buildings, in jungles, in mountain valleys, or even on other celestial bodies. Using vision for navigation is very intuitive: in nature, all animals with eyes use vision for some form of navigation; insects such as flies and bees judge the relative motion between themselves and a target through optical flow, and most of the time people judge where they are by the scenery they see.
When GPS is not used, most small UAVs, for reasons of cost and weight, adopt a navigation scheme that combines an inertial measurement unit with a single camera, possibly supplemented by an ultrasonic sensor and a barometric sensor to obtain absolute altitude information. A UAV with this configuration can achieve fully autonomous waypoint tracking.
For a carrier equipped with one or more cameras, the process of estimating its displacement using only the image input is called visual odometry (VO). Applications include robotics, augmented reality, autonomous driving and so on. The process is analogous to traditional wheel odometry: wheel odometry accumulates mileage from wheel rotations, while visual odometry incrementally estimates the carrier pose by perceiving changes in the input images. For a visual odometry algorithm to work effectively, the environment must be sufficiently illuminated and the scene texture sufficiently rich.
Monocular visual odometry uses only a single camera as input; its system configuration is simple, and its ability to adapt to changes in environmental scale is better than that of multi-camera systems. Existing monocular visual odometry methods generally match corner features in the image from frame to frame and cannot adapt to scenes lacking corner features (corner features are important image features, representing pixels that stand out in some attribute; in the scene of Fig. 1, corner features mostly occur at the intersections of straight lines, are few in number and repetitive in structure, so existing corner-based monocular visual odometry algorithms are likely to fail). Moreover, these corner features account for only a small fraction of all image pixels, so in terms of image-information utilization the performance of monocular visual odometry can still be improved.
The terms involved in the present invention are explained as follows:
Frame: in the visual odometry field, a single acquired image is customarily called a frame; for example, the image acquired by the camera at the previous moment is called the previous frame, the image acquired at the current moment is called the current frame, and two consecutive images acquired by the camera are called adjacent frames, and so on;
Key frame: since current camera frame rates are high, the pose change between adjacent frames is usually small. To improve estimation accuracy, a key-frame strategy is generally adopted: within a certain range of pose change, a newly acquired image is aligned only with one specific frame to estimate the current pose, and only when that range is exceeded is a new specific frame chosen for the next stage of image alignment. These specific frames used for image alignment are called key frames;
Reference frame: the frame used to align the current image is called the reference frame of the current image;
Map: in the visual odometry field, the stored known environmental information (such as the positions of computed points and the acquired images) is called the map. The map can serve as prior information for subsequent image matching and motion estimation, increasing the accuracy of the odometry.
Summary of the invention
Aiming at the deficiencies of the prior art, the present invention proposes a monocular vision ranging method based on edge point information in the image, which can effectively exploit edge point information in combination with corner information (as an auxiliary cue) and therefore has better adaptability to scenes lacking corner features.
The UAV monocular vision ranging method based on edge point information in the image proposed by the present invention is characterised in that the method comprises initialization followed by parallel motion estimation, map construction and depth map estimation; it specifically comprises the following steps:
1) Initialization: two frames are selected from the image sequence captured by the downward-looking monocular camera rigidly attached to the UAV to construct the initial map and the initial depth map. The map is represented by a set of key frames {r_m} and a set of three-dimensional feature points {p_i}, where r denotes a key frame, the subscript m denotes the m-th key frame (m a positive integer), p denotes a three-dimensional feature point and the subscript i denotes the i-th feature point (i a positive integer). The three-dimensional feature points correspond to corner features in the image, and the set of their corresponding two-dimensional corner points is written {u_c^i}, where u_c denotes a corner point and the subscript i the i-th corner point (i a positive integer). A depth map consists of edge point coordinates, edge point depth values and the uncertainties of those depth values; depth maps correspond one-to-one with key frames, and a depth map is written D_m = {(u_e^i, d_e^i, σ_e^i)}, where D denotes the depth map associated with the edge points, the subscript m denotes the m-th depth map (a positive integer), u_e denotes an edge point, d_e its depth value and σ_e the uncertainty of its inverse depth, with the subscript i denoting the i-th edge point (a positive integer);
The first frame is taken as the first key frame in the map, and the camera coordinate system corresponding to the first frame is taken as the world coordinate system; initialization is then complete;
2) After the map and depth map information have been initialized, three threads, namely motion estimation, map construction and depth map estimation, run in parallel: the motion estimation thread aligns the known map and depth map information with the current frame to obtain the odometry result and optimizes the existing map information according to that result, while the map construction and depth map estimation threads run simultaneously to maintain the map and depth map information.
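To make the parallel structure concrete, the following minimal Python sketch shows one possible way to organize the three threads around shared map and depth map state; the names (frame_queue, shared, state_lock and the placeholder thread bodies) are illustrative and not part of the claimed method.

    import queue
    import threading
    import time

    frame_queue = queue.Queue()                      # frames from the camera driver
    state_lock = threading.Lock()                    # protects the shared map / depth maps
    shared = {"keyframes": [], "points3d": [], "depth_maps": {}}

    def motion_estimation():
        # Aligns each new frame against the shared map / depth map (placeholder body).
        while True:
            frame = frame_queue.get()
            with state_lock:
                n_kf = len(shared["keyframes"])
            print(f"aligned frame {frame} against {n_kf} keyframes")

    def map_construction():
        # Maintains key frames and 3D corner features (placeholder body).
        while True:
            time.sleep(0.05)

    def depth_map_estimation():
        # Maintains edge-point depth filters per key frame (placeholder body).
        while True:
            time.sleep(0.05)

    for fn in (motion_estimation, map_construction, depth_map_estimation):
        threading.Thread(target=fn, daemon=True).start()

    for k in range(3):                               # stand-in for the camera callback
        frame_queue.put(k)
    time.sleep(0.2)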
Features and beneficial effects of the present invention:
The monocular vision ranging method based on edge point information proposed by the present invention takes the images captured by the downward-looking monocular camera rigidly attached to the UAV as input, performs the computation on the onboard computer, and outputs the odometry result of the UAV. The method is realized by three parallel threads: the motion estimation thread, the map construction thread and the depth map estimation thread. The motion estimation thread uses the existing information in the map and depth map to align with the current frame and estimate the current pose; it is the core of the odometry algorithm and must guarantee hard real-time operation. The map construction thread maintains the key frame information and the three-dimensional feature map corresponding to the image corner features. The depth map estimation thread maintains the depth estimates of the edge point information in the key frames.
The present invention runs motion estimation, map construction and depth map estimation in three separate threads, making full use of the multi-core architecture of modern processors and improving the efficiency of the algorithm. There is no feature extraction and matching step in the motion estimation algorithm; a fast feature point extraction algorithm is used during map construction, and depth map estimation directly extracts pixels with large intensity gradients, all of which effectively reduces the amount of computation. At the same time, the initial estimation stage of the motion estimation algorithm makes use of edge point information in the image, and a corresponding depth map propagation method for edge points is designed. Edge points are far more abundant image features than corner points; using edge point information in the initial estimation reduces the algorithm's dependence on corner features and strengthens its adaptability to the environment.
Brief description of the drawings
Fig. 1 shows a scene in which monocular vision ranging methods based on corner features are likely to fail;
Fig. 2 is the overall flow chart of the improved edge-point-based monocular vision ranging method proposed by the present invention;
Fig. 3 is the flow chart of the motion estimation thread of the embodiment of the present invention;
Fig. 4 is the flow chart of the map construction thread of the embodiment of the present invention;
Fig. 5 is the flow chart of the depth map estimation thread of the embodiment of the present invention.
Detailed description of the invention
The present invention is described in detail below in conjunction with the drawings and embodiments.
The overall flow of the UAV monocular vision ranging method based on edge point information proposed by the present invention is shown in Fig. 2. The method comprises initialization followed by parallel motion estimation, map construction and depth map estimation, and specifically includes the following steps:
1) Initialization: two frames are selected from the image sequence captured by the downward-looking monocular camera rigidly attached to the UAV to construct the initial map and the initial depth map. In this embodiment the map is represented by a set of key frames {r_m} and a set of three-dimensional feature points {p_i}, where r denotes a key frame, the subscript m denotes the m-th key frame (a positive integer), p denotes a three-dimensional feature point and the subscript i the i-th feature point (a positive integer). The three-dimensional feature points correspond to corner features in the image, and the set of their corresponding two-dimensional corner points is written {u_c^i}, where u_c denotes a corner point and the subscript i the i-th corner point (a positive integer);
In this embodiment, the corresponding depth map is computed for each key frame in the map. A depth map consists of edge point coordinates, edge point depth values and the uncertainties of those depth values, and is written D_m = {(u_e^i, d_e^i, σ_e^i)}, where D denotes the depth map associated with the edge points, the subscript m denotes the m-th depth map (a positive integer; depth maps correspond one-to-one with key frames), u_e denotes an edge point, d_e its depth value and σ_e the uncertainty of its inverse depth, with the subscript i denoting the i-th edge point (a positive integer).
The first frame is taken as the first key frame in the map, and the camera coordinate system corresponding to the first frame is taken as the world coordinate system (a world coordinate system is any coordinate system with a known transformation to a real-world frame such as the local east-north-down frame; this embodiment chooses the camera coordinate system of the first frame as the world coordinate system for computational convenience); initialization is then complete;
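For illustration only, the map and depth map structures described above could be laid out as simple containers; the field names in the following Python sketch are our own shorthand for the symbols r_m, p_i, u_e, d_e and σ_e and are not prescribed by the patent.

    from dataclasses import dataclass, field
    from typing import Dict, List
    import numpy as np

    @dataclass
    class DepthFilter:              # per-point probabilistic depth filter (Beta x Gaussian)
        a: float = 10.0             # effective-observation count
        b: float = 10.0             # ineffective-observation count
        mu: float = 0.0             # mean of the inverse depth
        sigma2: float = 0.0         # variance of the inverse depth

    @dataclass
    class KeyFrame:                 # r_m: image plus its pose
        image: np.ndarray
        T_m_w: np.ndarray           # 4x4 transform taking world coordinates into this key frame

    @dataclass
    class DepthMap:                 # D_m = {(u_e, d_e, sigma_e)} for one key frame
        edge_px: np.ndarray         # Nx2 edge point coordinates u_e
        depth: np.ndarray           # N   edge point depths d_e
        sigma: np.ndarray           # N   inverse-depth uncertainties sigma_e

    @dataclass
    class Map:                      # {r_m}, {p_i} and the per-key-frame depth maps
        keyframes: List[KeyFrame] = field(default_factory=list)
        points3d: List[np.ndarray] = field(default_factory=list)
        depth_maps: Dict[int, DepthMap] = field(default_factory=dict)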
2) After the map and depth map information have been initialized, the motion estimation, map construction and depth map estimation threads run in parallel: the motion estimation thread uses the known map and depth map information to align with the current frame and obtain the odometry result, and optimizes the existing map information according to that result, while the map construction and depth map estimation threads run simultaneously to maintain the map and depth map information. The three threads are described in detail below:
21) Motion estimation thread: for each new frame acquired by the camera, the edge point depth information of the current key frame is used to perform a motion-model-based image alignment, yielding an initial estimate T_{k,w}^- of the current frame pose T_{k,w} in the world coordinate system, where T denotes a pose, the subscript k indicates that the current frame is the k-th image acquired by the camera (a positive integer), the subscript w denotes the world coordinate system, and the superscript '-' denotes an initial estimate. Using this initial estimate, the existing three-dimensional feature points {p_i} in the map can be projected into the current image, and the two-dimensional corner points {u_c^i} corresponding to {p_i} are obtained by block matching. With {u_c^i} as measurements and T_{k,w}^- as the initial value, the current frame pose T_{k,w} in the world coordinate system and the relevant three-dimensional feature point positions in the map are optimized in turn. A specific embodiment of this motion estimation is shown in Fig. 3 and comprises:
S11: Denote the current frame image by I_k, where I denotes an image acquired by the camera. The current UAV pose is first initialized to the pose of the previous moment, i.e. T_{k,w}^- = T_{k-1,w}. With the current key frame r_m as the reference frame, the weighted intensity residual between I_k and r_m is minimized with the Gauss-Newton iterative algorithm, as in formula (1-1):

T_{k,m} = arg min_{T_{k,m}} (1/2) Σ_{u_i ∈ E} w_i(u_i) ‖ δR(T_{k,m}, u_i) ‖²    (1-1)

This yields the pose T_{k,m} of the current frame I_k relative to the reference frame r_m. Here E denotes the set of edge points of the reference frame r_m whose inverse-depth uncertainty is below the set uncertainty threshold (1/200 of the maximum estimated inverse depth) and which are visible in the current frame; u_i denotes a pixel in the image, where u denotes a pixel and the subscript i the i-th pixel (a positive integer); w_i(u_i) is the weight of pixel u_i, determined from σ_i, the depth uncertainty of the i-th pixel; and the intensity residual δR(T_{k,m}, u_i) of pixel u_i is given by formula (1-2):

δR(T_{k,m}, u_i) = R_{I_k}( π( T_{k,m} · π^{-1}(u_i, d_i) ) ) − R_{r_m}(u_i)    (1-2)

where π is the projection function that projects a three-dimensional point in the camera coordinate system into the two-dimensional image coordinate system, π^{-1} is the inverse projection function, d_i is the depth of pixel u_i in the reference frame, and R_{I_k}(·) and R_{r_m}(·) denote the gray value of a pixel in frame I_k and in frame r_m respectively. Combining this with the pose T_{m,w} of the reference frame r_m relative to the world coordinate system, the initial estimate T_{k,w}^- of the current frame pose is computed as in formula (1-3):

T_{k,w}^- = T_{k,m} · T_{m,w}    (1-3)

If the number of edge points participating in the weighting is below the set edge point threshold (taken as 300 in this embodiment), go to step S12; otherwise proceed directly to step S13;
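As a concrete reading of formulas (1-1) and (1-2), the sketch below evaluates the weighted intensity residual for one candidate pose T_{k,m} with NumPy, assuming a pinhole camera model; the weight 1/σ used here is an assumption of the sketch (the text only states that the weight is derived from the depth uncertainty), and a full implementation would wrap this cost in a Gauss-Newton loop over the pose parameters.

    import numpy as np

    def project(K, X):
        # Pinhole projection pi(): 3D point in camera coordinates -> pixel coordinates.
        x = K @ X
        return x[:2] / x[2]

    def backproject(K, u, d):
        # Inverse projection pi^-1(): pixel u with depth d -> 3D point in camera coordinates.
        return d * np.linalg.inv(K) @ np.array([u[0], u[1], 1.0])

    def weighted_residual(T_km, edge_px, edge_depth, edge_sigma, img_ref, img_cur, K):
        # Sum of weighted squared intensity residuals of formula (1-1) for one pose T_km (4x4).
        total = 0.0
        for u, d, sigma in zip(edge_px, edge_depth, edge_sigma):
            X_ref = backproject(K, u, d)                       # edge point in the reference frame
            X_cur = (T_km @ np.append(X_ref, 1.0))[:3]         # transformed into the current frame
            u_cur = project(K, X_cur)
            ui, vi = int(round(u_cur[0])), int(round(u_cur[1]))
            if 0 <= vi < img_cur.shape[0] and 0 <= ui < img_cur.shape[1]:
                # delta R of formula (1-2): intensity difference between the warped and reference pixel.
                r = float(img_cur[vi, ui]) - float(img_ref[int(u[1]), int(u[0])])
                total += (1.0 / sigma) * r * r                 # weight assumed to be 1/sigma
        return 0.5 * total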
S12: Using the pose estimate obtained in step S11 as the initial value, the previous frame I_{k-1} is used as the reference frame of the current frame for image alignment, and the gray residual between I_k and I_{k-1} is minimized by the Gauss-Newton iterative optimization algorithm to obtain the pose T_{k,k-1} of the current frame relative to the previous frame, as in formula (1-4):

T_{k,k-1} = arg min_{T_{k,k-1}} (1/2) Σ_{u_i ∈ C} ‖ δR(T_{k,k-1}, u_i) ‖²    (1-4)

where C denotes the set of corner points whose depth is known in frame k-1 and which are visible in the current frame (obtained by projecting the three-dimensional feature points in the map). Combining this with the pose T_{k-1,w} of the previous frame relative to the world coordinate system, the initial estimate T_{k,w}^- of the current frame pose is computed as in formula (1-5):

T_{k,w}^- = T_{k,k-1} · T_{k-1,w}    (1-5)

At this point the current frame I_k is selected as the new key frame and serves as the reference frame for the following stretch of images; it is also passed as input to the map construction and depth map estimation threads;
S13: (Using the initial pose estimate T_{k,w}^-, the three-dimensional feature points in the map can be projected into the current frame, giving a set of two-dimensional coordinates; however, because T_{k,w}^- contains error, these coordinates are not necessarily the true imaging positions of the feature points in the current frame.) According to the initial estimate T_{k,w}^-, determine the set {p_i} of three-dimensional feature points in the map that can be observed by the current frame, and obtain the corresponding corner coordinate estimates u_c^{i'} in the reference frame r_m. With u_c^{i'} as initial values, the true imaging positions u_c^i of {p_i} in the current frame are computed with the image tracking algorithm KLT (Kanade-Lucas-Tomasi Feature Tracker, a commonly used public algorithm), as in formula (1-6):

(u_c^i, δR̄) = arg min_{(u_c^i, δR̄)} (1/2) ‖ R_{I_k}(u_c^i) − A_i · R_{I_m}(u_c^{i'}) + δR̄ ‖²,  ∀i    (1-6)

where A_i denotes the transformation matrix that warps the reference patch into the current image, and δR̄ denotes the mean gray-level difference between the reference patch and the matched patch in the current image, used to eliminate illumination effects (this step establishes sub-pixel-accurate correspondences between the three-dimensional feature points {p_i} in the map and the two-dimensional corner points {u_c^i} in the image);
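A feature alignment step of this kind can be prototyped with the pyramidal Lucas-Kanade tracker in OpenCV; the sketch below refines the projected corner positions to sub-pixel accuracy and is only an approximation of formula (1-6), since OpenCV's tracker does not estimate the mean intensity offset δR̄.

    import numpy as np
    import cv2

    def refine_corners_klt(ref_img, cur_img, ref_px, predicted_px):
        # ref_px: Nx2 corner positions u_c' in the reference key frame (grayscale images assumed).
        # predicted_px: Nx2 positions predicted in the current frame from the initial pose.
        p_ref = ref_px.reshape(-1, 1, 2).astype(np.float32)
        p_init = predicted_px.reshape(-1, 1, 2).astype(np.float32)
        p_cur, status, err = cv2.calcOpticalFlowPyrLK(
            ref_img, cur_img, p_ref, p_init,
            winSize=(21, 21), maxLevel=3,
            flags=cv2.OPTFLOW_USE_INITIAL_FLOW,
            criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
        good = status.ravel() == 1
        return p_cur.reshape(-1, 2), good          # refined sub-pixel positions + validity mask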
S14: After step S13, the two-dimensional coordinates u_c^i no longer satisfy the geometric projection constraint with the initial pose estimate T_{k,w}^- of the current frame, i.e. formula (1-7):

‖ δu_c^i ‖ = ‖ u_c^i − π( T_{k,w}^- · p_i ) ‖ ≠ 0    (1-7)

With T_{k,w}^- as the initial value, T_{k,w} is updated by minimizing the reprojection error; the resulting current frame pose T_{k,w} is the final result of the motion estimation, as in formula (1-8):

T_{k,w} = arg min_{T_{k,w}} (1/2) Σ_i ‖ u_c^i − π( T_{k,w} · p_i ) ‖²    (1-8)

T_{k,w} is the final result of the odometry. Then go to step S15; at the same time, the current frame pose T_{k,w} obtained in step S14 is passed to the map construction thread and the depth map estimation thread to compute corner depth values and edge point depth values respectively;
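Formula (1-8) is a motion-only refinement over the reprojection error; the sketch below minimizes it with SciPy's least_squares over a 6-vector pose (axis-angle rotation plus translation), which is a parameterization chosen for the sketch rather than one specified by the patent.

    import numpy as np
    import cv2
    from scipy.optimize import least_squares

    def reprojection_residuals(pose6, pts3d_w, obs_px, K):
        # pose6 = (rvec, tvec): world -> current-camera transform T_{k,w}.
        rvec, tvec = pose6[:3], pose6[3:]
        proj, _ = cv2.projectPoints(pts3d_w.astype(np.float64), rvec, tvec, K, np.zeros(5))
        return (proj.reshape(-1, 2) - obs_px).ravel()          # u_c^i - pi(T_{k,w} p_i)

    def refine_pose(pose6_init, pts3d_w, obs_px, K):
        # Motion-only refinement of formula (1-8), starting from the initial pose estimate.
        res = least_squares(reprojection_residuals, pose6_init,
                            args=(pts3d_w, obs_px, K), method="lm")
        return res.x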
S15: Using the current frame pose T_{k,w} obtained in step S14, each three-dimensional feature point in the map that can be observed by the current frame is optimized separately, so that the sum of squared errors between its projections in all key frames that observe it and its true imaging positions is minimized, as in formula (1-9):

p_i = arg min_{p_i} (1/2) Σ_j ‖ u_c^{i,j} − π( T_{j,w} · p_i ) ‖²,  ∀i    (1-9)

where the subscript j denotes the j-th key frame that can observe the three-dimensional feature point p_i;
22) Map construction thread, handled in two cases: if the newly acquired image was selected as a new key frame in step S12 of the motion estimation thread, step S21 is executed: corner features are extracted from the newly added key frame and a probabilistic depth filter is created for each newly extracted corner (i.e. the depth of a corner is described by a depth value and an uncertainty, and each new observation updates the corner's depth value and uncertainty through the depth filter). If the newly acquired image was not selected as a key frame, steps S22 to S24 are executed using the current frame pose obtained in step S14 of the motion estimation thread: the probabilistic depth filters of the corner features are updated with the information of the new image, and when the depth uncertainty of a corner falls below the set uncertainty threshold, the corresponding three-dimensional feature point is added to the map. The implementation of this thread is shown in Fig. 4 and the concrete steps are described as follows: first judge whether the new image is a key frame; if so, execute step S21, otherwise execute steps S22 to S24;
S21: When a frame is selected as a key frame, the FAST algorithm (Features from Accelerated Segment Test, a commonly used fast corner extraction algorithm) is used to extract the corner features of this key frame. The image is first divided uniformly by an equally spaced grid (for example 30x30 pixels); only one corner is extracted in each grid cell, so that the corners are evenly distributed over the image and their total number does not exceed the number of grid cells. For each new corner u_c^i a probabilistic depth filter is created, and the joint posterior distribution of the inverse of its depth d̃_i (where d̃_i denotes the inverse depth of the corner and the subscript i the i-th corner in the current key frame, a positive integer) and of the probability ρ_i that this corner is effectively observed (where ρ denotes the effective observation probability of the corner and the subscript i the i-th corner in the current key frame, a positive integer) is defined as in formula (2-1):

q(d̃_i, ρ_i | a_n, b_n, μ_n, σ_n²) := Beta(ρ_i | a_n, b_n) · N(d̃_i | μ_n, σ_n²)    (2-1)

where Beta and N denote the Beta distribution and the normal distribution respectively, the subscript n indicates that the current probabilistic depth filter has undergone n parameter updates (a positive integer), the parameters a_n and b_n are respectively the numbers of effective and ineffective observations that have occurred during the recursive update, and μ_n and σ_n² are the mean and variance of the Gaussian distribution of the inverse depth. When the parameters in formula (2-1) are initialized, the initial values of a_n and b_n are set to 10, μ_n is initialized to the inverse of the mean depth of the image, and σ_n² is initialized to its maximum value.
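The grid-bucketed FAST extraction and filter initialization of step S21 might look as follows in OpenCV; keeping the single strongest response per cell is our interpretation of "only one corner per grid", and the large initial σ² simply stands in for the "maximum value" mentioned above.

    import numpy as np
    import cv2

    def gridded_fast(img, cell=30, fast_thresh=20):
        # Detect FAST corners, then keep at most the strongest corner in every cell x cell bucket.
        fast = cv2.FastFeatureDetector_create(threshold=fast_thresh)
        kps = fast.detect(img, None)
        best = {}                                             # (row_cell, col_cell) -> keypoint
        for kp in kps:
            key = (int(kp.pt[1]) // cell, int(kp.pt[0]) // cell)
            if key not in best or kp.response > best[key].response:
                best[key] = kp
        return np.array([k.pt for k in best.values()], dtype=np.float32)  # <= one corner per cell

    def init_depth_filter(mean_scene_depth, max_inv_depth):
        # Initial filter parameters as described above: a = b = 10, mu = 1 / mean depth,
        # sigma^2 set to a large value (the text only says "its maximum value").
        return {"a": 10.0, "b": 10.0,
                "mu": 1.0 / mean_scene_depth,
                "sigma2": max_inv_depth ** 2}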
S22: The image I_k, whose camera pose was obtained in step S14, is used to update the depth estimates of the corner features. Let the corner being updated be u_c^i and let its key frame be r_m. According to the relative pose of the current frame I_k and r_m, a straight line is determined in I_k along which the pixel in I_k corresponding to u_c^i must lie. This line is searched (a process commonly called epipolar search); after the coordinates of the corresponding pixel have been obtained by sub-pixel-accurate block matching, the depth of the corner u_c^i is computed by triangulation (the measurement principle of stereo vision);
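Once the matching pixel has been found on the epipolar line, the corner depth can be recovered by linear triangulation; the sketch below uses cv2.triangulatePoints with the two camera projection matrices and illustrates only the triangulation step, not the block-matching search itself.

    import numpy as np
    import cv2

    def triangulate_depth(K, T_ref_w, T_cur_w, u_ref, u_cur):
        # Projection matrices for the reference key frame and the current frame (world -> image).
        P_ref = K @ T_ref_w[:3, :]
        P_cur = K @ T_cur_w[:3, :]
        X_h = cv2.triangulatePoints(P_ref, P_cur,
                                    np.array(u_ref, np.float64).reshape(2, 1),
                                    np.array(u_cur, np.float64).reshape(2, 1))
        X_w = (X_h[:3] / X_h[3]).ravel()              # homogeneous -> Euclidean world point
        X_ref = (T_ref_w @ np.append(X_w, 1.0))[:3]   # express the point in the reference key frame
        return X_ref[2]                               # depth of the corner in the reference key frame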
S23: The depth obtained in step S22 is inverted to give d̃_i^n, where the subscript i denotes the i-th corner in the current key frame (a positive integer) and the superscript n indicates that this is the n-th measurement update of the current corner's depth filter (a positive integer). The probability distribution model of the inverse depth is given by formula (2-2):

p(d̃_i^n | d̃_i, ρ_i) = ρ_i · N(d̃_i^n | d̃_i, τ_i²) + (1 − ρ_i) · U(d̃_i^n | d̃_i^min, d̃_i^max)    (2-2)

The meaning of this formula is: when the i-th corner in the key frame r_m is triangulated using the image I_k, if the measurement is effective, the inverse of the measurement d̃_i^n follows the normal distribution N(d̃_i^n | d̃_i, τ_i²), where τ_i² is the inverse-depth variance corresponding to a one-pixel error in the image plane; if the measurement is ineffective, d̃_i^n follows the uniform distribution U(d̃_i^n | d̃_i^min, d̃_i^max), where d̃_i^min and d̃_i^max are the minimum and maximum inverse depths set according to prior information about the current scene. After the measurement update with d̃_i^n, the posterior probability distribution of the inverse depth d̃_i is given by formula (2-3):

C × p(d̃_i^n | d̃_i, ρ_i) · q(d̃_i, ρ_i | a_{n-1}, b_{n-1}, μ_{n-1}, σ_{n-1}²)    (2-3)

where C is the constant that normalizes the probability distribution. The updated parameters a_n, b_n, μ_n and σ_n² are then computed by moment matching (a statistical method that approximates one distribution with another).
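The measurement update of formulas (2-2) and (2-3) admits a closed-form moment-matching recursion for the Gaussian x Beta approximation; the sketch below follows the widely used Vogiatzis-style update and should be read as one possible realization of the step described, with z_range standing in for the support of the uniform outlier distribution.

    import math

    def update_depth_filter(f, x, tau2, z_range):
        # f: dict with keys a, b, mu, sigma2 (the Beta x Gaussian parameters of formula (2-1)).
        # x: new inverse-depth measurement; tau2: its variance; z_range: width of the uniform part.
        norm_var = f["sigma2"] + tau2
        gauss = math.exp(-0.5 * (x - f["mu"]) ** 2 / norm_var) / math.sqrt(2 * math.pi * norm_var)
        s2 = 1.0 / (1.0 / f["sigma2"] + 1.0 / tau2)            # fused Gaussian variance
        m = s2 * (f["mu"] / f["sigma2"] + x / tau2)            # fused Gaussian mean
        c1 = f["a"] / (f["a"] + f["b"]) * gauss                # weight of the inlier hypothesis
        c2 = f["b"] / (f["a"] + f["b"]) / z_range              # weight of the outlier hypothesis
        c1, c2 = c1 / (c1 + c2), c2 / (c1 + c2)
        # First and second moments of rho under the approximate posterior.
        fm = c1 * (f["a"] + 1) / (f["a"] + f["b"] + 1) + c2 * f["a"] / (f["a"] + f["b"] + 1)
        e = (c1 * (f["a"] + 1) * (f["a"] + 2) / ((f["a"] + f["b"] + 1) * (f["a"] + f["b"] + 2))
             + c2 * f["a"] * (f["a"] + 1) / ((f["a"] + f["b"] + 1) * (f["a"] + f["b"] + 2)))
        mu_new = c1 * m + c2 * f["mu"]
        f["sigma2"] = c1 * (s2 + m * m) + c2 * (f["sigma2"] + f["mu"] ** 2) - mu_new ** 2
        f["mu"] = mu_new
        f["a"] = (e - fm) / (fm - e / fm)                      # moment-matched Beta parameters
        f["b"] = f["a"] * (1.0 - fm) / fm
        return f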
S24: Corners whose effective measurement probability is too low (below 0.1) are rejected, and corners whose uncertainty σ_n meets the requirement (σ_n less than 1/200 of the maximum estimated inverse depth of the corner) are added to the map;
23) Depth map estimation thread, handled in two cases: if the newly acquired image was selected as a new key frame in step S12 of the motion estimation thread, edge points are extracted from the newly added key frame and steps S31 to S33 are executed. If the newly input image was not selected as a key frame, steps S34 to S36 are executed using the current frame pose obtained in step S14 of the motion estimation thread: the probabilistic depth filters of the edge points are updated with the new image information, and edge points whose effective observation probability is too low are rejected. The execution steps are shown in Fig. 5 and described in detail below: first judge whether the new image is a key frame; if so, execute steps S31 to S33, otherwise execute steps S34 to S36;
S31: In the newly added key frame, the Sobel operator (a common method for edge point detection) is used to compute the horizontal and vertical intensity gradients G_x and G_y, from which the gradient-magnitude map is computed approximately. Between 2500 and 4000 pixels with the largest gradient magnitude are selected from the image: when the number of pixels with significant gradient is below 2500, the gradient threshold (450) is reduced to 0.95 times its previous value, and when the number exceeds 4000 the threshold is increased to 1.05 times its previous value;
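The gradient-based edge pixel selection of step S31 can be sketched as follows; approximating the gradient magnitude by |G_x| + |G_y| is an assumption of the sketch, since the text only says the magnitude map is approximated from the two Sobel responses.

    import numpy as np
    import cv2

    def select_edge_pixels(img, thresh=450.0, lo=2500, hi=4000):
        gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)         # horizontal gradient G_x
        gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)         # vertical gradient G_y
        mag = np.abs(gx) + np.abs(gy)                          # assumed |Gx| + |Gy| approximation
        ys, xs = np.nonzero(mag > thresh)
        n = len(xs)
        if n < lo:                                             # too few edge pixels: relax the threshold
            thresh *= 0.95
        elif n > hi:                                           # too many: tighten the threshold
            thresh *= 1.05
        return np.stack([xs, ys], axis=1), thresh              # selected pixels and updated threshold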
S32: Let the new key frame be r_m and the previous key frame r_{m-1}. This step propagates the depth map D_{m-1} of frame r_{m-1} to the depth map D_m of the new key frame. In r_{m-1}, the mean of the inverse depth of edge point u_e^i is μ̃_e^i and the variance of the inverse depth is (σ̃_e^i)², where μ̃_e^i denotes the mean inverse depth of the edge point, (σ̃_e^i)² the variance of the inverse depth, and the subscript i the i-th edge point in frame r_{m-1}. The relative pose from r_{m-1} to r_m is T_{m,m-1}, and the position u_e^{i'} of the edge point u_e^i in frame r_m is obtained from the projection relation as in formula (3-1):

u_e^{i'} = π( T_{m,m-1} · π^{-1}( u_e^i, 1/μ̃_e^i ) )    (3-1)

Denote the inverse depth of u_e^{i'} by μ̃_e^{i'} and the variance of its inverse depth by (σ̃_e^{i'})². Then:

μ̃_e^{i'}(μ̃_e^i) = ( (μ̃_e^i)^{-1} − t_z )^{-1}    (3-2)

where t_z denotes the camera translation along the optical axis. From this the variance of the inverse depth of u_e^{i'} can be derived as:

(σ̃_e^{i'})² = J_{μ̃_e^i} · (σ̃_e^i)² · J_{μ̃_e^i}^T + σ_p² = ( μ̃_e^{i'} / μ̃_e^i )⁴ · (σ̃_e^i)² + σ_p²    (3-3)

where σ_p² denotes the propagation uncertainty, set empirically. Because of projection error the coordinates of u_e^{i'} are generally not integers, so u_e^{i'} is snapped to the nearest pixel in r_m;
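The propagation of an edge point's inverse-depth distribution into the new key frame (formulas (3-2) and (3-3)) reduces to a one-line update per point; a minimal sketch, with sigma_p2 the empirically chosen propagation uncertainty σ_p²:

    def propagate_inverse_depth(mu_inv, var_inv, t_z, sigma_p2):
        # Formula (3-2): new inverse-depth mean after a translation t_z along the optical axis.
        mu_inv_new = 1.0 / (1.0 / mu_inv - t_z)
        # Formula (3-3): first-order propagation of the inverse-depth variance plus process noise.
        var_inv_new = (mu_inv_new / mu_inv) ** 4 * var_inv + sigma_p2
        return mu_inv_new, var_inv_new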
S33: A probabilistic depth filter is created for each edge point extracted in step S31. If prior estimates μ̃_e^{i'} exist within a 3x3 pixel range around an edge point, the mean μ_e^i of its inverse depth is initialized as in formula (3-4):

μ_e^i = [ Σ_i (1/(σ̃_e^{i'})²) · μ̃_e^{i'} ] / [ Σ_i 1/(σ̃_e^{i'})² ]    (3-4)

and the inverse-depth uncertainty parameter σ_e^i of the probabilistic depth filter is initialized to the minimum uncertainty among the prior estimates, i.e. the smallest (σ̃_e^{i'})². If no prior estimate exists, the probabilistic depth filter is initialized with the mean depth and the maximum uncertainty. Taking d̃_e^i as the inverse depth of the edge point, a probabilistic depth filter can then be constructed for each edge point according to formula (2-1), as in formula (3-5):

q(d̃_e^i, ρ_i | a_n, b_n, μ_n, σ_n²) := Beta(ρ_i | a_n, b_n) · N(d̃_e^i | μ_n, σ_n²)    (3-5)

where the parameters of formula (3-5) are defined analogously to formula (2-1), here describing the probabilistic depth filter of the corresponding edge point: the initial values of a_n and b_n are set to 10, and μ_n and σ_n are initialized to μ_e^i and σ_e^i respectively;
S34: The new key frame r_m is set as the reference frame for subsequent images, and the subsequent images are used to perform measurement updates on the depth map D_m of frame r_m. When image I_k is used to update D_m, let the edge point being updated be u_e^i. According to the relative pose of I_k and r_m, a straight line is determined in I_k along which the pixel in I_k corresponding to u_e^i must lie. If the angle between the gradient direction at the edge point and the direction of this line is smaller than a threshold (set to 25 degrees in this embodiment), the coordinates of the corresponding pixel are obtained by sub-pixel-accurate block matching, the depth of the edge point is computed by triangulation, and step S35 is executed; if the angle is larger than this threshold, no search is performed and step S34 returns to check the next edge point.
S35: The depth obtained in step S34 is inverted to give d̃_e^{i,n}, where the subscript i denotes the i-th edge point in the current key frame (a positive integer) and the superscript n indicates that this is the n-th measurement update of the current edge point's depth filter (a positive integer). The probability distribution model of the inverse depth is given by formula (3-6):

p(d̃_e^{i,n} | d̃_e^i, ρ_i) = ρ_i · N(d̃_e^{i,n} | d̃_e^i, τ_i²) + (1 − ρ_i) · U(d̃_e^{i,n} | d̃_e^{i,min}, d̃_e^{i,max})    (3-6)

The meaning of this formula is: when the i-th edge point in the key frame r_m is triangulated using the image I_k, if the measurement is effective, the inverse of the measurement d̃_e^{i,n} follows the normal distribution N(d̃_e^{i,n} | d̃_e^i, τ_i²); if the measurement is ineffective, d̃_e^{i,n} follows the uniform distribution U(d̃_e^{i,n} | d̃_e^{i,min}, d̃_e^{i,max}), where d̃_e^{i,min} and d̃_e^{i,max} are the minimum and maximum inverse depths set according to prior information about the current scene. After the measurement update with d̃_e^{i,n}, the posterior probability distribution of the inverse depth d̃_e^i has the same form as formula (2-3):

C × p(d̃_e^{i,n} | d̃_e^i, ρ_i) · q(d̃_e^i, ρ_i | a_{n-1}, b_{n-1}, μ_{n-1}, σ_{n-1}²)

The updated parameters a_n, b_n, μ_n and σ_n² are computed by moment matching.
S36: Edge points whose effective measurement probability is too low (below 0.1) are rejected.
The above are only preferred embodiments of the present invention, intended solely to explain the present invention and not to limit it. The scope of protection of the present invention is not limited thereto; any person of ordinary skill in the art may, without departing from the spirit, principles and scope of the present invention, make various changes, modifications, amendments and substitutions to these embodiments, and all equivalent technical solutions shall fall within the scope of the present invention, whose scope of protection shall be defined by the claims and their equivalents.

Claims (4)

1. A UAV monocular vision ranging method based on edge point information in an image, characterised in that the method comprises initialization followed by parallel motion estimation, map construction and depth map estimation; it specifically comprises the following steps:
1) Initialization: two frames are selected from the image sequence captured by the downward-looking monocular camera rigidly attached to the UAV to construct the initial map and the initial depth map. The map is represented by a set of key frames {r_m} and a set of three-dimensional feature points {p_i}, where r denotes a key frame, the subscript m denotes the m-th key frame (m a positive integer), p denotes a three-dimensional feature point and the subscript i denotes the i-th feature point (i a positive integer). The three-dimensional feature points correspond to corner features in the image, and the set of their corresponding two-dimensional corner points is written {u_c^i}, where u_c denotes a corner point and the subscript i the i-th corner point (i a positive integer). A depth map consists of edge point coordinates, edge point depth values and the uncertainties of those depth values; depth maps correspond one-to-one with key frames, and a depth map is written D_m = {(u_e^i, d_e^i, σ_e^i)}, where D denotes the depth map associated with the edge points, the subscript m denotes the m-th depth map (a positive integer), u_e denotes an edge point, d_e its depth value and σ_e the uncertainty of its inverse depth, with the subscript i denoting the i-th edge point (a positive integer);
The first frame is taken as the first key frame in the map, and the camera coordinate system corresponding to the first frame is taken as the world coordinate system; initialization is then complete;
2) After the map and depth map information have been initialized, the motion estimation, map construction and depth map estimation threads run in parallel: the motion estimation thread uses the known map and depth map information to align with the current frame and obtain the odometry result, and optimizes the existing map information according to that result, while the map construction and depth map estimation threads run simultaneously to maintain the map and depth map information.
2. The method as claimed in claim 1, characterised in that, in step 2), the motion estimation thread among the three parallel threads of motion estimation, map construction and depth map estimation specifically comprises the following steps:
For each new frame image acquired by the camera, the edge point depth information in the current key frame is used to perform a motion-model-based image alignment, yielding an initial estimate T_{k,w}^- of the current frame pose T_{k,w} in the world coordinate system, where T denotes a pose, the subscript k indicates that the current frame is the k-th image acquired by the camera (k a positive integer), the subscript w denotes the world coordinate system, and the superscript '-' denotes an initial estimate; using the initial estimate, the existing three-dimensional feature points {p_i} in the map are projected into the current image, and the two-dimensional corner points {u_c^i} corresponding to {p_i} are obtained by block matching; with {u_c^i} as measurements and T_{k,w}^- as the initial value, the current frame pose T_{k,w} in the world coordinate system and the relevant three-dimensional feature point positions in the map are optimized in turn;
S11: Denote the current frame image by I_k, where I denotes an image acquired by the camera. The current UAV pose is first initialized to the pose of the previous moment, i.e. T_{k,w}^- = T_{k-1,w}. With the current key frame r_m as the reference frame, the weighted intensity residual between I_k and r_m is minimized with the Gauss-Newton iterative algorithm, as in formula (1-1):

T_{k,m} = arg min_{T_{k,m}} (1/2) Σ_{u_i ∈ E} w_i(u_i) ‖ δR(T_{k,m}, u_i) ‖²    (1-1)

This yields the pose T_{k,m} of the current frame I_k relative to the reference frame r_m. Here E denotes the set of edge points of the reference frame r_m whose depth uncertainty is below the set uncertainty threshold and which are visible in the current frame; u_i denotes a pixel in the image, where u denotes a pixel and the subscript i the i-th pixel (a positive integer); w_i(u_i) is the weight of pixel u_i, determined from σ_i, the depth uncertainty of the i-th pixel; and the intensity residual δR(T_{k,m}, u_i) of pixel u_i is given by formula (1-2):

δR(T_{k,m}, u_i) = R_{I_k}( π( T_{k,m} · π^{-1}(u_i, d_i) ) ) − R_{r_m}(u_i)    (1-2)

where π is the projection function that projects a three-dimensional point in the camera coordinate system into the two-dimensional image coordinate system, π^{-1} is the inverse projection function, d_i is the depth of pixel u_i in the reference frame, and R_{I_k}(·) and R_{r_m}(·) denote the gray value of a pixel in frame I_k and in frame r_m respectively. Combining this with the pose T_{m,w} of the reference frame r_m relative to the world coordinate system, the initial estimate T_{k,w}^- of the current frame pose is computed as in formula (1-3):

T_{k,w}^- = T_{k,m} · T_{m,w}    (1-3)

If the number of edge points participating in the weighting is below the set edge point threshold, go to step S12; otherwise proceed directly to step S13;
S12: Using the pose estimate obtained in step S11 as the initial value, the previous frame I_{k-1} is used as the reference frame of the current frame for image alignment, and the gray residual between I_k and I_{k-1} is minimized by the Gauss-Newton iterative optimization algorithm to obtain the pose T_{k,k-1} of the current frame relative to the previous frame, as in formula (1-4):

T_{k,k-1} = arg min_{T_{k,k-1}} (1/2) Σ_{u_i ∈ C} ‖ δR(T_{k,k-1}, u_i) ‖²    (1-4)

where C denotes the set of corner points whose depth is known in frame k-1 and which are visible in the current frame. Combining this with the pose T_{k-1,w} of the previous frame relative to the world coordinate system, the initial estimate T_{k,w}^- of the current frame pose is computed as in formula (1-5):

T_{k,w}^- = T_{k,k-1} · T_{k-1,w}    (1-5)

At this point the current frame I_k is selected as the new key frame and serves as the reference frame for the following stretch of images; it is also passed as input to the map construction and depth map estimation threads;
S13: According to the initial estimate T_{k,w}^-, determine the set {p_i} of three-dimensional feature points in the map that can be observed by the current frame, and obtain the corresponding corner coordinate estimates u_c^{i'} in the reference frame r_m; with u_c^{i'} as initial values, the true imaging positions u_c^i of {p_i} in the current frame are computed with the image tracking algorithm KLT, as in formula (1-6):

(u_c^i, δR̄) = arg min_{(u_c^i, δR̄)} (1/2) ‖ R_{I_k}(u_c^i) − A_i · R_{I_m}(u_c^{i'}) + δR̄ ‖²,  ∀i    (1-6)

where A_i denotes the transformation matrix that warps the reference patch into the current image, and δR̄ denotes the mean gray-level difference between the reference patch and the matched patch in the current image, used to eliminate illumination effects;
S14: After step S13, the two-dimensional coordinates u_c^i no longer satisfy formula (1-7) with the initial pose estimate T_{k,w}^- of the current frame:

‖ δu_c^i ‖ = ‖ u_c^i − π( T_{k,w}^- · p_i ) ‖ ≠ 0    (1-7)

With T_{k,w}^- as the initial value, T_{k,w} is updated by minimizing the reprojection error; the resulting current frame pose T_{k,w} is the final result of the motion estimation, as in formula (1-8):

T_{k,w} = arg min_{T_{k,w}} (1/2) Σ_i ‖ u_c^i − π( T_{k,w} · p_i ) ‖²    (1-8)

T_{k,w} is the final result of the odometry; then go to step S15; at the same time, the current frame pose T_{k,w} obtained in step S14 is passed to the map construction thread and the depth map estimation thread to compute corner depth values and edge point depth values respectively;
S15: Using the current frame pose T_{k,w} obtained in step S14, each three-dimensional feature point in the map that can be observed by the current frame is optimized separately, so that the sum of squared errors between its projections in all key frames that observe it and its true imaging positions is minimized, as in formula (1-9):

p_i = arg min_{p_i} (1/2) Σ_j ‖ u_c^{i,j} − π( T_{j,w} · p_i ) ‖²,  ∀i    (1-9)

where the subscript j denotes the j-th key frame that can observe the three-dimensional feature point p_i.
3. The method as claimed in claim 2, characterised in that, in step 2), the map construction thread among the three parallel threads of motion estimation, map construction and depth map estimation specifically comprises the following steps:
The map construction thread is handled in two cases: if the newly acquired image was selected as a new key frame in step S12 of the motion estimation thread, step S21 is executed; otherwise steps S22 to S24 are executed using the current frame pose obtained in step S14 of the motion estimation thread; the specific steps are as follows:
S21: When a frame is selected as a key frame, the FAST algorithm is used to extract the corner features of this key frame. The image is first divided uniformly by an equally spaced grid, and only one corner is extracted in each grid cell, so that the corners are evenly distributed over the image and their total number does not exceed the number of grid cells. For each new corner u_c^i a probabilistic depth filter is created, and the joint posterior distribution of its inverse depth d̃_i and of the probability ρ_i that it is effectively observed is defined as in formula (2-1):

q(d̃_i, ρ_i | a_n, b_n, μ_n, σ_n²) := Beta(ρ_i | a_n, b_n) · N(d̃_i | μ_n, σ_n²)    (2-1)

where d̃_i denotes the inverse depth of the corner, ρ denotes the effective observation probability of the corner, the subscript i denotes the i-th corner in the current key frame (i a positive integer), Beta and N denote the Beta distribution and the normal distribution respectively, the subscript n indicates that the current probabilistic depth filter has undergone n parameter updates (n a positive integer), the parameters a_n and b_n are respectively the numbers of effective and ineffective observations that have occurred during the recursive update, and μ_n and σ_n² are the mean and variance of the Gaussian distribution of the inverse depth; when the parameters in formula (2-1) are initialized, the initial values of a_n and b_n are set to 10, μ_n is initialized to the inverse of the mean depth of the image, and σ_n² is initialized to its maximum value;
S22: The image I_k, whose camera pose was obtained in step S14, is used to update the depth estimates of the corner features; let the corner being updated be u_c^i and let its key frame be r_m; according to the relative pose of the current frame I_k and r_m, a straight line is determined in I_k along which the pixel in I_k corresponding to u_c^i must lie; an epipolar search is performed along this straight line, and after the coordinates of the corresponding pixel have been obtained by sub-pixel-accurate block matching, the depth of the corner u_c^i is computed by triangulation;
S23: The depth obtained in step S22 is inverted to give d̃_i^n, where the subscript i denotes the i-th corner in the current key frame (i a positive integer) and the superscript n indicates that this is the n-th measurement update of the current corner's depth filter (n a positive integer). The probability distribution model of the inverse depth is given by formula (2-2):

p(d̃_i^n | d̃_i, ρ_i) = ρ_i · N(d̃_i^n | d̃_i, τ_i²) + (1 − ρ_i) · U(d̃_i^n | d̃_i^min, d̃_i^max)    (2-2)

The meaning of this formula is: when the i-th corner in the key frame r_m is triangulated using the image I_k, if the measurement is effective, the inverse of the measurement d̃_i^n follows the normal distribution N(d̃_i^n | d̃_i, τ_i²), where τ_i² is the inverse-depth variance corresponding to a one-pixel error in the image plane; if the measurement is ineffective, d̃_i^n follows the uniform distribution U(d̃_i^n | d̃_i^min, d̃_i^max), where d̃_i^min and d̃_i^max are the minimum and maximum inverse depths set according to prior information about the current scene. After the measurement update with d̃_i^n, the posterior probability distribution of the inverse depth d̃_i is given by formula (2-3):

C × p(d̃_i^n | d̃_i, ρ_i) · q(d̃_i, ρ_i | a_{n-1}, b_{n-1}, μ_{n-1}, σ_{n-1}²)    (2-3)

where C is the constant that normalizes the probability distribution; the updated parameters a_n, b_n, μ_n and σ_n² are computed by moment matching;
S24: Corners whose effective measurement probability is too low are rejected, and corners whose uncertainty σ_n meets the requirement are added to the map.
4. The method as claimed in claim 2, characterised in that, in step 2), the depth map estimation thread among the three parallel threads of motion estimation, map construction and depth map estimation specifically comprises the following steps:
If the newly acquired image was selected as a new key frame in step S12 of the motion estimation thread, edge points are extracted from the newly added key frame and steps S31 to S33 are executed; otherwise, using the current frame pose obtained in step S14 of the motion estimation thread, steps S34 to S36 are executed, the probabilistic depth filters of the edge points are updated with the new image information, and edge points whose effective observation probability is too low are rejected; the specific steps are as follows:
S31: In the newly added key frame, the Sobel operator is used to compute the horizontal and vertical intensity gradients G_x and G_y, from which the gradient-magnitude map is computed approximately; between 2500 and 4000 pixels with the largest gradient magnitude are selected from the image: when the number of pixels with significant gradient is below 2500, the gradient threshold (450) is reduced to 0.95 times its previous value, and when the number exceeds 4000 the threshold is increased to 1.05 times its previous value;
S32: Let the new key frame be r_m and the previous key frame r_{m-1}; the depth map D_{m-1} of frame r_{m-1} is propagated to the depth map D_m of the new key frame. In r_{m-1}, the mean of the inverse depth of edge point u_e^i is μ̃_e^i and the variance of the inverse depth is (σ̃_e^i)², where μ̃_e^i denotes the mean inverse depth of the edge point, (σ̃_e^i)² the variance of the inverse depth, and the subscript i the i-th edge point in frame r_{m-1}. The relative pose from r_{m-1} to r_m is T_{m,m-1}, and the position u_e^{i'} of the edge point u_e^i in frame r_m is obtained from the projection relation as in formula (3-1):

u_e^{i'} = π( T_{m,m-1} · π^{-1}( u_e^i, 1/μ̃_e^i ) )    (3-1)

Denote the inverse depth of u_e^{i'} by μ̃_e^{i'} and the variance of its inverse depth by (σ̃_e^{i'})². Then:

μ̃_e^{i'}(μ̃_e^i) = ( (μ̃_e^i)^{-1} − t_z )^{-1}    (3-2)

where t_z denotes the camera translation along the optical axis. From this the variance of the inverse depth of u_e^{i'} is derived as:

(σ̃_e^{i'})² = J_{μ̃_e^i} · (σ̃_e^i)² · J_{μ̃_e^i}^T + σ_p² = ( μ̃_e^{i'} / μ̃_e^i )⁴ · (σ̃_e^i)² + σ_p²    (3-3)

where σ_p² denotes the propagation uncertainty; u_e^{i'} is snapped to the nearest pixel in r_m;
S33: A probabilistic depth filter is created for each edge point extracted in step S31; if prior estimates μ̃_e^{i'} exist within a 3x3 pixel range around an edge point, the mean μ_e^i of its inverse depth is initialized as in formula (3-4):

μ_e^i = [ Σ_i (1/(σ̃_e^{i'})²) · μ̃_e^{i'} ] / [ Σ_i 1/(σ̃_e^{i'})² ]    (3-4)

and the inverse-depth uncertainty parameter σ_e^i of the probabilistic depth filter is initialized to the minimum uncertainty among the prior estimates; if no prior estimate exists, the probabilistic depth filter is initialized with the mean depth and the maximum uncertainty; taking d̃_e^i as the inverse depth of the edge point, a probabilistic depth filter is constructed for each edge point according to formula (2-1), as in formula (3-5):

q(d̃_e^i, ρ_i | a_n, b_n, μ_n, σ_n²) := Beta(ρ_i | a_n, b_n) · N(d̃_e^i | μ_n, σ_n²)    (3-5)

where the parameters of formula (3-5) are defined identically to formula (2-1), here describing the probabilistic depth filter of the corresponding edge point; the initial values of a_n and b_n are set to 10, and μ_n and σ_n are initialized to μ_e^i and σ_e^i respectively;
S34: The new key frame r_m is set as the reference frame for subsequent images, and subsequent images are used to perform measurement updates on the depth map D_m of frame r_m; when image I_k is used to update D_m, let the edge point being updated be u_e^i; according to the relative pose of I_k and r_m, a straight line is determined in I_k along which the pixel in I_k corresponding to u_e^i must lie; if the angle between the gradient direction at the edge point and the direction of this line is smaller than a threshold, the coordinates of the corresponding pixel are obtained by sub-pixel-accurate block matching, the depth of the edge point is computed by triangulation, and step S35 is executed; if the angle is larger than this threshold, no search is performed and step S34 returns to check the next edge point;
S35: The depth obtained in step S34 is inverted to give d̃_e^{i,n}, where the subscript i denotes the i-th edge point in the current key frame (i a positive integer) and the superscript n indicates that this is the n-th measurement update of the current edge point's depth filter (n a positive integer). The probability distribution model of the inverse depth is given by formula (3-6):

p(d̃_e^{i,n} | d̃_e^i, ρ_i) = ρ_i · N(d̃_e^{i,n} | d̃_e^i, τ_i²) + (1 − ρ_i) · U(d̃_e^{i,n} | d̃_e^{i,min}, d̃_e^{i,max})    (3-6)

The meaning of this formula is: when the i-th edge point in the key frame r_m is triangulated using the image I_k, if the measurement is effective, the inverse of the measurement d̃_e^{i,n} follows the normal distribution N(d̃_e^{i,n} | d̃_e^i, τ_i²); if the measurement is ineffective, d̃_e^{i,n} follows the uniform distribution U(d̃_e^{i,n} | d̃_e^{i,min}, d̃_e^{i,max}), where d̃_e^{i,min} and d̃_e^{i,max} are the minimum and maximum inverse depths set according to prior information about the current scene. After the measurement update with d̃_e^{i,n}, the posterior probability distribution of the inverse depth d̃_e^i has the same form as formula (2-3):

C × p(d̃_e^{i,n} | d̃_e^i, ρ_i) · q(d̃_e^i, ρ_i | a_{n-1}, b_{n-1}, μ_{n-1}, σ_{n-1}²)

The updated parameters a_n, b_n, μ_n and σ_n² are computed by moment matching;
S36: Edge points whose effective measurement probability is too low (below 0.1) are rejected.
CN201610131438.2A 2016-03-08 2016-03-08 Monocular vision ranging method based on edge point information in image Active CN105809687B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610131438.2A CN105809687B (en) 2016-03-08 2016-03-08 Monocular vision ranging method based on edge point information in image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610131438.2A CN105809687B (en) 2016-03-08 2016-03-08 Monocular vision ranging method based on edge point information in image

Publications (2)

Publication Number Publication Date
CN105809687A true CN105809687A (en) 2016-07-27
CN105809687B CN105809687B (en) 2019-09-27

Family

ID=56467948

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610131438.2A Active CN105809687B (en) Monocular vision ranging method based on edge point information in image

Country Status (1)

Country Link
CN (1) CN105809687B (en)

Cited By (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106570820A (en) * 2016-10-18 2017-04-19 浙江工业大学 Monocular visual 3D feature extraction method based on four-rotor unmanned aerial vehicle (UAV)
CN106652028A (en) * 2016-12-28 2017-05-10 深圳乐行天下科技有限公司 Environment three-dimensional mapping method and apparatus
CN107341814A (en) * 2017-06-14 2017-11-10 宁波大学 The four rotor wing unmanned aerial vehicle monocular vision ranging methods based on sparse direct method
CN107527366A (en) * 2017-08-23 2017-12-29 上海视智电子科技有限公司 A kind of camera tracking towards depth camera
CN107687850A (en) * 2017-07-26 2018-02-13 哈尔滨工业大学深圳研究生院 A kind of unmanned vehicle position and orientation estimation method of view-based access control model and Inertial Measurement Unit
CN107888828A (en) * 2017-11-22 2018-04-06 网易(杭州)网络有限公司 Space-location method and device, electronic equipment and storage medium
CN108151728A (en) * 2017-12-06 2018-06-12 华南理工大学 A kind of half dense cognitive map creation method for binocular SLAM
CN106920279B (en) * 2017-03-07 2018-06-19 百度在线网络技术(北京)有限公司 Three-dimensional map construction method and device
CN108398139A (en) * 2018-03-01 2018-08-14 北京航空航天大学 A kind of dynamic environment visual odometry method of fusion fish eye images and depth image
CN108615246A (en) * 2018-04-19 2018-10-02 浙江大承机器人科技有限公司 It improves visual odometry system robustness and reduces the method that algorithm calculates consumption
CN108615244A (en) * 2018-03-27 2018-10-02 中国地质大学(武汉) A kind of image depth estimation method and system based on CNN and depth filter
CN108765481A (en) * 2018-05-25 2018-11-06 亮风台(上海)信息科技有限公司 A kind of depth estimation method of monocular video, device, terminal and storage medium
CN108759833A (en) * 2018-04-25 2018-11-06 中国科学院合肥物质科学研究院 A kind of intelligent vehicle localization method based on priori map
CN108780577A (en) * 2017-11-30 2018-11-09 深圳市大疆创新科技有限公司 Image processing method and equipment
CN108986037A (en) * 2018-05-25 2018-12-11 重庆大学 Monocular vision odometer localization method and positioning system based on semi-direct method
CN109798897A (en) * 2019-01-22 2019-05-24 广东工业大学 A method of it is assessed by environmental model integrity degree to improve monocular vision reliability of positioning
CN110044358A (en) * 2019-04-29 2019-07-23 清华大学 Method for positioning mobile robot based on live field wire feature
CN110099656A (en) * 2017-01-22 2019-08-06 四川金瑞麒智能科学技术有限公司 System and method for controlling intelligent wheel chair
CN110135376A (en) * 2019-05-21 2019-08-16 北京百度网讯科技有限公司 Determine method, equipment and the medium of the coordinate system conversion parameter of imaging sensor
CN110177532A (en) * 2017-01-22 2019-08-27 四川金瑞麒智能科学技术有限公司 A kind of intelligent wheelchair system based on big data and artificial intelligence
CN110349212A (en) * 2019-06-28 2019-10-18 Oppo广东移动通信有限公司 Immediately optimization method and device, medium and the electronic equipment of positioning and map structuring
CN110428462A (en) * 2019-07-17 2019-11-08 清华大学 Polyphaser solid matching method and device
CN110889799A (en) * 2018-09-11 2020-03-17 顶级公司 Method, apparatus and processor for generating higher resolution frames
CN111260698A (en) * 2018-12-03 2020-06-09 北京初速度科技有限公司 Binocular image feature matching method and vehicle-mounted terminal
CN112364677A (en) * 2020-11-23 2021-02-12 盛视科技股份有限公司 Robot vision positioning method based on two-dimensional code
CN112632426A (en) * 2020-12-22 2021-04-09 新华三大数据技术有限公司 Webpage processing method and device
CN112907742A (en) * 2021-02-18 2021-06-04 湖南国科微电子股份有限公司 Visual synchronous positioning and mapping method, device, equipment and medium
CN113689400A (en) * 2021-08-24 2021-11-23 凌云光技术股份有限公司 Method and device for detecting section contour edge of depth image
CN113793417A (en) * 2021-09-24 2021-12-14 东北林业大学 Monocular SLAM method capable of creating large-scale map
CN115273538A (en) * 2022-08-29 2022-11-01 王炜程 GNSS-RTK technology-based parking space detection system and deployment and working methods thereof
CN116993740A (en) * 2023-09-28 2023-11-03 山东万世机械科技有限公司 Concrete structure surface defect detection method based on image data

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101377405A (en) * 2008-07-11 2009-03-04 北京航空航天大学 Vision measurement method for pose and geometric parameters of a spatial circle
CN102128617A (en) * 2010-12-08 2011-07-20 中国科学院自动化研究所 Real-time vision measurement method based on color-coded blocks
CN103926933A (en) * 2014-03-29 2014-07-16 北京航空航天大学 Indoor simultaneous localization and environment modeling method for unmanned aerial vehicles
US20150332446A1 (en) * 2014-05-16 2015-11-19 GM Global Technology Operations LLC Surround-view camera system (vpm) and vehicle dynamic
CN105225241A (en) * 2015-09-25 2016-01-06 广州极飞电子科技有限公司 Method for acquiring UAV depth images and UAV
CN105157708A (en) * 2015-10-10 2015-12-16 南京理工大学 Unmanned aerial vehicle autonomous navigation system and method based on image processing and radar

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Sayanan Sivaraman et al.: "Combining Monocular and Stereo-Vision for Real-Time Vehicle Ranging and Tracking on Multilane Highways", 2011 14th International IEEE Conference on Intelligent Transportation Systems *
刘艳丽: "融合颜色和深度信息的三维同步定位与地图构建研究", 《中国博士学位论文全文数据库 信息科技辑》 *

Cited By (47)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106570820A (en) * 2016-10-18 2017-04-19 浙江工业大学 Monocular visual 3D feature extraction method based on four-rotor unmanned aerial vehicle (UAV)
CN106570820B (en) * 2016-10-18 2019-12-03 浙江工业大学 Monocular vision 3D feature extraction method based on a quadrotor drone
CN106652028A (en) * 2016-12-28 2017-05-10 深圳乐行天下科技有限公司 Environment three-dimensional mapping method and apparatus
US11294379B2 (en) 2017-01-22 2022-04-05 Sichuan Golden Ridge Intelligence Science & Technology Co., Ltd. Systems and methods for controlling intelligent wheelchair
CN110177532A (en) * 2017-01-22 2019-08-27 四川金瑞麒智能科学技术有限公司 Intelligent wheelchair system based on big data and artificial intelligence
CN110099656A (en) * 2017-01-22 2019-08-06 四川金瑞麒智能科学技术有限公司 System and method for controlling an intelligent wheelchair
CN106920279B (en) * 2017-03-07 2018-06-19 百度在线网络技术(北京)有限公司 Three-dimensional map construction method and device
CN107341814A (en) * 2017-06-14 2017-11-10 宁波大学 Monocular vision ranging method for quadrotor UAVs based on the sparse direct method
CN107341814B (en) * 2017-06-14 2020-08-18 宁波大学 Four-rotor unmanned aerial vehicle monocular vision range measurement method based on sparse direct method
CN107687850A (en) * 2017-07-26 2018-02-13 哈尔滨工业大学深圳研究生院 Unmanned aerial vehicle pose estimation method based on vision and an inertial measurement unit
CN107687850B (en) * 2017-07-26 2021-04-23 哈尔滨工业大学深圳研究生院 Unmanned aerial vehicle pose estimation method based on vision and an inertial measurement unit
CN107527366B (en) * 2017-08-23 2020-04-10 上海视智电子科技有限公司 Camera tracking method for depth camera
CN107527366A (en) * 2017-08-23 2017-12-29 上海视智电子科技有限公司 Camera tracking method for depth cameras
CN107888828B (en) * 2017-11-22 2020-02-21 杭州易现先进科技有限公司 Space positioning method and device, electronic device, and storage medium
CN107888828A (en) * 2017-11-22 2018-04-06 网易(杭州)网络有限公司 Spatial positioning method and device, electronic device, and storage medium
CN108780577A (en) * 2017-11-30 2018-11-09 深圳市大疆创新科技有限公司 Image processing method and equipment
CN108151728A (en) * 2017-12-06 2018-06-12 华南理工大学 Semi-dense cognitive map creation method for binocular SLAM
CN108398139B (en) * 2018-03-01 2021-07-16 北京航空航天大学 Dynamic environment visual odometry method fusing fisheye images and depth images
CN108398139A (en) * 2018-03-01 2018-08-14 北京航空航天大学 Dynamic environment visual odometry method fusing fisheye images and depth images
CN108615244B (en) * 2018-03-27 2019-11-15 中国地质大学(武汉) Image depth estimation method and system based on a CNN and a depth filter
CN108615244A (en) * 2018-03-27 2018-10-02 中国地质大学(武汉) Image depth estimation method and system based on a CNN and a depth filter
CN108615246B (en) * 2018-04-19 2021-02-26 浙江大承机器人科技有限公司 Method for improving visual odometry system robustness and reducing algorithm computation cost
CN108615246A (en) * 2018-04-19 2018-10-02 浙江大承机器人科技有限公司 Method for improving visual odometry system robustness and reducing algorithm computation cost
CN108759833A (en) * 2018-04-25 2018-11-06 中国科学院合肥物质科学研究院 Intelligent vehicle localization method based on a prior map
CN108765481B (en) * 2018-05-25 2021-06-11 亮风台(上海)信息科技有限公司 Monocular video depth estimation method, device, terminal and storage medium
CN108986037A (en) * 2018-05-25 2018-12-11 重庆大学 Monocular visual odometry positioning method and system based on the semi-direct method
CN108986037B (en) * 2018-05-25 2020-06-16 重庆大学 Monocular visual odometry positioning method and system based on the semi-direct method
CN108765481A (en) * 2018-05-25 2018-11-06 亮风台(上海)信息科技有限公司 Monocular video depth estimation method, device, terminal, and storage medium
CN110889799A (en) * 2018-09-11 2020-03-17 顶级公司 Method, apparatus and processor for generating higher resolution frames
CN111260698B (en) * 2018-12-03 2024-01-02 北京魔门塔科技有限公司 Binocular image feature matching method and vehicle-mounted terminal
CN111260698A (en) * 2018-12-03 2020-06-09 北京初速度科技有限公司 Binocular image feature matching method and vehicle-mounted terminal
CN109798897A (en) * 2019-01-22 2019-05-24 广东工业大学 Method for improving monocular visual positioning reliability through environment model completeness assessment
CN109798897B (en) * 2019-01-22 2022-07-01 广东工业大学 Method for improving monocular vision positioning reliability through environment model integrity evaluation
CN110044358A (en) * 2019-04-29 2019-07-23 清华大学 Mobile robot positioning method based on field line features
CN110135376A (en) * 2019-05-21 2019-08-16 北京百度网讯科技有限公司 Method, device, and medium for determining coordinate system conversion parameters of an image sensor
CN110349212A (en) * 2019-06-28 2019-10-18 Oppo广东移动通信有限公司 Optimization method and device for simultaneous localization and mapping, medium, and electronic device
CN110349212B (en) * 2019-06-28 2023-08-25 Oppo广东移动通信有限公司 Optimization method and device for instant positioning and map construction, medium and electronic equipment
CN110428462A (en) * 2019-07-17 2019-11-08 清华大学 Multi-camera stereo matching method and device
CN112364677A (en) * 2020-11-23 2021-02-12 盛视科技股份有限公司 Robot vision positioning method based on two-dimensional code
CN112632426A (en) * 2020-12-22 2021-04-09 新华三大数据技术有限公司 Webpage processing method and device
CN112632426B (en) * 2020-12-22 2022-08-30 新华三大数据技术有限公司 Webpage processing method and device
CN112907742A (en) * 2021-02-18 2021-06-04 湖南国科微电子股份有限公司 Visual simultaneous localization and mapping method, device, equipment, and medium
CN113689400A (en) * 2021-08-24 2021-11-23 凌云光技术股份有限公司 Method and device for detecting section contour edge of depth image
CN113689400B (en) * 2021-08-24 2024-04-19 凌云光技术股份有限公司 Method and device for detecting profile edge of depth image section
CN113793417A (en) * 2021-09-24 2021-12-14 东北林业大学 Monocular SLAM method capable of creating large-scale map
CN115273538A (en) * 2022-08-29 2022-11-01 王炜程 GNSS-RTK technology-based parking space detection system and deployment and working methods thereof
CN116993740A (en) * 2023-09-28 2023-11-03 山东万世机械科技有限公司 Concrete structure surface defect detection method based on image data

Also Published As

Publication number Publication date
CN105809687B (en) 2019-09-27

Similar Documents

Publication Publication Date Title
CN105809687A (en) Monocular vision ranging method based on edge point information in image
CN111968129B (en) Simultaneous localization and mapping system and method with semantic perception
CN105096386B (en) Automatic geometric map generation method for large-scale complex urban environments
US11461912B2 (en) Gaussian mixture models for temporal depth fusion
Forster et al. Continuous on-board monocular-vision-based elevation mapping applied to autonomous landing of micro aerial vehicles
CN112505065B (en) Method for detecting surface defects of large part by indoor unmanned aerial vehicle
CN109509230A (en) SLAM method for multi-lens combined panoramic cameras
Won et al. OmniSLAM: Omnidirectional localization and dense mapping for wide-baseline multi-camera systems
CN108520554A (en) Binocular dense 3D mapping method based on ORB-SLAM2
Krombach et al. Feature-based visual odometry prior for real-time semi-dense stereo SLAM
CN113674416B (en) Three-dimensional map construction method and device, electronic equipment and storage medium
Xiao et al. 3D point cloud registration based on planar surfaces
CN106826833A (en) Autonomous navigation robot system based on 3D stereo perception technology
CN105160702A (en) Stereoscopic image dense matching method and system based on LiDAR point cloud assistance
CN110223351B (en) Depth camera positioning method based on convolutional neural network
CN113409459B (en) Method, device and equipment for producing high-precision map and computer storage medium
CN103247075A (en) Variational mechanism-based indoor scene three-dimensional reconstruction method
CN111127522B (en) Depth optical flow prediction method, device, equipment and medium based on monocular camera
CN110260866A (en) Robot localization and obstacle avoidance method based on a vision sensor
CN113361499B (en) Local object extraction method and device based on two-dimensional texture and three-dimensional attitude fusion
CN111536970A (en) Infrared inertial integrated navigation method for low-visibility large-scale scene
CN109900272B (en) Visual positioning and mapping method and device and electronic equipment
CN111161334A (en) Semantic map construction method based on deep learning
CN116740488B (en) Training method and device for feature extraction model for visual positioning
Wang et al. Real-time omnidirectional visual SLAM with semi-dense mapping

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant