CN107194334B - Video satellite image dense stereo matching method and system based on optical flow estimation - Google Patents


Info

Publication number
CN107194334B
Application number
CN201710326528.1A
Authority
CN (China)
Prior art keywords
image, patch, matching, pixel, point
Legal status (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Active
Other versions
CN107194334A (Chinese)
Inventors
张过, 汪韬阳, 蒋永华, 江万寿, 蒋博洋
Current and original assignee (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Wuhan University (WHU)
Events
Application filed by Wuhan University WHU
Priority to CN201710326528.1A
Publication of CN107194334A
Application granted
Publication of CN107194334B
Active legal status; anticipated expiration


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; scene-specific elements
    • G06V 20/40: Scenes; scene-specific elements in video content
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V 10/443: Local feature extraction by matching or filtering


Abstract

The invention discloses a video satellite image dense stereo matching method and system based on optical flow estimation, comprising: step 1, coarsely matching the video satellite images with the SGM matching method; step 2, finely matching the video satellite images with the PMVS matching method; step 3, refining the matching result of step 2 with the optical flow method. The invention combines the two dense matching methods SGM and PMVS, selecting SGM for the coarse match and then performing a PMVS fine match, so as to exploit the advantages of both as far as possible and to improve the processing capability for image data in terms of both accuracy and efficiency.

Description

Video satellite image dense Stereo Matching method and system based on optical flow estimation
Technical field
The present invention relates to image dense matching technology, and in particular to a video satellite image dense matching method and system based on optical flow estimation.
Background technique
After thirty years of development, China's aerospace technology has made great progress, forming an Earth-observation remote sensing satellite system composed of the resource, meteorology, ocean, environment, and national defense series, among others. Especially under the impetus of the "High-Resolution Earth Observation System" national science and technology major project, the spatial resolution, temporal resolution, and data quality of China's remote sensing satellites have been substantially improved through innovations in platform and sensor development, multi-satellite networking, and ground data processing, providing information services and decision support for key fields such as modern agriculture, disaster prevention and mitigation, resources and environment, and public safety. As remote sensing applications deepen, application demand is developing from periodic static surveys toward real-time dynamic monitoring, and continuous satellite monitoring of global hotspot regions and targets to obtain time-varying information has become an urgent need. Since a video satellite can acquire a time series of images of a target over a certain period and thus maintain continuous surveillance of moving targets, video satellite imaging has become a major hotspot in remote sensing satellite development.
Image dense matching has been widely applied in many fields such as robot vision, video surveillance, missile guidance, and UAV reconnaissance, and is a research hotspot in pattern recognition and computer vision. Dense matching with video satellite imagery is a high technology with demanding requirements on accuracy and real-time performance; although research on it has achieved good results, many performance indicators still need improvement. When the natural scene changes continuously, the projection from three-dimensional world space to two-dimensional image space loses target information, so designing a robust dense matching method is extremely difficult. Improving the adaptability of dense matching methods so as to meet the requirements on accuracy and real-time performance is therefore of great significance for guaranteeing the application of video satellites in the field of dynamic observation.
Summary of the invention
The object of the present invention is to provide a video satellite image dense matching method and system based on optical flow estimation that achieves high-accuracy, high-reliability matching.
The idea of the present invention is as follows:
Combine the coarse matching strategy of SGM with the fine matching process of PMVS, and constrain the matching process with optical flow estimation using the geometric relationship between the multiple frames of the video satellite imagery.
The technical solution of the present invention is as follows:
One: a video satellite image dense matching method based on optical flow estimation, comprising:
Step 1, coarsely match the video satellite images with the SGM matching method; this step further comprises:
1.1 Compute disparity pixel by pixel for the video satellite images to obtain a disparity image;
1.2 Compute the minimum disparity path of each pixel, i.e. the correct match of each pixel, and take this as the two-dimensional matching constraint;
1.3 Under the two-dimensional constraint of sub-step 1.2, perform SGM matching on the disparity image of sub-step 1.1, completing the matching of corresponding points between the left and right video satellite images and obtaining the match points;
Step 2, finely match the video satellite images with the PMVS matching method; this step further comprises:
2.1 Divide the images into grid cells and, in each cell, choose as feature points the match points whose interest value is a local maximum;
2.2 Take each image of the video satellite imagery in turn as the reference image; among the other images select those whose principal optical axis makes an angle of less than 60° with that of the reference image, denoted the calculating images; based on the feature points, match the reference image with each calculating image and generate the seed patch set;
2.3 Judge whether the neighborhood of a seed patch already contains a patch whose distance to the seed patch is less than one pixel, or a patch whose average correlation coefficient is greater than a threshold; if so, do not expand into that neighborhood and execute this step for the next seed patch; otherwise, expand the seed patch into its neighborhood and match the reference image against the calculating images with the new patch obtained by expansion; the threshold is an empirical value;
Step 3, refine the matching result of step 2 with the optical flow method; this step further comprises:
3.1 Construct the optical flow equation between two adjacent frames:

\[
\begin{bmatrix}
I_x(q_1) & I_y(q_1) \\
I_x(q_2) & I_y(q_2) \\
\vdots & \vdots \\
I_x(q_n) & I_y(q_n)
\end{bmatrix}
\begin{bmatrix} V_x \\ V_y \end{bmatrix}
=
-\begin{bmatrix}
I_T(q_1) \\ I_T(q_2) \\ \vdots \\ I_T(q_n)
\end{bmatrix}
\]

wherein V_x and V_y denote the optical flow components in the x and y directions; q_1, q_2, ..., q_n denote the pixels in the neighborhood of feature point (x, y); I_x(q_1), ..., I_x(q_n) denote the partial derivatives of the gray values of pixels q_1, ..., q_n on frame t with respect to the x direction; I_y(q_1), ..., I_y(q_n) denote the partial derivatives with respect to the y direction; and I_T(q_1), ..., I_T(q_n) denote the partial derivatives with respect to time T;
3.2 For each image, iteratively compute from the optical flow equation the flow vector between each of its match points and the adjacent image;
3.3 Compute the modulus d(V_t) of each flow vector and the middle (RMS) error of the moduli of all flow vectors on the image; set the threshold W to three times the middle error, and reject as gross errors all match points whose error exceeds W.
Sub-step 1.1 specifically:
Separately compute the midpoint gray values of p'_i with its left neighbor p'_{i-1} and with its right neighbor p'_{i+1}; obtain the minimum gray value I_min and the maximum gray value I_max in the neighborhood of p'_i; the disparity is then max[0, I_L(p_i) - I_max, I_min - I_L(p_i)];
wherein p_i and p'_i denote pixels on the horizontal scan line, I_L(p_i) denotes the gray value at point p_i on the left scan line, and I_R(p'_i) denotes the gray value at p'_i obtained by linear interpolation of the sample points on the right scan line.
Sub-step 1.2 specifically:
For each pixel p, set paths in several directions around it and compute the disparity path along each; the sum of the disparity paths of the several directions is the disparity path of pixel p, and the smallest disparity path is taken as the correct match of pixel p.
Sub-step 1.3 specifically:
1.3a Perform SGM matching at half resolution to obtain the matched disparity image;
1.3b Increase the resolution and perform SGM matching on the current disparity image at the current resolution;
1.3c Continue to increase the resolution, repeating step 1.3b until the resolution reaches the original resolution.
Step 1 further comprises a step of rejecting mismatches, specifically:
Using the corresponding points of the matched left image, match back into the right image; if the disparities obtained by the two matches differ, the corresponding point is considered an invalid match and rejected.
Sub-step 2.2 specifically:
2.2a For each feature point f on the reference image, find its candidate match points f' on the calculating images;
2.2b For every pair (f, f'), obtain a model point by forward intersection, compute the distance of each model point to the photographic center of the reference image (hereinafter abbreviated as the reference camera), and sort the model points by distance from near to far;
2.2c In the sorted order, take each model point in turn to initialize a matching surface element (patch): the model point is the center c(p) of the patch, one edge of the patch is parallel to the x-axis of the reference camera, and the unit normal vector n(p) of the patch is the unit vector from the center c(p) toward the reference camera center. The initialized patch must satisfy: the angle between n(p) and the ray through the patch is less than 60 degrees, and the correlation coefficient between the projections of the patch onto the calculating images and onto the reference image is greater than a preset first correlation coefficient threshold. If these requirements are not met, the model point fails; take the next model point and re-execute this step. Otherwise, execute step 2.2d. The first correlation coefficient threshold is an empirical value;
2.2d Optimize the center coordinates and unit normal vector of the patch so that the average of the correlation coefficients between the projections of the patch onto each calculating image and onto the reference image is maximized. If, after optimization, the average correlation coefficient of the patch is greater than the first correlation coefficient threshold, the patch is considered successfully generated: divide all images into grid cells, project the patch onto the images, record the cell coordinates of the patch, remove all feature points in the cell containing the patch, then consider the next model point and re-execute step 2.2c. Otherwise discard the patch, consider the next model point, and re-execute step 2.2c.
The matching of the reference image and the calculating images with the new patch obtained by expansion, described in step 2.3, is specifically as follows:
The initial unit normal vector of the new patch after expansion is the same as the unit normal vector of the seed patch, and the center of the new patch is the intersection of the ray through the center of a neighboring cell of the seed patch with the plane of the seed patch;
Optimize the center coordinates and unit normal vector of the new patch so that the average of the correlation coefficients between its projections onto each calculating image and onto the reference image is maximized; match the calculating images and the reference image with the optimized new patch. If the number of matched calculating images is greater than a preset count threshold, a patch has been successfully propagated; otherwise the propagation fails, the failed patch is skipped, and step 2.3 is executed for the next seed patch.
Step 2 further comprises filtering the matching result, specifically:
(1) Filter out patches whose average correlation coefficient differs by more than 0.6 from the others in the same cell;
(2) Find the patch with the smallest depth in a cell, and filter out the patches in the cell whose distance to that patch is greater than the average distance and whose normal vector makes an angle greater than 90° with that patch's normal; here the average distance is the mean of the distances between that patch and all patches in the cell;
(3) Fit a quadric surface to a patch and the patches in its neighborhood, and filter out the patch if the sum of the residuals of the neighboring patches is greater than a residual threshold; the residual threshold is an empirical value;
(4) Cluster the patches in a cell by their mutual object-space distances, and remove the patch groups containing fewer patches than a preset count threshold; the count threshold is an empirical value.
Sub-step 3.2 specifically:
For two adjacent frames, compute the gray-level gradients in the x, y, and T directions; set the initial optical flow field to zero and iteratively compute the flow vector of each match point with the optical flow equation; terminate the iteration when the difference between the flow vectors of a match point in two successive iterations is less than a preset error threshold.
Two: a video satellite image dense matching system based on optical flow estimation, comprising:
(1) a coarse matching module for coarsely matching the video satellite images with the SGM matching method;
The coarse matching module further comprises:
a disparity computation module for computing disparity pixel by pixel for the video satellite images to obtain a disparity image;
a two-dimensional constraint matching module for computing the minimum disparity path of each pixel, i.e. the correct match of each pixel, taken as the two-dimensional matching constraint;
an SGM matching module for performing, under the two-dimensional constraint, SGM matching on the disparity image, completing the matching of corresponding points between the left and right video satellite images and obtaining the match points;
(2) a fine matching module for finely matching the video satellite images with the PMVS matching method;
The fine matching module further comprises:
a feature point detection module for dividing the images into grid cells and choosing, in each cell, the match points whose interest value is a local maximum as feature points;
a seed patch set generation module for taking each image of the video satellite imagery in turn as the reference image, selecting among the other images those whose principal optical axis makes an angle of less than 60° with that of the reference image as calculating images, matching the reference image with each calculating image based on the feature points, and generating the seed patch set;
a PMVS matching module for judging whether the neighborhood of a seed patch already contains a patch whose distance to the seed patch is less than one pixel, or a patch whose average correlation coefficient is greater than a threshold; if so, not expanding into that neighborhood and proceeding to the next seed patch; otherwise expanding the seed patch into its neighborhood and matching the reference image against the calculating images with the new patch obtained by expansion; the threshold is an empirical value;
(3) a refining module for refining the matching result of the fine matching module with the optical flow method;
The refining module further comprises:
an optical flow equation construction module for constructing the optical flow equation between two adjacent frames:

\[
\begin{bmatrix}
I_x(q_1) & I_y(q_1) \\
I_x(q_2) & I_y(q_2) \\
\vdots & \vdots \\
I_x(q_n) & I_y(q_n)
\end{bmatrix}
\begin{bmatrix} V_x \\ V_y \end{bmatrix}
=
-\begin{bmatrix}
I_T(q_1) \\ I_T(q_2) \\ \vdots \\ I_T(q_n)
\end{bmatrix}
\]

wherein V_x and V_y denote the optical flow components in the x and y directions; q_1, q_2, ..., q_n denote the pixels in the neighborhood of feature point (x, y); I_x(q_1), ..., I_x(q_n), I_y(q_1), ..., I_y(q_n), and I_T(q_1), ..., I_T(q_n) denote the partial derivatives of the gray values of pixels q_1, ..., q_n on frame t with respect to the x direction, the y direction, and time T, respectively;
an optical flow vector computation module for iteratively computing, for each image, the flow vector between each of its match points and the adjacent image from the optical flow equation;
a gross error rejection module for computing the modulus d(V_t) of each flow vector and the middle (RMS) error of the moduli of all flow vectors on the image; with the threshold W set to three times the middle error, all match points whose error exceeds W are rejected as gross errors.
Compared with the prior art, the present invention has the following advantages and beneficial effects:
(1) Based on optical flow estimation, the geometric relationship between the frames of the satellite video imagery is estimated and used to constrain the image matching result, which greatly improves matching reliability.
(2) The two dense matching algorithms SGM and PMVS are combined, in the form of a coarse SGM match followed by a PMVS fine match, exploiting the advantages of both as far as possible and improving the processing capability for image data in terms of both accuracy and efficiency.
Brief description of the drawings
Fig. 1 is a schematic diagram of SGM matching path selection, wherein (a) shows the final accumulation result of all minimum paths and (b) shows the minimum path direction selection at each pixel;
Fig. 2 is the PMVS matching flow chart;
Fig. 3 is the flow chart of the optical flow multi-frame constraint algorithm.
Specific embodiments
A specific embodiment of the invention is further described with reference to the accompanying drawings; the steps are as follows:
Step 1, coarsely match the video satellite images with the SGM matching method.
SGM (semi-global matching) is an image matching method that originated in the field of computer vision; it uses mutual information to describe the complicated correspondence between corresponding image points and thereby realize image matching. The basic idea of SGM image matching is: first, perform pixel-by-pixel disparity computation based on mutual information; then approximate the two-dimensional constraint with one-dimensional constraints in multiple directions.
This step further comprises:
Step 1.1, compute disparity pixel by pixel for the video satellite images to obtain a disparity image.
The disparity function d(p_i, p'_i, I_L, I_R) is defined as:

d(p_i, p'_i, I_L, I_R) = min over p' in [p'_i - 1/2, p'_i + 1/2] of | I_L(p_i) - I_R(p') |      (1)

The disparity function measures the possibility that the two pixels are corresponding image points, wherein p_i and p'_i denote pixels on the horizontal scan line. During disparity computation, p' ranges over the half pixel on either side of p'_i. I denotes the gray value at the respective pixel: the gray value at point p_i on the left scan line is written I_L, and I_R is the gray value at p'_i obtained by linear interpolation of the sample points on the right scan line.
Because the extreme points of a piecewise linear function must be its breakpoints, the computation of the disparity d is very simple. In the practical computation of d, the process is optimized according to the properties of its extreme points as follows:
First, compute the midpoint gray value of p'_i and its left neighbor p'_{i-1}:

I_R^- = ( I_R(p'_i) + I_R(p'_{i-1}) ) / 2      (2)

In formula (2), I_R(p'_i) and I_R(p'_{i-1}) denote the gray values of p'_i and p'_{i-1}.
Likewise, compute the midpoint gray value of p'_i and its right neighbor p'_{i+1}:

I_R^+ = ( I_R(p'_i) + I_R(p'_{i+1}) ) / 2      (3)

In formula (3), I_R(p'_i) and I_R(p'_{i+1}) denote the gray values of p'_i and p'_{i+1}.
Obtain the minimum gray value I_min = min(I_R^-, I_R^+, I_R(p'_i)) and the maximum gray value I_max = max(I_R^-, I_R^+, I_R(p'_i)) in the half-pixel neighborhood of p'_i; the disparity is then d(p_i, p'_i, I_L, I_R) = max[0, I_L(p_i) - I_max, I_min - I_L(p_i)].
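The sampling-insensitive measure above can be sketched as follows; this is an illustrative implementation of formulas (2) and (3) and the final max[...] expression for integer disparities on a single scan line (the function name and the integer-sample simplification are assumptions, not part of the patent):

```python
import numpy as np

def bt_dissimilarity(left, right, x, d):
    """Dissimilarity of step 1.1: compare I_L(p_i) against the range of
    interpolated intensities in the half-pixel neighborhood of p'_i = x - d."""
    IL = float(left[x])
    xp = x - d                       # matching column p'_i on the right scan line
    # midpoint gray values with the left and right neighbors (formulas (2), (3))
    I_minus = 0.5 * (float(right[xp]) + float(right[xp - 1]))
    I_plus = 0.5 * (float(right[xp]) + float(right[xp + 1]))
    I_min = min(I_minus, I_plus, float(right[xp]))
    I_max = max(I_minus, I_plus, float(right[xp]))
    # zero inside the interval [I_min, I_max], otherwise distance to the bound
    return max(0.0, IL - I_max, I_min - IL)
```

When the left intensity falls inside the interpolated interval the cost is zero, which is what makes the measure insensitive to the half-pixel sampling of the right scan line.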
Step 1.2, approximate the two-dimensional constraint with one-dimensional constraints.
For each possible match, the minimum disparity path function L_r(p, d) is defined as:

L_r(p, d) = C(p, d) + min( L_r(p - r, d), L_r(p - r, d ± 1) + P_1, min_i L_r(p - r, i) + P_2 )      (4)

Sixteen paths are set around pixel p at equal angular intervals. The minimum disparity path L_r(p, d) is computed separately on each of the 16 paths, where d denotes the disparity d(p_i, p'_i, I_L, I_R) computed in sub-step 1.1 and r denotes the path direction.
In formula (4):
The first term C(p, d) is the dissimilarity cost of pixel p at disparity d.
The second term min(L_r(p - r, d), L_r(p - r, d ± 1) + P_1, min_i L_r(p - r, i) + P_2), as shown in Fig. 1, accumulates the disparity path along the 16 directions. On every accumulation path, the accumulated disparity path at the current disparity of pixel p depends on the accumulated disparity path of the previous pixel on the path. The difference between the disparity of the previous pixel and the disparity of pixel p is therefore taken, and the smallest of the three cases (difference 0, difference 1, difference greater than 1) is chosen as the accumulation for pixel p:
(1) if the difference is 0, the accumulation for pixel p is the disparity path L_r(p - r, d) of the previous pixel;
(2) if the difference is 1, the accumulation for pixel p is the disparity path L_r(p - r, d ± 1) of the previous pixel plus the penalty P_1;
(3) if the difference is greater than 1, the accumulation for pixel p is the smallest value min_i L_r(p - r, i) over all disparity paths of the previous pixel plus the penalty P_2, where i ranges over the disparities of the previous pixel.
For each pixel, the final disparity path is the sum of the disparity paths of the 16 directions, and the disparity whose summed path is smallest is considered the correct match of the current pixel. This approximates the two-dimensional constrained matching computation.
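The accumulation of formula (4) along a single direction can be sketched as follows. This is an illustrative sketch for one horizontal path over one scan line (the function name and the P_1, P_2 values are assumptions; in the method this recurrence runs over 16 directions and the per-pixel results are summed):

```python
import numpy as np

def aggregate_path(cost, P1=10.0, P2=120.0):
    """Accumulate the matching cost along one 1-D path (formula (4)).
    cost: (W, D) per-pixel dissimilarity along a scan line; returns L_r."""
    W, D = cost.shape
    L = np.empty((W, D), dtype=np.float64)
    L[0] = cost[0]
    for x in range(1, W):
        prev = L[x - 1]
        best_prev = prev.min()
        for d in range(D):
            same = prev[d]                                   # disparity unchanged
            step = min(prev[d - 1] if d > 0 else np.inf,
                       prev[d + 1] if d < D - 1 else np.inf) + P1   # change of 1
            jump = best_prev + P2                            # larger change
            L[x, d] = cost[x, d] + min(same, step, jump)
    return L
```

Small disparity changes are charged the mild penalty P_1 and large jumps the heavy penalty P_2, which is what smooths the disparity image while still allowing depth discontinuities.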
Step 1.3, iterative disparity computation.
The present invention starts SGM matching with a multi-level matching strategy: first, the disparity image obtained at half resolution is used as the initial value; then SGM matching is carried out recursively with the previous disparity image, continuously increasing the resolution until the original resolution is reached. The matching range at every level is thus limited to a certain extent, and by propagating level by level the matching disparity range of each pixel is determined adaptively. This accelerates matching while reducing the memory demand, and also reduces matching gross errors.
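The coarse-to-fine loop of step 1.3 can be sketched as follows; the `match_at` hook standing in for one SGM pass is a hypothetical interface, not part of the patent, and the 2x upsampling with `np.kron` is only one simple way to propagate a disparity map to the next level:

```python
import numpy as np

def coarse_to_fine(levels, match_at):
    """Hierarchical matching of step 1.3: start at the coarsest level,
    then refine the disparity map while doubling the resolution.
    levels: coarse-to-fine level identifiers;
    match_at(level, init_disp): one SGM pass seeded with the previous result."""
    disp = None
    for level in levels:
        if disp is not None:
            # upsample the previous disparity map and rescale its values
            disp = np.kron(disp, np.ones((2, 2))) * 2.0
        disp = match_at(level, disp)
    return disp
```

Because each level only searches near the upsampled initial value, the per-pixel disparity range stays small, which is where the speed and memory savings described above come from.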
Step 1.4, rejection of mismatches.
After the matching of corresponding points between the left and right images is completed, the corresponding points of the matched left image are matched back into the right image. If the disparities obtained by the two matches are not identical, the corresponding point is considered an invalid match and rejected.
Step 2, finely match the video satellite images with the PMVS matching method.
PMVS is a multi-view stereo matching method that reconstructs the three-dimensional structure of the object or scene in a set of photos from the photos and the camera parameters. PMVS reconstructs only rigid structure; it automatically ignores non-rigid structure, such as pedestrians in front of a building. The software outputs a set of oriented three-dimensional points, estimating the three-dimensional coordinates and the normal vector of every point.
The matching idea of PMVS is: first find feature points that can be matched reliably, then propagate the matches following the idea of region growing. The whole flow is shown in Fig. 2; the detailed process is as follows:
Step 2.1, feature point extraction.
A grid is laid over every image with a cell size of 32 × 32 pixels. In each cell, the η = 4 points whose interest value is a local maximum are selected as feature points from the match points obtained by the coarse SGM matching of step 1.
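The per-cell selection of step 2.1 can be sketched as follows; this is an illustrative sketch that simply keeps the η highest-interest points in every 32 × 32 cell (the function name and the point/interest array layout are assumptions):

```python
import numpy as np

def select_features(points, interest, cell=32, eta=4):
    """Step 2.1: divide the image into cell x cell windows and keep, per
    window, the eta coarse match points with the largest interest value.
    points: sequence of (x, y) pixel positions; interest: matching values."""
    cells = {}
    for idx, (x, y) in enumerate(points):
        cells.setdefault((int(x) // cell, int(y) // cell), []).append(idx)
    keep = []
    for idxs in cells.values():
        idxs.sort(key=lambda i: interest[i], reverse=True)
        keep.extend(idxs[:eta])      # top-eta interest values per cell
    return sorted(keep)
```

Bucketing by cell before sorting keeps the selected feature points evenly distributed over the image instead of clustering in high-texture regions.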
Step 2.2, feature point matching to generate the seed patch set.
Each image in turn serves as the reference image R(p); among the other images, the images I(p) whose principal optical axis makes an angle of less than 60° with that of R(p) are selected and denoted the calculating images; the reference image is then matched with the calculating images, specifically:
(1) For each feature point f on the reference image, find its candidate match points f' on the calculating images; the f' form a set F. Each f' is required to lie within two pixels of the corresponding epipolar line; the corresponding epipolar lines are the two lines formed by intersecting the same epipolar plane with the left and right images, and the epipolar plane is the plane determined by the object point and the photographic baseline.
(2) For every pair (f, f'), obtain a model point by forward intersection; compute the distance of each model point to the photographic center of the reference image (hereinafter abbreviated as the "reference camera") and sort the model points by distance from near to far.
(3) In the sorted order, take each model point in turn to initialize a matching surface element (patch). A matching patch approximates the local tangent plane of the object surface and consists of a center c(p), a normal vector n(p), and a reference image R(p). One edge of the patch is parallel to the x-axis of the reference camera. The patch carries a μ × μ grid, with μ generally 5 or 7.
The center c(p) of the patch is the model point, and the unit normal vector n(p) is the unit vector from the center c(p) toward the reference camera center. The angle between the normal vector of the patch and the ray through the patch is required to be less than 60 degrees, and the correlation coefficient between the projections of the patch onto the calculating images and onto the reference image must be greater than the first correlation coefficient threshold. If these requirements are not met, the model point fails; it is skipped and the next model point is considered. The first correlation coefficient threshold is an empirical value; in this embodiment it is taken as 0.4.
(4) Optimize the center coordinates and normal vector of the patch, specifically: finely adjust the center coordinates and normal vector so that the average correlation coefficient is maximized, i.e. maximize the average of the correlation coefficients between the reference image and the calculating images of the patch. During this optimization the center of the patch is kept fixed on the ray of the reference image, so the optimization has 3 degrees of freedom.
If, after optimization, the average correlation coefficient of the patch is greater than the first correlation coefficient threshold, the patch is considered successfully generated. A grid of cells C_i(x, y) of size 2 × 2 pixels is laid over all images, the patch is projected onto the images, and the cell coordinates of the patch are recorded; the cell coordinates are the grid coordinates in the image coordinate system. Each cell stores two sets of patches, Q_i(x, y) and Q_i*(x, y), holding the patches projected in the object-space reference coordinate system V(p) and in the image-space coordinate system V*(p), respectively. All feature points in the cell containing the patch are removed, and the next model point is considered. Otherwise the patch is abandoned and the next model point is considered.
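The acceptance test above depends on the correlation coefficient between patch projections. A minimal sketch, assuming the patch has already been sampled into μ × μ intensity windows in each image (the window sampling itself, the function names, and the 0.4 default are taken from the embodiment's first threshold; everything else is illustrative):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two sampled patch windows,
    e.g. the mu x mu grid of a patch projected into two images."""
    a = np.asarray(a, float).ravel() 
    b = np.asarray(b, float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def accept_patch(ref_win, calc_wins, threshold=0.4):
    """A patch is kept when its mean correlation against the calculating
    images exceeds the first correlation coefficient threshold."""
    scores = [ncc(ref_win, w) for w in calc_wins]
    return bool(np.mean(scores) > threshold)
```

Mean-centering and normalizing make the score insensitive to brightness and contrast differences between the images, which matters across frames of a video sequence.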
After the patch computation of all images is completed, these patches are put into the seed patch set.
Step 2.3, match propagation: expand from the cell containing a seed patch into the neighboring cells. If a neighboring cell already contains a patch whose distance to the seed patch is less than one pixel, or a patch whose average correlation coefficient is greater than a second correlation coefficient threshold, do not expand into that cell. The second correlation coefficient threshold is an empirical value; in this embodiment it is taken as 0.6.
The initial unit normal vector of the new patch after expansion is the same as the unit normal vector of the seed patch, and the center of the new patch is the intersection of the ray through the center of the neighboring cell with the plane of the seed patch. The center coordinates and unit normal vector of the new patch are optimized, and the calculating images and the reference image are matched with the optimized new patch; if the number of matched calculating images is greater than a preset count threshold, a patch has been successfully propagated, otherwise the propagation fails, the failed patch is skipped, and neighborhood expansion proceeds with the next seed patch.
Step 2.3 is repeated until no further propagation is possible.
Step 2.4, matching result filtering.
Matching result filtering covers the following four cases:
(1) Patches whose average correlation coefficient differs by more than 0.6 from the others in the same cell are filtered out;
(2) The patch with the smallest depth in a cell is found, and the patches in the cell whose distance to that patch is greater than the average distance and whose normal vector makes an angle greater than 90° with that patch's normal are filtered out; here the average distance is the mean of the distances between that patch and all patches in the cell, and the depth is the length of the projection onto the principal optical axis of the distance between the object point and the camera center;
(3) A quadric surface is fitted to a patch and the neighboring patches in its eight-neighborhood; the patch is filtered out if the sum of the residuals of the neighboring patches is greater than a residual threshold, which is an empirical value;
(4) Small patch groups are filtered out, specifically: the patches in a cell are clustered by their mutual object-space distances, and the patch groups containing fewer than 20 patches are removed.
Step 3: based on the match points obtained in step 2, the optical flow vector of each match point is computed according to the optical flow equation by repeated iteration; the mean and RMS error of the optical flow vectors of all match points are then computed, match points exceeding the threshold are rejected as mismatches, and the final match points are obtained.
With reference to Fig. 3, the specific steps are as follows:
Step 3.1, estimate the state parameters of the feature points obtained in step 2 using the sparse optical flow method. The optical flow method estimates the motion of pixels across successive frames of an image sequence; the sparse variant processes only selected pixels rather than estimating the flow of every pixel in the image, so its computational cost is comparatively small.
Assuming that only small displacements occur between two adjacent frames, the following optical flow equation is established:

$$\begin{bmatrix} I_x(q_1) & I_y(q_1) \\ I_x(q_2) & I_y(q_2) \\ \vdots & \vdots \\ I_x(q_n) & I_y(q_n) \end{bmatrix} \begin{bmatrix} V_x \\ V_y \end{bmatrix} = -\begin{bmatrix} I_T(q_1) \\ I_T(q_2) \\ \vdots \\ I_T(q_n) \end{bmatrix} \qquad (5)$$

In formula (5):
Vx and Vy respectively denote the optical flow vector components in the x and y directions, where the x direction is the image pixel column direction and the y direction is the image pixel row direction;
q1, q2, …, qn denote the pixels in the neighborhood of the feature point (x, y);
Ix(q1), Ix(q2), …, Ix(qn) denote the partial derivatives in the x direction of the gray values of pixels q1, q2, …, qn on the t-th frame image;
Iy(q1), Iy(q2), …, Iy(qn) denote the partial derivatives in the y direction of the gray values of pixels q1, q2, …, qn on the t-th frame image;
IT(q1), IT(q2), …, IT(qn) denote the partial derivatives with respect to time T of the gray values of pixels q1, q2, …, qn on the t-th frame image.
Let v = (vx, vy), where vx and vy are the elements in Vx and Vy; an approximate solution can be obtained by the weighted least squares method:

$$v = \left(A^{T} W A\right)^{-1} A^{T} W\, b \qquad (6)$$

where A is the n × 2 matrix whose i-th row is [Ix(qi), Iy(qi)], b = −[IT(q1), …, IT(qn)]ᵀ, and W = diag(w1, …, wn). In formula (6), wi is the weight of point qi; the initial value of each weight is set to 1.
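Formula (6) is a weighted least-squares (Lucas-Kanade) solve over the neighborhood pixels; a minimal NumPy sketch (the function name and array layout are assumptions):

```python
import numpy as np

def lucas_kanade_flow(Ix, Iy, It, weights=None):
    """Weighted least-squares estimate of the flow vector v = (vx, vy)
    from the per-pixel partial derivatives over the neighborhood q1..qn,
    as in formula (6): v = (A^T W A)^-1 A^T W b with b = -It."""
    Ix, Iy, It = (np.asarray(a, dtype=float).ravel() for a in (Ix, Iy, It))
    w = np.ones_like(Ix) if weights is None else np.asarray(weights, dtype=float).ravel()
    A = np.stack([Ix, Iy], axis=1)        # n x 2 gradient matrix
    b = -It                               # right-hand side of formula (5)
    AtW = A.T * w                         # apply per-point weights (initially all 1)
    v = np.linalg.solve(AtW @ A, AtW @ b) # solve (A^T W A) v = A^T W b
    return v                              # (vx, vy)
```

For a pure translation the temporal gradient satisfies It = −(Ix·vx + Iy·vy), so the solver recovers the true flow exactly in that synthetic case.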
Step 3.2, for each image, iteratively compute the optical flow vectors between it and the adjacent image at each match point.
Specifically:
Read several adjacent raw frames and compute the gray gradients in the x, y, and T directions respectively; initialize the optical flow field to zero and compute the optical flow vector of each match point by iterating the optical flow equation in a loop; terminate the iteration when the change in a match point's optical flow vector between two successive iterations is less than a preset error threshold.
Step 3.3, compute the modulus d(Vt) of each optical flow vector, then compute the mean and RMS error of the moduli of all optical flow vectors on the image; with the threshold W set to three times the RMS error, reject all match points whose error is greater than W as gross errors.
The match points finally remaining are the result obtained by this method.
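The three-times-RMS rejection of step 3.3 can be sketched as follows (illustrative only; names are assumptions):

```python
import numpy as np

def reject_gross_errors(flow_vectors, k=3.0):
    """Keep match points whose flow-vector modulus deviates from the mean
    by no more than k times the RMS error (step 3.3 uses W = 3 * RMS)."""
    d = np.linalg.norm(np.asarray(flow_vectors, dtype=float), axis=1)  # moduli d(Vt)
    mean = d.mean()
    rms = np.sqrt(np.mean((d - mean) ** 2))
    return np.abs(d - mean) <= k * rms   # boolean mask of surviving points
```

A single wildly inconsistent flow vector among otherwise uniform motion is flagged and removed, while the consistent points survive.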

Claims (10)

1. A video satellite image dense matching method based on optical flow estimation, characterized by comprising:
Step 1, coarsely matching the video satellite images using the SGM matching method, this step further comprising:
1.1 performing pixel-by-pixel disparity computation on the video satellite images to obtain disparity images;
1.2 computing the minimum disparity path of each pixel, i.e., the correct match of each pixel, as a two-dimensional constraint on the matching;
1.3 under the two-dimensional constraint matching obtained in sub-step 1.2, performing SGM matching on the disparity images obtained in sub-step 1.1 to complete the matching of corresponding points between the left and right video satellite images and obtain match points;
Step 2, finely matching the video satellite images using the PMVS matching method, this step further comprising:
2.1 dividing the images into grid cells and, among the match points of each cell, selecting points whose interest value is a local maximum as feature points;
2.2 taking each image of the video satellite imagery in turn as the reference image, selecting from the other images those whose principal optical axis makes an angle of less than 60° with that of the reference image, denoting them calculation images, matching the reference image with each calculation image based on the feature points, and generating a set of seed patches;
2.3 judging whether the neighborhood of a seed patch contains a patch whose distance to the seed patch is less than one pixel or an existing patch whose average correlation coefficient is greater than a threshold; if so, not diffusing into that neighborhood and performing this step for the next seed patch; otherwise, diffusing the seed patch into its neighborhood and matching the reference image and the calculation images using the new patch obtained by diffusion; the threshold being an empirical value;
Step 3, refining the matching result of step 2 using the optical flow method, this step further comprising:
3.1 constructing the optical flow equation between two adjacent frames:

$$\begin{bmatrix} I_x(q_1) & I_y(q_1) \\ \vdots & \vdots \\ I_x(q_n) & I_y(q_n) \end{bmatrix} \begin{bmatrix} V_x \\ V_y \end{bmatrix} = -\begin{bmatrix} I_T(q_1) \\ \vdots \\ I_T(q_n) \end{bmatrix}$$

wherein Vx and Vy respectively denote the optical flow vectors in the x and y directions; q1, q2, …, qn denote the pixels in the neighborhood of the feature point (x, y); Ix(q1), Ix(q2), …, Ix(qn) denote the partial derivatives in the x direction of the gray values of pixels q1, q2, …, qn on the t-th frame image; Iy(q1), Iy(q2), …, Iy(qn) denote the partial derivatives in the y direction of the gray values of pixels q1, q2, …, qn on the t-th frame image; IT(q1), IT(q2), …, IT(qn) denote the partial derivatives with respect to time T of the gray values of pixels q1, q2, …, qn on the t-th frame image;
3.2 for each image, iteratively computing, based on the optical flow equation, the optical flow vectors between it and the adjacent image at each match point;
3.3 computing the modulus d(Vt) of each optical flow vector and the RMS error of the moduli of all optical flow vectors on the image; with the threshold W set to three times the RMS error, rejecting all match points whose error is greater than W as gross errors.
2. The video satellite image dense matching method based on optical flow estimation according to claim 1, characterized in that:
step 1.1 specifically comprises:
separately computing the gray values at the midpoints between p′i and its left neighbor p′i−1 and between p′i and its right neighbor p′i+1, thereby obtaining the minimum gray value Imin and maximum gray value Imax in the neighborhood of p′i; the dissimilarity used for the disparity computation is then max[0, IL(pi) − Imax, Imin − IL(pi)];
wherein pi and p′i respectively denote pixels on the left and right scan lines, IL(pi) denotes the gray value at point pi on the left scan line, and IR(p′i) denotes the gray value at p′i on the right scan line obtained by linear interpolation of the sampled points.
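The dissimilarity of claim 2 matches the Birchfield-Tomasi sampling-insensitive cost; a minimal sketch under that reading (names are assumptions):

```python
def bt_dissimilarity(IL_p, IR, i):
    """Sampling-insensitive dissimilarity of claim 2.
    IL_p : gray value I_L(p_i) on the left scan line
    IR   : right scan line as a sequence of gray values
    i    : index of p'_i on the right scan line (0 < i < len(IR) - 1)
    The midpoints between p'_i and its neighbours are linearly interpolated,
    giving the intensity extremes I_min / I_max near p'_i."""
    left_mid = 0.5 * (IR[i] + IR[i - 1])
    right_mid = 0.5 * (IR[i] + IR[i + 1])
    Imin = min(left_mid, right_mid, IR[i])
    Imax = max(left_mid, right_mid, IR[i])
    return max(0, IL_p - Imax, Imin - IL_p)
```

A left gray value falling inside the interpolated interval [Imin, Imax] costs nothing; values outside are penalised by their distance to the interval.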
3. The video satellite image dense matching method based on optical flow estimation according to claim 1, characterized in that:
step 1.2 specifically comprises:
for each pixel p, setting paths in several directions around it and computing the disparity path cost along each path separately; the sum of the path costs over the several paths is the disparity path cost of pixel p, and the smallest disparity path is taken as the correct match of pixel p.
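The per-path term of claim 3 corresponds to the standard SGM cost recursion along one direction; a minimal sketch for a single left-to-right scanline (the penalty values P1 and P2 and the array layout are assumptions):

```python
import numpy as np

def aggregate_path(cost, P1=10.0, P2=120.0):
    """Aggregate the matching cost along one scanline direction.
    cost: (width, ndisp) array of per-pixel, per-disparity matching costs."""
    w, nd = cost.shape
    L = np.empty_like(cost, dtype=float)
    L[0] = cost[0]
    for x in range(1, w):
        prev = L[x - 1]
        best_prev = prev.min()
        # SGM recursion: small disparity changes pay P1, large jumps pay P2
        candidates = np.stack([
            prev,                              # same disparity
            np.roll(prev, 1) + P1,             # disparity - 1
            np.roll(prev, -1) + P1,            # disparity + 1
            np.full(nd, best_prev + P2),       # any larger jump
        ])
        candidates[1, 0] = np.inf              # invalidate wrap-around entries
        candidates[2, -1] = np.inf
        L[x] = cost[x] + candidates.min(axis=0) - best_prev
    return L
```

With a uniformly cheapest disparity, the aggregated cost stays minimal at that disparity along the whole line.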
4. The video satellite image dense matching method based on optical flow estimation according to claim 1, characterized in that:
step 1.3 specifically comprises:
1.3a performing SGM matching at half resolution to obtain a matched disparity image;
1.3b increasing the resolution and performing SGM matching on the current disparity image at the current resolution;
1.3c continuing to increase the resolution and repeating step 1.3b until the resolution reaches the original resolution.
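The coarse-to-fine scheme of claim 4 can be sketched as a pyramid loop (illustrative only; `match_at_level` stands in for one SGM pass and is a hypothetical callback):

```python
import numpy as np

def coarse_to_fine(levels, match_at_level):
    """Hierarchical matching: start at the coarsest level, then repeatedly
    double the resolution, using the upsampled disparity of the previous
    level to initialise the next matching pass.
    levels: images ordered from coarse to fine."""
    disparity = None
    for img in levels:
        if disparity is not None:
            # upsample the previous result and scale disparities accordingly
            disparity = np.kron(disparity, np.ones((2, 2))) * 2
        disparity = match_at_level(img, disparity)
    return disparity
```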
5. The video satellite image dense matching method based on optical flow estimation according to claim 1, characterized in that:
step 1 further comprises a step of rejecting mismatches, specifically:
mapping each corresponding point in the left image into the right image; if the disparities obtained from the two matchings differ, the corresponding point is regarded as an invalid match and rejected.
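The mismatch rejection of claim 5 is a left-right consistency check; a minimal sketch (the tolerance and names are assumptions):

```python
import numpy as np

def left_right_check(disp_left, disp_right, tol=1):
    """Map each left pixel into the right image via its disparity; the
    correspondence is valid only if the right image's disparity there
    agrees (within tol)."""
    h, w = disp_left.shape
    valid = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            xr = x - int(disp_left[y, x])   # corresponding column in right image
            if 0 <= xr < w and abs(disp_left[y, x] - disp_right[y, xr]) <= tol:
                valid[y, x] = True
    return valid
```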
6. The video satellite image dense matching method based on optical flow estimation according to claim 1, characterized in that:
step 2.2 specifically comprises:
2.2a for each feature point f on the reference image, finding its candidate match points f′ on the calculation images;
2.2b obtaining a model point by forward intersection for each pair (f, f′), computing the distance of each model point from the camera center of the reference image, and sorting the model points by distance from near to far; the camera of the reference image is hereinafter abbreviated as the reference camera;
2.2c in the sorted order, taking each model point in turn to initialize the center c(p) of a seed patch; one edge of the patch is parallel to the x-axis of the reference camera, and the unit normal vector n(p) of the patch is the unit vector pointing from the center c(p) towards the reference camera center; the initialized patch must meet the following requirements: the angle between n(p) and each viewing ray of the patch is less than 60 degrees, and the correlation coefficient between the patch projected onto a calculation image and onto the reference image is greater than a preset first correlation coefficient threshold; if the requirements are not met, the model point fails and the next model point is taken to re-execute this step; otherwise, step 2.2d is executed; the first correlation coefficient threshold is an empirical value;
2.2d optimizing the center coordinates and unit normal vector of the patch so that the average of the correlation coefficients between the projections of the patch onto each calculation image and onto the reference image is maximized; if, after optimization, the average correlation coefficient of the patch is greater than the first correlation coefficient threshold, the patch is considered successfully generated: all images are divided into grid cells, the patch is projected onto the images, the coordinates of the cell containing the patch are recorded, all feature points in that cell are removed, and the next model point is considered by re-executing step 2.2c; otherwise, the patch is abandoned and the next model point is considered by re-executing step 2.2c.
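The geometric part of the initialisation test in sub-step 2.2c can be sketched as follows (illustrative; names are assumptions, and the correlation-coefficient test is omitted):

```python
import numpy as np

def init_patch(center, ref_cam_center, cam_centers, max_angle_deg=60.0):
    """Sub-step 2.2c sketch: the patch normal n(p) is the unit vector from
    the patch centre c(p) towards the reference camera centre; the patch is
    accepted only if every viewing ray makes an angle below 60 degrees
    with n(p). Returns (center, normal) on success, None on failure."""
    c = np.asarray(center, dtype=float)
    n = np.asarray(ref_cam_center, dtype=float) - c
    n /= np.linalg.norm(n)
    for cam in cam_centers:
        ray = np.asarray(cam, dtype=float) - c
        ray /= np.linalg.norm(ray)
        angle = np.degrees(np.arccos(np.clip(np.dot(n, ray), -1.0, 1.0)))
        if angle >= max_angle_deg:
            return None  # this view fails the angle requirement
    return c, n
```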
7. The video satellite image dense matching method based on optical flow estimation according to claim 1, characterized in that:
the matching of the reference image and the calculation images using the new patch obtained by diffusion in step 2.3 specifically comprises:
the new patch obtained by diffusion initially has the same unit normal vector as the seed patch, and its center is the intersection of the ray through the center of the neighboring grid cell of the seed patch with the plane of the seed patch;
optimizing the center coordinates and unit normal vector of the new patch so that the average of the correlation coefficients between the projections of the new patch onto each calculation image and onto the reference image is maximized; matching the calculation images and the reference image using the optimized new patch; if the number of matched calculation images is greater than a preset count threshold, a patch is considered to have been diffused successfully; otherwise the diffusion fails, the failed patch is skipped, and step 2.3 is executed for the next seed patch.
8. The video satellite image dense matching method based on optical flow estimation according to claim 1, characterized in that:
step 2 further comprises filtering the matching results, specifically:
(1) patches within the same grid cell whose average correlation coefficients differ by more than 0.6 are filtered out;
(2) the patch with the smallest depth in each grid cell is found; patches in that cell whose distance to this patch is greater than the average distance and whose normal vector makes an angle greater than 90° with that of this patch are filtered out, the average distance here being the mean of the distances between this patch and all patches in the cell;
(3) a quadric surface is fitted to each patch and the patches in its neighborhood; patches for which the sum of the neighborhood patches' residuals is greater than a residual threshold are filtered out, the residual threshold being an empirical value;
(4) the patches in each grid cell are clustered according to their mutual object-space distances, and patch groups whose number of patches is less than a preset count threshold are removed, the count threshold being an empirical value.
9. The video satellite image dense matching method based on optical flow estimation according to claim 1, characterized in that:
step 3.2 specifically comprises:
for two adjacent frames, computing the gray gradients in the x, y, and T directions respectively; initializing the optical flow field to zero and computing the optical flow vector of each match point by iterating the optical flow equation in a loop; terminating the iteration when the change in a match point's optical flow vector between two successive iterations is less than a preset error threshold.
10. A video satellite image dense matching system based on optical flow estimation, characterized by comprising:
(1) a coarse matching module for coarsely matching video satellite images using the SGM matching method;
the coarse matching module further comprising:
a disparity computation module for performing pixel-by-pixel disparity computation on the video satellite images to obtain disparity images;
a two-dimensional constraint matching module for computing the minimum disparity path of each pixel, i.e., the correct match of each pixel, as a two-dimensional constraint on the matching;
an SGM matching module for performing SGM matching on the disparity images under the two-dimensional constraint matching to complete the matching of corresponding points between the left and right video satellite images and obtain match points;
(2) a fine matching module for finely matching the video satellite images using the PMVS matching method;
the fine matching module further comprising:
a feature point detection module for dividing the images into grid cells and, among the match points of each cell, selecting points whose interest value is a local maximum as feature points;
a seed patch set generation module for taking each image of the video satellite imagery in turn as the reference image, selecting from the other images those whose principal optical axis makes an angle of less than 60° with that of the reference image, denoting them calculation images, matching the reference image with each calculation image based on the feature points, and generating a set of seed patches;
a PMVS matching module for judging whether the neighborhood of a seed patch contains a patch whose distance to the seed patch is less than one pixel or an existing patch whose average correlation coefficient is greater than a threshold; if so, not diffusing into that neighborhood and performing this step for the next seed patch; otherwise, diffusing the seed patch into its neighborhood and matching the reference image and the calculation images using the new patch obtained by diffusion; the threshold being an empirical value;
(3) a refined matching module for refining the matching result of the fine matching module using the optical flow method;
the refined matching module further comprising:
an optical flow equation construction module for constructing the optical flow equation between two adjacent frames:

$$\begin{bmatrix} I_x(q_1) & I_y(q_1) \\ \vdots & \vdots \\ I_x(q_n) & I_y(q_n) \end{bmatrix} \begin{bmatrix} V_x \\ V_y \end{bmatrix} = -\begin{bmatrix} I_T(q_1) \\ \vdots \\ I_T(q_n) \end{bmatrix}$$

wherein Vx and Vy respectively denote the optical flow vectors in the x and y directions; q1, q2, …, qn denote the pixels in the neighborhood of the feature point (x, y); Ix(q1), Ix(q2), …, Ix(qn) denote the partial derivatives in the x direction of the gray values of pixels q1, q2, …, qn on the t-th frame image; Iy(q1), Iy(q2), …, Iy(qn) denote the partial derivatives in the y direction of the gray values of pixels q1, q2, …, qn on the t-th frame image; IT(q1), IT(q2), …, IT(qn) denote the partial derivatives with respect to time T of the gray values of pixels q1, q2, …, qn on the t-th frame image;
an optical flow vector computation module for iteratively computing, for each image based on the optical flow equation, the optical flow vectors between it and the adjacent image at each match point;
a gross error rejection module for computing the modulus d(Vt) of each optical flow vector and the RMS error of the moduli of all optical flow vectors on the image; with the threshold W set to three times the RMS error, rejecting all match points whose error is greater than W as gross errors.
CN201710326528.1A 2017-05-10 2017-05-10 Video satellite image dense Stereo Matching method and system based on optical flow estimation Active CN107194334B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710326528.1A CN107194334B (en) 2017-05-10 2017-05-10 Video satellite image dense Stereo Matching method and system based on optical flow estimation


Publications (2)

Publication Number Publication Date
CN107194334A CN107194334A (en) 2017-09-22
CN107194334B true CN107194334B (en) 2019-09-10

Family

ID=59872885

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710326528.1A Active CN107194334B (en) 2017-05-10 2017-05-10 Video satellite image dense Stereo Matching method and system based on optical flow estimation

Country Status (1)

Country Link
CN (1) CN107194334B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109509217B (en) * 2018-11-06 2022-11-15 辽宁工程技术大学 Low-rank sequence image matching point gross error detection method for motion structure similarity
CN117152221B (en) * 2023-10-26 2024-01-16 山东科技大学 Image non-rigid registration method, system, equipment and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104867183A (en) * 2015-06-11 2015-08-26 华中科技大学 Three-dimensional point cloud reconstruction method based on region growing
CN104864853A (en) * 2015-05-08 2015-08-26 同济大学 Method for detecting posture flutter of high resolution three linear array satellites along rail direction
CN105069843A (en) * 2015-08-22 2015-11-18 浙江中测新图地理信息技术有限公司 Rapid extraction method for dense point cloud oriented toward city three-dimensional modeling
CN105160702A (en) * 2015-08-20 2015-12-16 武汉大学 Stereoscopic image dense matching method and system based on LiDAR point cloud assistance
CN105205808A (en) * 2015-08-20 2015-12-30 武汉大学 Multi-vision image dense coupling fusion method and system based on multiple characteristics and multiple constraints
CN105225233A (en) * 2015-09-15 2016-01-06 武汉大学 A kind of stereopsis dense Stereo Matching method and system expanded based on two classes
JP2016009333A (en) * 2014-06-24 2016-01-18 トヨタ自動車株式会社 Roadway boundary estimation device and roadway boundary estimation method
CN106228593A (en) * 2015-05-28 2016-12-14 长沙维纳斯克信息技术有限公司 A kind of image dense Stereo Matching method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8897546B2 (en) * 2011-09-29 2014-11-25 Texas Instruments Incorporated Semi-global stereo correspondence processing with lossless image decomposition


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Orthorectification of optical satellite remote sensing stereo imagery controlled by SRTM-DEM; Zhang Hao et al.; Acta Geodaetica et Cartographica Sinica; 2016-03-31; Vol. 45, No. 3; full text
Progress in satellite video processing and applications; Zhang Guo; Journal of Applied Sciences; 2016-07-31; Vol. 34, No. 4; full text
Multi-primitive multi-view image matching method with adaptive triangle constraint; Zhang Yunsheng; China Doctoral Dissertations Full-text Database, Information Science and Technology; 2014-07-15, No. 07; full text

Also Published As

Publication number Publication date
CN107194334A (en) 2017-09-22


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant