CN108682029A - Multi-scale dense matching method and system - Google Patents

Info

Publication number
CN108682029A
CN108682029A (application CN201810240775.4A)
Authority
CN
China
Prior art keywords
image
layer
visibility
depth
depth map
Prior art date
Legal status
Pending
Application number
CN201810240775.4A
Other languages
Chinese (zh)
Inventor
高广
胡洋
支晓栋
Current Assignee
Shenzhen Science And Technology Ltd Of Flying Horse Robot
Original Assignee
Shenzhen Science And Technology Ltd Of Flying Horse Robot
Priority date
Filing date
Publication date
Application filed by Shenzhen Science And Technology Ltd Of Flying Horse Robot
Priority claimed from CN201810240775.4A
Publication of CN108682029A

Classifications

    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/35 Determination of transform parameters for the alignment of images, i.e. image registration, using statistical methods
    • G06T17/00 Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T7/50 Depth or shape recovery
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Processing Or Creating Images (AREA)

Abstract

This application discloses a multi-scale dense matching method and system. The multi-scale dense matching method includes: building an image pyramid for each of several images and computing each image's set of relevant matching images, where each image pyramid has layers 0 to n-1 and each layer corresponds to a scale; building a depth range for each image; starting from layer n-1 of each image pyramid, computing the layer's depth map, normal map, and visibility map, then moving to the next layer of each pyramid and using the previous layer's depth map, normal map, and visibility map as the initial values for this layer's computation of its depth map, normal map, and visibility map, until layer 0 of each pyramid is reached; and fusing the layer-0 depth maps, normal maps, and visibility maps of all image pyramids to generate a point cloud. The application performs dense matching within a depth-fusion framework that jointly considers depth, normal, and visibility information under both photometric-consistency and geometric-consistency constraints, thereby improving accuracy; its multi-scale strategy reduces memory consumption and improves efficiency.

Description

Multi-scale dense matching method and system
Technical field
The embodiments of the application relate to the technical field of image processing, and more particularly to a patch-based multi-scale dense matching method and system.
Background technology
Dense matching is one of the core technologies of three-dimensional reconstruction. Using only image information, it can recover the three-dimensional point cloud of an object. The technology has been widely used in surveying and mapping production, digital city modeling, virtual reality (VR), augmented reality (AR), and other fields. Point clouds produced by dense matching are comparable to laser point clouds while being cheaper to acquire and denser.
The technology also has broad applications in fields such as large-scale scene modeling and small-object modeling. In large-scene reconstruction, dense matching is the most time-consuming step of the pipeline. Common algorithms fall into two classes: dense matching based on binocular stereo and dense matching based on multi-view stereo. Binocular methods, such as semi-global matching (SGM), are more efficient but, because they do not consider multiple images simultaneously, suffer from lower robustness and accuracy. Multi-view methods, such as patch-based multi-view stereo (PMVS), are computationally expensive; many of them are poorly suited to parallelization and require large amounts of memory.
These factors limit the practical applicability of dense matching, especially for the reconstruction of large scenes, which calls for an algorithm that is parallelizable, robust, memory-efficient, and accurate.
Summary of the invention
In view of the technical problems in the prior art, such as low robustness, low accuracy, high computational cost, poor suitability for parallelization, and high memory requirements, the application provides a patch-based multi-scale dense matching method and system that improve accuracy, reduce memory consumption, and improve efficiency.
The technical solution adopted by the application to solve the above technical problems is as follows:
One or more embodiments of the application disclose a multi-scale dense matching method, including:
building an image pyramid for each of several images, and computing each image's set of relevant matching images, wherein each image pyramid has layers 0 to n-1 and each layer corresponds to a scale;
building the depth range of each image;
starting from layer n-1 of each image pyramid, computing the layer's depth map, normal map, and visibility map, then moving to the next layer of each pyramid and using the previous layer's depth map, normal map, and visibility map as the initial values for this layer's computation of its depth map, normal map, and visibility map, until layer 0 of each pyramid is reached; and
fusing the layer-0 depth maps, normal maps, and visibility maps of all image pyramids to generate a point cloud.
In one or more embodiments of the application, building an image pyramid for each of several images and computing each image's set of relevant matching images specifically includes:
taking the current image as the reference image;
building the set of images relevant to the reference image according to the visibility information of the sparse point cloud produced by aerial triangulation; for each relevant image, collecting the sparse points visible in both that image and the reference image, projecting those points onto the reference image, building a two-dimensional triangulation network over them, and summing the total area of the triangulation; and
sorting the images of the relevant image set in descending order of that area, and selecting the top N of them as the reference image's set of relevant matching images.
In one or more embodiments of the application, building the depth range of each reference image specifically includes: computing the depth of each sparse point visible in the reference image, taking the range of those depths, and expanding the obtained range by 10% to obtain the final depth range.
In one or more embodiments of the application, computing, starting from layer n-1 of each image pyramid, the depth map, normal map, and visibility map of each layer down to layer 0 specifically includes:
computing the depth map, normal map, and visibility map of layer n-1 of each image pyramid;
taking each image in the set of relevant matching images as an image to be matched, and performing depth-scan optimization on the reference image; and
moving to the next layer of each image pyramid and using the previous layer's depth map, normal map, and visibility map as this layer's initial values to compute this layer's depth map, normal map, and visibility map, until layer 0 of each image pyramid is reached.
In one or more embodiments of the application, projection error, normal-vector error, and visibility information are considered during depth-map fusion.
One or more embodiments of the application also disclose a multi-scale dense matching system. The multi-scale dense matching system includes a processor and a memory, the memory storing a multi-scale dense matching unit, characterized in that the multi-scale dense matching unit implements the multi-scale dense matching method described above.
The application provides a patch-based multi-scale dense matching method and system that perform dense matching within a depth-fusion framework, jointly considering depth, normal, and visibility information under both photometric-consistency and geometric-consistency constraints to improve accuracy, and adopting a multi-scale strategy to reduce memory consumption and improve efficiency.
Description of the drawings
One or more embodiments are illustrated by the figures in the accompanying drawings. These illustrative descriptions do not limit the embodiments. Elements with the same reference numerals in the drawings denote similar elements, and unless otherwise stated, the figures in the drawings are not drawn to scale.
Fig. 1 is a flowchart of an embodiment of the multi-scale dense matching method of the application;
Fig. 2 is a flowchart of a specific embodiment of step S30 of the multi-scale dense matching method of Fig. 1;
Fig. 3 is a structural diagram of an embodiment of the multi-scale dense matching system of the application.
Detailed description
To make the objectives, technical solutions, and advantages of the application clearer, the application is further elaborated below with reference to the drawings and embodiments. It should be understood that the specific embodiments described herein are intended only to explain the application, not to limit it.
Unless otherwise defined, all technical and scientific terms used in this specification have the meanings commonly understood by those skilled in the technical field of the application. The terms used in this specification in the description of the application are for the purpose of describing specific embodiments only and are not intended to limit the application. The term "and/or" as used in this specification includes any and all combinations of one or more of the associated listed items.
In addition, the technical features involved in the embodiments of the application described below may be combined with one another as long as they do not conflict.
The application is described in further detail below with reference to the drawings and specific embodiments.
Fig. 1 is a flowchart of an embodiment of the multi-scale dense matching method of the application. As shown in Fig. 1, the embodiment of the application provides a multi-scale dense matching method, including:
Step S10: building an image pyramid P (not shown) for each of several images G (not shown), and computing the set of relevant matching images Vm (not shown) of each image G.
Each image G corresponds to one image pyramid P. An image pyramid P has n layers (layer 0 to layer n-1, where n is an integer), and each layer corresponds to one "scale" (or "resolution", layer 0 corresponding to the resolution of the original image). In one embodiment of the application, step S10 specifically includes:
taking the current image G as the reference image R (not shown);
building the set of images relevant to the reference image R according to the visibility information v (not shown) of the sparse point cloud produced by the result of aerial triangulation; for each relevant image, collecting the sparse points visible in both that image and the reference image R, projecting those points onto the reference image R, building a two-dimensional triangulation network over them, and summing the total area of the triangulation; and
sorting the images relevant to the reference image R in descending order of the counted area, and selecting the top N (not shown) of them as the set of relevant matching images Vm of the reference image R.
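Under stated assumptions, the selection step above can be sketched as follows in Python. The convex-hull area is used as a simplified stand-in for the total area of the two-dimensional triangulation network (both measure the coverage of the shared points in the reference image), and the names `hull_area` and `select_matching_images` are illustrative, not from the patent:

```python
def hull_area(points):
    """Area of the 2D convex hull (monotone chain + shoelace).

    Simplified stand-in for the total area of the 2D triangulation
    network built over the projected shared points."""
    pts = sorted(set(points))
    if len(pts) < 3:
        return 0.0
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    hull = lower[:-1] + upper[:-1]
    area = 0.0
    for i in range(len(hull)):
        x1, y1 = hull[i]
        x2, y2 = hull[(i + 1) % len(hull)]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

def select_matching_images(ref_visible, other_visible, ref_proj, top_n):
    """ref_visible: set of sparse-point ids seen by the reference image R.
    other_visible: {image_id: set of point ids seen by that image}.
    ref_proj: {point_id: (x, y) projection of the point in R}.
    Returns the top_n image ids ranked by covered area in R (descending)."""
    scored = []
    for img, vis in other_visible.items():
        shared = ref_visible & vis
        scored.append((hull_area([ref_proj[p] for p in shared]), img))
    scored.sort(key=lambda t: (-t[0], t[1]))
    return [img for _, img in scored[:top_n]]
```

Images that share many well-spread points with the reference image cover a large area and rank first, which is the intent of the area criterion.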
Step S20: building the depth range Dr (not shown) of each image G.
In one embodiment of the application, step S20 specifically includes: computing the depth of each sparse point visible in the reference image R, taking the range of those depths, and expanding the obtained range by 10% to obtain the final depth range Dr.
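As a minimal sketch of step S20, assuming the 10% expansion pads each end of the range by 10% of its span (the text does not specify the exact expansion rule), the depth range can be computed as:

```python
def depth_range(depths, pad=0.10):
    """Depth range of the sparse points visible in a reference image,
    padded on each side by `pad` times the span. The per-side-10%
    reading of "expanding by 10%" is an assumption."""
    d_min, d_max = min(depths), max(depths)
    span = d_max - d_min
    return d_min - pad * span, d_max + pad * span
```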
Step S30: starting from layer n-1 of each image pyramid P, computing the layer's depth map Fd (not shown), normal map Fn (not shown), and visibility map Fv (not shown).
The depth map Fd stores the depth of each pixel, the normal map Fn stores the normal vector of each pixel, and the visibility map Fv stores each pixel's visibility probability Pv (not shown) with respect to the set of relevant matching images Vm. Step S30 is the depth-optimization process. Fig. 2 is a flowchart of a specific embodiment of step S30 of the multi-scale dense matching method of Fig. 1. As shown in Fig. 2, step S30 specifically includes:
Step S31: computing the depth map Fd, normal map Fn, and visibility map Fv of the current layer, starting from layer n-1 of each image pyramid P.
In one embodiment of the application, if the current layer is layer n-1 of the image pyramid P, the initial values of the depth map, normal map, and visibility map are computed as follows: the depth map Fd and normal map Fn of the reference image R are filled with random values, the depths drawn within the depth range Dr, and every value of the visibility map Fv is set to 0.5. For any other layer, the depth map Fd, normal map Fn, and visibility map Fv produced at the previous layer serve as the initial values of the current layer.
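The random initialization at the coarsest layer can be sketched as follows. The camera-facing normal convention (z <= 0) and the flat row-major map layout are assumptions for illustration only:

```python
import random

def init_maps(width, height, d_range, seed=0):
    """Random initialization at the coarsest pyramid layer: depths drawn
    uniformly inside the depth range, random unit normals flipped to
    face the camera (z <= 0, an assumed convention), and visibility
    probability 0.5 everywhere. Maps are flat, row-major lists."""
    rng = random.Random(seed)
    d_min, d_max = d_range
    depth, normal, vis = [], [], []
    for _ in range(width * height):
        depth.append(rng.uniform(d_min, d_max))
        # rejection-sample a direction inside the unit ball, normalize
        while True:
            n = (rng.uniform(-1, 1), rng.uniform(-1, 1), rng.uniform(-1, 1))
            norm = (n[0]**2 + n[1]**2 + n[2]**2) ** 0.5
            if 1e-6 < norm <= 1.0:
                break
        n = (n[0] / norm, n[1] / norm, n[2] / norm)
        if n[2] > 0:
            n = (-n[0], -n[1], -n[2])
        normal.append(n)
        vis.append(0.5)
    return depth, normal, vis
```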
Step S32: taking each image G in the set of relevant matching images Vm as an image to be matched, and performing depth-scan optimization on the reference image R.
In one embodiment of the application, the scan only considers the influence of the previous pixel along the current scan direction. The scan order is top-to-bottom, left-to-right, bottom-to-top, and right-to-left; the four scans together count as one iteration, and in total five iterations may be performed. Each scan is initialized with the result of the previous scan, and during each scan the depth d, normal vector n, and visibility v are optimized using the expectation-maximization (EM) algorithm.
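The four scan orders and the previous-pixel dependency can be sketched as follows. The `propagate` step, in which a pixel adopts its predecessor's hypothesis when that lowers its matching cost, is one common PatchMatch-style reading of the depth-scan optimization; the patent does not spell out this rule verbatim:

```python
def scan_passes(width, height):
    """The four raster scan orders of one iteration; each pass yields
    (pixel, predecessor) pairs, the predecessor being the previously
    visited pixel along the scan direction (None at a line start)."""
    def rows(xs, ys):
        order = []
        for y in ys:
            prev = None
            for x in xs:
                order.append(((x, y), prev))
                prev = (x, y)
        return order
    def cols(xs, ys):
        order = []
        for x in xs:
            prev = None
            for y in ys:
                order.append(((x, y), prev))
                prev = (x, y)
        return order
    xs, ys = list(range(width)), list(range(height))
    return [cols(xs, ys),                      # top-to-bottom
            rows(xs, ys),                      # left-to-right
            cols(xs, list(reversed(ys))),      # bottom-to-top
            rows(list(reversed(xs)), ys)]      # right-to-left

def propagate(cost, hypo, width, height):
    """One iteration of sequential propagation: each pixel adopts its
    predecessor's hypothesis when that lowers its matching cost."""
    for order in scan_passes(width, height):
        for px, prev in order:
            if prev is not None and cost(px, hypo[prev]) < cost(px, hypo[px]):
                hypo[px] = hypo[prev]
    return hypo
```

Because each pixel depends only on its predecessor along the scan line, a good hypothesis can sweep across the whole image within a single iteration.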
For each pixel, a sample image set S (not shown) is computed according to the pixel's computed visibility probability Pv, where S is the subset of the relevant matching images Vm whose visibility probability Pv exceeds a given threshold. The depth d and normal vector n of each pixel are then estimated by minimizing the accumulated matching cost over S, where ρm is the normalized cross-correlation (NCC) matching cost on matching image m (not shown) under the hypothesis (d, n); it is computed by back-projecting the match window of the reference image R through depth d and normal vector n onto matching image m.
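A minimal sketch of the NCC score ρm over two already-resampled match windows follows; the back-projection onto image m is omitted, and the 1 - NCC cost convention is an assumption (the patent does not fix the normalization):

```python
def ncc(a, b):
    """Normalized cross-correlation of two equal-size match windows,
    each a flat list of intensities; returns a value in [-1, 1]."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    da = [x - ma for x in a]
    db = [x - mb for x in b]
    num = sum(x * y for x, y in zip(da, db))
    den = (sum(x * x for x in da) ** 0.5) * (sum(y * y for y in db) ** 0.5)
    return num / den if den > 0 else 0.0

def ncc_cost(a, b):
    """Matching cost derived from NCC: 0 for a perfect match, 1 for
    uncorrelated windows, 2 for perfect anti-correlation (a common
    convention, assumed here)."""
    return 1.0 - ncc(a, b)
```

NCC is invariant to affine intensity changes (gain and bias), which is why a window and a brightened copy of it still score 1.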
At this point, if the current layer is layer 0 of the image pyramid P and this is the last iteration, geometric information is also considered. Given the depth d and normal vector n, a pixel Px of the reference image R is projected onto matching image m and then back-projected onto the reference image R, yielding a pixel Py; the projection error is Ep = |Px - Py|. The matching cost is then augmented according to formula (2) with a penalty on Ep, where η is a weight and Emax an error threshold; typically η may be taken as 0.5 and Emax as 3 pixels. Formula (2) takes the geometric structure of the object into account and can substantially improve the accuracy of the final point cloud.
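The forward-backward projection error Ep and a plausible form of the geometric-consistency penalty can be sketched as follows. Since formula (2) itself appears only in the patent drawings, the truncated-linear form η·min(Ep, Emax)/Emax is an assumption consistent with the stated roles of η (weight) and Emax (threshold):

```python
def reprojection_error(px, py):
    """Euclidean distance Ep = |Px - Py| between a reference pixel and
    its forward-then-backward projection through the current (d, n)."""
    return ((px[0] - py[0]) ** 2 + (px[1] - py[1]) ** 2) ** 0.5

def geometric_penalty(ep, eta=0.5, e_max=3.0):
    """Geometric-consistency term added to the matching cost; the
    truncated-linear shape is an assumed reconstruction of formula (2),
    not the patent's exact expression."""
    return eta * min(ep, e_max) / e_max
```

Truncating at Emax keeps gross outliers from dominating the cost, which matches the description of Emax as an error threshold.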
Given the estimated depth d and normal vector n, the visibility probability Pv is computed along each scan line using the belief propagation algorithm. The forward and backward messages are α(z_l) and β(z_l), respectively, where l denotes the pixel currently scanned, and l-1 and l+1 denote the previous and next pixels on the current scan line. P(X_l|z_l, d_l, n_l) is computed with a likelihood function in which P(z_l|z_{l-1}) is the transition probability, Z is the probability threshold for visibility, N is a constant, and A is a normalization coefficient. The final visibility probability P(z) is then obtained by combining the forward and backward messages.
The global optimization process iterates over the following three steps:
computing the final visibility probability P(z);
estimating the depth d and the normal vector n; and
recomputing α(z_l) and β(z_l).
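The per-scan-line visibility inference above is a forward-backward recursion on a two-state chain (visible / not visible). Since the message equations appear only in the patent drawings, the sketch below uses generic transition and likelihood tables (the "sticky" transition values are illustrative assumptions) and takes the posterior proportional to α(z_l)·β(z_l):

```python
def forward_backward(likelihood, trans, prior=(0.5, 0.5)):
    """Posterior marginals on a 2-state chain (state 1 = 'visible').
    likelihood: per-pixel [P(obs|hidden), P(obs|visible)] pairs;
    trans[i][j] = P(z_l = j | z_{l-1} = i)."""
    n = len(likelihood)
    alpha = [[0.0, 0.0] for _ in range(n)]
    beta = [[1.0, 1.0] for _ in range(n)]
    # forward messages alpha(z_l)
    for s in (0, 1):
        alpha[0][s] = prior[s] * likelihood[0][s]
    for l in range(1, n):
        for s in (0, 1):
            alpha[l][s] = likelihood[l][s] * sum(
                alpha[l - 1][t] * trans[t][s] for t in (0, 1))
    # backward messages beta(z_l)
    for l in range(n - 2, -1, -1):
        for s in (0, 1):
            beta[l][s] = sum(
                trans[s][t] * likelihood[l + 1][t] * beta[l + 1][t]
                for t in (0, 1))
    # posterior P(z_l) proportional to alpha(z_l) * beta(z_l)
    post = []
    for l in range(n):
        w = [alpha[l][s] * beta[l][s] for s in (0, 1)]
        z = w[0] + w[1]
        post.append([w[0] / z, w[1] / z])
    return post

trans = [[0.8, 0.2], [0.2, 0.8]]             # sticky visibility (assumed)
like = [[0.1, 0.9], [0.1, 0.9], [0.5, 0.5]]  # strong evidence, then none
post = forward_backward(like, trans)
```

Smoothing along the chain carries visibility evidence into the third, uninformative pixel, which is the practical benefit of the message passing over a per-pixel decision.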
This finally yields the depth map Fd, normal map Fn, and visibility map Fv of the current layer.
Step S40: moving to the next layer of each image pyramid P. If this is layer 0 of each pyramid P, step S50 is executed next; otherwise the depth map, normal map, and visibility map of the previous layer of each pyramid P are taken as the initial values for this layer, this layer's depth map, normal map, and visibility map are computed, and step S40 is repeated.
Step S40 is repeated until layer 0 of each image pyramid P, finally yielding the depth map Fd, normal map Fn, and visibility map Fv of every image.
When moving from an upper (coarser) layer of the pyramid P to a lower (finer) one, the depth d, normal vector n, and visibility v can be obtained from the upper layer. As the image resolution increases, memory demand and computation grow, so more relevant matching images Vm can be used in the low-resolution images. As the resolution increases, the image can be processed in blocks: for each block, a visibility histogram is built over the images in Vm whose visibility v exceeds a given probability, and the h (not shown) images with the most entries are selected as the new set of relevant matching images Vm (where h < S), reducing the number of matching images, improving efficiency, lowering memory demand, and still retaining reasonable visibility information. Moreover, as the resolution increases, the initial depth d, normal vector n, and visibility v passed down from the upper layer are fairly reliable, so the number of iterations can be reduced, further cutting computation time. Since the depth-optimization process only depends on the previous pixel along the scan line, it is well suited to accelerated computation on a graphics processing unit (GPU).
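The coarse-to-fine hand-off can be sketched as a nearest-neighbor upsampling of the coarse layer's maps to seed the finer layer. The interpolation scheme is an assumption; the text only states that d, n, and v are obtained from the upper layer:

```python
def upsample_nearest(values, width, height, scale=2):
    """Coarse-to-fine initialization: the coarse layer's per-pixel
    values (depths, normals, or visibilities, stored row-major) are
    upsampled by nearest neighbor to seed the next finer layer."""
    out = []
    for y in range(height * scale):
        for x in range(width * scale):
            out.append(values[(y // scale) * width + (x // scale)])
    return out
```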
Step S50: fusing the layer-0 depth maps Fd, normal maps Fn, and visibility maps Fv of all image pyramids P into a point cloud C (not shown).
The fusion of step S50 considers the projection error Ep (not shown), the normal-vector error En (not shown), and the visibility information, where Ep has the same meaning as in step S32 and En is the angle between normal vectors. Pixels whose visibility exceeds a given probability are selected for fusion, and pixels with large projection error Ep or large normal-vector error En (exceeding given thresholds) are discarded. The final depth is the median of the depths of all fused pixels. To increase robustness, only points whose overlap (number of supporting views) is greater than 3 are retained during fusion as the final point cloud C. In addition, to further improve efficiency, each pixel's depth can be marked so that it is fused only once, achieving a linear computation time.
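A minimal sketch of the per-pixel fusion rule of step S50 follows, with illustrative thresholds: the patent states only that high-visibility pixels are kept, large Ep/En rejected, the median taken, and a minimum overlap required, so the specific cutoff values below are assumptions:

```python
from statistics import median

def fuse_pixel(candidates, vis_min=0.5, ep_max=3.0, en_max=0.2,
               min_support=3):
    """Fuse one pixel's depth candidates from multiple views.
    candidates: list of (depth, visibility, proj_error, normal_error).
    Returns the median of the surviving depths, or None when the
    overlap is below min_support (point dropped for robustness)."""
    kept = [d for d, v, ep, en in candidates
            if v > vis_min and ep <= ep_max and en <= en_max]
    if len(kept) < min_support:
        return None           # insufficient overlap: drop the point
    return median(kept)       # final depth is the median of fused depths
```

Taking the median rather than the mean keeps a single surviving outlier from shifting the fused depth, which is the usual reason for this choice.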
The application performs patch-based dense matching within a depth-fusion framework. Because most of the time is spent computing depth maps, the computation time of the algorithm is linear in the number of images, which makes it particularly suitable for large-scene reconstruction. Multi-scale information is additionally exploited by importing low-resolution results as high-resolution initial values, reducing the amount of computation that must be held in memory. Furthermore, the algorithm fully considers photometric matching information and geometric information (projection error Ep and normal-vector error En), as well as the visibility information of the images, and can therefore achieve higher matching accuracy. On high-resolution images, the visibility information is screened block by block to reduce the number of matching images and thereby increase matching speed.
Fig. 3 is a structural diagram of an embodiment of the multi-scale dense matching system of the application. As shown in Fig. 3, the embodiment of the application provides a multi-scale dense matching system 100.
In one embodiment of the application, the multi-scale dense matching system 100 is a computing device (such as a server, a computer, or a mobile intelligent terminal). The multi-scale dense matching system 100 includes a processor 110 and a memory 120, and a multi-scale dense matching unit 121 is stored in the memory 120. The processor 110 is an integrated circuit chip, such as a microprocessor (central processing unit, CPU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or another programmable logic device, and executes the computer program stored in the memory 120. The multi-scale dense matching unit 121 includes a computer program for implementing the multi-scale dense matching method shown in Fig. 1.
The multi-scale dense matching system of the embodiment of the application is based on the same inventive concept as the above embodiments of the multi-scale dense matching method; for specific technical features of the system, reference may be made to the method embodiments, which are not repeated here.
It should be noted that, as used herein, the terms "include", "comprise", and any variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes it.
The foregoing is merely embodiments of the application and does not limit the scope of its claims; any equivalent structure or equivalent process transformation made using the contents of this specification and the accompanying drawings, whether applied directly or indirectly in other related technical fields, is likewise included within the patent protection scope of the application.

Claims (10)

1. A multi-scale dense matching method, characterized by including:
building an image pyramid for each of several images, and computing each image's set of relevant matching images, wherein each image pyramid has layers 0 to n-1 and each layer corresponds to a scale;
building the depth range of each image;
starting from layer n-1 of each image pyramid, computing the layer's depth map, normal map, and visibility map, then
moving to the next layer of each image pyramid and using the previous layer's depth map, normal map, and visibility map as the initial values for this layer's computation of its depth map, normal map, and visibility map, until layer 0 of each image pyramid is reached; and
fusing the layer-0 depth maps, normal maps, and visibility maps of all image pyramids to generate a point cloud.
2. The multi-scale dense matching method according to claim 1, characterized in that building an image pyramid for each of several images and computing each image's set of relevant matching images specifically includes:
taking the current image as the reference image;
building the set of images relevant to the reference image according to the visibility information of the sparse point cloud produced by aerial triangulation; for each relevant image, collecting the sparse points visible in both that image and the reference image, projecting those points onto the reference image, building a two-dimensional triangulation network over them, and summing the total area of the triangulation; and
sorting the images of the relevant image set in descending order of that area, and selecting the top N of them as the reference image's set of relevant matching images.
3. The multi-scale dense matching method according to claim 1, characterized in that building the depth range of each reference image specifically includes:
computing the depth of each sparse point visible in the reference image, taking the range of those depths, and expanding the obtained range by 10% to obtain the final depth range.
4. The multi-scale dense matching method according to claim 1, characterized in that computing, starting from layer n-1 of each image pyramid, the depth map, normal map, and visibility map of each layer down to layer 0 specifically includes:
computing the depth map, normal map, and visibility map of layer n-1 of each image pyramid;
taking each image in the set of relevant matching images as an image to be matched, and performing depth-scan optimization on the reference image; and
moving to the next layer of each image pyramid and using the previous layer's depth map, normal map, and visibility map as this layer's initial values to compute this layer's depth map, normal map, and visibility map, until layer 0 of each image pyramid is reached.
5. The multi-scale dense matching method according to claim 1, wherein the projection error, the normal vector error and the visibility information are considered during depth map fusion.
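One plausible way the three quantities could enter the fusion is as a per-observation acceptance test before averaging (the thresholds, the AND-combination, and the averaging are illustrative assumptions, not the patented method):

```python
def consistent(proj_err_px, normal_angle_deg, visible,
               max_err_px=1.0, max_angle_deg=10.0):
    """Accept an observation only if it is visible, reprojects within
    max_err_px pixels, and its normal deviates by at most max_angle_deg."""
    return visible and proj_err_px <= max_err_px and normal_angle_deg <= max_angle_deg

def fuse_point(candidates):
    """candidates: (point_xyz, proj_err, normal_angle, visible) observations
    of the same surface point from different depth maps. Averages the
    observations that pass the consistency test; None if all are rejected."""
    kept = [p for p, err, ang, vis in candidates if consistent(err, ang, vis)]
    if not kept:
        return None
    n = len(kept)
    return tuple(sum(p[i] for p in kept) / n for i in range(3))
```

Observations with a large reprojection error or inconsistent normals are dropped, so outlier depths do not pull the fused point away from the surface.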
6. A multi-scale dense matching system, comprising a processor and a memory, the memory storing a multi-scale dense matching unit, wherein the multi-scale dense matching unit is configured to:
Build an image pyramid for each of several images, and compute the relevant matching image set of each image, wherein each image pyramid has layers 0 to n-1, each layer corresponding to one scale;
Build the depth range of each image;
Sequentially compute the depth map, normal vector map and visibility map of each layer starting from the (n-1)-th layer of each image pyramid, then move to the next layer of each image pyramid, using the depth map, normal vector map and visibility map of the previous layer as the initial values of the algorithm at the current layer, and compute the depth map, normal vector map and visibility map of the current layer, until the 0-th layer of each image pyramid is reached; and
Perform depth map fusion on the depth maps, normal vector maps and visibility maps at layer 0 of all image pyramids, to generate a point cloud.
7. The multi-scale dense matching system according to claim 6, wherein building an image pyramid for each of several images and computing the relevant matching image set of each image specifically comprises:
Taking the current image as the reference image;
Building the relevant image set of the reference image from the visibility information of the sparse point cloud produced by aerial triangulation; for each relevant image, collecting the points visible in both the reference image and that image, projecting these points onto the reference image, building a two-dimensional triangulation network, and computing the total area of the triangulation network; and
Sorting the images in the relevant image set of the reference image in descending order by that area, and selecting the first N images as the relevant matching image set of the reference image.
8. The multi-scale dense matching system according to claim 6, wherein constructing the depth range of each reference image specifically comprises:
Computing, from the point cloud visible in the reference image, the depth value of each point in the reference image, computing the range of those depth values, and expanding the obtained depth range by 10% to obtain the final depth range.
9. The multi-scale dense matching system according to claim 6, wherein sequentially computing the depth map, normal vector map and visibility map of each layer starting from the (n-1)-th layer of each image pyramid, then moving to the next layer of each image pyramid with the depth map, normal vector map and visibility map of the previous layer as the initial values of the algorithm at the current layer, and computing the depth map, normal vector map and visibility map of the current layer until the 0-th layer of each image pyramid is reached, specifically comprises:
Sequentially computing the depth map, normal vector map and visibility map of each layer starting from the (n-1)-th layer of each image pyramid;
Taking each image in the relevant matching image set as the image to be matched, and performing depth scan optimization on the reference image; and
Then moving to the next layer of each image pyramid, using the depth map, normal vector map and visibility map of the previous layer as the initial values of the algorithm at the current layer, and computing the depth map, normal vector map and visibility map of the current layer, until the 0-th layer of each image pyramid is reached.
10. The multi-scale dense matching system according to claim 6, wherein the projection error, the normal vector error and the visibility information are considered during depth map fusion.
CN201810240775.4A 2018-03-22 2018-03-22 Multi-scale dense matching method and system Pending CN108682029A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810240775.4A CN108682029A (en) 2018-03-22 2018-03-22 Multi-scale dense matching method and system

Publications (1)

Publication Number Publication Date
CN108682029A true CN108682029A (en) 2018-10-19

Family

ID=63800454

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810240775.4A Pending CN108682029A (en) Multi-scale dense matching method and system

Country Status (1)

Country Link
CN (1) CN108682029A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130100114A1 (en) * 2011-10-21 2013-04-25 James D. Lynch Depth Cursor and Depth Measurement in Images
CN105160702A (en) * 2015-08-20 2015-12-16 武汉大学 Stereoscopic image dense matching method and system based on LiDAR point cloud assistance
CN105205808A (en) * 2015-08-20 2015-12-30 武汉大学 Multi-vision image dense coupling fusion method and system based on multiple characteristics and multiple constraints
CN105654547A (en) * 2015-12-23 2016-06-08 中国科学院自动化研究所 Three-dimensional reconstruction method
CN105809712A (en) * 2016-03-02 2016-07-27 西安电子科技大学 Effective estimation method for large displacement optical flows
CN106991440A (en) * 2017-03-29 2017-07-28 湖北工业大学 Image classification algorithm using convolutional neural networks based on spatial pyramids
CN107314762A (en) * 2017-07-06 2017-11-03 广东电网有限责任公司电力科学研究院 Method for detecting the distance of ground objects below power lines based on UAV sequential monocular images

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ENLIANG ZHENG ET AL: "PatchMatch Based Joint View Selection and Depthmap Estimation", 《2014 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION》 *
GOESELE M ET AL: "Multi-view Stereo for Community Photo Collections", 《2007 IEEE 11TH INTERNATIONAL CONFERENCE ON COMPUTER VISION》 *
JOHANNES L. SCHONBERGER ET AL: "Pixelwise View Selection for Unstructured Multi-View Stereo", 《COMPUTER VISION - ECCV 2016》 *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110176060A (en) * 2019-04-28 2019-08-27 华中科技大学 Dense three-dimensional reconstruction method and system based on multi-scale geometric consistency guidance
CN110176060B (en) * 2019-04-28 2020-09-18 华中科技大学 Dense three-dimensional reconstruction method and system based on multi-scale geometric consistency guidance
CN110363235A (en) * 2019-06-29 2019-10-22 苏州浪潮智能科技有限公司 High-resolution image matching method and system
CN110363235B (en) * 2019-06-29 2021-08-06 苏州浪潮智能科技有限公司 High-resolution image matching method and system
CN112154303A (en) * 2019-07-29 2020-12-29 深圳市大疆创新科技有限公司 High-precision map positioning method, system, platform and computer readable storage medium
CN112154303B (en) * 2019-07-29 2024-04-05 深圳市大疆创新科技有限公司 High-precision map positioning method, system, platform and computer readable storage medium
CN113496509A (en) * 2020-03-18 2021-10-12 广州极飞科技股份有限公司 Method and device for generating depth image frame, computer equipment and storage medium
CN113392879A (en) * 2021-05-26 2021-09-14 中铁二院工程集团有限责任公司 Multi-view matching method for aerial image
CN113392879B (en) * 2021-05-26 2023-02-24 中铁二院工程集团有限责任公司 Multi-view matching method for aerial images
CN113989250A (en) * 2021-11-02 2022-01-28 中国测绘科学研究院 Improved block dense matching method, system, terminal and medium based on depth map
CN113989250B (en) * 2021-11-02 2022-07-05 中国测绘科学研究院 Improved block dense matching method, system, terminal and medium based on depth map

Similar Documents

Publication Publication Date Title
CN108682029A (en) Multi-scale dense matching method and system
Zach Fast and high quality fusion of depth maps
CN105160702B (en) LiDAR-point-cloud-assisted dense matching method and system for stereo images
CN105809712B (en) An efficient large-displacement optical flow estimation method
CN103021017B (en) Three-dimensional scene reconstruction method based on GPU acceleration
CN104268934B (en) Method for reconstructing three-dimensional curved surfaces from point clouds
CN108961390A (en) Real-time three-dimensional reconstruction method based on depth maps
CN110223370B (en) Method for generating a complete human texture map from a single-view picture
CN105865462B (en) Event-based three-dimensional SLAM method with a depth-enhanced visual sensor
CN104715504A (en) Robust dense three-dimensional reconstruction method for large scenes
CN107610219A (en) Pixel-level point cloud densification method guided by geometric cues in three-dimensional scene reconstruction
CN103646421A (en) Lightweight tree 3D reconstruction method based on an enhanced PyrLK optical flow method
CN109584355A (en) Fast three-dimensional model reconstruction method based on a mobile phone GPU
CN108444451B (en) Planet surface image matching method and device
CN113705796A (en) Light field depth acquisition convolutional neural network based on EPI feature enhancement
CN114332125A (en) Point cloud reconstruction method and device, electronic equipment and storage medium
CN115546442A (en) Multi-view stereo matching reconstruction method and system based on perceptual consistency loss
CN113840127B (en) Method for automatically masking water areas in the DSM (digital surface model) derived from satellite video images
CN117115359B (en) Multi-view power grid three-dimensional space data reconstruction method based on depth map fusion
CN114663298A (en) Disparity map repairing method and system based on semi-supervised deep learning
CN113034666B (en) Stereo matching method based on pyramid disparity-optimized cost computation
CN107578429B (en) Stereo image dense matching method based on dynamic programming and global cost accumulation paths
Kang et al. UV Completion with Self-referenced Discrimination.
CN102236893A (en) Corresponding image point matching method for lunar surface images based on spatial position prediction
CN107194334A (en) Video satellite image dense matching method and system based on optical flow estimation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20181019