CN110473247A - Stereo matching method, device and storage medium - Google Patents
Stereo matching method, device and storage medium
- Publication number
- CN110473247A CN110473247A CN201910694818.0A CN201910694818A CN110473247A CN 110473247 A CN110473247 A CN 110473247A CN 201910694818 A CN201910694818 A CN 201910694818A CN 110473247 A CN110473247 A CN 110473247A
- Authority
- CN
- China
- Prior art keywords
- image
- carried out
- stereogram
- matching method
- sift feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/593—Depth or shape recovery from multiple images from stereo images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20228—Disparity calculation for image-based rendering
Abstract
The invention discloses a stereo matching method, device and storage medium, relating to the field of remote sensing. The method comprises: obtaining a stereo image pair; partitioning the stereo pair into n image block pairs according to a preset partition strategy; performing disparity estimation on the n image block pairs according to a preset deep learning model to obtain n disparity maps; and fusing the n disparity maps to obtain a fused disparity map. The invention provides a stereo matching approach suited to large-format images with high resolution and large size, such as satellite remote sensing images, realizing stereo matching of large-format image pairs with low overall time cost and higher disparity estimation accuracy.
Description
Technical field
The present invention relates to the field of remote sensing, and more particularly to a stereo matching method, device and storage medium.
Background technique
Stereo matching has long been a research hotspot in binocular vision: a binocular camera captures left and right viewpoint images of the same scene, a disparity map is computed from the image pair, and a depth map is then derived. Depth maps have a very wide range of applications and are commonly used in measurement, three-dimensional reconstruction, virtual view synthesis, and so on. For example, in the field of remote sensing, stereo matching of satellite stereo image pairs can be used to build a digital terrain model of a planetary surface, generate a digital elevation model, or construct a digital surface model to obtain a three-dimensional city model.
At present, most stereo matching methods for satellite remote sensing images perform disparity estimation based on feature extraction, region matching and cost computation. However, since satellite remote sensing images are mostly large-format images, the feature point matching and cost computation processes are very time-consuming, and matching errors are large.
Summary of the invention
The technical problem to be solved by the present invention is, in view of the deficiencies of the prior art, to provide a stereo matching method, device and storage medium suitable for large-format images.
The technical solution to the above technical problem is as follows:
A stereo matching method, comprising:
obtaining a stereo image pair, and partitioning the stereo pair into n image block pairs according to a preset partition strategy, n ≥ 2;
performing disparity estimation on the n image block pairs according to a preset deep learning model to obtain n disparity maps;
fusing the n disparity maps to obtain a fused disparity map.
The beneficial effects of the present invention are as follows. The present invention provides a stereo matching approach suitable for large-format images with high resolution and large size, such as satellite remote sensing images. The large-format stereo pair is partitioned into multiple small blocks; a pre-trained deep learning model performs disparity estimation on each small block pair; the resulting disparity maps are then fused into a complete fused disparity map, realizing stereo matching of the large-format image pair. Meanwhile, the disparity estimation of the small blocks can be processed in parallel, instead of estimating disparity for the whole image at once as in conventional approaches, which improves the efficiency of disparity estimation and reduces the overall time cost of the stereo matching process. Estimating disparity per block and then fusing also reduces the amount of data processed in each single estimation, thereby reducing errors; and performing disparity estimation with a deep learning model recovers more disparity detail, so disparity estimation accuracy is higher.
Another technical solution of the present invention to the above technical problem is as follows:
A storage medium having instructions stored therein which, when read by a computer, cause the computer to execute the stereo matching method described in the above technical solution.
Another technical solution of the present invention to the above technical problem is as follows:
A stereo matching device, comprising:
a memory for storing a computer program;
a processor for executing the computer program to implement the stereo matching method described in the above technical solution.
Additional advantages of the invention will be set forth in part in the description which follows, will in part be obvious from the description, or may be learned by practice of the invention.
Detailed description of the invention
Fig. 1 is a schematic flowchart provided by an embodiment of the stereo matching method of the present invention;
Fig. 2 is a schematic diagram of the deep learning model structure provided by an embodiment of the stereo matching method of the present invention;
Fig. 3 is a stereo matching schematic diagram provided by an embodiment of the stereo matching method of the present invention;
Fig. 4 is a schematic diagram of the DOG space provided by other embodiments of the stereo matching method of the present invention;
Fig. 5 is a disparity map fusion schematic diagram provided by other embodiments of the stereo matching method of the present invention;
Fig. 6 is a schematic diagram of the seam line region provided by other embodiments of the stereo matching method of the present invention;
Fig. 7 is a structural block diagram provided by an embodiment of the stereo matching device of the present invention.
Specific embodiment
The principles and features of the present invention are described below with reference to the accompanying drawings. The illustrated embodiments serve only to explain the present invention and are not intended to limit its scope.
At present, existing stereo matching methods for satellite remote sensing images mostly perform disparity estimation based on feature extraction to obtain a sparse disparity map, and then obtain a dense disparity map by interpolation; matching is often realized by combining feature extraction algorithms such as Harris, SIFT or SURF with the RANSAC optimization algorithm. A dense disparity map can also be obtained by computing matching costs with a sliding window. The basic principle of such a matching algorithm is to select a pixel in a reference image (the left image is typically chosen as the reference and the right image as the image to be matched), describe that pixel by a support window in its neighborhood, and then find, according to a similarity criterion, the sub-window in the image to be matched most similar to the support window; the similarity criterion is generally minimum Euclidean distance, and the pixel corresponding to that sub-window is the matching pixel of the selected pixel.
For large-format images such as satellite remote sensing images, which have a large extent and many pixels, existing stereo matching methods suffer from large errors and long processing times. For this reason, the present application partitions the large-format image and performs disparity estimation on the partitioned image blocks with a deep learning model, realizing disparity estimation for large-format images and improving both disparity estimation accuracy and stereo matching efficiency, as embodied in the following embodiments.
As shown in Fig. 1, which is the schematic flowchart provided by an embodiment of the stereo matching method of the present invention, the stereo matching method includes:
S1: obtaining a stereo image pair, and partitioning the stereo pair into n image block pairs according to a preset partition strategy, n ≥ 2;
It should be understood that the stereo pair may be a large-format image or a small-format image, where a large-format image refers to an image of large size and resolution; for example, the stereo pair may be a satellite stereo pair captured by a satellite. Specifically, a size or resolution threshold may be set: images above the threshold count as large-format, and analogously for small-format. Since the present application is an improvement proposed for large-format images, the stereo pair is assumed to be a large-format image in the following embodiments. Those skilled in the art will understand that the application can also be applied to small-format images: partitioning a small-format image and performing disparity estimation with a deep learning model can likewise improve disparity estimation accuracy.
It should be understood that the partition strategy can be set according to actual needs; its purpose is to partition the stereo pair, and those skilled in the art can choose a suitable strategy as required. For example, the stereo pair can be partitioned with a sliding window, or the resolution of the image blocks can be preset and the stereo pair split directly.
It should be understood that the stereo pair includes a left image and a right image, and partitioning the stereo pair means applying the same partition operation to the left image and the right image respectively, yielding n image blocks of the left image and n image blocks of the right image.
S2: performing disparity estimation on the n image block pairs according to a preset deep learning model to obtain n disparity maps;
It should be understood that the deep learning model is a pre-trained learning model; its layer structure, training process and so on can be set according to actual needs. For example, the DispNet model can be chosen as the deep learning model.
Specifically, as shown in Fig. 2, the DispNet network can adopt an encoder-decoder framework, dividing one convolutional neural network into a contracting part and an expanding part. The contracting part mainly performs feature extraction and may include ten convolutional layers with stride 2, for example C1, C2, C3, C4, C5, C6, C7, C8, C9 and C10. Feature fusion is performed on the feature map output by the last convolutional layer to compute an initial disparity map, which then passes through five deconvolution operations, for example U1, U2, U3, U4 and U5, to output the final disparity. Six loss layers, L1 through L6, can be set at the inputs and outputs of the deconvolution operations. The ReLU function is used as the neuron activation function, to avoid the vanishing-gradient problem that occurs with traditional saturating activation functions such as Sigmoid and Tanh.
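The spatial bookkeeping of this contracting/expanding structure can be traced with a small sketch. It assumes, per this embodiment's description, ten stride-2 convolutions followed by five ×2 deconvolutions (the real DispNet layer count and padding may differ); with these assumptions the deconvolution stack outputs at 1/32 of the input resolution, and upsampling back to full resolution would follow in practice.

```python
def feature_map_sizes(in_size, n_down=10, n_up=5):
    """Spatial sizes through an assumed encoder-decoder: n_down stride-2
    convolutions (contracting part C1..C10) followed by n_up x2
    deconvolutions (expanding part U1..U5)."""
    sizes = [in_size]
    for _ in range(n_down):
        sizes.append((sizes[-1] + 1) // 2)   # stride-2 conv, 'same' padding
    for _ in range(n_up):
        sizes.append(sizes[-1] * 2)          # x2 deconvolution
    return sizes
```

For a 1024-pixel-wide block, the bottleneck reaches 1 pixel after the ten stride-2 convolutions and the expansion ends at 32 pixels, which is why intermediate (loss-layer) supervision at each deconvolution stage is useful.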
S3: fusing the n disparity maps to obtain a fused disparity map.
It should be understood that disparity fusion means fusing the n obtained disparity maps back into one complete disparity map.
As shown in Fig. 3, a concrete stereo matching example is given. The stereo pair includes a left image L and a right image R. The same sliding window performs overlapping sliding cuts on L and R respectively, i.e. when the sliding window slides to the next position, its region overlaps that of the previous position. Suppose each image yields 2 image blocks: the blocks of the left image are L1 and L2, and the blocks of the right image are R1 and R2, where L1 corresponds to R1 and L2 corresponds to R2. Then L1 and R1 are input into the DispNet model to obtain their disparity map SP1, and L2 and R2 are input into the DispNet model to obtain their disparity map SP2; SP1 and SP2 are then fused to obtain the final fusion result.
It should be understood that in actual implementation the captured images may not correspond exactly. As shown in Fig. 3, the pattern in the left image L sits to the right while the pattern in the right image R sits to the left. In this case the left image L can be translated or cropped rightwards by a distance P, and the right image R translated or cropped leftwards by P, so that the patterns of the stereo pair correspond. This avoids producing unmatched image blocks in the border regions during partitioning, which would add processing time.
For example, as shown in Fig. 3, suppose the sliding window is 3×3 with step 1, sliding rightwards. Taking the left image as an example, the right 2/3 of L1 overlaps the left 2/3 of L2; after the disparity maps are obtained, the right 2/3 of P1 likewise overlaps the left 2/3 of P2, and fusing them produces an overlapped result.
The present embodiment provides a stereo matching approach suitable for large-format images with high resolution and large size, such as satellite remote sensing images. The large-format stereo pair is partitioned into multiple small blocks; a pre-trained deep learning model performs disparity estimation on each block pair; the resulting disparity maps are then fused to obtain a complete fused disparity map, realizing stereo matching of the large-format image pair. Meanwhile, in the present application the disparity estimation of the small blocks can be processed in parallel, rather than estimating disparity for the whole image at once as in conventional approaches, which improves the efficiency of disparity estimation and reduces the overall time cost of the stereo matching process. Estimating disparity per block and then fusing also reduces the amount of data processed in each single estimation, thereby reducing errors; and performing disparity estimation with a deep learning model recovers more disparity detail, so disparity estimation accuracy is higher.
Optionally, in some embodiments, before partitioning the stereo pair according to the preset partition strategy, the method further includes:
performing epipolar rectification on the stereo pair.
Performing epipolar rectification on the stereo pair improves the accuracy of the disparity estimation performed by the deep learning model and reduces its time cost.
Optionally, in some embodiments, the stereo pair includes a reference image and a target image, and performing epipolar rectification on the stereo pair specifically includes:
detecting SIFT feature points of the reference image and the target image respectively, and matching the SIFT feature points in the target image with the SIFT feature points in the reference image to obtain SIFT feature point pairs;
computing the rotation matrix and translation matrix of the target image relative to the reference image from the SIFT feature point pairs;
performing epipolar rectification on the target image according to the rotation matrix and the translation matrix.
It should be understood that this embodiment preferably uses the SIFT algorithm to realize epipolar rectification; those skilled in the art may also choose the Harris algorithm, the SURF algorithm and so on according to actual needs.
Specifically, the rotation matrix R and translation matrix [t]× of the target image relative to the reference image can be computed from the epipolar constraint:

x_R^T · K_R^(-T) · [t]× · R · K_L^(-1) · x_L = 0

where x_R is the pixel coordinate of a SIFT feature point in the right image, x_L is the pixel coordinate of the corresponding SIFT feature point in the left image, K_R is the camera intrinsic matrix of the right image, K_L is the camera intrinsic matrix of the left image, R is the rotation matrix, and [t]× is the translation (skew-symmetric) matrix.
Optionally, performing epipolar rectification on the target image according to the rotation matrix and the translation matrix specifically includes:
decomposing the rotation matrix R and translation matrix T into rotation matrices R1, R2 and translation matrices T1, T2 by which the left and right cameras each rotate half. The principle of the decomposition is to minimize the distortion caused by re-projecting the left and right images while maximizing the common area of the left and right views; the homography matrices obtained after decomposition are used to rectify the two camera optical axes to be parallel.
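The "each camera rotates half" idea can be sketched as finding a rotation H with H·H = R, so each view is rotated by half the relative angle. This is a minimal illustration under the assumption of an axis-angle halving (a full rectification would also handle the translation and intrinsics); `rodrigues` and `half_rotation` are names chosen for the example.

```python
import numpy as np

def rodrigues(axis, theta):
    """Rotation matrix from a unit axis and an angle (Rodrigues' formula)."""
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def half_rotation(R):
    """Rotation H such that H @ H == R: each camera rotates half the angle."""
    angle = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    if np.isclose(angle, 0.0):
        return np.eye(3)
    axis = np.array([R[2, 1] - R[1, 2],
                     R[0, 2] - R[2, 0],
                     R[1, 0] - R[0, 1]]) / (2 * np.sin(angle))
    return rodrigues(axis, angle / 2)
```

Splitting the rotation this way is what keeps the re-projection distortion of the two views balanced, rather than warping one image by the full relative rotation.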
Optionally, in some embodiments, detecting the SIFT feature points of the reference image and the target image respectively, and matching the SIFT feature points in the target image with the SIFT feature points in the reference image, specifically includes:
detecting extreme points of the reference image and the target image respectively to obtain multiple extreme points;
computing the offset of each extreme point and judging whether there is an unstable extreme point whose offset exceeds a preset threshold; if so, re-interpolating at the position of the unstable extreme point, and obtaining multiple SIFT feature points once the judgement is complete;
detecting the position and scale of each SIFT feature point respectively, and determining the principal direction of each SIFT feature point;
constructing a descriptor for each feature point from its position, scale and principal direction;
partitioning the SIFT feature points of the reference image and the target image respectively according to the descriptors and the K-D tree algorithm;
matching the SIFT feature points of the reference image and the target image according to the K-nearest-neighbour query algorithm.
It should be understood that an extreme point is a maximum or minimum point. Preferably, a linear transformation in scale space can be realized by Gaussian convolution, and each detected preliminary feature point is compared with its k adjacent points to ensure that the corresponding extreme point, i.e. the extreme point at each scale in the difference-of-Gaussian pyramid space, is extracted. The value of k can be set according to actual needs, for example 26.
By performing the linear transformation in scale space, the SIFT algorithm is kept invariant to rotation, scaling and brightness changes, and remains stable to some degree under viewpoint change, affine transformation and noise.
Specifically, the preliminary feature points can be chosen as follows. First a Gaussian pyramid is built; then adjacent upper and lower layer images in each group of the Gaussian pyramid are subtracted to obtain Gaussian difference images, from which the difference-of-Gaussian pyramid, i.e. the DOG space, is constructed.
Preliminary feature points are then chosen between two adjacent layers in the DOG space: each pixel is compared with all of its adjacent points to see whether it is larger or smaller than the adjacent points of its image domain and scale domain.
As shown in Fig. 4, which gives an exemplary DOG-space schematic, the detection point of the middle layer is marked with an "x". The point is compared with its 8 adjacent points at the same scale and with the 2×9 corresponding points of the adjacent scales, 26 points in total, to ensure that extreme points are detected in both scale space and two-dimensional image space.
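The 26-neighbour comparison above can be sketched directly on a 3D stack of DOG layers (scale, row, column). The strict-inequality convention and the function name `is_extremum` are choices made for this illustration.

```python
import numpy as np

def is_extremum(dog, s, y, x):
    """True if dog[s, y, x] is strictly larger or strictly smaller than
    all 26 neighbours: 8 at the same scale plus 2 x 9 at adjacent scales."""
    patch = dog[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2]
    centre = dog[s, y, x]
    others = np.delete(patch.ravel(), 13)  # index 13 is the centre itself
    return bool(centre > others.max() or centre < others.min())
```

Running this over every interior sample of the DOG stack yields the preliminary feature points described above.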
Specifically, the offset of an extreme point can be computed as follows: perform a second-order Taylor expansion of the DOG function in the DOG space, differentiate the expansion and set the derivative equal to zero; this yields the offset of the extreme point.
Using the second-order Taylor expansion of the difference operator as a fitting curve guarantees that stable extreme points can be detected. When the offset exceeds the preset threshold, the interpolation centre has shifted onto one of its neighbouring points, so the position of the current extreme point must be changed, and interpolation is repeated at the new position until convergence. Preferably, when the number of iterations is exceeded or the point falls outside the image boundary, the extreme point can be deleted. Edge points in linear scale space are easily affected by image noise; rejecting unstable edge points improves the stability of the detected SIFT feature points.
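Setting the derivative of the second-order Taylor expansion to zero gives the standard sub-sample offset x̂ = −H⁻¹g, where g and H are the gradient and Hessian of the DOG function at the sample. A minimal sketch (the gradient and Hessian would come from finite differences of the DOG stack in practice):

```python
import numpy as np

def extremum_offset(grad, hess):
    """Sub-sample offset x_hat = -H^{-1} g from the 2nd-order Taylor
    expansion of the DOG function; a component above the threshold means
    the interpolation centre has moved onto a neighbouring sample."""
    return -np.linalg.solve(hess, grad)
```

For a pure quadratic bowl centred at `a`, the gradient at the origin is −2a and the Hessian is 2I, so the recovered offset is exactly `a`.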
Optionally, the SIFT feature points of the reference image and the target image are partitioned respectively according to the descriptors and the K-D tree algorithm: a K-D tree is built, the root node of the tree is found, and the left and right subtrees of the K-D tree are then determined, partitioning according to the descriptors. For each feature point, the gradient information of 8 directions can be computed in each of the 4×4 windows in scale space, and the resulting 4×4×8 = 128-dimensional vector is used as the feature descriptor of that feature point.
Optionally, matching the SIFT feature points of the reference image and the target image according to the K-nearest-neighbour query algorithm can specifically include:
using the K-nearest-neighbour query algorithm to query the target image for the neighbours of the corresponding feature points of the reference image;
finding, in the feature point set of the target image, the K feature points whose distances from the query point satisfy the requirement, completing the feature point matching of the two images.
It should be understood that this usually means finding the K feature points nearest to the query point. Suppose K is 3 and 6 feature points are found in the target image, at distances 1, 2, 3, 4, 5 and 6 from the query point. The first 3 points, at distances 1, 2 and 3, are the nearest to the query point, so the distances of the first 3 points are considered to satisfy the requirement.
Specifically, two K-D trees are built from the extracted feature points of the reference image and the target image respectively; each node of a tree is a feature point in the image. Similar nodes in the two trees are found by K-nearest-neighbour query, thereby finding similar feature points in the two images and realizing feature point matching. It should be understood that a K-nearest-neighbour query, given a query point and a positive integer K, finds the K nodes nearest to the query point in the K-D tree and then identifies the most similar node.
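The K-nearest-neighbour query can be sketched as follows. For clarity this uses a brute-force linear scan over descriptors rather than a K-D tree; the K-D tree only accelerates the same query, so the returned indices are identical. The name `knn_match` is chosen for the example.

```python
def knn_match(query, candidates, k=3):
    """Indices of the k candidate descriptors nearest to `query`
    (squared Euclidean distance). A K-D tree would replace this
    linear scan in practice without changing the result."""
    order = sorted(range(len(candidates)),
                   key=lambda i: sum((q - c) ** 2
                                     for q, c in zip(query, candidates[i])))
    return order[:k]
```

With K = 3, the three candidates closest to the query descriptor are kept, mirroring the distance-1, 2, 3 example above.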
Optionally, in some embodiments, partitioning the stereo pair according to the preset partition strategy specifically includes:
partitioning the stereo pair with a sliding window using window overlap and a fixed step size.
It should be understood that window overlap means that after the window slides, its region overlaps the region of the window before sliding. The benefit of doing this is that the disparity redundancy can be used in the disparity fusion step to reduce the visibility of seam lines in the fused whole disparity map.
Optionally, in some embodiments, the horizontal sliding step and vertical sliding step of the sliding window satisfy the following formulas:

step_x = (X − x) / (M − 1), step_y = (Y − y) / (N − 1)

where step_x is the horizontal sliding step, step_y is the vertical sliding step, X × Y is the resolution of the original image, M × N is the number of blocks, and x × y is the resolution of each block after partitioning.
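The step computation can be sketched as below, assuming the natural uniform-overlap relation in which M blocks of width x laid step_x apart exactly span a width-X image (so step_x = (X − x)/(M − 1), and likewise vertically); the function name `sliding_steps` is chosen for the example.

```python
def sliding_steps(X, Y, x, y, M, N):
    """Horizontal and vertical steps so that M x N overlapping blocks of
    size x by y exactly cover an X by Y image."""
    return (X - x) // (M - 1), (Y - y) // (N - 1)
```

For a 100×80 image cut into 4×2 blocks of 40×40, the steps come out as 20 and 40, and 40 + 3·20 = 100 columns are covered exactly.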
Optionally, in some embodiments, fusing the n disparity maps specifically includes:
splicing the n disparity maps according to the inverse process of the partition strategy, and fusing the disparity values of the redundant regions by taking their mean.
It should be understood that a redundant region of the spliced disparity map is a region where the same pixel corresponds to different disparity values. Redundant regions arise because overlapping windows are used during partitioning, so two adjacent image blocks share an overlapping region; when spliced, the overlapping region produces disparity redundancy.
As shown in Fig. 5, an exemplary disparity map fusion schematic is given. The left image L and the right image R are partitioned with sliding windows of fixed step, overlapping windows, and identical step and size, yielding image blocks L1, L2, R1 and R2, where L1 corresponds to R1 and L2 corresponds to R2. Then L1 and R1 are input into the DispNet model to obtain their disparity map P1, and L2 and R2 are input into the DispNet model to obtain their disparity map P2. P1 and P2 are then fused; at this point a disparity redundant region appears in the middle where P1 and P2 overlap. The two disparity values of each pixel in that region are averaged and used as the disparity value of the pixel, giving the final fusion result.
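The splice-and-average fusion can be sketched as accumulating each tile at its offset and dividing by the per-pixel tile count, so overlapped pixels take the mean of their redundant disparity values. The names `fuse_disparities` and `offsets` are chosen for the example.

```python
import numpy as np

def fuse_disparities(tiles, offsets, out_shape):
    """Splice disparity tiles back at their (row, col) offsets; where
    tiles overlap, the redundant disparity values are averaged."""
    acc = np.zeros(out_shape)
    cnt = np.zeros(out_shape)
    for tile, (oy, ox) in zip(tiles, offsets):
        h, w = tile.shape
        acc[oy:oy + h, ox:ox + w] += tile
        cnt[oy:oy + h, ox:ox + w] += 1
    return acc / np.maximum(cnt, 1)
```

Two 2×3 tiles placed with a one-column overlap on a 2×5 canvas yield the mean of the two disparities in the shared column and the original values elsewhere.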
It should be understood that in actual implementation the captured images may not correspond exactly. As shown in Fig. 5, the pattern in the left image L sits to the right while the pattern in the right image R sits to the left. In this case the left image L can be translated or cropped rightwards by a distance P, and the right image R translated or cropped leftwards by P, so that the patterns of the stereo pair correspond. This avoids producing unmatched image blocks in the border regions during partitioning, which would add processing time.
Optionally, in some embodiments, the method further includes:
performing median filtering on the seam lines of the fused disparity map to remove the seam lines.
Optionally, performing median filtering on the seam lines of the fused disparity map to remove them can specifically include:
extracting a seam region of preset size around each seam line of the fused disparity map;
performing large-window median filtering on the extracted seam region;
splicing the filtered result back into its original position in the disparity map.
It should be understood that the size of the seam region can be set according to actual needs. For example, as shown in Fig. 6, which gives an exemplary seam line region schematic, suppose the sliding window used to partition the image is 3×3 and slides horizontally; then during splicing, because two image blocks are spliced in the vertical direction, a seam line of length 3 is produced at the joint of the two image blocks. In this case, a rectangle of length 3 centred on the seam line and extending a distance of 0.5 to each side can be taken as the seam region.
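The seam-region filtering can be sketched as a median filter applied only to a narrow column strip around the seam line, with the result written back into the disparity map. The strip width (`half_width`) and window size (`win`) are illustrative parameters, not the patent's fixed values.

```python
import numpy as np

def smooth_seam(disp, seam_col, half_width=1, win=3):
    """Median-filter a vertical strip of +-half_width columns around a
    seam line, then splice the result back into the disparity map."""
    out = disp.copy()
    h, w = disp.shape
    r = win // 2
    for yy in range(h):
        for xx in range(max(0, seam_col - half_width),
                        min(w, seam_col + half_width + 1)):
            y0, y1 = max(0, yy - r), min(h, yy + r + 1)
            x0, x1 = max(0, xx - r), min(w, xx + r + 1)
            out[yy, xx] = np.median(disp[y0:y1, x0:x1])
    return out
```

A column of outlier disparities along the seam is pulled back to the surrounding values, while pixels outside the strip are untouched.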
It will be appreciated that some embodiments may include some or all of the optional implementations of the embodiments described above.
In other embodiments of the invention, a storage medium is also provided, in which instructions are stored; when a computer reads the instructions, the computer is caused to execute the stereo matching method described in any of the above embodiments.
As shown in Fig. 7, which is the structural block diagram provided by an embodiment of the stereo matching device of the present invention, the stereo matching device includes:
a memory 1 for storing a computer program;
a processor 2 for executing the computer program to implement the stereo matching method described in any of the above embodiments.
Reader should be understood that in the description of this specification reference term " one embodiment ", " is shown " some embodiments "
The description of example ", " specific example " or " some examples " etc. mean specific features described in conjunction with this embodiment or example, structure,
Material or feature are included at least one embodiment or example of the invention.In the present specification, above-mentioned term is shown
The statement of meaning property need not be directed to identical embodiment or example.Moreover, particular features, structures, materials, or characteristics described
It may be combined in any suitable manner in any one or more of the embodiments or examples.In addition, without conflicting with each other, this
The technical staff in field can be by the spy of different embodiments or examples described in this specification and different embodiments or examples
Sign is combined.
In the several embodiments provided herein, it should be understood that the disclosed device and method may be realized in other ways. For example, the method embodiments described above are merely schematic: the division of steps is only a logical functional division, and there may be other divisions in actual implementation, such as multiple steps being combined or integrated into another step, or some features being ignored or not executed.
If the above method is realized in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or all or part of the technical solution, can be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
The above is only a specific embodiment of the present invention, but the protection scope of the present invention is not limited thereto; any person skilled in the art can readily conceive of various equivalent modifications or substitutions within the technical scope disclosed by the present invention, and these modifications or substitutions should be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (10)
1. A stereo matching method, characterized by comprising:
obtaining a stereo image pair, and partitioning the stereo image pair according to a preset partition strategy to obtain n stereo image block pairs, n ≥ 2;
performing disparity estimation on the n stereo image block pairs according to a preset deep learning model to obtain n disparity maps;
performing disparity fusion on the n disparity maps to obtain a disparity fusion map.
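As an illustration only (not part of the claims), the block-partition / per-block-estimation / fusion pipeline of claim 1 can be sketched in a few lines of NumPy. Here `predict_disparity` is a hypothetical stand-in for the preset deep learning model, and overlapping regions are fused by averaging, anticipating claim 7:

```python
import numpy as np

def stereo_match_blocked(left, right, block, step, predict_disparity):
    # Partition the stereo pair with an overlapping sliding window,
    # run the per-block disparity model on each block pair, and fuse
    # the block disparities by averaging where windows overlap.
    H, W = left.shape
    acc = np.zeros((H, W))   # sum of block disparities per pixel
    cnt = np.zeros((H, W))   # number of windows covering each pixel
    for r in range(0, H - block + 1, step):
        for c in range(0, W - block + 1, step):
            d = predict_disparity(left[r:r + block, c:c + block],
                                  right[r:r + block, c:c + block])
            acc[r:r + block, c:c + block] += d
            cnt[r:r + block, c:c + block] += 1
    return acc / np.maximum(cnt, 1)
```

With a linear stand-in model the fused output must equal the full-image result, which makes the averaging logic easy to sanity-check.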
2. The stereo matching method according to claim 1, characterized in that before partitioning the stereo image pair according to the preset partition strategy, the method further comprises:
performing epipolar rectification on the stereo image pair.
3. The stereo matching method according to claim 2, characterized in that the stereo image pair comprises a reference image and a target image, and performing epipolar rectification on the stereo image pair specifically comprises:
detecting SIFT feature points of the reference image and the target image respectively, and matching the SIFT feature points in the target image with the SIFT feature points in the reference image to obtain SIFT feature point pairs;
calculating a rotation matrix and a translation matrix of the target image relative to the reference image according to the SIFT feature point pairs;
performing epipolar rectification on the target image according to the rotation matrix and the translation matrix.
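Claim 3 does not spell out how the rotation and translation matrices are obtained from the SIFT point pairs. One common route, sketched below purely for illustration and under the assumption of calibrated (normalized) image coordinates, is a linear eight-point estimate of the essential matrix, which can afterwards be decomposed into a rotation and translation (e.g. with OpenCV's `cv2.recoverPose`):

```python
import numpy as np

def essential_from_matches(x1, x2):
    # Linear eight-point estimate of the essential matrix from N >= 8
    # correspondences x1, x2 of shape (N, 2) in normalized coordinates.
    a1 = np.hstack([x1, np.ones((len(x1), 1))])  # homogeneous points, image 1
    a2 = np.hstack([x2, np.ones((len(x2), 1))])  # homogeneous points, image 2
    # One row per correspondence: a2^T E a1 = 0 with E flattened row-major.
    A = np.stack([np.outer(p2, p1).ravel() for p1, p2 in zip(a1, a2)])
    E = np.linalg.svd(A)[2][-1].reshape(3, 3)    # null vector of A
    # Project onto the essential-matrix manifold: singular values (s, s, 0).
    U, s, Vt = np.linalg.svd(E)
    sig = (s[0] + s[1]) / 2.0
    return U @ np.diag([sig, sig, 0.0]) @ Vt
```

The returned matrix satisfies the epipolar constraint for noise-free correspondences; with real SIFT matches a robust (e.g. RANSAC) variant would be used instead.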
4. The stereo matching method according to claim 3, characterized in that detecting the SIFT feature points of the reference image and the target image respectively and matching the SIFT feature points in the target image with the SIFT feature points in the reference image specifically comprises:
detecting extreme points of the reference image and the target image respectively to obtain a plurality of extreme points;
calculating an offset of each extreme point, and judging whether there is an unstable extreme point whose offset is greater than a preset threshold; if so, re-performing interpolation at the position of the unstable extreme point, and obtaining a plurality of SIFT feature points after the judging is completed;
detecting the position and the scale of each SIFT feature point respectively, and determining a principal orientation of each SIFT feature point;
constructing a descriptor for each feature point according to the position, the scale and the principal orientation;
partitioning the SIFT feature points in the reference image and the target image respectively according to the descriptors and a K-D tree algorithm;
matching the SIFT feature points in the reference image and the target image according to a K-nearest-neighbor search algorithm.
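For illustration, the descriptor-matching step can be sketched as nearest-neighbor search with Lowe's ratio test. Brute-force distance computation stands in here for the K-D tree acceleration named in the claim (the matching logic is unchanged), and the 0.8 ratio threshold is an assumption, not a value taken from the patent:

```python
import numpy as np

def match_sift(desc_tgt, desc_ref, ratio=0.8):
    # Match each target descriptor to its nearest reference descriptor,
    # keeping only unambiguous matches via Lowe's ratio test.
    d = np.linalg.norm(desc_tgt[:, None, :] - desc_ref[None, :, :], axis=2)
    order = np.argsort(d, axis=1)          # reference indices by distance
    matches = []
    for i, nn in enumerate(order):
        # Accept only if the best match is clearly closer than the second.
        if d[i, nn[0]] < ratio * d[i, nn[1]]:
            matches.append((i, int(nn[0])))
    return matches
```

In practice the pairwise-distance matrix is what the K-D tree (or `scipy.spatial.cKDTree`) avoids building for large descriptor sets.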
5. The stereo matching method according to claim 1, characterized in that partitioning the stereo image pair according to the preset partition strategy specifically comprises:
partitioning the stereo image pair using an overlapping sliding window with a fixed step size.
6. The stereo matching method according to claim 5, characterized in that the horizontal sliding step and the vertical sliding step of the sliding window satisfy the following formulas:
step_x = (X − x) / (M − 1)
step_y = (Y − y) / (N − 1)
wherein step_x is the horizontal sliding step, step_y is the vertical sliding step, X × Y is the resolution of the initial image, M × N is the number of blocks, and x × y is the resolution of each block after partitioning.
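Assuming the M × N overlapping blocks must exactly span the X × Y image (the last window flush with the border, so (M − 1)·step_x + x = X), one consistent reading of the claim-6 relation can be sketched as:

```python
def sliding_steps(X, Y, x, y, M, N):
    # Window steps for which M x N overlapping blocks of size x by y
    # exactly span an X by Y image: the (M-1)-th horizontal move lands
    # the window flush with the right border, (M - 1) * step_x + x = X,
    # and likewise vertically. Integer division assumes exact tiling.
    step_x = (X - x) // (M - 1)
    step_y = (Y - y) // (N - 1)
    return step_x, step_y
```

For example, a 1000 × 800 image cut into 4 × 3 blocks of 400 × 350 gives steps of 200 and 225, i.e. adjacent blocks overlap by 200 and 125 pixels respectively.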
7. The stereo matching method according to claim 1, characterized in that performing disparity fusion on the n disparity maps specifically comprises:
stitching the n disparity maps according to the inverse process of the partition strategy, and fusing the redundant (overlapping) areas by taking the mean of their disparity values.
8. The stereo matching method according to any one of claims 1 to 7, characterized by further comprising:
performing median filtering on the seam lines of the disparity fusion map to remove the seam lines.
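Claim 8 does not fix the filter geometry. The sketch below assumes vertical seam lines and a horizontal median window, both illustrative choices; each seam pixel is replaced by the median of its cross-seam neighborhood, which suppresses the discontinuity left by stitching:

```python
import numpy as np

def smooth_vertical_seams(disp, seam_cols, half=2):
    # Replace each pixel on a vertical seam column with the median of a
    # (2 * half + 1)-wide horizontal neighborhood around that column.
    out = disp.copy()
    W = disp.shape[1]
    for c in seam_cols:
        lo, hi = max(0, c - half), min(W, c + half + 1)
        out[:, c] = np.median(disp[:, lo:hi], axis=1)
    return out
```

A full 2-D median filter (e.g. `scipy.ndimage.median_filter`) restricted to a band around each seam would be the heavier-duty equivalent.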
9. A storage medium, characterized in that instructions are stored in the storage medium, and when a computer reads the instructions, the computer is caused to execute the stereo matching method according to any one of claims 1 to 8.
10. A stereo matching device, characterized by comprising:
a memory, for storing a computer program;
a processor, for executing the computer program to implement the stereo matching method according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910694818.0A CN110473247A (en) | 2019-07-30 | 2019-07-30 | Solid matching method, device and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110473247A true CN110473247A (en) | 2019-11-19 |
Family
ID=68509815
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910694818.0A Pending CN110473247A (en) | 2019-07-30 | 2019-07-30 | Solid matching method, device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110473247A (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101487694A (en) * | 2009-03-03 | 2009-07-22 | 北京微视新纪元科技有限公司 | Method and apparatus for processing image |
CN102075779A (en) * | 2011-02-21 | 2011-05-25 | 北京航空航天大学 | Intermediate view synthesizing method based on block matching disparity estimation |
CN102263957A (en) * | 2011-07-25 | 2011-11-30 | 北京航空航天大学 | Search-window adaptive parallax estimation method |
CN104112263A (en) * | 2014-06-28 | 2014-10-22 | 南京理工大学 | Method for fusing full-color image and multispectral image based on deep neural network |
CN106600538A (en) * | 2016-12-15 | 2017-04-26 | 武汉工程大学 | Human face super-resolution algorithm based on regional depth convolution neural network |
CN108734660A (en) * | 2018-05-25 | 2018-11-02 | 上海通途半导体科技有限公司 | A kind of image super-resolution rebuilding method and device based on deep learning |
Non-Patent Citations (4)
Title |
---|
QINGLING JIA ET AL.: "DispNet based Stereo Matching for Planetary Scene Depth Estimation Using Remote Sensing Images", 2018 10th IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS) * |
ZHANG QIANGQIANG: "Optimization of Convolutional Neural Networks with Large-Image Input Based on Block Convolution", China Masters' Theses Full-text Database, Information Science and Technology * |
WANG HAO ET AL.: "Research on UAV Obstacle Avoidance Based on Binocular Vision", Detection Technology and Data Processing * |
YUAN JIE: "Research on SIFT-Based Image Registration and Stitching", Wanfang Data Knowledge Service Platform * |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113112412A (en) * | 2020-01-13 | 2021-07-13 | 株式会社理光 | Generation method and device of vertical correction matrix and computer readable storage medium |
CN113112412B (en) * | 2020-01-13 | 2024-03-19 | 株式会社理光 | Method and device for generating vertical correction matrix and computer readable storage medium |
CN111489385A (en) * | 2020-04-08 | 2020-08-04 | 北京市商汤科技开发有限公司 | Binocular stereo matching network training method and device |
CN111489385B (en) * | 2020-04-08 | 2021-12-07 | 北京市商汤科技开发有限公司 | Binocular stereo matching network training method and device |
CN111768434A (en) * | 2020-06-29 | 2020-10-13 | Oppo广东移动通信有限公司 | Disparity map acquisition method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20191119 |