CN105719288A - Binary-tree based object depth order evaluation method in monocular image - Google Patents


Info

Publication number
CN105719288A
Authority
CN
China
Prior art keywords
image
region
T-junction
depth order
angle
Prior art date
Legal status
Pending
Application number
CN201610034259.7A
Other languages
Chinese (zh)
Inventor
马健翔
周瑜
宋桂岭
Current Assignee
WUXI BUPT PERCEPTIVE TECHNOLOGY INDUSTRY INSTITUTE Co Ltd
Original Assignee
WUXI BUPT PERCEPTIVE TECHNOLOGY INDUSTRY INSTITUTE Co Ltd
Priority date
Filing date
Publication date
Application filed by WUXI BUPT PERCEPTIVE TECHNOLOGY INDUSTRY INSTITUTE Co Ltd filed Critical WUXI BUPT PERCEPTIVE TECHNOLOGY INDUSTRY INSTITUTE Co Ltd
Priority to CN201610034259.7A priority Critical patent/CN105719288A/en
Publication of CN105719288A publication Critical patent/CN105719288A/en

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a binary-tree-based method for estimating the depth order of objects in a monocular image. The method comprises: step 1, defining an intermediate distance between image regions; step 2, estimating a T-junction confidence for the T-junctions formed where image regions meet; step 3, constructing a binary partition tree from the intermediate distance and the T-junction confidence, thereby obtaining a region model of the image; and step 4, selecting an optimal set of T-junctions and completing depth ordering, thereby obtaining a depth-order graph. An improved depth-level restoration effect is thus achieved.

Description

Binary-tree-based method for estimating object depth order in a monocular image
Technical field
The present invention relates to the field of image processing, and in particular to a binary-tree-based method for estimating the depth order of objects in a monocular image.
Background technology
Depth-order reasoning for monocular images follows one of two main approaches: learning-based methods, and methods that infer order from low-level cues found in the image structure.
Among the first class, D. Hoiem et al. ("Recovering occlusion boundaries from a single image", ICCV, 2007, pp. 1-8) over-segment the image, extract color, texture, and vertical/horizontal position features for each region, and stack them in a Markov random field (MRF) framework to estimate depth order. These methods are learning-based: the MRF must be trained on ground-truth depth-hierarchy data. Their main drawback is that they work well only on images of the same type as the training images; when the scene type of a test image differs greatly from the training set, performance is mediocre and accurate object boundaries cannot be recovered.
Among the second class, M. Dimiccoli et al. ("Hierarchical region-based representation for segmentation and filtering with depth in single images", ICIP, 2009, pp. 3497-3500) use no training; instead they depth-order the objects in a scene by detecting depth-related cues such as occlusion and convexity. Such methods cannot infer absolute depth as the first class can, but they are more general and less constrained by the scene type of the image. T-junctions in an image are a strong occlusion cue, yet a robust depth-perception system can hardly be built on T-junction detection alone.
Depth-order reasoning for monocular images has many applications in computer vision and in practical production, but because three-dimensional information is missing from a single image, some existing inference methods perform poorly. M. Dimiccoli et al. use a binary partition tree (BPT) to merge regions and segment the image effectively, and estimate T-junctions after inferring regional depth levels. Although their results are good, some complex T-junctions are mis-estimated in the initial stage.
Summary of the invention
The object of the present invention is, in view of the problems above, to propose a binary-tree-based method for estimating object depth order in a monocular image, so as to improve depth-level restoration.
To achieve this object, the technical solution adopted by the present invention is:
A binary-tree-based method for estimating object depth order in a monocular image, comprising:
Step 1: define an intermediate distance between image regions;
Step 2: estimate a T-junction confidence for the T-junctions formed where image regions meet;
Step 3: construct a binary partition tree of regions from the intermediate distance and the T-junction confidence, thereby obtaining a region model of the image;
Step 4: select an optimal set of T-junctions and complete depth ordering, thereby obtaining a depth-order graph.
Preferably, in step 1 the intermediate distance between image regions is defined as follows:

    d_BPT^{1,2} = α × d_color^{1,2} + (1 − α) × d_contour^{1,2}

where d_BPT^{1,2} denotes the intermediate distance between image regions R1 and R2, d_color^{1,2} the color distance between them, and d_contour^{1,2} the contour distance between them; α is a known weight parameter balancing the contributions of the color and contour distances.
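As a minimal sketch (not part of the patent text), the weighted combination above can be written directly; the default α = 0.5 is an illustrative assumption, since the source only states that α is a known parameter:

```python
def intermediate_distance(d_color, d_contour, alpha=0.5):
    # Eq. (1): weighted sum of color and contour distances.
    # alpha = 0.5 is an illustrative default; the source treats
    # alpha as a known, pre-chosen weight.
    return alpha * d_color + (1.0 - alpha) * d_contour
```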
Preferably, in step 2 the T-junction confidence is estimated as:

    P = p_color × p_angle × p_curve

where P denotes the T-junction confidence, p_color the color-difference confidence, p_angle the angle-configuration confidence, and p_curve the boundary-curvature confidence.
Preferably, for the color-difference confidence p_color: the region pairs local to the T-junction, (i, j) ∈ {(1,2), (1,3), (2,3)}, are each measured in two ways, one a statistical measure c_s^{i,j} and the other a perceptual measure. From the statistical measures a combined measure is derived:

    c_r = min(c_s^{1,2}, c_s^{1,3}, c_s^{2,3}) − (1/c_s^{1,2} + 1/c_s^{1,3} + 1/c_s^{2,3})^{−1}

Each candidate T-junction thus yields 7 measures, namely 3 statistical measures, 3 perceptual measures, and the combined measure c_r. Each measure is a probability; assuming all 7 measures follow Rayleigh distributions, the product of the 7 probability values is the final color-difference confidence p_color.
Preferably, two measures Δθ_max and Δθ_min are defined, denoting respectively the absolute difference between the junction's maximum angle and 180° and between its minimum angle and 90°. Assuming that Δθ_max and Δθ_min follow Rayleigh distributions, the product of their probability values is the final angle-configuration confidence p_angle.
Preferably, estimating the T-junction confidence further includes: for each candidate T-junction n, compute p_{i,n}, the probability that, in the local area of junction n, image region R_i occludes the other two regions. Since more than one junction may relate regions R_1 and R_2, their judgments are combined to give the total probability that R_1 lies in front of R_2:

    p_1 = (1 − Π_{n=1}^{N_1} (1 − p_{1,n})) × Π_{n=1}^{N_2} (1 − p_{2,n})

where N_1 and N_2 are the numbers of T-junctions judging R_1, respectively R_2, to be in front. Because each p_{i,n} is independent, in general p_2 ≠ 1 − p_1, where p_2 denotes the total probability that R_2 lies in front of R_1.
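A sketch of this combination rule, assuming the per-junction probabilities are given as plain lists (the function name and data layout are illustrative, not from the source):

```python
import math

def front_probability(p1_votes, p2_votes):
    # p1_votes[n] is p_{1,n}, junction n's probability that R1
    # occludes its neighbours; p2_votes likewise for R2.
    # Returns the total probability that R1 lies in front of R2.
    not_p1 = math.prod(1.0 - p for p in p1_votes)
    not_p2 = math.prod(1.0 - p for p in p2_votes)
    return (1.0 - not_p1) * not_p2
```

Swapping the arguments gives p_2, and in general p_1 + p_2 < 1, consistent with the remark that p_2 ≠ 1 − p_1.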
Preferably, in step 3 the binary partition tree of regions is constructed from the intermediate distance and the T-junction confidence to obtain the region model of the image as follows:

Define the similarity measure between image regions:

    d_{1,2} = d_BPT^{1,2} × (1 − |p_1 − p_2|)

From this similarity measure, the most similar regions are merged, neighbourhoods and similarity measures are updated, and the process iterates, with the record stored as a tree structure. Construction of the binary partition tree starts from single pixels and continues until only one region, the entire image, remains, thereby yielding the region model of the image.
Preferably, in step 4 the optimal set of T-junctions is selected and depth ordering completed, thereby obtaining a depth-order graph, as follows:

Define the cost function:

    C(R) = Σ_{i∈R} c_i / c_max + γ_N × N + γ_u × U

where R is the set of rejected T-junctions; c_i denotes the cost of individual junction i, which involves the region area of its local window; γ_u denotes the penalty coefficient for the set U of isolated nodes in the depth-order graph; N denotes the number of 4-connections in the depth-ordered image; γ_N is half the geometric mean of all normalized junction costs c_i / c_max; p_i denotes the probability that image region R_i occludes the other regions in its local area; and c_max denotes the maximum cost over all junctions. The cost function is minimized by an iterative minimization loop, thereby obtaining the depth-order graph.
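A sketch of this cost, assuming the per-junction costs are given as plain numbers; the source gives no value for γ_u, so it is left as a parameter, and taking γ_N as half the geometric mean of *all* normalized costs follows the wording above:

```python
import math

def total_cost(rejected_costs, all_costs, n_connections, n_isolated, gamma_u=1.0):
    # Data term over the rejected junctions, plus penalties on the
    # number N of 4-connections and the isolated-node count U.
    c_max = max(all_costs)
    normalized_all = [c / c_max for c in all_costs]
    # gamma_N: half the geometric mean of all normalized junction costs.
    gamma_n = 0.5 * math.prod(normalized_all) ** (1.0 / len(normalized_all))
    data_term = sum(c / c_max for c in rejected_costs)
    return data_term + gamma_n * n_connections + gamma_u * n_isolated
```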
Preferably, solving by iteratively minimizing the cost function in a minimization loop specifically includes:

a step of randomly selecting junctions from the currently retained T-junction set to generate a candidate solution;

a step of region extension, in which nodes are chosen from the constructed binary partition tree to form an image segmentation; the segmentation is constrained to retain all selected T-junctions, so that a depth-order graph (DOG) can be constructed from the junctions' depth relations;

a step of conflict resolution, in which the depth-order inference is corrected against the constructed DOG: cycles in the DOG are removed by successively deleting the T-junctions of lowest probability, thereby resolving conflicts arising in local hierarchy inference;

a step of depth-order sorting, in which each region is given a depth label according to the partial order of the regions in the corrected DOG.
The technical solution of the present invention has the following beneficial effects: (1) the depth-inference system relies on no prior knowledge of scene structure, concentrating instead on detecting specific points from which to infer the depth relations of objects in the scene; (2) a framework combining T-junction estimation with an extended binary partition tree as the segmentation tool is proposed for depth-order inference, improving the robustness of the estimate; (3) the depth-inference image is obtained by iteratively minimizing a specific cost function, and each iteration not only estimates T-junctions and builds the binary partition tree but also screens the T-junctions effectively, thereby guaranteeing convergence of the solution and saving time and space.
The technical solution of the present invention is described in further detail below with reference to the drawings and embodiments.
Brief description of the drawings
Fig. 1 is a flow chart of the binary-tree-based method for estimating object depth order in a monocular image described in the embodiments of the invention;
Fig. 2 is a schematic diagram of a T-junction;
Fig. 3 is a diagram of the minimization loop.
Detailed description of the invention
The preferred embodiments of the present invention are described below with reference to the drawings. It should be understood that the preferred embodiments described here serve only to illustrate and explain the invention, not to limit it.
As shown in Fig. 1, the binary-tree-based method for estimating object depth order in a monocular image comprises:
Step 1: define an intermediate distance between image regions;
Step 2: estimate a T-junction confidence for the T-junctions formed where image regions meet;
Step 3: construct a binary partition tree of regions from the intermediate distance and the T-junction confidence, thereby obtaining a region model of the image;
Step 4: select an optimal set of T-junctions and complete depth ordering, thereby obtaining a depth-order graph.
The technical scheme is explained in four main steps: defining the inter-region intermediate distance; T-junction estimation; constructing the binary partition tree (BPT) of regions; and iterative T-junction selection with depth ordering.

1. Defining the inter-region intermediate distance d_BPT:

Each single pixel of the image serves as an initial region. Following the work of M. Dimiccoli et al. ("Hierarchical region-based representation for segmentation and filtering with depth in single images", ICIP, 2009, pp. 3497-3500), constructing a binary partition tree (BPT) of image regions requires a region model that defines an inter-region similarity measure, so that regions can be merged in order of similarity. The color information of a region is defined in the CIELab color space as three histograms, one per color channel. The histograms of the initial pixels are defined using the self-similarity employed in the work of M. Dimiccoli et al. The inter-region intermediate distance d_BPT is then measured from color, area, contour, and depth information. The color distance d_color^{1,2} between regions R_1 and R_2 is defined by the earth mover's distance (EMD); the contour distance d_contour^{1,2} follows the work of V. Vilaplana et al. ("Binary partition trees for object detection", IEEE Trans. on Image Processing, 2008, vol. 17, no. 11, pp. 2201-2216). The contour measure is computed only when both region areas exceed 50 pixels; otherwise it is meaningless. Discarding the depth information, the intermediate distance is defined as:
    d_BPT^{1,2} = α × d_color^{1,2} + (1 − α) × d_contour^{1,2}    (1)
where α is a known weight parameter balancing the contributions of the color and contour distances. Once this intermediate distance is fixed, T-junction information can be introduced.
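The color term in Eq. (1) is an EMD between CIELab channel histograms. For 1-D histograms with unit-width bins, the EMD reduces to the summed absolute difference of the cumulative histograms; the sketch below assumes equal-mass, same-length histograms and that the three channel distances are simply summed (the source does not state how the channels are combined):

```python
def emd_1d(h1, h2):
    # 1-D earth mover's distance for unit-width bins: accumulate the
    # CDF difference and sum its absolute value.
    cdf_diff, total = 0.0, 0.0
    for a, b in zip(h1, h2):
        cdf_diff += a - b
        total += abs(cdf_diff)
    return total

def color_distance(hists1, hists2):
    # One histogram per CIELab channel; summing the per-channel EMDs
    # is an illustrative choice, not specified in the source.
    return sum(emd_1d(a, b) for a, b in zip(hists1, hists2))
```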
2. T-junction estimation:

In general, a T-junction is formed where three region edges meet (as shown in Fig. 2), and T-junctions are an effective cue for judging occlusion. For each T-junction a confidence value p is estimated (writing p for p_{1,n}, the probability that, in the local area of junction n, R_1 occludes the other two regions); p is weighed comprehensively from the color-difference, angle-configuration, and boundary-curvature confidences. In practice, a window of appropriate size is taken around each T-junction for the judgment. Since the three features are mutually independent, p = p_color × p_angle × p_curve.
For the color confidence, the region pairs local to the T-junction, (i, j) ∈ {(1,2), (1,3), (2,3)}, are each measured in two ways. One is a statistical measure c_s^{i,j}, obtained from a two-sample Hotelling T² test. The other is a perceptual measure, defined by the Euclidean distance between the regions' average color measurements. To penalize the dissimilarity of high-dimensional statistical distances, the minimum statistical distance is used, giving the combined measure

    c_r = min(c_s^{1,2}, c_s^{1,3}, c_s^{2,3}) − (1/c_s^{1,2} + 1/c_s^{1,3} + 1/c_s^{2,3})^{−1}

Each candidate T-junction thus yields 7 measures (3 statistical, 3 perceptual, and c_r), each of which is a probability. Assuming all follow Rayleigh distributions, the product of the 7 probability values is the final p_color.
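The source maps each of the 7 color measures to a probability under a Rayleigh assumption but does not give the mapping explicitly; the sketch below uses the Rayleigh survival function with an assumed scale σ, so the exact form is illustrative:

```python
import math

def rayleigh_sf(x, sigma=1.0):
    # Rayleigh survival function P(X > x); sigma is an assumed scale.
    return math.exp(-(x * x) / (2.0 * sigma * sigma))

def p_color(measures, sigma=1.0):
    # Product of the 7 per-measure probabilities
    # (3 statistical, 3 perceptual, 1 combined).
    p = 1.0
    for m in measures:
        p *= rayleigh_sf(m, sigma)
    return p
```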
Angles matter greatly in a T-junction: the region with the largest subtended angle is usually the occluding region (as shown in Fig. 2). Using the three edges at the junction, the tangent direction of each branch can be determined as its mean direction, from which the pairwise angles between the directions follow. An ideal T-junction contains one maximum angle of 180° and two minimum angles of 90°. Accordingly, two measures Δθ_max and Δθ_min are defined, the absolute differences between the maximum angle and 180° and between the minimum angle and 90°, respectively. To obtain a confidence value these are likewise assumed to follow Rayleigh distributions, and the product of their probability values is the final p_angle.
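The two angle measures can be sketched as follows, assuming the three branch angles of the junction are given in degrees (the helper name is illustrative):

```python
def angle_measures(angles):
    # Deviation of the largest angle from the ideal 180 degrees and of
    # the smallest from the ideal 90 degrees.
    d_max = abs(max(angles) - 180.0)
    d_min = abs(min(angles) - 90.0)
    return d_max, d_min
```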
Curvature weighs the degree of bending of a branch: if a boundary is highly curved, the associated junction is clearly unlikely to be a T-junction. Measuring boundary curvature draws on level-set theory; following the method of F. Guichard and J. M. Morel ("Image analysis and PDEs", Institute for Pure and Applied Mathematics GBM Tutorial, 2001), three curvature measures can be obtained. Again assuming Rayleigh distributions, the final p_curve is likewise the product of the corresponding probabilities.
For each candidate junction n, p_{i,n} can be computed, i.e. the probability that, in the local area of junction n, R_i occludes the other two regions. Since more than one junction may relate regions R_1 and R_2, their judgments are combined to give the total probability that R_1 lies in front of R_2:

    p_1 = (1 − Π_{n=1}^{N_1} (1 − p_{1,n})) × Π_{n=1}^{N_2} (1 − p_{2,n})    (2)

where N_1 and N_2 are the numbers of T-junctions judging R_1, respectively R_2, to be in front. Because each p_{i,n} is independent, in general p_2 ≠ 1 − p_1.
3. Constructing the binary partition tree (BPT) of regions:

Steps 1 and 2 have constructed the region model, so the inter-region similarity measure can be defined:

    d_{1,2} = d_BPT^{1,2} × (1 − |p_1 − p_2|)    (3)

With this similarity measure, regions are merged in order of similarity, neighbourhoods and similarity measures are updated, and the process iterates, with the record stored as a tree structure. The binary partition tree of regions is constructed starting from single pixels (as leaves) until only one region, the entire image, remains.
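The greedy merging that builds the BPT can be sketched as below. Regions are reduced to opaque sets of pixel ids and the distance is passed in as a callback; the real method also maintains neighbourhood relations and region histograms, which this toy loop omits:

```python
def build_bpt(initial_regions, distance):
    # Greedy BPT construction: repeatedly merge the closest pair of
    # regions, recording each merge as (left, right, parent).
    regions = [frozenset([r]) for r in initial_regions]
    tree = []
    while len(regions) > 1:
        best = None
        for i in range(len(regions)):
            for j in range(i + 1, len(regions)):
                d = distance(regions[i], regions[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        merged = regions[i] | regions[j]
        tree.append((regions[i], regions[j], merged))
        regions = [r for k, r in enumerate(regions) if k not in (i, j)]
        regions.append(merged)
    return tree
```

The last recorded parent is the root, i.e. the entire image.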
4. Iterative T-junction selection and depth ordering:

If every T-junction were used to produce the final depth-order inference, errors and ordering conflicts would occur. For the problem to converge and be solved effectively, some junctions must be discarded. The best set of T-junctions must therefore be selected, which can be done by iteratively minimizing a cost function.

To define a suitable cost function, three factors must be considered. First, each candidate junction has a confidence value, so the probability corresponding to a genuine, valid T-junction should be large. Second, in a real image the number of T-junctions should be small, so the final depth-order graph should also contain few regions and depth levels. Third, in the depth-order graph (DOG), each region should have at least one depth relation with a neighbouring region, i.e. there should be no isolated nodes. Based on these considerations, the cost function is defined as:

    C(R) = Σ_{i∈R} c_i / c_max + γ_N × N + γ_u × U    (4)

where R is the set of rejected T-junctions; c_i denotes the cost of individual junction i, which involves the region area of its local window; γ_u denotes the penalty coefficient for the set U of isolated nodes in the depth-order graph; c_max denotes the maximum cost over all junctions; N denotes the number of 4-connections in the depth-ordered image; and γ_N is set to half the geometric mean of all normalized junction costs c_i / c_max.

With the cost function defined, a solution is found by iteratively minimizing it in a minimization loop, shown in Fig. 3.

The initial T-junctions are selected by setting a probability threshold. In each iteration, the first step is to randomly select junctions from the previously retained T-junction set to generate a candidate solution; the probability of selection depends directly on a junction's p value. The second step is to choose nodes from the constructed binary partition tree to form an image segmentation and perform region extension; the segmentation is constrained to retain all selected T-junctions, so that a depth-order graph (DOG) can be constructed from the junctions' depth relations. The third step is conflict resolution: the depth-order inference is corrected against the constructed DOG, removing cycles by successively deleting the T-junctions of lowest probability, thereby resolving conflicts arising in local hierarchy inference. The fourth step labels each region's depth according to the partial order of the regions in the corrected DOG, i.e. depth-order sorting. Finally, the depth image is obtained. The cost function of Eq. (4) is then computed and minimized, giving the solution of minimum cost, i.e. the set of T-junctions to delete. The loop then restarts until convergence; the solution of final minimum cost and its depth-order image are taken as the final result.
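The iteration above can be sketched as the skeleton below. Sampling keeps each junction with probability equal to its p value, per the text; `build_dog` and `cost` stand in for the segmentation/DOG-repair and cost-evaluation stages, which the source describes but this sketch does not implement:

```python
import random

def minimize_junction_set(junctions, build_dog, cost, n_iters=100, seed=0):
    # Iterative selection sketch: sample a candidate junction subset,
    # build the depth-order graph, and keep the lowest-cost solution.
    rng = random.Random(seed)
    best_set, best_cost = None, float("inf")
    for _ in range(n_iters):
        kept = [j for j in junctions if rng.random() < j["p"]]
        dog = build_dog(kept)
        c = cost(kept, dog)
        if c < best_cost:
            best_set, best_cost = kept, c
    return best_set, best_cost
```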
The technical scheme reviews and summarizes the two main approaches to monocular depth estimation and, by comparison, rejects learning-based estimation for its strong scene limitations and other inherent shortcomings, while improving and adapting the low-level-cue approach. To address the fact that a robust depth-perception system cannot be defined by T-junction detection alone, and the limitations of constructing an image binary partition tree, the binary partition tree (BPT) is used as an extended segmentation tool, the two steps of T-junction detection and T-junction-preserving region merging designed by M. Dimiccoli et al. are combined into a single step to improve the robustness of T-junction estimation, and a new iterative process is proposed to determine the final depth order.
Finally, it should be noted that the above are only preferred embodiments of the present invention and do not limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described therein or substitute equivalents for some of their technical features. Any modification, equivalent substitution, improvement, etc. made within the spirit and principles of the present invention shall be included within its scope of protection.

Claims (9)

1. A binary-tree-based method for estimating object depth order in a monocular image, characterized by comprising:
step 1, defining an intermediate distance between image regions;
step 2, estimating a T-junction confidence for the T-junctions formed where image regions meet;
step 3, constructing a binary partition tree of regions from the intermediate distance and the T-junction confidence, thereby obtaining a region model of the image;
step 4, selecting an optimal set of T-junctions and completing depth ordering, thereby obtaining a depth-order graph.
2. The binary-tree-based method for estimating object depth order in a monocular image according to claim 1, characterized in that in step 1 the intermediate distance between image regions is defined as follows:

    d_BPT^{1,2} = α × d_color^{1,2} + (1 − α) × d_contour^{1,2}

where d_BPT^{1,2} denotes the intermediate distance between image regions R1 and R2, d_color^{1,2} the color distance between them, and d_contour^{1,2} the contour distance between them; α is a known weight parameter balancing the contributions of the color and contour distances.
3. The binary-tree-based method for estimating object depth order in a monocular image according to claim 2, characterized in that in step 2 the T-junction confidence is estimated as:

    P = p_color × p_angle × p_curve

where P denotes the T-junction confidence, p_color the color-difference confidence, p_angle the angle-configuration confidence, and p_curve the boundary-curvature confidence.
4. The binary-tree-based method for estimating object depth order in a monocular image according to claim 3, characterized in that for the color-difference confidence p_color: the region pairs local to the T-junction, (i, j) ∈ {(1,2), (1,3), (2,3)}, are each measured in two ways, one a statistical measure c_s^{i,j} and the other a perceptual measure; from the statistical measures the combined measure

    c_r = min(c_s^{1,2}, c_s^{1,3}, c_s^{2,3}) − (1/c_s^{1,2} + 1/c_s^{1,3} + 1/c_s^{2,3})^{−1}

is derived, so that each candidate T-junction yields 7 measures, namely 3 statistical measures, 3 perceptual measures, and the combined measure c_r; each measure is a probability, and assuming all 7 measures follow Rayleigh distributions, the product of the 7 probability values is the final color-difference confidence p_color.
5. The binary-tree-based method for estimating object depth order in a monocular image according to claim 3, characterized in that two measures Δθ_max and Δθ_min are defined, denoting respectively the absolute difference between the junction's maximum angle and 180° and between its minimum angle and 90°; assuming that Δθ_max and Δθ_min follow Rayleigh distributions, the product of their probability values is the final angle-configuration confidence p_angle.
6. The binary-tree-based method for estimating object depth order in a monocular image according to claim 3, characterized in that estimating the T-junction confidence specifically includes: for each candidate T-junction n, computing p_{i,n}, the probability that, in the local area of junction n, image region R_i occludes the other two regions; since more than one junction may relate regions R_1 and R_2, their judgments are combined to give the total probability that R_1 lies in front of R_2:

    p_1 = (1 − Π_{n=1}^{N_1} (1 − p_{1,n})) × Π_{n=1}^{N_2} (1 − p_{2,n})

where N_1 and N_2 are the numbers of T-junctions judging R_1, respectively R_2, to be in front; because each p_{i,n} is independent, in general p_2 ≠ 1 − p_1, where p_2 denotes the total probability that R_2 lies in front of R_1.
7. The binary-tree-based method for estimating object depth order in a monocular image according to claim 6, characterized in that in step 3 the binary partition tree of regions is constructed from the intermediate distance and the T-junction confidence to obtain the region model of the image as follows:

define the similarity measure between image regions:

    d_{1,2} = d_BPT^{1,2} × (1 − |p_1 − p_2|)

from this similarity measure, the most similar regions are merged, neighbourhoods and similarity measures are updated, and the process iterates, with the record stored as a tree structure; construction of the binary partition tree starts from single pixels and continues until only one region, the entire image, remains, thereby yielding the region model of the image.
8. The binary-tree-based method for estimating object depth order in a monocular image according to claim 7, characterized in that in step 4 the optimal set of T-junctions is selected and depth ordering completed, thereby obtaining a depth-order graph, specifically including:

define the cost function as follows:

    C(R) = Σ_{i∈R} c_i / c_max + γ_N × N + γ_u × U

where R is the set of rejected T-junctions; c_i denotes the cost function of individual junction i, which involves the region area of its local window; γ_u denotes the penalty coefficient for the set U of isolated nodes in the depth-order graph; N denotes the number of 4-connections in the depth-ordered image; γ_N is half the geometric mean of all normalized junction costs c_i / c_max; p_i denotes the probability that image region R_i occludes the other regions in its local area; and c_max denotes the maximum cost over all junctions; the cost function is minimized by an iterative minimization loop, thereby obtaining the depth-order graph.
9. The binary-tree-based method for estimating object depth order in a monocular image according to claim 8, characterized in that solving by iteratively minimizing the cost function in a minimization loop specifically includes:

a step of randomly selecting junctions from the currently retained T-junction set to generate a candidate solution;

a step of region extension, in which nodes are chosen from the constructed binary partition tree to form an image segmentation, the segmentation being constrained to retain all selected T-junctions, so that a depth-order graph (DOG) can be constructed from the junctions' depth relations;

a step of conflict resolution, in which the depth-order inference is corrected against the constructed DOG, cycles in the DOG being removed by successively deleting the T-junctions of lowest probability, thereby resolving conflicts arising in local hierarchy inference;

a step of depth-order sorting, in which each region is given a depth label according to the partial order of the regions in the corrected DOG.
CN201610034259.7A 2016-01-19 2016-01-19 Binary-tree based object depth order evaluation method in monocular image Pending CN105719288A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610034259.7A CN105719288A (en) 2016-01-19 2016-01-19 Binary-tree based object depth order evaluation method in monocular image

Publications (1)

Publication Number Publication Date
CN105719288A true CN105719288A (en) 2016-06-29

Family

ID=56147740

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610034259.7A Pending CN105719288A (en) 2016-01-19 2016-01-19 Binary-tree based object depth order evaluation method in monocular image

Country Status (1)

Country Link
CN (1) CN105719288A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7292256B2 (en) * 2003-06-26 2007-11-06 Canon Kabushiki Kaisha Optimising compositing calculations for a run of pixels
EP1859410A1 (en) * 2005-03-17 2007-11-28 British Telecommunications Public Limited Company Method of tracking objects in a video sequence
CN101390090B * 2006-02-28 2011-11-16 Microsoft Corp. Object-level image editing
CN104899883A (en) * 2015-05-29 2015-09-09 北京航空航天大学 Indoor object cube detection method for depth image scene


Non-Patent Citations (1)

Title
GUILLEM PALOU et al.: "Occlusion-based depth ordering on monocular images with binary partition tree", 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) *

Cited By (3)

Publication number Priority date Publication date Assignee Title
CN109785328A * 2017-11-13 2019-05-21 Nanjing University An objectness estimation method combining region merging with depth connectivity
CN107993239A * 2017-12-25 2018-05-04 Beijing University of Posts and Telecommunications A method and apparatus for calculating the depth order of a monocular image
CN107993239B (en) * 2017-12-25 2022-04-12 北京邮电大学 Method and device for calculating depth order of monocular image

Similar Documents

Publication Publication Date Title
Wei et al. Toward automatic building footprint delineation from aerial images using CNN and regularization
Xiong et al. Automatic creation of semantically rich 3D building models from laser scanner data
Jung Detecting building changes from multitemporal aerial stereopairs
Shu et al. Shoreline extraction from RADARSAT-2 intensity imagery using a narrow band level set segmentation approach
JP5939056B2 (en) Method and apparatus for positioning a text region in an image
CN109255781B (en) Object-oriented multispectral high-resolution remote sensing image change detection method
CN110866455B (en) Pavement water body detection method
CN110189339A Depth-map-assisted active contour image matting method and system
CN111027446B (en) Coastline automatic extraction method of high-resolution image
CN104933721A (en) Spliced image-tamper detection method based on color filter array characteristic
CN111047603B (en) Aerial image hybrid segmentation algorithm based on novel Markov random field and region combination
CN105389774A (en) Method and device for aligning images
CN101630407B (en) Method for positioning forged region based on two view geometry and image division
CN110399820B (en) Visual recognition analysis method for roadside scene of highway
CN111738295B (en) Image segmentation method and storage medium
CN115641327B (en) Building engineering quality supervision and early warning system based on big data
CN106257537A (en) A kind of spatial depth extracting method based on field information
CN111178193A (en) Lane line detection method, lane line detection device and computer-readable storage medium
CN111461043A (en) Video significance detection method based on deep network
Stentoumis et al. A local adaptive approach for dense stereo matching in architectural scene reconstruction
CN104835142B (en) A kind of vehicle queue length detection method based on textural characteristics
CN116279592A (en) Method for dividing travelable area of unmanned logistics vehicle
CN109300115B (en) Object-oriented multispectral high-resolution remote sensing image change detection method
CN105719288A (en) Binary-tree based object depth order evaluation method in monocular image
CN114332644A (en) Large-view-field traffic density acquisition method based on video satellite data

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20160629