CN104599283B - Image depth improvement method for camera height recovery based on depth difference - Google Patents

Image depth improvement method for camera height recovery based on depth difference

Info

Publication number
CN104599283B
CN104599283B CN201510070896.5A CN201510070896A
Authority
CN
China
Prior art keywords
depth
ground
point
camera
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510070896.5A
Other languages
Chinese (zh)
Other versions
CN104599283A (en)
Inventor
隋铭明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Forestry University
Original Assignee
Nanjing Forestry University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Forestry University filed Critical Nanjing Forestry University
Priority to CN201510070896.5A priority Critical patent/CN104599283B/en
Publication of CN104599283A publication Critical patent/CN104599283A/en
Application granted granted Critical
Publication of CN104599283B publication Critical patent/CN104599283B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an image depth improvement method that recovers the camera height from a depth difference, comprising three parts: (1) building a depth calculation model for the ground region of a single image; (2) error analysis of the depth calculation model; (3) camera-height inversion based on a depth difference. The invention applies to indoor and outdoor close-range images that contain the ground. For the ground region of the image, an absolute-depth calculation model for each ground point in the scene is derived from the imaging geometry of the composition. In particular, a method for determining the camera height H, a key parameter of the depth calculation model, is proposed; this improves the depth calculation model and raises the accuracy and reliability of the calculation.

Description

Image depth improvement method for camera height recovery based on depth difference
Technical field
The present invention relates to a computer image processing method, in particular to an image depth improvement method that recovers the camera height from a depth difference.
Background technology
Depth information is an indispensable part of the spatial information in an image. Depth estimation aims to determine, by computation on the image data, the spatial relations between different objects in an image and thereby obtain depth information. Depending on the result, depth estimation is divided into relative depth estimation and absolute depth estimation. Relative depth estimation determines the relative positions of different objects in an image, for example that a person stands in front of a house and trees stand behind it; the depth values only need to distinguish the front-back order of the targets and are not necessarily the absolute distance from each target to the camera. Absolute depth estimation computes the vertical distance of a target in the image relative to the shooting camera, and may even provide absolute depth information for every pixel.
There are many existing single-image depth estimation methods, mainly including: 1) methods based on implicit information such as shading, texture and occlusion cues; 2) methods based on machine learning; 3) methods based on imaging models.
Shading-based methods construct a reflection model from the scene image and the light source and use the variation of shading to estimate depth. Shading is a depth cue available from a monocular image, and the conventional method is SFS (Shape from Shading). Because an object's surface generally varies with its shape, illumination produces light-dark variation, i.e. shading in the image. In general, raised parts of an object appear brighter than recessed parts, so intuitively, if the shading in the image and the direction of the light source are known, there is reason to believe the three-dimensional model of the object surface can be recovered; this is called recovering depth from shading. Durou et al. treat the algorithms for shape from shading in detail. Texture can also serve as a cue for image depth recovery. For an object with a certain texture, the farther the target is from the camera, the lower the resolution and the blurrier the texture; conversely, the closer the target, the higher the resolution and the clearer the texture. The distance relations of objects can therefore be estimated from the degree of texture blur. Methods that recover depth from texture information are commonly called "Depth from Texture". Loh et al. propose a representative method. Texture-based methods also require prior knowledge of the texture. Lan et al. propose a single-image depth estimation method based on a multi-scale texture energy measure.
The relative depth between objects can be judged from their occlusion relations: an occluded object lies at a deeper position. Occlusion is an important depth cue used by humans to judge distance, and is one of the main reasons a single eye can judge near and far. The human eye can judge occlusion relations between objects fairly easily from prior knowledge such as object category, size, color and shape, whereas a computer requires algorithms that determine occlusion automatically. Wu et al. propose a method for occlusion detection in monocular video; the method assumes that when occlusion occurs the object is occluded from one of its sides (top, bottom, left or right), and that the pixel values in the boundary region change much more than those in the object interior. Thoma et al. and Izquierdo propose occlusion detection methods based on photometry and geometry respectively. Palou et al. perform depth estimation using occlusion cues.
Recovering depth with machine learning has gradually become an effective depth estimation approach. Its basic principle is to learn the relation between object features and depth (distance) in the real world, establish a depth estimation model, and then infer the depth of targets in unknown images. For example, the imaging size of any real-world object in a photo is directly related to its distance from the camera: the closer the object is to the camera, the larger its image. If the correspondence between the depth of a certain object and its size in the image has been learned, the current depth of that object can be inferred from its imaging size.
Saxena et al. applied machine learning to image depth estimation; many scholars have since refined the results with improved Markov random field algorithms. Besides Markov random field models, other training methods have also been used for depth estimation. Lin et al. perform depth estimation with a support vector machine (SVM) trainer; Battiato et al. generate depth maps by scene classification; Nedovic et al. perform image depth estimation by geometric scene classification with SVM and AdaBoost algorithms.
Depth estimation based on imaging models uses the geometric model of camera imaging and structural information in the photo to construct equations relating depth to image coordinates, and thereby infers the absolute depth of targets. The image is usually first divided into regions, using image segmentation or superpixel algorithms. Hoiem et al. use AdaBoost classification to divide the image into ground, sky and vertical scenery, build a three-dimensional model of the ground region, and reconstruct the overall three-dimensional scene from the relation of other objects to the ground plane. Salih et al. construct a triangular geometric relation between depth and parameters such as the camera height and the imaging field of view; when the camera height, pitch angle and field of view are known, the three-dimensional coordinates of points in the two-dimensional image can be computed, realizing depth estimation. Li Le et al. segment targets in street-view images with a superpixel algorithm, derive the functional relation between image coordinates and depth from the pinhole imaging model, and carry out depth recovery experiments on street-view images. Zhang et al. use AdaBoost for region labeling and recover image depth through a camera imaging model. Torralba et al. estimate the imaging scale of the scene from its overall structure and thereby obtain the mean absolute depth of the scene. Liu et al. first segment the image into different targets with semantic segmentation and then estimate depth with a Markov random field. Yang et al. realize semi-automatic single-image depth estimation from local depth hypotheses. Other work has studied sequence images obtained by an airborne forward-looking passive sensor based on monocular vision, combined with inertial navigation information, to recover the depth of the forward-looking scene.
In summary, there are many depth estimation methods, each with its own restrictions and applicable situations. Because depth estimation occupies a key position in image information extraction, the quantity and level of related research keep rising, but the field as a whole is still developing and many problems remain to be solved.
1) Existing depth estimation methods often focus on the depth of specific targets: scene targets are identified and extracted, and a depth map reflecting target depth changes is drawn, for example representing moving foreground targets and background targets with different gray values to reflect their depth relation. They lack a continuous description of the depth of the whole scene, which hampers the overall understanding and expression of the spatial scene.
2) Most existing research recovers only the relative depth of the scene, i.e. the occlusion relations and relative positions of objects; the depth is often discontinuous or rests on some assumption or distribution of depth, and most work does not recover the absolute depth of targets in the scene.
3) Many existing research methods are complex and their restrictions are strict; they usually require shooting a large number of images under different conditions and performing depth estimation with complicated learning procedures or formulas, conditions that often cannot be met.
The content of the invention
Goal of the invention: The technical problem to be solved by the invention, in view of the deficiencies of the prior art, is to provide an image depth improvement method that recovers the camera height from a depth difference. The method reduces the restrictions on the data source by making full use of the data resource of a single image, recovers the absolute depth of targets with a geometry-based depth estimation method, and improves the reliability of the depth estimation.
In order to solve the above technical problem, the invention discloses an image depth improvement method that recovers the camera height from a depth difference, comprising three parts:
(1) building a depth calculation model for the ground region of a single image;
(2) error analysis of the depth calculation model;
(3) camera-height recovery based on a depth difference.
In the present invention, building the ground-region depth calculation model for a single image includes: establishing the depth calculation model by geometric analysis of the camera imaging model, calculating the depth information of the ground region in the image, and dividing the calculation of the depth of a ground point into three situations:
Situation 1: depth calculation when the ground point is nearer than the central point;
Situation 2: depth calculation when the ground point is farther than the central point;
Situation 3: depth calculation when the ground undulates.
In the present invention, the formula for the depth D in situation 1 is:

D = \frac{H\left(f^2 - s^2(v_g - v_c)(v_c - v_p)\right)}{f s (v_g - v_p)},

where H is the measured camera height, f is the focal length of the camera, s is the ratio between the physical size of the camera's photosensitive CCD and the image pixel coordinates, v_c, v_g, v_p are the ordinates of points c, g, p in the image coordinate system, c is the center of the image plane, g is the image point of ground point G, and p is the intersection of the horizon with the image plane.
In the present invention, the formula for the depth D in situation 2 is:

D = \frac{H\left(f^2 + s^2(v_g - v_c)(v_c - v_p)\right)}{f s\left((v_c - v_p) - (v_g - v_c)\right)}.
In the present invention, in situation 3, point A lies higher than the ground plane of ground point G by h_A and point B lies lower than the ground plane of ground point G by h_B; the depth formulas are then:

D_A = \frac{(H - h_A)\left(f^2 + s^2(v_g - v_c)(v_c - v_p)\right)}{f s\left((v_c - v_p) - (v_g - v_c)\right)},

D_B = \frac{(H + h_B)\left(f^2 + s^2(v_g - v_c)(v_c - v_p)\right)}{f s\left((v_c - v_p) - (v_g - v_c)\right)},

where D_A is the depth of point A and D_B is the depth of point B.
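For illustration, the three situations above can be sketched as a small Python function; the function name, argument order, and the signed-offset convention for undulating ground are assumptions made here for readability, not notation from the patent:

```python
def ground_depth(H, f, s, v_g, v_c, v_p, nearer_than_center=True, h=0.0):
    """Depth of a ground point from the formulas of situations 1-3.

    H : camera height above the reference ground plane (same unit as the result)
    f : focal length; s : physical sensor size per image pixel (same length unit as f)
    v_g, v_c, v_p : image ordinates of the ground point, the image center, the horizon
    nearer_than_center : True for situation 1, False for situations 2 and 3
    h : signed height of the point's local ground relative to the reference plane
        (situation 3: positive for a raised point A, negative for a lowered point B)
    """
    cross = s * s * (v_g - v_c) * (v_c - v_p)
    if nearer_than_center:
        # Situation 1: D = H (f^2 - s^2 (v_g - v_c)(v_c - v_p)) / (f s (v_g - v_p))
        return H * (f * f - cross) / (f * s * (v_g - v_p))
    # Situations 2 and 3: the effective camera height over the point's ground is H - h,
    # which reduces to H - h_A for a raised point and H + h_B for a lowered point.
    return (H - h) * (f * f + cross) / (f * s * ((v_c - v_p) - (v_g - v_c)))
```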
In the present invention, the camera-height inversion based on a depth difference comprises inverting the camera height from an actually measured depth difference and then deriving the ground depth from it.
In the present invention, in the error analysis of the depth calculation model, in situation 1 one lets:

k = \frac{f^2 - s^2(v_g - v_c)(v_c - v_p)}{f s (v_g - v_p)},

so that the ground depth formula simplifies to:
D = H*k;
that is, the computed depth is directly proportional to the camera height.
In the present invention, the camera height H is calculated according to the following formula and substituted into the depth formulas of situations 1 to 3, further improving the image depth calculation:

H = \frac{\Delta D \cdot f s}{\dfrac{f^2 - s^2(v_{g1} - v_c)(v_c - v_p)}{v_{g1} - v_p} - \dfrac{f^2 - s^2(v_{g2} - v_c)(v_c - v_p)}{v_{g2} - v_p}},

where ΔD is the known depth difference between two ground points G1 and G2, and v_{g1} and v_{g2} are the ordinates, in the image coordinate system, of their image points g1 and g2.
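A corresponding sketch of the camera-height formula, again with illustrative argument names; delta_D is the known depth difference D1 − D2 of the two ground points:

```python
def camera_height(delta_D, f, s, v_g1, v_g2, v_c, v_p):
    """Camera height H recovered from a known depth difference delta_D = D1 - D2."""
    term1 = (f * f - s * s * (v_g1 - v_c) * (v_c - v_p)) / (v_g1 - v_p)
    term2 = (f * f - s * s * (v_g2 - v_c) * (v_c - v_p)) / (v_g2 - v_p)
    return delta_D * f * s / (term1 - term2)
```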
The present invention applies to indoor and outdoor close-range images that contain the ground. For the ground region of the image, an absolute-depth calculation model for each ground point in the scene is derived from the imaging geometry of the composition. In particular, a method for determining the camera height H, a key parameter of the depth calculation model, is proposed; this improves the depth calculation model and raises the accuracy and reliability of the calculation.
Specifically, the advantages of the method are: (1) it optimizes the existing geometry-based depth calculation model for the ground region of a single image, taking into account whether the ground point is near or far and supplementing the case of ground with height variations; (2) it analyzes the error factors of the depth calculation model and finds that the camera-height error is one of the main factors affecting depth calculation accuracy; (3) it proposes a camera-height parameter recovery method based on a depth difference, which markedly improves the reliability of the depth estimation results.
Brief description of the drawings
The present invention is further illustrated below with reference to the accompanying drawings and the detailed description; the above and/or other advantages of the invention will become clearer.
Fig. 1 is a schematic diagram of the depth calculation when the ground point is nearer than the central point.
Fig. 2 is a schematic diagram of the depth calculation when the ground point is farther than the central point.
Fig. 3 is the experimental figure for the depth calculation error analysis.
Fig. 4 is a schematic diagram of camera-height recovery based on a depth difference.
Specific embodiment
The present invention is a method for recovering image depth based on an imaging model. It mainly comprises three parts: 1) the depth calculation model for the ground region of a single image; 2) the depth calculation error analysis; 3) the camera-height inversion based on a depth difference. The first part realizes the depth calculation; the second part analyzes the influence of the camera-height error on the results and gives specific numerical values with an example; the third part, camera-height recovery based on a depth difference, is the method for solving the parameter H in the formula, replacing the previous practice of simply assuming H, thereby improving the depth calculation model and the reliability of the calculation.
1) ground region depth calculation
The depth calculation model is established by geometric analysis of the camera imaging model, and the depth information of the ground region in the image is calculated. The calculation of the depth of a ground point is divided into three situations:
(1) Depth calculation when the ground point is nearer than the central point.
For most natural scene images, the depth (object distance) of the real scenery is much larger than the image distance, so according to the pinhole imaging principle of the camera, the scenery can be approximately considered to be imaged in the focal plane of the camera. Based on this assumption, the correspondence between the depth of a ground point in the real world and the image coordinates of its image point in the photo can be derived, and the depth of the ground point can be calculated from it. First, without considering camera rotation, the correspondence between scenery depth and imaging is derived, as shown in Fig. 1.
In Fig. 1, the optical axis is the axis perpendicular to the image plane that passes through the center of the lens; the intersection of the optical axis with the lens is called the optical center. According to the pinhole imaging model, all light rays pass through the optical center o of the lens. The angle between the principal optical axis and the ground is ∠GCo, where C is the point where the optical axis meets the ground, c is the center of the image plane, g is the image point of ground point G, p is the intersection of the horizon with the image plane, f is the focal length of the camera, γ is the angle between the ground and the line joining the optical center o to ground point G, H is the camera height, β is the camera pitch angle, and D is the depth of ground point G. As in the figure, when the ground point G whose depth is sought is closer to the camera than point C, the following relation holds:
γ = α + β (1)
where γ is the angle between the ground and the line joining the optical center o to ground point G, α is ∠goc, and β is ∠cop.
tan γ = tan(α + β) = tan(∠goc + ∠cop) (2)
Applying the tangent addition formula, shown as formula (3), the above expression can be transformed; then, using the relations between angles and side lengths in a right triangle, shown as formulas (4), (5) and (6), it is finally put into the form of formula (7), where v_c, v_g, v_p are the ordinates of points c, g, p in the image coordinate system. Simplifying this expression yields the depth formula (8).
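The images of formulas (3) to (8) are not reproduced in this text; a reconstruction consistent with the definitions above and with the final depth formula stated in claim 2 is the following (the exact assignment of (4)–(6) is assumed):

\tan(\alpha+\beta)=\dfrac{\tan\alpha+\tan\beta}{1-\tan\alpha\tan\beta}  (3)

\tan\alpha=\dfrac{s(v_g-v_c)}{f}  (4)

\tan\beta=\dfrac{s(v_c-v_p)}{f}  (5)

\tan\gamma=\dfrac{H}{D}  (6)

\dfrac{H}{D}=\dfrac{f s (v_g-v_p)}{f^2-s^2(v_g-v_c)(v_c-v_p)}  (7)

D=\dfrac{H\left(f^2-s^2(v_g-v_c)(v_c-v_p)\right)}{f s (v_g-v_p)}  (8)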
Formula (8) reflects the correspondence between scenery depth and imaging. s is the ratio between the physical size of the camera's photosensitive CCD and the image pixel coordinates; f and s can be obtained from the camera shooting parameters recorded in the image file attributes, and the camera height H can either be estimated or calculated with the inversion method described below. Substituting the obtained parameters and the corresponding image-point coordinates into formula (8) gives the absolute depth of the point.
(2) Depth calculation when the ground point is farther than the central point.
When the ground point G whose depth is to be determined is farther from the camera than the intersection C of the camera optical axis with the ground, as shown in Fig. 2 (G lies to the left of C, farther from the camera), the angular relations change and the depth formula changes accordingly. The derivation is as follows:
γ = β − α (9)
Substituting formulas (4), (5) and (6) into the above expression and rearranging gives the depth formula, which applies to ground points farther than the image center:

D = \frac{H\left(f^2 + s^2(v_g - v_c)(v_c - v_p)\right)}{f s\left((v_c - v_p) - (v_g - v_c)\right)}
(3) Depth calculation when the ground undulates.
The ground in an image does not necessarily lie in a single plane; it is sometimes divided by features such as steps into two regions of different height, i.e. there is an abrupt height change. In Fig. 2, point A lies higher than the ground plane of point G by h_A, and point B lies lower than the ground plane of point G by h_B. γ_A is the angle between the ground and the line joining the optical center o to point A. The angle expression then becomes:

\tan\gamma_A = \frac{H - h_A}{D_A}

Correspondingly, the formula for the depth D_A becomes:

D_A = \frac{(H - h_A)\left(f^2 + s^2(v_g - v_c)(v_c - v_p)\right)}{f s\left((v_c - v_p) - (v_g - v_c)\right)}

For point B, γ_B is the angle between the ground and the line joining the optical center o to point B, and:

\tan\gamma_B = \frac{H + h_B}{D_B}

Correspondingly, the formula for the depth D_B becomes:

D_B = \frac{(H + h_B)\left(f^2 + s^2(v_g - v_c)(v_c - v_p)\right)}{f s\left((v_c - v_p) - (v_g - v_c)\right)}
2) ground depth calculation error analysis
In formula (8), let:

k = \frac{f^2 - s^2(v_g - v_c)(v_c - v_p)}{f s (v_g - v_p)}  (17)
The error analysis is intended to show the influence of the camera height on the calculation results; only a scene with flat ground was selected for the analysis experiment, so situations 2 and 3 are not used.
The ground depth formula can then be reduced to:
D = H*k (18)
The influence of the camera-height error H on the depth calculation error is analyzed with an example. In Fig. 3 (a grayscale photograph is used for presenting the invention), the floor tiles are 60 cm × 60 cm; the tile joint lines are selected for depth calculation so that the depth error can be analyzed against this known size. The measured camera height at shooting time is 69 cm. The coefficient k is first computed with formula (17) and the depth is then computed with formula (18); the results are listed in Table 1.
Table 1. Influence of the camera-height error
As can be seen from the table, the depth coefficient k is in fact the depth calculation error caused by a camera-height error of one unit length. Combined with formula (18), when the camera height deviates from the true value by 1 cm, the computed depths of the four points A, B, C and D are affected by errors of 1.69, 2.57, 3.47 and 4.39 cm respectively. Even a small deviation of the camera height therefore has a large influence on the computed depths. The camera height is usually unknown; roughly estimating it from empirical values, as is done in the literature, or assuming it to be about the same as human height, carries great uncertainty and may severely degrade the depth estimation results.
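As a quick numeric check of this proportionality (D = H*k, so a height error dH produces a depth error k·dH), a minimal sketch using the coefficients reported above:

```python
# Depth error caused by a camera-height error dH, from D = H * k  =>  dD = k * dH.
k_values = {"A": 1.69, "B": 2.57, "C": 3.47, "D": 4.39}  # coefficients k from Table 1
dH = 1.0  # camera-height error in cm
for point, k in k_values.items():
    print(f"point {point}: depth error = {k * dH:.2f} cm")
```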
3) Camera-height recovery method based on a depth difference
According to the analysis in the previous section, the camera-height error is one of the main factors affecting depth calculation accuracy, so determining the shooting height of the camera is essential. This section proposes a method for recovering the camera height from a target depth difference. As shown in Fig. 4, G1 and G2 are two points on the ground whose absolute depths are D_1 and D_2 respectively; their depth difference is \Delta D = D_1 - D_2. From formula (8) the following can be derived:

\Delta D = D_1 - D_2 = H\left(\frac{f^2 - s^2(v_{g1} - v_c)(v_c - v_p)}{f s (v_{g1} - v_p)} - \frac{f^2 - s^2(v_{g2} - v_c)(v_c - v_p)}{f s (v_{g2} - v_p)}\right)
If the depth difference in the above expression is known, the formula for the camera height can be derived, as shown in formula (22):

H = \frac{\Delta D \cdot f s}{\dfrac{f^2 - s^2(v_{g1} - v_c)(v_c - v_p)}{v_{g1} - v_p} - \dfrac{f^2 - s^2(v_{g2} - v_c)(v_c - v_p)}{v_{g2} - v_p}}  (22)

The depth-difference cue can be satisfied in many scenes, for example by objects of known dimensions or by measurements obtainable from a single picture: the modular size of floor tiles, the standard size of road signs, the standard size of manhole covers, and so on can all serve as depth-difference clues, thereby allowing the camera height at shooting time to be inverted.
Substituting the camera-height parameter H computed with the above formula into the depth formulas markedly improves the reliability of the depth calculation and the accuracy of the results.
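A sketch of the complete workflow, reusing the illustrative ground_depth and camera_height functions from above; the numeric camera parameters and pixel coordinates below are placeholders, not measurements from the patent, and the tile size serves as the depth-difference cue:

```python
# Hypothetical inversion workflow with a floor-tile cue (all numbers are placeholders).
tile_size = 60.0            # cm, modular tile size = known depth difference D1 - D2
f, s = 4.0, 0.0015          # focal length (mm) and sensor size per pixel (mm/pixel)
v_c, v_p = 1500.0, 900.0    # image ordinates (pixels) of the image center and horizon
v_g1, v_g2 = 2050.0, 2200.0 # ordinates of two adjacent tile joint lines; point 1 is
                            # taken as the farther joint, one tile deeper than point 2

H = camera_height(tile_size, f, s, v_g1, v_g2, v_c, v_p)
print(f"recovered camera height: {H:.1f} cm")

# Use the recovered H to compute absolute depths of further ground points (situation 1).
for v_g in (2200.0, 2050.0, 1950.0):
    D = ground_depth(H, f, s, v_g, v_c, v_p, nearer_than_center=True)
    print(f"v_g = {v_g:.0f} px -> depth = {D:.1f} cm")
```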
Embodiment
Specific experimental data are given below.
As shown in Fig. 3, the measured camera height at shooting time is 69 cm, the floor tiles are 60 cm × 60 cm, and the absolute depth of point A is 120 cm.
(1) Depth calculated with the correct parameters
The depths of five points are calculated with the correct parameters. The depth differences refer to the differences over the segments ED, DC, CB and BA, i.e. the differences between the computed depths of adjacent tile edge lines; comparing each with the nominal tile size gives the relative error of the segment. The results are listed in Table 2.
Table 2. Computed depths and errors (unit: cm)
As can be seen from the table, the computed depth values reach centimeter-level accuracy, which demonstrates the effectiveness of the algorithm and yields good results.
(2) Influence and correction of the camera height
The camera height is first assumed to be 60 cm and the depth of each point is recalculated; the results are listed in Table 3. As can be seen from the table, because of the 9 cm camera-height error, the depth calculation errors increase significantly and the quality of the results is severely affected.
Table 3. Influence of the camera height on the computed depths (unit: cm)
As described above, the camera height can be inverted from the depth difference between two points. It is worth mentioning that the method only requires one known depth-difference segment; of course, if several known depth differences participate in the calculation jointly, the result will be more reliable. Here three depth-difference segments are used to compute the camera height separately and the mean is taken as the final result; the computed camera height is 66.7 cm.
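A minimal sketch of this averaging step, with hypothetical per-segment results chosen only to illustrate the mechanics:

```python
from statistics import mean

# Camera-height estimates from several independent depth-difference segments
# (hypothetical per-segment values, one estimate per tile-joint segment, in cm).
height_estimates = [66.2, 66.9, 67.0]
H = mean(height_estimates)
print(f"camera height used for the correction: {H:.1f} cm")
```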
The absolute depths of A–E are recalculated with the camera height of 66.7 cm and are listed in Table 4; comparison with Table 3 shows that the depth estimation results are significantly improved.
Table 4. Corrected absolute depth estimates (unit: cm)
The invention provides an image depth improvement method that recovers the camera height from a depth difference. There are many ways to implement this technical solution; the above is only a preferred embodiment of the invention. It should be pointed out that, for those of ordinary skill in the art, several improvements and modifications can be made without departing from the principle of the invention, and these improvements and modifications should also be regarded as falling within the protection scope of the invention. Components not specified in this embodiment can be realized with the prior art.

Claims (6)

1. An image depth improvement method for camera height recovery based on a depth difference, characterized by comprising three parts:
(1) building a depth calculation model for the ground region of a single image;
(2) error analysis of the depth calculation model;
(3) camera-height inversion based on a depth difference;
wherein building the ground-region depth calculation model for a single image includes: establishing the depth calculation model by geometric analysis of the camera imaging model, calculating the depth information of the ground region in the image, and dividing the calculation of the depth of a ground point into three situations:
Situation 1: depth calculation when the ground point is nearer than the central point;
Situation 2: depth calculation when the ground point is farther than the central point;
Situation 3: depth calculation when the ground undulates.
2. The image depth improvement method for camera height recovery based on a depth difference according to claim 1, characterized in that the formula for the depth D in situation 1 is:
D = \frac{H\left(f^2 - s^2(v_g - v_c)(v_c - v_p)\right)}{f s (v_g - v_p)},
wherein H is the measured camera height, f is the focal length of the camera, s is the ratio between the physical size of the camera's photosensitive CCD and the image pixel coordinates, v_c, v_g, v_p are respectively the ordinates of points c, g, p in the image coordinate system, c is the center of the image plane, g is the image point of ground point G, and p is the intersection of the horizon with the image plane.
3. The image depth improvement method for camera height recovery based on a depth difference according to claim 2, characterized in that the formula for the depth D in situation 2 is:
D = \frac{H\left(f^2 + s^2(v_g - v_c)(v_c - v_p)\right)}{f s\left((v_c - v_p) - (v_g - v_c)\right)}.
4. The image depth improvement method for camera height recovery based on a depth difference according to claim 3, characterized in that, in situation 3, point A lies higher than the ground plane of ground point G by h_A and point B lies lower than the ground plane of ground point G by h_B, and the depth formulas are:
D_A = \frac{(H - h_A)\left(f^2 + s^2(v_g - v_c)(v_c - v_p)\right)}{f s\left((v_c - v_p) - (v_g - v_c)\right)},
D_B = \frac{(H + h_B)\left(f^2 + s^2(v_g - v_c)(v_c - v_p)\right)}{f s\left((v_c - v_p) - (v_g - v_c)\right)},
wherein D_A denotes the depth of point A and D_B denotes the depth of point B.
5. The image depth improvement method for camera height recovery based on a depth difference according to claim 4, characterized in that, in the error analysis of the depth calculation model, in situation 1 one lets:
k = \frac{f^2 - s^2(v_g - v_c)(v_c - v_p)}{f s (v_g - v_p)},
so that the ground depth formula simplifies to:
D = H*k;
the computed depth is strongly correlated with the camera height.
6. The image depth improvement method for camera height recovery based on a depth difference according to claim 5, characterized in that the camera height H is calculated according to the following formula and substituted into the depth formulas of situations 1 to 3, further improving the image depth calculation:
H = \frac{\Delta D \cdot f s}{\dfrac{f^2 - s^2(v_{g1} - v_c)(v_c - v_p)}{v_{g1} - v_p} - \dfrac{f^2 - s^2(v_{g2} - v_c)(v_c - v_p)}{v_{g2} - v_p}},
wherein ΔD is the known depth difference between two ground points G1 and G2, and v_{g1} and v_{g2} are the ordinates, in the image coordinate system, of their corresponding image points g1 and g2.
CN201510070896.5A 2015-02-10 2015-02-10 Image depth improvement method for camera height recovery based on depth difference Active CN104599283B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510070896.5A CN104599283B (en) 2015-02-10 2015-02-10 Image depth improvement method for camera height recovery based on depth difference

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510070896.5A CN104599283B (en) 2015-02-10 2015-02-10 Image depth improvement method for camera height recovery based on depth difference

Publications (2)

Publication Number Publication Date
CN104599283A CN104599283A (en) 2015-05-06
CN104599283B true CN104599283B (en) 2017-06-09

Family

ID=53125033

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510070896.5A Active CN104599283B (en) 2015-02-10 2015-02-10 Image depth improvement method for camera height recovery based on depth difference

Country Status (1)

Country Link
CN (1) CN104599283B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112991308B (en) * 2021-03-25 2023-11-24 北京百度网讯科技有限公司 Image quality determining method and device, electronic equipment and medium
CN114046728B (en) * 2021-08-30 2022-11-04 中国水产科学研究院东海水产研究所 Method for measuring target object in large area based on hyperfocal distance imaging
CN114486732B (en) * 2021-12-30 2024-04-09 武汉光谷卓越科技股份有限公司 Ceramic tile defect online detection method based on line scanning three-dimension

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102314683A (en) * 2011-07-15 2012-01-11 清华大学 Computational imaging method and imaging system based on nonplanar image sensor
CN104134234A (en) * 2014-07-16 2014-11-05 中国科学技术大学 Full-automatic three-dimensional scene construction method based on single image

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9338409B2 (en) * 2012-01-17 2016-05-10 Avigilon Fortress Corporation System and method for home health care monitoring

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102314683A (en) * 2011-07-15 2012-01-11 清华大学 Computational imaging method and imaging system based on nonplanar image sensor
CN104134234A (en) * 2014-07-16 2014-11-05 中国科学技术大学 Full-automatic three-dimensional scene construction method based on single image

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Depth and Geometry from a Single 2D Image Using Triangulation; Yasir Salih et al.; 2012 IEEE International Conference on Multimedia and Expo Workshops; 2012-08-16; Vol. 131, No. 5; 511-515 *
Depth estimation and occlusion boundary recovery from a single outdoor image; Shihui Zhang et al.; Optical Engineering; 2012-08-31; Vol. 51, No. 8; abstract, section 3.1, figure 3 *
Depth estimation of a single static street-view image based on content understanding; Li Le et al.; Robot (机器人); 2011-01-31; Vol. 33, No. 1; 174-180 *

Also Published As

Publication number Publication date
CN104599283A (en) 2015-05-06

Similar Documents

Publication Publication Date Title
US11721067B2 (en) System and method for virtual modeling of indoor scenes from imagery
CN109872397B (en) Three-dimensional reconstruction method of airplane parts based on multi-view stereo vision
CN106504284B (en) A kind of depth picture capturing method combined based on Stereo matching with structure light
CN108010081B (en) RGB-D visual odometer method based on Census transformation and local graph optimization
ES2695157T3 (en) Rendering method
CN102982334B (en) The sparse disparities acquisition methods of based target edge feature and grey similarity
CN110766024B (en) Deep learning-based visual odometer feature point extraction method and visual odometer
CN107862735B (en) RGBD three-dimensional scene reconstruction method based on structural information
CN105374039B (en) Monocular image depth information method of estimation based on contour acuity
CN102496183B (en) Multi-view stereo reconstruction method based on Internet photo gallery
JP6985897B2 (en) Information processing equipment and its control method, program
CN104182968B (en) The fuzzy moving-target dividing method of many array optical detection systems of wide baseline
CN101930628A (en) Monocular-camera and multiplane mirror catadioptric device-based motion capturing method
CN105938619A (en) Visual odometer realization method based on fusion of RGB and depth information
Basha et al. Structure and motion from scene registration
CN108399631B (en) Scale invariance oblique image multi-view dense matching method
WO2018133119A1 (en) Method and system for three-dimensional reconstruction of complete indoor scene based on depth camera
CN104599283B (en) A kind of picture depth improved method for recovering camera heights based on depth difference
CN108629828B (en) Scene rendering transition method in the moving process of three-dimensional large scene
Zhang et al. Lidar-guided stereo matching with a spatial consistency constraint
Rothermel et al. Fast and robust generation of semantic urban terrain models from UAV video streams
Kerschner Twin snakes for determining seam lines in orthoimage mosaicking
CN107808160B (en) Three-dimensional building extraction method and device
CN115131504A (en) Multi-person three-dimensional reconstruction method under wide-field-of-view large scene
CN114119891A (en) Three-dimensional reconstruction method and reconstruction system for robot monocular semi-dense map

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20150506

Assignee: Nanjing Yihaopu Software Technology Co., Ltd.

Assignor: Nanjing Forestry University

Contract record no.: 2018320000334

Denomination of invention: Image depth improvement method for camera height recovery based on depth difference

Granted publication date: 20170609

License type: Common License

Record date: 20181119

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20150506

Assignee: Quanquan-technology Co., Ltd.

Assignor: Nanjing Forestry University

Contract record no.: 2018320000354

Denomination of invention: Image depth improvement method for camera height recovery based on depth difference

Granted publication date: 20170609

License type: Common License

Record date: 20181127

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20150506

Assignee: Nanjing Grand Canyon Information Technology Co., Ltd.

Assignor: Nanjing Forestry University

Contract record no.: X2019320000016

Denomination of invention: Image depth improvement method for camera height recovery based on depth difference

Granted publication date: 20170609

License type: Common License

Record date: 20190809