CN103198473B - Depth map generation method and device - Google Patents

Depth map generation method and device

Info

Publication number
CN103198473B
CN103198473B (application CN201310069640.3A)
Authority
CN
China
Prior art keywords
depth information
image
image object
middle portion
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310069640.3A
Other languages
Chinese (zh)
Other versions
CN103198473A (en)
Inventor
马腾
李保利
李成军
屈孝志
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Tencent Computer Systems Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201310069640.3A priority Critical patent/CN103198473B/en
Publication of CN103198473A publication Critical patent/CN103198473A/en
Application granted granted Critical
Publication of CN103198473B publication Critical patent/CN103198473B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An embodiment of the invention discloses a depth map generation method and device. The method comprises: performing a segmentation operation on a currently input image to obtain an object set of the image, the object set of the image comprising any one or more of a sky image object, a ground image object and a middle-portion image object; selecting, from preset depth information algorithms, a depth information algorithm for each image object in the object set of the image, and calculating the depth information of the corresponding image object; and combining the depth information of the image objects to obtain the depth map of the input image. With the present invention, a reasonably accurate depth map of the input image can be obtained quickly; in particular, in scenes where the accuracy requirement on the depth information is not high and only the near-far relationships are needed, the requirement of quickly obtaining a depth map can be met, saving computational resources and reducing cost.

Description

Depth map generation method and device
Technical field
The present invention relates to image processing technology, and in particular to a depth map generation method and device.
Background
In image processing for street view, an important problem is the acquisition of image depth information. For each street-view scene, the distance of each point in an image relative to the camera can be represented with a depth map (DepthMap), in which each pixel value represents the distance between a point in the image and the camera.
At present, techniques for obtaining the depth map of a currently captured scene image fall into two broad classes: passive ranging sensing and active ranging sensing. In passive ranging sensing, the vision system receives light energy emitted or reflected from the scene, forms a gray-level image of the light-energy distribution, and then derives the depth information of the scene from it. In active ranging sensing, the vision system actively emits energy toward the scene and computes depth information from the energy reflected by the scene; the most common active ranging systems are radar ranging systems and triangulation ranging systems.
Both classes of techniques can perform a depth calculation for every point in the image, and the points are then combined to obtain an accurate depth map of the image. However, the prior art applies a single unified calculation to all objects in the image, which is relatively complex and cannot complete image depth processing quickly.
Summary of the invention
The technical problem to be solved by the embodiments of the present invention is to provide a depth map generation method and device capable of completing image depth calculation relatively quickly and obtaining the depth map of an image.
To solve the above technical problem, an embodiment of the present invention provides a depth map generation method, comprising:
performing a segmentation operation on a currently input image to obtain an object set of the image, the object set of the image comprising any one or more of a sky image object, a ground image object and a middle-portion image object;
selecting, from preset depth information algorithms, a depth information algorithm for each image object in the object set of the image, and calculating the depth information of the corresponding image object;
combining the depth information of the image objects to obtain the depth map of the input image.
Wherein, if the object set of the image comprises a ground image object, said selecting a depth information algorithm for each image object in the object set of the image from preset depth information algorithms and calculating the depth information of the corresponding image object comprises:
selecting, from the preset depth information algorithms, a depth information algorithm based on interpolation for the ground image object in the object set of the image;
calculating the depth information of at least three points in the ground image object;
obtaining, according to the depth information of the at least three calculated points, the depth information of all points in the ground image object by interpolation.
Wherein, if the object set of the image comprises a middle-portion image object, said selecting a depth information algorithm for each image object in the object set of the image from preset depth information algorithms and calculating the depth information of the corresponding image object comprises:
selecting, from the preset depth information algorithms, a depth information algorithm based on binocular stereo geometry for the middle-portion image object in the object set of the image;
taking a unit plane in the middle-portion image object as the basic unit, calculating the depth information of each unit plane in the middle-portion image object using the depth information algorithm based on binocular stereo geometry, the unit plane being a plane in the middle-portion image object whose size is a preset unit area;
combining the depth information of the unit planes to obtain the depth information of the middle-portion image object.
Wherein, said taking a unit plane in the middle-portion image object as the basic unit and calculating the depth information of each unit plane in the middle-portion image object using the depth information algorithm based on binocular stereo geometry comprises:
performing a contour extraction operation on the middle-portion image object;
modeling the middle-portion image object according to the result of the contour extraction operation, to obtain a perspective wireframe model of the middle-portion image object;
dividing each face in the perspective wireframe model of the middle-portion image object according to the preset unit area, to obtain the unit planes corresponding to each face in the perspective wireframe model;
calculating the unit planes in the perspective wireframe model using the binocular stereo geometry depth calculation method, to obtain the depth information of each unit plane.
Wherein, if the object set of the image comprises a sky image object, calculating the depth information of the sky image object in the object set of the image comprises:
marking the depth information of the sky image object in the object set of the image as infinitely far.
Correspondingly, an embodiment of the present invention further provides a depth map generation device, comprising:
a segmentation module, configured to perform a segmentation operation on the input image to obtain an object set of the image, the object set of the image comprising any one or more of a sky image object, a ground image object and a middle-portion image object;
a computing module, configured to select, from preset depth information algorithms, a depth information algorithm for each image object in the object set of the image and calculate the depth information of the corresponding image object;
a combining module, configured to combine the depth information of the image objects to obtain the depth map of the input image.
Wherein, the computing module comprises:
a first selection unit, configured to select, from the preset depth information algorithms, a depth information algorithm based on interpolation for the ground image object in the object set of the image;
a first computing unit, configured to calculate the depth information of at least three points in the ground image object;
a first processing unit, configured to obtain, according to the depth information of the at least three calculated points, the depth information of all points in the ground image object by interpolation.
Wherein, the computing module comprises:
a second selection unit, configured to select, from the preset depth information algorithms, a depth information algorithm based on binocular stereo geometry for the middle-portion image object in the object set of the image;
a second computing unit, configured to take a unit plane in the middle-portion image object as the basic unit and calculate the depth information of each unit plane in the middle-portion image object using the depth information algorithm based on binocular stereo geometry, the unit plane being a plane in the middle-portion image object whose size is a preset unit area;
a second processing unit, configured to combine the depth information of the unit planes to obtain the depth information of the middle-portion image object.
Wherein, the second computing unit comprises:
an extraction subunit, configured to perform a contour extraction operation on the middle-portion image object;
a modeling subunit, configured to model the middle-portion image object according to the result of the contour extraction operation, to obtain a perspective wireframe model of the middle-portion image object;
a plane division subunit, configured to divide each face in the perspective wireframe model of the middle-portion image object according to the preset unit area, to obtain the unit planes corresponding to each face in the perspective wireframe model;
a calculation subunit, configured to calculate the unit planes in the perspective wireframe model using the binocular stereo geometry depth calculation method, to obtain the depth information of each unit plane.
Wherein, the computing module comprises:
a marking unit, configured to mark the depth information of the sky image object in the object set of the image as infinitely far.
Implementing the embodiments of the present invention has the following beneficial effects:
The embodiments of the present invention obtain different image objects by segmenting the input image and then select a depth information algorithm for each image object to calculate its depth information. The inventor has found that a reasonably accurate depth map of the input image can thus be obtained quickly; in particular, in scenes where the accuracy requirement on the depth information is not high and only the near-far relationships are needed, the requirement of quickly obtaining a depth map can be met, saving computational resources and reducing cost.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Apparently, the accompanying drawings in the following description show merely some embodiments of the present invention, and a person of ordinary skill in the art may still derive other drawings from these accompanying drawings without creative efforts.
Fig. 1 is a schematic flowchart of a depth map generation method according to an embodiment of the present invention;
Fig. 2 is a schematic flowchart of a method for calculating the depth information of a corresponding image object according to an embodiment of the present invention;
Fig. 3 is a schematic flowchart of another method for calculating the depth information of a corresponding image object according to an embodiment of the present invention;
Fig. 4 is a schematic flowchart of another depth map generation method according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a depth map generation device according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of one specific structure of the computing module in Fig. 5;
Fig. 7 is a schematic diagram of another specific structure of the computing module in Fig. 5.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are merely some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative efforts shall fall within the protection scope of the present invention.
Referring to Fig. 1, which is a schematic flowchart of a depth map generation method according to an embodiment of the present invention, the method can be applied in various computer vision systems. Specifically, the method of this embodiment comprises:
S101: performing a segmentation operation on the currently input image to obtain an object set of the image, the object set of the image comprising any one or more of a sky image object, a ground image object and a middle-portion image object.
The currently input image may be a binocular image captured by cameras in scenes such as street view. An image generally contains the sky, the ground, and a middle portion between the sky and the ground; the middle portion may contain images of buildings, street trees, vehicles, sculptures and the like.
In S101, segmenting the sky portion of the image to obtain the sky image object can be implemented as follows:
performing the segmentation operation according to the inherent color information of the sky, using the mean-shift (meanshift) algorithm, to obtain the sky image object.
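For illustration only, the sky segmentation just described might be sketched as follows (a minimal sketch assuming OpenCV and NumPy; the HSV color thresholds and the top-border heuristic are assumptions of the example, not values given in this description):

import cv2
import numpy as np

def segment_sky(bgr_image):
    # Mean-shift filtering flattens color regions so the sky becomes nearly uniform.
    smoothed = cv2.pyrMeanShiftFiltering(bgr_image, sp=15, sr=30)
    hsv = cv2.cvtColor(smoothed, cv2.COLOR_BGR2HSV)
    # Hypothetical "sky" color range: bluish hue, medium-to-high brightness.
    lower = np.array([90, 0, 120], dtype=np.uint8)
    upper = np.array([135, 255, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)
    # Keep only the connected regions touching the top border, since the sky
    # lies at the top of a street-view image.
    _, labels = cv2.connectedComponents(mask)
    top_labels = set(labels[0, :]) - {0}
    sky_mask = np.isin(labels, list(top_labels)).astype(np.uint8) * 255
    return sky_mask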
Segmenting the ground portion of the image to obtain the ground image object can be implemented as follows:
First, dense matching is performed on the image to generate a disparity map. Second, ground points are sampled, which may use a sampling method based on projection-point density. Third, ground fitting is performed, which may use plane fitting based on RANSAC (RANdom SAmple Consensus). Fourth, an initial segmentation is performed on the ground-fitting result. Fifth, an opening operation is applied to the ground points and the boundary is computed. Sixth, ground depth interpolation is performed. The first step computes the disparity map from the related information in image space; the second to fourth steps are completed in disparity space (a three-dimensional space) and are then mapped back into image space for the subsequent operations. That is, the entire ground segmentation in fact goes from image space to disparity space and then back to image space.
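The RANSAC plane fitting mentioned in the third step could look roughly like the sketch below; the iteration count and distance threshold are illustrative assumptions, and the input is assumed to be the sampled ground points already lifted into a three-dimensional (disparity) space:

import numpy as np

def ransac_ground_plane(points, iters=200, dist_thresh=0.05, seed=0):
    # points: (N, 3) array of candidate ground points.
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = None
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:            # degenerate (collinear) sample, try again
            continue
        normal = normal / norm
        d = -normal.dot(sample[0])
        dist = np.abs(points @ normal + d)   # point-to-plane distances
        inliers = dist < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane, best_inliers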
After the sky and ground of the image have been segmented, the objects in the middle portion, such as buildings, can directly serve as the middle-portion image object for subsequent processing.
S102: selecting, from preset depth information algorithms, a depth information algorithm for each image object in the object set of the image, and calculating the depth information of the corresponding image object.
The preset depth information algorithms may comprise several algorithms, and different depth information algorithms can be selected according to the type of the image object obtained by segmentation. Specifically, for the sky image object, its depth information is directly marked as infinitely far, so the sky image object is represented as infinitely far in the depth map; for the ground image object, depth values are computed by interpolation; and for the middle-portion image object, a more detailed calculation is performed with the depth information algorithm based on binocular stereo geometry.
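A minimal sketch of this per-object selection follows; the callables standing in for the interpolation and binocular-stereo routines are hypothetical placeholders rather than an API defined by this description:

import numpy as np

def depth_for_object(obj_type, obj_mask, algorithms):
    # algorithms: dict mapping "ground"/"middle" to a callable(mask) -> depth map,
    # i.e. the interpolation routine and the binocular-stereo routine respectively.
    if obj_type == "sky":
        # The sky image object is simply marked as infinitely far.
        return np.full(obj_mask.shape, np.inf, dtype=np.float32)
    if obj_type in algorithms:
        return algorithms[obj_type](obj_mask)
    raise ValueError(f"no preset depth information algorithm for: {obj_type}")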
S103: combining the depth information of the image objects to obtain the depth map of the input image.
Since the depth map of the image represents the distance from each point on the image to the same camera, recording the obtained depth information of each point into the depth map yields the depth map of the input image.
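The combination in S103 can be pictured as writing each object's depth values into a single map through its segmentation mask, as in the sketch below (mask-based composition is an assumption of the example; this description only requires that the per-object depth information be combined):

import numpy as np

def combine_depth(image_shape, object_depths):
    # object_depths: list of (mask, depth_map) pairs, one per image object.
    depth = np.zeros(image_shape, dtype=np.float32)
    for mask, obj_depth in object_depths:
        depth[mask > 0] = obj_depth[mask > 0]
    return depth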
The embodiment of the present invention obtains different image objects by segmenting the input image and then selects a depth information algorithm for each image object to calculate its depth information. The inventor has found that a reasonably accurate depth map of the input image can thus be obtained quickly; in particular, in scenes where the accuracy requirement on the depth information is not high and only the near-far relationships are needed, the requirement of quickly obtaining a depth map can be met, saving computational resources and reducing cost.
Referring to Fig. 2, which is a schematic flowchart of a method for calculating the depth information of a corresponding image object according to an embodiment of the present invention, the method corresponds to S102 in the above embodiment. Specifically, the method of this embodiment calculates the depth information of the ground image object when the object set of the image comprises a ground image object, and it comprises:
S201: selecting, from the preset depth information algorithms, a depth information algorithm based on interpolation for the ground image object in the object set of the image;
S202: calculating the depth information of at least three points in the ground image object;
S203: obtaining, according to the depth information of the at least three calculated points, the depth information of all points in the ground image object by interpolation.
Specifically, the depth information algorithm based on interpolation selects a suitable specific function, calculates the depth values of some points in the region of the ground image object, and then uses the values of the selected function at the remaining points of the region as approximate depth values for those points. This embodiment may specifically use the currently common block matching (BlockMatch) to calculate the depth information of some points and then obtain the depth information of the other points by interpolation. Once the depth information of every point is obtained, the calculation of the depth information of the ground image object is complete, and the depth map of the ground image object portion is obtained.
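As one possible realization of S201-S203, the sketch below fits a plane z = a*x + b*y + c to the depths measured at a few ground pixels and evaluates it over the whole ground mask; the least-squares plane is an assumed choice of the "suitable specific function", not mandated here:

import numpy as np

def interpolate_ground_depth(ground_mask, sample_xy, sample_depth):
    # sample_xy: (N, 2) pixel coordinates with N >= 3; sample_depth: (N,) measured depths.
    A = np.column_stack([sample_xy[:, 0], sample_xy[:, 1], np.ones(len(sample_xy))])
    (a, b, c), *_ = np.linalg.lstsq(A, sample_depth, rcond=None)
    ys, xs = np.nonzero(ground_mask)
    depth = np.zeros(ground_mask.shape, dtype=np.float32)
    depth[ys, xs] = a * xs + b * ys + c   # evaluate the fitted plane at every ground pixel
    return depth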
This embodiment calculates the depth information of the ground portion by interpolation: after only a few points have been calculated, the depth values of the other points can be obtained, which meets the requirement of quickly computing depth information and more effectively excludes the influence of objects such as trees and vehicles on the depth information of the ground portion.
Referring to Fig. 3, which is a schematic flowchart of another method for calculating the depth information of a corresponding image object according to an embodiment of the present invention, the method corresponds to S102 in the above embodiment. Specifically, the method of this embodiment calculates the depth information of the middle-portion image object when the object set of the image comprises a middle-portion image object; the middle-portion image object may in practice comprise image objects such as buildings, people and vehicles. The method of this embodiment comprises:
S301: selecting, from the preset depth information algorithms, a depth information algorithm based on binocular stereo geometry for the middle-portion image object in the object set of the image;
S302: taking a unit plane in the middle-portion image object as the basic unit, calculating the depth information of each unit plane in the middle-portion image object using the depth information algorithm based on binocular stereo geometry, the unit plane being a plane in the middle-portion image object whose size is a preset unit area;
S303: combining the depth information of the unit planes to obtain the depth information of the middle-portion image object.
The depth information algorithm based on binocular stereo geometry obtains the depth information of a target point from the distances between the same target point on the image object and two identical cameras. In the embodiment of the present invention, the unit plane in the middle-portion image object is taken as the basic unit: the target point is replaced by the point set formed by all points in the unit plane, so that the depth information of that unit plane is calculated.
For the input images, it can be assumed that the imaging planes lie in the same plane and that the coordinate axes of the two cameras are parallel to each other with coinciding x-axes; the distance between the cameras along the x-axis is called the baseline, and the positional difference of the target point between the images obtained by the two cameras is called the disparity. Then, from the similarity of triangles in the stereo geometry, the baseline, the disparity and the focal length of the two cameras, the depth information of the target point and of the point set on the image can be obtained, and thus the depth information of the unit plane.
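The triangle similarity described above reduces to the standard relation Z = f*B/d, where f is the focal length, B the baseline and d the disparity; the helper below simply applies it, with made-up example numbers:

def stereo_depth(focal_px, baseline_m, disparity_px):
    if disparity_px <= 0:
        return float("inf")   # zero disparity corresponds to a point infinitely far away
    return focal_px * baseline_m / disparity_px

# Example: f = 1000 px, B = 0.5 m, d = 25 px  ->  Z = 20 m
print(stereo_depth(1000.0, 0.5, 25.0))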
Further, S302 may specifically comprise:
performing a contour extraction operation on the middle-portion image object;
modeling the middle-portion image object according to the result of the contour extraction operation, to obtain a perspective wireframe model of the middle-portion image object;
dividing each face in the perspective wireframe model of the middle-portion image object according to the preset unit area, to obtain the unit planes corresponding to each face in the perspective wireframe model;
calculating the unit planes in the perspective wireframe model using the binocular stereo geometry depth calculation method, to obtain the depth information of each unit plane.
That is, the middle-portion image object is first modeled according to its contour, for example using cylinders to represent trees and people and using cuboids or cubes to represent buildings, so that each object in the middle portion can be described approximately as a regular geometric wireframe model; the depth information of each point set (unit plane) in the wireframe model is then calculated with the point set as the basic unit, which further improves the speed of depth calculation and saves computational cost.
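One way to picture the per-unit-plane calculation is the sketch below: the pixel region of each wireframe face is cut into cells of a preset area, a representative disparity is taken per cell, and Z = f*B/d is applied; the cell size and the median statistic are assumptions of the example:

import numpy as np

def unit_plane_depths(face_mask, disparity, focal_px, baseline_m, cell=16):
    # face_mask: binary mask of one face of the perspective wireframe model;
    # disparity: disparity map of the same size from the binocular image pair.
    depth = np.zeros(face_mask.shape, dtype=np.float32)
    h, w = face_mask.shape
    for y0 in range(0, h, cell):
        for x0 in range(0, w, cell):
            cell_mask = face_mask[y0:y0 + cell, x0:x0 + cell] > 0
            if not cell_mask.any():
                continue
            d = np.median(disparity[y0:y0 + cell, x0:x0 + cell][cell_mask])
            z = focal_px * baseline_m / d if d > 0 else np.inf
            depth[y0:y0 + cell, x0:x0 + cell][cell_mask] = z   # one depth per unit plane
    return depth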
In the embodiment of the present invention, through the depth information algorithm based on binocular stereo geometry, the depth information of objects such as buildings, people and vehicles is calculated with a unit plane formed by a point set as the basic unit, so the calculation need not be carried out point by point, which significantly increases the computation speed of the depth information; the modeling further improves the speed of depth calculation and saves computational cost.
Further, referring to Fig. 4, which is a schematic flowchart of another depth map generation method according to an embodiment of the present invention, the method can be applied in various computer vision systems. Specifically, the method of this embodiment comprises:
S401: inputting an image;
S402: performing a segmentation operation on the currently input image to obtain the sky image object;
S403: performing a segmentation operation on the currently input image to obtain the ground image object and the middle-portion image object;
S404: calculating the depth information of all points in the ground image object;
S404 may specifically comprise: selecting, from the preset depth information algorithms, a depth information algorithm based on interpolation for the ground image object in the object set of the image; calculating the depth information of at least three points in the ground image object; and obtaining, according to the depth information of the at least three calculated points, the depth information of all points in the ground image object by interpolation.
S405: calculating the depth information of the middle-portion image object;
S405 may specifically comprise: selecting, from the preset depth information algorithms, a depth information algorithm based on binocular stereo geometry for the middle-portion image object in the object set of the image; taking a unit plane in the middle-portion image object as the basic unit and calculating the depth information of each unit plane in the middle-portion image object using the depth information algorithm based on binocular stereo geometry, the unit plane being a plane in the middle-portion image object whose size is a preset unit area; and combining the depth information of the unit planes to obtain the depth information of the middle-portion image object.
Wherein, taking a unit plane in the middle-portion image object as the basic unit and calculating the depth information of each unit plane in the middle-portion image object using the depth information algorithm based on binocular stereo geometry comprises: performing a contour extraction operation on the middle-portion image object; modeling the middle-portion image object according to the result of the contour extraction operation, to obtain a perspective wireframe model of the middle-portion image object; dividing each face in the perspective wireframe model of the middle-portion image object according to the preset unit area, to obtain the unit planes corresponding to each face in the perspective wireframe model; and calculating the unit planes in the perspective wireframe model using the binocular stereo geometry depth calculation method, to obtain the depth information of each unit plane.
S406: marking the depth information of the sky image object in the object set of the image as infinitely far;
S407: combining the depth information of the image objects to obtain the depth map of the input image.
The embodiment of the present invention obtains different image objects by segmenting the input image and then selects a depth information algorithm for each image object to calculate its depth information. The inventor has found that a reasonably accurate depth map of the input image can thus be obtained quickly; in particular, in scenes where the accuracy requirement on the depth information is not high and only the near-far relationships are needed, the requirement of quickly obtaining a depth map can be met, saving computational resources and reducing cost.
The depth map generation device of the embodiments of the present invention is described in detail below.
Referring to Fig. 5, which is a schematic structural diagram of a depth map generation device according to an embodiment of the present invention, the device can be applied in various computer vision systems. Specifically, the device of this embodiment comprises:
a segmentation module 1, configured to perform a segmentation operation on the input image to obtain an object set of the image, the object set of the image comprising any one or more of a sky image object, a ground image object and a middle-portion image object.
The currently input image may be a binocular image captured by cameras in scenes such as street view. An image generally contains the sky, the ground, and a middle portion between the sky and the ground; the middle portion may contain images of buildings, street trees, vehicles, sculptures and the like.
For the segmentation operation on the input image, the segmentation module 1 may specifically adopt the segmentation described in the method embodiment corresponding to Fig. 1, which is not repeated here.
a computing module 2, configured to select, from preset depth information algorithms, a depth information algorithm for each image object in the object set of the image and calculate the depth information of the corresponding image object.
The preset depth information algorithms may comprise several algorithms, and the computing module 2 may select a different depth information algorithm for calculation according to the type of the image object obtained by segmentation. Specifically, for the sky image object, the computing module 2 directly marks its depth information as infinitely far, so the sky image object is represented as infinitely far in the depth map; for the ground image object, depth values are computed by interpolation; and for the middle-portion image object, a more detailed calculation is performed with the depth information algorithm based on binocular stereo geometry.
a combining module 3, configured to combine the depth information of the image objects to obtain the depth map of the input image.
Since the depth map of the image represents the distance from each point on the image to the same camera, recording the obtained depth information of each point into the depth map yields the depth map of the input image.
The segmentation operation on the input image in this embodiment is identical to the segmentation described in the embodiment corresponding to Fig. 1 above, and is not repeated here.
The embodiment of the present invention obtains different image objects by segmenting the input image and then selects a depth information algorithm for each image object to calculate its depth information. The inventor has found that a reasonably accurate depth map of the input image can thus be obtained quickly; in particular, in scenes where the accuracy requirement on the depth information is not high and only the near-far relationships are needed, the requirement of quickly obtaining a depth map can be met, saving computational resources and reducing cost.
Further, referring to Fig. 6, which is a schematic diagram of one specific structure of the computing module in Fig. 5, the computing module 2 performs the depth calculation for the ground image object specifically through the units in Fig. 6. Specifically, the computing module 2 comprises:
a first selection unit 21, configured to select, from the preset depth information algorithms, a depth information algorithm based on interpolation for the ground image object in the object set of the image;
a first computing unit 22, configured to calculate the depth information of at least three points in the ground image object;
a first processing unit 23, configured to obtain, according to the depth information of the at least three calculated points, the depth information of all points in the ground image object by interpolation.
Specifically, the depth information algorithm based on interpolation selects a suitable specific function, calculates the depth values of some points in the region of the ground image object, and then uses the values of the selected function at the remaining points of the region as approximate depth values for those points. Through the first computing unit 22 and the first processing unit 23, the computing module 2 uses the currently common block matching (BlockMatch) to calculate the depth information of some points and then obtains the depth information of the other points by interpolation. Once the depth information of every point is obtained, the calculation of the depth information of the ground image object is complete, and the depth map of the ground image object portion is obtained.
The computing module 2 calculates the depth information of the ground portion by interpolation: after only a few points have been calculated, the depth values of the other points can be obtained, which meets the requirement of quickly computing depth information and more effectively excludes the influence of objects such as trees and vehicles on the depth information of the ground portion.
Further, referring to Fig. 7, which is a schematic diagram of another specific structure of the computing module in Fig. 5, the computing module 2 performs the depth calculation for the middle-portion image object specifically through the units in Fig. 7. Specifically, the computing module 2 comprises:
a second selection unit 24, configured to select, from the preset depth information algorithms, a depth information algorithm based on binocular stereo geometry for the middle-portion image object in the object set of the image;
a second computing unit 25, configured to take a unit plane in the middle-portion image object as the basic unit and calculate the depth information of each unit plane in the middle-portion image object using the depth information algorithm based on binocular stereo geometry, the unit plane being a plane in the middle-portion image object whose size is a preset unit area;
a second processing unit 26, configured to combine the depth information of the unit planes to obtain the depth information of the middle-portion image object.
The depth information algorithm based on binocular stereo geometry obtains the depth information of a target point from the distances between the same target point on the image object and two identical cameras. Through the second computing unit 25 and the second processing unit 26, the computing module 2 takes the unit plane in the middle-portion image object as the basic unit, replaces the target point with the point set formed by all points in the unit plane, and calculates the depth information of that unit plane.
Through the second computing unit 25 and the second processing unit 26, for the input images it can be assumed that the imaging planes lie in the same plane and that the coordinate axes of the two cameras are parallel to each other with coinciding x-axes; the distance between the cameras along the x-axis is called the baseline, and the positional difference of the target point between the images obtained by the two cameras is called the disparity. Then, from the similarity of triangles in the stereo geometry, the baseline, the disparity and the focal length of the two cameras, the depth information of the target point and of the point set on the image can be obtained, and thus the depth information of the unit plane.
More specifically, the second computing unit 25 may further comprise the following subunits to complete the corresponding functions:
an extraction subunit, configured to perform a contour extraction operation on the middle-portion image object;
a modeling subunit, configured to model the middle-portion image object according to the result of the contour extraction operation, to obtain a perspective wireframe model of the middle-portion image object;
a plane division subunit, configured to divide each face in the perspective wireframe model of the middle-portion image object according to the preset unit area, to obtain the unit planes corresponding to each face in the perspective wireframe model;
a calculation subunit, configured to calculate the unit planes in the perspective wireframe model using the binocular stereo geometry depth calculation method, to obtain the depth information of each unit plane.
Through the above subunits, the second computing unit 25 first models the middle-portion image object according to its contour, for example using cylinders to represent trees and people and using cuboids or cubes to represent buildings, so that each object in the middle portion can be described approximately as a regular geometric wireframe model; the depth information of each point set (unit plane) in the wireframe model is then calculated with the point set as the basic unit, which further improves the speed of depth calculation and saves computational cost.
Through the depth information algorithm based on binocular stereo geometry, the computing module 2 calculates the depth information of objects such as buildings, people and vehicles with a unit plane formed by a point set as the basic unit, so the calculation need not be carried out point by point, which significantly increases the computation speed of the depth information; the modeling further improves the speed of depth calculation and saves computational cost.
Further, for the sky image object, the computing module 2 may comprise a marking unit for the processing: the marking unit is configured to mark the depth information of the sky image object in the object set of the image as infinitely far.
It should be noted that the computing module 2 may comprise the first selection unit 21, the first computing unit 22 and the first processing unit 23, the second selection unit 24, the second computing unit 25 and the second processing unit 26, and the marking unit at the same time; of course, the units used to complete the corresponding functions of the computing module 2 may also be selected according to the actual image capture scene.
The embodiment of the present invention can quickly obtain a reasonably accurate depth map of the input image; in particular, in scenes where the accuracy requirement on the depth information is not high and only the near-far relationships are needed, the requirement of quickly obtaining a depth map can be met, saving computational resources and reducing cost.
A person of ordinary skill in the art will understand that all or some of the processes of the methods in the above embodiments may be implemented by a computer program instructing the relevant hardware. The program may be stored in a computer-readable storage medium, and when executed, the program may include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM) or the like.
What is disclosed above is merely a preferred embodiment of the present invention, which certainly cannot be used to limit the scope of rights of the present invention; therefore, equivalent variations made according to the claims of the present invention still fall within the scope covered by the present invention.

Claims (6)

1. A depth map generation method, characterized by comprising:
S1, performing a segmentation operation on a currently input image to obtain an object set of the image, the object set of the image comprising any one or more of a sky image object, a ground image object and a middle-portion image object;
S2, selecting, from preset depth information algorithms, a depth information algorithm for each image object in the object set of the image, and calculating the depth information of the corresponding image object, which, if the object set of the image comprises a middle-portion image object, specifically comprises:
S21, selecting, from the preset depth information algorithms, a depth information algorithm based on binocular stereo geometry for the middle-portion image object in the object set of the image;
S22, taking a unit plane in the middle-portion image object as the basic unit, calculating the depth information of each unit plane in the middle-portion image object using the depth information algorithm based on binocular stereo geometry, the unit plane being a plane in the middle-portion image object whose size is a preset unit area, specifically:
S221, performing a contour extraction operation on the middle-portion image object,
S222, modeling the middle-portion image object according to the result of the contour extraction operation, to obtain a perspective wireframe model of the middle-portion image object,
S223, dividing each face in the perspective wireframe model of the middle-portion image object according to the preset unit area, to obtain the unit planes corresponding to each face in the perspective wireframe model,
S224, calculating the unit planes in the perspective wireframe model using the binocular stereo geometry depth calculation method, to obtain the depth information of each unit plane;
S23, combining the depth information of the unit planes to obtain the depth information of the middle-portion image object;
S3, combining the depth information of the image objects to obtain the depth map of the input image.
2. The method of claim 1, characterized in that, if the object set of the image comprises a ground image object, said selecting a depth information algorithm for each image object in the object set of the image from preset depth information algorithms and calculating the depth information of the corresponding image object comprises:
selecting, from the preset depth information algorithms, a depth information algorithm based on interpolation for the ground image object in the object set of the image;
calculating the depth information of at least three points in the ground image object;
obtaining, according to the depth information of the at least three calculated points, the depth information of all points in the ground image object by interpolation.
3. The method of claim 1, characterized in that, if the object set of the image comprises a sky image object, calculating the depth information of the sky image object in the object set of the image comprises:
marking the depth information of the sky image object in the object set of the image as infinitely far.
4. A depth map generation device, characterized by comprising:
a segmentation module, configured to perform a segmentation operation on the input image to obtain an object set of the image, the object set of the image comprising any one or more of a sky image object, a ground image object and a middle-portion image object;
a computing module, configured to select, from preset depth information algorithms, a depth information algorithm for each image object in the object set of the image and calculate the depth information of the corresponding image object, the computing module comprising a second selection unit, a second computing unit and a second processing unit,
the second selection unit being configured to select, from the preset depth information algorithms, a depth information algorithm based on binocular stereo geometry for the middle-portion image object in the object set of the image,
the second computing unit being configured to take a unit plane in the middle-portion image object as the basic unit and calculate the depth information of each unit plane in the middle-portion image object using the depth information algorithm based on binocular stereo geometry, the unit plane being a plane in the middle-portion image object whose size is a preset unit area,
the second computing unit further comprising the following subunits:
an extraction subunit, configured to perform a contour extraction operation on the middle-portion image object,
a modeling subunit, configured to model the middle-portion image object according to the result of the contour extraction operation, to obtain a perspective wireframe model of the middle-portion image object,
a plane division subunit, configured to divide each face in the perspective wireframe model of the middle-portion image object according to the preset unit area, to obtain the unit planes corresponding to each face in the perspective wireframe model,
a calculation subunit, configured to calculate the unit planes in the perspective wireframe model using the binocular stereo geometry depth calculation method, to obtain the depth information of each unit plane,
the second processing unit being configured to combine the depth information of the unit planes to obtain the depth information of the middle-portion image object; and
a combining module, configured to combine the depth information of the image objects to obtain the depth map of the input image.
5. The device of claim 4, characterized in that the computing module comprises:
a first selection unit, configured to select, from the preset depth information algorithms, a depth information algorithm based on interpolation for the ground image object in the object set of the image;
a first computing unit, configured to calculate the depth information of at least three points in the ground image object;
a first processing unit, configured to obtain, according to the depth information of the at least three calculated points, the depth information of all points in the ground image object by interpolation.
6. The device of claim 4, characterized in that the computing module comprises:
a marking unit, configured to mark the depth information of the sky image object in the object set of the image as infinitely far.
CN201310069640.3A 2013-03-05 2013-03-05 Depth map generation method and device Active CN103198473B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310069640.3A CN103198473B (en) 2013-03-05 2013-03-05 Depth map generation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310069640.3A CN103198473B (en) 2013-03-05 2013-03-05 Depth map generation method and device

Publications (2)

Publication Number Publication Date
CN103198473A CN103198473A (en) 2013-07-10
CN103198473B true CN103198473B (en) 2016-02-24

Family

ID=48720979

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310069640.3A Active CN103198473B (en) 2013-03-05 2013-03-05 Depth map generation method and device

Country Status (1)

Country Link
CN (1) CN103198473B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103617608B (en) * 2013-10-24 2016-07-06 四川长虹电器股份有限公司 By the method that binocular image obtains depth map
US10282591B2 (en) 2015-08-24 2019-05-07 Qualcomm Incorporated Systems and methods for depth map sampling
CN105374019B (en) * 2015-09-30 2018-06-19 华为技术有限公司 A kind of more depth map fusion methods and device
CN106231177A (en) * 2016-07-20 2016-12-14 成都微晶景泰科技有限公司 Scene depth measuring method, equipment and imaging device
CN108024051B (en) * 2016-11-04 2021-05-04 宁波舜宇光电信息有限公司 Distance parameter calculation method, double-camera module and electronic equipment
CN109274864A (en) * 2018-09-05 2019-01-25 深圳奥比中光科技有限公司 Depth camera, and depth calculation system and method
CN109658451B (en) * 2018-12-04 2021-07-30 深圳市道通智能航空技术股份有限公司 Depth sensing method and device and depth sensing equipment
CN111080804B (en) * 2019-10-23 2020-11-06 贝壳找房(北京)科技有限公司 Three-dimensional image generation method and device
CN111025137A (en) * 2019-12-13 2020-04-17 苏州华电电气股份有限公司 Open type isolating switch state sensing device
WO2022040941A1 (en) * 2020-08-25 2022-03-03 深圳市大疆创新科技有限公司 Depth calculation method and device, and mobile platform and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102308320A (en) * 2009-02-06 2012-01-04 香港科技大学 Generating three-dimensional models from images

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9129432B2 (en) * 2010-01-28 2015-09-08 The Hong Kong University Of Science And Technology Image-based procedural remodeling of buildings

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102308320A (en) * 2009-02-06 2012-01-04 香港科技大学 Generating three-dimensional models from images

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Putting Objects in Perspective; Derek Hoiem; International Journal of Computer Vision; 2008-04-17; pp. 3-25 *
Depth Estimation of a Single Static Street-View Image Based on Content Understanding; Li Le et al.; Robot; 2011-01-31; Vol. 33, No. 1; pp. 174-180 *

Also Published As

Publication number Publication date
CN103198473A (en) 2013-07-10

Similar Documents

Publication Publication Date Title
CN103198473B (en) Depth map generation method and device
CN108537876B (en) Three-dimensional reconstruction method, device, equipment and storage medium
CN108520536B (en) Disparity map generation method and device and terminal
US20190318547A1 (en) System and method for dense, large scale scene reconstruction
WO2019127445A1 (en) Three-dimensional mapping method, apparatus and system, cloud platform, electronic device, and computer program product
CN101324663B (en) Rapid blocking and grating algorithm of laser radar point clouds data
Einecke et al. A multi-block-matching approach for stereo
EP3570253A1 (en) Method and device for reconstructing three-dimensional point cloud
KR20140060454A (en) Backfilling points in a point cloud
CN105469386B (en) A kind of method and device of determining stereoscopic camera height and pitch angle
ES2392229A1 (en) Method for generating a model of a flat object from views of the object
CN114419130A (en) Bulk cargo volume measurement method based on image characteristics and three-dimensional point cloud technology
CN112907573B (en) Depth completion method based on 3D convolution
CN102368137A (en) Embedded calibrating stereoscopic vision system
WO2016129612A1 (en) Method for reconstructing a three-dimensional (3d) scene
CN108401551B (en) Twin-lens low-light stereoscopic full views imaging device and its ultra-large vision field distance measuring method
Ambrosch et al. Hardware implementation of an SAD based stereo vision algorithm
CN114881841A (en) Image generation method and device
CN104236468A (en) Method and system for calculating coordinates of target space and mobile robot
Rothermel et al. Fast and robust generation of semantic urban terrain models from UAV video streams
CN103971356B (en) Street view image Target Segmentation method and device based on parallax information
Sánchez-Ferreira et al. Development of a stereo vision measurement architecture for an underwater robot
CN112948605A (en) Point cloud data labeling method, device, equipment and readable storage medium
CN116704112A (en) 3D scanning system for object reconstruction
CN112215048B (en) 3D target detection method, device and computer readable storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20180117

Address after: Floors 5-10, Fiyta Building, Gaoxin South Road, High-tech Zone, Nanshan District, Shenzhen, Guangdong Province, 518000

Patentee after: Shenzhen Tencent Computer System Co., Ltd.

Address before: Room 403, East Block 2, SEG Science Park, Zhenxing Road, Futian District, Shenzhen, Guangdong Province, 518057

Patentee before: Tencent Technology (Shenzhen) Co., Ltd.