CN105530503A - Depth map creating method and multi-lens camera system - Google Patents

Depth map creating method and multi-lens camera system

Info

Publication number
CN105530503A
Authority
CN
China
Prior art keywords
depth map
image
overlapping
overlapping region
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410520072.9A
Other languages
Chinese (zh)
Inventor
郑青峰 (Zheng Qingfeng)
吴俊辉 (Wu Junhui)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lite On Technology Corp
Original Assignee
Lite On Technology Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lite On Technology Corp filed Critical Lite On Technology Corp
Priority to CN201410520072.9A priority Critical patent/CN105530503A/en
Publication of CN105530503A publication Critical patent/CN105530503A/en
Pending legal-status Critical Current

Landscapes

  • Studio Devices (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a depth map creation method and a multi-lens camera system. The depth map creation method comprises the steps of: acquiring a plurality of images of the same scene with a multi-lens camera that includes non-parallel lenses; determining the overlapping and non-overlapping regions of the images by comparing adjacent images; creating an overlapping-region depth map from the disparity of corresponding pixels in the two images of each overlapping region; estimating one or more non-overlapping-region depth maps for the images in the non-overlapping regions; and obtaining the borders or overlapping areas between the overlapping-region depth map and the adjacent non-overlapping-region depth maps, so that a global depth map can be formed by stitching.

Description

Depth map creation method and multi-lens camera system
Technical field
The present invention relates to a depth map creation method and a multi-lens camera system, and in particular to a multi-lens camera system and a method of building a depth map that combines the overlapping and non-overlapping images captured by its multiple lenses.
Background technology
A depth map is an image that encodes the depth of each object in a scene, and is usually represented in grayscale. The depth information is also called the Z-depth, i.e. the depth along the Z axis perpendicular to the X-Y image plane. A stereoscopic image can therefore be reconstructed from the depth of each object in the image.
Most prior-art techniques for building stereoscopic images from captured photographs shoot the same scene with two parallel lenses and compute a depth map from the overlapping region of the two images; this depth map is the basis for constructing the stereoscopic (3D) image.
The images captured by the two lenses are projected onto two image sensors, and the depth map can be computed from the disparity between the images. However, each lens has a limited field of view (FOV), and no depth map can be produced where the two fields of view do not overlap.
Summary of the invention
To build a complete depth map of the whole scene even in cases where the overlapping region is small, the present invention proposes a depth map creation method and a multi-lens camera system that performs it. Besides the depth map computed from the parallax information in the overlapping part of the images captured by the multiple lenses, a depth map of the non-overlapping part is also built by estimation, so that a global depth map can be obtained after the two are combined.
According to the method embodiment, the steps comprise first acquiring multiple images of the same scene with a multi-lens camera, each image containing at least one overlapping region and at least one non-overlapping region, and then determining the overlapping and non-overlapping regions in each image. An overlapping-region depth map is then built: the disparity of corresponding pixels in the two images of the overlapping region yields the depth of each pixel, from which the overlapping-region depth map is established. One or more non-overlapping-region depth maps of the images in the non-overlapping regions are then built by a depth estimation method.
After obtaining the borders or overlapping areas between the overlapping-region depth map and the adjacent non-overlapping-region depth map(s), the two kinds of depth map can be stitched to form a global depth map.
According to the system embodiment, the multi-lens camera system includes a multi-lens module, comprising a first lens module that captures a first image of a scene and a second lens module that captures a second image of the scene; the two lens modules are non-parallel lenses. The system comprises a temporary storage unit that buffers the images, and an overlapping-region determination unit that reads the first and second images from the temporary storage unit and determines their overlapping region from the fields of view of the first and second lens modules and the distance between the lenses and the scene. An overlapping-region depth computation unit builds an overlapping-region depth map from the disparity of corresponding pixels in the images, and a non-overlapping-region depth computation unit builds one or more non-overlapping-region depth maps. The system also comprises a stitching unit, which obtains the borders or overlapping areas between the overlapping-region depth map and the adjacent non-overlapping-region depth map(s) and forms a global depth map after stitching.
For a fuller understanding of the techniques, methods and effects the present invention adopts to achieve its objects, refer to the following detailed description and the accompanying drawings. The drawings are provided for reference and illustration only and are not intended to limit the present invention.
Accompanying drawing explanation
Fig. 1 is a first schematic view of the fields of view of two non-parallel lenses in a camera shooting the same scene;
Fig. 2 is a second schematic view of the fields of view of two non-parallel lenses in a camera shooting the same scene;
Figs. 3A and 3B are schematic views of the regions obtained when shooting the same scene;
Fig. 4 is a schematic view of depth estimation using a vanishing-point-based algorithm;
Fig. 5 shows an embodiment of the multi-lens camera system that adopts the depth map creation method of the present invention;
Fig. 6 is a flowchart describing an embodiment of the depth map creation method of the present invention;
Fig. 7 illustrates the stitching of depth maps;
Fig. 8 shows an embodiment of the method of stitching adjacent depth maps in the depth map creation method of the present invention.
Description of reference numerals:
Camera 10; first lens 101
Second lens 102; first field of view 1a
Second field of view 1b; overlapping region 1c
First sensor 103; second sensor 104
Camera 20; first lens 201
Second lens 202; first field of view 2a
Second field of view 2b; overlapping region 2c
First sensor 203; second sensor 204
First-image non-overlapping region 301; first-image overlapping region 302
Second-image overlapping region 303; second-image non-overlapping region 304
Image regions a, b
Scene 4; vanishing point 401
Objects 4a, 4b, 4c, 4d, 4e
First lens module 501; second lens module 502
Temporary storage unit 503; overlapping-region determination unit 504
Overlapping-region depth computation unit 505
Non-overlapping-region depth computation unit 506
Storage unit 507; stitching unit 508
Output unit 509
Non-overlapping-region depth images 701, 702
Depth-map overlapping region 703; image regions c, d
Steps S601–S615: depth map creation flow
Steps S801–S811: stitching method flow
Embodiment
The present invention proposes a depth map creation method and a multi-lens camera system. A twin-lens camera capable of producing stereoscopic images, for example one with two non-parallel lenses set a distance apart, can derive from the disparity between the images captured by the two lenses the depth information that forms a stereoscopic image, producing a depth map; software can then be used to view the stereoscopic image formed from the depth map.
In a camera with non-parallel lenses, however, the overlap between the images captured by the two lenses is relatively narrow. If only this overlap is used to produce a depth map, the depth map does not extend across the whole picture, or appears very small. The present invention therefore proposes a depth map creation method and a multi-lens camera system, one of whose objects is to stitch the depth maps of the overlapping and non-overlapping regions so as to produce a larger, more useful depth map.
Refer first to Fig. 1, a schematic view of the fields of view of a camera with two non-parallel lens modules shooting the same scene. In a non-parallel design the two lenses are not arranged along the same axis, so the two non-parallel lenses (101, 102) can capture a wider scene. Camera 10 includes a first lens module formed by first lens 101 and first sensor 103, and a second lens module formed by second lens 102 and second sensor 104. The figure only schematically illustrates the relative positions of the lenses and sensors; actual designs are not limited to the arrangement shown.
The two lens modules are separated by a distance and shoot the same scene simultaneously. When the scene is shot with the first lens module (101, 103), the coverage is the scenery contained in first field of view 1a; shooting the same scene with the second lens module (102, 104) yields the scenery contained in second field of view 1b. Between first field of view 1a and second field of view 1b lies overlapping region 1c.
When camera 10 shoots the scene, the image of overlapping region 1c is obtained from the images captured by the first lens module (101, 103) and the second lens module (102, 104) respectively. An object in overlapping region 1c appears at different positions on the different image sensors, i.e. with a disparity, from which the depth of the object can be derived. According to one embodiment described in this specification, the depth map of the scene within overlapping region 1c can be obtained on this principle. The regions outside overlapping region 1c are the non-overlapping regions of the images, where conventional techniques cannot obtain a depth map; the global depth map produced by the invention of this specification also covers the depth maps produced for these non-overlapping regions.
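The stereo relation exploited in the overlapping region can be sketched as follows. This is a minimal illustration under the standard pinhole stereo model, not the patent's implementation: the focal length, baseline, and disparity values are hypothetical, and a real system would first rectify the two images and run a stereo-matching algorithm to obtain per-pixel disparities.

```python
import numpy as np

def depth_from_disparity(disparity, focal_px, baseline_m):
    """Pinhole stereo model: Z = f * B / d (valid where disparity > 0)."""
    disparity = np.asarray(disparity, dtype=float)
    depth = np.full_like(disparity, np.inf)   # zero disparity -> infinitely far
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# Hypothetical values: 1000 px focal length, 5 cm baseline.
d = np.array([[10.0, 20.0], [50.0, 0.0]])    # disparities in pixels
z = depth_from_disparity(d, focal_px=1000.0, baseline_m=0.05)
# A 10 px disparity maps to 5 m; larger disparity means a closer object.
```

The same relation explains why the non-overlapping regions need a different method: without a second view there is no disparity to plug into the formula.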
Another implementation aspect of the camera is shown schematically in Fig. 2.
Shown is a camera 20 with two non-parallel lens modules. Camera 20 includes a first lens module formed by first lens 201 and first sensor 203, and a second lens module formed by second lens 202 and second sensor 204. In this example, second lens 202 is a micro-lens array lens, and the image captured through second lens 202 is projected onto second sensor 204.
The two lens modules are separated by a distance; the coverage of the first lens module (201, 203) is the scenery contained in first field of view 2a, and the second lens module (202, 204) obtains the scenery contained in second field of view 2b. Between first field of view 2a and second field of view 2b lies overlapping region 2c.
When camera 20 shoots the scene, the disparity of objects in overlapping region 2c on the different image sensors yields their depth, while for the non-overlapping image regions outside 2c the depth map is obtained by the estimation methods of the embodiments in this specification.
For the scenery in the non-overlapping regions, the proposed depth map creation method and multi-lens camera system further apply techniques for converting two-dimensional (2D) images to stereoscopic (3D) images. The core of the conversion is to estimate, for objects in the non-overlapping region, the depth cues that imaging on different image sensors would have provided, thereby deriving depth information for objects in the two-dimensional image and building the depth map of the non-overlapping region.
Figs. 3A and 3B are schematic views of the overlapping and non-overlapping regions obtained when shooting the same scene, representing the image information obtained by the two lenses respectively.
The figures show the images of the same scene captured at the same time by two different lenses (e.g. non-parallel lenses), displayed as image region a and image region b respectively. After image comparison, the images of the scene taken by the two lenses set a distance apart show overlapping and non-overlapping parts. As illustrated, image region a contains a first-image non-overlapping region 301 that does not overlap image region b, and an overlapping part, first-image overlapping region 302; likewise, image region b contains a second-image overlapping region 303 that overlaps image region a, and a second-image non-overlapping region 304 that does not.
Thus, besides building a depth map in the overlapping region from the parallax information that left-eye/right-eye stereoscopic vision produces, the depth map creation method of this specification uses conversion techniques to estimate depth cues for the objects in the non-overlapping regions, thereby obtaining the depth of each image object there and building the depth maps of the non-overlapping regions.
One method of building depth information from the flat image of a non-overlapping region is a reflectance-based estimation algorithm. Its principle is that parts of an object at different depths, or whole objects, reflect light at different angles; from the differences between the reflections of different objects in the scene (or of different surfaces of the same object) received by the image sensor, depth information for each object is produced according to the contrast of the light. The distance between an object and the lens, i.e. the object's depth, can thus be estimated from the image of the object's reflective parts in the two-dimensional image.
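A crude single-image cue of this kind can be sketched as below. This is only an illustration of the idea, not the patent's algorithm: it assumes, purely for the sketch, that regions reflecting more light toward the sensor (higher mean luminance) are nearer, and assigns each labeled region a relative depth on that basis; the luminance values and region labels are hypothetical.

```python
import numpy as np

def relative_depth_from_luminance(luma, labels):
    """Assign each labeled region a relative depth in [0, 1].

    Assumption (for illustration only): brighter regions reflect more
    light toward the sensor and are treated as nearer (smaller depth).
    """
    ids = np.unique(labels)
    mean_luma = {i: luma[labels == i].mean() for i in ids}
    lo, hi = min(mean_luma.values()), max(mean_luma.values())
    span = (hi - lo) or 1.0
    # Brightest region -> depth 0.0 (near), darkest -> depth 1.0 (far).
    return {i: (hi - m) / span for i, m in mean_luma.items()}

luma = np.array([[200.0, 200.0], [50.0, 125.0]])   # hypothetical luminance
labels = np.array([[0, 0], [1, 2]])                # hypothetical region labels
depths = relative_depth_from_luminance(luma, labels)
# Region 0 (brightest) -> 0.0, region 1 (darkest) -> 1.0, region 2 -> 0.5
```

A practical system would combine such a cue with the others discussed below, since luminance alone confuses albedo with distance.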
Other embodiments can also classify features such as hue, saturation and brightness to derive the depth information of each object or region in the picture; from the relation between the image's depth cues and the objects, the single or progressive depth of an object can be inferred. The depth of objects in the two-dimensional image can thus be estimated and a depth map built.
Yet another method uses image processing techniques to obtain the depth relations between objects in a flat image. For example, a reference plane in the image, which may be called the zero plane, is first defined; the spatial relations between objects, including horizontal and vertical relations, are then computed from the positional relations of the objects to one another and to this zero plane, yielding the depth information of each object.
Another implementation, illustrated in Fig. 4, estimates depth with a vanishing-point-based algorithm. The example shows a flat scene 4 containing several people, articles and other objects; image processing can derive the spatial relations among the objects and obtain a vanishing point 401, for example a virtual vanishing point established at the farthest point.
The depth of each object can then be estimated from its line to vanishing point 401. Foreground objects can first be separated from background objects, the better to distinguish the two. For example: object 4a is an article on one wall and object 4b is a person; the distances of object 4a and object 4b from vanishing point 401 represent their positional relation. The relations of object 4c and other objects, such as object 4d, to vanishing point 401 likewise represent the spatial relation between those objects; similarly, object 4e, an article on another wall, can be related to object 4a or the other objects. Thus, the vanishing-point estimation algorithm obtains the distance of each object in the image from vanishing point 401, estimates each object's depth in the scene, and builds the image's depth map accordingly.
One way of obtaining the vanishing point is the Hough transform. The method first extracts features, such as linear relations, from the image by analysis; a linear relation can be obtained between objects, and several such lines allow the vanishing point of the space to be estimated. The relation of each object in the space to this vanishing point, and hence the object's depth information, can then be obtained.
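The vanishing-point step can be sketched as follows. The sketch assumes the line segments have already been extracted (in practice an edge detector plus a Hough transform would supply them) and estimates the vanishing point as the average pairwise intersection of the lines, using homogeneous coordinates; the sample lines are hypothetical and chosen to converge at one point.

```python
import itertools
import numpy as np

def vanishing_point(lines):
    """Estimate a vanishing point from 2D lines given as ((x1, y1), (x2, y2)).

    Each line is converted to homogeneous form via the cross product of its
    endpoints; the intersections of all line pairs are then averaged.
    """
    homog = [np.cross([x1, y1, 1.0], [x2, y2, 1.0])
             for (x1, y1), (x2, y2) in lines]
    points = []
    for l1, l2 in itertools.combinations(homog, 2):
        p = np.cross(l1, l2)
        if abs(p[2]) > 1e-9:            # skip parallel lines (point at infinity)
            points.append(p[:2] / p[2])
    return np.mean(points, axis=0)

# Three hypothetical scene lines that converge at (5, 5).
lines = [((0.0, 0.0), (1.0, 1.0)),
         ((0.0, 10.0), (1.0, 9.0)),
         ((0.0, 5.0), (1.0, 5.0))]
vp = vanishing_point(lines)             # approximately (5.0, 5.0)
```

With the vanishing point in hand, an object's distance from it in the image can serve as the relative depth cue described above.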
Thus the depth map creation method of the present invention can obtain the depth of objects in the non-overlapping region by one of the above methods, or by other methods, to produce the non-overlapping-region depth map. The depth map of the overlapping region and the depth map of the non-overlapping region are then stitched together (image stitching) to build the complete depth map of the whole image, from which a complete stereoscopic image that does not lose the stereoscopic effect of the non-overlapping parts is finally formed.
An embodiment of the present invention proposes a multi-lens camera system that adopts the above method of combining the depth maps of the overlapping and non-overlapping images captured by multiple lenses. The system embodiment is shown in Fig. 5; its functional modules can be realized in hardware, or in software whose functional modules are executed by a computer system. Reference may also be made to the embodiment flow shown in Fig. 6.
According to this embodiment, the multi-lens camera system mainly comprises a multi-lens module with several lenses, in particular a multi-lens module with at least one non-parallel lens. The lens modules of the multi-lens camera have different viewing angles and shoot the same scene from different angles, obtaining multiple images of the scene, each containing at least one overlapping region and at least one non-overlapping region.
The figure shows, for example, a first lens module 501 and a second lens module 502. First lens module 501 captures a first image of a scene and second lens module 502 captures a second image of the scene; both are first stored in temporary storage unit 503 (Fig. 6, step S601). The two images captured by the two lens modules (501, 502) contain an overlapping part and non-overlapping parts. During computation, the two images are read from the system's temporary storage unit 503 (Fig. 6, step S603); temporary storage unit 503 is a memory in the system that buffers the first image and the second image.
The system then comprises an overlapping-region determination unit 504, which reads the stored first and second images from temporary storage unit 503 and, using image processing techniques, determines the extent of the overlapping region of the two images, and at the same time the remaining non-overlapping regions, from parameters such as the fields of view of first lens module 501 and second lens module 502, the distance between the two lens modules, and the distance to the scene being shot (Fig. 6, step S605). That is, the size of the overlapping region can be determined from the distance between the scene and the multi-lens module, together with the spacing and field-of-view range of each lens.
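The geometry behind step S605 can be sketched as below. The sketch assumes an idealized model with two parallel optical axes a baseline apart and a fronto-parallel scene plane (the non-parallel case would add the lenses' toe-in angles to the edge-ray directions); the FOV, baseline, and scene-distance values are hypothetical.

```python
import math

def overlap_width(fov_deg, baseline_m, scene_dist_m):
    """Width, on a fronto-parallel scene plane, of the strip seen by both
    of two parallel cameras separated by `baseline_m`."""
    half = math.radians(fov_deg / 2.0)
    cover = 2.0 * scene_dist_m * math.tan(half)   # width each camera covers
    return max(0.0, cover - baseline_m)           # no overlap if baseline too wide

# Hypothetical numbers: 60 degree FOV, 5 cm baseline, scene 2 m away.
w = overlap_width(60.0, 0.05, 2.0)
# Each camera covers about 2.31 m, so the shared strip is about 2.26 m wide;
# the overlap shrinks as the scene gets closer or the baseline grows.
```

This is why the determination unit needs the scene distance as an input: the same pair of lenses overlaps less on near subjects than on far ones.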
The image signal of the overlapping region is then processed by overlapping-region depth computation unit 505, which uses the disparity of corresponding pixels in the first and second images within the overlapping region to obtain the depth of each pixel and thereby builds an overlapping-region depth map (Fig. 6, step S607). The image signal of the non-overlapping regions is processed by non-overlapping-region depth computation unit 506; the computation includes building the non-overlapping-region depth map with a vanishing-point estimation algorithm or a reflectance-based estimation algorithm, i.e. the system builds at least one depth map of the image in the non-overlapping regions outside the overlapping region by one of the various depth estimation methods above (Fig. 6, step S609). The overlapping-region depth map and the non-overlapping-region depth map can first be stored in storage unit 507. The processing order of these steps (S607, S609) is not a limiting condition of the invention and can be decided according to the actual circuit and software design.
After the overlapping-region depth map and the non-overlapping-region depth map(s) are obtained by overlapping-region depth computation unit 505 and non-overlapping-region depth computation unit 506 respectively, the region of overlap between the depth maps is first determined (Fig. 6, step S611; see Fig. 7 and the explanation below). A stitching unit 508 then uses image-processing software techniques to stitch the borders or overlapping areas of the overlapping-region depth map and the adjacent non-overlapping-region depth map(s) (Fig. 6, step S613), forming after stitching a global depth map with depth information (Fig. 6, step S615).
The system further comprises an output unit 509 for outputting the global depth map so as to build a stereoscopic image. Output unit 509 may be, for example, an output interface that delivers the file to another device, or a display unit that presents the stereoscopic image.
After the system completes the stitching of the overlapping-region depth map and the adjacent non-overlapping-region depth map(s), a stitched map with depth information is built, as shown schematically in Fig. 7. Two image regions c and d are shown, formed by the two depth images 701 and 702 respectively, with a depth-map overlapping region 703 between them. During stitching, the image information of depth-map overlapping region 703, such as its border and pixel information, serves as the basis for the splice; the method is shown in Fig. 8.
The flow shown in Fig. 8 describes an embodiment of the method of stitching adjacent depth maps.
After the system obtains the images captured by the camera's two lens modules, it determines the overlapping and non-overlapping regions of the two images from the parameters of the lenses inside the camera, and builds the respective depth maps by the flow covered in Fig. 6 above.
The stitching procedure starts at step S801. When the depth maps of the same scene have been obtained (step S803), the depth maps of the images captured by the two lens modules undergo a preliminary correction, including obtaining image borders, confirming the overlapping region, and pixel processing; the overlapping region of the adjacent overlapping-region and non-overlapping-region depth maps is then obtained from the hardware parameters of the lens modules (step S805).
The image information of the borders or overlapping area of the two depth maps is then obtained (step S807). Before stitching, the overlapping area can be adjusted so that the splice does not produce improper image differences; for example, the image parameters of the bordering or overlapping area of the adjacent depth maps are adjusted after first comparing their edge pixels, brightness values and the like in the overlapping area (step S809). In one embodiment, if the two adjacent images were simply added, the brightness in the overlapping area would be multiplied up and the overlap would show an obvious bright band; the brightness values of the two images at the overlap are therefore reduced, or different brightness values are blended (as in a fade-out effect) to achieve an average brightness. The global depth map described above is finally formed by stitching (step S811).
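The blending described for step S809 can be sketched as a linear feather across the overlap. This is an illustrative sketch, not the patent's exact procedure: two 1-D depth strips with hypothetical values share an overlap, and a ramp weight replaces the naive sum so the overlap keeps an average level instead of a bright band.

```python
import numpy as np

def feather_stitch(left, right, overlap):
    """Stitch two 1-D strips whose last/first `overlap` samples coincide.

    In the overlap, a linear ramp fades `left` out and `right` in,
    so the values are averaged rather than summed.
    """
    left, right = np.asarray(left, float), np.asarray(right, float)
    w = np.linspace(1.0, 0.0, overlap)            # weight for the left strip
    blended = w * left[-overlap:] + (1.0 - w) * right[:overlap]
    return np.concatenate([left[:-overlap], blended, right[overlap:]])

# Two hypothetical depth strips agreeing over a 3-sample overlap.
a = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
b = np.array([3.0, 4.0, 5.0, 6.0, 7.0])
out = feather_stitch(a, b, overlap=3)
# Naive addition would double the overlap values; feathering keeps them level.
```

The same ramp generalizes to 2-D depth maps by applying the weight along the axis that crosses the overlapping strip.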
In summary, the depth map creation method proposed by the present invention is mainly directed at the images of different angles acquired by the lens modules of a multi-lens camera system. When the depth map of an image is obtained, a conversion of two-dimensional images to three-dimensional images is performed in particular for the non-overlapping regions to derive their depth information, so that both the overlapping and the non-overlapping parts of the image obtain depth maps; finally, image stitching is used to form a global stitched map of the two depth maps.
The above are preferred possible embodiments of the present invention and do not thereby limit the scope of its claims; equivalent structural changes made using the contents of the specification and drawings of the present invention are likewise included within the scope of the present invention.

Claims (10)

1. A depth map creation method, characterized in that the method comprises:
acquiring multiple images of the same scene with a multi-lens camera having at least one non-parallel lens, wherein each image contains at least one overlapping region and at least one non-overlapping region;
determining the overlapping region and the non-overlapping region in each image;
building an overlapping-region depth map of the image in the overlapping region, wherein the disparity of corresponding pixels in the two images of the overlapping region is used to obtain the depth of each pixel, and the overlapping-region depth map is built accordingly;
building one or more non-overlapping-region depth maps of the image in the non-overlapping region by a depth estimation method; and
stitching the overlapping-region depth map and the one or more adjacent non-overlapping-region depth maps to form a global depth map.
2. The depth map creation method of claim 1, wherein the non-overlapping-region depth map is built with a vanishing-point estimation algorithm.
3. The depth map creation method of claim 1, wherein the non-overlapping-region depth map is built with a reflectance-based estimation algorithm.
4. The depth map creation method of claim 2 or claim 3, wherein, after the multiple images of the scene taken by the multi-lens camera are first buffered, the overlapping region of each image with another adjacent image, and the non-overlapping region, are determined from the field of view of each lens of the multi-lens camera, the spacing of the lens modules, and the distance between the multi-lens camera and the scene.
5. The depth map creation method of claim 4, wherein the multi-lens camera has at least one micro-lens array lens.
6. The depth map creation method of claim 4, wherein the step of stitching the overlapping-region depth map and the one or more non-overlapping-region depth maps comprises:
obtaining the image information of the borders or overlapping area of the depth maps; and
adjusting the image parameters of the borders or overlapping area of the adjacent depth maps, so as to form the global depth map by stitching.
7. The depth map creation method of claim 6, wherein the image parameter of the borders or overlapping area of the adjacent depth maps that is adjusted is a brightness value.
8. A multi-lens camera system, characterized in that the system comprises:
a multi-lens module with at least one non-parallel lens, comprising a first lens module that captures a first image of a scene and a second lens module that captures a second image of the scene;
a temporary storage unit for buffering the first image and the second image;
an overlapping-region determination unit, which reads the first image and the second image from the temporary storage unit and determines the overlapping region of the first image and the second image from the fields of view of the first lens module and the second lens module and the distance to the scene;
an overlapping-region depth computation unit, which uses the disparity of corresponding pixels in the first image and the second image within the overlapping region to obtain the depth of each pixel and builds an overlapping-region depth map accordingly;
a non-overlapping-region depth computation unit, which builds one or more non-overlapping-region depth maps of the image in the non-overlapping regions outside the overlapping region by a depth estimation method; and
a stitching unit, which obtains the borders or overlapping areas of the overlapping-region depth map and the one or more adjacent non-overlapping-region depth maps and forms a global depth map after stitching.
9. The multi-lens camera system of claim 8, wherein the depth estimation method builds the non-overlapping-region depth map with a vanishing-point estimation algorithm or a reflectance-based estimation algorithm.
10. The multi-lens camera system of claim 8, wherein the multi-lens camera has at least one micro-lens array lens.
CN201410520072.9A 2014-09-30 2014-09-30 Depth map creating method and multi-lens camera system Pending CN105530503A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410520072.9A CN105530503A (en) 2014-09-30 2014-09-30 Depth map creating method and multi-lens camera system

Publications (1)

Publication Number Publication Date
CN105530503A true CN105530503A (en) 2016-04-27

Family

ID=55772445

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410520072.9A Pending CN105530503A (en) 2014-09-30 2014-09-30 Depth map creating method and multi-lens camera system

Country Status (1)

Country Link
CN (1) CN105530503A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101673395A (en) * 2008-09-10 2010-03-17 深圳华为通信技术有限公司 Image mosaic method and image mosaic device
CN103530597A (en) * 2012-07-03 2014-01-22 纬创资通股份有限公司 Operation object identification and operation object depth information establishing method and electronic device
US20140184640A1 (en) * 2011-05-31 2014-07-03 Nokia Corporation Methods, Apparatuses and Computer Program Products for Generating Panoramic Images Using Depth Map Data
CN103916658A (en) * 2014-04-18 2014-07-09 山东大学 3DV system inter-viewpoint depth image generating method adopting depth spread technology


Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107399274A (en) * 2016-05-06 2017-11-28 财团法人金属工业研究发展中心 image superposition method
CN107396080A (en) * 2016-05-17 2017-11-24 纬创资通股份有限公司 Method and system for generating depth information
CN107396080B (en) * 2016-05-17 2019-04-09 纬创资通股份有限公司 Method and system for generating depth information
CN107884930A (en) * 2016-09-30 2018-04-06 宏达国际电子股份有限公司 Wear-type device and control method
CN107884930B (en) * 2016-09-30 2020-06-09 宏达国际电子股份有限公司 Head-mounted device and control method
WO2019178970A1 (en) * 2018-03-23 2019-09-26 深圳奥比中光科技有限公司 Structured light projection module and depth camera
US11543671B2 (en) 2018-03-23 2023-01-03 Orbbec Inc. Structured light projection module and depth camera
CN108769476A (en) * 2018-06-06 2018-11-06 Oppo广东移动通信有限公司 Image acquiring method and device, image collecting device, computer equipment and readable storage medium storing program for executing
WO2019233106A1 (en) * 2018-06-06 2019-12-12 Oppo广东移动通信有限公司 Image acquisition method and device, image capture device, computer apparatus, and readable storage medium
WO2019233169A1 (en) * 2018-06-06 2019-12-12 Oppo广东移动通信有限公司 Image processing method and device, electronic device, computer apparatus, and storage medium
CN108830785A (en) * 2018-06-06 2018-11-16 Oppo广东移动通信有限公司 Background-blurring method and device, electronic device, computer equipment and storage medium
CN108830785B (en) * 2018-06-06 2021-01-15 Oppo广东移动通信有限公司 Background blurring method and apparatus, electronic apparatus, computer device, and storage medium
CN108777784A (en) * 2018-06-06 2018-11-09 Oppo广东移动通信有限公司 Depth acquisition methods and device, electronic device, computer equipment and storage medium
CN111932576A (en) * 2020-07-15 2020-11-13 中国科学院上海微系统与信息技术研究所 Object boundary measuring method and device based on depth camera
CN111932576B (en) * 2020-07-15 2023-10-31 中国科学院上海微系统与信息技术研究所 Object boundary measuring method and device based on depth camera

Similar Documents

Publication Publication Date Title
CN105530503A (en) Depth map creating method and multi-lens camera system
JP6021541B2 (en) Image processing apparatus and method
US10326981B2 (en) Generating 3D images using multi-resolution camera set
CN106331527B (en) Image stitching method and device
CN104539925B (en) The method and system of three-dimensional scenic augmented reality based on depth information
US9013559B2 (en) System, method and program for capturing images from a virtual viewpoint
CN106254854B (en) Preparation method, the apparatus and system of 3-D image
KR101944911B1 (en) Image processing method and image processing apparatus
CN107545586B (en) Depth obtaining method and system based on light field polar line plane image local part
US20170318280A1 (en) Depth map generation based on cluster hierarchy and multiple multiresolution camera clusters
CN101673395A (en) Image mosaic method and image mosaic device
US20150288945A1 (en) Generarting 3d images using multiresolution camera clusters
US9154765B2 (en) Image processing device and method, and stereoscopic image display device
CN113196007B (en) Camera system applied to vehicle
US9665967B2 (en) Disparity map generation including reliability estimation
CN113160068B (en) Point cloud completion method and system based on image
WO2020125637A1 (en) Stereo matching method and apparatus, and electronic device
CN104618704A (en) Method and apparatus for processing a light field image
JP5852093B2 (en) Video processing apparatus, video processing method, and program
JP2017016431A5 (en)
KR20120053536A (en) Image display device and image display method
US20130038698A1 (en) Image processing device, image processing method and imaging device
CN101754042A (en) Image reconstruction method and image reconstruction system
CN103634588A (en) Image composition method and electronic apparatus
JP6128748B2 (en) Image processing apparatus and method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20160427
