CN103945210A - Multi-camera photographing method for realizing shallow depth of field effect - Google Patents


Publication number: CN103945210A
Authority: CN (China)
Prior art keywords: depth, camera, pixel, photo, blur circle
Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Application number: CN201410195050.XA
Other languages: Chinese (zh)
Other versions: CN103945210B (en)
Inventors: 李凌霄, 谭德宝, 李青云, 林莉, 张煜, 张穗
Current assignees (the listed assignees may be inaccurate): Changjiang River Scientific Research Institute, Changjiang Water Resources Commission; Changjiang Waterway Planning Design and Research Institute
Original assignee: Changjiang River Scientific Research Institute, Changjiang Water Resources Commission
Application filed by Changjiang River Scientific Research Institute, Changjiang Water Resources Commission
Priority to CN201410195050.XA
Publication of application CN103945210A
Application granted; publication of CN103945210B
Legal status: Active


Abstract

The invention discloses a multi-camera photographing method for realizing a shallow depth-of-field effect, for use with a stereo photographing device equipped with multiple cameras. The method comprises: exposing the photographed scene with all cameras simultaneously to acquire a set of photos; performing multi-baseline photogrammetric computation on the photos taken by the multi-camera device to generate a three-dimensional model of the photographed scene; and simulating imaging of the three-dimensional model according to optical parameters supplied by the user, thereby generating a photo with a shallow depth-of-field effect. With this method, a shallow depth-of-field effect that ordinarily requires expensive and heavy professional photographic equipment can be achieved on a light, portable mobile device. Moreover, because the method uses the depth information of the photographed subject, the resulting effect is very natural and surpasses that of existing software which generates a shallow depth of field without using depth information.

Description

Multi-camera photographing method for realizing a shallow depth-of-field effect
Technical field
The present invention relates to an image-processing method, and in particular to a multi-camera photographing method for realizing a shallow depth-of-field effect.
Background art
The depth of field refers to the range of object distances within which a lens renders objects sharply; objects outside this range (both foreground and background) are rendered blurred. By controlling the depth of field, a photographer can control the sharpness of the main subject relative to the surrounding environment and thereby better express the theme of the picture. This background-blur effect is very popular.
Three factors affect the depth of field when shooting: the aperture size of the lens, the physical focal length of the lens, and the focus distance between the camera and the subject. In theory, with the other factors held constant, the larger the aperture, the longer the focal length, or the closer the focus distance, the shallower the depth of field, the blurrier the foreground and background, and the stronger the blur effect. In actual shooting, however, the achievable blur depends heavily on the specifications of the photographic equipment itself.
Different photographic devices have image sensors of different sizes and correspondingly different lens specifications. The differences show mainly in the physical size and physical focal length of the lens. In general, a lens designed for a large sensor is bulkier than a comparable lens designed for a small sensor; and even when two lenses have the same equivalent focal length (field of view), the lens designed for the large sensor has a longer physical focal length than the one designed for the small sensor. Consequently, even if two cameras shoot with lenses of the same field of view and with the same aperture size and focus distance, the camera with the large sensor obtains a shallower depth of field than the camera with the small sensor.
The portable mobile devices currently on the market that can take photographs, such as compact cameras, smartphones and tablets, have small sensors and lenses with very short physical focal lengths, making it difficult to capture pictures with a shallow depth of field. People therefore cannot achieve the desired background-blur effect with existing portable mobile devices.
Although current image-processing software can apply blur algorithms to a picture to imitate lens blur, the software cannot know which regions of the picture need blurring to make the subject stand out. An ideal blur effect therefore cannot be obtained by automatic processing; the user can only select the regions to process manually, which is thoroughly tedious and time-consuming.
Moreover, such simulated blur differs greatly from the blur produced by a real lens. The differences are mainly as follows:
First, real blur is strictly related to object distance: objects outside the depth-of-field range inevitably appear blurred to varying degrees. Because software cannot learn the object distances of the things in a photo, it cannot judge whether an object lies inside or outside the depth of field, and so cannot treat objects accordingly. In the final automatically processed result, objects that should be blurred may well remain sharp while sharp objects are blurred instead;
Second, real blur changes as a smooth gradation with object distance around the focus distance: the more an object's distance differs from the lens's focus distance, the stronger the blur. Existing software, however, tends to apply one uniform blur across the board and cannot reproduce this gradation;
Third, existing software generally blurs an image by convolution, with a kernel that is often simple and fixed, and so cannot well reproduce the distinctive blur characteristics of different lens models.
Among existing patents that use a stereo camera system to generate a shallow depth-of-field effect, for example Chinese patent "Method and device for producing shallow depth-of-field images" (application No. CN201110031413.2) uses only a dual-camera system, whose recognition capability for complex scenes is limited; in particular it cannot reliably match horizontal edges in the image. Furthermore, its blurring method ignores the occlusion of distant objects by near ones, producing the so-called "intensity leakage" phenomenon, in which the defocused image of an object that should be occluded spills over and covers the foreground.
In Chinese patent "Method and mobile terminal for realizing a depth-of-field effect in a mobile terminal" (application No. CN201310039752.4), the described "mobile terminal" cannot obtain the depth information of the image, so the region to blur cannot be computed and determined automatically by software; instead the user must specify the non-blurred region, from which the software derives the blur region by association and exclusion, and this region does not match the out-of-focus region a real camera would produce. The subsequent blurring likewise uses no depth information and is therefore uniform, so the method cannot simulate the depth-of-field effect of a real camera well.
The device in Chinese patent "Method and device for producing shallow depth-of-field images" (application No. CN102811309A) cannot obtain the depth information of the image either; instead it images the subject twice in succession with different aperture sizes, then analyzes and merges the two images to obtain a shallow depth-of-field picture. This can achieve a fairly natural effect, but the need for two successive exposures limits its scope of application. In particular, when shooting a moving subject, the two exposures cannot be fully consistent, which greatly disturbs the comparative analysis of the two images and may yield an unnatural or erroneous simulated depth of field.
Chinese patent "Shallow depth-of-field simulation method for digital pictures" (application No. 200810176590.8) uses a single camera to take multiple pictures at different focus distances, derives the depth information of the photographed scene from the differences between the photos, and blurs the image on that basis to obtain a shallow depth-of-field effect. Although this method can in theory produce graded blur, it requires a series of photos of the same scene and is time-consuming; any movement of the scene or the camera during this period severely degrades the consistency of the image sequence, causing the image comparison to fail and yielding an unnatural or erroneous simulated depth of field.
Chinese patent "Shallow depth-of-field simulation method and digital camera" (application No. 201110133927.9) uses only a single camera, so stereophotogrammetric computation is impossible and the exact positions of objects in the picture cannot be determined; rendering a blur effect from depth information is then out of the question.
In summary, to achieve the desired blur with capture equipment, existing equipment is either too heavy and expensive, or light and affordable but incapable of blur; to achieve it by image processing, existing software is laborious to operate and cannot produce a natural result, and the existing patents each have shortcomings of one kind or another. The present invention achieves a natural depth-of-field effect with a light, low-cost shooting device.
Summary of the invention
The invention provides a multi-camera photographing method for realizing a shallow depth-of-field effect, capable of obtaining the three-dimensional information of the photographed scene and generating an image with a natural shallow depth-of-field effect.
A multi-camera photographing method for realizing a shallow depth-of-field effect, characterized by comprising the following steps:
Step 1: provide a photographing device comprising at least three cameras, the optical axes of the at least three cameras being parallel to one another and their imaging planes coplanar, the focus of each camera being set at its hyperfocal distance;
Step 2: using the at least three cameras, photograph the scene simultaneously to obtain a group of photos;
Step 3: perform dense image matching on the photos to extract the image-point coordinates, in the different photos, of all object points of the scene, i.e. the coordinates of the homologous points;
Step 4: perform multi-photo forward intersection using the calibration information of the photographing device and the homologous-point coordinates obtained, compute the three-dimensional coordinates of the object points corresponding to all homologous points, and generate point-cloud data;
Step 5: using the point-cloud data and the texture information of the photos, compute a digital surface model of the photographed scene;
Step 6: project the digital surface model with a new projection center and projection plane, generating a projected image on the projection plane;
Step 7: compute the projection distance of every pixel in the projected image and store it as that pixel's depth value;
Step 8: designate a pixel of the projected image as the simulated focus point of the photo, and read that pixel's depth value from the depth information obtained in step 7 as the focus distance of the photo;
Step 9: from the shooting f-number, the lens focal length, the permissible circle-of-confusion diameter and the focus distance determined in step 8, compute the depth-of-field range, and according to that range divide all pixels of the projected image into three groups: the background group, the in-focus group and the foreground group;
Step 10: from the shooting f-number, the lens focal length, the focus distance and the picture diagonal length, compute the relative circle-of-confusion size of each pixel, until it has been computed for all pixels;
Step 11: for all pixels of the background group, in order of depth value from far to near, draw and superimpose each pixel's circle-of-confusion spot on a blank image, the spot size agreeing with the relative circle-of-confusion size computed in step 10;
Step 12: for all pixels of the in-focus group, in order of depth value from far to near, draw and superimpose each pixel on the image generated in step 11, replacing any RGB information already present at the pixel's drawing position;
Step 13: for all pixels of the foreground group, in order of depth value from far to near, draw and superimpose each pixel's circle-of-confusion spot on the image generated in step 12, the spot size agreeing with the relative circle-of-confusion size computed in step 10;
Step 14: store this image and present it to the user as the final result.
In the multi-camera photographing method described above, the picture diagonal length, the permissible circle-of-confusion diameter and the lens focal length are chosen by the user from a series of preset real camera-body and lens models, and the shooting f-number is specified by the user.
In the multi-camera photographing method described above, the front depth of field is computed as: front DoF = (f-number × permissible CoC diameter × focus distance²) / (focal length² + f-number × permissible CoC diameter × focus distance), and the rear depth of field as: rear DoF = (f-number × permissible CoC diameter × focus distance²) / (focal length² - f-number × permissible CoC diameter × focus distance).
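The two formulas above can be sketched directly in code; a minimal example, assuming all lengths are in millimetres (the function and parameter names are illustrative, not from the patent):

```python
def depth_of_field(f_number, coc_diameter, focus_dist, focal_len):
    """Front/rear depth of field per the formulas above (all lengths in mm)."""
    k = f_number * coc_diameter * focus_dist
    front = k * focus_dist / (focal_len ** 2 + k)
    rear = k * focus_dist / (focal_len ** 2 - k)  # valid only while focal_len**2 > k
    return front, rear

# e.g. simulating a 50 mm lens at f/2.8, CoC 0.03 mm, focused at 2 m (2000 mm)
front, rear = depth_of_field(2.8, 0.03, 2000, 50)
```

With these numbers the sharp zone extends roughly 126 mm in front of and 144 mm behind the focus plane; note that the rear term is only meaningful while focal length² exceeds f-number × CoC × focus distance, i.e. when focusing closer than the hyperfocal distance.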
In the multi-camera photographing method described above, the position of the new projection center in step 6 is obtained automatically by software from the camera calibration information as the mean of the three cameras' projection centers, and the new projection plane is coplanar with the plane of the three cameras' sensors.
The present invention realizes, on a light portable mobile device, the shallow depth-of-field effect that ordinarily can only be achieved with more expensive and heavier professional photographic equipment; and because the depth information of the photographed subject is used, the effect is very natural, better than that of existing software which generates a shallow depth of field without using depth information. The invention also solves the problems of horizontal-edge matching and intensity leakage: the multi-camera photographing device obtains the depth information of the image, and depth-of-field rendering based on that information not only distinguishes the blurred and sharp regions automatically and accurately, but also simulates the depth-of-field effect of a real camera.
Brief description of the drawings
Fig. 1 is a structural schematic diagram of one embodiment of the multi-camera photographing device of the present invention, in which the device is a photographing accessory of a mobile device;
Fig. 2 is a structural schematic diagram of another embodiment of the multi-camera photographing device of the present invention, in which the device is integrated into the mobile device itself;
Fig. 3 (a) is a shooting schematic diagram of the multi-camera photographing device of the present invention;
Fig. 3 (b) is a schematic diagram of the geometric relationships during shooting with the multi-camera photographing device of the present invention;
Fig. 4 is a schematic diagram of the projection of the digital surface model onto the picture plane in the present invention;
Fig. 5 is a geometric-optics schematic diagram of lens imaging in the present invention;
Fig. 6 is a circuit structure schematic diagram of the multi-camera photographing device of the present invention.
In the figures: 101, 201, 301 - first camera; 102, 202, 302 - second camera; 103, 203, 303 - third camera; 104, 204 - mobile device; 105 - mobile device data interface; 106 - photographing device data pin; 107 - data pin interface; 108, 205 - first flash; 109, 206 - second flash; 311 - first photo; 312 - second photo; 313 - third photo; 320 - object point; 321, 322, 323 - homologous points; 331 - first imaging ray; 332 - second imaging ray; 333 - third imaging ray; 400 - digital surface model; 401 - first photographed scene; 402 - second photographed scene; 403 - third photographed scene; 410 - projection center; 411 - first projected beam; 412 - second projected beam; 413 - third projected beam; 420 - projection plane; 421 - projection of the first photographed scene; 422 - projection of the second photographed scene; 423 - projection of the third photographed scene; 500 - processor; 600 - memory module.
Embodiment
The technical solutions of the present invention are described below clearly and completely with reference to the accompanying drawings.
To achieve the above object, the multi-camera photographing device of the present invention for realizing a shallow depth-of-field effect is equipped with at least three cameras and a processor connected to the at least three cameras; the processor performs the photogrammetric and image-processing functions on the photos taken by the cameras.
The multi-camera photographing device of the present invention can be either an external camera accessory for a mobile device, connected through the data interface of a phone, tablet or similar device and operated as a supplement to the device's own camera, or a complete replacement for the single camera of a conventional mobile device, integrated into the device as its own camera. Both schemes fully realize the functions proposed by the present invention; they are elaborated in turn below.
In the first scheme, the photographing device serves as an external camera accessory of the mobile device. As shown in Fig. 1, the mobile device 104 has a first camera 101, and the photographing device itself carries two cameras, a second camera 102 and a third camera 103, mounted some distance apart at the two ends of the device body. In use, the photographing device is connected and fixed to the mobile device data interface 105 of the mobile device 104 through the photographing device data pin 106; to suit the data interfaces of different mobile device models, the data pin interface 107 can be swapped for the matching model so that the device docks properly. The two cameras of the photographing device (102, 103) and the mobile device's own camera (101) are then spaced apart in a triangular arrangement and together form a three-camera stereo shooting system. Ideally, the optical axes of the cameras are parallel to one another and their imaging planes are coplanar. The photographing device is equipped with two flashes (a first flash 108 and a second flash 109) for filling light on the subject. It should be noted that when the photographing device serves as an external camera accessory of a mobile device, it carries a minimum of two cameras (counting the mobile device's own camera, the total is no fewer than three), but a larger number of cameras can also be integrated; two cameras are used here only to illustrate the principle, and no upper limit is placed on the number of cameras on the device.
As shown in Fig. 6, the photographing device also comprises a processor connected to the first camera 101, the second camera 102 and the third camera 103, and a memory module 600 connected to the processor.
In the second scheme, the photographing device is integrated with the mobile device, and should then carry at least three cameras. As shown in Fig. 2, the cameras (201, 202, 203) are spaced apart on the mobile device 204 in a triangular arrangement, forming a three-camera stereo shooting system. Ideally, the optical axes of the cameras are parallel to one another and their imaging planes are coplanar. Two flashes (a first flash 205 and a second flash 206) are mounted on the surface of the mobile device 204 between the cameras, for filling light on the subject. It should be noted that the number of cameras integrated with the mobile device is a minimum of three, but a larger number can also be integrated; three cameras are used here only to illustrate the principle, and no upper limit is placed on the number of cameras on the device. The photographing device also comprises a processor connected to the first camera 201, the second camera 202 and the third camera 203, and a memory module connected to the processor.
It should be pointed out that, apart from certain differences in hardware configuration, the two schemes above are identical in shooting workflow, computing principles and other respects.
The cameras on the photographing device are similar in specification to the camera modules fitted to ordinary mobile phones, consisting of a lens, an image sensor, an integrated circuit board and other components. The specification parameters of all cameras (lens construction, focal length, sensor size, pixel count, etc.) are identical. The characteristics of such a camera module are a small sensor, a short physical lens focal length and a compact overall size; with the focus set at the hyperfocal distance, it can take photos with a very large depth of field. The cameras support synchronized shooting: under unified control they expose the same scene at the same instant, and each camera records one photo, so that a series of photos of the same scene with parallax is obtained in a single shot.
Before use, the photographing device requires lens calibration to determine the relative positions of the cameras and the optical parameters of each camera (focal length, principal point position, distortion coefficients, etc.). Calibration may be completed at the production stage or performed by the user; no restriction is placed here.
As shown in Fig. 3 (a) and Fig. 3 (b), at shooting time the focus of each camera (first camera 301, second camera 302, third camera 303) is set at the hyperfocal distance; the lens can then be regarded as focused at infinity, and everything from half the hyperfocal distance out to infinity is sharp. Each camera (301, 302, 303) photographs the scene and obtains a photo (311, 312, 313); these photos (311, 312, 313) are stored for use in the subsequent stages of the shallow depth-of-field pipeline.
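The hyperfocal distance itself follows from the same quantities used elsewhere in this document. The patent does not spell out the formula, so the sketch below uses the standard textbook expression H = f²/(N·c) + f as background (units assumed to be millimetres):

```python
def hyperfocal_distance(f_number, coc_diameter, focal_len):
    """Standard hyperfocal distance H = f^2 / (N * c) + f (all lengths in mm).
    Focusing at H makes everything from H/2 to infinity acceptably sharp."""
    return focal_len ** 2 / (f_number * coc_diameter) + focal_len

# e.g. a phone-sized module: 4 mm lens at f/2.0 with an assumed 0.005 mm CoC
H = hyperfocal_distance(2.0, 0.005, 4)  # about 1.6 m
```

This illustrates why small camera modules focused at the hyperfocal distance render almost everything sharp: the entire scene beyond roughly 0.8 m falls within the depth of field.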
As shown in Fig. 6, in one embodiment of the present invention the photographing device comprises a first camera 301, a second camera 302 and a third camera 303, a processor 500 connected to the first camera 301, second camera 302 and third camera 303, and a memory module 600 connected to the processor 500. The first camera 301, second camera 302 and third camera 303 photograph the scene simultaneously, and the resulting photos are stored in the memory module 600.
The processing of the photos by the processor 500 of the present invention is divided into two parts overall. Part I computes the three-dimensional information of all pixels and generates a three-dimensional model. Part II generates the image with the shallow depth-of-field effect.
The concrete steps of Part I (three-dimensional model generation) are:
1. The processor 500 reads from the memory module 600 the photos (311, 312, 313) taken by the first camera 301, the second camera 302 and the third camera 303;
2. The processor 500 performs dense image matching on the photos and extracts the image-point coordinates, in the different photos (311, 312, 313), of an object point 320 of the scene, i.e. the coordinates of the homologous points (321, 322, 323). It should be noted here that conventional dual-camera stereo systems mostly use two cameras placed side by side horizontally, so a horizontal edge in the scene is parallel to the photographic baseline and its two images in the photos coincide; the image-matching program then lacks the information needed to distinguish and identify homologous points on the horizontal edge, which is quite unfavorable for image matching. The present invention performs image matching on photos taken by a multi-baseline stereo camera system of at least three non-collinearly arranged cameras, so even when the photos contain a straight edge parallel to one photographic baseline, the program obtains sufficient constraints from the other baselines, strengthening the reliability of image matching and yielding better matching results;
3. Step 2 is repeated until the coordinates of all homologous points in the photos have been extracted;
4. Since the lines joining each camera's projection center, the corresponding homologous-point position and the object point 320, i.e. the three imaging rays (331, 332, 333), necessarily intersect at the object point 320, performing multi-photo forward intersection from the calibration information of the photographing device and the homologous-point coordinates (321, 322, 323) obtained in the previous step yields the three-dimensional coordinates (X, Y, Z) of the object point corresponding to each set of homologous points;
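With calibrated cameras, the forward intersection in step 4 amounts to finding the point closest to several three-dimensional rays. A minimal least-squares sketch follows (illustrative only, not the patent's actual algorithm; names are assumptions):

```python
import numpy as np

def forward_intersection(centers, directions):
    """Least-squares intersection of rays (camera center + viewing direction):
    minimizes the summed squared distance from the point to every ray."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for c, d in zip(centers, directions):
        d = np.asarray(d, float) / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projection onto the plane normal to the ray
        A += P
        b += P @ np.asarray(c, float)
    return np.linalg.solve(A, b)
```

For three cameras arranged on a triangle all viewing the same object point, noise-free rays recover the point (X, Y, Z) exactly; with matching noise, the result is the point of minimum summed distance to the rays.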
5. Step 4 is repeated until the three-dimensional coordinates of the object points corresponding to all homologous points have been computed, generating the point-cloud data;
6. Using the point-cloud data and the texture information of the photos, the digital surface model 400 of the photographed scene is computed (see Fig. 4); this digital surface model 400 is the three-dimensional representation of the photographed scene.
The concrete steps of Part II (shallow depth-of-field image generation) are:
1. The digital surface model 400 is projected through a new projection center 410 onto a new projection plane 420. The position of the projection center 410 is obtained automatically by software from the camera calibration information as the mean of the three cameras' projection centers, and the new projection plane 420 is coplanar with the plane of the three cameras' sensors. A projected image is thus generated on the projection plane 420; for example, the photographed scenes (401, 402, 403) on the digital surface model 400 map to the scene projections (421, 422, 423) in the projected image generated on the projection plane 420.
2. The projection distance of every pixel in the projected image (the normal distance from the digital surface model 400 to the projection plane 420) is computed and stored as the depth information of that pixel. In other words, besides its RGB values, each pixel in the projected image also carries a corresponding depth value.
3. The user chooses, from a series of preset real camera-body and lens models, the body and lens to be used for simulating the depth-of-field effect; the body's picture diagonal length, the permissible circle-of-confusion diameter and the lens focal length are then read in, and the user specifies the shooting f-number;
4. The user designates a pixel of the projected image generated in (1) as the simulated focus point of the photo; that pixel's depth value is read from the depth information obtained in (2) and used as the focus distance of the photo;
5. Using the depth-of-field formulas (front DoF = (f-number × permissible CoC diameter × focus distance²) / (focal length² + f-number × permissible CoC diameter × focus distance); rear DoF = (f-number × permissible CoC diameter × focus distance²) / (focal length² - f-number × permissible CoC diameter × focus distance)) with the shooting f-number, lens focal length, focus distance and permissible circle-of-confusion diameter obtained in (3) and (4), the depth-of-field range of the whole photo under these shooting parameters is computed, and according to this range all pixels of the image obtained in (1) are divided into three groups: the background group, the in-focus group and the foreground group. The classification rule is: if a pixel's depth value is less than or equal to the foreground distance (i.e. focus distance minus front DoF), the pixel is assigned to the foreground group; if its depth value is greater than the foreground distance and less than the background distance (i.e. focus distance plus rear DoF), the pixel is assigned to the in-focus group; if its depth value is greater than or equal to the background distance, the pixel is assigned to the background group. The geometric relationships between the focus distance, the foreground distance and the background distance are shown in Fig. 5;
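The classification rule above can be sketched as a small function (names illustrative; depths in the same length unit as the depth-of-field values):

```python
def classify_pixel(depth, focus_dist, front_dof, rear_dof):
    """Assign a pixel to the foreground, in-focus or background group
    by comparing its depth value to the foreground and background distances."""
    if depth <= focus_dist - front_dof:   # at or before the foreground distance
        return "foreground"
    if depth >= focus_dist + rear_dof:    # at or beyond the background distance
        return "background"
    return "in_focus"
```

Applied once per pixel of the projected image, this yields the three groups that steps 7 to 9 render separately.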
6. According to the circle-of-confusion formula (for example the approximation: relative circle-of-confusion diameter ≈ focal length² / (f-number × frame diagonal length × focus distance)), substitute the four parameters obtained in (3) and (4), namely shooting f-number, lens focal length, focus distance and frame diagonal length, and calculate the relative circle-of-confusion size (the degree of blur) of each pixel, until the relative circle-of-confusion size of every pixel has been calculated and saved. The optical principle is illustrated in Fig. 5;
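As a hedged sketch of the per-pixel blur computation in step 6: the standard thin-lens circle-of-confusion relation below reduces, for points far behind the focus plane, to the approximate formula quoted above. The function name and millimetre units are our assumptions, not the patent's:

```python
def relative_coc(f_number, focal, focus_dist, diagonal, depth):
    # Thin-lens circle of confusion for a point at `depth` when the lens is
    # focused at `focus_dist`, expressed as a fraction of the frame diagonal.
    # As depth -> infinity this tends to the step-6 approximation
    # focal**2 / (f_number * diagonal * focus_dist), up to the small
    # difference between S and S - f.
    if depth == focus_dist:
        return 0.0  # exactly on the focus plane: no blur
    coc_mm = (focal ** 2 / f_number) * abs(depth - focus_dist) / (depth * (focus_dist - focal))
    return coc_mm / diagonal
```

The blur grows with distance from the focus plane on both sides, which is what drives the grouping and the spot sizes used in steps 7 to 9.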
7. For all pixels of the background group, in order of depth value from far to near, successively draw and superpose the blur-circle spot of each pixel on a blank image, the spot size being the relative circle-of-confusion size calculated in (6). Continue until every pixel of the background group has been processed;
8. For all pixels of the in-focus group, in order of depth value from far to near, successively draw and superpose each pixel on the image generated in (7); if RGB information already exists at a pixel's drawing position, it is replaced. Continue until every pixel of the in-focus group has been processed;
9. For all pixels of the foreground group, in order of depth value from far to near, successively draw and superpose the blur-circle spot of each pixel on the image generated in (8), the spot size being the relative circle-of-confusion size calculated in (6). Continue until every pixel of the foreground group has been processed;
It should be noted that the shape of the spots drawn in (7) and (9) is not restricted; in an actual implementation, different spot shapes may be used according to the user's demand, for example circular, annular, heart-shaped or crescent spots, or vignetted variants of these shapes. Moreover, because the in-focus pixels cover the spots of the background-group pixels, the conflict between background spots and in-focus pixels, that is, the appearance of "intensity leakage", is eliminated in step 8.
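The three painter-style passes of steps 7 to 9 (background spots, then sharp in-focus pixels, then foreground spots, each drawn far to near) can be sketched as follows. The tuple layout, the additive circular splat and the intensity normalisation are illustrative assumptions; as noted above, the patent leaves the spot shape and blending free:

```python
import numpy as np

def render_shallow_dof(pixels, h, w):
    # `pixels`: iterable of (y, x, depth, rgb, coc_px, group) tuples, where
    # `group` is "background", "in-focus" or "foreground" and `coc_px` is the
    # blur-circle radius in pixels (hypothetical layout, not from the patent).
    canvas = np.zeros((h, w, 3))
    yy, xx = np.ogrid[:h, :w]

    def splat(y, x, rgb, radius):
        # Superpose a filled circular spot centred on (y, x), spreading the
        # pixel's intensity over the spot area (steps 7 and 9).
        mask = (yy - y) ** 2 + (xx - x) ** 2 <= radius ** 2
        canvas[mask] += np.asarray(rgb) / mask.sum()

    for group in ("background", "in-focus", "foreground"):
        # Within each group, paint in depth order from far to near, so that
        # nearer pixels land on top of farther ones.
        for y, x, depth, rgb, coc_px, g in sorted(
                (p for p in pixels if p[5] == group), key=lambda p: -p[2]):
            if group == "in-focus":
                canvas[y, x] = rgb        # step 8: sharp pixel replaces spots
            else:
                splat(y, x, rgb, coc_px)  # steps 7 and 9: blurred spot
    return canvas
```

Because the in-focus pass overwrites whatever background spots landed at its positions, the "intensity leakage" conflict is avoided by construction.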
10. Store this image in memory module 600 and present it to the user as the final result. Because all foreground and background pixels are rendered as superposed blur spots while the in-focus pixels are drawn directly from the original sharp image, the whole image exhibits a shallow depth-of-field look: sharp within the depth of field and softly defocused outside it. If the user is not satisfied with the bokeh effect, the relevant optical parameters can be re-selected and the above steps repeated to generate a new image, until a satisfactory result is obtained.

Claims (4)

1. A multi-camera photographing method for realizing a shallow depth-of-field effect, characterized by comprising the following steps:
Step 1: providing a photographing device comprising at least three cameras, the optical axes of the at least three cameras being parallel to one another and their imaging planes coplanar, the focus distance of each camera being set to the hyperfocal distance;
Step 2: using the at least three cameras to photograph the scene simultaneously, thereby obtaining a group of photographs;
Step 3: performing dense image matching on the obtained photographs to extract the image coordinates of all object points in the different photographs, that is, the coordinates of the homologous points of the photographed scene;
Step 4: performing multi-photo forward intersection according to the calibration information of the photographing device and the obtained homologous-point coordinates, obtaining the three-dimensional coordinates of the object points corresponding to all homologous points, and generating point-cloud data;
Step 5: using the point-cloud data and the texture information of the photographs to calculate and generate a digital surface model of the photographed scene;
Step 6: projecting the digital surface model with a new projection centre and a new projection plane, and generating a projection print on the projection plane;
Step 7: calculating the projection distance of every pixel in the projection print and storing it as the pixel's depth information;
Step 8: designating a pixel in the projection print as the focus point of the simulated photograph, reading the depth value of this pixel from the depth information obtained in step 7, and using it as the focus distance of the photograph;
Step 9: calculating the depth-of-field range from the shooting f-number, the lens focal length, the permissible circle-of-confusion diameter and the focus distance determined in step 8, and dividing all pixels of the projection print into three groups according to this range: a background group, an in-focus group and a foreground group;
Step 10: from the four parameters shooting f-number, lens focal length, focus distance and frame diagonal length, calculating the relative circle-of-confusion size of each pixel until the relative circle-of-confusion size of every pixel has been calculated;
Step 11: for all pixels of the background group, in order of depth value from far to near, successively drawing and superposing the blur-circle spot of each pixel on a blank image, the spot size being the relative circle-of-confusion size calculated in step 10;
Step 12: for all pixels of the in-focus group, in order of depth value from far to near, successively drawing and superposing each pixel on the image generated in step 11, replacing any RGB information already present at the pixel's drawing position;
Step 13: for all pixels of the foreground group, in order of depth value from far to near, successively drawing and superposing the blur-circle spot of each pixel on the image generated in step 12, the spot size being the relative circle-of-confusion size calculated in step 10;
Step 14: storing this image and presenting it to the user as the final result.
2. The multi-camera photographing method for realizing a shallow depth-of-field effect according to claim 1, characterized in that the frame diagonal length, the permissible circle-of-confusion diameter and the lens focal length are chosen by the user from a series of preset real camera body and lens models, and the shooting f-number is specified by the user.
3. The multi-camera photographing method for realizing a shallow depth-of-field effect according to claim 1, characterized in that the front depth-of-field formula is: front depth of field = (f-number × permissible circle-of-confusion diameter × focus distance²) / (focal length² + f-number × permissible circle-of-confusion diameter × focus distance), and the rear depth-of-field formula is: rear depth of field = (f-number × permissible circle-of-confusion diameter × focus distance²) / (focal length² − f-number × permissible circle-of-confusion diameter × focus distance).
4. The multi-camera photographing method for realizing a shallow depth-of-field effect according to claim 1, characterized in that the position of the new projection centre in step 6 is obtained automatically by software, from the calibration information of the cameras, as the mean of the three cameras' projection centres, and the new projection plane is coplanar with the plane of the three cameras' photosensitive elements.
CN201410195050.XA 2014-05-09 2014-05-09 Multi-camera photographing method for realizing shallow depth-of-field effect Active CN103945210B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410195050.XA CN103945210B (en) Multi-camera photographing method for realizing shallow depth-of-field effect

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410195050.XA CN103945210B (en) Multi-camera photographing method for realizing shallow depth-of-field effect

Publications (2)

Publication Number Publication Date
CN103945210A true CN103945210A (en) 2014-07-23
CN103945210B CN103945210B (en) 2015-08-05

Family

ID=51192658

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410195050.XA Active CN103945210B (en) Multi-camera photographing method for realizing shallow depth-of-field effect

Country Status (1)

Country Link
CN (1) CN103945210B (en)


Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101472064A (en) * 2007-12-25 2009-07-01 鸿富锦精密工业(深圳)有限公司 Filming system and method for processing scene depth
TW201023627A (en) * 2008-12-12 2010-06-16 Altek Corp Method for stimulating the depth of field of an image
CN101764925A (en) * 2008-12-25 2010-06-30 华晶科技股份有限公司 Simulation method for shallow field depth of digital image
US20100266207A1 (en) * 2009-04-21 2010-10-21 ArcSoft ( Hangzhou) Multimedia Technology Co., Ltd Focus enhancing method for portrait in digital image
CN101998053A (en) * 2009-08-13 2011-03-30 富士胶片株式会社 Image processing method, image processing apparatus, computer readable medium, and imaging apparatus
CN102457740A (en) * 2010-10-14 2012-05-16 华晶科技股份有限公司 Method and device for generating shallow depth-of-field image
CN102158648A (en) * 2011-01-27 2011-08-17 明基电通有限公司 Image capturing device and image processing method
CN102801908A (en) * 2011-05-23 2012-11-28 联咏科技股份有限公司 Shallow depth-of-field simulation method and digital camera
TW201248549A (en) * 2011-05-31 2012-12-01 Altek Corp Method and apparatus for generating image with shallow depth of field
CN102811309A (en) * 2011-05-31 2012-12-05 华晶科技股份有限公司 Method and device for generating shallow depth-of-field image
CN103002212A (en) * 2011-09-15 2013-03-27 索尼公司 Image processor, image processing method, and computer readable medium
CN103095978A (en) * 2011-11-03 2013-05-08 华晶科技股份有限公司 Handling method for generating image with blurred background and image capturing device
US20140003662A1 (en) * 2011-12-16 2014-01-02 Peng Wang Reduced image quality for video data background regions
CN103516999A (en) * 2012-06-20 2014-01-15 联发科技股份有限公司 Image processing method and image processing apparatus
US20140009585A1 (en) * 2012-07-03 2014-01-09 Woodman Labs, Inc. Image blur based on 3d depth information

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105335958A (en) * 2014-08-15 2016-02-17 格科微电子(上海)有限公司 Processing method and device for light supplement of flash lamp
CN105335958B (en) * 2014-08-15 2018-12-28 格科微电子(上海)有限公司 The processing method and equipment of flash lighting
CN106576160B (en) * 2014-09-03 2020-05-05 英特尔公司 Imaging architecture for depth camera mode with mode switching
CN104333700A (en) * 2014-11-28 2015-02-04 广东欧珀移动通信有限公司 Image blurring method and image blurring device
EP3041230A1 (en) * 2015-01-05 2016-07-06 BOE Technology Group Co., Ltd. Image acquisition device and image processing method and system
US9712808B2 (en) 2015-01-05 2017-07-18 Boe Technology Group Co., Ltd. Image acquisition device and image processing method and system
CN106296801A (en) * 2015-06-12 2017-01-04 联想(北京)有限公司 A kind of method setting up object three-dimensional image model and electronic equipment
CN105163042A (en) * 2015-08-03 2015-12-16 努比亚技术有限公司 Device and method for virtually processing depth image
CN105262952A (en) * 2015-10-23 2016-01-20 努比亚技术有限公司 Mobile terminal and image processing method thereof
CN107454377A (en) * 2016-05-31 2017-12-08 深圳市微付充科技有限公司 Algorithm and system for three-dimensional imaging using a camera
CN107454377B (en) * 2016-05-31 2019-08-02 深圳市微付充科技有限公司 Algorithm and system for three-dimensional imaging using a camera
CN106954012A (en) * 2017-03-29 2017-07-14 武汉嫦娥医学抗衰机器人股份有限公司 High-definition multi-camera panoramic stereo imaging system and method
CN108954722A (en) * 2017-05-18 2018-12-07 奥克斯空调股份有限公司 Air conditioner with depth-of-field recognition function and air supply method applied to the air conditioner
CN108954722B (en) * 2017-05-18 2020-11-06 奥克斯空调股份有限公司 Air conditioner with depth-of-field recognition function and air supply method applied to air conditioner
WO2018214077A1 (en) * 2017-05-24 2018-11-29 深圳市大疆创新科技有限公司 Photographing method and apparatus, and image processing method and apparatus
CN107454332A (en) * 2017-08-28 2017-12-08 厦门美图之家科技有限公司 Image processing method, device and electronic equipment
CN107734091A (en) * 2017-10-12 2018-02-23 上海青橙实业有限公司 Projection-capable mobile terminal
CN107846556A (en) * 2017-11-30 2018-03-27 广东欧珀移动通信有限公司 Imaging method, device, mobile terminal and storage medium
CN108156378A (en) * 2017-12-27 2018-06-12 努比亚技术有限公司 Photographic method, mobile terminal and computer readable storage medium
CN110889410A (en) * 2018-09-11 2020-03-17 苹果公司 Robust use of semantic segmentation in shallow depth of field rendering
CN110889410B (en) * 2018-09-11 2023-10-03 苹果公司 Robust use of semantic segmentation in shallow depth of view rendering
CN110880161A (en) * 2019-11-21 2020-03-13 大庆思特传媒科技有限公司 Depth image splicing and fusing method and system for multi-host multi-depth camera
CN110880161B (en) * 2019-11-21 2023-05-09 大庆思特传媒科技有限公司 Depth image stitching and fusion method and system for multiple hosts and multiple depth cameras
CN110969675B (en) * 2019-11-28 2023-05-05 成都品果科技有限公司 Method for simulating blurring of different-shape diaphragms of camera
CN110969675A (en) * 2019-11-28 2020-04-07 成都品果科技有限公司 Method for simulating blurring of different-shape apertures of camera
CN112985272A (en) * 2021-03-05 2021-06-18 钟庆生 VR picture viewing method and three-dimensional measurement method of stereograph
CN115334233A (en) * 2021-05-10 2022-11-11 联发科技股份有限公司 Method and system for generating sliding zoom effect
CN115334233B (en) * 2021-05-10 2024-03-19 联发科技股份有限公司 Method and system for generating sliding zoom effect
CN114051103A (en) * 2021-11-11 2022-02-15 陕西师范大学 Classroom-based camera combination method and system for clearly capturing students' expressions

Also Published As

Publication number Publication date
CN103945210B (en) 2015-08-05

Similar Documents

Publication Publication Date Title
CN103945210B (en) Multi-camera photographing method for realizing shallow depth-of-field effect
US10015469B2 (en) Image blur based on 3D depth information
TWI668997B (en) Image device for generating panorama depth images and related image device
EP3134868B1 (en) Generation and use of a 3d radon image
CN107113415A Method and apparatus for multi-technology depth map acquisition and fusion
WO2020125797A1 (en) Terminal, photographing method, and storage medium
CN107911621A Panoramic image shooting method, terminal device and storage medium
CN112822402B (en) Image shooting method and device, electronic equipment and readable storage medium
CN107657656B Homologous point matching and three-dimensional reconstruction method, system and photometric stereo camera terminal
CN107800979A (en) High dynamic range video image pickup method and filming apparatus
CN108399634B (en) RGB-D data generation method and device based on cloud computing
US20180338129A1 (en) Dual-camera image capture system
CN105190229A (en) Three-dimensional shape measurement device, three-dimensional shape measurement method, and three-dimensional shape measurement program
CN104184935A (en) Image shooting device and method
CN106023073A (en) Image splicing system
CN104915919B (en) Image processing apparatus and image processing method
CN109600556B (en) High-quality precise panoramic imaging system and method based on single lens reflex
CN105827932A (en) Image synthesis method and mobile terminal
CN109788199B (en) Focusing method suitable for terminal with double cameras
CN109842791B (en) Image processing method and device
Popovic et al. Design and implementation of real-time multi-sensor vision systems
CN109166176B (en) Three-dimensional face image generation method and device
Gava et al. Dense scene reconstruction from spherical light fields
Kang et al. Fast dense 3D reconstruction using an adaptive multiscale discrete-continuous variational method
CN115623313A (en) Image processing method, image processing apparatus, electronic device, and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant