CN103973963B - Image acquisition device and image processing method thereof - Google Patents


Info

Publication number
CN103973963B
CN103973963B
Authority
CN
China
Prior art keywords
image
gradient
pixel
those
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310260044.3A
Other languages
Chinese (zh)
Other versions
CN103973963A (en)
Inventor
庄哲纶
周宏隆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Altek Semiconductor Corp
Original Assignee
Altek Semiconductor Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Altek Semiconductor Corp filed Critical Altek Semiconductor Corp
Publication of CN103973963A
Application granted
Publication of CN103973963B


Abstract

The invention provides an image acquisition device and an image processing method thereof. The image processing method includes the following steps. A first image is acquired at a first focal length, and a second image is acquired at a second focal length. A geometric correction procedure is performed on the second image to generate a displacement-corrected second image. A gradient operation is performed on each pixel of the first image to generate a plurality of first gradient values, and on each pixel of the displacement-corrected second image to generate a plurality of second gradient values. Each first gradient value is compared with the corresponding second gradient value to generate a plurality of first pixel comparison results, and a first parameter map is generated according to the first pixel comparison results. A composite image is generated according to the first parameter map and the first image, and an output image is generated at least according to the composite image.

Description

Image acquisition device and image processing method thereof
Technical field
The invention relates to an image acquisition device and an image processing method thereof, and in particular to an image acquisition device and image processing method that blend images according to computed pixel gradient values.
Background technology
With advances in optical technology, digital cameras with adjustable apertures and shutters, and even interchangeable lenses, have gradually become widespread, and their functions have grown increasingly diverse. Besides providing good image quality, the accuracy and speed of focusing are factors consumers consider when purchasing a product. However, for existing optical systems, because multiple objects in a three-dimensional scene lie at different distances, a single shot cannot capture an image that is sharp across the full depth of the scene. That is, limited by the optical characteristics of the lens, a digital camera can focus on only one depth per capture, so objects at other depths appear blurred in the image.
Existing methods for producing a full-depth-of-field image mostly combine multiple images captured under a variety of photographic conditions. Multiple different images of the same scene are captured by varying one or more photographic parameters, and these images are then combined into a single sharp image by a sharpness discrimination method. Capturing under such varied conditions to synthesize a full-depth image requires the image acquisition device to remain fixed. In general, the user often uses a tripod to hold the device still, to ensure there is no obvious geometric distortion between the captured images. Moreover, during capture, any movement of objects in the scene must also be avoided.
On the other hand, to highlight the subject of a captured image, the so-called bokeh shooting technique is typically used. Bokeh means that, in an image with a shallow depth of field, regions outside the depth of field are rendered with a soft, progressively blurred effect. In general, the bokeh a lens can produce is limited. Obtaining good bokeh usually requires satisfying several important conditions at once: a large aperture and a long focal length. In other words, achieving bokeh relies on a large-aperture lens to strengthen the blurring of distant objects, so that the sharply imaged subject stands out from the background. However, large-aperture lenses are bulky and expensive, and not every consumer camera can be equipped with one.
In summary, existing methods for generating full-depth or bokeh images tend to produce processed images whose depth of field is discontinuous or unnatural. In addition, the operational restrictions during capture are inconvenient for users: the total shooting time is considerably long, or the process is complicated, and the final result may still be unsatisfactory.
Summary of the invention
In view of this, the invention provides an image acquisition device and an image processing method thereof, which can determine the subject in an image from images captured at different focal lengths, and thereby produce an image in which the subject is sharp and the bokeh effect looks natural. On the other hand, the image processing method of the invention can also use images captured at different focal lengths to avoid ghosting when producing a full-depth image.
The invention proposes an image processing method suitable for an image acquisition device, including the following steps. A first image is acquired at a first focal length, and a second image is acquired at a second focal length, wherein the first focal length focuses on at least one subject. A geometric correction procedure is performed on the second image to generate a displacement-corrected second image. A gradient operation is performed on each pixel of the first image to generate a plurality of first gradient values, and on each pixel of the displacement-corrected second image to generate a plurality of second gradient values. Each first gradient value is compared with the corresponding second gradient value to generate a plurality of first pixel comparison results, and a first parameter map is generated according to these comparison results. A composite image is generated according to the first parameter map and the first image, and an output image is generated at least according to the composite image.
In an embodiment of the invention, the step of generating the output image at least according to the composite image includes the following. A third image is acquired at a third focal length. The geometric correction procedure is performed on the third image to generate a displacement-corrected third image. The gradient operation is performed on each pixel of the composite image to generate a plurality of third gradient values, and on each pixel of the displacement-corrected third image to generate a plurality of fourth gradient values. Each third gradient value is compared with the corresponding fourth gradient value to generate a plurality of second pixel comparison results, and a second parameter map is generated according to these second pixel comparison results. According to the second parameter map, the displacement-corrected third image and the composite image are blended to produce the output image.
In an embodiment of the invention, the step of performing the geometric correction procedure on the second image to generate the displacement-corrected second image includes the following. The displacement between the first image and the second image is estimated to compute a homography matrix. A geometric affine transformation is applied to the second image according to the homography matrix to obtain the displacement-corrected second image.
In an embodiment of the invention, the step of comparing each first gradient value with the corresponding second gradient value to generate the plurality of first pixel comparison results, and generating the parameter map according to these comparison results, includes the following. Each second gradient value is divided by the corresponding first gradient value to produce a plurality of gradient ratios. A plurality of parameter values is produced according to these gradient ratios, and the parameter values are recorded as the parameter map.
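Since the patent gives no reference code, the division step above can be sketched in Python with NumPy as follows; the function name, the sample arrays, and the `eps` guard against flat (zero-gradient) regions are illustrative assumptions, not part of the patent.

```python
import numpy as np

def gradient_ratio_map(g1, g2, eps=1e-6):
    # Divide each second gradient value by the corresponding first
    # gradient value; eps guards against division by zero in flat regions.
    return g2 / np.maximum(g1, eps)

g1 = np.array([[4.0, 1.0], [2.0, 8.0]])   # first gradient values
g2 = np.array([[2.0, 3.0], [2.0, 4.0]])   # second gradient values
ratios = gradient_ratio_map(g1, g2)       # per-pixel gradient ratios
```

A ratio above 1 suggests the second image is sharper at that pixel, and a ratio below 1 the opposite, which is what the threshold tests below exploit.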
In an embodiment of the invention, the step of producing the plurality of parameter values according to the gradient ratios includes the following. Whether each gradient ratio exceeds a first gradient threshold is determined. If a gradient ratio exceeds the first gradient threshold, the parameter value corresponding to that gradient ratio is set to a first value.
In an embodiment of the invention, the step of producing the plurality of parameter values according to the gradient ratios further includes the following. If a gradient ratio does not exceed the first gradient threshold, whether it exceeds a second gradient threshold is determined. If the gradient ratio exceeds the second gradient threshold, the corresponding parameter value is set to a second value. If the gradient ratio does not exceed the second gradient threshold, the corresponding parameter value is set to a third value, wherein the first gradient threshold is greater than the second gradient threshold.
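The two-threshold assignment above can be sketched as follows; the concrete threshold and parameter values (`v1`/`v2`/`v3`) are placeholders, since the patent does not fix them.

```python
import numpy as np

def quantize_ratios(ratios, t_high, t_low, v1=255, v2=128, v3=0):
    # t_high is the first gradient threshold, t_low the second
    # (t_high > t_low); v1/v2/v3 stand in for the first/second/third values.
    params = np.full(ratios.shape, v3, dtype=int)
    params[ratios > t_low] = v2
    params[ratios > t_high] = v1
    return params

params = quantize_ratios(np.array([0.5, 1.2, 3.0]), t_high=2.0, t_low=1.0)
```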
In an embodiment of the invention, the step of generating the composite image at least according to the first parameter map and the first image includes the following. A blurring procedure is performed on the first image to produce a blurred image. The first image and the blurred image are blended according to the first parameter map to produce a subject-sharp image.
In an embodiment of the invention, the step of blending the first image and the blurred image according to the first parameter map to produce the subject-sharp image includes the following. Whether a parameter value exceeds a first blending threshold is determined. If the parameter value exceeds the first blending threshold, the corresponding pixel of the blurred image is taken as the pixel of the subject-sharp image. If the parameter value does not exceed the first blending threshold, whether it exceeds a second blending threshold is determined. If the parameter value exceeds the second blending threshold, the corresponding pixel of the subject-sharp image is computed from the parameter value. If the parameter value does not exceed the second blending threshold, the corresponding pixel of the first image is taken as the pixel of the subject-sharp image, wherein the first blending threshold is greater than the second blending threshold.
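One plausible reading of the blending rule above is a clamped linear interpolation on the parameter value: the clamp reproduces the two take-one-pixel cases, and the linear part is an assumption about the "computed from the parameter value" middle case, which the patent leaves open.

```python
import numpy as np

def blend_bokeh(img_sharp, img_blur, params, m_high, m_low):
    # Above m_high take the blurred pixel; below m_low take the sharp
    # pixel; in between interpolate linearly on the parameter value.
    w = np.clip((params - m_low) / float(m_high - m_low), 0.0, 1.0)
    return (1.0 - w) * img_sharp + w * img_blur

sharp = np.array([10.0, 10.0, 10.0])   # pixels from the first image
blur = np.array([50.0, 50.0, 50.0])    # pixels from the blurred image
out = blend_bokeh(sharp, blur, np.array([0.0, 128.0, 255.0]),
                  m_high=192.0, m_low=64.0)
```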
In an embodiment of the invention, the step of generating the composite image at least according to the first parameter map and the first image includes the following. A sum of absolute differences (SAD) is computed for each pixel according to the pixel values of the pixels in the first image and the second image, and the parameter values in the first parameter map are adjusted according to these sums of absolute differences. According to the adjusted first parameter map, the first image and the displacement-corrected second image are blended to produce a full-depth image.
In an embodiment of the invention, the step of computing the sum of absolute differences for each pixel according to the pixel values in the first image and the second image, and adjusting the parameter values in the first parameter map accordingly, includes the following. When a sum of absolute differences exceeds a motion threshold, a weight factor for the corresponding parameter value is determined according to the sum of absolute differences, and the parameter value is adjusted using the weight factor, wherein each parameter value falls as the corresponding sum of absolute differences rises.
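A minimal sketch of a weight factor that falls as the SAD rises; the linear ramp and its endpoints (`t_move`, `sad_max`) are illustrative assumptions, since the patent only requires the parameter value to fall monotonically once the motion threshold is exceeded.

```python
import numpy as np

def sad_weight(sad, t_move, sad_max):
    # Weight is 1 up to the motion threshold, then falls linearly to 0
    # at sad_max, so the adjusted parameter value falls as SAD rises.
    w = 1.0 - (sad - t_move) / float(sad_max - t_move)
    return np.clip(w, 0.0, 1.0)

sads = np.array([10.0, 60.0, 110.0])        # per-pixel SAD values
weights = sad_weight(sads, t_move=10.0, sad_max=110.0)
adjusted = np.array([200.0, 200.0, 200.0]) * weights   # adjusted parameters
```

Suppressing the parameter value where the two images disagree strongly biases the blend toward a single source image, which is what avoids ghosting from moving objects.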
In an embodiment of the invention, the step of blending the first image and the displacement-corrected second image according to the first parameter map adjusted by the weight factors to produce the full-depth image includes the following. Whether a parameter value exceeds the first blending threshold is determined. If the parameter value exceeds the first blending threshold, the corresponding pixel of the displacement-corrected second image is taken as the pixel of the full-depth image. If the parameter value does not exceed the first blending threshold, whether it exceeds the second blending threshold is determined. If the parameter value exceeds the second blending threshold, the corresponding pixel of the full-depth image is computed from the parameter value. If the parameter value does not exceed the second blending threshold, the corresponding pixel of the first image is taken as the pixel of the full-depth image, wherein the first blending threshold is greater than the second blending threshold.
From another point of view, the invention proposes an image acquisition device including an image acquisition module, a displacement correction module, a gradient computation module, a map generation module, and an image synthesis module. The image acquisition module acquires a first image at a first focal length and a second image at a second focal length, wherein the first focal length focuses on at least one subject. The displacement correction module performs a geometric correction procedure on the second image to generate a displacement-corrected second image. The gradient computation module performs a gradient operation on each pixel of the first image to generate a plurality of first gradient values, and on each pixel of the displacement-corrected second image to generate a plurality of second gradient values. The map generation module compares each first gradient value with the corresponding second gradient value to generate a plurality of first pixel comparison results, and generates a first parameter map according to the first pixel comparison results. The image synthesis module generates a composite image according to the first parameter map and the first image, and generates an output image at least according to the composite image.
Based on the above, the invention exploits the different image characteristics caused by focal-length differences: the same scene is captured at different focal lengths, the gradient differences of each pixel between the images are compared, and a parameter map is produced. With the information of the parameter map, a sharp full-depth image, or a bokeh image with a sharp subject and a blurred background, can be produced, achieving a good full-depth or bokeh effect.
To make the above features and advantages of the invention more comprehensible, embodiments are described in detail below with reference to the accompanying drawings.
Brief description of the drawings
Fig. 1 is a functional block diagram of an image acquisition device according to an embodiment of the invention;
Fig. 2 is a flowchart of an image processing method according to an embodiment of the invention;
Fig. 3 is a schematic diagram of an image processing method according to another embodiment of the invention;
Fig. 4 is a block diagram of an image acquisition device according to a further embodiment of the invention;
Fig. 5 is a flowchart of an image processing method according to a further embodiment of the invention;
Fig. 6 is a detailed flowchart of step S550 in Fig. 5 according to a further embodiment of the invention;
Fig. 7 is a detailed flowchart of step S560 in Fig. 5 according to a further embodiment of the invention;
Fig. 8 is a block diagram of an image acquisition device according to yet another embodiment of the invention;
Fig. 9A is a schematic diagram of a pixel block according to yet another embodiment of the invention;
Fig. 9B is a schematic diagram of the relationship between the sum of absolute differences and the weight factor according to yet another embodiment of the invention.
Description of reference numerals:
100, 400, 800: image acquisition device;
110, 410, 810: image acquisition module;
120, 420, 820: image correction module;
130, 430, 830: gradient computation module;
140, 440, 840: map generation module;
150, 450, 850: image synthesis module;
460: image blurring module;
860: map adjustment module;
Img1, Img2, Img3, Img_b, Img_F, Img1_blur, Img2_cal: images;
G1, G2: gradient values;
bokeh_map: bokeh map;
map, allin_map: parameter maps;
S210–S250, S510–S560, S610–S625, S710–S750: steps.
Embodiment
The invention proposes a method of producing bokeh images and full-depth images using multiple images captured at different focal lengths. First, at least one subject to be photographed is focused on and captured; then the same scene is captured at another focal length. By comparing the pixel gradients of the two images, a parameter map is produced, from which the subject portion of the image can be determined, and an image with a bokeh effect can then be produced. On the other hand, by comparing the pixel gradients of at least two images, a parameter map serving as the basis for blending images is produced, from which a full-depth image can be generated. To make the content of the invention clearer, the following embodiments are given as examples according to which the invention can actually be implemented.
Fig. 1 is a functional block diagram of an image acquisition device according to an embodiment of the invention. Referring to Fig. 1, the image acquisition device 100 of this embodiment is, for example, a digital camera, a single-lens reflex camera, a digital camcorder, or another device with an image acquisition function, such as a smartphone, a tablet computer, or a head-mounted display, and is not limited to the above. The image acquisition device 100 includes an image acquisition module 110, an image correction module 120, a gradient computation module 130, a map generation module 140, and an image synthesis module 150.
The image acquisition module 110 includes a zoom lens and a photosensitive element. The photosensitive element is, for example, a charge-coupled device (CCD), a complementary metal-oxide-semiconductor (CMOS) element, or another element; the image acquisition module 110 may also include an aperture and the like, without limitation. The image acquisition module 110 can acquire different images according to different focal-length values.
On the other hand, the image correction module 120, the gradient computation module 130, the map generation module 140, and the image synthesis module 150 may be implemented in software, hardware, or a combination thereof, without limitation. The software is, for example, source code, an operating system, application software, or a driver. The hardware is, for example, a central processing unit (CPU), or another programmable general-purpose or special-purpose microprocessor.
Fig. 2 is a flowchart of an image processing method according to an embodiment of the invention. The method of this embodiment is applicable to the image acquisition device 100 of Fig. 1. Detailed steps of this embodiment are described below with reference to the modules of the image acquisition device 100:
First, in step S210, the image acquisition module 110 acquires a first image at a first focal length and a second image at a second focal length, wherein the first focal length focuses on at least one subject. That is, the image acquisition module 110 captures two images at two different focal lengths. Under otherwise identical conditions, the pictures captured at different focal lengths can differ. Specifically, in the first image, which is focused on the subject, the subject is the sharpest part of the image.
In step S220, the image correction module 120 performs a geometric correction procedure on the second image to generate a displacement-corrected second image. Because the first image and the second image are captured consecutively by the user from the same scene, camera shake or movement in between may cause the two images to be captured at slightly different angles; that is, there is a displacement between the first image and the second image. The image correction module 120 therefore performs the geometric correction procedure on the second image. In other words, the geometric correction procedure makes the starting pixel position of the displacement-corrected second image the same as the starting pixel position of the first image.
In step S230, the gradient computation module 130 performs a gradient operation on each pixel of the first image to generate a plurality of first gradient values, and performs the gradient operation on each pixel of the displacement-corrected second image to generate a plurality of second gradient values. That is, each pixel in the first image has its first gradient value, and each pixel in the displacement-corrected second image has its second gradient value.
In step S240, the map generation module 140 compares each first gradient value with the corresponding second gradient value to generate a plurality of first pixel comparison results, and generates a first parameter map according to the first pixel comparison results. Simply put, the map generation module 140 compares the gradient values of pixels at the same position, so that each pixel position has a pixel comparison result.
In step S250, the image synthesis module 150 generates a composite image according to the first parameter map and the first image, and generates an output image at least according to the composite image. Specifically, after the parameter map is obtained, the image acquisition device 100 can blend the first image itself with an image derived from it by other image processing according to the parameter map, thereby producing the composite image. Alternatively, the image acquisition device 100 can blend the first image and the second image according to the parameter map to produce the composite image.
It should be noted that although the above embodiment takes two images captured at two focal lengths as an example, the invention is not restricted to this. Depending on the practical application, the invention extends to obtaining the final output image from multiple images captured at multiple focal lengths. For example, because images at different focal lengths each have different sharp portions, a sharp full-depth image can be obtained from images at multiple different focal lengths. In addition, the image processing method of the invention can use three images focused on the subject, the background, and the foreground, respectively, to produce an output image in which only the subject is sharp. Another embodiment is described in detail below.
Fig. 3 is a schematic diagram of an image processing method according to another embodiment of the invention. In this embodiment, the image acquisition module 110 acquires a first image Img1 and a second image Img2 at a first focal length and a second focal length, respectively. Afterwards, as explained in the above embodiment, a composite image Img_b can be produced through the processing of the image correction module 120, the gradient computation module 130, the map generation module 140, and the image synthesis module 150, which is not repeated here. It should be noted that in the above embodiment the image synthesis module 150 can take the composite image Img_b as the final output image, but in this embodiment the composite image Img_b is further synthesized with another image to produce the final output image Img_F. Specifically, as shown in Fig. 3, the image acquisition module 110 additionally acquires a third image Img3 at a third focal length. The image correction module 120 performs the geometric correction procedure on the third image Img3 to generate a displacement-corrected third image Img3.
Afterwards, the gradient computation module performs the gradient operation on each pixel of the composite image Img_b to generate a plurality of third gradient values, and on each pixel of the displacement-corrected third image Img3 to generate a plurality of fourth gradient values. The map generation module 140 compares each third gradient value with the corresponding fourth gradient value to generate a plurality of second pixel comparison results, and generates a second parameter map according to the second pixel comparison results. Here the second parameter map is obtained by computing the gradient values of the composite image Img_b and the third image Img3, so its parameter values differ from those of the parameter map computed earlier from the first image Img1 and the second image Img2. The image synthesis module 150 blends the displacement-corrected third image Img3 and the composite image Img_b according to the second parameter map to produce the output image Img_F. Based on the above, the invention does not limit the number of images blended into the final output image, which depends on the practical application.
However, the implementation of the invention is not limited to the above; the content of the above embodiments may be varied as appropriate for practical needs. For example, in a further embodiment of the invention, the image acquisition device may further include an image blurring module, to produce a subject-sharp image with a bokeh effect. In another embodiment of the invention, the image acquisition device may further include a map adjustment module, to produce a full-depth image with a good full-depth effect. To further illustrate how the gradient computation module, the map generation module, and the image synthesis module of the invention synthesize bokeh images and full-depth images from images at different focal lengths, embodiments are described in detail below.
Fig. 4 is a block diagram of an image acquisition device according to another embodiment of the invention. The image acquisition device 400 includes an image acquisition module 410, an image correction module 420, a gradient computation module 430, a map generation module 440, an image synthesis module 450, and an image blurring module 460. The image acquisition module 410, the image correction module 420, the gradient computation module 430, the map generation module 440, and the image synthesis module 450 are identical or similar to the image acquisition module 110, the image correction module 120, the gradient computation module 130, the map generation module 140, and the image synthesis module 150 shown in Fig. 1, and are not described again here. The embodiment of Fig. 4 can be understood by analogy with the descriptions related to Fig. 1 to Fig. 3.
Specifically, unlike the image acquisition device 100 shown in Fig. 1, the image acquisition device 400 further includes the image blurring module 460. The image blurring module 460 performs a blurring procedure on the first image Img1 using, for example, a Gaussian filter, a bilateral filter, or an average filter, and the invention is not limited in this respect. In addition, in this example it is assumed that the second focal length focuses on the background.
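Of the three filters named above, the average filter is the simplest to sketch. This illustrative NumPy version uses edge replication at the border, a choice the patent does not specify; the function name and kernel size are assumptions.

```python
import numpy as np

def box_blur(img, k=3):
    # Average filter: each pixel becomes the mean of its k x k
    # neighbourhood, with edge pixels replicated at the border.
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)

spike = np.zeros((3, 3))
spike[1, 1] = 9.0
blurred = box_blur(spike)   # the spike spreads over its 3 x 3 neighbourhood
```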
Fig. 5 is a flowchart of an image processing method according to an embodiment of the invention. The method of this embodiment is applicable to the image acquisition device 400 of Fig. 4. Detailed steps of this embodiment are described below with reference to the modules of the image acquisition device 400:
First, in step S510, the image acquisition module 410 acquires a first image Img1 at a first focal length and a second image Img2 at a second focal length, wherein the first focal length focuses on at least one subject and the second focal length focuses on the background. In the first image Img1, captured focused on the subject, the subject is sharper and the background is blurrier. Compared with the first image Img1, in the second image Img2, captured focused on the background, the background is sharper. Then, as described in step S520, the image blurring module 460 performs the blurring procedure on the first image Img1 to produce a blurred image Img1_blur.
In step S530, the image correction module 420 performs the geometric correction procedure on the second image Img2 to generate a displacement-corrected second image Img2_cal. In detail, the image correction module 420 can estimate the displacement between the first image Img1 and the second image Img2 to compute a homography matrix. Then, the image correction module 420 applies a geometric affine transformation to the second image Img2 according to this homography matrix to obtain the displacement-corrected second image Img2_cal. Accordingly, the starting pixel position of the subject region in the first image Img1 is the same as the starting pixel position of the subject region in the displacement-corrected second image Img2_cal.
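The correction step above could be sketched as follows, assuming grayscale NumPy arrays and an already-estimated homography; a production pipeline would estimate the matrix from feature correspondences (e.g. with OpenCV's `findHomography`) and use better interpolation than nearest neighbour. The function name and the pure-translation test matrix are illustrative.

```python
import numpy as np

def warp_homography(img, H):
    # Inverse-map each output pixel through H^-1 and sample the source
    # with nearest-neighbour interpolation; out-of-range pixels stay 0.
    h, w = img.shape
    Hinv = np.linalg.inv(H)
    ys, xs = np.mgrid[0:h, 0:w]
    dst = np.stack([xs, ys, np.ones_like(xs)]).reshape(3, -1).astype(float)
    src = Hinv @ dst
    src /= src[2]                         # perspective divide
    sx = np.rint(src[0]).astype(int)
    sy = np.rint(src[1]).astype(int)
    ok = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out = np.zeros_like(img)
    out.reshape(-1)[ok] = img[sy[ok], sx[ok]]
    return out

# a pure 1-pixel horizontal shift written as a homography
H = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
img = np.arange(16, dtype=float).reshape(4, 4)
aligned = warp_homography(img, H)
```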
Then, in step S540, the gradient computation module 430 performs the gradient operation on each pixel of the first image Img1 to generate a plurality of first gradient values G1, and on each pixel of the displacement-corrected second image Img2_cal to generate a plurality of second gradient values G2. The gradient operation may be a horizontal gradient operation, a vertical gradient operation, or a two-diagonal gradient operation, and the invention is not limited in this respect. That is, depending on the gradient operation used, the first and second gradient values may be horizontal gradient values, vertical gradient values, or two-diagonal gradient values. The horizontal gradient value is the sum of the absolute grayscale differences between a pixel and its two horizontally adjacent pixels. The vertical gradient value is the sum of the absolute grayscale differences between a pixel and its two vertically adjacent pixels. The diagonal gradient value includes the sum of the absolute grayscale differences between a pixel and its diagonally adjacent pixels.
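The horizontal and vertical operations defined above can be sketched directly; the sample image and edge replication at the image border are illustrative choices, as the patent does not specify border handling.

```python
import numpy as np

def horizontal_gradient(img):
    # Sum of absolute grayscale differences between each pixel and its
    # two horizontal neighbours; border columns are edge-replicated.
    padded = np.pad(img, ((0, 0), (1, 1)), mode='edge')
    left = np.abs(img - padded[:, :-2])
    right = np.abs(img - padded[:, 2:])
    return left + right

def vertical_gradient(img):
    # Same operation along the vertical direction.
    padded = np.pad(img, ((1, 1), (0, 0)), mode='edge')
    up = np.abs(img - padded[:-2, :])
    down = np.abs(img - padded[2:, :])
    return up + down

img = np.array([[1.0, 3.0, 6.0],
                [1.0, 3.0, 6.0]])
gh = horizontal_gradient(img)   # large where intensity changes across columns
gv = vertical_gradient(img)     # zero here: the two rows are identical
```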
It should be noted that in the present embodiment, since the first image Img1 is captured with focus on the subject, the subject is sharper in the first image Img1 than in the displacement-corrected image Img2_cal. That is, the gradient value of a pixel in the subject region of the first image Img1 will be greater than the gradient value of the pixel at the same position in the displacement-corrected second image Img2_cal. Conversely, since the displacement-corrected second image Img2_cal is produced from an image focused on the background, the gradient value of a pixel in the background region of the first image Img1 will be smaller than that of the pixel at the same position in the displacement-corrected image Img2_cal.
Based on this, in step S550, the map generation module 440 compares each first gradient value G1 with the corresponding second gradient value G2 to produce a plurality of comparison results, and generates a parameter map according to these comparison results. It should be noted that in the present embodiment the parameter map is referred to as the bokeh map bokeh_map. Specifically, the map generation module 440 compares the gradient values of the pixels at each identical position in the first image Img1 and the displacement-corrected second image Img2_cal. Furthermore, based on the above relationship between the gradient values of each pixel in the first image Img1 and in the displacement-corrected second image Img2_cal, the comparison results can determine whether each pixel in the first image Img1 belongs to the subject region or the background region. Through the comparison results of the gradient values of each pixel in the first image Img1 and the displacement-corrected second image Img2_cal, the map generation module 440 can produce the bokeh map bokeh_map. In other words, the bokeh map bokeh_map carries the comparison-result information of the gradient values of the pixels at each identical position in the first image Img1 and the displacement-corrected second image Img2_cal.
Finally, in step S560, the image synthesis module 450 mixes the first image Img1 and the blurred image Img1_blur according to the bokeh map bokeh_map to produce the subject-sharp image Img1_bokeh. As can be seen, the second image Img2 serves to produce the bokeh map bokeh_map, and the image synthesis module 450 mixes the first image Img1 and the blurred image Img1_blur according to the bokeh map bokeh_map to produce the subject-sharp image Img1_bokeh with a bokeh effect. In this way, a bokeh image can be produced that keeps the photographed subject region sharp while blurring the background region.
In addition, how the map generation module 440 produces the bokeh map bokeh_map according to the result of comparing each first gradient value G1 with the corresponding second gradient value G2 is further described in detail below. Fig. 6 is a detailed flowchart of step S550 in Fig. 5 according to an embodiment of the invention. Referring to Fig. 4 and Fig. 6, in step S610 the map generation module 440 divides each second gradient value G2 by the corresponding first gradient value G1 to produce gradient comparison values. In step S620, the map generation module 440 produces a plurality of parameter values according to the gradient comparison values and records the parameter values as the bokeh map bokeh_map. For example, if the first image Img1 and the displacement-corrected image Img2_cal each have 1024*768 pixels, 1024*768 gradient comparison values will be produced after the operation of the image processing module 140, and the bokeh map bokeh_map will then contain 1024*768 parameter values. Here, step S620 can be carried out as steps S621 to S625.
The map generation module 440 determines whether the gradient comparison value of each position is greater than a first gradient threshold (step S621). If a gradient comparison value is greater than the first gradient threshold, the map generation module 440 sets the parameter value corresponding to this gradient comparison value to a first value (step S622); this first value is referred to as the bokeh background value. In other words, if the gradient comparison value is greater than the first gradient threshold, the pixel at this position is located in the background region. If the gradient comparison value is not greater than the first gradient threshold, the map generation module 440 determines whether the gradient comparison value is greater than a second gradient threshold (step S623). If the gradient comparison value is greater than the second gradient threshold, the map generation module 440 sets the parameter value corresponding to this gradient comparison value to a second value (step S624); this second value is referred to as the bokeh edge value. Simply put, if the gradient comparison value lies between the second gradient threshold and the first gradient threshold, the pixel at this position is located in the edge region where the subject joins the background. If the gradient comparison value is not greater than the second gradient threshold, the map generation module 440 sets the parameter value corresponding to this gradient comparison value to a third value (step S625); this third value is referred to as the bokeh subject value, meaning that the pixel at this position is located in the subject region. It should be noted that the bokeh edge value lies between the bokeh background value and the bokeh subject value, that the first gradient threshold is greater than the second gradient threshold, and that both the first and second gradient thresholds can be suitably set according to actual conditions; the invention is not limited in this regard.
As an example, assuming the map generation module 440 sets the parameter values between 0 and 255, the image processing module 140 can use the following source code (1) to produce the bokeh map bokeh_map:
if(Gra2>TH1×Gra1)//Background area
Map=255
else if(Gra2>TH2×Gra1)//Edge area
Map=edge value computed from Gra2/Gra1, TH1 and TH2
else//Subject area
Map=0
(1)
In this exemplary embodiment, the bokeh background value is 255, the bokeh subject value is 0, and the bokeh edge value can be computed from the first gradient threshold, the second gradient threshold, and the ratio between the second gradient value and the first gradient value. Gra2 is the second gradient value, Gra1 is the first gradient value, TH1 is the first gradient threshold, TH2 is the second gradient threshold, and Map denotes the plurality of parameter values in the bokeh map bokeh_map.
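A per-pixel sketch of this three-way classification follows. The linear interpolation used for the edge band is our assumption: the patent only states that the edge value is computed from the ratio Gra2/Gra1 together with TH1 and TH2, without giving the exact formula. The default threshold values are likewise illustrative:

```python
def bokeh_map_value(gra1, gra2, th1=2.0, th2=1.0, eps=1e-6):
    """Map one pixel's gradient pair (G1, G2) to a bokeh-map value in 0~255.

    255 marks background, 0 marks subject.  The edge band between the
    two gradient thresholds is linearly interpolated here (assumed)."""
    ratio = gra2 / (gra1 + eps)        # gradient comparison value G2/G1
    if ratio > th1:                    # background region
        return 255
    if ratio > th2:                    # edge between subject and background
        return int(round(255 * (ratio - th2) / (th1 - th2)))
    return 0                           # subject region

print(bokeh_map_value(100, 10))   # subject pixel  -> 0
print(bokeh_map_value(10, 100))   # background pixel -> 255
```

A pixel whose ratio falls between TH2 and TH1 receives an intermediate value, which later yields the soft subject-to-background transition the patent describes.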
In addition, to describe in detail how the image synthesis module 450 uses the bokeh map bokeh_map to produce the subject-sharp image Img1_bokeh, the following description is provided. Fig. 7 is a detailed flowchart of step S560 in Fig. 5 according to an exemplary embodiment of the invention; please refer to Fig. 4 and Fig. 7. It should be noted that the pixel at each position in the first image Img1 corresponds to a respective parameter value in the bokeh map bokeh_map. In step S710, the image synthesis module 450 determines whether each parameter value is greater than a first blending threshold. If a parameter value is greater than the first blending threshold, in step S720 the image synthesis module 450 takes the pixel of the blurred image Img1_blur corresponding to this parameter value as the pixel at the same position in the subject-sharp image Img1_bokeh. That is, the pixels at these positions are identified as belonging to the background region, so the pixels of the blurred image Img1_blur are taken to produce a blurred background.
If the parameter value is not greater than the first blending threshold, in step S730 the image synthesis module 450 determines whether the parameter value is greater than a second blending threshold. If the parameter value is greater than the second blending threshold, in step S740 the image synthesis module 450 calculates the corresponding pixel of the subject-sharp image Img1_bokeh according to this parameter value. In detail, the pixel positions corresponding to parameter values between the first and second blending thresholds are identified as belonging to the edge region where the background region joins the subject region. Therefore, the pixels of this edge region in the subject-sharp image Img1_bokeh can be obtained by synthesizing the first image Img1 and the blurred image Img1_blur.
If the parameter value is not greater than the second blending threshold, in step S750 the image synthesis module 450 takes the pixel of the first image Img1 corresponding to this parameter value as the pixel of the subject-sharp image Img1_bokeh. That is, the positions corresponding to these parameter values are identified as lying in the subject region, so the pixels of the subject region in the sharp first image Img1 are taken as the subject-region pixels in the subject-sharp image Img1_bokeh. The first blending threshold is greater than the second blending threshold.
As an example, assuming the image synthesis module 450 sets the parameter values between 0 and 255, the image synthesis module 450 produces the subject-sharp image Img1_bokeh using the following source code (2):
if(Map≥Blend_TH1)//Background area
Img1_Bokeh=Img1_Blur
else if(Map≥Blend_TH2)//Transition area
wBokeh=LUT[Map] (LUT is a lookup table with a value range of 0~255)
Img1_Bokeh=blend of Img1_Blur and Img1 weighted by wBokeh
else//Subject area
Img1_Bokeh=Img1
(2)
In this exemplary embodiment, Blend_TH1 is the first blending threshold, Blend_TH2 is the second blending threshold, Map denotes the parameter values in the bokeh map bokeh_map, and LUT[] is a table-lookup function. It should be noted that the pixels of the edge region can be computed by means of weights. As shown in the formula in the exemplary source code above, the parameter value serves as the synthesis weight wBokeh, and the pixels of the edge region are synthesized through the synthesis weight wBokeh. In other words, for a pixel of the edge region, its degree of blur is determined by whether its position is closer to the subject region or to the blurred region. The produced subject-sharp image Img1_bokeh therefore joins the subject region and the background region naturally, making the edge between subject and background in the bokeh image softer and more natural.
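The per-pixel selection and weighted blend of source code (2) can be sketched as follows. The threshold values, the identity-LUT fallback, and the exact mixing formula are our assumptions; the patent only states that the transition weight wBokeh comes from a lookup table over 0~255:

```python
def bokeh_blend_pixel(map_val, p_sharp, p_blur,
                      blend_th1=200, blend_th2=50, lut=None):
    """Choose or blend one output pixel from its bokeh-map value.

    map_val >= blend_th1: background, take the blurred pixel.
    map_val >= blend_th2: transition, mix with weight wBokeh (assumed
                          linear combination normalized by 255).
    otherwise:            subject, keep the sharp pixel."""
    if map_val >= blend_th1:                       # background area
        return p_blur
    if map_val >= blend_th2:                       # transition area
        w = lut[map_val] if lut is not None else map_val
        return (w * p_blur + (255 - w) * p_sharp) // 255
    return p_sharp                                 # subject area

print(bokeh_blend_pixel(255, 10, 200))  # 200 (background -> blurred)
print(bokeh_blend_pixel(0, 10, 200))    # 10  (subject -> sharp)
print(bokeh_blend_pixel(128, 0, 255))   # 128 (transition -> mixed)
```

Because the weight varies smoothly across the transition band, adjacent edge pixels shift gradually from sharp to blurred, which is what makes the subject/background boundary look natural.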
In the embodiment above, the second focal length focuses on the background, so a background-blurred image with a sharp subject can be produced accordingly. As explained with reference to Fig. 3, the image processing method of the invention can obtain the final output image from multiple images. Based on this, in other embodiments, the image acquisition device may capture another image at a third focal length focused on the foreground. The image acquisition device can use the previously produced blurred image together with this other image focused on the foreground and, through the same process used to produce the background-blurred image, further compute an image in which both the foreground and the background are blurred while the subject remains sharp.
Fig. 8 is a block diagram of an image acquisition device according to another embodiment of the invention. Referring to Fig. 8, in the present embodiment the image acquisition device 800 is used to produce a full depth image (an all-in-focus image). The image acquisition device 800 includes an image acquisition module 810, an image correction module 820, a gradient computation module 830, a map generation module 840, an image synthesis module 850, and a map adjustment module 860. The image acquisition module 810, image correction module 820, gradient computation module 830, map generation module 840, and image synthesis module 850 are identical or similar to the image acquisition module 410, image correction module 420, gradient computation module 430, map generation module 440, and image synthesis module 450 shown in Fig. 4, and are not described again here.
Specifically, unlike the image acquisition device 400 shown in Fig. 4, the image acquisition device 800 of the present embodiment has no image blurring module but further includes the map adjustment module 860, which adjusts the parameter map produced by the map generation module 840. In the present embodiment, the image acquisition module 810 captures the first image Img1 at a first focal length and the second image Img2 at a second focal length, where the first focal length focuses on at least one subject and the second focal length focuses on a region beyond the subject.
Then, the image correction module 820 performs the geometric correction procedure on the second image Img2 to produce the displacement-corrected second image Img2_cal. The gradient computation module 830 then performs the gradient operation on each pixel of the first image Img1 to produce a plurality of first gradient values G1, and performs the gradient operation on each pixel of the displacement-corrected second image Img2_cal to produce a plurality of second gradient values G2. Then, the map generation module 840 compares each first gradient value G1 with the corresponding second gradient value G2 to produce a plurality of comparison results, and produces the parameter map map according to the comparison results. The step in which the image correction module 820 produces the displacement-corrected second image Img2_cal, the step in which the gradient computation module 830 performs the gradient operation, and the step in which the map generation module 840 produces the parameter map map are similar to those of the image acquisition device 400 shown in Fig. 4, and can be inferred from the descriptions of Fig. 4 and Fig. 5.
In general, a pixel at the same position will have different gradient values in the two images, namely the first gradient value G1 and the second gradient value G2 in the present embodiment. For a pixel at a given position, if its gradient value in the first image is higher (i.e. G1 is greater than G2), the pixel generally lies in the sharper region of the first image (i.e. the region within the first focal length). If its gradient value in the second image is higher (i.e. G2 is greater than G1), the pixel generally lies in the sharper region of the second image (i.e. the region within the second focal length). The map generation module 840 can also obtain the parameter map map through source code (1), although the invention is not limited thereto.
Therefore, in the present embodiment, the map generation module 840 can produce the parameter map map through the comparison results of the gradient values of each pixel in the first image Img1 and the displacement-corrected second image Img2_cal. In other words, the parameter map map carries the comparison-result information of the gradient values of the pixels at each identical position in the first image Img1 and the displacement-corrected second image Img2_cal. In this way, the image acquisition device 800 can learn from the parameter map map whether the pixel at a certain position lies in the sharp part of the first image Img1 within the first focal length or in the sharp part of the second image Img2 within the second focal length. Accordingly, the image synthesis module 850 can pick out the sharper parts from the two images to synthesize an output image with more sharp regions.
It should be noted that while the user continuously shoots the same scene to obtain the first image and the second image, objects in the scene may be moving because of the time difference between the shots. The image correction module 820 corrects only the global displacement of the image (or the camera displacement) and cannot correct objects moving within the scene; therefore, if an object moves independently in the image, the blended full depth image may exhibit ghost artifacts. The map adjustment module 860 of the present embodiment serves to mitigate such ghost artifacts.
Here, the map adjustment module 860 calculates a plurality of sums of absolute differences (SAD) corresponding to each pixel according to the pixel values of each pixel in the first image Img1 and the second image Img2, and adjusts the plurality of parameter values in the parameter map map according to these sums of absolute differences. The first image Img1 is then mixed with the displacement-corrected second image Img2_cal according to the adjusted parameter map to produce the full depth image.
Specifically, an n×n pixel block is first obtained in the first image Img1 (n is a positive integer). Assuming n is 5, the 5×5 pixel block obtained in the present embodiment is as shown in Fig. 9A and includes 25 pixel positions P00~P44. Similarly, an n×n pixel block centered on the same pixel position is obtained in the displacement-corrected second image Img2_cal. Then, the sums of absolute differences of specific color-space components are calculated between the pixels of the n×n pixel blocks of the first image Img1 and the displacement-corrected second image Img2_cal, and the maximum among them is taken as the representative value. The sum of absolute differences reflects whether the characteristics of Img1 and the displacement-corrected second image Img2_cal are close within the local area of the n×n pixel block. In the YCbCr color space, the specific color-space components include the luma component, the blue-difference chroma component, and the red-difference chroma component, but the invention places no limitation on the color space. Working in the YCbCr color space, the present embodiment assumes, for example, n=5 and calculates the sum of absolute differences SAD between the pixel positions of the first image Img1 and the displacement-corrected second image Img2_cal with the following formulas:
SAD_Y = Σi Σj |Y1ij − Y2ij|
SAD_Cb = Σi Σj |Cb1ij − Cb2ij|
SAD_Cr = Σi Σj |Cr1ij − Cr2ij|
SAD = max(max(SAD_Y, SAD_Cb), SAD_Cr)
where i and j denote the position of a pixel. In the example shown in Fig. 9A, each pixel block includes 25 pixel positions P00~P44. Y1ij is the luma component of pixel Pij in the first image, and Y2ij is the luma component of pixel Pij in the second image. Cb1ij is the blue-difference chroma component of pixel Pij in the first image, and Cb2ij is that of pixel Pij in the second image. Cr1ij is the red-difference chroma component of pixel Pij in the first image, and Cr2ij is that of pixel Pij in the second image. SAD_Y, SAD_Cb, and SAD_Cr are then the sums of absolute differences of the respective color-space components.
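The block SAD computation above can be sketched in pure Python. This is a sketch under the assumption of equal-sized component blocks; the helper names are ours:

```python
def block_sad(block1, block2):
    """Sum of absolute differences over one colour component
    of two equal-sized pixel blocks."""
    return sum(abs(a - b)
               for row1, row2 in zip(block1, block2)
               for a, b in zip(row1, row2))

def max_component_sad(y1, y2, cb1, cb2, cr1, cr2):
    """SAD = max(SAD_Y, SAD_Cb, SAD_Cr) for one n-by-n block pair."""
    return max(block_sad(y1, y2),
               block_sad(cb1, cb2),
               block_sad(cr1, cr2))

same = [[5, 5], [5, 5]]
shifted = [[6, 6], [6, 6]]   # differs by 1 at each of 4 positions
print(block_sad(same, shifted))                                   # 4
print(max_component_sad(same, shifted, same, same, same,
                        [[9, 5], [5, 5]]))                        # max(4, 0, 4) = 4
```

Taking the maximum over the three components means motion visible in any single component (luma or either chroma) is enough to flag the block.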
Based on this, the map adjustment module 860 of the invention obtains, for example, the sum of absolute differences SAD using the above formulas. Afterwards, the map adjustment module 860 determines whether the sum of absolute differences SAD is greater than a motion threshold TH_SAD. If SAD is not greater than the motion threshold TH_SAD, no subject is moving within this pixel block, and the parameter value in the parameter map corresponding to this pixel block need not be adjusted. If SAD is greater than the motion threshold TH_SAD, a subject is moving within this pixel block, so the map adjustment module 860 adjusts the parameter value in the parameter map corresponding to this pixel block according to the magnitude of SAD. For example, the map adjustment module 860 can use the following source code (3) to produce the adjusted parameter map allin_map:
if(SAD>TH_SAD)
Fac=LUT[SAD];
allin_map=map×Fac
else
allin_map=map
(3)
Here, Fac denotes the weight factor by which the map adjustment module 860 adjusts the parameter map map. It follows that when the sum of absolute differences SAD is greater than the motion threshold TH_SAD, the map adjustment module 860 determines the weight factor Fac of each parameter value according to SAD and uses the weight factor Fac to adjust the parameter values in the parameter map map. The weight factor Fac decreases as the sum of absolute differences SAD increases.
Fig. 9B is a schematic diagram of the relationship between the sum of absolute differences and the weight factor according to yet another embodiment of the invention. As shown in Fig. 9B, when the sum of absolute differences SAD is greater than the motion threshold TH_SAD, the map adjustment module 860 determines the weight factor of each parameter value according to SAD and adjusts the parameter value using the weight factor. The weight factor declines as SAD increases; that is, each parameter value declines as its corresponding SAD rises.
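The motion-dependent scaling of source code (3) and Fig. 9B can be sketched as follows. The linear falloff of the weight factor and the numeric constants are illustrative assumptions standing in for the patent's lookup table; only the shape, Fac falling toward zero as SAD rises past TH_SAD, follows the description:

```python
def adjust_map_value(map_val, sad, th_sad=100, sad_max=500):
    """Scale one parameter value down when the block SAD signals motion.

    Below the motion threshold the value passes through unchanged;
    above it, the weight factor Fac falls linearly (assumed) from 1
    to 0 as SAD approaches sad_max, and is clamped at 0 beyond that."""
    if sad <= th_sad:
        return map_val                                    # no motion detected
    fac = max(0.0, 1.0 - (sad - th_sad) / (sad_max - th_sad))
    return int(map_val * fac)                             # allin_map = map * Fac

print(adjust_map_value(200, 50))   # 200 (below threshold, unchanged)
print(adjust_map_value(200, 300))  # 100 (Fac = 0.5)
print(adjust_map_value(200, 900))  # 0   (Fac clamped to 0)
```

Driving the parameter value toward the subject end in moving blocks makes the synthesis fall back on a single source image there, which is what suppresses the ghost artifacts.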
Afterwards, the image synthesis module 850 can mix the first image Img1 with the displacement-corrected second image Img2_cal according to the adjusted parameter map allin_map to produce the full depth image Img_AIF without ghost artifacts. The step in which the image synthesis module 850 produces the full depth image according to the adjusted parameter map allin_map is similar to the step in which the image synthesis module 450 produces the bokeh image according to the bokeh map bokeh_map; it can be inferred from the related description of Fig. 7 and is not repeated here. For example, the image synthesis module 850 can also obtain the final full depth image Img_AIF through source code (4).
if(Map≥Blend_TH1)//In-focus area of image2
Img_AIF=Img2_cal
else if(Map≥Blend_TH2)//Transition area
wAIF=LUT[Map] (LUT is a lookup table with a value range of 0~255)
Img_AIF=blend of Img1 and Img2_cal weighted by wAIF
else//In-focus area of image1
Img_AIF=Img1
(4)
In this exemplary source code (4), the parameter values are assumed to lie between 0 and 255, Blend_TH1 is the first blending threshold, Blend_TH2 is the second blending threshold, Map denotes the parameter values in the adjusted parameter map allin_map, and LUT[] is a table-lookup function. It should be noted that the pixels of the edge region can be computed by means of weights. As shown in the formula in the exemplary source code above, the parameter value serves as the synthesis weight wAIF, and the pixels of the edge region are synthesized through the synthesis weight wAIF.
Likewise, as explained with reference to Fig. 3, the image processing method of the invention can obtain the final output image from multiple images. Based on this, in the present embodiment, the image acquisition device 800 can capture multiple images at a variety of different focal lengths and synthesize a sharp full depth image from the images with different focal lengths. For practical applications, the scene can first be analyzed to decide how many images of different focal lengths are needed to synthesize a full depth image in which the entire image is sharp.
In summary, the image acquisition device and image processing method provided by the invention compute and synthesize a parameter map using at least two images with different focal lengths, and synthesize a subject-sharp image or a full depth image according to the parameter map. The image processing method provided by the invention allows one or more subject targets to remain sharp while the background is blurred, so as to highlight the one or more subject targets in the image. In addition, the invention can make the joining edge between the photographed subject and the background in the image soft and natural, achieving a good and natural bokeh effect. On the other hand, the invention can also use multiple images obtained at different focusing distances to build a full depth image in which every part of the image appears in focus. Furthermore, when building the full depth image, noise in the image can also be eliminated, ensuring that details in the built full depth image are not lost.
Finally, it should be noted that the above embodiments merely illustrate the technical solutions of the invention and do not limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent substitutions for some or all of the technical features therein, and that such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the invention.

Claims (14)

1. An image processing method, adapted to an image acquisition device, characterized in that the image processing method comprises:
obtaining a first image at a first focal length, and obtaining a second image at a second focal length, wherein the first focal length focuses on at least one subject;
performing a geometric correction procedure on the second image to produce a displacement-corrected second image;
performing a gradient operation on each pixel of the first image to produce a plurality of first gradient values, and performing the gradient operation on each pixel of the displacement-corrected second image to produce a plurality of second gradient values;
comparing each of the first gradient values with the corresponding second gradient value to produce a plurality of first pixel comparison results, and producing a first parameter map according to the first pixel comparison results, comprising:
dividing the second gradient values by the corresponding first gradient values to produce a plurality of gradient comparison values, wherein each pixel corresponds to one of the gradient comparison values; and
producing a plurality of parameter values based on the value ranges in which the gradient comparison values lie, and recording the parameter values as the first parameter map; and
producing a composite image according to the first parameter map and the first image, and producing an output image according to at least the composite image.
2. The image processing method according to claim 1, characterized in that the step of producing the output image according to at least the composite image comprises:
obtaining a third image at a third focal length;
performing the geometric correction procedure on the third image to produce a displacement-corrected third image;
performing the gradient operation on each pixel of the composite image to produce a plurality of third gradient values, and performing the gradient operation on each pixel of the displacement-corrected third image to produce a plurality of fourth gradient values;
comparing each of the third gradient values with the corresponding fourth gradient value to produce a plurality of second pixel comparison results, and producing a second parameter map according to the second pixel comparison results; and
mixing the displacement-corrected third image with the composite image according to the second parameter map to produce the output image.
3. The image processing method according to claim 1, characterized in that the step of performing the geometric correction procedure on the second image to produce the displacement-corrected second image comprises:
estimating the amount of movement between the first image and the second image so as to compute a homography matrix; and
performing a geometric affine transformation on the second image according to the homography matrix to obtain the displacement-corrected second image.
4. The image processing method according to claim 1, characterized in that the step of producing the plurality of parameter values based on the value ranges in which the gradient comparison values lie comprises:
determining whether the gradient comparison values are greater than a first gradient threshold; and
if a gradient comparison value is greater than the first gradient threshold, setting the parameter value corresponding to this gradient comparison value to a first value.
5. The image processing method according to claim 4, characterized in that the step of producing the plurality of parameter values based on the value ranges in which the gradient comparison values lie further comprises:
if a gradient comparison value is not greater than the first gradient threshold, determining whether the gradient comparison value is greater than a second gradient threshold;
if the gradient comparison value is greater than the second gradient threshold, setting the parameter value corresponding to this gradient comparison value to a second value; and
if the gradient comparison value is not greater than the second gradient threshold, setting the parameter value corresponding to this gradient comparison value to a third value,
wherein the first gradient threshold is greater than the second gradient threshold.
6. The image processing method according to claim 1, characterized in that the step of producing the composite image according to at least the first parameter map and the first image comprises:
performing a blurring procedure on the first image to produce a blurred image; and
mixing the first image with the blurred image according to the first parameter map to produce a subject-sharp image.
7. The image processing method according to claim 6, characterized in that the step of mixing the first image with the blurred image according to the first parameter map to produce the subject-sharp image comprises:
determining whether the parameter values are greater than a first blending threshold;
if a parameter value is greater than the first blending threshold, taking the pixel of the blurred image corresponding to this parameter value as the pixel of the subject-sharp image;
if the parameter value is not greater than the first blending threshold, determining whether the parameter value is greater than a second blending threshold;
if the parameter value is greater than the second blending threshold, calculating the corresponding pixel of the subject-sharp image according to this parameter value; and
if the parameter value is not greater than the second blending threshold, taking the pixel of the first image corresponding to this parameter value as the pixel of the subject-sharp image, wherein the first blending threshold is greater than the second blending threshold.
8. The image processing method according to claim 1, characterized in that the step of producing the composite image according to at least the first parameter map and the first image comprises:
calculating a plurality of sums of absolute differences corresponding to each pixel according to the pixel values of each pixel in the first image and the second image, and adjusting the parameter values in the first parameter map according to the sums of absolute differences; and
mixing the first image with the displacement-corrected second image according to the adjusted first parameter map to produce a full depth image.
9. The image processing method according to claim 8, characterized in that the step of calculating the sums of absolute differences corresponding to each pixel according to the pixel values of each pixel in the first image and the second image, and adjusting the parameter values in the first parameter map according to the sums of absolute differences, comprises:
when a sum of absolute differences is greater than a motion threshold, determining the weight factor of each parameter value according to the sum of absolute differences, and adjusting the parameter values using the weight factors, wherein each parameter value declines as the corresponding sum of absolute differences rises.
10. The image processing method according to claim 9, characterized in that the step of blending the first image with the displacement-corrected second image according to the first parameter map adjusted by the weight factors to generate the full depth-of-field image comprises:
determining whether the parameter value is greater than a first blending threshold;
if the parameter value is greater than the first blending threshold, taking the pixel of the displacement-corrected second image corresponding to the parameter value as the pixel of the full depth-of-field image;
if the parameter value is not greater than the first blending threshold, determining whether the parameter value is greater than a second blending threshold;
if the parameter value is greater than the second blending threshold, computing the corresponding pixel of the full depth-of-field image from the parameter value; and
if the parameter value is not greater than the second blending threshold, taking the pixel of the first image corresponding to the parameter value as the pixel of the full depth-of-field image, wherein the first blending threshold is greater than the second blending threshold.
11. An image acquisition device, characterized in that it comprises:
an image capture module, acquiring a first image at a first focal length and a second image at a second focal length, wherein the first focal length focuses on at least one subject;
an image correction module, performing a geometric correction procedure on the second image to generate a displacement-corrected second image;
a gradient computation module, performing a gradient operation on each pixel of the first image to generate a plurality of first gradient values, and performing the gradient operation on each pixel of the displacement-corrected second image to generate a plurality of second gradient values;
a map generation module, comparing each first gradient value with the corresponding second gradient value to generate a plurality of first pixel comparison results, and generating a first parameter map according to the first pixel comparison results, wherein the map generation module divides the second gradient values by the corresponding first gradient values to generate a plurality of gradient ratio values, one for each pixel, generates a plurality of parameter values based on the numerical range in which each gradient ratio value falls, and records the parameter values as the first parameter map; and
an image synthesis unit, generating a composite image according to the first parameter map and the first image, and generating an output image at least according to the composite image.
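The gradient comparison performed by the map generation module in claim 11 can be sketched as follows. Both the gradient operator (central differences) and the mapping from ratio ranges to parameter values are illustrative assumptions; the claim fixes only that the second gradients are divided by the first gradients and that the resulting ratios are binned by numerical range.

```python
import numpy as np

def gradient_magnitude(gray):
    """Per-pixel gradient magnitude via central differences
    (an assumed operator; the claim does not specify one)."""
    gy, gx = np.gradient(gray.astype(np.float64))
    return np.hypot(gx, gy)

def first_parameter_map(img1, img2, eps=1e-6):
    """Divide the second image's gradients by the first image's,
    then map the ratio's numerical range to a parameter in [0, 1].
    The piecewise-linear mapping below is an illustrative assumption."""
    g1 = gradient_magnitude(img1)
    g2 = gradient_magnitude(img2)
    ratio = g2 / (g1 + eps)          # the claimed gradient ratio value
    # ratio >= 2.0 -> 1 (second image clearly sharper there),
    # ratio <= 0.5 -> 0 (first image sharper), linear ramp in between.
    return np.clip((ratio - 0.5) / 1.5, 0.0, 1.0)
```

A high parameter value thus marks pixels where the displacement-corrected second image carries more detail, which is exactly where the synthesis unit should favor it when composing the output.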
12. The image acquisition device according to claim 11, characterized in that the image capture module acquires a third image at a third focal length; the image correction module performs the geometric correction procedure on the third image to generate a displacement-corrected third image; the gradient computation module performs the gradient operation on each pixel of the composite image to generate a plurality of third gradient values, and performs the gradient operation on each pixel of the displacement-corrected third image to generate a plurality of fourth gradient values; the map generation module compares each third gradient value with the corresponding fourth gradient value to generate a plurality of second pixel comparison results, and generates a second parameter map according to the second pixel comparison results; and the image synthesis unit blends the displacement-corrected third image with the composite image according to the second parameter map to generate the output image.
13. The image acquisition device according to claim 11, characterized in that it further comprises an image blurring module, wherein the image blurring module performs a blurring procedure on the first image to generate a blurred image, and the image synthesis unit blends the first image with the blurred image according to the first parameter map to generate a subject-sharp image.
14. The image acquisition device according to claim 11, characterized in that it further comprises a map adjustment module, wherein the map adjustment module computes, from the pixel values of each pixel in the first image and the second image, a sum of absolute differences corresponding to each pixel, and adjusts the parameter values in the first parameter map according to the sums of absolute differences; and the image synthesis unit blends the first image with the displacement-corrected second image according to the adjusted first parameter map to generate a full depth-of-field image.
CN201310260044.3A 2013-02-06 2013-06-26 Image acquisition device and image processing method thereof Active CN103973963B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
TW102104649 2013-02-06
TW102104649 2013-02-06

Publications (2)

Publication Number Publication Date
CN103973963A CN103973963A (en) 2014-08-06
CN103973963B true CN103973963B (en) 2017-11-21

Family

ID=51242964

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310260044.3A Active CN103973963B (en) 2013-02-06 2013-06-26 Image acquisition device and image processing method thereof

Country Status (1)

Country Link
CN (1) CN103973963B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105491278B (en) * 2014-10-09 2018-09-25 聚晶半导体股份有限公司 Image capture unit and digital zoom display methods
CN104333703A (en) * 2014-11-28 2015-02-04 广东欧珀移动通信有限公司 Method and terminal for photographing by virtue of two cameras
CN106303202A (en) * 2015-06-09 2017-01-04 联想(北京)有限公司 A kind of image information processing method and device
US10002435B2 (en) 2016-01-29 2018-06-19 Google Llc Detecting motion in images
CN105933602A (en) * 2016-05-16 2016-09-07 中科创达软件科技(深圳)有限公司 Camera shooting method and device
CN106161997A (en) * 2016-06-30 2016-11-23 上海华力微电子有限公司 Improve the method and system of cmos image sensor pixel
KR102560780B1 (en) 2016-10-05 2023-07-28 삼성전자주식회사 Image processing system including plurality of image sensors and electronic device including thereof
CN108377342B (en) * 2018-05-22 2021-04-20 Oppo广东移动通信有限公司 Double-camera shooting method and device, storage medium and terminal
CN109816619A (en) * 2019-01-28 2019-05-28 努比亚技术有限公司 Image interfusion method, device, terminal and computer readable storage medium
CN110517211B (en) * 2019-07-31 2023-06-13 茂莱(南京)仪器有限公司 Image fusion method based on gradient domain mapping
WO2021120120A1 (en) * 2019-12-19 2021-06-24 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Electric device, method of controlling electric device, and computer readable storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1392724A (en) * 2001-06-19 2003-01-22 卡西欧计算机株式会社 Image pick-up device and method, storage medium for recording image pick-up method program
JP2008278763A (en) * 2007-05-08 2008-11-20 Japan Health Science Foundation Transgenic non-human animal
US7538815B1 (en) * 2002-01-23 2009-05-26 Marena Systems Corporation Autofocus system and method using focus measure gradient
CN101447079A (en) * 2008-12-11 2009-06-03 香港理工大学 Method for extracting area target of image based on fuzzytopology
CN101852970A (en) * 2010-05-05 2010-10-06 浙江大学 Automatic focusing method for camera under imaging viewing field scanning state
CN101968883A (en) * 2010-10-28 2011-02-09 西北工业大学 Method for fusing multi-focus images based on wavelet transform and neighborhood characteristics
CN102682435A (en) * 2012-05-14 2012-09-19 四川大学 Multi-focus image edge detection method based on space relative altitude information
CN102867297A (en) * 2012-08-31 2013-01-09 天津大学 Digital processing method for low-illumination image acquisition


Also Published As

Publication number Publication date
CN103973963A (en) 2014-08-06

Similar Documents

Publication Publication Date Title
CN103973963B (en) Image acquisition device and image processing method thereof
TWI602152B (en) Image capturing device and image processing method thereof
US11882369B2 (en) Method and system of lens shading color correction using block matching
CN110663245B (en) Apparatus and method for storing overlapping regions of imaging data to produce an optimized stitched image
CN108600576B (en) Image processing apparatus, method and system, and computer-readable recording medium
CN105025215B (en) A kind of terminal realizes the method and device of group photo based on multi-cam
US20160300337A1 (en) Image fusion method and image processing apparatus
US20120019614A1 (en) Variable Stereo Base for (3D) Panorama Creation on Handheld Device
CN107925751A (en) For multiple views noise reduction and the system and method for high dynamic range
WO2015085042A1 (en) Selecting camera pairs for stereoscopic imaging
CN101616237A (en) Image processing apparatus, image processing method, program and recording medium
KR20130103527A (en) Stereoscopic (3d) panorama creation on handheld device
US11184553B1 (en) Image signal processing in multi-camera system
US20240121521A1 (en) Image processing based on object categorization
CN106612392A (en) Image shooting method and device based on double cameras
TWI599809B (en) Lens module array, image sensing device and fusing method for digital zoomed images
CN102158648A (en) Image capturing device and image processing method
JP6270413B2 (en) Image processing apparatus, imaging apparatus, and image processing method
CN109257540A (en) Take the photograph photography bearing calibration and the camera of lens group more
CN103973962B (en) Image processing method and image collecting device
WO2021145913A1 (en) Estimating depth based on iris size
JP6494388B2 (en) Image processing apparatus, image processing method, and program
JP2014049895A (en) Image processing method
JP6025555B2 (en) Image processing apparatus, image processing method, and program
CN108377376B (en) Parallax calculation method, double-camera module and electronic equipment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant