CN103973963A - Image acquisition device and image processing method thereof


Info

Publication number
CN103973963A
CN103973963A (application CN201310260044.3A)
Authority
CN
China
Prior art keywords
image, pixel, map, gradient
Prior art date
Legal status
Granted
Application number
CN201310260044.3A
Other languages
Chinese (zh)
Other versions
CN103973963B (en)
Inventor
庄哲纶
周宏隆
Current Assignee
Altek Semiconductor Corp
Original Assignee
Altek Semiconductor Corp
Priority date
Filing date
Publication date
Application filed by Altek Semiconductor Corp
Publication of CN103973963A
Application granted
Publication of CN103973963B
Current legal status: Active


Landscapes

  • Image Processing (AREA)

Abstract

The invention provides an image acquisition device and an image processing method thereof. The image processing method includes the following steps. A first image is acquired at a first focal length and a second image at a second focal length. A geometric correction procedure is performed on the second image to generate a displacement-corrected second image. A gradient operation is performed on each pixel of the first image to generate a plurality of first gradient values, and on each pixel of the displacement-corrected second image to generate a plurality of second gradient values. Each first gradient value is compared with the corresponding second gradient value to generate a plurality of first pixel comparison results, and a first parameter map is generated from these comparison results. A composite image is generated from the first parameter map and the first image, and an output image is generated at least from the composite image.

Description

Image acquisition device and image processing method thereof
Technical field
The invention relates to an image acquisition device and an image processing method thereof, and more particularly to an image acquisition device and an image processing method that blend images according to computed pixel gradient values.
Background art
With advances in optical technology, digital cameras with adjustable aperture and shutter, and even interchangeable lenses, have gradually become widespread, and their functions have diversified. Besides good image quality, the accuracy and speed of the focusing technology are factors consumers weigh when buying a product. However, with existing optical systems, because multiple objects in a three-dimensional scene lie at different distances, a completely sharp all-in-focus image cannot be obtained from a single shot. That is, limited by the optical characteristics of the lens, a digital camera can focus at only one depth per capture, so scenery at other depths appears blurred in the resulting image.
Most existing methods for generating an all-in-focus image combine multiple images captured under different photographic conditions. One or more photographic parameters are varied to shoot several different images of the same scene, and a sharpness criterion is then applied to composite these images into a single sharp image. Such techniques for compositing an all-in-focus image from multiple exposures depend on the image acquisition device being held fixed: users typically mount it on a stable tripod to ensure there is no significant geometric warping between the captured images. In addition, any movement of subjects in the scene must be avoided during shooting.
On the other hand, to highlight the subject of a photograph, the shooting technique known as bokeh is commonly adopted. Bokeh refers to imaging with a shallow depth of field, in which regions of the picture outside the depth of field are progressively blurred. In general, the bokeh a lens can produce is limited; a good bokeh effect usually requires a large aperture and a long focal length at the same time. In other words, a large-aperture lens is needed to strengthen the blurring of distant objects so that the sharply imaged subject stands out from the background. However, large-aperture lenses are bulky and expensive, and ordinary consumer cameras are not equipped with them.
In general, existing methods for generating all-in-focus or bokeh images tend to produce a discontinuous or unnatural depth of field in the processed image. Moreover, the constraints they impose on shooting are inconvenient for users: the total shooting time is long, the process is complicated, and the final image may still be unsatisfactory.
Summary of the invention
In view of this, the invention provides an image acquisition device and an image processing method thereof that can identify the subject from images captured at different focal lengths, and thereby produce a natural image in which the subject is sharp and the background shows a bokeh effect. The image processing method of the invention can also avoid ghosting when producing an all-in-focus image from images captured at different focal lengths.
The invention proposes an image processing method suitable for an image acquisition device, comprising the following steps. A first image is captured at a first focal length and a second image at a second focal length, wherein the first focal length is focused on at least one subject. A geometric correction procedure is performed on the second image to produce a displacement-corrected second image. A gradient operation is performed on each pixel of the first image to produce a plurality of first gradient values, and on each pixel of the displacement-corrected second image to produce a plurality of second gradient values. Each first gradient value is compared with the corresponding second gradient value to produce a plurality of first pixel comparison results, and a first parameter map is generated from these comparison results. A composite image is generated from the first parameter map and the first image, and an output image is generated at least from the composite image.
In an embodiment of the invention, the step of generating the output image at least from the composite image comprises: capturing a third image at a third focal length; performing the geometric correction procedure on the third image to produce a displacement-corrected third image; performing the gradient operation on each pixel of the composite image to produce a plurality of third gradient values, and on each pixel of the displacement-corrected third image to produce a plurality of fourth gradient values; comparing each third gradient value with the corresponding fourth gradient value to produce a plurality of second pixel comparison results, and generating a second parameter map from these comparison results; and blending the displacement-corrected third image with the composite image according to the second parameter map to produce the output image.
In an embodiment of the invention, the step of performing the geometric correction procedure on the second image to produce the displacement-corrected second image comprises: performing motion estimation on the first image and the second image to compute a homography matrix; and applying a geometric affine transformation to the second image according to the homography matrix to obtain the displacement-corrected second image.
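A minimal sketch of this correction step follows. The patent estimates a full homography matrix and applies a geometric affine transformation; for illustration only, the sketch restricts the motion model to a pure translation and finds it by exhaustive search over a small window. The function names, the search radius, and the wrap-around shift via np.roll are assumptions, not the patented implementation.

```python
import numpy as np

def estimate_shift(img1, img2, max_shift=3):
    # Brute-force motion estimation: the (dy, dx) minimizing the sum of
    # absolute differences between img1 and the shifted img2.
    best_sad, best = None, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(img2, dy, axis=0), dx, axis=1)
            sad = np.abs(img1.astype(int) - shifted.astype(int)).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best

def correct_displacement(img1, img2, max_shift=3):
    # Shift img2 back so its pixel positions line up with img1.
    dy, dx = estimate_shift(img1, img2, max_shift)
    return np.roll(np.roll(img2, dy, axis=0), dx, axis=1)
```

In a real implementation the translation model would be replaced by a homography estimated from matched features, and the shift by an affine or perspective resampling.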
In an embodiment of the invention, the step of comparing each first gradient value with the corresponding second gradient value to produce the first pixel comparison results, and generating the parameter map from these comparison results, comprises: dividing each second gradient value by the corresponding first gradient value to produce a plurality of gradient comparison values; and generating a plurality of parameter values from these gradient comparison values and recording them as the parameter map.
In an embodiment of the invention, the step of generating the parameter values from the gradient comparison values comprises: determining whether each gradient comparison value is greater than a first gradient threshold; if it is, setting the corresponding parameter value to a first value.
In an embodiment of the invention, the step of generating the parameter values from the gradient comparison values further comprises: if the gradient comparison value is not greater than the first gradient threshold, determining whether it is greater than a second gradient threshold; if it is, setting the corresponding parameter value to a second value; otherwise, setting the corresponding parameter value to a third value, wherein the first gradient threshold is greater than the second gradient threshold.
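Taken together, the two threshold tests above amount to a three-way classification of each pixel's gradient ratio. A sketch under assumed values follows — the patent does not fix concrete numbers, so the thresholds t_high and t_low and the three output map values here are illustrative:

```python
import numpy as np

def parameter_map(g1, g2, t_high=2.0, t_low=1.0, values=(255, 128, 0)):
    # Per-pixel gradient comparison value G2/G1, classified by two
    # thresholds (t_high > t_low) into first/second/third map values.
    eps = 1e-6                                 # avoid division by zero in flat regions
    ratio = g2 / (g1 + eps)
    pmap = np.full(ratio.shape, values[2], dtype=np.uint8)  # third value by default
    pmap[ratio > t_low] = values[1]            # second value
    pmap[ratio > t_high] = values[0]           # first value
    return pmap
```

A high ratio means the second (background-focused) image is sharper at that pixel, so in the bokeh case high map values mark the background region.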
In an embodiment of the invention, the step of generating the composite image at least from the first parameter map and the first image comprises: performing a blurring procedure on the first image to produce a blurred image; and blending the first image with the blurred image according to the first parameter map to produce a subject-sharp image.
In an embodiment of the invention, the step of blending the first image with the blurred image according to the first parameter map to produce the subject-sharp image comprises: determining whether each parameter value is greater than a first blending threshold; if it is, taking the corresponding pixel of the blurred image as the pixel of the subject-sharp image; if it is not, determining whether the parameter value is greater than a second blending threshold; if it is, computing the pixel of the subject-sharp image from the parameter value; otherwise, taking the corresponding pixel of the first image as the pixel of the subject-sharp image, wherein the first blending threshold is greater than the second blending threshold.
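The three-branch blending rule above can be sketched as a single clipped linear weight. The patent leaves the middle branch (computing the pixel from the parameter value) unspecified, so the sketch assumes linear interpolation between the two thresholds; the threshold values m_high and m_low are likewise illustrative:

```python
import numpy as np

def blend_bokeh(img1, img_blur, pmap, m_high=200, m_low=50):
    # High map values (background) take the blurred pixel, low values
    # (subject) keep the sharp first image; in between, blend linearly.
    w = np.clip((pmap.astype(float) - m_low) / (m_high - m_low), 0.0, 1.0)
    return w * img_blur + (1.0 - w) * img1
```

Above m_high the weight saturates at 1 (pure blurred pixel) and below m_low at 0 (pure first-image pixel), reproducing the two outer branches of the rule.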
In an embodiment of the invention, the step of generating the composite image at least from the first parameter map and the first image comprises: computing, from the pixel values of each pixel in the first image and the second image, a plurality of sums of absolute differences (SAD) corresponding to each pixel, and adjusting the parameter values in the first parameter map according to these sums; and blending the first image with the displacement-corrected second image according to the adjusted first parameter map to produce an all-in-focus image.
In an embodiment of the invention, the step of computing the sum of absolute differences corresponding to each pixel from the pixel values of the first image and the second image, and adjusting the parameter values in the first parameter map accordingly, comprises: when a sum of absolute differences is greater than a motion threshold, determining a weight factor for the corresponding parameter value from that sum, and adjusting the parameter value with the weight factor, wherein each parameter value decreases as its corresponding sum of absolute differences rises.
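A sketch of this ghost-suppression step follows: a block SAD between the two images flags pixels where something moved between the exposures, and the weight factor pulls the parameter value toward zero as the SAD grows. The block radius, motion threshold, and linear falloff are assumptions; the patent only requires that the parameter value decrease as the SAD rises past the motion threshold.

```python
import numpy as np

def block_sad(img1, img2, y, x, radius=1):
    # Sum of absolute differences over a (2r+1) x (2r+1) block around (y, x).
    a = img1[y - radius:y + radius + 1, x - radius:x + radius + 1].astype(int)
    b = img2[y - radius:y + radius + 1, x - radius:x + radius + 1].astype(int)
    return int(np.abs(a - b).sum())

def adjust_parameter(value, sad, sad_threshold=50, falloff=100.0):
    # Below the motion threshold the map value is kept; above it, the
    # weight factor shrinks the value linearly as the SAD rises.
    if sad <= sad_threshold:
        return value
    weight = max(0.0, 1.0 - (sad - sad_threshold) / falloff)
    return value * weight
```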
In an embodiment of the invention, the step of blending the first image with the displacement-corrected second image according to the weight-adjusted first parameter map to produce the all-in-focus image comprises: determining whether each parameter value is greater than a first blending threshold; if it is, taking the corresponding pixel of the displacement-corrected second image as the pixel of the all-in-focus image; if it is not, determining whether the parameter value is greater than a second blending threshold; if it is, computing the pixel of the all-in-focus image from the parameter value; otherwise, taking the corresponding pixel of the first image as the pixel of the all-in-focus image, wherein the first blending threshold is greater than the second blending threshold.
From another viewpoint, the invention proposes an image acquisition device comprising an image collection module, an image correction module, a gradient calculation module, a map generation module and an image synthesis unit. The image collection module captures a first image at a first focal length and a second image at a second focal length, wherein the first focal length is focused on at least one subject. The image correction module performs a geometric correction procedure on the second image to produce a displacement-corrected second image. The gradient calculation module performs a gradient operation on each pixel of the first image to produce a plurality of first gradient values, and on each pixel of the displacement-corrected second image to produce a plurality of second gradient values. The map generation module compares each first gradient value with the corresponding second gradient value to produce a plurality of first pixel comparison results, and generates a first parameter map from these comparison results. The image synthesis unit generates a composite image from the first parameter map and the first image, and generates an output image at least from the composite image.
Based on the above, the invention exploits the different image characteristics caused by different focal lengths: the same scene is captured at different focal lengths, and a parameter map is generated by comparing the gradient difference of each pixel between the images. With the information in the parameter map, a sharp all-in-focus image, or a bokeh image with a sharp subject and a blurred background, can be produced, achieving a good all-in-focus or bokeh effect.
To make the above features and advantages of the invention more comprehensible, embodiments are described in detail below with reference to the accompanying drawings.
Brief description of the drawings
Fig. 1 is a functional block diagram of an image acquisition device according to an embodiment of the invention;
Fig. 2 is a flow chart of an image processing method according to an embodiment of the invention;
Fig. 3 is a schematic diagram of an image processing method according to another embodiment of the invention;
Fig. 4 is a block diagram of an image acquisition device according to a further embodiment of the invention;
Fig. 5 is a flow chart of an image processing method according to a further embodiment of the invention;
Fig. 6 is a detailed flow chart of step S550 of Fig. 5 according to a further embodiment of the invention;
Fig. 7 is a detailed flow chart of step S560 of Fig. 5 according to a further embodiment of the invention;
Fig. 8 is a block diagram of an image acquisition device according to yet another embodiment of the invention;
Fig. 9A is a schematic diagram of a pixel block according to yet another embodiment of the invention;
Fig. 9B is a schematic diagram of the relationship between the sum of absolute differences and the weight factor according to yet another embodiment of the invention.
Description of reference numerals:
100,400,800: image acquiring device;
110,410,810: image collection module;
120,420,820: image correction module;
130,430,830: gradient calculation module;
140,440,840: map generation module;
150,450,850: image synthesis unit;
460: image blurring module;
860: map adjusting module;
Img1, Img2, Img3, Img_b, Img_F, Img1_blur, Img2_cal: images;
G1, G2: Grad;
bokeh_map: bokeh map;
Map, allin_map: parameter map;
S210~S250, S510~S560, S610~S625, S710~S750: step.
Embodiment
The invention proposes a method of producing bokeh images and all-in-focus images from multiple images captured at different focal lengths. A shot is first taken focused on at least one subject to be photographed, and the same scene is then captured at another focal length. By comparing the pixel gradients of the two images, a parameter map is produced, from which the subject region of the image can be determined, and an image with a bokeh effect can then be generated. Alternatively, by comparing the pixel gradients of at least two images, a parameter map serving as the basis for blending the images is produced, from which an all-in-focus image is generated. To make the content of the invention clearer, embodiments by which the invention can indeed be implemented are given below as examples.
Fig. 1 is a functional block diagram of an image acquisition device according to an embodiment of the invention. Referring to Fig. 1, the image acquisition device 100 of the present embodiment is, for example, a digital camera, a single-lens reflex camera, a digital video camera, or a smartphone, tablet computer, head-mounted display or other device with an image capture function; it is not limited to the above. The image acquisition device 100 comprises an image collection module 110, an image correction module 120, a gradient calculation module 130, a map generation module 140 and an image synthesis unit 150.
The image collection module 110 comprises a zoom lens and a photosensitive element. The photosensitive element is, for example, a charge-coupled device (CCD), a complementary metal-oxide-semiconductor (CMOS) element or another element; the image collection module 110 may also comprise an aperture and so on, and is not limited in this respect. The image collection module 110 can capture different images at different focal length values.
On the other hand, the image correction module 120, gradient calculation module 130, map generation module 140 and image synthesis unit 150 may be implemented in software, hardware or a combination thereof, and are not limited in this respect. The software is, for example, source code, an operating system, application software or a driver. The hardware is, for example, a central processing unit (CPU), or a programmable general-purpose or special-purpose microprocessor.
Fig. 2 is a flow chart of an image processing method according to an embodiment of the invention. The method of the present embodiment is applicable to the image acquisition device 100 of Fig. 1, and the detailed steps of the present embodiment are described below in conjunction with the modules of the image acquisition device 100:
First, in step S210, the image collection module 110 captures a first image at a first focal length and a second image at a second focal length, wherein the first focal length is focused on at least one subject. That is, the image collection module 110 shoots two images at two different focal lengths. Under otherwise identical conditions, images captured at different focal lengths will differ: in the first image, which is focused on the subject, the subject is the sharpest part of the picture.
In step S220, the image correction module 120 performs a geometric correction procedure on the second image to produce a displacement-corrected second image. Because the first image and the second image are shot of the same scene in succession, shaking or movement of the camera in between may capture the images at slightly different angles, producing a displacement between the first image and the second image. The image correction module 120 therefore performs the geometric correction procedure on the second image; in other words, the geometric correction procedure aligns the pixel positions of the displacement-corrected second image with those of the first image.
In step S230, the gradient calculation module 130 performs a gradient operation on each pixel of the first image to produce a plurality of first gradient values, and on each pixel of the displacement-corrected second image to produce a plurality of second gradient values. That is, each pixel in the first image has a first gradient value, and each pixel in the displacement-corrected second image has a second gradient value.
In step S240, the map generation module 140 compares each first gradient value with the corresponding second gradient value to produce a plurality of first pixel comparison results, and generates a first parameter map from these comparison results. Simply put, the map generation module 140 compares the gradient values of the pixels at the same position, so that there is one pixel comparison result for each pixel position.
In step S250, the image synthesis unit 150 generates a composite image from the first parameter map and the first image, and generates an output image at least from the composite image. Specifically, after the parameter map is obtained, the image acquisition device 100 can blend the first image with a processed version of itself according to the parameter map to produce the composite image. Alternatively, the image acquisition device 100 can blend the first image with the second image according to the parameter map to produce the composite image.
It is worth mentioning that although the above embodiment takes two images captured at two focal lengths as an example, the invention is not limited to this. Depending on the practical application, it extends to obtaining the final output image from multiple images captured at multiple focal lengths. For instance, since images at different focal lengths each have different sharp regions, a sharp all-in-focus image can be obtained from images at multiple different focal lengths. In addition, the image processing method of the invention can use three images focused on the subject, the background and the foreground respectively to produce an output image in which only the subject is sharp. Another embodiment is described in detail below.
Fig. 3 is a schematic diagram of an image processing method according to another embodiment of the invention. In the present embodiment, the image collection module 110 captures a first image Img1 and a second image Img2 at the first focal length and the second focal length. Then, as explained in the above embodiment, the composite image Img_b is produced through the processing of the image correction module 120, gradient calculation module 130, map generation module 140 and image synthesis unit 150, which is not repeated here. It should be noted that in the above embodiment the image synthesis unit 150 may take the composite image Img_b as the final output image, whereas in the present embodiment the composite image Img_b is further combined with another image to produce the final output image Img_F. Specifically, as shown in Fig. 3, the image collection module 110 further captures a third image Img3 at a third focal length. The image correction module 120 performs the geometric correction procedure on the third image Img3 to produce a displacement-corrected third image Img3.
Then, the gradient calculation module performs the gradient operation on each pixel of the composite image Img_b to produce a plurality of third gradient values, and on each pixel of the displacement-corrected third image Img3 to produce a plurality of fourth gradient values. The map generation module 140 compares each third gradient value with the corresponding fourth gradient value to produce a plurality of second pixel comparison results, and generates a second parameter map from these comparison results. Since this second parameter map is obtained by computing the gradient values of the composite image Img_b and the third image Img3, its parameter values differ from those of the aforementioned parameter map computed from the first image Img1 and the second image Img2. The image synthesis unit 150 blends the displacement-corrected third image Img3 with the composite image Img_b according to the second parameter map to produce the output image Img_F. As can be seen from the above, the invention does not limit the number of images blended into the final output image, which is determined by the practical application.
However, the implementation of the invention is not limited to the above description, and the content of the above embodiments may be varied as appropriate for actual needs. For example, in a further embodiment of the invention, the image acquisition device may further comprise an image blurring module to produce a subject-sharp image with a bokeh effect. In another embodiment, the image acquisition device may further comprise a map adjusting module to produce an all-in-focus image with a good all-in-focus effect. To further illustrate how the gradient calculation module, map generation module and image synthesis unit of the invention synthesize bokeh images and all-in-focus images from images at different focal lengths, embodiments are described in detail below.
Fig. 4 is a block diagram of an image acquisition device according to another embodiment of the invention. The image acquisition device 400 comprises an image collection module 410, an image correction module 420, a gradient calculation module 430, a map generation module 440, an image synthesis unit 450 and an image blurring module 460. The image collection module 410, image correction module 420, gradient calculation module 430, map generation module 440 and image synthesis unit 450 are similar to the image collection module 110, image correction module 120, gradient calculation module 130, map generation module 140 and image synthesis unit 150 shown in Fig. 1, and are not repeated here. The embodiment of Fig. 4 can be understood by analogy with the related description of Fig. 1 to Fig. 3.
It should be specially noted that, unlike the image acquisition device 100 shown in Fig. 1, the image acquisition device 400 further comprises an image blurring module 460. The image blurring module 460 adopts, for example, a Gaussian filter, a bilateral filter or an average filter to perform the blurring procedure on the first image Img1; the invention is not limited in this respect. In addition, in the present embodiment it is assumed that the second focal length is focused on the background.
Fig. 5 is a flow chart of an image processing method according to an embodiment of the invention. The method of the present embodiment is applicable to the image acquisition device 400 of Fig. 4, and the detailed steps of the present embodiment are described below in conjunction with the modules of the image acquisition device 400:
First, in step S510, the image collection module 410 captures a first image Img1 at a first focal length and a second image Img2 at a second focal length, wherein the first focal length is focused on at least one subject and the second focal length is focused on the background. In the first image Img1, captured focused on the subject, the subject is sharper and the background more blurred; compared with the first image Img1, the background is sharper in the second image Img2, captured focused on the background. Then, as described in step S520, the image blurring module 460 performs a blurring procedure on the first image Img1 to produce a blurred image Img1_blur.
In step S530, the image correction module 420 performs a geometric correction procedure on the second image Img2 to produce a displacement-corrected second image Img2_cal. In detail, the image correction module 420 performs motion estimation on the first image Img1 and the second image Img2 to compute a homography matrix. The image correction module 420 then applies a geometric affine transformation to the second image Img2 according to this homography matrix to obtain the displacement-corrected second image Img2_cal. Accordingly, the pixel positions of the subject region in the first image Img1 coincide with those of the subject region in the displacement-corrected second image Img2_cal.
Then, in step S540, the gradient calculation module 430 performs a gradient operation on each pixel of the first image Img1 to produce a plurality of first gradient values G1, and on each pixel of the displacement-corrected second image Img2_cal to produce a plurality of second gradient values G2. The gradient operation may be a horizontal gradient operation, a vertical gradient operation or a diagonal gradient operation; the invention is not limited in this respect. That is, the first and second gradient values may be horizontal, vertical or diagonal gradient values according to the gradient operation used. The horizontal gradient value of a pixel is the sum of the absolute gray-level differences between the pixel and its two horizontally adjacent pixels. The vertical gradient value of a pixel is the sum of the absolute gray-level differences between the pixel and its two vertically adjacent pixels. The diagonal gradient value of a pixel comprises the sum of the absolute gray-level differences between the pixel and its diagonally adjacent pixels.
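The horizontal gradient defined above — the sum of the absolute gray-level differences between a pixel and its two horizontal neighbours — can be sketched directly. Edge pixels here are handled by replicating the border, which the patent does not specify:

```python
import numpy as np

def horizontal_gradient(img):
    # Per-pixel |p - left neighbour| + |p - right neighbour|,
    # with border columns replicated at the image edges.
    f = img.astype(int)
    left = np.pad(f, ((0, 0), (1, 0)), mode='edge')[:, :-1]
    right = np.pad(f, ((0, 0), (0, 1)), mode='edge')[:, 1:]
    return np.abs(f - left) + np.abs(f - right)
```

The vertical and diagonal variants follow by padding along the other axis or along both axes.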
It should be noted that, in the present embodiment, since the first image Img1 is captured with the focus on the subject, the subject in the first image Img1 is clearer than in the displacement-corrected image Img2_cal. In other words, within the subject region at the first focal length, the gradient values of the pixels of the first image Img1 are greater than the gradient values of the pixels at the same positions in the displacement-corrected second image Img2_cal. Conversely, since the displacement-corrected second image Img2_cal is produced with the focus on the background, the gradient values of the pixels in the background region of the first image Img1 are smaller than the gradient values of the pixels at the same positions in the displacement-corrected image Img2_cal.
Based on this, in step S550, the map generation module 440 compares each first gradient value G1 with the corresponding second gradient value G2 to produce a plurality of comparison results, and produces a parameter map according to the comparison results. It should be noted that, in the present embodiment, the parameter map is referred to as the bokeh map bokeh_map. Specifically, the map generation module 440 compares the gradient values of the pixels at each identical position in the first image Img1 and the displacement-corrected second image Img2_cal. Moreover, based on the above relationship between the gradient values of each pixel in the first image Img1 and the displacement-corrected second image Img2_cal, the comparison results can determine whether each pixel in the first image Img1 is located in the subject region or in the background region. Through the comparison results of the gradient values of each pixel in the first image Img1 and the displacement-corrected second image Img2_cal, the map generation module 440 can produce the bokeh map bokeh_map. In other words, the bokeh map bokeh_map carries the comparison-result information of the gradient values of the pixels at each identical position in the first image Img1 and the displacement-corrected second image Img2_cal.
Finally, in step S560, the image synthesis module 450 mixes the first image Img1 with the blurred image Img1_blur according to the bokeh map bokeh_map to produce a subject-clear image Img1_bokeh. In other words, the second image Img2 is used to produce the bokeh map bokeh_map, and the image synthesis module 450 mixes the first image Img1 and the blurred image Img1_blur according to the bokeh map bokeh_map to produce the subject-clear image Img1_bokeh having the bokeh effect. In this way, a bokeh image that keeps the captured subject region clear while blurring the other background regions can be produced.
In addition, how the map generation module 440 produces the bokeh map bokeh_map according to the result of comparing each first gradient value G1 with the corresponding second gradient value G2 is further described in detail below. Fig. 6 is a detailed flowchart illustrating step S550 of Fig. 5 according to an embodiment of the invention. Referring to Fig. 4 and Fig. 6, in step S610, the map generation module 440 divides each second gradient value G2 by the corresponding first gradient value G1 to produce a gradient comparison value. In step S620, the map generation module 440 produces a plurality of parameter values according to the gradient comparison values, and records the parameter values as the bokeh map bokeh_map. For instance, if the first image Img1 and the displacement-corrected image Img2_cal each have 1024*768 pixels, 1024*768 gradient comparison values are produced after the operation of the image processing module 140, and the bokeh map bokeh_map then comprises 1024*768 parameter values. Here, step S620 can be divided into steps S621 to S625 for implementation.
The map generation module 440 judges whether the gradient comparison value of each position is greater than a first gradient threshold (step S621). If the gradient comparison value is greater than the first gradient threshold, the map generation module 440 sets the parameter value corresponding to this gradient comparison value to a first value (step S622), referred to here as the bokeh background value. In other words, if the gradient comparison value is greater than the first gradient threshold, the pixel at this position is located in the background region. If the gradient comparison value is not greater than the first gradient threshold, the map generation module 440 judges whether the gradient comparison value is greater than a second gradient threshold (step S623). If the gradient comparison value is greater than the second gradient threshold, the map generation module 440 sets the parameter value corresponding to this gradient comparison value to a second value (step S624), referred to here as the bokeh edge value. Simply put, if the gradient comparison value lies between the second gradient threshold and the first gradient threshold, the pixel at this position lies in the edge region connecting the subject and the background. If the gradient comparison value is not greater than the second gradient threshold, the map generation module 440 sets the parameter value corresponding to this gradient comparison value to a third value (step S625), referred to here as the bokeh subject value, and the pixel at this position is located in the subject region. It should be noted that the bokeh edge value lies between the bokeh background value and the bokeh subject value, that the first gradient threshold is greater than the second gradient threshold, and that the first gradient threshold and the second gradient threshold can be set appropriately according to actual conditions; the invention is not limited in this respect.
For instance, assuming the map generation module 440 sets the parameter values between 0 and 255, the image processing module 140 can use the following pseudocode (1) to produce the bokeh map bokeh_map:
if (Gra2 / Gra1 > TH1)
    Map = 255
else if (Gra2 / Gra1 > TH2)
    Map = ((Gra2 / Gra1) - TH2) / (TH1 - TH2) × 255
else
    Map = 0
(1)
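A minimal vectorized sketch of pseudocode (1) follows. The threshold values th1 and th2 and the small divide-by-zero guard are illustrative assumptions; the patent leaves them to be set according to actual conditions.

```python
import numpy as np

def bokeh_map(gra1, gra2, th1=2.0, th2=1.0):
    # Pseudocode (1): 255 marks background, 0 marks subject,
    # and a linear ramp marks the edge (transition) region.
    ratio = gra2 / np.maximum(gra1, 1e-6)   # divide-by-zero guard (assumption)
    m = np.zeros(ratio.shape)
    edge = (ratio > th2) & (ratio <= th1)
    m[ratio > th1] = 255
    m[edge] = (ratio[edge] - th2) / (th1 - th2) * 255
    return m

m = bokeh_map(np.array([1.0, 1.0, 1.0]), np.array([3.0, 1.5, 0.5]))
```

With these inputs the three pixels land in the background, edge, and subject branches respectively, producing 255, 127.5, and 0.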
In this exemplary embodiment, the bokeh background value is 255, the bokeh subject value is 0, and the bokeh edge value is calculated from the first gradient threshold, the second gradient threshold, and the ratio between the second gradient value and the first gradient value. Gra2 is the second gradient value, Gra1 is the first gradient value, TH1 is the first gradient threshold, TH2 is the second gradient threshold, and Map represents the plurality of parameter values in the bokeh map bokeh_map.
In addition, how the image synthesis module 450 uses the bokeh map bokeh_map to produce the subject-clear image Img1_bokeh is described in detail below. Fig. 7 is a detailed flowchart illustrating step S560 of Fig. 5 according to an exemplary embodiment of the invention; please refer to Fig. 4 and Fig. 7. It should be noted that the pixel at each position in the first image Img1 corresponds to a respective parameter value in the bokeh map bokeh_map. In step S710, the image synthesis module 450 judges whether each parameter value is greater than a first blending threshold. If the parameter value is greater than the first blending threshold, then in step S720 the image synthesis module 450 takes the pixel of the blurred image Img1_blur corresponding to this parameter value as the pixel at the same position in the subject-clear image Img1_bokeh. That is, the pixels at these positions are identified as belonging to the background region, so the pixels of the blurred image Img1_blur are taken to produce an image with a blurred background.
If the parameter value is not greater than the first blending threshold, then in step S730 the image synthesis module 450 judges whether the parameter value is greater than a second blending threshold. If the parameter value is greater than the second blending threshold, then in step S740 the image synthesis module 450 calculates the corresponding pixel of the subject-clear image Img1_bokeh according to the parameter value. In detail, the pixel positions whose parameter values lie between the first blending threshold and the second blending threshold are identified as the edge region connecting the background region and the subject region. Therefore, the pixels of this edge region in the subject-clear image Img1_bokeh can be obtained by synthesizing the first image Img1 and the blurred image Img1_blur.
If the parameter value is not greater than the second blending threshold, then in step S750 the image synthesis module 450 takes the pixel of the first image Img1 corresponding to the parameter value as the pixel of the subject-clear image Img1_bokeh. That is, the positions corresponding to these parameter values are identified as lying in the subject region, so the pixels of the clear subject region in the first image Img1 are taken as the subject-region pixels in the subject-clear image Img1_bokeh. The first blending threshold is greater than the second blending threshold.
For instance, assuming the image synthesis module 450 sets the parameter values between 0 and 255, the image synthesis module 450 can use the following pseudocode (2) to produce the subject-clear image Img1_bokeh:
if (Map ≥ Blend_TH1)    // Background area
    Img1_Bokeh = Img1_Blur
else if (Map ≥ Blend_TH2)    // Transition area
    w_Bokeh = LUT[Map]    // LUT is a lookup table with value range 0~255
    Img1_Bokeh = (w_Bokeh × Img1 + (256 - w_Bokeh) × Img1_Blur) / 256
else    // Subject area
    Img1_Bokeh = Img1
(2)
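A minimal sketch of pseudocode (2) is given below. The actual LUT contents are not specified by the text; here the decreasing mapping w = 255 - Map is assumed in its place, so a large parameter value (background) gives little weight to the sharp image Img1, and the blending thresholds are illustrative.

```python
import numpy as np

def blend_bokeh(img1, img1_blur, pmap, blend_th1=200, blend_th2=50):
    # Transition region: weighted mix per pseudocode (2), with the
    # LUT replaced by the assumed decreasing mapping w = 255 - Map.
    w = 255 - pmap.astype(int)
    out = (w * img1 + (256 - w) * img1_blur) // 256
    out = np.where(pmap >= blend_th1, img1_blur, out)   # background area
    out = np.where(pmap < blend_th2, img1, out)         # subject area
    return out

out = blend_bokeh(np.array([100, 100, 100]),   # sharp image
                  np.array([0, 0, 0]),         # blurred image
                  np.array([255, 128, 0]))     # bokeh map values
```

The three map values 255, 128, and 0 exercise the background, transition, and subject branches in turn.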
In this exemplary embodiment, Blend_TH1 is the first blending threshold, Blend_TH2 is the second blending threshold, Map represents the plurality of parameter values in the bokeh map bokeh_map, and LUT[] is a lookup function. It is worth mentioning that the pixels of the edge region can be calculated using the concept of weighting. As shown in the formula of the above exemplary pseudocode, the parameter value is mapped to a synthesis weight w_Bokeh, and the pixels of the edge region are synthesized with the synthesis weight w_Bokeh. That is, for a pixel of the edge region, its degree of blur is decided by how close its position is to the subject region or to the blurred region. In this way, a subject-clear image Img1_bokeh in which the subject region and the background region are joined naturally can be produced, so that the edge between the subject and the background in the bokeh image is relatively soft and natural.
In the above embodiment, the second focal length is focused on the background as an example, so a background-blurred image with a blurred background and a clear subject can be produced accordingly. As can be seen from the description of Fig. 3, the image processing method of the invention can obtain the final output image from multiple images. Based on this, in other embodiments, the image acquiring device may obtain another image at a third focal length focused on the foreground. The image acquiring device can then use the previously produced background-blurred image together with the other image focused on the foreground, through the same procedure as that used to produce the background-blurred image, to further calculate and produce an image in which both the foreground and the background are blurred and the subject is clear.
Fig. 8 is a block diagram of an image acquiring device according to yet another embodiment of the invention. Referring to Fig. 8, in the present embodiment, the image acquiring device 800 is used to produce a full depth-of-field (all-in-focus) image. The image acquiring device 800 comprises an image collection module 810, an image correction module 820, a gradient calculation module 830, a map generation module 840, an image synthesis module 850, and a map adjusting module 860. The image collection module 810, the image correction module 820, the gradient calculation module 830, the map generation module 840, and the image synthesis module 850 are identical or similar to the image collection module 410, the image correction module 420, the gradient calculation module 430, the map generation module 440, and the image synthesis module 450 shown in Fig. 4, and are not described again here.
It should be particularly noted that, unlike the image acquiring device 400 shown in Fig. 4, the image acquiring device 800 of the present embodiment does not have an image blurring module but further comprises the map adjusting module 860, which is used to adjust the parameter map produced by the map generation module 840. In the present embodiment, the image collection module 810 obtains a first image Img1 at a first focal length and obtains a second image Img2 at a second focal length, wherein the first focal length is focused on at least one subject and the second focal length is focused on a region other than the subject.
Then, the image correction module 820 performs the geometric correction process on the second image Img2 to produce a displacement-corrected second image Img2_cal. The gradient calculation module 830 then performs the gradient operation on each pixel of the first image Img1 to produce a plurality of first gradient values G1, and performs the gradient operation on each pixel of the displacement-corrected second image Img2_cal to produce a plurality of second gradient values G2. Then, the map generation module 840 compares each first gradient value G1 with the corresponding second gradient value G2 to produce a plurality of comparison results, and produces a parameter map map according to the comparison results. The step in which the image correction module 820 produces the displacement-corrected second image Img2_cal, the step in which the gradient calculation module 830 performs the gradient operation, and the step in which the map generation module 840 produces the parameter map map are similar to the corresponding steps of the image acquiring device 400 shown in Fig. 4, and can be inferred with reference to the descriptions of Fig. 4 and Fig. 5.
In general, the gradient values of the pixel at the same position in the two images are different, namely the first gradient value G1 and the second gradient value G2 in the present embodiment. For the pixel at a given position, if the gradient value of the pixel at this position is higher in the first image (that is, G1 is greater than G2), the pixel at this position generally lies in a relatively clear region of the first image (i.e., a region in focus at the first focal length). If the gradient value of the pixel at this position is higher in the second image (that is, G2 is greater than G1), the pixel at this position generally lies in a relatively clear region of the second image (i.e., a region in focus at the second focal length). The map generation module 840 can likewise obtain the parameter map map using pseudocode (1), although the invention is not limited thereto.
Therefore, in the present embodiment, through the comparison results of the gradient values of each pixel in the first image Img1 and the displacement-corrected second image Img2_cal, the map generation module 840 can produce the parameter map map. In other words, the parameter map map carries the comparison-result information of the gradient values of the pixels at each identical position in the first image Img1 and the displacement-corrected second image Img2_cal. In this way, the image acquiring device 800 can learn from the parameter map map whether the pixel at a given position is located in the clear part of the first image Img1 at the first focal length or in the clear part of the second image Img2 at the second focal length. Accordingly, the image synthesis module 850 can pick out the clearer parts from the two images to synthesize an output image with more clear parts.
It is worth mentioning that, while the user captures the first image and the second image successively for the same scene, there is a time difference between the captures, which may cause objects in the scene to move. The image correction module 820 corrects only the global displacement (or camera displacement) of the images and cannot correct the objects in the scene; if individually moving objects exist in the images, ghost artifacts may appear in the blended full depth-of-field image. The map adjusting module 860 of the present embodiment is used to mitigate the above ghost artifacts.
Here, the map adjusting module 860 calculates a plurality of sums of absolute differences (SAD) corresponding to each pixel according to the pixel values of each pixel in the first image Img1 and the second image Img2, and adjusts the plurality of parameter values in the parameter map map according to these sums of absolute differences. According to the adjusted parameter map, the first image Img1 and the displacement-corrected second image Img2_cal are then mixed to produce the full depth-of-field image.
Specifically, an n×n pixel block (n being a positive integer) is first obtained in the first image Img1. Assuming n is 5, as shown in Fig. 9A, the 5×5 pixel block obtained in the present embodiment comprises 25 pixel positions P00 to P44. Similarly, the n×n pixel block centered at the same pixel position is obtained in the displacement-corrected second image Img2_cal. Then, the sums of absolute differences of the specific color-space components of each pixel in the respective n×n pixel blocks of the first image Img1 and the displacement-corrected second image Img2_cal are calculated, and the maximum among them is taken as the representative. The sum of absolute differences reflects whether the characteristics of the first image Img1 and the displacement-corrected second image Img2_cal are close within the local area of the n×n pixel block. Under the YCbCr color space, the specific color-space components comprise the luminance component, the blue chroma component, and the red chroma component, although the invention is not limited to this color space. Taking the YCbCr color space and n=5 as an example, the present embodiment calculates the sum of absolute differences SAD between the pixel positions of the first image Img1 and the displacement-corrected second image Img2_cal with the following formulas:
SAD_Y  = Σ(i=0..4, j=0..4) |Y1_ij - Y2_ij|
SAD_Cb = Σ(i=0..4, j=0..4) |Cb1_ij - Cb2_ij|
SAD_Cr = Σ(i=0..4, j=0..4) |Cr1_ij - Cr2_ij|
SAD = max(max(SAD_Y, SAD_Cb), SAD_Cr)
Here, i and j represent the position of the pixel point. In the example shown in Fig. 9A, each pixel block comprises 25 pixel positions P00 to P44. Y1_ij is the luminance component of pixel P_ij in the first image, and Y2_ij is the luminance component of pixel P_ij in the second image. Cb1_ij is the blue chroma component of pixel P_ij in the first image, and Cb2_ij is the blue chroma component of pixel P_ij in the second image. Cr1_ij is the red chroma component of pixel P_ij in the first image, and Cr2_ij is the red chroma component of pixel P_ij in the second image. SAD_Y, SAD_Cb, and SAD_Cr are respectively the sums of absolute differences on the specific color-space components.
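The per-block SAD computation above can be sketched compactly; for brevity this sketch uses 2×2 blocks rather than the 5×5 blocks of the embodiment, with the three YCbCr planes stacked along the first axis.

```python
import numpy as np

def block_sad(block1, block2):
    # block1/block2: (3, n, n) arrays holding the Y, Cb, Cr planes
    # of corresponding n x n pixel blocks; returns the maximum of
    # the per-component sums of absolute differences.
    diff = np.abs(block1.astype(int) - block2.astype(int))
    per_component = diff.reshape(3, -1).sum(axis=1)   # SAD_Y, SAD_Cb, SAD_Cr
    return per_component.max()

b1 = np.zeros((3, 2, 2), dtype=int)
b2 = np.array([[[1, 1], [1, 1]],     # Y  plane -> SAD_Y  = 4
               [[2, 3], [2, 3]],     # Cb plane -> SAD_Cb = 10
               [[1, 2], [1, 2]]])    # Cr plane -> SAD_Cr = 6
sad = block_sad(b1, b2)
```

Taking the maximum over the three components means a mismatch in any one of luminance or chroma is enough to flag the block as changed.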
Based on this, the map adjusting module 860 of the invention can, for example, use the above formulas to obtain the sum of absolute differences SAD. Afterwards, the map adjusting module 860 judges whether the sum of absolute differences SAD is greater than a movement threshold TH_SAD. If the SAD is not greater than the movement threshold TH_SAD, the pixel block has no subject-movement situation, and the parameter values in the parameter map corresponding to this pixel block do not need to be adjusted. If the SAD is greater than the movement threshold TH_SAD, the pixel block exhibits subject movement, so the map adjusting module 860 adjusts the parameter values in the parameter map corresponding to this pixel block according to the magnitude of the SAD. For instance, the map adjusting module 860 can use the following pseudocode (3) to produce the adjusted parameter map allin_map:
if (SAD > TH_SAD)
    Fac = LUT[SAD]
    allin_map = map × Fac
else
    allin_map = map
(3)
Here, Fac represents the weight factor used by the map adjusting module 860 to adjust the parameter map map. It can thus be seen that, when the sum of absolute differences SAD is greater than the movement threshold TH_SAD, the map adjusting module 860 decides the weight factor Fac of each parameter value according to the SAD, and uses the weight factor Fac to adjust the parameter values in the parameter map map. The weight factor Fac declines as the SAD increases.
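A minimal sketch of pseudocode (3) follows. The contents of LUT[SAD] are not specified by the text, so a linear ramp falling from 1 toward 0 as SAD rises past TH_SAD is assumed here, and the threshold values are illustrative.

```python
import numpy as np

def adjust_map(pmap, sad, th_sad=100, sad_max=500):
    # Fac = LUT[SAD] replaced by an assumed linear ramp: blocks with
    # larger SAD (more subject movement) get their parameter values
    # shrunk toward zero, suppressing ghost artifacts in the blend.
    if sad <= th_sad:
        return pmap
    fac = max(0.0, 1.0 - (sad - th_sad) / (sad_max - th_sad))
    return pmap * fac

pmap = np.array([100.0, 200.0])
unchanged = adjust_map(pmap, sad=50)   # below TH_SAD: no adjustment
halved = adjust_map(pmap, sad=300)     # Fac = 0.5
```

Shrinking the parameter values biases the later blend toward the first image in moving regions, which is what removes the ghosting.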
Fig. 9B is a schematic diagram illustrating the relation between the sum of absolute differences and the weight factor according to yet another embodiment of the invention. As shown in Fig. 9B, when the sum of absolute differences SAD is greater than the movement threshold TH_SAD, the map adjusting module 860 decides the weight factor of each parameter value according to the SAD, and uses the weight factor to adjust the parameter value. The weight factor declines as the SAD increases; that is, each parameter value declines as its corresponding SAD rises.
Afterwards, the image synthesis module 850 can mix the first image Img1 and the displacement-corrected second image Img2_cal according to the adjusted parameter map allin_map to produce a full depth-of-field image Img_AIF free of ghost artifacts. The step in which the image synthesis module 850 produces the full depth-of-field image according to the adjusted parameter map allin_map is similar to the step in which the image synthesis module 450 produces the bokeh image according to the bokeh map bokeh_map; please refer to the related description of Fig. 7, which is not repeated here. For instance, the image synthesis module 850 can also obtain the final full depth-of-field image Img_AIF with pseudocode (4):
if (Map ≥ Blend_TH1)    // In-focus area of image 2
    Img1_AIF = Img2
else if (Map ≥ Blend_TH2)    // Transition area
    w_AIF = LUT[Map]    // LUT is a lookup table with value range 0~255
    Img1_AIF = (w_AIF × Img1 + (256 - w_AIF) × Img2) / 256
else    // In-focus area of image 1
    Img1_AIF = Img1
(4)
In this exemplary pseudocode (4), the parameter values are assumed to be between 0 and 255, Blend_TH1 is the first blending threshold, Blend_TH2 is the second blending threshold, Map represents the plurality of parameter values in the adjusted parameter map allin_map, and LUT[] is a lookup function. It is worth mentioning that the pixels of the edge region can be calculated using the concept of weighting. As shown in the formula of the above exemplary pseudocode, the parameter value is mapped to a synthesis weight w_AIF, and the pixels of the edge region are synthesized with the synthesis weight w_AIF.
Likewise, as can be seen from the description of Fig. 3, the image processing method of the invention can obtain the final output image from multiple images. Based on this, in the present embodiment, the image acquiring device 800 can obtain multiple images at multiple different focal lengths, and use the images of different focal lengths to synthesize a clear full depth-of-field image. In actual application, the scene can first be analyzed to further judge how many images of different focal lengths are needed to synthesize a full depth-of-field image in which the entire image is clear.
In summary, the image acquiring device and the image processing method thereof provided by the invention calculate and synthesize a parameter map from at least two images of different focal lengths, and synthesize a subject-clear image or a full depth-of-field image according to the parameter map. The image processing method provided by the invention allows more than one subject target to be kept clear against a blurred background, so as to highlight the more than one subject target in the image. In addition, the invention makes the edge connecting the captured subject and the background in the image soft and natural, achieving a natural image with a good bokeh effect. On the other hand, the invention can also use multiple images obtained at different focus distances to establish a full depth-of-field image in which every place in the image is clearly focused. Moreover, when the full depth-of-field image is established, the noise in the image can also be eliminated, ensuring that the established full depth-of-field image does not lose the details of the image.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the invention and not to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments can still be modified, or some or all of the technical features therein can be equivalently replaced, and that such modifications or replacements do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the invention.

Claims (16)

1. An image processing method, adapted to an image acquiring device, characterized in that the image processing method comprises:
obtaining a first image at a first focal length, and obtaining a second image at a second focal length, wherein the first focal length is focused on at least one subject;
performing a geometric correction process on the second image to produce a displacement-corrected second image;
performing a gradient operation on each pixel of the first image to produce a plurality of first gradient values, and performing the gradient operation on each pixel of the displacement-corrected second image to produce a plurality of second gradient values;
comparing each of the first gradient values with the corresponding second gradient value to produce a plurality of first pixel comparison results, and producing a first parameter map according to the first pixel comparison results; and
producing a composite image according to the first parameter map and the first image, and producing an output image at least according to the composite image.
2. The image processing method according to claim 1, characterized in that the step of producing the output image at least according to the composite image comprises:
obtaining a third image at a third focal length;
performing the geometric correction process on the third image to produce a displacement-corrected third image;
performing the gradient operation on each pixel of the composite image to produce a plurality of third gradient values, and performing the gradient operation on each pixel of the displacement-corrected third image to produce a plurality of fourth gradient values;
comparing each of the third gradient values with the corresponding fourth gradient value to produce a plurality of second pixel comparison results, and producing a second parameter map according to the second pixel comparison results; and
mixing the displacement-corrected third image and the composite image according to the second parameter map to produce the output image.
3. The image processing method according to claim 1, characterized in that the step of performing the geometric correction process on the second image to produce the displacement-corrected second image comprises:
performing motion estimation on the first image and the second image so as to calculate a homography matrix; and
performing a geometric affine transformation on the second image according to the homography matrix to obtain the displacement-corrected second image.
4. The image processing method according to claim 1, characterized in that the step of comparing each of the first gradient values with the corresponding second gradient value to produce the first pixel comparison results, and producing the parameter map according to the first pixel comparison results, comprises:
dividing the second gradient values by the corresponding first gradient values to produce a plurality of gradient comparison values; and
producing a plurality of parameter values according to the gradient comparison values, and recording the parameter values as the parameter map.
5. The image processing method according to claim 4, characterized in that the step of producing the parameter values according to the gradient comparison values comprises:
judging whether the gradient comparison values are greater than a first gradient threshold; and
if the gradient comparison values are greater than the first gradient threshold, setting the parameter values corresponding to the gradient comparison values to a first value.
6. The image processing method according to claim 5, characterized in that the step of producing the parameter values according to the gradient comparison values comprises:
if the gradient comparison values are not greater than the first gradient threshold, judging whether the gradient comparison values are greater than a second gradient threshold;
if the gradient comparison values are greater than the second gradient threshold, setting the parameter values corresponding to the gradient comparison values to a second value; and
if the gradient comparison values are not greater than the second gradient threshold, setting the parameter values corresponding to the gradient comparison values to a third value,
wherein the first gradient threshold is greater than the second gradient threshold.
7. The image processing method according to claim 4, characterized in that the step of producing the composite image at least according to the first parameter map and the first image comprises:
performing a blurring process on the first image to produce a blurred image; and
mixing the first image and the blurred image according to the first parameter map to produce a subject-clear image.
8. The image processing method according to claim 7, characterized in that the step of mixing the first image and the blurred image according to the first parameter map to produce the subject-clear image comprises:
judging whether the parameter values are greater than a first blending threshold;
if the parameter values are greater than the first blending threshold, taking the pixels of the blurred image corresponding to the parameter values as the pixels of the subject-clear image;
if the parameter values are not greater than the first blending threshold, judging whether the parameter values are greater than a second blending threshold;
if the parameter values are greater than the second blending threshold, calculating the corresponding pixels of the subject-clear image according to the parameter values; and
if the parameter values are not greater than the second blending threshold, taking the pixels of the first image corresponding to the parameter values as the pixels of the subject-clear image, wherein the first blending threshold is greater than the second blending threshold.
9. The image processing method according to claim 4, characterized in that the step of producing the composite image at least according to the first parameter map and the first image comprises:
calculating a plurality of sums of absolute differences corresponding to each pixel according to the pixel values of each pixel in the first image and the second image, and adjusting the parameter values in the first parameter map according to the sums of absolute differences; and
mixing the first image and the displacement-corrected second image according to the adjusted first parameter map to produce a full depth-of-field image.
10. The image processing method according to claim 9, wherein the step of calculating the sum of absolute differences corresponding to each pixel from the pixel values of each pixel in the first image and the second image, and adjusting the parameter values in the first parameter map according to the sums of absolute differences, comprises:
when a sum of absolute differences is greater than a motion threshold, determining a weight factor for the corresponding parameter value according to the sum of absolute differences, and adjusting the parameter value with the weight factor, wherein each parameter value declines as the corresponding sum of absolute differences rises.
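A minimal sketch of the claim-10 adjustment. The claim only requires that the parameter value decline as the sum of absolute differences rises; the linear falloff and the `scale` constant below are illustrative assumptions:

```python
def adjust_param(param, sad, motion_threshold, scale=100.0):
    """Shrink a parameter-map value where the sum of absolute differences
    flags motion, so the blend falls back toward the first image there."""
    if sad <= motion_threshold:          # static region: leave untouched
        return param
    # weight factor drops linearly with the excess sum of absolute
    # differences and is clamped at zero (assumed falloff shape)
    weight = max(0.0, 1.0 - (sad - motion_threshold) / scale)
    return param * weight
```

A pixel whose sum of absolute differences sits below the motion threshold keeps its parameter value; far above it, the weight factor reaches zero and the output pixel comes entirely from the first image.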
11. The image processing method according to claim 9, wherein the step of blending the first image and the displacement-corrected second image according to the first parameter map adjusted via the weight factors to generate the full depth-of-field image comprises:
determining whether each of the parameter values is greater than a first blending threshold;
if a parameter value is greater than the first blending threshold, taking the pixel of the displacement-corrected second image corresponding to that parameter value as the corresponding pixel of the full depth-of-field image;
if the parameter value is not greater than the first blending threshold, determining whether it is greater than a second blending threshold;
if the parameter value is greater than the second blending threshold, calculating the corresponding pixel of the full depth-of-field image from the parameter value; and
if the parameter value is not greater than the second blending threshold, taking the pixel of the first image corresponding to that parameter value as the corresponding pixel of the full depth-of-field image, wherein the first blending threshold is greater than the second blending threshold.
12. An image acquisition device, comprising:
an image capture module, capturing a first image at a first focal length and a second image at a second focal length, wherein the first focal length is focused on at least one subject;
an image correction module, performing a geometric correction process on the second image to generate a displacement-corrected second image;
a gradient computation module, performing a gradient operation on each pixel of the first image to generate a plurality of first gradient values, and performing the gradient operation on each pixel of the displacement-corrected second image to generate a plurality of second gradient values;
a map generation module, comparing each of the first gradient values with the corresponding second gradient value to generate a plurality of first pixel comparison results, and generating a first parameter map according to the first pixel comparison results; and
an image synthesis unit, generating a composite image according to the first parameter map and the first image, and generating an output image at least according to the composite image.
13. The image acquisition device according to claim 12, wherein the image capture module captures a third image at a third focal length; the image correction module performs the geometric correction process on the third image to generate a displacement-corrected third image; the gradient computation module performs the gradient operation on each pixel of the composite image to generate a plurality of third gradient values, and performs the gradient operation on each pixel of the displacement-corrected third image to generate a plurality of fourth gradient values; the map generation module compares each of the third gradient values with the corresponding fourth gradient value to generate a plurality of second pixel comparison results, and generates a second parameter map according to the second pixel comparison results; and the image synthesis unit blends the displacement-corrected third image and the composite image according to the second parameter map to generate the output image.
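Claim 13 extends the two-image flow into a cascade: each further focal-length capture is compared against the running composite and folded in. A schematic fold, with `gradient` and `blend` standing in for the gradient operation and parameter-map blending of the earlier claims (all names hypothetical):

```python
def compose_stack(images, gradient, blend):
    """Fold additional focal-length images into a running composite:
    each pass compares the new image's gradients with the composite's
    and blends the sharper content in."""
    composite = images[0]
    for img in images[1:]:
        g_comp, g_img = gradient(composite), gradient(img)
        # per-pixel comparison of gradient values drives the blend,
        # in the spirit of the parameter map of claims 12-14
        composite = blend(composite, img, g_comp, g_img)
    return composite
```

With a toy per-pixel "keep whichever gradient is larger" blend, feeding three one-row images through the fold retains the sharpest value at each position.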
14. The image acquisition device according to claim 12, wherein the map generation module divides each of the second gradient values by the corresponding first gradient value to generate a plurality of gradient comparison values, generates a plurality of parameter values according to the gradient comparison values, and records the parameter values as the first parameter map.
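Claim 14's map construction reduces to a per-pixel division. In this sketch the mapping from gradient comparison value to parameter value is a simple clamp to [0, 1], and the `eps` guard against division by zero is an added assumption; the claim specifies only the division:

```python
def parameter_map(first_grads, second_grads, eps=1e-6):
    """Divide each second gradient value by the corresponding first
    gradient value and record the results as the parameter map."""
    params = []
    for g1, g2 in zip(first_grads, second_grads):
        ratio = g2 / (g1 + eps)          # gradient comparison value
        params.append(min(1.0, ratio))   # clamp: large ratio => second image wins
    return params
```

A pixel whose second-image gradient dominates its first-image gradient gets a parameter value of 1.0, steering the later blend toward the second image at that pixel.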
15. The image acquisition device according to claim 12, further comprising an image blurring module that performs a blurring process on the first image to generate a blurred image, wherein the image synthesis unit blends the first image and the blurred image according to the first parameter map to generate a subject-sharp image.
16. The image acquisition device according to claim 12, further comprising a map adjustment module that calculates, from the pixel values of each pixel in the first image and the second image, a sum of absolute differences corresponding to each pixel and adjusts the parameter values in the first parameter map according to the sums of absolute differences, wherein the image synthesis unit blends the first image and the displacement-corrected second image according to the adjusted first parameter map to generate a full depth-of-field image.
CN201310260044.3A 2013-02-06 2013-06-26 Image acquisition device and image processing method thereof Active CN103973963B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
TW102104649 2013-02-06
TW102104649 2013-02-06

Publications (2)

Publication Number Publication Date
CN103973963A true CN103973963A (en) 2014-08-06
CN103973963B CN103973963B (en) 2017-11-21

Family

ID=51242964

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310260044.3A Active CN103973963B (en) 2013-02-06 2013-06-26 Image acquisition device and image processing method thereof

Country Status (1)

Country Link
CN (1) CN103973963B (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104333703A (en) * 2014-11-28 2015-02-04 广东欧珀移动通信有限公司 Method and terminal for photographing by virtue of two cameras
CN105933602A (en) * 2016-05-16 2016-09-07 中科创达软件科技(深圳)有限公司 Camera shooting method and device
CN106161997A (en) * 2016-06-30 2016-11-23 上海华力微电子有限公司 Improve the method and system of cmos image sensor pixel
CN106303202A (en) * 2015-06-09 2017-01-04 联想(北京)有限公司 A kind of image information processing method and device
CN108112271A (en) * 2016-01-29 2018-06-01 谷歌有限责任公司 Movement in detection image
CN108377342A (en) * 2018-05-22 2018-08-07 Oppo广东移动通信有限公司 double-camera photographing method, device, storage medium and terminal
CN105491278B (en) * 2014-10-09 2018-09-25 聚晶半导体股份有限公司 Image capture unit and digital zoom display methods
CN109816619A (en) * 2019-01-28 2019-05-28 努比亚技术有限公司 Image interfusion method, device, terminal and computer readable storage medium
CN111614898A (en) * 2016-10-05 2020-09-01 三星电子株式会社 Image processing system
WO2021017589A1 (en) * 2019-07-31 2021-02-04 茂莱(南京)仪器有限公司 Image fusion method based on gradient domain mapping
WO2021120120A1 (en) * 2019-12-19 2021-06-24 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Electric device, method of controlling electric device, and computer readable storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1392724A (en) * 2001-06-19 2003-01-22 卡西欧计算机株式会社 Image pick-up device and method, storage medium for recording image pick-up method program
JP2008278763A (en) * 2007-05-08 2008-11-20 Japan Health Science Foundation Transgenic non-human animal
US7538815B1 (en) * 2002-01-23 2009-05-26 Marena Systems Corporation Autofocus system and method using focus measure gradient
CN101447079A (en) * 2008-12-11 2009-06-03 香港理工大学 Method for extracting area target of image based on fuzzy topology
CN101852970A (en) * 2010-05-05 2010-10-06 浙江大学 Automatic focusing method for camera under imaging viewing field scanning state
CN101968883A (en) * 2010-10-28 2011-02-09 西北工业大学 Method for fusing multi-focus images based on wavelet transform and neighborhood characteristics
CN102682435A (en) * 2012-05-14 2012-09-19 四川大学 Multi-focus image edge detection method based on space relative altitude information
CN102867297A (en) * 2012-08-31 2013-01-09 天津大学 Digital processing method for low-illumination image acquisition

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1392724A (en) * 2001-06-19 2003-01-22 卡西欧计算机株式会社 Image pick-up device and method, storage medium for recording image pick-up method program
US7538815B1 (en) * 2002-01-23 2009-05-26 Marena Systems Corporation Autofocus system and method using focus measure gradient
JP2008278763A (en) * 2007-05-08 2008-11-20 Japan Health Science Foundation Transgenic non-human animal
CN101447079A (en) * 2008-12-11 2009-06-03 香港理工大学 Method for extracting area target of image based on fuzzy topology
CN101852970A (en) * 2010-05-05 2010-10-06 浙江大学 Automatic focusing method for camera under imaging viewing field scanning state
CN101968883A (en) * 2010-10-28 2011-02-09 西北工业大学 Method for fusing multi-focus images based on wavelet transform and neighborhood characteristics
CN102682435A (en) * 2012-05-14 2012-09-19 四川大学 Multi-focus image edge detection method based on space relative altitude information
CN102867297A (en) * 2012-08-31 2013-01-09 天津大学 Digital processing method for low-illumination image acquisition

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105491278B (en) * 2014-10-09 2018-09-25 聚晶半导体股份有限公司 Image capture unit and digital zoom display methods
CN104333703A (en) * 2014-11-28 2015-02-04 广东欧珀移动通信有限公司 Method and terminal for photographing by virtue of two cameras
CN106303202A (en) * 2015-06-09 2017-01-04 联想(北京)有限公司 A kind of image information processing method and device
CN108112271A (en) * 2016-01-29 2018-06-01 谷歌有限责任公司 Movement in detection image
US11625840B2 (en) 2016-01-29 2023-04-11 Google Llc Detecting motion in images
CN108112271B (en) * 2016-01-29 2022-06-24 谷歌有限责任公司 Method and computer readable device for detecting motion in an image
CN105933602A (en) * 2016-05-16 2016-09-07 中科创达软件科技(深圳)有限公司 Camera shooting method and device
CN106161997A (en) * 2016-06-30 2016-11-23 上海华力微电子有限公司 Improve the method and system of cmos image sensor pixel
CN111614898A (en) * 2016-10-05 2020-09-01 三星电子株式会社 Image processing system
US11962906B2 (en) 2016-10-05 2024-04-16 Samsung Electronics Co., Ltd. Image processing systems for correcting processed images using image sensors
CN108377342A (en) * 2018-05-22 2018-08-07 Oppo广东移动通信有限公司 double-camera photographing method, device, storage medium and terminal
CN109816619A (en) * 2019-01-28 2019-05-28 努比亚技术有限公司 Image interfusion method, device, terminal and computer readable storage medium
WO2021017589A1 (en) * 2019-07-31 2021-02-04 茂莱(南京)仪器有限公司 Image fusion method based on gradient domain mapping
WO2021120120A1 (en) * 2019-12-19 2021-06-24 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Electric device, method of controlling electric device, and computer readable storage medium
CN114902646A (en) * 2019-12-19 2022-08-12 Oppo广东移动通信有限公司 Electronic device, method of controlling electronic device, and computer-readable storage medium
CN114902646B (en) * 2019-12-19 2024-04-19 Oppo广东移动通信有限公司 Electronic device, method of controlling electronic device, and computer-readable storage medium

Also Published As

Publication number Publication date
CN103973963B (en) 2017-11-21

Similar Documents

Publication Publication Date Title
CN103973963A (en) Image acquisition device and image processing method thereof
TWI602152B (en) Image capturing device and image processing method thereof
US11477395B2 (en) Apparatus and methods for the storage of overlapping regions of imaging data for the generation of optimized stitched images
US8189960B2 (en) Image processing apparatus, image processing method, program and recording medium
US8306360B2 (en) Device and method for obtaining clear image
US8947501B2 (en) Scene enhancements in off-center peripheral regions for nonlinear lens geometries
EP2704423B1 (en) Image processing apparatus, image processing method, and image processing program
US10897558B1 (en) Shallow depth of field (SDOF) rendering
CN105141841B (en) Picture pick-up device and its method
US20140105520A1 (en) Image processing apparatus that generates omnifocal image, image processing method, and storage medium
WO2022066353A1 (en) Image signal processing in multi-camera system
CN113810590A (en) Image processing method, electronic device, medium, and system
CN114071010A (en) Shooting method and equipment
JP6270413B2 (en) Image processing apparatus, imaging apparatus, and image processing method
CN109257540A (en) Take the photograph photography bearing calibration and the camera of lens group more
JP2017143354A (en) Image processing apparatus and image processing method
JP2017220885A (en) Image processing system, control method, and control program
CN103973962A (en) Image processing method and image acquisition device
JP2013128212A (en) Image processing device, imaging device, and method therefor
CN114071009B (en) Shooting method and equipment
CN107087114B (en) Shooting method and device
JP2014155000A (en) Image processing apparatus, control method for the same, and control program
US20120044389A1 (en) Method for generating super resolution image
CN117135451A (en) Focusing processing method, electronic device and storage medium
JP2018078461A (en) Image processing apparatus, image processing method, and program

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant