CN104077759A - Multi-exposure image fusion method based on color perception and local quality factors - Google Patents

Multi-exposure image fusion method based on color perception and local quality factors Download PDF

Info

Publication number
CN104077759A
CN104077759A (application CN201410069705.9A)
Authority
CN
China
Prior art keywords
image
brightness
exposures
width
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410069705.9A
Other languages
Chinese (zh)
Inventor
郑喆坤
焦李成
房莹
简萌
乔伊果
孙天
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201410069705.9A priority Critical patent/CN104077759A/en
Publication of CN104077759A publication Critical patent/CN104077759A/en
Pending legal-status Critical Current

Links

Landscapes

  • Image Processing (AREA)

Abstract

The invention belongs to the technical field of image and video processing and particularly relates to a multi-exposure image fusion method based on color perception and local quality factors. The method mainly solves the color distortion caused by an improper choice of fusion evaluation criteria during the fusion of multi-exposure images, and improves the quality of the final image without changing its dynamic range. The salient details of the image under different exposures are fused into a single low-dynamic-range image, a suitable reference image is selected by means of the local quality factors to perform color correction, and a low-dynamic-range image that compares favorably with a high-dynamic-range result is obtained without changing the dynamic range of the image. The method conveniently produces a natural and clear multi-exposure fusion result, and can therefore be widely applied in fields related to image and video processing.

Description

Multi-exposure image fusion method based on color vision perception and a global quality factor
Technical field
The invention belongs to the field of image and video processing, and specifically relates to a multi-exposure image fusion method based on color vision perception and a global quality factor.
Background technology
High dynamic range (HDR) imaging is one of the problems of computer vision and digital image processing, and one of the active topics in the image field today. The dynamic range of an image is the ratio between the brightness value of the brightest pixel and that of the darkest pixel in the image. Because of the limitations of dynamic range, imaging devices often cannot reproduce a scene faithfully. Originally, such images were produced entirely by optical simulation. Today an HDR image is easy to generate: a series of photographs at different exposures is enough to construct it. Practice shows that when the same scene is captured with different exposure times, each differently exposed image reveals details that the other images do not show. For example, the true color of a sunlit region appears only in the least exposed image, while in the other photographs of the series that region is overexposed; conversely, the details of a shaded region are presented completely only in a highly exposed image and are a blur in the darker exposures. For these reasons an ordinary camera cannot simultaneously capture and present all details in a single image; and because the human eye adapts automatically to differences in object brightness and perceives the details of all these exposures at once, a photograph can never fully reproduce the true and natural visual experience of the scene.
The aim of high dynamic range image processing is to display the observed scene as faithfully as existing imaging technology allows, so that an image comparable to the real scene can be reconstructed from an ordinary camera; this process is the synthesis and reconstruction of high dynamic range images. On the synthesis side, existing techniques can merge differently exposed pictures into an HDR image whose dynamic range reaches 25,000:1; yet the dynamic range of common display devices is usually below 100:1. Compared with a low dynamic range image, an HDR image therefore clearly captures sharper detail.
Compared with low dynamic range images, HDR images have many advantages and are particularly important in applications such as medical imaging and video surveillance. HDR images, however, also challenge imaging technology: the dynamic ranges of today's reproduction devices, such as displays and printers, are all far smaller than that of a real scene, so the pressing problem is how to display an HDR image on a low dynamic range device while preserving as much image detail and visual content as possible.
Multi-exposure image fusion merges the salient details of an image under different exposures into a single low dynamic range image, obtaining, without changing the dynamic range of the image, a low dynamic range image comparable to an HDR result. It can therefore be widely applied in electronic products that need to obtain an HDR-like effect quickly, such as digital cameras and smartphones.
In the past decade many different image fusion methods have appeared. Broadly, they fall into two classes: fusion methods for moving targets and fusion methods for static targets.
For moving targets, the objects in the scene move between exposures, so naive fusion produces a "ghost" effect. These methods are further divided into two subclasses according to how ghosts are removed: removing the ghosts directly after fusing the differently exposed images; or taking one normally exposed image, generating under-exposed and over-exposed versions of the same view by local histogram stretching, and fusing these to produce the result.
For static targets, the objects in the scene do not move between exposures, so fusion is comparatively simple. By fusion technique, these methods subdivide into three classes: multi-sensor fusion, which merges the detail information of the same scene captured by several different sensors; multi-focus fusion, which merges the detail information of the same scene captured at several focal lengths; and multi-exposure fusion, which merges the detail information of the same scene captured under several exposures.
Existing multi-exposure image fusion techniques still have many problems: high dynamic range synthesis easily produces color distortion, the color of the fused image is difficult to correct, and improper feature values in the fused image easily introduce distortion.
Summary of the invention
The object of the invention is to provide a multi-exposure image fusion method based on color vision perception and a global quality factor that overcomes the color distortion that existing multi-exposure fusion techniques easily produce in high dynamic range image synthesis.
To this end, the technical scheme of the invention provides a multi-exposure image fusion method based on color vision perception and a global quality factor, characterized by comprising the following steps:
Step 1: input the n differently exposed images I_n and use the quality evaluation criteria, namely contrast C_n, saturation S_n and brightness B_n, to compute a weight for each exposure;
where n is the number of images, C_n is the contrast, S_n the saturation, B_n the brightness, and I_n the n-th image;
Step 2: from the weights computed in Step 1, obtain a scalar weight map W_{ij,k} for each differently exposed image I_k by a linear method;
where W_{ij,k} is the scalar weight at pixel (i, j) of the k-th image;
Step 3: fuse the original input images with the scalar weight maps to obtain the initial multi-exposure fused image:
First, decompose each input image I_k with a Laplacian pyramid to obtain the corresponding coefficients of every level, and decompose each input weight map W_{ij,k} with a Gaussian pyramid to obtain the corresponding coefficients of every level;
Second, perform the fusion on every level, using the n images and their n corresponding normalized weight maps as alpha masks;
Third, merge the corresponding coefficients of every level;
Fourth, collapse the resulting pyramid L{R}^l to obtain the final fused image R;
where l is the level index of the Laplacian or Gaussian pyramid of the image;
Step 4: use the global quality evaluation factor to select, from the n differently exposed input images, the one that best matches color vision perception as the reference image I_in;
Step 5: use the luminance L_in and chrominance C_in of the reference image I_in to color-correct the initial fused image R, obtaining the final corrected multi-exposure image C_out.
The weights of the different exposures in Step 1 are computed from the quality evaluation criteria, contrast C_n, saturation S_n and brightness B_n, as follows:
C_n(i, j) = Y_n(i, j) * h(i, j)
where * denotes convolution and Y_n is the gray-level image of the n-th differently exposed input image I_n, Y_n = 0.299 × I_n^r + 0.587 × I_n^g + 0.114 × I_n^b; n is the number of input images; I_n^r, I_n^g and I_n^b are respectively the R, G and B values of the corresponding layer; the high-pass filter h is defined as:
h =
  0  -1   0
 -1   4  -1
  0  -1   0
After the local contrast of every pixel (i, j) in the image has been computed, a "winner-take-all" rule gives the contrast of the whole image:
Ĉ_n(i, j) = 1, if C_n(i, j) = max{C_k(i, j), k = 1, 2, …, n}; 0, otherwise;
The saturation S_n is obtained as the standard deviation of the R, G and B values of each pixel (i, j);
The brightness B_n is judged against the standard value 0.5, measuring the brightness distance within the image, and is computed with a Gaussian curve:
B_n(i, j) = exp(−(b − 0.5)² / (2σ²))
where σ = 0.2 and b is the brightness value of pixel (i, j).
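As a concrete illustration of Step 1, the three measures can be sketched in NumPy. This is a minimal sketch under stated assumptions, not the patented implementation: the input is assumed to be an RGB array normalized to [0, 1], the contrast uses the absolute response of the Laplacian kernel h with edge padding, and the brightness term is evaluated on the gray-level image.

```python
import numpy as np

def quality_measures(img, sigma=0.2):
    """Per-pixel contrast, saturation and brightness weights for one
    RGB image with values in [0, 1] (a sketch of Step 1)."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    # Gray-level image Y = 0.299 R + 0.587 G + 0.114 B
    gray = 0.299 * r + 0.587 * g + 0.114 * b
    # Contrast: absolute response of the high-pass filter h
    # (Laplacian kernel [[0,-1,0],[-1,4,-1],[0,-1,0]], edge-padded)
    p = np.pad(gray, 1, mode="edge")
    contrast = np.abs(4 * gray
                      - p[:-2, 1:-1] - p[2:, 1:-1]
                      - p[1:-1, :-2] - p[1:-1, 2:])
    # Saturation: standard deviation of R, G, B at each pixel
    saturation = img.std(axis=-1)
    # Brightness: Gaussian distance of the gray value from 0.5
    brightness = np.exp(-((gray - 0.5) ** 2) / (2 * sigma ** 2))
    return contrast, saturation, brightness
```

On a uniform mid-gray image the contrast and saturation are zero everywhere and the brightness weight is maximal, as the definitions above require.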
The scalar weight map W_{ij,k} of Step 2 is computed by the following formula:
W_{ij,k} = (C_{ij,k})^{ω_C} × (S_{ij,k})^{ω_S} × (B_{ij,k})^{ω_B}
where C, S and B are respectively the contrast, saturation and brightness, with corresponding weights ω_C, ω_S and ω_B; if an exponent ω = 0, the corresponding measure does not enter the computation of the final weight map.
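The multiplicative combination of Step 2 can be sketched as follows. The per-pixel normalization across the n exposures (so that the weights sum to one, as the normalized alpha masks of Step 3 require) and the small epsilon that guards against all-zero weights are assumptions of this sketch:

```python
import numpy as np

def weight_map(contrast, saturation, brightness,
               wc=1.0, ws=1.0, wb=1.0, eps=1e-12):
    """W_{ij,k} = C^wc * S^ws * B^wb; an exponent of 0 drops the
    corresponding measure from the weight map, as described."""
    return (contrast ** wc) * (saturation ** ws) * (brightness ** wb) + eps

def normalize_weights(weights):
    """Normalize a stack of n weight maps so that at every pixel the
    weights of the n exposures sum to one."""
    return weights / weights.sum(axis=0)
```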
The level-by-level fusion of the corresponding coefficients in Step 3 is performed as follows:
L{R}^l_{ij} = Σ_{k=1}^{n} G{Ŵ}^l_{ij,k} · L{I}^l_{ij,k}
where the Laplacian pyramid decomposes each input image I_k into the corresponding coefficients of every level, the Gaussian pyramid decomposes each input weight map W_{ij,k} into the corresponding coefficients of every level, n is the number of input images, and l is the corresponding level index on the Laplacian or Gaussian pyramid.
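The pyramid fusion of Step 3 can be sketched with NumPy alone, here for grayscale images. The 5-tap binomial blur, the repeat-based upsampling and the number of levels are implementation assumptions, not taken from the patent; the fusion itself follows the formula above: each Laplacian level of an input image is weighted by the matching Gaussian level of its weight map, summed over the n images, and the resulting pyramid is collapsed.

```python
import numpy as np

def _blur(img):
    # Separable 5-tap binomial filter [1, 4, 6, 4, 1] / 16, edge-padded.
    k = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    for axis in (0, 1):
        pad = [(0, 0), (0, 0)]
        pad[axis] = (2, 2)
        p = np.pad(img, pad, mode="edge")
        img = sum(k[i] * np.take(p, range(i, i + img.shape[axis]), axis=axis)
                  for i in range(5))
    return img

def gaussian_pyramid(img, levels):
    pyr = [img]
    for _ in range(levels - 1):
        img = _blur(img)[::2, ::2]
        pyr.append(img)
    return pyr

def _expand(img, shape):
    # Upsample by pixel repetition, crop to the target shape, then blur.
    up = img.repeat(2, axis=0).repeat(2, axis=1)[:shape[0], :shape[1]]
    return _blur(up)

def laplacian_pyramid(img, levels):
    gp = gaussian_pyramid(img, levels)
    return [gp[l] - _expand(gp[l + 1], gp[l].shape)
            for l in range(levels - 1)] + [gp[-1]]

def fuse(images, weights, levels=3):
    """L{R}^l = sum_k G{W}^l_k * L{I}^l_k, then collapse the pyramid
    (grayscale sketch of Step 3)."""
    fused = [0.0] * levels
    for img, w in zip(images, weights):
        lp = laplacian_pyramid(img, levels)
        gw = gaussian_pyramid(w, levels)
        fused = [f + g * l for f, g, l in zip(fused, gw, lp)]
    out = fused[-1]  # collapse from the coarsest level upwards
    for l in range(levels - 2, -1, -1):
        out = fused[l] + _expand(out, fused[l].shape)
    return out
```

A useful sanity check is that fusing two identical images with weights summing to one at every pixel reconstructs the original image.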
The reference image I_in that best matches color vision perception is selected in Step 4 from the n differently exposed input images with the global quality evaluation factor as follows:
Col = 0.10 × Bri + 0.50 × Con + 0.23 × Det + 0.12 × Art
where Bri is the brightness of the image, Con its contrast, Det its detail, and Art its distorted (artifact) part; the image with the largest Col is the finally selected reference image.
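Step 4 then reduces to computing Col for every exposure and taking the argmax. The sketch below assumes the four component scores Bri, Con, Det and Art have already been computed and normalized to a common scale, which the description does not spell out here:

```python
def colour_score(bri, con, det, art):
    """Global quality factor Col = 0.10*Bri + 0.50*Con + 0.23*Det + 0.12*Art."""
    return 0.10 * bri + 0.50 * con + 0.23 * det + 0.12 * art

def pick_reference(scores):
    """Index of the exposure with the largest Col value (Step 4)."""
    return max(range(len(scores)), key=lambda k: scores[k])
```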
The initial fused image R is color-corrected in Step 5 with the luminance L_in and chrominance C_in of the reference image I_in to obtain the final corrected multi-exposure image C_out, as follows:
C_out = ((C_in / L_in − 1) · r + 1) · L_out
where C_out is the output color-corrected multi-exposure fused image, computed separately for each of the R, G and B color channels; L_out is the luminance of the initial multi-exposure fused image; C_in is the chrominance of the reference image selected with the global quality factor; L_in is the luminance of that reference image; and the coefficient r takes values in the range 0.4 to 0.6, inclusive.
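The per-channel correction of Step 5 can be sketched as follows. The broadcasting layout (H×W luminance maps, H×W×3 chrominance) and the guard against division by zero are assumptions of this sketch; the formula itself is the one above, applied to each of the R, G and B channels with r in [0.4, 0.6].

```python
import numpy as np

def colour_correct(l_out, c_in, l_in, r=0.5):
    """C_out = ((C_in / L_in - 1) * r + 1) * L_out, per RGB channel.
    l_out: luminance of the fused image, shape (H, W);
    c_in:  chrominance of the reference image, shape (H, W, 3);
    l_in:  luminance of the reference image, shape (H, W)."""
    assert 0.4 <= r <= 0.6, "the description restricts r to [0.4, 0.6]"
    ratio = c_in / np.maximum(l_in[..., None], 1e-12)
    return ((ratio - 1.0) * r + 1.0) * l_out[..., None]
```

When the reference image is achromatic (chrominance equal to luminance in every channel), the ratio is 1 and the output luminance is passed through unchanged.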
The advantage of the invention is that it merges the salient details of the image under different exposures into a single low dynamic range image and uses the global quality factor to select a suitable reference image for color correction, obtaining, without changing the dynamic range of the image, a low dynamic range image comparable to a high dynamic range result. It mainly solves the color distortion produced in high dynamic range image synthesis and has the following advantages over conventional methods:
1. A color vision perception method for choosing the optimal features in multi-exposure image fusion is proposed: the overall image quality factor is first used to judge the most important features of the image, and the quality factor then selects the optimal features to be fused;
2. A color correction method for the fused image is proposed: color correction rectifies the distortion brought about by improper feature values in the fused image.
The invention is described in further detail below with reference to the accompanying drawings.
Accompanying drawing explanation
Fig. 1 is a schematic flow chart of the implementation of the invention.
Fig. 2 compares the visual effect of the invention and existing methods on an outdoor night image.
Fig. 3 compares the visual effect of the invention and existing methods on an outdoor daytime image.
Fig. 4 compares the visual effect of the invention and existing methods on an indoor night image.
Fig. 5 compares the visual effect of the invention and existing methods on an indoor daytime image.
Embodiment
Embodiment 1:
As shown in Fig. 1, the invention provides a multi-exposure image fusion method based on color vision perception and a global quality factor: the salient details of the image under different exposures are merged into a single low dynamic range image, and the global quality factor selects a suitable reference image for color correction, so that without changing the dynamic range of the image a low dynamic range image comparable to a high dynamic range result is obtained. The specific implementation steps are as follows:
Step 1: input the n differently exposed images I_n and use the quality evaluation criteria, namely contrast C_n, saturation S_n and brightness B_n, to compute a weight for each exposure;
where n is the number of images, C_n is the contrast, S_n the saturation, B_n the brightness, and I_n the n-th image;
Step 2: from the weights computed in Step 1, obtain a scalar weight map W_{ij,k} for each differently exposed image I_k by a linear method;
where W_{ij,k} is the scalar weight at pixel (i, j) of the k-th image;
Step 3: fuse the original input images with the scalar weight maps to obtain the initial multi-exposure fused image:
First, decompose each input image I_k with a Laplacian pyramid to obtain the corresponding coefficients of every level, and decompose each input weight map W_{ij,k} with a Gaussian pyramid to obtain the corresponding coefficients of every level;
Second, perform the fusion on every level, using the n images and their n corresponding normalized weight maps as alpha masks;
Third, merge the corresponding coefficients of every level;
Fourth, collapse the resulting pyramid L{R}^l to obtain the final fused image R;
where l is the level index of the Laplacian or Gaussian pyramid of the image. The alpha mask here is an image, figure or object selected to occlude part or all of the image being processed, thereby controlling the region or the course of the processing; the covering image or object is called a mask or template. In digital image processing a mask is a two-dimensional matrix array, sometimes a multi-valued image, used mainly to extract a region of interest: the pre-made region-of-interest mask is multiplied with the image to be processed, yielding the region-of-interest image, in which values inside the region remain unchanged and values outside it are all 0.
Step 4: use the global quality evaluation factor to select, from the n differently exposed input images, the one that best matches color vision perception as the reference image I_in;
Step 5: use the luminance L_in and chrominance C_in of the reference image I_in to color-correct the initial fused image R, obtaining the final corrected multi-exposure image C_out.
The weights of the different exposures in Step 1 are computed from the quality evaluation criteria, contrast C_n, saturation S_n and brightness B_n, as follows:
C_n(i, j) = Y_n(i, j) * h(i, j)
where * denotes convolution and Y_n is the gray-level image of the n-th differently exposed input image I_n, Y_n = 0.299 × I_n^r + 0.587 × I_n^g + 0.114 × I_n^b; n is the number of input images; I_n^r, I_n^g and I_n^b are respectively the R, G and B values of the corresponding layer; the high-pass filter h is defined as:
h =
  0  -1   0
 -1   4  -1
  0  -1   0
After the local contrast of every pixel (i, j) in the image has been computed, a "winner-take-all" rule gives the contrast of the whole image:
Ĉ_n(i, j) = 1, if C_n(i, j) = max{C_k(i, j), k = 1, 2, …, n}; 0, otherwise;
The saturation S_n is obtained as the standard deviation of the R, G and B values of each pixel (i, j);
The brightness B_n is judged against the standard value 0.5, measuring the brightness distance within the image, and is computed with a Gaussian curve:
B_n(i, j) = exp(−(b − 0.5)² / (2σ²))
where σ = 0.2 and b is the brightness value of pixel (i, j).
The scalar weight map W_{ij,k} of Step 2 is computed by the following formula:
W_{ij,k} = (C_{ij,k})^{ω_C} × (S_{ij,k})^{ω_S} × (B_{ij,k})^{ω_B}
where C, S and B are respectively the contrast, saturation and brightness, with corresponding weights ω_C, ω_S and ω_B; if an exponent ω = 0, the corresponding measure does not enter the computation of the final weight map.
The level-by-level fusion of the corresponding coefficients in Step 3 is performed as follows:
L{R}^l_{ij} = Σ_{k=1}^{n} G{Ŵ}^l_{ij,k} · L{I}^l_{ij,k}
where the Laplacian pyramid decomposes each input image I_k into the corresponding coefficients of every level, the Gaussian pyramid decomposes each input weight map W_{ij,k} into the corresponding coefficients of every level, n is the number of input images, and l is the corresponding level index on the Laplacian or Gaussian pyramid.
The reference image I_in that best matches color vision perception is selected in Step 4 from the n differently exposed input images with the global quality evaluation factor as follows:
Col = 0.10 × Bri + 0.50 × Con + 0.23 × Det + 0.12 × Art
where Bri is the brightness of the image, Con its contrast, Det its detail, and Art its distorted (artifact) part; the image with the largest Col is the finally selected reference image.
The initial fused image R is color-corrected in Step 5 with the luminance L_in and chrominance C_in of the reference image I_in to obtain the final corrected multi-exposure image C_out, as follows:
C_out = ((C_in / L_in − 1) · r + 1) · L_out
where C_out is the output color-corrected multi-exposure fused image, computed separately for each of the R, G and B color channels, R referring here to red, G to green and B to blue; L_out is the luminance of the initial multi-exposure fused image; C_in is the chrominance of the reference image selected with the global quality factor; L_in is the luminance of that reference image; and the coefficient r takes values in the range 0.4 to 0.6, inclusive.
Embodiment 2:
The results of the invention can be further illustrated by the following experiments:
1. Experimental conditions:
The experiments were run on a platform with an Intel(R) Pentium(R) Dual CPU, 16 GB of memory, an NVIDIA Quadro NVS140M graphics card, and the Windows Vista Home Basic x32 Edition operating system.
2. Experimental content:
To verify the effectiveness of the method, four classes of images under different illumination conditions were selected for testing: Fig. 2 is the Monument image (outdoor, night), Fig. 3 the Plateau image (outdoor, daytime), Fig. 4 the Memorial image (indoor, night), and Fig. 5 the Church image (indoor, daytime).
The four experiments are described below.
Experiment 1 compares the visual effect of the invention with existing methods; the results are shown in Fig. 2, where:
Fig. 2(a) shows the result of the original multi-exposure image fusion,
Fig. 2(b) shows the experimental result of the invention,
Fig. 2(c) shows the result of histogram equalization,
Fig. 2(d) shows the result of the imadjust function provided in MATLAB.
As Fig. 2 shows, the invention effectively preserves the tonal appearance and the detail information of the image, and its output is more natural and clearer than that of the other existing methods. The original multi-exposure fusion algorithm cannot fully preserve the color detail of the image, the MATLAB function cannot preserve the hue information, and histogram equalization distorts the image through a large amount of artificial information. In summary, the images produced by these existing methods are less natural than those of the invention.
Experiment 2 compares the visual effect of the invention with existing methods; the results are shown in Fig. 3, where:
Fig. 3(a) shows the result of the original multi-exposure image fusion,
Fig. 3(b) shows the experimental result of the invention,
Fig. 3(c) shows the result of histogram equalization,
Fig. 3(d) shows the result of the imadjust function provided in MATLAB.
As Fig. 3 shows, the invention effectively preserves the tonal appearance and the detail information of the image, and its output is more natural and clearer than that of the other existing methods. The original multi-exposure fusion algorithm cannot fully preserve the color detail of the image, the MATLAB function cannot preserve the hue information, and histogram equalization distorts the image through a large amount of artificial information. In summary, the images produced by these existing methods are less natural than those of the invention.
Experiment 3 compares the visual effect of the invention with existing methods; the results are shown in Fig. 4, where:
Fig. 4(a) shows the result of the original multi-exposure image fusion,
Fig. 4(b) shows the experimental result of the invention,
Fig. 4(c) shows the result of histogram equalization,
Fig. 4(d) shows the result of the imadjust function provided in MATLAB.
As Fig. 4 shows, the invention effectively preserves the tonal appearance and the detail information of the image, and its output is more natural and clearer than that of the other existing methods. The original multi-exposure fusion algorithm cannot fully preserve the color detail of the image, the MATLAB function cannot preserve the hue information, and histogram equalization distorts the image through a large amount of artificial information. In summary, the images produced by these existing methods are less natural than those of the invention.
Experiment 4 compares the visual effect of the invention with existing methods; the results are shown in Fig. 5, where:
Fig. 5(a) shows the result of the original multi-exposure image fusion,
Fig. 5(b) shows the experimental result of the invention,
Fig. 5(c) shows the result of histogram equalization,
Fig. 5(d) shows the result of the imadjust function provided in MATLAB.
As Fig. 5 shows, the invention effectively preserves the tonal appearance and the detail information of the image, and its output is more natural and clearer than that of the other existing methods. The original multi-exposure fusion algorithm cannot fully preserve the color detail of the image, the MATLAB function cannot preserve the hue information, and histogram equalization distorts the image through a large amount of artificial information. In summary, the images produced by these existing methods are less natural than those of the invention.
To further verify the effectiveness of the invention, both subjective and objective evaluation were used; the results are listed in Table 1 and Table 2.
Table 1. Mean Opinion Score of the test images
Table 2. Objective evaluation results of the test images
As the tables show, the proposed method obtains, without changing the dynamic range of the image, a low dynamic range image comparable to a high dynamic range result.
The above examples merely illustrate the invention and do not limit its scope of protection; every design identical or similar to the invention falls within the scope of protection of the invention.

Claims (6)

1. A multi-exposure image fusion method based on color vision perception and a global quality factor, characterized by comprising the following steps:
Step 1: input the n differently exposed images I_n and use the quality evaluation criteria, namely contrast C_n, saturation S_n and brightness B_n, to compute a weight for each exposure;
where n is the number of images, C_n is the contrast, S_n the saturation, B_n the brightness, and I_n the n-th image;
Step 2: from the weights computed in Step 1, obtain a scalar weight map W_{ij,k} for each differently exposed image I_k by a linear method;
where W_{ij,k} is the scalar weight at pixel (i, j) of the k-th image;
Step 3: fuse the original input images with the scalar weight maps to obtain the initial multi-exposure fused image:
First, decompose each input image I_k with a Laplacian pyramid to obtain the corresponding coefficients of every level, and decompose each input weight map W_{ij,k} with a Gaussian pyramid to obtain the corresponding coefficients of every level;
Second, perform the fusion on every level, using the n images and their n corresponding normalized weight maps as alpha masks;
Third, merge the corresponding coefficients of every level;
Fourth, collapse the resulting pyramid L{R}^l to obtain the final fused image R;
where l is the level index of the Laplacian or Gaussian pyramid of the image;
Step 4: use the global quality evaluation factor to select, from the n differently exposed input images, the one that best matches color vision perception as the reference image I_in;
Step 5: use the luminance L_in and chrominance C_in of the reference image I_in to color-correct the initial fused image R, obtaining the final corrected multi-exposure image C_out.
2. a kind of many exposures image interfusion method based on colour vision perception and the global quality factor as claimed in claim 1, is characterized in that: described in step 1, utilize criteria of quality evaluation, i.e. contrast C n, saturation degree S nwith brightness B nthe weight that calculates respectively different exposure images, is calculated as follows:
C n(i,j)=Y n(i,j)*h(i,j)
Wherein: * is convolution symbol, Y nit is the different image I of n width exposure of input ngray level image, Y n = 0.299 × I n r + 0.587 × I n g + 0.114 × I n b , N is the quantity of input picture, represent respectively the R on corresponding diagram layer, G, B value, Hi-pass filter h is defined as follows:
h = 0 - 1 0 - 1 4 - 1 0 - 1 0
After having calculated for the local contrast of each pixel (i, j) in image, adopt the method for " winner complete " to draw the contrast of view picture figure as follows:
C ^ n ( i , j ) = 1 , if C n ( i , j ) = max { C n ( i , j ) , n = 1,2 , . . . . . . n } 0 , otherwise ;
Saturation degree S nby calculating respectively the R of each pixel (i, j), G, the standard deviation of B draws;
The brightness B_n takes 0.5 as the ideal value: the smaller the distance between a pixel's brightness and 0.5, the better its exposure, and the score is given by a Gaussian curve:

    B_n(i, j; σ) = exp( -(b - 0.5)² / (2σ²) )

where σ = 0.2 and b is the brightness value at pixel (i, j).
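As an illustration of claim 2's three measures, the sketch below computes per-pixel contrast, saturation and brightness for one exposure in NumPy. The function names, the edge padding, the absolute value on the filtered contrast, and the use of the grayscale value as the pixel brightness b are our own assumptions, not fixed by the patent:

```python
import numpy as np

def quality_measures(img, sigma=0.2):
    """Per-pixel contrast, saturation and brightness of one exposure
    image `img` (H x W x 3 floats in [0, 1]); names are illustrative."""
    # Grayscale Y_n with the luma weights stated in the claim.
    gray = 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]
    # Convolve with the Laplacian high-pass kernel h (edge padding assumed).
    h = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], dtype=float)
    pad = np.pad(gray, 1, mode="edge")
    H, W = gray.shape
    contrast = np.abs(sum(h[a, b] * pad[a:a + H, b:b + W]
                          for a in range(3) for b in range(3)))
    # Saturation: standard deviation of the R, G, B values at each pixel.
    saturation = img.std(axis=-1)
    # Brightness: Gaussian distance of the pixel brightness b from 0.5.
    brightness = np.exp(-((gray - 0.5) ** 2) / (2 * sigma ** 2))
    return contrast, saturation, brightness

def winner_take_all(contrasts):
    """Binary maps Ĉ_n: 1 where exposure n attains the maximum contrast."""
    stack = np.stack(contrasts)
    return (stack == stack.max(axis=0)).astype(float)
```

A uniformly mid-gray image scores zero contrast and saturation but maximal brightness, which matches the intent of the three measures.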
3. The multi-exposure image fusion method based on color perception and the global quality factor of claim 1, wherein the scalar weight map W_{ij,k} of step 2 is computed as:

    W_{ij,k} = (C_{ij,k})^{ω_C} × (S_{ij,k})^{ω_S} × (B_{ij,k})^{ω_B}

where C, S and B denote the contrast, saturation and brightness measures, with corresponding exponents ω_C, ω_S and ω_B; if an exponent ω = 0, the corresponding measure is excluded from the computation of the final weight map.
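Claim 3's weight combination can be sketched as follows. The per-pixel normalization across exposures (so the weights sum to 1, as the pyramid fusion of claim 4 presupposes) and the small epsilon are our assumptions; note that an exponent of 0 turns its factor into 1, which is exactly the "excluded" behaviour of the claim:

```python
import numpy as np

def weight_maps(C, S, B, w_c=1.0, w_s=1.0, w_b=1.0):
    """Combine contrast, saturation and brightness lists (one H x W array
    per exposure) into normalized weight maps; epsilon avoids division
    by zero at pixels where every measure vanishes."""
    eps = 1e-12
    raw = [(c ** w_c) * (s ** w_s) * (b ** w_b) + eps
           for c, s, b in zip(C, S, B)]
    total = sum(raw)
    return [r / total for r in raw]  # per-pixel weights summing to 1
```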
4. The multi-exposure image fusion method based on color perception and the global quality factor of claim 1, wherein the per-layer coefficient fusion of step 3 is performed as follows:

    L{R}^l_{ij} = Σ_{k=1..n} G{Ŵ}^l_{ij,k} · L{I}^l_{ij,k}

where the Laplacian pyramid decomposes each input image I_k, yielding the coefficients L{I}^l_{ij,k} of every layer; the Gaussian pyramid decomposes the input weight map Ŵ_{ij,k}, yielding G{Ŵ}^l_{ij,k} for every layer; n is the number of input images, and l is the layer index on the Laplacian or Gaussian pyramid.
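A minimal sketch of claim 4's per-layer blend and pyramid collapse. It uses crude 2×2 box-filter pyramids rather than the patent's actual filters; the helper names are ours, the weight maps are assumed already normalized, and image sizes are assumed divisible by 2^levels:

```python
import numpy as np

def _down(a):
    """Halve resolution with a 2 x 2 box average (crude Gaussian stand-in)."""
    return a.reshape(a.shape[0] // 2, 2, a.shape[1] // 2, 2).mean(axis=(1, 3))

def _up(a):
    """Double resolution by nearest-neighbour repetition."""
    return a.repeat(2, axis=0).repeat(2, axis=1)

def fuse_pyramids(images, weights, levels=3):
    """Blend each Laplacian level of the inputs with the matching Gaussian
    level of the (already normalized) weight maps, then collapse."""
    fused = []
    for l in range(levels):
        level = 0.0
        for img, w in zip(images, weights):
            g, gw = img, w
            for _ in range(l):                      # Gaussian level l
                g, gw = _down(g), _down(gw)
            # Laplacian coefficient: detail lost in one down/up round trip;
            # the coarsest level keeps the Gaussian residual itself.
            lap = g - _up(_down(g)) if l < levels - 1 else g
            level = level + gw * lap
        fused.append(level)
    out = fused[-1]                                 # collapse, coarse to fine
    for lap in reversed(fused[:-1]):
        out = _up(out) + lap
    return out
```

Because downsampling is linear, weight maps that sum to 1 at full resolution also sum to 1 at every pyramid level, so identical inputs are reconstructed exactly.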
5. The multi-exposure image fusion method based on color perception and the global quality factor of claim 1, wherein the global quality evaluation factor of step 4 selects, from the n input images of different exposures, the one that best matches color perception as the reference image I_in, as follows:

    Col = 0.10 × Bri + 0.50 × Con + 0.23 × Det + 0.12 × Art

where Bri is the brightness of the image, Con its contrast, Det its detail, and Art its distortion (artifact) component; the image with the largest Col is selected as the reference image.
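Claim 5's reference selection reduces to a fixed weighted sum per candidate. The sketch below assumes the four terms have already been measured for each input; how Bri, Con, Det and Art are themselves computed is not fixed here:

```python
def global_quality(bri, con, det, art):
    """Col factor of claim 5 for one candidate image."""
    return 0.10 * bri + 0.50 * con + 0.23 * det + 0.12 * art

def select_reference(scores):
    """Index of the image with the largest Col; `scores` holds one
    (Bri, Con, Det, Art) tuple per input image."""
    return max(range(len(scores)), key=lambda k: global_quality(*scores[k]))
```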
6. The multi-exposure image fusion method based on color perception and the global quality factor of claim 1, wherein step 5 uses the luminance L_in and chrominance C_in of the reference image I_in to color-correct the initial fused image R, obtaining the final corrected multi-exposure image C_out, as follows:

    C_out = ( (C_in / L_in - 1) · r + 1 ) · L_out

where C_out is the output color-corrected multi-exposure fused image, computed channel-wise for C ∈ {R, G, B}; L_out is the luminance of the original multi-exposure fused image; C_in is the chrominance of the reference image selected by the global quality factor; L_in is the luminance of that reference image; and r is a coefficient whose value ranges from 0.4 to 0.6 inclusive.
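Claim 6's correction can be sketched channel-wise as follows. The luma weights reuse those from claim 2, and the epsilon guard against zero reference luminance is our addition:

```python
import numpy as np

def color_correct(fused, ref, r=0.5):
    """Channel-wise correction C_out = ((C_in / L_in - 1) * r + 1) * L_out
    for H x W x 3 float images; r is taken from [0.4, 0.6]."""
    def luma(img):
        return 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]
    eps = 1e-12                        # guard against zero reference luminance
    l_out = luma(fused)[..., None]     # luminance of the fused image
    l_in = luma(ref)[..., None] + eps  # luminance of the reference image
    return ((ref / l_in - 1.0) * r + 1.0) * l_out
```

An achromatic reference (R = G = B) makes C_in / L_in equal 1 in every channel, so the fused luminance passes through unchanged, which is the expected neutral behaviour of the formula.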
CN201410069705.9A 2014-02-28 2014-02-28 Multi-exposure image fusion method based on color perception and local quality factors Pending CN104077759A (en)

Publication: CN104077759A, 2014-10-01
Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100329554A1 (en) * 2009-06-29 2010-12-30 Jiefu Zhai Automatic exposure estimation for HDR images based on image statistics
CN103069453A (en) * 2010-07-05 2013-04-24 苹果公司 Operating a device to capture high dynamic range images
CN102905058A (en) * 2011-07-28 2013-01-30 三星电子株式会社 Apparatus and method for generating high dynamic range image from which ghost blur is removed
CN103400342A (en) * 2013-07-04 2013-11-20 西安电子科技大学 Mixed color gradation mapping and compression coefficient-based high dynamic range image reconstruction method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
MARTIN CADIK ET AL.: "Evaluation of HDR tone mapping methods using essential perceptual attributes", Computers & Graphics *
R. MANTIUK ET AL.: "Color correction for tone mapping", Eurographics 2009 *
T. MERTENS ET AL.: "Exposure Fusion: A Simple and Practical Alternative to High Dynamic Range Photography", Computer Graphics Forum *



Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20141001