CN104835130A - Multi-exposure image fusion method - Google Patents

Multi-exposure image fusion method

Info

Publication number
CN104835130A
Authority
CN
China
Prior art keywords
image
low
frequency
fusion
high frequency
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510184151.1A
Other languages
Chinese (zh)
Inventor
王金华
何宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Union University
Original Assignee
Beijing Union University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Union University filed Critical Beijing Union University
Priority to CN201510184151.1A priority Critical patent/CN104835130A/en
Publication of CN104835130A publication Critical patent/CN104835130A/en
Pending legal-status Critical Current

Landscapes

  • Image Processing (AREA)

Abstract

The invention relates to a multi-exposure image fusion method. Laplacian pyramid decomposition is used to perform a multi-scale decomposition of each original image, yielding its high-frequency images and its low-frequency image. Different fusion mechanisms are applied to the high-frequency and low-frequency images, and a reconstructed image is finally obtained. Because each layer of the Laplacian pyramid decomposition is down-sampled, the low-frequency image is much smaller than the original image, which greatly reduces the time complexity of the sparse-representation fusion method; information in specific frequency bands can be highlighted, and more of the original image's direction and texture information is retained.

Description

Multi-exposure image fusion method
Technical field
The present invention relates to the field of computer vision, and in particular to a multi-exposure image fusion method.
Background technology
In recent years, computational photography (Computational Photography) has emerged as a research direction in the field of computer vision. Its aim is to overcome the limitations of imaging and display devices and to use computational techniques to generate content-rich, realistic images of the visual world that satisfy the human visual system's perception of the objective world.
Computational photography is a highly interdisciplinary research field involving computer graphics, computer vision, image processing, visual perception, optics, traditional photography and related techniques. Multi-exposure fusion has become a very important problem within it. The dynamic range of existing cameras and display devices (about 10^2) is far below the dynamic range of real scenes (about 10^10). In a high-dynamic-range scene, a photo taken with a digital camera is always locally under-exposed or locally over-exposed, so some local detail information is lost. By applying certain processing to a sequence of multiply exposed images (for example, using the exposure times to recover the camera response function), a high-dynamic-range image can be obtained. However, existing display devices have low dynamic range and cannot display high-dynamic-range images directly, so the obtained high-dynamic-range image must additionally undergo dynamic range compression before it can be shown on such devices, and this kind of processing is generally computationally expensive. Multi-exposure image fusion is a research direction derived from this problem: starting from the same multi-exposure image sequence, it fuses features from the different images to generate an image with rich detail and high contrast. The final goal is that, after the generated result image is displayed, the perception a human observer obtains is the same as if standing in the real environment: the information represented by the observed image is consistent with the real scene, and so is the visual impression it gives. Which features should be used to describe the detail information of an image is one of the key problems to be solved in this field.
To address this problem, researchers have proposed using sparse representation theory to describe salient information such as image edges and directions, and to realize exposure fusion within a sparse representation framework. Sparse representation theory was first proposed by Mallat. Its basic idea is to replace a non-redundant orthogonal basis with an over-complete, redundant function system called a dictionary; the elements of the dictionary are called atoms, and a signal is represented as a linear combination of atoms. Because the number of atoms is larger than the dimension of the signal, the representation is redundant (over-complete). For this reason there are many ways to represent a signal; the representation with the fewest coefficients (the sparsest) is the simplest and is considered optimal. Over-complete sparse representation improves the performance of many image processing methods, mainly thanks to two properties: the over-completeness of the dictionary and the sparsity of the representation coefficients. Over-completeness makes the dictionary richer: the atoms of an over-complete dictionary are not restricted to the basis functions of transforms such as the Fourier transform, wavelet transform, discrete cosine transform, ridgelet, curvelet, bandelet or contourlet, but may also be arbitrary combinations of these bases, to adapt to different types of signals. In addition, an over-complete dictionary can be obtained by sample learning according to the image type and the image processing task at hand.
The multi-exposure fusion process based on the sparse representation framework, shown in Fig. 1, works as follows: the differently exposed images are processed by a sliding window into column vectors that form an image-block matrix; the trained over-complete dictionary is used to obtain the corresponding sparse coefficient representations; a choose-max fusion rule then produces the fused coefficients; and reconstruction yields the fusion result image. In this framework the sliding window depends on the image size; images produced by today's cameras are usually high-resolution and therefore large, so the time complexity of multi-exposure fusion algorithms based on this framework is high, which limits their application.
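By way of illustration only, the following Python sketch outlines this baseline sliding-window fusion; it is not the patent's implementation. The pre-trained dictionary D, the 8×8 patch size, the sparsity level and the use of scikit-learn's orthogonal matching pursuit are assumptions, and re-tiling the fused patches into an output image (with averaging of overlapping patches) is omitted.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp  # OMP solver used here for sparse coding

def extract_patches(img, size=8, stride=1):
    """Slide a size x size window over img and stack the patches as columns."""
    H, W = img.shape
    cols = []
    for i in range(0, H - size + 1, stride):
        for j in range(0, W - size + 1, stride):
            cols.append(img[i:i + size, j:j + size].reshape(-1))
    return np.stack(cols, axis=1)                   # shape (size*size, n_patches)

def baseline_sparse_fusion(imgs, D, n_nonzero=8):
    """Choose-max fusion of several exposures in the sparse-coefficient domain."""
    # Sparse-code every exposure's patch matrix against the same dictionary D (dim x atoms).
    codes = [orthogonal_mp(D, extract_patches(im), n_nonzero_coefs=n_nonzero) for im in imgs]
    stacked = np.stack(codes)                       # (n_imgs, n_atoms, n_patches)
    # Keep, per coefficient, the code with the largest magnitude (choose-max rule).
    winner = np.abs(stacked).argmax(axis=0)
    fused = np.take_along_axis(stacked, winner[None], axis=0)[0]
    return D @ fused                                # fused patch columns, still to be re-tiled
```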
Summary of the invention
To solve the problems in the prior art, the present invention proposes a multi-exposure image fusion method that combines multi-scale decomposition with the sparse representation framework. Laplacian pyramid decomposition splits each image into high-frequency images and a low-frequency image. The low-frequency image approximates the original image and inherits some of its attributes, such as mean brightness and texture information, while its size is much smaller than that of the original image; fusing it with the sliding-window technique therefore greatly reduces the time complexity. For the high-frequency images, the neighborhood information of each pixel is used as the criterion, which is more reasonable for image fusion than simple selection based on a single independent pixel (gray-level extremum method) or simple fixed weighting. For an image of size 1024 × 768, the original fusion process based on the sparse representation framework needs more than 30 seconds of processing time, whereas the multi-exposure image fusion method proposed by the present invention needs less than 10 seconds. In addition, because of the way the high-frequency information is handled, more edge information is retained.
A multi-exposure image fusion method of the present invention is characterized by comprising: a multi-scale decomposition step, in which a Laplacian pyramid decomposition comprising low-pass filtering, down-sampling, interpolation and band-pass filtering is applied to two images, decomposing them into Laplacian pyramids with the same number of frequency layers and yielding the low-frequency image and the high-frequency images of each image; a low-frequency image fusion step, in which Laplacian low-frequency images of indoor and outdoor scenes are used as training samples and the dictionary learning algorithm K-SVD generates a dictionary matrix, each image's low-frequency image is divided into multiple image blocks according to the dictionary matrix, the sparse coefficient vectors of the co-located low-frequency blocks of the two images and their corresponding weighting factors give the fused coefficients to be reconstructed for each position, the multiple fused coefficients form the coefficient matrix, and multiplying the coefficient matrix by the dictionary matrix yields the fused low-frequency image, the weighting factors being determined by the norms of the sparse coefficient vectors; a high-frequency image fusion step, in which the matching degree between the high-frequency images of the two images is computed, and, when the matching degree is less than a threshold, the central gray pixel value of the higher-energy region is selected as the gray value of the central point of the fused image in the corresponding region, while, when the matching degree is not less than the threshold, the gray value of that central point is determined as a weighted average of the two high-frequency images over the corresponding region, and the gray values obtained point by point are taken as the pixel gray values of the fused high-frequency image; and an image reconstruction step, in which, after the above low-frequency and high-frequency fusion steps, a fused Laplacian pyramid is obtained and the inverse transform is applied to it to reconstruct the image, yielding the fused image of the two input images.
Further, the multi-exposure image fusion method of the present invention is characterized in that, when the two images are decomposed into Laplacian pyramids with the same number of frequency layers, the number of layers is set manually.
Further, the multi-exposure image fusion method of the present invention is characterized in that, from each layer of the N-layer Gaussian pyramid, the layer above it, after up-sampling and Gaussian convolution, is subtracted, yielding N-1 difference images, which are the high-frequency images of the Laplacian pyramid; the top layer of the N-layer Gaussian pyramid is the low-frequency image of the Laplacian pyramid.
Further, the multi-exposure image fusion method of the present invention is characterized in that, when the two images are decomposed into Laplacian pyramids with the same number of frequency layers, the number of layers is 4, and each image yields 3 high-frequency images and 1 low-frequency image.
Brief description of the drawings
Fig. 1 is a schematic diagram of the multi-exposure fusion framework based on over-complete sparse representation.
Fig. 2 is a schematic diagram of the image fusion process based on multi-scale decomposition.
Fig. 3 is a schematic diagram of the framework of the present invention.
Detailed description of the embodiments
The present invention constructs training samples from multi-exposure image sequences and uses the dictionary learning algorithm K-SVD to generate the dictionary matrix D. Sparsity allows the sparse representation to select, adaptively and more accurately, the atoms most correlated with the signal to be processed, and strengthens the adaptability of the signal processing method; this is why the present invention uses sparse matrices to represent image features.
In addition, the human visual system responds to image features at different scales; based on this idea, frequency-domain image fusion methods were developed. In the fusion process, a multi-resolution decomposition splits the image into different frequency layers, and fusion is carried out separately in each frequency layer. In this way, different fusion rules can be adopted for the features and details of the different frequency layers, so that features and details in specific frequency bands are highlighted. The multi-resolution decomposition of an image decomposes it bottom-up, each layer being obtained from the result of the previous layer by some operation. Among fusion methods based on multi-resolution decomposition, those based on pyramid transforms have received wide attention. The basic procedure is: apply a pyramid decomposition to each input image; select coefficients with a fusion rule to obtain the fused pyramid; finally, apply the inverse transform to the new pyramid to reconstruct the image and obtain the fusion result image. This process is illustrated in Fig. 2.
Based on the above analysis, the present invention proposes a framework that combines the sparse representation framework with multi-resolution decomposition to realize multi-exposure fusion: the low-frequency image obtained by the multi-resolution decomposition approximates the original image, and a weighted-average fusion rule designed around "sparsity" realizes the low-frequency image fusion under the sparse representation framework; for the high-frequency images, a fusion strategy that uses the neighborhood information of each pixel as the criterion preserves more edge and texture information of the images.
The proposed method consists mainly of four parts: an image multi-scale decomposition step, a low-frequency image fusion step, a high-frequency image fusion step and an image reconstruction step, as shown in Fig. 3. To simplify the description, assume the input image sequence contains only one low-exposure image A and one high-exposure image B. For these two images, Laplacian pyramid decomposition first realizes the scale decomposition, giving the high-frequency and low-frequency images of A and B respectively; different fusion methods are then applied to the different frequency images. An end-to-end sketch is given below; the Laplacian pyramid processing is described first, followed by the low-frequency and high-frequency fusion processes.
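The overall flow can be summarized by the Python sketch below. The helper names laplacian_pyramid, fuse_highfreq_energy, fuse_lowfreq_sparse and reconstruct_from_pyramid are hypothetical and correspond to the sketches in the following sections; the dictionary D, the 4 pyramid levels and the 0.85 threshold follow the values mentioned in this description.

```python
def fuse_exposures(img_a, img_b, D, levels=4, match_thresh=0.85):
    """End-to-end sketch: pyramid split, per-band fusion, pyramid rebuild."""
    pyr_a = laplacian_pyramid(img_a, levels)        # levels-1 high-frequency layers + low-frequency top
    pyr_b = laplacian_pyramid(img_b, levels)

    fused = []
    for hf_a, hf_b in zip(pyr_a[:-1], pyr_b[:-1]):  # high-frequency (band-pass) layers
        fused.append(fuse_highfreq_energy(hf_a, hf_b, thresh=match_thresh))
    fused.append(fuse_lowfreq_sparse(pyr_a[-1], pyr_b[-1], D))  # low-frequency top layer

    return reconstruct_from_pyramid(fused)
```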
1. Laplacian pyramid
When a Gaussian pyramid is built, the convolution and down-sampling operations cause the image to lose part of its high-frequency detail. To address this, the Laplacian pyramid was proposed on the basis of the Gaussian pyramid. Its basic procedure is: from each layer of the Gaussian pyramid, subtract the layer above it after that layer has been up-sampled and smoothed by Gaussian convolution; this yields a series of difference images which, together with the top layer of the Gaussian pyramid, form the Laplacian decomposition of the image. In general, establishing the Laplacian pyramid decomposition of an image involves four basic steps: low-pass filtering, down-sampling, interpolation and band-pass filtering. Every layer of the Laplacian pyramid (except the top layer) preserves and highlights the edge feature information of the image, which is significant for image compression and for further analysis, understanding and processing.
1) The Laplacian pyramid transform process:
First, G_l is enlarged by interpolation so that the enlarged image G_l* has the same size as G_{l-1}. For brevity, the enlargement operator Expand is introduced and defined as:
G_l* = Expand(G_l)    (1)
where the Expand operator is defined as:
G_l*(i, j) = 4 · Σ_{m=-2..2} Σ_{n=-2..2} w(m, n) · G_l'((i+m)/2, (j+n)/2),   0 < l ≤ Num, 0 ≤ i < C_l, 0 ≤ j < R_l    (2)
G_l'((i+m)/2, (j+n)/2) = G_l((i+m)/2, (j+n)/2) when (i+m)/2 and (j+n)/2 are both integers, and 0 otherwise    (3)
The Expand operator is the inverse of the Reduce operator, and G_l* has the same size as G_{l-1}. As can be seen from formula (1), the gray value of a new pixel interpolated between the original pixels is determined by a weighted average of the original pixel gray values. Since G_l is obtained by low-pass filtering and down-sampling G_{l-1}, i.e. it is a blurred, down-sampled version of G_{l-1}, the detail it contains is less than that of G_{l-1}. The Laplacian pyramid decomposition is defined as:
LP_l = G_l - Expand(G_{l+1}),  0 ≤ l < Num;   LP_Num = G_Num    (4)
In the formula above, Num denotes the level number of the top layer of the Laplacian pyramid, LP_l denotes the l-th layer of the Laplacian pyramid transform, and G_l denotes the l-th layer of the Gaussian pyramid transform. LP_0, LP_1, ..., LP_l, ..., LP_Num together constitute the Laplacian pyramid. Except for the top layer, each layer is the difference between the corresponding Gaussian pyramid layer and the next higher (coarser) layer enlarged by the Expand operator, and is a high-frequency image; the top layer is the low-frequency image. The number of layers is a manually set parameter: when it is set to 4, each image yields 3 high-frequency images and 1 low-frequency image. The low-frequency image is fused by the "low-frequency image fusion mechanism" described below, the 3 high-frequency images are fused by the "high-frequency image fusion mechanism", and the fusion result image is then obtained through the pyramid reconstruction process that follows.
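A minimal NumPy/SciPy sketch of this decomposition is given below. The 5-tap generating kernel (a = 0.375) and the reflect boundary handling are common choices assumed here, not values taken from the patent.

```python
import numpy as np
from scipy.ndimage import convolve

_k = np.array([0.0625, 0.25, 0.375, 0.25, 0.0625])   # standard 5-tap generating kernel, a = 0.375
_KERNEL = np.outer(_k, _k)                            # separable 2-D window w(m, n)

def reduce_(img):
    """One Gaussian-pyramid step: low-pass filter, then drop every other row and column."""
    return convolve(img, _KERNEL, mode='reflect')[::2, ::2]

def expand(img, shape):
    """Expand operator of Eqs. (1)-(3): zero-insertion upsampling, then smoothing (factor 4 keeps the mean)."""
    up = np.zeros(shape, dtype=float)
    up[::2, ::2] = img
    return 4.0 * convolve(up, _KERNEL, mode='reflect')

def laplacian_pyramid(img, levels=4):
    """Return [LP_0, ..., LP_{levels-2}, G_{levels-1}] as in Eq. (4): band-pass layers plus the low-frequency top."""
    gauss = [np.asarray(img, dtype=float)]
    for _ in range(levels - 1):
        gauss.append(reduce_(gauss[-1]))
    pyr = [g - expand(g_up, g.shape) for g, g_up in zip(gauss[:-1], gauss[1:])]
    pyr.append(gauss[-1])                             # top Gaussian layer = low-frequency image
    return pyr
```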
2) Rebuilding the original image from the Laplacian pyramid:
From formula (4), the reconstruction formula of the Laplacian pyramid is:
G_Num = LP_Num;   G_l = LP_l + Expand(G_{l+1}),  0 ≤ l < Num    (5)
The formula above shows that, by recursing top-down from the top layer of the Laplacian pyramid, the corresponding Gaussian pyramid can be recovered and the original image G_0 finally obtained. Each layer of the Laplacian pyramid can also be enlarged step by step with the Expand operator to the size of the original image and the results summed to reconstruct the original image exactly. This shows that the Laplacian pyramid decomposition is a complete representation of the image, which is one of its key properties.
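A corresponding reconstruction sketch, reusing the expand helper from the decomposition sketch above and following Eq. (5):

```python
def reconstruct_from_pyramid(pyr):
    """Collapse a Laplacian pyramid from the top down, as in Eq. (5)."""
    img = pyr[-1]                        # start from the low-frequency top layer, G_Num = LP_Num
    for band in reversed(pyr[:-1]):      # G_l = LP_l + Expand(G_{l+1}), from coarse to fine
        img = expand(img, band.shape) + band
    return img
```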
After the Laplacian pyramid decomposition above, the low-frequency image and the high-frequency images are obtained, and the present invention applies different fusion mechanisms to the different frequency bands. Since the low-frequency image approximates the original image, and in order to reduce the time complexity of performing sparse-representation fusion directly on the original image, a sparse representation fusion mechanism for the low-frequency image is proposed; the processing is as follows.
2. Low-frequency image fusion mechanism:
In the sparse representation framework, how to obtain the learned dictionary is one of the key problems to be solved. Constructing the dictionary from combinations of bases is a feasible and simple method; the bases or transforms that can be chosen include wavelet packets and bandlet packets. Although the base-combination approach is simple and intuitive, the resulting over-complete dictionary is not adaptive: a dictionary suited to the sparse representation of one class of signals does not necessarily guarantee sparse coefficients for other types of signals. In practice, therefore, the over-complete dictionary suited to a given class of signals is usually built by dictionary learning on that class. In the present invention, the Laplacian low-frequency images of several typical indoor and outdoor scenes are used as training samples, and the dictionary learning algorithm K-SVD generates the dictionary matrix D, i.e. the constructed over-complete dictionary.
Fusion algorithms based on over-complete sparse representation usually adopt a choose-max rule on the absolute coefficients, that is, a rule that keeps the sparse coefficient vector with the larger l1 norm. Fusion results obtained with this rule can lose scene detail. Considering that the atoms of the over-complete dictionary express edge features, a lower sparsity of the representation coefficients (more nonzero coefficients) means more salient features in the image block and richer information. When the coefficient vectors of the source images have the same sparsity, selecting the one with the larger l1 norm selects, among blocks with the same number of edge features, the block whose features are more pronounced for reconstructing the result image. The present invention analyzes the characteristics of over-complete sparse representation and of the sparse coefficients and designs a fusion rule based on sparsity: clear image blocks are identified by judging sparsity, and a weighted-average rule performs the fusion. The absolute values of the coefficients reflect the local energy of the represented image block, and larger local energy indicates a clear region. For the sparse coefficient vector of an image block, the l1 norm reflects the activity of the block: the larger the l1 norm, the more information the block carries.
Based on the above analysis, a dictionary matrix is learned with a dictionary learning algorithm (such as K-SVD); suppose its size is 64×512 (i.e. 512 atoms). Using the sliding window technique, the low-frequency images of A and B are each divided into a series of image blocks. The block size must be consistent with the dictionary size used to obtain the sparse coefficients: since the dictionary dimension is 64, the block size is 8×8. Each block is rearranged into a 64-dimensional vector, which is regarded as the input vector; from this input vector and the dictionary matrix, the sparse representation of the block over the dictionary, i.e. the sparse vector, is obtained.
The fusion rule we design is computed as follows:
f = ω_α·α + ω_β·β
ω_α = ||α||_1 / (||α||_1 + ||β||_1)
ω_β = 1 - ω_α    (6)
where α and β are the sparse coefficient vectors of the image blocks at the same position (for example, of size 8×8 pixels) in the input low-frequency images of A and B respectively, f is the fused coefficient vector to be reconstructed, ω_α and ω_β are the corresponding weighting factors, and ||·||_1 denotes the l1 norm of a coefficient vector, i.e. the sum of the absolute values of its elements. After formula (6) has been applied to all co-located blocks of the low-frequency images of A and B, the fused coefficients of all positions form the coefficient matrix; multiplying this matrix by the dictionary matrix D finally gives the fused low-frequency gray image.
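A Python sketch of this low-frequency fusion rule is given below, reusing extract_patches from the baseline sketch. For simplicity it codes non-overlapping 8×8 blocks rather than a dense sliding window, and the sparsity level and the use of scikit-learn's orthogonal matching pursuit are assumptions; D is a learned dictionary of shape (64, n_atoms), for example from K-SVD.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def fuse_lowfreq_sparse(low_a, low_b, D, patch=8, n_nonzero=8):
    """Weighted-average fusion of two low-frequency layers in the sparse domain (Eq. (6))."""
    Pa = extract_patches(low_a, size=patch, stride=patch)     # non-overlapping blocks as columns
    Pb = extract_patches(low_b, size=patch, stride=patch)
    alpha = orthogonal_mp(D, Pa, n_nonzero_coefs=n_nonzero)   # sparse codes, (n_atoms, n_blocks)
    beta = orthogonal_mp(D, Pb, n_nonzero_coefs=n_nonzero)

    l1_a = np.abs(alpha).sum(axis=0)                          # ||alpha||_1 per block
    l1_b = np.abs(beta).sum(axis=0)
    w_a = l1_a / (l1_a + l1_b + 1e-12)                        # omega_alpha of Eq. (6)
    fused_codes = w_a * alpha + (1.0 - w_a) * beta            # f = w_a*alpha + (1 - w_a)*beta

    blocks = D @ fused_codes                                  # back to pixel blocks
    out = np.zeros_like(np.asarray(low_a, dtype=float))
    H, W = out.shape
    idx = 0
    for i in range(0, H - patch + 1, patch):                  # re-tile the fused blocks
        for j in range(0, W - patch + 1, patch):
            out[i:i + patch, j:j + patch] = blocks[:, idx].reshape(patch, patch)
            idx += 1
    return out
```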
3. High-frequency image fusion mechanism:
For the high-frequency images produced by the Laplacian decomposition, the present invention uses region energy as the criterion: for each pixel, the image is filtered with a window of a certain size, and the filtered value represents the region energy at that point. The concrete steps are as follows.
Step 1: for the high-frequency images of source images A and B, the energy of the neighborhood window region centered at point (m, n) is defined as:
S_A(m, n) = Σ_{u∈U} Σ_{v∈V} ω(u, v)·[A(m+u, n+v)]^2
S_B(m, n) = Σ_{u∈U} Σ_{v∈V} ω(u, v)·[B(m+u, n+v)]^2    (7)
where ω(u, v) denotes the template window coefficients; a Gaussian template can be used.
Step 2: compute the local normalized matching degree between images A and B:
M_AB(m, n) = 2·Σ_{u∈U} Σ_{v∈V} ω(u, v)·A(m+u, n+v)·B(m+u, n+v) / (S_A(m, n) + S_B(m, n))    (8)
Step 3: for corresponding local regions of images A and B, when the matching degree M_AB is less than a threshold T (usually 0.85), the energies of the two images in that region differ considerably; the central gray pixel value of the higher-energy region is then selected as the gray value of the central point of the fused image F in the corresponding region. Otherwise, when the energies of the two images in the region are close, the gray value of the central point of the fused image in that region is determined by weighted averaging. This can be expressed by the following formulas:
When M_AB(m, n) ≤ T:
F(m, n) = A(m, n) if S_A(m, n) ≥ S_B(m, n), and F(m, n) = B(m, n) otherwise    (9)
When M_AB(m, n) > T:
F(m, n) = w(m, n)·A(m, n) + [1 - w(m, n)]·B(m, n)    (10)
where the weighting coefficient w(m, n) is given by formula (11).
In the above region-energy fusion rule, the neighborhood information of each pixel serves as the criterion. For the value of a pixel in the fused high-frequency image, when the values of the two fused images at that position differ greatly, the value from the higher-energy image is used as the fusion result; when the two values differ little, formula (11) gives the respective fusion weights and a weighted average is used. With this algorithm, the gray value at point (m, n) of the fused image F for the corresponding layer of the high-frequency images of A and B is obtained; proceeding in the same way for every point of F completes the high-frequency image fusion. Compared with simple selection based on a single independent pixel (gray-level extremum method) or simple fixed weighting (with directly given weight coefficients), this way of fusing images is more reasonable.
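A Python sketch of this high-frequency fusion follows. Because formula (11) is not reproduced in the text above, the weighted-average branch uses the common matching-degree weighting w = 1/2 ± 1/2·(1 - M_AB)/(1 - T), with the larger weight going to the higher-energy image; that choice, and the Gaussian window width, are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse_highfreq_energy(hf_a, hf_b, thresh=0.85, sigma=1.0):
    """Region-energy / matching-degree fusion of one pair of high-frequency layers (Eqs. (7)-(10))."""
    hf_a = np.asarray(hf_a, dtype=float)
    hf_b = np.asarray(hf_b, dtype=float)

    # Local energies S_A, S_B: Gaussian-weighted sums of squared coefficients (Eq. (7)).
    Sa = gaussian_filter(hf_a * hf_a, sigma)
    Sb = gaussian_filter(hf_b * hf_b, sigma)
    # Normalized matching degree M_AB (Eq. (8)).
    M = 2.0 * gaussian_filter(hf_a * hf_b, sigma) / (Sa + Sb + 1e-12)

    # Selection branch for poorly matched regions: keep the higher-energy coefficient (Eq. (9)).
    selected = np.where(Sa >= Sb, hf_a, hf_b)
    # Weighted-average branch for well-matched regions (Eq. (10); the weight is an assumed form of (11)).
    w_min = 0.5 - 0.5 * (1.0 - M) / (1.0 - thresh)
    w = np.where(Sa >= Sb, 1.0 - w_min, w_min)
    weighted = w * hf_a + (1.0 - w) * hf_b

    return np.where(M <= thresh, selected, weighted)
```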
In the image reconstruction step, after the low-frequency and high-frequency fusion above, a fused Laplacian pyramid is obtained; applying the inverse transform of formula (5) to it reconstructs the image and yields the fusion result image.
The above embodiments are only intended to illustrate the present invention and not to limit it. Those of ordinary skill in the relevant technical field may make various changes and modifications without departing from the spirit and scope of the present invention; all equivalent technical solutions therefore also fall within the scope of the present invention, whose scope of patent protection shall be defined by the claims.

Claims (4)

1. A multi-exposure image fusion method, characterized by comprising:
A multi-scale decomposition step: applying to two images a Laplacian pyramid decomposition comprising low-pass filtering, down-sampling, interpolation and band-pass filtering, decomposing the two images into Laplacian pyramids with the same number of frequency layers, and obtaining the low-frequency image and the high-frequency images of each of the two images;
A low-frequency image fusion step: using Laplacian low-frequency images of indoor and outdoor scenes as training samples and generating a dictionary matrix with the dictionary learning algorithm K-SVD; dividing each image's low-frequency image into multiple image blocks according to the dictionary matrix; obtaining, from the sparse coefficient vectors of the co-located low-frequency blocks of the two images and their corresponding weighting factors, the fused coefficients to be reconstructed for each position; forming the coefficient matrix from the multiple fused coefficients; and multiplying the coefficient matrix by the dictionary matrix to obtain the fused low-frequency image, wherein the weighting factors are determined by the norms of the sparse coefficient vectors;
A high-frequency image fusion step: computing the matching degree between the high-frequency images of the two images; when the matching degree is less than a threshold, selecting the central gray pixel value of the higher-energy region as the gray value of the central point of the fused image in the corresponding region; when the matching degree is not less than the threshold, determining the gray value of that central point as a weighted average of the two high-frequency images over the corresponding region; and taking the gray values obtained point by point as the pixel gray values of the fused high-frequency image, thereby obtaining the fused high-frequency image;
An image reconstruction step: after the above low-frequency and high-frequency fusion steps, obtaining a fused Laplacian pyramid and applying the inverse transform to it to reconstruct the image, obtaining the fused image of the two images.
2. The multi-exposure image fusion method of claim 1, characterized in that, when the two images are decomposed into Laplacian pyramids with the same number of frequency layers, the number of layers is set manually.
3. The multi-exposure image fusion method of claim 1, characterized in that, from each layer of the N-layer Gaussian pyramid, the layer above it, after up-sampling and Gaussian convolution, is subtracted, yielding N-1 difference images, which are the high-frequency images of the Laplacian pyramid, and the top layer of the N-layer Gaussian pyramid is the low-frequency image of the Laplacian pyramid.
4. The multi-exposure image fusion method of claim 1, characterized in that, when the two images are decomposed into Laplacian pyramids with the same number of frequency layers, the number of layers is 4, and each image yields 3 high-frequency images and 1 low-frequency image.
CN201510184151.1A 2015-04-17 2015-04-17 Multi-exposure image fusion method Pending CN104835130A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510184151.1A CN104835130A (en) 2015-04-17 2015-04-17 Multi-exposure image fusion method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510184151.1A CN104835130A (en) 2015-04-17 2015-04-17 Multi-exposure image fusion method

Publications (1)

Publication Number Publication Date
CN104835130A true CN104835130A (en) 2015-08-12

Family

ID=53813000

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510184151.1A Pending CN104835130A (en) 2015-04-17 2015-04-17 Multi-exposure image fusion method

Country Status (1)

Country Link
CN (1) CN104835130A (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101540045A (en) * 2009-03-25 2009-09-23 湖南大学 Multi-source image fusion method based on synchronous orthogonal matching pursuit algorithm
CN102393958A (en) * 2011-07-16 2012-03-28 西安电子科技大学 Multi-focus image fusion method based on compressive sensing
CN102651124A (en) * 2012-04-07 2012-08-29 西安电子科技大学 Image fusion method based on redundant dictionary sparse representation and evaluation index
CN102855616A (en) * 2012-08-14 2013-01-02 西北工业大学 Image fusion method based on multi-scale dictionary learning
CN103164850A (en) * 2013-03-11 2013-06-19 南京邮电大学 Method and device for multi-focus image fusion based on compressed sensing

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
JINHUA WANG 等: "Exposure fusion based on sparse representation using approximate K-SVD", 《NEUROCOMPUTING》 *
王金华: "高动悉范围场景可视化技术研究", 《中国博士学位论文全文数据库(电子期刊) 信息科技辑》 *
陈垚佳 等: "基于分块过完备稀疏表示的多聚焦图像融合", 《电视技术》 *
陈磊: "异类图像多级混合融合技术研究", 《中国优秀硕士学位论文全文数据库 信息科技辑》 *
首照宇 等: "改进的基于稀疏表示的多聚焦图像融合", 《电视技术》 *

Cited By (62)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105187739A (en) * 2015-09-18 2015-12-23 北京中科慧眼科技有限公司 Camera sensor design method based on HDR algorithm
CN105844606A (en) * 2016-03-22 2016-08-10 博康智能网络科技股份有限公司 Wavelet transform-based image fusion method and system thereof
CN106127718B (en) * 2016-06-17 2018-12-07 中国人民解放军国防科学技术大学 A kind of more exposure image fusion methods based on wavelet transformation
CN106127718A (en) * 2016-06-17 2016-11-16 中国人民解放军国防科学技术大学 A kind of many exposure images fusion method based on wavelet transformation
CN106251365A (en) * 2016-07-22 2016-12-21 北京邮电大学 Many exposure video fusion method and device
CN106375675A (en) * 2016-08-30 2017-02-01 中国科学院长春光学精密机械与物理研究所 Aerial camera multi-exposure image fusion method
CN106375675B (en) * 2016-08-30 2019-04-05 中国科学院长春光学精密机械与物理研究所 A kind of more exposure image fusion methods of aerial camera
CN106780392A (en) * 2016-12-27 2017-05-31 浙江大华技术股份有限公司 A kind of image interfusion method and device
CN106780392B (en) * 2016-12-27 2020-10-02 浙江大华技术股份有限公司 Image fusion method and device
US11030731B2 (en) 2016-12-27 2021-06-08 Zhejiang Dahua Technology Co., Ltd. Systems and methods for fusing infrared image and visible light image
CN107103331A (en) * 2017-04-01 2017-08-29 中北大学 A kind of image interfusion method based on deep learning
CN109120819B (en) * 2017-06-26 2020-11-24 帝肯贸易股份公司 Imaging wells of a microplate
CN109120819A (en) * 2017-06-26 2019-01-01 泰肯贸易股份公司 The well of minitype plate is imaged
US10841507B2 (en) 2017-06-26 2020-11-17 Tecan Trading Ag Imaging a well of a microplate
CN107726990B (en) * 2017-09-18 2019-06-21 西安电子科技大学 The acquisition of dot matrix grid image and recognition methods in a kind of Sheet metal forming strain measurement
CN107726990A (en) * 2017-09-18 2018-02-23 西安电子科技大学 The collection of dot matrix grid image and recognition methods in a kind of Sheet metal forming strain measurement
US11408987B2 (en) 2017-09-25 2022-08-09 Philips Image Guided Therapy Corporation Ultrasonic imaging with multi-scale processing for grating lobe suppression
CN107729905A (en) * 2017-10-19 2018-02-23 珠海格力电器股份有限公司 Image information processing method and device
WO2019085929A1 (en) * 2017-10-31 2019-05-09 比亚迪股份有限公司 Image processing method, device for same, and method for safe driving
CN109727188A (en) * 2017-10-31 2019-05-07 比亚迪股份有限公司 Image processing method and its device, safe driving method and its device
CN108074220A (en) * 2017-12-11 2018-05-25 上海顺久电子科技有限公司 A kind of processing method of image, device and television set
CN108074220B (en) * 2017-12-11 2020-07-14 上海顺久电子科技有限公司 Image processing method and device and television
CN108827184A (en) * 2018-04-28 2018-11-16 南京航空航天大学 A kind of structure light self-adaptation three-dimensional measurement method based on camera response curve
CN108827184B (en) * 2018-04-28 2020-04-28 南京航空航天大学 Structured light self-adaptive three-dimensional measurement method based on camera response curve
CN108717690B (en) * 2018-05-21 2022-03-04 电子科技大学 Method for synthesizing high dynamic range picture
CN108717690A (en) * 2018-05-21 2018-10-30 电子科技大学 A kind of synthetic method of high dynamic range photo
CN108898609A (en) * 2018-06-21 2018-11-27 深圳辰视智能科技有限公司 A kind of method for detecting image edge, detection device and computer storage medium
CN109003228A (en) * 2018-07-16 2018-12-14 杭州电子科技大学 A kind of micro- big visual field automatic Mosaic imaging method of dark field
CN109003228B (en) * 2018-07-16 2023-06-13 杭州电子科技大学 Dark field microscopic large-view-field automatic stitching imaging method
CN109670522A (en) * 2018-09-26 2019-04-23 天津工业大学 A kind of visible images and infrared image fusion method based on multidirectional laplacian pyramid
CN109492628A (en) * 2018-10-08 2019-03-19 杭州电子科技大学 A kind of implementation method of the spherical positioning system applied to the undisciplined crawl in classroom
CN109410219A (en) * 2018-10-09 2019-03-01 山东大学 A kind of image partition method, device and computer readable storage medium based on pyramid fusion study
CN109410219B (en) * 2018-10-09 2021-09-03 山东大学 Image segmentation method and device based on pyramid fusion learning and computer readable storage medium
US11875520B2 (en) 2018-11-21 2024-01-16 Zhejiang Dahua Technology Co., Ltd. Method and system for generating a fusion image
CN110246108A (en) * 2018-11-21 2019-09-17 浙江大华技术股份有限公司 A kind of image processing method, device and computer readable storage medium
CN110084773A (en) * 2019-03-25 2019-08-02 西北工业大学 A kind of image interfusion method based on depth convolution autoencoder network
CN110047058A (en) * 2019-03-25 2019-07-23 杭州电子科技大学 A kind of image interfusion method based on residual pyramid
CN110047058B (en) * 2019-03-25 2021-04-30 杭州电子科技大学 Image fusion method based on residual pyramid
CN110111287A (en) * 2019-04-04 2019-08-09 上海工程技术大学 A kind of fabric multi-angle image emerging system and its method
WO2020237931A1 (en) * 2019-05-24 2020-12-03 Zhejiang Dahua Technology Co., Ltd. Systems and methods for image processing
CN110288558A (en) * 2019-06-26 2019-09-27 纳米视觉(成都)科技有限公司 A kind of super depth image fusion method and terminal
CN110288558B (en) * 2019-06-26 2021-08-31 福州鑫图光电有限公司 Super-depth-of-field image fusion method and terminal
WO2021026822A1 (en) * 2019-08-14 2021-02-18 深圳市大疆创新科技有限公司 Image processing method and apparatus, image photographing device, and mobile terminal
CN110516435A (en) * 2019-09-02 2019-11-29 国网电子商务有限公司 Private key management method and device based on biological characteristics
CN110689510B (en) * 2019-09-12 2022-04-08 北京航天控制仪器研究所 Sparse representation-based image fusion method introducing dictionary information
CN110689510A (en) * 2019-09-12 2020-01-14 北京航天控制仪器研究所 Sparse representation-based image fusion method introducing dictionary information
CN111583167B (en) * 2020-05-14 2022-06-07 山东大学第二医院 Image fusion method for holmium laser gravel
CN111583167A (en) * 2020-05-14 2020-08-25 山东大学第二医院 Image fusion method for holmium laser gravel
CN111696067B (en) * 2020-06-16 2023-04-07 桂林电子科技大学 Gem image fusion method based on image fusion system
CN111696067A (en) * 2020-06-16 2020-09-22 桂林电子科技大学 Gem image fusion method based on image fusion system
CN111967523A (en) * 2020-08-19 2020-11-20 佳木斯大学 Data fusion agricultural condition detection system and method based on multi-rotor aircraft
CN111967523B (en) * 2020-08-19 2022-11-15 佳木斯大学 Data fusion agricultural condition detection system and method based on multi-rotor aircraft
CN112184606A (en) * 2020-09-24 2021-01-05 南京晓庄学院 Fusion method of visible light image and infrared image based on Laplacian pyramid
CN112116102A (en) * 2020-09-27 2020-12-22 张洪铭 Method and system for expanding domain adaptive training set
CN112634187B (en) * 2021-01-05 2022-11-18 安徽大学 Wide dynamic fusion algorithm based on multiple weight mapping
CN112634187A (en) * 2021-01-05 2021-04-09 安徽大学 Wide dynamic fusion algorithm based on multiple weight mapping
CN113362264A (en) * 2021-06-23 2021-09-07 中国科学院长春光学精密机械与物理研究所 Gray level image fusion method
CN113362264B (en) * 2021-06-23 2022-03-18 中国科学院长春光学精密机械与物理研究所 Gray level image fusion method
CN113793272A (en) * 2021-08-11 2021-12-14 东软医疗系统股份有限公司 Image noise reduction method and device, storage medium and terminal
CN113793272B (en) * 2021-08-11 2024-01-26 东软医疗系统股份有限公司 Image noise reduction method and device, storage medium and terminal
CN117611471A (en) * 2024-01-22 2024-02-27 中国科学院长春光学精密机械与物理研究所 High-dynamic image synthesis method based on texture decomposition model
CN117611471B (en) * 2024-01-22 2024-04-09 中国科学院长春光学精密机械与物理研究所 High-dynamic image synthesis method based on texture decomposition model

Similar Documents

Publication Publication Date Title
CN104835130A (en) Multi-exposure image fusion method
Zuo et al. Gradient histogram estimation and preservation for texture enhanced image denoising
CN104809734B (en) A method of the infrared image based on guiding filtering and visual image fusion
Kumar et al. Convolutional neural networks for wavelet domain super resolution
CN105046672B (en) A kind of image super-resolution rebuilding method
CN106981080A (en) Night unmanned vehicle scene depth method of estimation based on infrared image and radar data
CN106952228A (en) The super resolution ratio reconstruction method of single image based on the non local self-similarity of image
CN105046651B (en) A kind of ultra-resolution ratio reconstructing method and device of image
CN106408550A (en) Improved self-adaptive multi-dictionary learning image super-resolution reconstruction method
Li et al. Fusion of medical sensors using adaptive cloud model in local Laplacian pyramid domain
Akl et al. A survey of exemplar-based texture synthesis methods
He et al. Remote sensing image super-resolution using deep–shallow cascaded convolutional neural networks
Chen et al. A sparse representation and dictionary learning based algorithm for image restoration in the presence of Rician noise
CN105931181B (en) Super resolution image reconstruction method and system based on non-coupled mapping relations
Chen et al. End-to-end single image enhancement based on a dual network cascade model
CN104361571A (en) Infrared and low-light image fusion method based on marginal information and support degree transformation
CN109559278B (en) Super resolution image reconstruction method and system based on multiple features study
Zhang et al. Infrared and visible image fusion with entropy-based adaptive fusion module and mask-guided convolutional neural network
Karthikeyan et al. Energy based denoising convolutional neural network for image enhancement
CN113034371A (en) Infrared and visible light image fusion method based on feature embedding
Wu et al. Details-preserving multi-exposure image fusion based on dual-pyramid using improved exposure evaluation
CN112686830A (en) Super-resolution method of single depth map based on image decomposition
CN117197627A (en) Multi-mode image fusion method based on high-order degradation model
Luo et al. Infrared and visible image fusion based on VPDE model and VGG network
CN107133921A (en) The image super-resolution rebuilding method and system being embedded in based on multi-level neighborhood

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
EXSB Decision made by sipo to initiate substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20150812