CN101303764A - Multi-sensor image adaptive fusion method based on the nonsubsampled contourlet transform - Google Patents

Multi-sensor image adaptive fusion method based on the nonsubsampled contourlet transform Download PDF

Info

Publication number
CN101303764A
Authority
CN
China
Prior art keywords
search
fusion
region
low
subband
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA2008100182377A
Other languages
Chinese (zh)
Other versions
CN101303764B (en
Inventor
焦李成
侯彪
常霞
王爽
公茂果
刘芳
张向荣
马文萍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Discovery Turing Technology Xi'an Co ltd
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN2008100182377A priority Critical patent/CN101303764B/en
Publication of CN101303764A publication Critical patent/CN101303764A/en
Application granted granted Critical
Publication of CN101303764B publication Critical patent/CN101303764B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a multi-sensor image adaptive fusion method based on the nonsubsampled contourlet transform (NSCT), which mainly aims at solving the problem that existing image fusion methods easily introduce distortion into the fused image. The fusion process comprises the following steps: each source image is decomposed by the NSCT to obtain its lowpass subband and its high-frequency directional subbands at each scale; the Fibonacci method is applied to the lowpass subbands to find the optimal low-frequency subband fusion weight; the lowpass subband of the fused image is obtained by adaptively fusing the source lowpass subbands with this optimal weight; the high-frequency directional subbands of the source images at each scale are fused with a high-frequency fusion formula to obtain the high-frequency directional subbands of the fused image at each scale; finally, the inverse NSCT is applied to the lowpass subband and the high-frequency directional subbands of the image to be fused, yielding the fused image. The method produces fused images that are smooth and clear and that retain rich detail information, and can therefore be applied to the preprocessing of remote sensing and aerial images.

Description

Multi-sensor image adaptive fusion method based on the nonsubsampled contourlet transform
Technical field
The invention belongs to the technical field of image processing, and in particular relates to a method that uses the nonsubsampled contourlet transform to fuse multi-sensor images adaptively. The method can be used for the preprocessing of remote sensing and aerial images.
Background technology
Multi-sensor image fusion is an important branch of data fusion. The image data obtained by any single sensor has obvious limitations and differences in geometric, spectral, temporal and spatial resolution, so the data from a single sensor can hardly meet practical requirements. In particular, in future warfare the electromagnetic environment will be extremely complex, so that air, sea and land operations, as well as combined three-dimensional operations, will depend increasingly on a variety of sensor devices. To obtain a more comprehensive, clear and accurate understanding of an observed target, a technique that makes full use of all kinds of image data is urgently needed. Integrating the respective advantages and the complementarity of different image data is therefore both important and practical.
Multi-sensor image fusion methods fall roughly into two classes. The first class operates in the spatial domain: the registered source images are directly weighted and averaged to obtain a new fused image. This method is simple, but it ignores the differences in the amount of important target information that different sensor images contain in corresponding regions; it amounts to a compromise between a fused image carrying the maximum amount of information and one carrying the minimum. When the source images to be fused differ greatly, the fused image produced by this method shows obvious artificial stitching traces, which is unfavorable for subsequent image processing. In contrast, many successful transform-based methods have been proposed at home and abroad in recent years. These methods use a multiscale transform as the tool for extracting salient image features. They include pyramid-decomposition methods, such as the Laplacian pyramid, the gradient pyramid and the ratio-of-lowpass pyramid, and wavelet-based methods.
Wavelet-based image fusion methods provide better fusion performance than pyramid-decomposition methods. However, wavelet decomposition and reconstruction are in fact lowpass/highpass filtering processes, and the interpolation involved in most filters can produce ringing in the resulting image, because a filter with good frequency selectivity normally oscillates. The wavelet transform lacks shift invariance, and it inevitably introduces ringing and jitter into the homogeneous regions near the sharp edges of the fused image; this ringing jitter is related to the discrete nature of downsampling. To overcome the lack of shift invariance of wavelets, scholars have proposed a series of solutions. At present, nonsubsampled transforms are the preferred strategy for avoiding ringing. The nonsubsampled contourlet transform (NSCT) is a new shift-invariant, multiscale, local, multidirectional, overcomplete image representation. The NSCT satisfies the anisotropic scaling relation and has good directionality; it can accurately capture edge-contour and texture-detail information in an image, and is therefore well suited to representing multi-sensor images rich in detail and directional information. However, most existing multi-sensor image fusion methods set the parameters of the lowpass-subband fusion model after the transform by weighted summation, while the relation between the fusion targets of multi-sensor images is not a simple linear weighting; moreover, weighted summation requires the weight of each target to be known in advance, which involves strong subjective preference and directly affects the quality of the fused image. It is therefore necessary to use an optimization method to obtain the best parameter configuration.
Summary of the invention
The object of the invention is to overcome the shortcomings of the prior art, namely to avoid the distortion easily introduced into fused images by simple transform-based methods, by proposing a multi-sensor image adaptive fusion method based on the nonsubsampled contourlet transform, so as to improve the quality of the fused image.
The technical scheme realizing the object of the invention is: the nonsubsampled contourlet transform is used as the tool for transforming the source images, and the Fibonacci method is used to search for the optimal low-frequency subband fusion weight, thereby achieving adaptive image fusion. The concrete implementation process is as follows:
(1) Input the multi-sensor source images A and B, and apply an L-level nonsubsampled contourlet (NSCT) decomposition to each of them, obtaining the lowpass subbands S_L^A(n,m) and S_L^B(n,m) and the high-frequency directional subbands at each scale {D_{l,i}^A(n,m), 0≤l≤L-1, 1≤i≤k_l} and {D_{l,i}^B(n,m), 0≤l≤L-1, 1≤i≤k_l}, where k_l denotes the number of high-frequency directional subbands at scale 2^-l, D_{l,i}^A(n,m) denotes the i-th directional subband of source image A at scale 2^-l, D_{l,i}^B(n,m) denotes the i-th directional subband of source image B at scale 2^-l, and L is 3 to 5;
(2) Apply the Fibonacci method to the lowpass subbands to search for the optimal low-frequency subband fusion weight w*, and fuse the lowpass-subband coefficients of the multi-sensor source images with the low-frequency fusion formula: S_L^F(n,m) = w* · S_L^A(n,m) + (1 - w*) · S_L^B(n,m);
(3) Fuse the high-frequency directional subbands at each scale with the high-frequency fusion formula:

D_{l,i}^F(n,m) = D_{l,i}^A(n,m) if D_l^A(n,m) ≥ D_l^B(n,m), otherwise D_{l,i}^B(n,m),   0≤l≤L-1, 1≤i≤k_l

where D_l^A(n,m) = Σ_{1≤i≤k_l} |D_{l,i}^A(n,m)| is the high-frequency directional-subband information sum of source image A at scale 2^-l, and D_l^B(n,m) = Σ_{1≤i≤k_l} |D_{l,i}^B(n,m)| is the high-frequency directional-subband information sum of source image B at scale 2^-l;
(4) Apply the inverse nonsubsampled contourlet transform (inverse NSCT) to the fused lowpass subband S_L^F(n,m) and the fused high-frequency directional subbands at each scale {D_{l,i}^F(n,m), 0≤l≤L-1, 1≤i≤k_l}, obtaining the fused image F.
The search for the optimal low-frequency subband fusion weight w* over the lowpass subbands by the Fibonacci method proceeds as follows:
1) Take the fusion quality index Q_E(w) of the lowpass-subband coefficients of the sensor images as the search objective function;
2) Set the initial value of the left endpoint a of the search interval to 0 and the initial value of the right endpoint b to 1, so that the initial search interval [a,b] is [0,1]; set the minimum search-interval length ε = 0.01; compute the first trial point of the low-frequency fusion weight in [a,b] as w_1 = a + 0.382·(b - a) and the second trial point as w_2 = a + b - w_1, with w_1, w_2 ∈ [a,b];
3) From the first trial value of the low-frequency fusion weight, compute the fusion quality index Q_E(w_1) at the first trial point;
4) From the second trial value of the low-frequency fusion weight, compute the fusion quality index Q_E(w_2) at the second trial point;
5) Compare Q_E(w_1) with Q_E(w_2); if Q_E(w_1) is greater than Q_E(w_2), update the left endpoint of the search interval to a' = w_1, compare the new interval length |b - a'| with the minimum length ε, and determine the optimal low-frequency subband fusion weight w* as follows:
Case 1: if |b - a'| < ε, take w* as the midpoint of the new search interval, w* = (a' + b)/2, and stop the search;
Case 2: if |b - a'| ≥ ε, update the two trial points of the search interval, recompute the fusion quality indexes Q_E(w_1') and Q_E(w_2'), and determine w* by comparing these two indexes again;
6) Compare Q_E(w_1) with Q_E(w_2); if Q_E(w_1) is less than or equal to Q_E(w_2), update the right endpoint of the search interval to b' = w_2, compare the new interval length |b' - a| with the minimum length ε, and determine the optimal low-frequency subband fusion weight w* as follows:
Case 1: if |b' - a| < ε, take w* as the midpoint of the new search interval, w* = (a + b')/2, and stop the search;
Case 2: if |b' - a| ≥ ε, update the two trial points of the search interval, recompute the fusion quality indexes Q_E(w_1') and Q_E(w_2'), and determine w* by comparing these two indexes again.
Compared with the prior art, the invention has the following advantages:
1. Because the nonsubsampled contourlet transform is used, the invention effectively avoids the jitter distortion produced in fused images when the transform tool lacks shift invariance.
2. Because the invention adaptively selects the best low-frequency fusion weight for different multi-sensor images, it not only makes full use of the image information and effectively preserves the texture and edge information of the source images, but also yields a fused image in which the target scene is clear and detailed.
3. Simulation results show that the invention achieves a significant improvement over traditional multi-sensor image fusion methods.
The technical process and effects of the invention are described in detail below with reference to the accompanying drawings:
Description of drawings
Fig. 1 is a schematic diagram of the implementation process of the invention;
Fig. 2 is a schematic diagram of the adaptive search of the invention for the best low-frequency fusion weight w*;
Fig. 3 shows the simulated fusion results of the invention and existing methods on sensor image group im1;
Fig. 4 shows the simulated fusion results of the invention and existing methods on sensor image group im2;
Fig. 5 shows the simulated fusion results of the invention and existing methods on sensor image group im3.
Embodiment
The example of the invention fuses two multi-sensor images A and B.
With reference to Fig. 1, the concrete steps of the invention are as follows:
Step 1: input the multi-sensor images A and B and apply an L-level NSCT decomposition to each of them.
The NSCT is a new shift-invariant, multiscale, local, multidirectional, overcomplete image representation. Its structure is based on a nonsubsampled pyramid filter bank and a nonsubsampled directional filter bank, the two parts being independent of each other. One level of NSCT decomposition of source images A and B proceeds as follows:
1) Feed source images A and B into the nonsubsampled pyramid filter bank, obtaining the lowpass signal and the bandpass signal of the one-level NSCT decomposition of A and B;
2) Feed the bandpass signals of A and B into the nonsubsampled directional filter bank, obtaining the high-frequency directional subbands of the one-level NSCT decomposition of A and B; the number of high-frequency directional subbands can be any power of 2;
3) Take the lowpass signals of the NSCT decomposition of A and B as the new input images and repeat steps 1) and 2) above, obtaining the lowpass subbands S_L^A(n,m) and S_L^B(n,m) of the L-level NSCT decomposition of A and B and the high-frequency directional subbands at each scale {D_{l,i}^A(n,m), 0≤l≤L-1, 1≤i≤k_l} and {D_{l,i}^B(n,m), 0≤l≤L-1, 1≤i≤k_l},
where k_l denotes the number of high-frequency directional subbands at scale 2^-l, D_{l,i}^A(n,m) denotes the i-th directional subband of source image A at scale 2^-l, and L is 3 to 5.
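The decomposition loop in steps 1) to 3) can be sketched in Python. The sketch below is not a real NSCT implementation (a genuine one needs the à trous nonsubsampled pyramid filters and the nonsubsampled fan-shaped directional filter banks); it only mirrors the structure described above: nothing is downsampled, every subband keeps the size of the source image, and the source is recovered exactly by summing the subbands. The binomial `blur` lowpass and the two-way "directional" split are simplified stand-ins.

```python
import numpy as np

def blur(x):
    """Stand-in nonsubsampled lowpass: separable 3x3 binomial filter,
    circular boundary handling, no downsampling."""
    x = 0.25 * (np.roll(x, 1, axis=0) + 2 * x + np.roll(x, -1, axis=0))
    return 0.25 * (np.roll(x, 1, axis=1) + 2 * x + np.roll(x, -1, axis=1))

def one_level(img):
    """One level: pyramid split into lowpass + bandpass (step 1), then a
    crude two-way 'directional' split of the bandpass (step 2) whose two
    kernels sum to the identity, so d1 + d2 reproduces the bandpass."""
    low = blur(img)
    band = img - low
    d1 = 0.5 * (band + np.roll(band, 1, axis=1))  # horizontally smoothed part
    d2 = band - d1                                # horizontal-difference part
    return low, [d1, d2]

def decompose(img, L=3):
    """Step 3: feed the lowpass signal back in as the new input, L times."""
    subbands = []
    low = img
    for _ in range(L):
        low, d = one_level(low)
        subbands.append(d)
    return low, subbands

rng = np.random.default_rng(0)
img = rng.random((16, 16))
low, subbands = decompose(img, L=3)
# Every subband keeps the 16x16 source size; summing the lowpass subband
# and all directional parts reconstructs the source (up to rounding).
rec = low + sum(sum(d) for d in subbands)
```

A real NSCT replaces `blur` with the à trous pyramid filters (upsampled by 2^l at level l) and the two-way split with 2^k fan directional filters, but the per-level bookkeeping of subbands is the same.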
Step 2: search for the optimal low-frequency subband fusion weight w* by the Fibonacci method.
The Fibonacci method works by discarding the sub-interval containing the "worse" trial point and shrinking the search interval toward the "better" one; by repeatedly comparing the function values at the trial points, the position of the optimal weight is estimated more and more accurately. The objective function is taken as the edge fusion quality index Q_E(w), which reflects how well the fused image preserves edges: the higher the value of this index, the better the quality of the fused image. The search for the optimal low-frequency subband fusion weight by the Fibonacci method proceeds as follows:
1) Take the fusion quality index Q_E(w) of the lowpass-subband coefficients of the multi-sensor images as the search objective function;
2) Set the initial value of the left endpoint a of the search interval to 0 and the initial value of the right endpoint b to 1, so that the initial search interval [a,b] is [0,1]; set the minimum search-interval length ε = 0.01; compute the first trial point of the low-frequency fusion weight in [a,b] as w_1 = a + 0.382·(b - a) and the second trial point as w_2 = a + b - w_1, with w_1, w_2 ∈ [a,b];
3) From the first trial value of the low-frequency fusion weight, compute the fusion quality index Q_E(w_1) at the first trial point;
4) From the second trial value of the low-frequency fusion weight, compute the fusion quality index Q_E(w_2) at the second trial point;
5) Compare Q_E(w_1) with Q_E(w_2); if Q_E(w_1) is greater than Q_E(w_2), update the left endpoint of the search interval to a' = w_1 and compare the new interval length |b - a'| with the minimum length ε to determine the optimal low-frequency subband fusion weight w*:
Case 1: if |b - a'| < ε, take w* as the midpoint of the new search interval, w* = (a' + b)/2, and stop the search;
Case 2: if |b - a'| ≥ ε, determine w* as follows:
5a) update the first trial point to w_1' = w_2 and the second trial point to w_2' = a' + 0.618·(b - a');
5b) recompute the fusion quality indexes Q_E(w_1') and Q_E(w_2'), and compare their values;
5c) if Q_E(w_1') is greater than Q_E(w_2'), update the left endpoint of the search interval to a'' = w_1' and compare the new interval length |b - a''| with the minimum length ε:
if |b - a''| < ε, take w* as the midpoint of the new search interval, w* = (a'' + b)/2, and stop the search;
if |b - a''| ≥ ε, update the two trial points of the search interval, recompute the fusion quality indexes Q_E(w_1'') and Q_E(w_2''), compare the two indexes, and return to step 5c); this process is repeated until the optimal low-frequency subband fusion weight w* is found;
5d) if Q_E(w_1') is less than or equal to Q_E(w_2'), update the right endpoint of the search interval to b' = w_2' and compare the new interval length |b' - a'| with the minimum length ε:
if |b' - a'| < ε, take w* as the midpoint of the new search interval, w* = (a' + b')/2, and stop the search;
if |b' - a'| ≥ ε, update the two trial points of the search interval and return to step 5b); this process is repeated until the optimal low-frequency subband fusion weight w* is found;
6) Compare Q_E(w_1) with Q_E(w_2); if Q_E(w_1) is less than or equal to Q_E(w_2), update the right endpoint of the search interval to b' = w_2 and compare the new interval length |b' - a| with the minimum length ε to determine the optimal low-frequency subband fusion weight w*:
Case 1: if |b' - a| < ε, take w* as the midpoint of the new search interval, w* = (a + b')/2, and stop the search;
Case 2: if |b' - a| ≥ ε, determine w* as follows:
6a) update the first trial point to w_1' = a + 0.382·(b' - a) and the second trial point to w_2' = w_1;
6b) recompute the fusion quality indexes Q_E(w_1') and Q_E(w_2'), and compare their values;
6c) if Q_E(w_1') is greater than Q_E(w_2'), update the left endpoint of the search interval to a' = w_1' and compare the new interval length |b' - a'| with the minimum length ε:
if |b' - a'| < ε, take w* as the midpoint of the new search interval, w* = (a' + b')/2, and stop the search;
if |b' - a'| ≥ ε, update the two trial points of the search interval, recompute the fusion quality indexes Q_E(w_1'') and Q_E(w_2''), compare the two indexes, and return to step 6c); this process is repeated until the optimal low-frequency subband fusion weight w* is found;
6d) if Q_E(w_1') is less than or equal to Q_E(w_2'), update the right endpoint of the search interval to b'' = w_2' and compare the new interval length |b'' - a| with the minimum length ε:
if |b'' - a| < ε, take w* as the midpoint of the new search interval, w* = (a + b'')/2, and stop the search;
if |b'' - a| ≥ ε, update the two trial points of the search interval and return to step 6b); this process is repeated until the optimal low-frequency subband fusion weight w* is found.
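The whole search of steps 1) to 6) collapses to a short loop. The sketch below implements the standard 0.382/0.618 golden-section scheme for a maximiser, at each round discarding the sub-interval on the far side of the worse trial point. The quality function here is only a stand-in for Q_E(w) (a squared-error score against a target built with a known weight, so the search should recover that weight), and all names are illustrative.

```python
import numpy as np

def fuse_low(w, SA, SB):
    """Low-frequency fusion: S_F = w*S_A + (1-w)*S_B."""
    return w * SA + (1.0 - w) * SB

def golden_search(quality, a=0.0, b=1.0, eps=0.01):
    """Shrink [a, b] around the maximiser of a unimodal `quality`:
    keep the better of the two interior trial points each round."""
    w1 = a + 0.382 * (b - a)
    w2 = a + b - w1                   # = a + 0.618*(b - a)
    q1, q2 = quality(w1), quality(w2)
    while b - a >= eps:
        if q1 > q2:                   # maximiser lies in [a, w2]
            b, w2, q2 = w2, w1, q1
            w1 = a + 0.382 * (b - a)
            q1 = quality(w1)
        else:                         # maximiser lies in [w1, b]
            a, w1, q1 = w1, w2, q2
            w2 = a + 0.618 * (b - a)
            q2 = quality(w2)
    return 0.5 * (a + b)              # midpoint of the final interval

# Stand-in quality index: the fused subband is compared with a target
# built with w = 0.3, so the maximum of `quality` sits at w = 0.3.
SA = np.array([[1.0, 2.0], [3.0, 4.0]])
SB = np.array([[2.0, 2.0], [2.0, 2.0]])
target = fuse_low(0.3, SA, SB)
quality = lambda w: -float(np.sum((fuse_low(w, SA, SB) - target) ** 2))
w_star = golden_search(quality)       # close to 0.3
```

With the real Q_E(w) plugged in, each call to `quality` fuses the lowpass subbands with the trial weight and scores the result, which corresponds to the search loop sketched in Fig. 2.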
Step 3: fuse the lowpass-subband coefficients of the multi-sensor source images by the low-frequency fusion formula, using the optimal low-frequency subband fusion weight w* found by the search.
The lowpass subband of a multi-sensor source image contains the coarse approximation of the image; its fusion formula is:

S_L^F(n,m) = w · S_L^A(n,m) + (1 - w) · S_L^B(n,m),   where w is the fusion weight, w ∈ [0,1]   (1)

Substituting the optimal low-frequency subband fusion weight w* found by the search into the low-frequency fusion formula (1) gives the lowpass-subband transform coefficients of the fused image: S_L^F(n,m) = w* · S_L^A(n,m) + (1 - w*) · S_L^B(n,m).
Step 4: fuse the high-frequency directional subbands at each scale of the multi-sensor source images by the high-frequency fusion formula.
The NSCT not only provides a multiscale analysis of the source images, but its basis functions also have richer directions and shapes than wavelet basis functions, so it is more effective at capturing the smooth contours and geometric structure of an image. The high-frequency directional-subband coefficients of a source image fluctuate around zero and contain the detail information of the image, such as edges, line features and region boundaries. In the two source images to be fused, transform coefficients of larger modulus carry more edge and texture information and indicate the positions of edges, so the common fusion rule is to select the larger transform coefficient for each pixel. After NSCT decomposition the high-frequency directional subbands of a source image have the same size as the source image, and each pixel of a directional subband corresponds one-to-one with the pixel at the same position in the source image. It can also be observed that, at a strong edge, all the high-frequency directional subbands of a source image at the same scale have coefficients of larger modulus, so the invention aggregates and compares the high-frequency directional-subband information at the same scale for each pixel of the source images. After NSCT decomposition, the high-frequency directional-subband information sum of a source image at scale 2^-l is defined as
D_l(n,m) = Σ_{1≤i≤k_l} |D_{l,i}(n,m)|   (2)
The fusion formula for the high-frequency directional subbands is:

D_{l,i}^F(n,m) = D_{l,i}^A(n,m) if D_l^A(n,m) ≥ D_l^B(n,m), otherwise D_{l,i}^B(n,m),   0≤l≤L-1, 1≤i≤k_l   (3)
According to the fusion formula of the high-frequency directional subbands, the high-frequency directional subbands at each scale of the multi-sensor source images are fused as follows:
If, at any pixel position (n,m) at scale 2^-l, the high-frequency directional-subband information sum of source image A is greater than or equal to that of source image B at the same position, i.e. D_l^A(n,m) ≥ D_l^B(n,m), then the high-frequency directional-subband coefficients of source image A at pixel (n,m) at scale 2^-l are chosen as the NSCT decomposition coefficients of the fused image F at the corresponding position;
If, at any pixel position (n,m) at scale 2^-l, the high-frequency directional-subband information sum of source image A is less than that of source image B at the same position, i.e. D_l^A(n,m) < D_l^B(n,m), then the high-frequency directional-subband coefficients of source image B at pixel (n,m) at scale 2^-l are chosen as the NSCT decomposition coefficients of the fused image F at the corresponding position.
This finally yields the high-frequency directional subbands of the fused image F at each scale, {D_{l,i}^F(n,m), 0≤l≤L-1, 1≤i≤k_l}.
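The case analysis above is a per-pixel selection over whole stacks of directional subbands. A minimal NumPy sketch of formula (3), assuming the k_l directional subbands of one scale are stacked along the first axis of an array:

```python
import numpy as np

def fuse_highpass(DA, DB):
    """High-frequency fusion rule (3): at each pixel, keep the directional
    coefficients of whichever source image has the larger information sum
    D_l(n,m) = sum_i |D_{l,i}(n,m)|  (formula (2))."""
    act_a = np.sum(np.abs(DA), axis=0)   # D_l^A(n, m)
    act_b = np.sum(np.abs(DB), axis=0)   # D_l^B(n, m)
    mask = act_a >= act_b                # True where A's coefficients win
    return np.where(mask[None, :, :], DA, DB)

# Toy example: k_l = 2 directional subbands over a 2x2 scale.
DA = np.array([[[3.0, 0.0], [0.0, 1.0]],
               [[2.0, 0.0], [0.0, 1.0]]])
DB = np.array([[[1.0, 4.0], [0.0, 1.0]],
               [[1.0, 2.0], [0.0, 1.0]]])
DF = fuse_highpass(DA, DB)
# At pixel (0,0) A's sum is 5 vs B's 2, so A's coefficients are kept;
# at pixel (0,1) B's sum is 6 vs A's 0, so B's coefficients are kept.
```

Note that the whole coefficient stack switches together at each pixel, which is exactly why the rule compares the per-scale information sums rather than individual subband coefficients.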
Step 5: apply the inverse NSCT to the lowpass subband and the high-frequency directional subbands at each scale of the fused image F, obtaining the fused image F.
The inverse NSCT reconstructs the image from its NSCT decomposition coefficients. The inverse NSCT of the lowpass subband S_L^F(n,m) of the fused image F and of its high-frequency directional subbands at each scale {D_{l,i}^F(n,m), 0≤l≤L-1, 1≤i≤k_l} proceeds as follows:
1) Apply the nonsubsampled directional filter bank reconstruction to the high-frequency directional subbands {D_{l,i}^F(n,m), 0≤l≤L-1, 1≤i≤k_l} in turn, obtaining the bandpass signals of levels L, L-1, ..., 1 of the NSCT decomposition of the fused image F;
2) Apply the nonsubsampled pyramid filter bank reconstruction to the lowpass subband S_L^F(n,m) and the level-L bandpass signal of F, obtaining the lowpass signal of the level-(L-1) NSCT decomposition of F;
3) Apply the nonsubsampled pyramid filter bank reconstruction to the level-N lowpass signal and the level-N bandpass signal of the NSCT decomposition of F, obtaining the lowpass signal of the level-(N-1) NSCT decomposition of F, successively for N = L-1, L-2, ..., 1;
the exactly reconstructed fused image F is finally obtained as the lowpass signal of the level-0 NSCT decomposition of F.
The validity of the method of the invention is verified below by simulation experiments.
Simulation conditions: the source images used are sensor images of size 512×512 containing several scenes, such as factories, cities and natural scenes.
Simulation content: 1. the widely used multiscale-analysis image fusion technique in the image processing field, namely the wavelet-transform image fusion method WTF, is compared with the method NSCTF of the invention; 2. the image fusion method CTF based on the contourlet transform, which lacks shift invariance, is compared with the method NSCTF of the invention; 3. the fusion method SWTF based on the stationary wavelet transform is compared with the method NSCTF of the invention.
In the experiments a four-level decomposition is applied to all images. The wavelet basis used in WTF and SWTF is 'db8'; CTF and NSCTF both use the classical '9-7' pyramid decomposition and the 'c-d' directional filter bank, with 16, 8, 4 and 4 directional subbands from the finest scale to the coarsest.
The fusion rule used in the experiments is: the approximation coefficients at the smallest scale are averaged, and among the other decomposition coefficients the one with the larger absolute value is chosen as the fused-image coefficient. The optimal low-frequency fusion weights found by the Fibonacci method of the invention for image groups im1, im2 and im3 are 0.0163, 0.0933 and 0.2392 respectively, which differ considerably from the commonly chosen w = 0.5.
Fused-image evaluation: the objective evaluation of a fused image should agree with subjective evaluation; that is, the statistical features of the image should match the visual perception of the human eye. Evaluating the fusion of multi-sensor images should take into account both the richness of the image information and the preservation of the spatial edge detail of the source images. The evaluation indexes adopted by the invention are:
(1) The information entropy, which is the average amount of information contained in the image. It is defined as:
H = -Σ_{j=0}^{J-1} p_j log p_j   (4)
where H denotes the entropy of the image, J denotes the total number of grey levels of the image, and p_j denotes the ratio of the number of pixels N_j with grey value j to the total number of pixels N of the image, i.e. p_j = N_j / N. The information entropy is an important index of how rich the image information is, and comparing information entropies allows the detail expressiveness of images to be compared. The size of the entropy reflects the amount of information the image carries: the larger the entropy of the fused image, the more information it carries.
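Formula (4) is straightforward to compute from a grey-level histogram. A minimal sketch, assuming 8-bit images and a base-2 logarithm (the patent does not fix the base of the logarithm):

```python
import numpy as np

def image_entropy(img, levels=256):
    """Information entropy (4): H = -sum_j p_j log2 p_j, where p_j is the
    fraction of pixels with grey value j."""
    hist = np.bincount(img.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]                 # empty bins contribute 0 (0*log 0 := 0)
    return float(-np.sum(p * np.log2(p)))

# A 2x2 image with four distinct grey values reaches the maximum
# entropy log2(4) = 2; a constant image has entropy 0.
img = np.array([[0, 1], [2, 3]], dtype=np.uint8)
flat = np.zeros((4, 4), dtype=np.uint8)
```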
(2) The mutual information MI. Mutual information is a basic concept of information theory that measures the correlation between two variables; it can therefore be used to measure the degree of correlation between the fused image and the source images, and hence to evaluate the fusion effect. The larger the MI value, the more information the fused image obtains from the source images and the better the fusion effect.
The mutual information between the fused image and source image A is expressed as:
I_FA = Σ_{r=0}^{J-1} Σ_{t=0}^{J-1} P_FA(r,t) log2( P_FA(r,t) / (P_F(r) P_A(t)) )   (5)
where P_A, P_B and P_F are the probability densities of images A, B and F, and P_FA(r,t) and P_FB(r,t) are the joint probability densities of the fused image with source images A and B respectively; I_FB is defined analogously.
In the invention, the sum of the information that the fused image contains about the source images is taken as the total mutual information; dividing this total mutual information by the sum of the information entropies of the source images normalizes it to [0,1], that is:
MI = (I_FA + I_FB) / (H_A + H_B)   (6)
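Formulas (5) and (6) can be estimated from grey-level histograms. In the sketch below the probability densities are replaced by normalised histograms (an implementation choice, not specified in the patent), and I(F;A) is computed as H(F) + H(A) - H(F,A), which is algebraically equal to the double sum in (5):

```python
import numpy as np

def entropy(p):
    """Entropy of a probability vector, ignoring zero bins."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def mutual_info(x, y, levels=256):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) from the joint grey-level histogram,
    equivalent to formula (5)."""
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=levels,
                                 range=[[0, levels], [0, levels]])
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    return entropy(px) + entropy(py) - entropy(pxy.ravel())

def normalised_mi(F, A, B, levels=256):
    """Normalised mutual information (6): MI = (I_FA + I_FB)/(H_A + H_B)."""
    hA = entropy(np.bincount(A.ravel(), minlength=levels) / A.size)
    hB = entropy(np.bincount(B.ravel(), minlength=levels) / B.size)
    return (mutual_info(F, A, levels) + mutual_info(F, B, levels)) / (hA + hB)

# Sanity check: if the fused image equals both sources, I_FA = H_A and
# I_FB = H_B, so the normalised MI is exactly 1.
A = np.array([[0, 1], [0, 1]], dtype=np.uint8)
score = normalised_mi(A, A, A)
```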
(3) The edge fusion quality index Q_E. Q_E is an objective index proposed in recent years for evaluating fused-image quality; it reflects how well the fused image preserves edges and how strong the ringing around the edges is. It is defined as:
Q_E(A,B,F) = Q_w(A,B,F)^(1-α) · Q_w(A',B',F')^α   (7)
where Q_E denotes the edge fusion quality index and Q_w is the weighted fusion quality index. A', B' and F' are the edge images of source images A and B and of the fused image F respectively, and the parameter α ∈ [0,1] reflects the importance of the edge image relative to the original image: the closer α is to 1, the more important the edge image. The larger the edge fusion quality index, the higher the quality of the fused image.
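A rough sketch of index (7). Several details here are assumptions: the weighted index is taken to be a variance-weighted combination of the Wang-Bovik universal quality index, computed globally instead of over sliding windows as in the published index, and the edge images A', B', F' are obtained from simple gradient magnitudes. The sketch only illustrates the structure of (7), not a reference implementation.

```python
import numpy as np

def uqi(x, y):
    """Wang-Bovik universal image quality index, computed globally here
    (the published index averages it over sliding windows)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return 4.0 * cxy * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))

def q_weighted(A, B, F):
    """Q_w: similarity of F to A and to B, weighted by source saliency;
    global variance stands in for local saliency (a simplification)."""
    lam = A.var() / (A.var() + B.var())
    return lam * uqi(A, F) + (1.0 - lam) * uqi(B, F)

def edge_map(x):
    """Edge image via gradient magnitude (an assumed choice of A', B', F')."""
    gx = np.abs(np.diff(x, axis=1, append=x[:, -1:]))
    gy = np.abs(np.diff(x, axis=0, append=x[-1:, :]))
    return np.hypot(gx, gy)

def q_e(A, B, F, alpha=0.5):
    """Edge fusion quality index (7):
    Q_E = Q_w(A,B,F)^(1-alpha) * Q_w(A',B',F')^alpha."""
    return (q_weighted(A, B, F) ** (1.0 - alpha)
            * q_weighted(edge_map(A), edge_map(B), edge_map(F)) ** alpha)

# Sanity check: fusing an image with itself gives a perfect score of 1.
rng = np.random.default_rng(1)
A = rng.random((8, 8))
score = q_e(A, A, A)
```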
Simulation results:
(1) The image quality metrics measured in the simulations described above are listed in Table 1.
Table 1 Comparison of experimental results
Note: "--" indicates no data available.
The experimental data in Table 1 show that the proposed method NSCTF outperforms the other fusion methods on information entropy, mutual information MI and edge fusion quality Q_E. For image group im1, for example, the information entropy of source image A is 5.6150 and that of source image B is 6.8066; the information entropies of the fused images obtained with the wavelet-transform fusion method WTF, the stationary-wavelet-transform fusion method SWTF, the contourlet-transform fusion method CTF and the proposed method NSCTF are 6.1895, 6.1585, 6.1995 and 6.7962 respectively. The mutual information MI and edge fusion quality Q_E obtained by the proposed method are 0.2368 and 0.3859, both exceeding the other methods.
(2) Fig. 3 shows the fusion results of the present invention and the existing methods on sensor image group im1. Fig. 3(a) and Fig. 3(b) are the source images A and B of sensor image group im1; Fig. 3(c) is the fused image obtained with the wavelet-transform fusion method WTF; Fig. 3(d) is the fused image obtained with the stationary-wavelet-transform fusion method SWTF; Fig. 3(e) is the fused image obtained with the contourlet-transform fusion method CTF; Fig. 3(f) is the fused image obtained with the proposed method NSCTF.
(3) Fig. 4 shows the fusion results of the present invention and the existing methods on sensor image group im2. Fig. 4(a) and Fig. 4(b) are the source images A and B of sensor image group im2; Fig. 4(c) is the fused image obtained with the wavelet-transform fusion method WTF; Fig. 4(d) is the fused image obtained with the stationary-wavelet-transform fusion method SWTF; Fig. 4(e) is the fused image obtained with the contourlet-transform fusion method CTF; Fig. 4(f) is the fused image obtained with the proposed method NSCTF.
(4) Fig. 5 shows the fusion results of the present invention and the existing methods on sensor image group im3. Fig. 5(a) and Fig. 5(b) are the source images A and B of sensor image group im3; Fig. 5(c) is the fused image obtained with the wavelet-transform fusion method WTF; Fig. 5(d) is the fused image obtained with the stationary-wavelet-transform fusion method SWTF; Fig. 5(e) is the fused image obtained with the contourlet-transform fusion method CTF; Fig. 5(f) is the fused image obtained with the proposed method NSCTF.
As can be seen from Fig. 3, Fig. 4 and Fig. 5, near the circular outline of the paddy field in Fig. 3 and near the water regions in Fig. 4 and Fig. 5, the fused images obtained with the wavelet-transform fusion method WTF (Fig. 3(c), Fig. 4(c) and Fig. 5(c)) and with the contourlet-transform fusion method CTF (Fig. 3(e), Fig. 4(e) and Fig. 5(e)) exhibit varying degrees of ringing distortion, whereas the fused images obtained with the SWTF method (Fig. 3(d), Fig. 4(d), Fig. 5(d)) and with the proposed NSCTF method (Fig. 3(f), Fig. 4(f), Fig. 5(f)) appear smooth and clear in these regions. This is because both the stationary wavelet transform and the nonsubsampled contourlet transform used in the present invention are shift-invariant transforms, which avoids introducing distortion into the fused image.
Compared with existing image fusion methods, the present invention is superior both in objective evaluation metrics and in visual quality. It effectively avoids the image distortion caused by the lack of shift invariance of some transforms, effectively captures the rich directional and detail-texture information in multi-sensor images, and yields a fused image with a clear and detailed target scene. It is therefore an effective and feasible fusion method.

Claims (4)

1. A multi-sensor image adaptive fusion method based on the nonsubsampled contourlet, comprising the following steps:
(1) Input the multi-sensor source images A and B and apply an L-level nonsubsampled contourlet transform (NSCT) decomposition to each, obtaining the low-frequency subbands S_L^A(n,m) and S_L^B(n,m) and the high-frequency directional subbands at each scale, {D_{l,i}^A(n,m), 0 ≤ l ≤ L-1, 1 ≤ i ≤ k_l} and {D_{l,i}^B(n,m), 0 ≤ l ≤ L-1, 1 ≤ i ≤ k_l}, where k_l denotes the number of high-frequency directional subbands at scale 2^{-l}, D_{l,i}^A(n,m) denotes the i-th directional subband of source image A at scale 2^{-l}, D_{l,i}^B(n,m) denotes the i-th directional subband of source image B at scale 2^{-l}, and L is 3 to 5;
(2) Apply the Fibonacci-method adaptive search to the low-frequency subbands to select the optimal low-frequency fusion weight w*, and fuse the low-frequency subband coefficients of the multi-sensor source images with the low-frequency fusion formula: S_L^F(n,m) = w* · S_L^A(n,m) + (1 - w*) · S_L^B(n,m);
(3) Fuse the high-frequency directional subbands at each scale with the high-frequency fusion formula:

D_{l,i}^F(n,m) = D_{l,i}^A(n,m) if D_l^A(n,m) ≥ D_l^B(n,m), otherwise D_{l,i}^B(n,m),  for 0 ≤ l ≤ L-1, 1 ≤ i ≤ k_l,

where D_l^A(n,m) = Σ_{1≤i≤k_l} |D_{l,i}^A(n,m)| is the high-frequency directional subband information sum of source image A at scale 2^{-l}, and D_l^B(n,m) = Σ_{1≤i≤k_l} |D_{l,i}^B(n,m)| is the high-frequency directional subband information sum of source image B at scale 2^{-l};
(4) Apply the inverse nonsubsampled contourlet transform (NSCT) to the fused low-frequency subband S_L^F(n,m) and the fused high-frequency directional subbands at each scale, {D_{l,i}^F(n,m), 0 ≤ l ≤ L-1, 1 ≤ i ≤ k_l}, to obtain the fused image F.
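Steps (2) and (3) above, the weighted low-frequency fusion and the select-by-activity high-frequency fusion, can be sketched in Python as follows. The NSCT forward and inverse transforms are assumed to be provided elsewhere, and the container layout (lists of per-scale subband lists) is our own convention, not something the patent specifies:

```python
import numpy as np

def fuse_subbands(lowA, lowB, highA, highB, w):
    """Coefficient-fusion rules of claim 1, steps (2) and (3).

    lowA, lowB   : low-frequency subbands S_L^A, S_L^B (2-D arrays)
    highA, highB : lists over scales l, each a list of the k_l directional
                   subbands D_{l,i} (2-D arrays)
    w            : low-frequency fusion weight w*
    """
    # Step (2): weighted average of the low-frequency subbands.
    low_f = w * lowA + (1 - w) * lowB
    # Step (3): at each scale, compare the activity D_l = sum_i |D_{l,i}|
    # and take every directional coefficient from the more active source.
    high_f = []
    for subA, subB in zip(highA, highB):
        dA = sum(np.abs(d) for d in subA)
        dB = sum(np.abs(d) for d in subB)
        mask = dA >= dB              # True where A's activity dominates
        high_f.append([np.where(mask, da, db) for da, db in zip(subA, subB)])
    return low_f, high_f
```

The fused subbands would then be passed to the inverse NSCT of step (4) to reconstruct the fused image F.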
2. The multi-sensor image adaptive fusion method according to claim 1, wherein the search in step (2) for the optimal low-frequency subband fusion weight w* of the low-frequency subbands by the Fibonacci method proceeds as follows:
1) Take the fusion quality index Q_E(w) of the low-frequency subband coefficients of the sensor images as the search objective function;
2) Set the initial value of the left endpoint a of the search interval to 0 and the initial value of the right endpoint b to 1, so that the initial search interval [a, b] is [0, 1]; set the minimum search-interval length ε = 0.01; and compute, in the initial search interval [a, b], the first trial point of the low-frequency subband fusion weight, w_1 = a + 0.382·(b - a), and the second trial point, w_2 = a + b - w_1, with w_1, w_2 ∈ [a, b];
3) From the computed first trial point of the low-frequency subband fusion weight, compute the fusion quality index Q_E(w_1) at this first trial point;
4) From the computed second trial point of the low-frequency subband fusion weight, compute the fusion quality index Q_E(w_2) at this second trial point;
5) Compare the computed values of Q_E(w_1) and Q_E(w_2). If Q_E(w_1) is greater than Q_E(w_2), update the left endpoint of the search interval to a' = w_1, compare the new search-interval length |b - a'| with the minimum length ε of the initial search interval, and determine the optimal low-frequency subband fusion weight w*:
First case: if |b - a'| < ε, then w* is taken as the midpoint of the new search interval, w* = (a' + b)/2;
Second case: if |b - a'| ≥ ε, update the two trial points of the search interval, recompute the fusion quality indices Q_E(w_1') and Q_E(w_2'), and determine the optimal low-frequency subband fusion weight w* by again comparing these two fusion quality indices;
6) Compare the computed values of Q_E(w_1) and Q_E(w_2). If Q_E(w_1) is less than or equal to Q_E(w_2), update the right endpoint of the search interval to b' = w_2, compare the new search-interval length |b' - a| with the minimum length ε of the initial search interval, and determine the optimal low-frequency subband fusion weight w*:
First case: if |b' - a| < ε, then w* is taken as the midpoint of the new search interval, w* = (a + b')/2;
Second case: if |b' - a| ≥ ε, update the two trial points of the search interval, recompute the fusion quality indices Q_E(w_1') and Q_E(w_2'), and determine the optimal low-frequency subband fusion weight w* by again comparing these two fusion quality indices.
3. The multi-sensor image adaptive fusion method according to claim 2, wherein the second case of step 5) proceeds as follows:
5a) Update the first trial point to w_1' = w_2 and the second trial point to w_2' = a' + 0.618·(b - a');
5b) Recompute the fusion quality indices Q_E(w_1') and Q_E(w_2') and compare their magnitudes;
5c) If Q_E(w_1') is greater than Q_E(w_2'), update the left endpoint of the search interval to a'' = w_1' and compare the new search-interval length |b - a''| with the minimum length ε of the initial search interval:
If |b - a''| < ε, then w* is taken as the midpoint of the new search interval, w* = (a'' + b)/2, and the search stops;
If |b - a''| ≥ ε, update the two trial points of the search interval to w_1'' and w_2'', recompute the fusion quality indices Q_E(w_1'') and Q_E(w_2''), compare these two fusion quality indices, and return to step 5c); this process loops until the optimal low-frequency subband fusion weight w* is found;
5d) If Q_E(w_1') is less than or equal to Q_E(w_2'), update the right endpoint of the search interval to b' = w_2' and compare the new search-interval length |b' - a'| with the minimum length ε of the initial search interval:
If |b' - a'| < ε, then w* is taken as the midpoint of the new search interval, w* = (a' + b')/2, and the search stops;
If |b' - a'| ≥ ε, update the two trial points of the search interval and return to step 5b); this process loops until the optimal low-frequency subband fusion weight w* is found.
4. The multi-sensor image adaptive fusion method according to claim 2, wherein the second case of step 6) proceeds as follows:
6a) Update the first trial point to w_1' = a + 0.382·(b' - a) and the second trial point to w_2' = w_1;
6b) Recompute the fusion quality indices Q_E(w_1') and Q_E(w_2') and compare their magnitudes;
6c) If Q_E(w_1') is greater than Q_E(w_2'), update the left endpoint of the search interval to a' = w_1' and compare the new search-interval length |b' - a'| with the minimum length ε of the initial search interval:
If |b' - a'| < ε, then w* is taken as the midpoint of the new search interval, w* = (a' + b')/2, and the search stops;
If |b' - a'| ≥ ε, update the two trial points of the search interval, recompute the fusion quality indices Q_E(w_1'') and Q_E(w_2''), compare these two fusion quality indices, and return to step 6c); this process loops until the optimal low-frequency subband fusion weight w* is found;
6d) If Q_E(w_1') is less than or equal to Q_E(w_2'), update the right endpoint of the search interval to b'' = w_2' and compare the new search-interval length |b'' - a| with the minimum length ε of the initial search interval:
If |b'' - a| < ε, then w* is taken as the midpoint of the new search interval, w* = (a + b'')/2, and the search stops;
If |b'' - a| ≥ ε, update the two trial points of the search interval and return to step 6b); this process loops until the optimal low-frequency subband fusion weight w* is found.
CN2008100182377A 2008-05-16 2008-05-16 Method for self-adaption amalgamation of multi-sensor image based on non-lower sampling profile wave Expired - Fee Related CN101303764B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2008100182377A CN101303764B (en) 2008-05-16 2008-05-16 Method for self-adaption amalgamation of multi-sensor image based on non-lower sampling profile wave

Publications (2)

Publication Number Publication Date
CN101303764A true CN101303764A (en) 2008-11-12
CN101303764B CN101303764B (en) 2010-08-04

Family

ID=40113657

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2008100182377A Expired - Fee Related CN101303764B (en) 2008-05-16 2008-05-16 Method for self-adaption amalgamation of multi-sensor image based on non-lower sampling profile wave

Country Status (1)

Country Link
CN (1) CN101303764B (en)

Cited By (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101667289B (en) * 2008-11-19 2011-08-24 西安电子科技大学 Retinal image segmentation method based on NSCT feature extraction and supervised classification
CN101504766B (en) * 2009-03-25 2011-09-07 湖南大学 Image amalgamation method based on mixed multi-resolution decomposition
CN101566688B (en) * 2009-06-05 2012-02-08 西安电子科技大学 Method for reducing speckle noises of SAR image based on neighborhood directivity information
CN101673398B (en) * 2009-10-16 2011-08-24 西安电子科技大学 Method for splitting images based on clustering of immunity sparse spectrums
CN102298768A (en) * 2010-06-24 2011-12-28 江南大学 High-resolution image reconstruction method based on sparse samples
CN102298768B (en) * 2010-06-24 2013-10-02 江南大学 High-resolution image reconstruction method based on sparse samples
CN101930603A (en) * 2010-08-06 2010-12-29 华南理工大学 Method for fusing image data of medium-high speed sensor network
CN102034233A (en) * 2010-10-21 2011-04-27 苏州科技学院 Method for detecting SAR (stop and reveres) image wave group parameters based on contourlet conversion
CN102034233B (en) * 2010-10-21 2012-07-18 苏州科技学院 Method for detecting SAR (stop and reveres) image wave group parameters based on contourlet conversion
CN102129605A (en) * 2011-04-06 2011-07-20 河南工业大学 Colour fundus image de-noising novel method based on vector median filtering and non-subsampled contourlet transform (VMF-NSCT)
CN102129605B (en) * 2011-04-06 2012-09-05 河南工业大学 Colour fundus image de-noising novel method based on vector median filtering and non-subsampled contourlet transform (VMF-NSCT)
CN102208103A (en) * 2011-04-08 2011-10-05 东南大学 Method of image rapid fusion and evaluation
CN102222322A (en) * 2011-06-02 2011-10-19 西安电子科技大学 Multiscale non-local mean-based method for inhibiting infrared image backgrounds
CN102236891A (en) * 2011-06-30 2011-11-09 北京航空航天大学 Multispectral fusion method based on contourlet transform and free search differential evolution (CT-FSDE)
CN102521818B (en) * 2011-12-05 2013-08-14 西北工业大学 Fusion method of SAR (Synthetic Aperture Radar) images and visible light images on the basis of NSCT (Non Subsampled Contourlet Transform)
CN102521818A (en) * 2011-12-05 2012-06-27 西北工业大学 Fusion method of SAR (Synthetic Aperture Radar) images and visible light images on the basis of NSCT (Non Subsampled Contourlet Transform)
CN102800070A (en) * 2012-06-19 2012-11-28 南京大学 Multi-modality image fusion method based on region and human eye contrast sensitivity characteristic
CN102800070B (en) * 2012-06-19 2014-09-03 南京大学 Multi-modality image fusion method based on region and human eye contrast sensitivity characteristic
CN102800079B (en) * 2012-08-03 2015-01-28 西安电子科技大学 Multimode image fusion method based on SCDPT transformation and amplitude-phase combination thereof
CN102800079A (en) * 2012-08-03 2012-11-28 西安电子科技大学 Multimode image fusion method based on SCDPT transformation and amplitude-phase combination thereof
CN102968781A (en) * 2012-12-11 2013-03-13 西北工业大学 Image fusion method based on NSCT (Non Subsampled Contourlet Transform) and sparse representation
CN102968781B (en) * 2012-12-11 2015-01-28 西北工业大学 Image fusion method based on NSCT (Non Subsampled Contourlet Transform) and sparse representation
CN103247052A (en) * 2013-05-16 2013-08-14 东北林业大学 Image segmentation algorithm for local region characteristics through nonsubsampled contourlet transform
CN103400360A (en) * 2013-08-03 2013-11-20 浙江农林大学 Multi-source image fusing method based on Wedgelet and NSCT (Non Subsampled Contourlet Transform)
CN104574268B (en) * 2014-12-30 2017-06-16 长春大学 Cloud and mist method is gone based on non-down sampling contourlet transform and Non-negative Matrix Factorization
CN104574268A (en) * 2014-12-30 2015-04-29 长春大学 Cloud and fog removing method based on non-subsample contourlet transform and non-negative matrix factorization
CN105281707A (en) * 2015-09-09 2016-01-27 哈尔滨工程大学 Dynamic reconstructible filter set low-complexity realization method
CN105281707B (en) * 2015-09-09 2018-12-25 哈尔滨工程大学 A kind of implementation method of dynamic reconfigurable filter group
CN106296655A (en) * 2016-07-27 2017-01-04 西安电子科技大学 Based on adaptive weight and the SAR image change detection of high frequency threshold value
CN106296655B (en) * 2016-07-27 2019-05-21 西安电子科技大学 SAR image change detection based on adaptive weight and high frequency threshold value
CN106897999A (en) * 2017-02-27 2017-06-27 江南大学 Apple image fusion method based on Scale invariant features transform
CN107220628A (en) * 2017-06-06 2017-09-29 北京环境特性研究所 The method of infrared jamming source detection
CN107220628B (en) * 2017-06-06 2020-04-07 北京环境特性研究所 Method for detecting infrared interference source
CN109377447A (en) * 2018-09-18 2019-02-22 湖北工业大学 A kind of contourlet transformation image interfusion method based on cuckoo searching algorithm
CN109377447B (en) * 2018-09-18 2022-11-15 湖北工业大学 Contourlet transformation image fusion method based on rhododendron search algorithm
CN109523513A (en) * 2018-10-18 2019-03-26 天津大学 Based on the sparse stereo image quality evaluation method for rebuilding color fusion image
CN109523513B (en) * 2018-10-18 2023-08-25 天津大学 Stereoscopic image quality evaluation method based on sparse reconstruction color fusion image

Also Published As

Publication number Publication date
CN101303764B (en) 2010-08-04

Similar Documents

Publication Publication Date Title
CN101303764B (en) Method for self-adaption amalgamation of multi-sensor image based on non-lower sampling profile wave
CN101482617B (en) Synthetic aperture radar image denoising method based on non-down sampling profile wave
CN112488924B (en) Image super-resolution model training method, image super-resolution model reconstruction method and image super-resolution model reconstruction device
CN105913393A (en) Self-adaptive wavelet threshold image de-noising algorithm and device
CN102572499B (en) Based on the non-reference picture quality appraisement method of wavelet transformation multi-resolution prediction
CN103295204B (en) A kind of image self-adapting enhancement method based on non-down sampling contourlet transform
CN102142136A (en) Neural network based sonar image super-resolution reconstruction method
CN114549925A (en) Sea wave effective wave height time sequence prediction method based on deep learning
CN104657951A (en) Multiplicative noise removal method for image
CN103514600B (en) A kind of infrared target fast robust tracking based on rarefaction representation
CN115359771A (en) Underwater acoustic signal noise reduction method, system, equipment and storage medium
CN105913402A (en) Multi-remote sensing image fusion denoising method based on DS evidence theory
CN106971392A (en) A kind of combination DT CWT and MRF method for detecting change of remote sensing image and device
CN102509268B (en) Immune-clonal-selection-based nonsubsampled contourlet domain image denoising method
CN104361596A (en) Reduced reference image quality evaluation method based on Contourlet transformation and Frobenius norm
Zhao et al. Multifractal theory with its applications in data management
CN107578064B (en) Sea surface oil spill detection method based on superpixel and utilizing polarization similarity parameters
CN104463325A (en) Noise suppression method for polar ice-penetrating radar original data
Wang et al. Retracted: Complex image denoising framework with CNN‐wavelet under concurrency scenarios for informatics systems
CN107085832A (en) A kind of Fast implementation of the non local average denoising of digital picture
CN109300086B (en) Image blocking method based on definition
CN112731327A (en) HRRP radar target identification method based on CN-LSGAN, STFT and CNN
CN112907456A (en) Deep neural network image denoising method based on global smooth constraint prior model
CN117251737B (en) Lightning waveform processing model training method, classification method, device and electronic equipment
Esmaeilzehi et al. DMML: Deep Multi-Prior and Multi-Discriminator Learning for Underwater Image Enhancement

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210425

Address after: 710076 room 104, block B2, software new town phase II, tianguba Road, Yuhua Street office, high tech Zone, Xi'an City, Shaanxi Province

Patentee after: Discovery Turing Technology (Xi'an) Co.,Ltd.

Address before: 710071 No. 2 Taibai Road, Shaanxi, Xi'an

Patentee before: XIDIAN University

TR01 Transfer of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20100804

CF01 Termination of patent right due to non-payment of annual fee