CN102436666A - Object and scene fusion method based on IHS (Intensity, Hue, Saturation) transform - Google Patents


Info

Publication number
CN102436666A
Authority
CN
China
Prior art keywords
fusion
ihs
image
scene
brightness
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2011102540528A
Other languages
Chinese (zh)
Inventor
丁友东
魏小成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Shanghai for Science and Technology
Original Assignee
University of Shanghai for Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Shanghai for Science and Technology filed Critical University of Shanghai for Science and Technology
Priority to CN2011102540528A priority Critical patent/CN102436666A/en
Publication of CN102436666A publication Critical patent/CN102436666A/en
Pending legal-status Critical Current

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses an object and scene fusion method based on the IHS (Intensity, Hue, Saturation) transform. The method comprises the following steps: first using the IHS transform and luminance fusion to form an illumination mask of the image; and then achieving the fusion of the object and the scene by restoring the object details in the illumination mask. The invention provides three schemes, using the standard IHS transform technique, the SFIM (Smoothing Filter-based Intensity Modulation) technique and the wavelet technique respectively as the luminance fusion tool. In this method the images to be fused require no registration, few fusion conditions are needed, and the method is easy to realize, since the fusion of object and scene is achieved by considering luminance fusion alone. The method fuses quickly and has great potential in video processing, and it can reduce to the greatest extent the color distortion produced in the fusion process; moreover, since different luminance fusion algorithms can be plugged in to form different fusion schemes, the method has wide uses.

Description

Object and scene fusion method based on the IHS transform
Technical field
The present invention relates to an object and scene fusion method based on the IHS (Intensity-Hue-Saturation color space, i.e. luminance-hue-saturation color space) transform, belongs to the field of visualization technology, and is a non-real-time visual simulation technique.
Background art
Object and scene fusion is one application direction of image fusion. It means that after a destination object of interest is segmented out of its original scene, it is composited into another scene through superposition, combination and processing; the newly formed object-scene image must look true and natural, thereby creating a new image effect. Object-scene fusion has very wide uses in the image editing field, particularly in film and television production: many shots cannot be obtained by shooting on the spot, such as embedding the behavior of an actor in reality into an illusory world, and such shots can be realized by object-scene fusion technology.
The key of object and scene fusion technology is that the fusion result be lifelike; that is, the destination object should appear illumination-consistent in the new scene, with natural transitions and without obvious artificial splicing traces.
The simplest method of object and scene fusion is the direct copying method: the object is segmented out of the original scene by a segmentation method and pasted onto the appropriate location of the new scene image. It does not change the color of the destination object, at most adjusting its size or direction to suit its position in the new scene. When the illumination conditions of the object and the new background contrast strongly, the fused image obtained is very stiff and completely lacks the sense of reality.
The Poisson fusion method proposed by Patrick Pérez et al. in "Poisson image editing" (Proceedings of the SIGGRAPH Conference, 2003, 313-318) fuses the destination object with the new scene by solving the Poisson equation with a basic interpolation algorithm. When the gradient field of the new scene is relatively simple this method works well, but when the textures of the object and the scene differ greatly the effect is poor and the color of the object becomes distorted.
Jue Wang et al. proposed an Alpha fusion method in "Simultaneous Matting and Compositing" (Proceedings of the CVPR Conference, 2007, 1-8): it first computes the mask of the destination object image, then extracts the destination object and attaches it to the new scene image. Such methods perform poorly when the illumination of the destination object and of the new scene is inconsistent.
Building on the Alpha fusion method and the Poisson fusion method, Tao Chen et al. proposed in "Sketch2Photo: Internet Image Montage" (ACM Transactions on Graphics, Volume 28, Issue 5, 2009, 124:1-124:10) a hybrid fusion method that combines the advantages of the two methods and obtains better results.
At present, the method with the highest accuracy and the best fusion results is the 3D relighting method. It first reconstructs the three-dimensional structure of the object and the new scene from many images, then computes in detail the actual lighting of each part of the object under the three-dimensional situation. However, reconstructing the three-dimensional structure needs several image materials of the destination object and of the new scene, so when few usable images are available — for example, only one destination object image and one new scene image — this method is no longer applicable. In addition, the 3D relighting method needs many conditions and is very complicated to implement, which is unrealistic and unnecessary for ordinary fusions with modest requirements.
The image fusion method based on the IHS transform is a basic method currently used in the remote sensing field and in multi-focus image fusion. It was proposed by Haydn et al. in 1982 and is one of the basic methods of remote sensing image fusion. An important application of it is fusing a PAN image (Panchromatic image) with an MS image (Multispectral image): the spatial information of the PAN image can be fused into the MS image under the premise of not destroying the spectral information of the MS image. The main steps by which the standard IHS transform technique realizes the fusion of a PAN image and an MS image are as follows:
1. convert the MS image from the RGB (Red-Green-Blue) color space to the IHS color space;
2. replace the luminance component I obtained above with the PAN image;
3. perform the IHS inverse transform to obtain the fused image in the RGB color space.
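The three steps above can be sketched per pixel as follows. This is a minimal illustration, not the patent's implementation: Python's standard-library HSV conversion stands in for the IHS transform, and the function name `replace_luminance_fusion` is ours.

```python
import colorsys

def replace_luminance_fusion(ms_pixels, pan_pixels):
    """Steps 1-3: move each MS pixel into an (H, S, I)-like space, swap in
    the PAN intensity, and transform back (HSV stands in for IHS)."""
    fused = []
    for (r, g, b), pan in zip(ms_pixels, pan_pixels):
        h, s, _ = colorsys.rgb_to_hsv(r, g, b)          # step 1
        fused.append(colorsys.hsv_to_rgb(h, s, pan))    # steps 2-3
    return fused

# a gray MS pixel keeps zero saturation but takes the PAN brightness
print(replace_luminance_fusion([(0.5, 0.5, 0.5)], [0.8]))  # → [(0.8, 0.8, 0.8)]
```

Because hue and saturation are carried over unchanged, only the luminance channel is touched — the property the standard scheme relies on.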
In 1987, Gillespie et al. introduced the notion of "intensity modulation" and proposed the Brovey transform in "Color enhancement of highly correlated images-II. Channel ratio and chromaticity transformation techniques" (Remote Sensing of Environment, Volume 22, Issue 2, 1987, 343-365). The Brovey transform realizes fusion by modulating the I component (rather than directly replacing it), and it is simpler and more effective than Haydn's method. But when the spectral range of the luminance-replacing (or modulating) image differs from the range covered by the bands of the fused image, both methods cause color distortion in the fused image; and Krista Amolins et al. point out in "Wavelet based image fusion techniques - An introduction, review and comparison" (Journal of Photogrammetry and Remote Sensing, Volume 62, Issue 4, 2007, 249-263) that because the images to be fused are usually taken by different sensor systems at different times or in different seasons, this color distortion can neither be controlled nor quantified.
For this reason, in order both to keep the spectral characteristics of the original image and to reach a good fusion effect, remote sensing image fusion methods based on the IHS transform have developed in two directions. One is to use preprocessing techniques such as filtering and sharpening on the images to be fused, while at the same time improving the intensity modulation method; methods based on this idea include the SFIM (Smoothing Filter-based Intensity Modulation) technique and the Pan-Sharpening technique. The other is the direction of multiresolution analysis, for example using the wavelet transform. Experiments show that in most remote sensing image fusion, wavelet-based schemes obtain better fusion effects than the general standard schemes without wavelets, especially in reducing color distortion. But their complexity is high, the processing time is long, and the registration accuracy requirements are very strict, so their application in fields demanding fast interactive processing, such as remote sensing and real-time visualization, is not as widespread as the general standard schemes.
Summary of the invention
The objective of the invention is to provide an object and scene fusion method based on the IHS transform, addressing the defects of the prior art. The method first uses the IHS transform and luminance fusion to form an illumination mask of the fused image, and then achieves the fusion of object and scene by restoring the object details in the illumination mask. The method not only fuses well but is also simpler and quicker than existing object and scene fusion methods.
To achieve the above objective, the present invention adopts the following technical scheme:
An object and scene fusion method based on the IHS transform, whose input is two images: the original scene image containing the destination object, and the new scene image into which the destination object is to be merged. Its operation steps are as follows:
1. segmentation and superposition: first segment the destination object out of the original scene image, then paste it onto the appropriate location of the new scene image Image1, obtaining image Image2.
2. IHS transform: perform the IHS transform on Image1 and Image2 respectively, obtaining their images Image1_hsi and Image2_hsi in the IHS color space, and take their luminance components I1 and I2.
3. luminance fusion: fuse the two luminance components I1 and I2 to obtain the fused luminance I1'.
4. IHS inverse transform: replace the luminance component of Image1_hsi with I1', then perform the IHS inverse transform on Image1_hsi to obtain the illumination mask Maskimage of the fused image.
5. detail restoration: restore the details of the destination object on the illumination mask image Maskimage, thereby obtaining the fused image.
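The five steps can be sketched end to end as follows — a minimal per-pixel illustration only: HSV from Python's standard library stands in for the IHS transform, the luminance fusion is the simple replacement variant, and the name `fuse_object_into_scene` and the coefficient `beta` are ours, not the patent's.

```python
import colorsys

def fuse_object_into_scene(image1, image2, beta=0.3):
    """Steps 2-5 on per-pixel RGB lists (values in [0, 1]). Step 1
    (segmentation/pasting) is assumed already done, i.e. image2 is
    image1 with the object pasted in; HSV stands in for IHS."""
    fused = []
    for (r1, g1, b1), px2 in zip(image1, image2):
        h1, s1, _ = colorsys.rgb_to_hsv(r1, g1, b1)   # step 2: forward transform
        _, _, i2 = colorsys.rgb_to_hsv(*px2)
        mask_px = colorsys.hsv_to_rgb(h1, s1, i2)     # steps 3-4: fuse I, invert
        # step 5: detail restoration by weighted sum with image2
        fused.append(tuple((1 - beta) * m + beta * d
                           for m, d in zip(mask_px, px2)))
    return fused
```

On background pixels (where image1 and image2 agree) the pipeline leaves the pixel effectively unchanged, as the method requires.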
The segmentation and superposition of said step 1. is accomplished with the image editing software Photoshop: its Magnetic Lasso Tool conveniently completes the object segmentation, after which the segmented object is pasted onto the appropriate location in the new scene; after pasting, the "Edit → Free Transform" menu item is used to adjust the size of the object to suit its position in the new scene.
The IHS transform of said step 2. is the conversion from the RGB color space to the IHS color space, completed with the following formulas:

I = (R + G + B) / 3
S = 1 − 3·min(R, G, B) / (R + G + B)
H = θ if B ≤ G, otherwise H = 360° − θ

where

θ = arccos{ [(R − G) + (R − B)] / [2·√((R − G)² + (R − B)(G − B))] }

Wherein R, G, B denote the red, green and blue color components in the RGB space respectively, and I, H, S denote the luminance, hue and saturation components in the IHS space respectively.
The luminance fusion of said step 3. is the core link of the present invention, because it directly determines the brightness and color information of the fused object, i.e. it determines the fusion effect. Using different luminance fusion algorithms in this link yields different fusion schemes. In embodiments 1, 2 and 3 of the "Embodiments" section below we use the standard IHS transform technique, the SFIM technique and the wavelet technique respectively as the luminance fusion tool, providing three fusion schemes, and experiment with each to test its fusion effect. The fastest fusing scheme uses the standard IHS transform technique; its steps for realizing luminance fusion are as follows:
1. take the histogram of I1 as the standard histogram and histogram-match I2 to it, obtaining I2';
2. replace the luminance component of the Image1_hsi image with the I2' obtained above, i.e. I1' = I2'.
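Step 1, histogram matching, can be sketched for 8-bit luminance value lists as a classic CDF-matching procedure; `histogram_match` is our illustrative helper, independent of any particular image library.

```python
def histogram_match(source, reference, levels=256):
    """Remap integer luminance values in `source` so their distribution
    matches `reference` (classic CDF matching via a lookup table)."""
    def cdf(values):
        hist = [0] * levels
        for v in values:
            hist[v] += 1
        total, acc, out = len(values), 0, []
        for c in hist:
            acc += c
            out.append(acc / total)
        return out
    cs, cr = cdf(source), cdf(reference)
    # for each source level, pick the reference level with the nearest CDF
    lut = [min(range(levels), key=lambda k: abs(cr[k] - cs[level]))
           for level in range(levels)]
    return [lut[v] for v in source]

# a dark ramp matched to a bright ramp is pushed upward
print(histogram_match([0, 1, 2, 3], [200, 201, 202, 203]))  # → [200, 201, 202, 203]
```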
The scheme with the best fusion effect uses the SFIM technique; its steps for realizing luminance fusion are as follows:
1. take the histogram of I1 as the standard histogram and histogram-match I2 to it, obtaining I2';
2. fuse the luminance components I1 and I2' with the following formula:

I1'(x, y) = I1(x, y) · I2'(x, y) / Ī1(x, y)

wherein I1(x, y) is a pixel of I1, I2'(x, y) is a pixel of I2', Ī1(x, y) is the corresponding pixel of the image obtained by filtering I1 with a mean filter, and I1'(x, y) is a pixel of the fused luminance image I1' being sought.
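Assuming this form of the formula (mean-filtered I1 in the denominator), the SFIM luminance fusion can be sketched on row-lists of floats as follows; the helper names are ours, and the small `eps` guarding against division by zero is our addition, not discussed in the patent.

```python
def mean_filter(img, n=3):
    """n x n mean filter with edge replication; img is a list of rows."""
    h, w = len(img), len(img[0])
    half = n // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in range(-half, half + 1):
                for dx in range(-half, half + 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    acc += img[yy][xx]
            out[y][x] = acc / (n * n)
    return out

def sfim_fuse(i1, i2m, eps=1e-6):
    """I1'(x,y) = I1(x,y) * I2'(x,y) / mean(I1)(x,y)."""
    i1_mean = mean_filter(i1)
    return [[i1[y][x] * i2m[y][x] / (i1_mean[y][x] + eps)
             for x in range(len(i1[0]))] for y in range(len(i1))]
```

On a locally constant I1 the modulation ratio is 1, so the fused luminance reduces to I2' — the local brightness variation of I1 is what modulates the result.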
The IHS inverse transform of said step 4., from the IHS color space to the RGB color space, is completed with the following formulas:

When 0° ≤ H < 120°:
B = I(1 − S)
R = I[1 + S·cos H / cos(60° − H)]
G = 3I − (R + B)

When 120° ≤ H < 240°, with H' = H − 120°:
R = I(1 − S)
G = I[1 + S·cos H' / cos(60° − H')]
B = 3I − (R + G)

When 240° ≤ H < 360°, with H' = H − 240°:
G = I(1 − S)
B = I[1 + S·cos H' / cos(60° − H')]
R = 3I − (G + B)

Wherein I, H, S denote the luminance, hue and saturation components in the IHS space respectively, and R, G, B denote the red, green and blue color components in the RGB space respectively.
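The three-sector inverse transform can be sketched directly; `ihs_to_rgb` is our illustrative name, with H supplied in degrees.

```python
import math

def ihs_to_rgb(i, h, s):
    """(I, H in degrees, S) back to RGB via the three sector formulas."""
    def sector(hh):
        # returns (I(1-S), I[1 + S cos hh / cos(60 - hh)], remainder)
        x = i * (1 - s)
        y = i * (1 + s * math.cos(math.radians(hh)) /
                 math.cos(math.radians(60 - hh)))
        z = 3 * i - (x + y)
        return x, y, z
    if h < 120:
        b, r, g = sector(h)          # 0 <= H < 120
    elif h < 240:
        r, g, b = sector(h - 120)    # 120 <= H < 240
    else:
        g, b, r = sector(h - 240)    # 240 <= H < 360
    return r, g, b

print(ihs_to_rgb(0.5, 0.0, 0.0))  # zero saturation → gray (0.5, 0.5, 0.5)
```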
The detail restoration of said step 5. is a detail restoration algorithm that takes a weighted sum of the illumination mask image Maskimage and the new scene image Image2 containing the destination object; the algorithm is accomplished through the following formula:

F(x, y) = α · Maskimage(x, y) + β · Image2(x, y)

wherein the weights α and β satisfy α + β = 1, Maskimage(x, y) and Image2(x, y) denote pixels of image Maskimage and image Image2 respectively, and F(x, y) is the final fusion result. The weight α determines the illumination degree of the fused object, and β determines the detail richness of the object. For general fusion applications a small weighting coefficient β is preferable; when the illumination is simple or the textures of the object and the new scene differ greatly, the value of β can be suitably increased, e.g. to 0.3 or 0.4, but generally not beyond 0.5.
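The weighted-sum detail restoration can be sketched as follows; `restore_details` and the default `beta` value are our illustrative choices (the description advises keeping the Image2 weight at or below 0.5), with the two weights summing to 1.

```python
def restore_details(mask_img, image2, beta=0.3):
    """Per-pixel weighted sum of the illumination mask and Image2;
    beta controls detail richness, 1 - beta controls illumination."""
    alpha = 1.0 - beta
    return [[alpha * m + beta * d for m, d in zip(mrow, drow)]
            for mrow, drow in zip(mask_img, image2)]

# each output pixel is alpha * mask + beta * detail (here ≈ 0.7)
restore_details([[1.0]], [[0.0]], beta=0.3)
```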
The principle of the present invention can be represented with the flowchart of Fig. 1. Several points about Fig. 1 need explanation:
1. The "segment, superpose" processing is in fact the operation of the direct copying method described above. Because segmentation algorithms are not within the scope of the present invention, we do not discuss the segmentation operation in detail and focus only on fusion; the "segment, superpose" operation is accomplished with the image editing software Photoshop, obtaining the "new scene image Image2 with the destination object added".
2. To make the destination object consistent with the illumination of the new scene, the present invention, based on the idea of intensity modulation, uses a luminance fusion operation that acts only on the luminance component I of the image, because fusing the hue component H or the saturation component S would destroy the relative relationship of H and S and thus produce meaningless colors in the fused image. At the same time, the requirement on the luminance fusion algorithm is that it fuse the luminance information of I2 into I1 to obtain the fused luminance I1' without destroying the relative relationship of the H and S components of I1 and Image1, ensuring that no color distortion appears when Image1 is restored with I1'.
3. An object of the same color appears to present various colors under different illumination conditions. The illumination mask image Maskimage obtained by the IHS inverse transform is, in theory, an image in which the region where the destination object and the scene overlap (hereinafter the target area) contains the illumination information — i.e. the brightness and color information — after the object merges into the new scene, while the information of the other regions outside the target area (hereinafter the background area) is identical to the corresponding part of the scene image Image1. This is another requirement on the luminance fusion algorithm: when it cannot make the background area information identical to the corresponding part of the scene image Image1, it should at least ensure that the human eye cannot distinguish their difference.
4. The illumination mask image Maskimage is not the final fusion result, but it contains the illumination information after the object merges into the new scene, so the fused image can be obtained simply by restoring the detail features of the object in the target area of Maskimage. The detail restoration likewise must not change the background area information. Considering the content characteristics of Maskimage, we adopt a simple and effective detail restoration method: a weighted sum of Maskimage and Image2. As long as the two weights sum to 1, the object details are restored without changing the background area. Expressed as a formula:
F(x, y) = α · Maskimage(x, y) + β · Image2(x, y)

wherein the weights α and β satisfy α + β = 1, Maskimage(x, y) and Image2(x, y) denote pixels of image Maskimage and image Image2 respectively, and F(x, y) is the final fusion result. The weight α determines the illumination degree of the fused object, and β determines the detail richness of the object. If α is too large (or β too small), the object details will not be rich enough and the transparency will be too high: the target area will look like a background image. If α is too small (or β too large), the illumination condition (brightness and color) of the fused object under the new scene will not be well embodied, and the object may look unnatural in the new scene. The choice of weights is therefore very important, and it may differ when different objects are merged into different scene images, so suitable weights should be chosen according to actual conditions. We find through a large number of experiments that for general fusion applications a small β is preferable; when the illumination is simple or the textures of the object and the new scene differ greatly, the value of β can be suitably increased, e.g. to 0.3 or 0.4, but generally not beyond 0.5.
According to the flow shown in Fig. 1, the programming flowchart of the present invention can be drawn; see Fig. 2. The criterion "does the illumination mask meet the requirement" asks whether the illumination mask image Maskimage obtained through the IHS inverse transform meets the requirements explained in points 2 and 3 above: Maskimage exhibits no color distortion, its target area contains the illumination information after the object merges into the new scene, and the information of its background area is identical to the corresponding part of the new scene image Image1, or differs by an amount the human eye cannot distinguish. The criterion "are the object illumination degree and detail richness reasonable" is associated with the two weights of the weighted sum in the detail restoration algorithm. As described in point 4 above, one weight determines the illumination degree of the fused object and the other determines its detail richness; the two trade off against each other, so suitable weights must be chosen for the actual images to be fused, making both the illumination degree and the detail richness of the object satisfy human visual requirements, so that the fused image looks true and natural.
Compared with existing object-scene fusion technology, the present invention has the following characteristics:
1. Few conditions are needed and realization is simple. The present invention needs only the two images of destination object and new scene to realize the fusion of object and scene; no further image material is needed, and the images to be fused require no registration, so realization is simple.
2. Because the IHS transform extracts the luminance component from the color image, the fusion of object and scene can be realized by considering luminance fusion alone.
3. What the present invention proposes is a framework: different fusion schemes are obtained by plugging different luminance fusion algorithms into it. These schemes each have their own advantages — good fusion quality or fast fusion speed; for example, the method of embodiment 1 below is simple and fast, while the method of embodiment 2 reduces color distortion well. Different fusion schemes can be selected according to the real needs of the application.
4. Fusion is fast, which is the greatest characteristic of the present invention. As seen in the contrast experiment of embodiment 4, even when the luminance fusion link adopts the relatively complex wavelet technique, the fusion speed of this method is more than 8 times that of the Poisson fusion method. This is significant, because it shows that the potential of applying the present invention to video processing is great.
5. The color distortion produced by the fusion process can be reduced to the greatest extent. A large number of experimental results show that both the SFIM technique scheme introduced in embodiment 2 and the fast wavelet technique scheme introduced in embodiment 3 reduce color distortion effectively.
6. The fusion effect is good; the objective quality evaluation data of the fused images in embodiment 4 make clear its advantage over the Poisson fusion method. And since different fusion schemes can be formed by plugging in different luminance fusion algorithms, the present invention has wide uses and can be applied to numerous areas such as film and television special effects, city planning and architectural design.
Description of drawings
Fig. 1 is the principle flowchart of the present invention.
Fig. 2 is the programming flowchart of the present invention.
Fig. 3 is the fusion effect figure of embodiment 1.
Fig. 4 is the fusion effect figure of embodiment 2.
Fig. 5 is the fusion effect figure of embodiment 3.
Fig. 6 is the fusion effect comparison figure of embodiment 4.
Fig. 7 is the objective evaluation data of the hue component H of the fused images of embodiment 4.
Fig. 8 is the objective evaluation data of the luminance component I of the fused images of embodiment 4.
Fig. 9 is the fusion processing time comparison of the fusion schemes adopted in embodiment 4.
Fig. 10 shows more fusion effect figures of the present invention. These effect figures are divided into four groups; the four images in one row form one group, and from left to right within a group they are the destination object figure, the new scene image, the fusion effect of the direct copying method, and the fusion effect image using the method of the invention.
Embodiments
To describe the technology of the present invention better, it is described below with examples. In the first three examples we use the standard IHS transform technique, the SFIM technique and the wavelet technique respectively as the luminance fusion tool, providing three fusion schemes and experimenting with each; the last example is a contrast experiment investigating the fusion effects of these fusion schemes, which also compares them with the currently common Poisson fusion method. The detail restoration algorithm used in all experiments is the weighted sum method described above.
Embodiment 1: this object and scene fusion method based on the IHS transform uses the standard IHS transform technique for luminance fusion, which is in fact a luminance replacement operation. The concrete steps of realizing object and scene fusion in this embodiment, with the standard IHS transform technique as the luminance fusion tool, are as follows:
1. IHS transform: perform the IHS transform on the scene image Image1 and on the scene image Image2 with the destination object added, obtaining Image1_hsi and Image2_hsi, and take their luminance components I1 and I2;
2. take the histogram of I1 as the standard histogram and histogram-match I2 to it, obtaining I2'. This is done to weaken the influence of the fusion process on the spectral information of the original scene image;
3. luminance fusion: replace the luminance component of the Image1_hsi image with the I2' obtained above, i.e. I1' = I2';
4. IHS inverse transform: perform the IHS inverse transform with I1' and the H and S components of Image1 to obtain the illumination mask image Maskimage;
5. detail restoration: apply the detail restoration algorithm to Maskimage to obtain the fused image.
Fig. 3 gives the fusion effect of this scheme for a suitable choice of the detail restoration coefficients.
Embodiment 2: in remote sensing image fusion applications, the sharpest advantage of the SFIM technique compared with the standard IHS transform technique and the Brovey transform technique is that it both improves the fusion of spatial information and better keeps the spectral characteristics of the source image; the fused image it obtains is independent of the spectral properties of the high-resolution image. But SFIM is sensitive to image registration accuracy. The images to be fused, Image1 and Image2, obtained by the processing of this object and scene fusion method based on the IHS transform, have no registration accuracy problem — they can be considered 100% registered — so the SFIM technique can be introduced as the luminance fusion tool. The luminance fusion it realizes can be described by the following formula:
I1'(x, y) = I1(x, y) · I2(x, y) / Ī1(x, y)

wherein I1 and I2 are respectively the luminance components of the new scene image and of the new scene image with the destination object added; I1(x, y) is a pixel of I1, I2(x, y) is a pixel of I2, and Ī1(x, y) is the corresponding pixel of the image obtained by filtering I1 with a mean filter. Most SFIM processing places no strict requirement on the size of the mean filter window, generally taken as n × n (n ≥ 3). I1'(x, y) is a pixel of the fused luminance image I1' being sought.
In this embodiment the concrete steps of realizing object and scene fusion with the SFIM technique as the luminance fusion tool are similar to the steps of embodiment 1; only the luminance replacement operation of its step 3. is changed to fusing the luminance components I1 and I2' with SFIM.
Fig. 4 gives the fusion effect of this scheme with the mean filter window size taken as 3 × 3 and a suitable choice of the detail restoration coefficients.
Embodiment 3: the detail information extracted from one image with the wavelet transform can be fused into another image through many methods, such as simple substitution and superposition. Meanwhile, in most remote sensing image fusion applications, wavelet-based schemes obtain better fusion effects than the general standard schemes without wavelets, especially in reducing color distortion. On this basis the wavelet technique is introduced in this embodiment. By introducing a weighting model, we give in this embodiment one implementation of object and scene fusion with the wavelet technique as the luminance fusion tool, with steps as follows:
1. perform the IHS transform on the scene image Image1 and on the scene image Image2 with the destination object added, obtaining Image1_hsi and Image2_hsi, and take their luminance components I1 and I2;
2. take the histogram of I1 as the standard histogram and histogram-match I2 to it, obtaining I3;
3. perform the wavelet transform on I1 and I3 respectively, obtaining their wavelet coefficients;
4. fuse the high-frequency components of I3 into the high-frequency components of I1 with the weighted sum method, i.e.

C1'^d = w1 · C1^d + w2 · C3^d, d ∈ {H, V, D}

wherein the weights w1 and w2 satisfy w1 + w2 = 1, and the superscript d ∈ {H, V, D} denotes the horizontal, vertical and diagonal directions respectively;
5. carry out wavelet inverse transformation with the wavelet coefficient of I1 and obtain I1 ';
6. use the luminance component of I1 ' substitute I mage1_hsi image, carry out the IHS inverse transformation and obtain illumination illiteracy plate image M askimage;
7. Maskimage is carried out the details retrieving algorithm and obtain fused images.
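Steps 3–5 can be sketched with a one-level 2-D Haar transform standing in for the db2 wavelet used in the embodiment; the weight values `w1`, `w2` are illustrative defaults (the patent's actual coefficient values were rendered as images and are not recoverable):

```python
import numpy as np

def haar2(x):
    """One-level 2-D Haar transform (illustrative stand-in for db2)."""
    w, xx = x[0::2, 0::2], x[0::2, 1::2]
    y, z = x[1::2, 0::2], x[1::2, 1::2]
    a = (w + xx + y + z) / 2   # approximation
    h = (w - xx + y - z) / 2   # horizontal detail
    v = (w + xx - y - z) / 2   # vertical detail
    d = (w - xx - y + z) / 2   # diagonal detail
    return a, h, v, d

def ihaar2(a, h, v, d):
    """Inverse of haar2 (exact reconstruction)."""
    out = np.empty((2 * a.shape[0], 2 * a.shape[1]))
    out[0::2, 0::2] = (a + h + v + d) / 2
    out[0::2, 1::2] = (a - h + v - d) / 2
    out[1::2, 0::2] = (a + h - v - d) / 2
    out[1::2, 1::2] = (a - h - v + d) / 2
    return out

def fuse_luminance(i1, i3, w1=0.6, w2=0.4):
    """Weighted-sum fusion of the high-frequency sub-bands (w1 + w2 = 1);
    the approximation band of I1 is kept unchanged, per step 4."""
    a1, h1, v1, d1 = haar2(i1)
    _,  h3, v3, d3 = haar2(i3)
    return ihaar2(a1, w1 * h1 + w2 * h3,
                      w1 * v1 + w2 * v3,
                      w1 * d1 + w2 * d3)
```

Setting w1 = 1, w2 = 0 recovers I1 unchanged, which is a convenient sanity check on the transform pair.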
Fig. 5 gives the fusion result of this scheme using Mallat's fast wavelet algorithm with the db2 wavelet basis and a single level of decomposition, with the high-frequency weighting coefficients and the detail-restoration coefficients set as indicated.
Embodiment 4: the image material of the above three examples consists mostly of film stills, which reflects the practical application value of the present invention. To compare the fusion results obtained in the present invention by the standard IHS transform technique, the SFIM technique and the wavelet technique, and also to verify the superiority of the invention, we carried out a comparative experiment on a 900 × 675 scene image. Fig. 6 gives the results with the mean-filter size of the SFIM technique set to 3 × 3, the db2 wavelet basis, a single level of decomposition, and the high-frequency weighting coefficients and the detail-restoration coefficient set as indicated.
Meanwhile, in order to evaluate the fusion quality objectively, we used the following four evaluation indexes:
1. Correlation coefficient CC: the CC between the background region of the fused image and the background region of the Image2 image measures how much the background region is affected by the fusion process; the closer CC is to 1, the smaller the influence of the fusion on the background region.
2. Mutual-information sum MI: the sum of the mutual information, over the target region, between the fused image and Image1 and between the fused image and Image2 measures how well the target region is fused; the larger the mutual-information sum, the better the scene information and the object information are fused.
3. Average gradient: the average gradient of the fused image evaluates its sharpness and its ability to express minute detail contrast and texture variation; in general, the larger the average gradient, the more image levels there are and the clearer the image.
4. Out of consideration for computational efficiency, we also take the fusion-processing time as an evaluation index. Only the fusion itself is timed; the time spent cutting out the object and adding it to the scene is not counted. We take the mean of the actual processing times of 10 identical fusion operations with the same algorithm as the final processing time; meanwhile, to shield differences in fusion time caused by differences in the experimental environment, we set the fusion-processing time of the scheme using the standard IHS transform technique to 1 and then give the processing times of the other schemes as ratios to it.
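The first three indexes can be computed directly in numpy; the definitions below follow the common textbook forms (the bin count for the mutual-information histogram is an illustrative choice, not specified by the patent):

```python
import numpy as np

def corr_coef(a, b):
    """Correlation coefficient CC between two regions; a value close to 1
    means the fusion barely disturbed the region."""
    a, b = a - a.mean(), b - b.mean()
    return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())

def mutual_info(a, b, bins=32):
    """Mutual information estimated from a discretized joint histogram."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return (p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum()

def avg_gradient(img):
    """Average gradient; larger values indicate a sharper image."""
    gx = np.diff(img, axis=1)[:-1, :]
    gy = np.diff(img, axis=0)[:, :-1]
    return np.sqrt((gx ** 2 + gy ** 2) / 2).mean()
```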
Fig. 7 and Fig. 8 give the objective evaluation data of the chrominance component H and the luminance component I of the fused image, respectively. Fig. 9 gives the fusion-processing-time ratios of the different schemes.

Claims (6)

1. An object and scene fusion method based on the IHS transform, whose input is two images: the original scene image containing the target object, and the new scene image into which the target object is to be merged, characterized in that the operation steps are as follows:
a. Segmentation and superposition: first segment the target object out of the original scene image, then add it to an appropriate position in the new scene image Image1 to obtain the image Image2;
b. IHS transform: perform the IHS transform on Image1 and Image2 respectively, obtaining their images Image1_hsi and Image2_hsi in the IHS colour space, and take their luminance components I1 and I2;
c. Brightness fusion: fuse the two luminance components I1 and I2 to obtain the fused luminance I1';
d. Inverse IHS transform: replace the luminance component of Image1_hsi with I1', then perform the inverse IHS transform on Image1_hsi to obtain the illumination mask Maskimage of the fused image;
e. Detail restoration: restore the details of the target object on the illumination mask image Maskimage, thereby obtaining the fused image.
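The data flow of steps b–e can be sketched as follows; note that the IHS transform and the brightness fusion are replaced here by deliberately trivial stand-ins (luminance taken as the channel mean, fusion by substitution), so this block illustrates only the structure of the claimed pipeline, not the exact transforms of claims 3–6:

```python
import numpy as np

def to_ihs(rgb):
    """Step b stand-in: luminance as channel mean; colour info carried as-is."""
    return rgb.mean(axis=2), rgb

def fuse_brightness(i1, i2):
    """Step c stand-in: simple luminance substitution (the standard IHS scheme)."""
    return i2

def from_ihs(i, rgb):
    """Step d stand-in: rescale each channel so its mean matches the fused luminance."""
    old = rgb.mean(axis=2) + 1e-9
    return rgb * (i / old)[..., None]

def detail_restore(mask_img, image2, alpha=0.8):
    """Step e: weighted sum of the illumination mask and Image2 (claim 6);
    alpha = 0.8 is an illustrative default, not a value from the patent."""
    return alpha * mask_img + (1 - alpha) * image2

def fuse(image1, image2):
    i1, c1 = to_ihs(image1)                # b: IHS transform of the new scene
    i2, _ = to_ihs(image2)                 # b: IHS transform of scene + object
    mask = from_ihs(fuse_brightness(i1, i2), c1)   # c, d: illumination mask
    return detail_restore(mask, image2)    # e: restore object details
```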
2. The object and scene fusion method based on the IHS transform according to claim 1, characterized in that the segmentation and superposition operations of step a are completed with the image-editing software Photoshop: its Magnetic Lasso Tool is used to complete the object segmentation conveniently; the segmented object is then pasted at an appropriate position in the new scene; after pasting, the "Edit → Free Transform" menu item is used to adjust the size of the object so that it suits its position in the new scene.
3. The object and scene fusion method based on the IHS transform according to claim 1, characterized in that the IHS transform of step b, from the RGB colour space to the IHS colour space, is completed with the following formulas:
I = (R + G + B) / 3
S = 1 − 3·min(R, G, B) / (R + G + B)
H = θ when B ≤ G, and H = 2π − θ when B > G,
here
θ = arccos{ [(R − G) + (R − B)] / [2·√((R − G)² + (R − B)(G − B))] }
wherein R, G, B denote the red, green and blue colour components in the RGB space respectively, and I, H, S denote the intensity, hue and saturation components in the IHS space respectively.
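The formulas above are the classic arccos form of the RGB-to-HSI conversion and can be written directly in numpy; the `eps` guard against division by zero is an implementation detail not present in the claim:

```python
import numpy as np

def rgb_to_hsi(r, g, b, eps=1e-9):
    """Classic RGB -> HSI conversion (arccos form of claim 3).

    Inputs are float arrays in [0, 1]; returns (H in radians, S, I)."""
    i = (r + g + b) / 3.0
    s = 1.0 - 3.0 * np.minimum(np.minimum(r, g), b) / (r + g + b + eps)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    h = np.where(b <= g, theta, 2 * np.pi - theta)
    return h, s, i
```

For pure red (1, 0, 0) this gives H = 0, S = 1, I = 1/3, and for any grey pixel S = 0, as the definitions require.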
4. The object and scene fusion method based on the IHS transform according to claim 1, characterized in that the brightness fusion of step c uses the standard IHS transform technique, the SFIM technique or the wavelet technique as the brightness-fusion tool, giving three fusion schemes; the fastest of these uses the standard IHS transform technique, and its brightness-fusion steps are as follows:
c-1-1. Using the histogram of I1 as the reference histogram, perform histogram matching on I2 to obtain I2';
c-1-2. Replace the luminance component of the Image1_hsi image with the I2' obtained above, i.e. I1' = I2';
while the scheme with the best fusion result uses the SFIM technique, and its brightness-fusion steps are as follows:
c-2-1. Using the histogram of I1 as the reference histogram, perform histogram matching on I2 to obtain I2';
c-2-2. Fuse the luminance components I1 and I2' with the following formula:
I1'(i, j) = I1(i, j)·I2'(i, j) / Ī1(i, j)
wherein I1(i, j) is a pixel of I1; I2'(i, j) is a pixel of I2'; Ī1(i, j) is a pixel of the image obtained by filtering I1 with a mean filter; and I1'(i, j) is a pixel of the desired fused luminance image I1'.
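Step c-1-1's histogram matching can be sketched with the usual CDF rank-interpolation approach; this is a common implementation, not the patent's exact procedure:

```python
import numpy as np

def match_histogram(src, ref):
    """Match the histogram of `src` to that of `ref` via CDF interpolation."""
    s_vals, s_idx, s_cnt = np.unique(src.ravel(),
                                     return_inverse=True, return_counts=True)
    r_vals, r_cnt = np.unique(ref.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_cnt) / src.size      # cumulative distribution of src
    r_cdf = np.cumsum(r_cnt) / ref.size      # cumulative distribution of ref
    mapped = np.interp(s_cdf, r_cdf, r_vals)  # map src quantiles onto ref values
    return mapped[s_idx].reshape(src.shape)
```

Matching an image to its own histogram returns it unchanged, and the matched output always lies within the value range of the reference, both useful sanity checks.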
5. The object and scene fusion method based on the IHS transform according to claim 1, characterized in that the inverse IHS transform of step d, from the IHS colour space to the RGB colour space, is completed with the following formulas:
When 0 ≤ H < 2π/3:
B = I(1 − S)
R = I[1 + S·cos H / cos(π/3 − H)]
G = 3I − (R + B)
When 2π/3 ≤ H < 4π/3, with H' = H − 2π/3:
R = I(1 − S)
G = I[1 + S·cos H' / cos(π/3 − H')]
B = 3I − (R + G)
When 4π/3 ≤ H < 2π, with H' = H − 4π/3:
G = I(1 − S)
B = I[1 + S·cos H' / cos(π/3 − H')]
R = 3I − (G + B)
wherein I, H, S denote the intensity, hue and saturation components in the IHS space respectively, and R, G, B denote the red, green and blue colour components in the RGB space respectively.
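The three hue sectors above differ only by a rotation of the colour channels, so the inverse can be written as one loop; this is the standard textbook HSI inverse matching the sector structure of claim 5:

```python
import numpy as np

def hsi_to_rgb(h, s, i):
    """HSI -> RGB inverse over the three hue sectors of claim 5."""
    h = np.asarray(h, float) % (2 * np.pi)
    s = np.asarray(s, float)
    i = np.asarray(i, float)
    r, g, b = np.empty_like(i), np.empty_like(i), np.empty_like(i)
    # In sector k the roles of (R, G, B) rotate: the channel receiving I(1-S),
    # the one receiving the cosine term, and the remainder 3I - (sum of the two).
    for k, (c1, c2, c3) in enumerate(((r, g, b), (g, b, r), (b, r, g))):
        m = (h >= k * 2 * np.pi / 3) & (h < (k + 1) * 2 * np.pi / 3)
        hh = h[m] - k * 2 * np.pi / 3          # hue within the sector
        x = i[m] * (1 - s[m])                  # e.g. B = I(1-S) in sector 0
        y = i[m] * (1 + s[m] * np.cos(hh) / np.cos(np.pi / 3 - hh))
        z = 3 * i[m] - (x + y)                 # remaining channel
        c1[m], c2[m], c3[m] = y, z, x
    return r, g, b
```

Pure red (H = 0, S = 1, I = 1/3) maps back to (1, 0, 0), and any pixel with S = 0 maps to the grey level (I, I, I).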
6. The object and scene fusion method based on the IHS transform according to claim 1, characterized in that the detail restoration of step e is a detail-restoration algorithm that forms a weighted sum of the illumination mask image Maskimage and the new scene image Image2 containing the target object; the algorithm is completed with the following formula:
F(i, j) = α·Maskimage(i, j) + β·Image2(i, j)
wherein the weights α and β satisfy α + β = 1; Maskimage(i, j) and Image2(i, j) denote the pixels of the image Maskimage and of the image Image2 respectively, and F(i, j) is the final fusion result; the weight α determines the illumination degree of the fused object, and the weight β determines the detail abundance of the object. For general fusion applications a small value of β is preferable; when the illumination of the image is simple, or the object and the new scene differ greatly in texture, the value of β can be suitably increased, for example to 0.3 or 0.4, but generally it should not exceed 0.5.
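The detail-restoration formula is a single weighted sum; in the sketch below the default β = 0.2 is an assumption consistent with the claim's guidance (β raised toward 0.3–0.4 when needed, never above 0.5), since the patent's own coefficient values were rendered as images:

```python
import numpy as np

def detail_restore(maskimage, image2, beta=0.2):
    """Claim-6 detail restoration: F = alpha*Maskimage + beta*Image2,
    with alpha + beta = 1. beta controls the detail abundance of the
    object; alpha controls its illumination degree."""
    alpha = 1.0 - beta
    return alpha * maskimage + beta * image2
```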
CN2011102540528A 2011-08-31 2011-08-31 Object and scene fusion method based on IHS (Intensity, Hue, Saturation) transform Pending CN102436666A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2011102540528A CN102436666A (en) 2011-08-31 2011-08-31 Object and scene fusion method based on IHS (Intensity, Hue, Saturation) transform


Publications (1)

Publication Number Publication Date
CN102436666A true CN102436666A (en) 2012-05-02

Family

ID=45984706


Country Status (1)

Country Link
CN (1) CN102436666A (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105303542A (en) * 2015-09-22 2016-02-03 西北工业大学 Gradient weighted-based adaptive SFIM image fusion algorithm
CN107958594A (en) * 2017-12-26 2018-04-24 潘永森 One kind monitors accurate traffic comprehensive monitor system
CN108230376A (en) * 2016-12-30 2018-06-29 北京市商汤科技开发有限公司 Remote sensing image processing method, device and electronic equipment
CN108288344A (en) * 2017-12-26 2018-07-17 李文清 A kind of efficient forest fire early-warning system
CN108711160A (en) * 2018-05-18 2018-10-26 西南石油大学 A kind of Target Segmentation method based on HSI enhancement models
CN108898569A (en) * 2018-05-31 2018-11-27 安徽大学 A kind of fusion method being directed to visible light and infrared remote sensing image and its fusion results evaluation method
CN109102484A (en) * 2018-08-03 2018-12-28 北京字节跳动网络技术有限公司 Method and apparatus for handling image
CN110929657A (en) * 2019-11-28 2020-03-27 武汉奥恒胜科技有限公司 Environmental pollution multispectral image analysis and identification method
CN112488967A (en) * 2020-11-20 2021-03-12 中国传媒大学 Object and scene synthesis method and system based on indoor scene
CN112700513A (en) * 2019-10-22 2021-04-23 阿里巴巴集团控股有限公司 Image processing method and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101266686A (en) * 2008-05-05 2008-09-17 西北工业大学 An image amalgamation method based on SFIM and IHS conversion
CN101303733A (en) * 2008-05-26 2008-11-12 东华大学 Method for viewing natural color at night with sense of space adopting pattern database
CN101841662A (en) * 2010-04-16 2010-09-22 华为终端有限公司 Method for acquiring photo frame composite image by mobile terminal and mobile terminal
CN102063710A (en) * 2009-11-13 2011-05-18 烟台海岸带可持续发展研究所 Method for realizing fusion and enhancement of remote sensing image


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
FENG Zhen et al.: "Image Classification Method Combined with Image Matting", Journal of Image and Graphics *
WANG Rong et al.: "Research on Image Fusion Technology and Evaluation of Fusion Effect", Journal of Agricultural Mechanization Research *


Similar Documents

Publication Publication Date Title
CN102436666A (en) Object and scene fusion method based on IHS (Intensity, Hue, Saturation) transform
Zhang et al. Fast haze removal for nighttime image using maximum reflectance prior
Li et al. Nighttime haze removal with glow and multiple light colors
Agarwala et al. Interactive digital photomontage
TWI455062B (en) Method for 3d video content generation
Peng et al. Image haze removal using airlight white correction, local light filter, and aerial perspective prior
Weeks et al. Histogram specification of 24-bit color images in the color difference (CY) color space
KR101502598B1 (en) Image processing apparatus and method for enhancing of depth perception
CN104272377B (en) Moving picture project management system
CN107045715A (en) A kind of method that single width low dynamic range echograms generates high dynamic range images
TW201432622A (en) Generation of a depth map for an image
CN103700078B (en) The defogging method of a small amount of background image containing mist
Wang et al. Variational single nighttime image haze removal with a gray haze-line prior
CN110248242B (en) Image processing and live broadcasting method, device, equipment and storage medium
Gao et al. Haze filtering with aerial perspective
CN106846258A (en) A kind of single image to the fog method based on weighted least squares filtering
AU2015213286B2 (en) System and method for minimal iteration workflow for image sequence depth enhancement
Choi et al. Referenceless perceptual image defogging
Kekre et al. Using assorted color spaces and pixel window sizes for colorization of grayscale images
Li et al. Applying daytime colors to night-time imagery with an efficient color transfer method
Liu et al. Research on image dehazing algorithms based on physical model
CN103474009B (en) Wisdom media system and exchange method
Cohen et al. Image stacks
Ding et al. A new framework for the fusion of object and scene based on IHS transform
Shen et al. High dynamic range image tone mapping and retexturing using fast trilateral filtering

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20120502