CN102609919A - Region-based compressed sensing image fusion method - Google Patents

Region-based compressed sensing image fusion method Download PDF

Info

Publication number
CN102609919A
CN102609919A CN2012100346621A CN201210034662A
Authority
CN
China
Prior art keywords
image
compressed sensing
sensing image
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2012100346621A
Other languages
Chinese (zh)
Inventor
覃征
陈旸
李颖
方峻
李环
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN2012100346621A priority Critical patent/CN102609919A/en
Publication of CN102609919A publication Critical patent/CN102609919A/en
Pending legal-status Critical Current


Abstract

The invention provides a region-based compressed sensing image fusion method comprising the following steps: step 1, compress each image so that it gains a degree of confidentiality and a file size suitable for transmission; step 2, partition each image into regions so that it has an intelligent representation convenient for subsequent intelligent fusion; step 3, perform joint region partitioning on the images, combining the region-partitioning results of the individual images into a joint partition; step 4, fuse the compressed sensing images according to the region partition, so that the fused compressed sensing image is intelligent, namely continuous and smooth within each region; and step 5, apply the inverse transform to the fused compressed sensing image so that it is convenient for the human eye to observe and the integrity of the signal is guaranteed. The region-based compressed sensing image fusion method disclosed by the invention needs no standard reference image as an evaluation guide; a comprehensive evaluation is made from how well the respective characteristics of the infrared and visible-light sources are expressed in the fusion result.

Description

A region-based compressed sensing image fusion method
Technical field
The present invention relates to an image fusion method, in particular to a region-based compressed sensing image fusion method suited to sensors that must transmit quickly and place only modest demands on information integrity.
Background technology
Image fusion is widely used in both civilian and military fields. It combines the images obtained by different sensors so that the fused image provides an accurate, effective, and comprehensive basis for further recognition by humans or machines. Because the modern battlefield is complex and changeable and the computational power of sensors is limited, higher demands are placed on image fusion methods. A good image fusion method must not only be applicable to sensor channels with low transfer rates, but must also encrypt the images and guarantee the precision of the fused result.
Summary of the invention
To overcome the defects of the prior art, the object of the present invention is to provide a region-based compressed sensing image fusion method that offers fast processing and transmission at the sensor together with good confidentiality.
To achieve the above object, the technical scheme of the present invention is as follows:
A region-based compressed sensing image fusion method comprises the following steps:
Step 1: compress each image so that it gains a degree of confidentiality and a file size suitable for transmission;
Step 2: partition each image into regions so that it has an intelligent representation convenient for subsequent intelligent fusion;
Step 3: perform joint region partitioning, combining the region-partitioning results of the individual images into a joint partition;
Step 4: fuse the compressed sensing images according to the region partition, so that the fused compressed sensing image is intelligent, namely continuous and smooth within each region;
Step 5: apply the inverse transform to the fused compressed sensing image so that it is convenient for the human eye to observe and the integrity of the signal is guaranteed.
Through models of the infrared and visible-light image features, the present invention constructs a hierarchical region-extraction model and a hierarchical evaluation model. Based on the target-detection capability of the infrared image and the background-rendering capability of the visible-light image, the model divides the image into three levels: the target region, the important background, and the general background, and evaluates the fusion result separately at each of the three levels. The hierarchical evaluation model focuses on the target-detection effect of the fusion result and on how well the important and general backgrounds are rendered. The invention needs no standard reference image as an evaluation guide; it makes a comprehensive evaluation from how well the respective characteristics of the infrared and visible-light sources are expressed in the fusion result.
Description of drawings
Fig. 1 is the hierarchical infrared/visible-light fusion assessment model of the invention;
Fig. 2 is the contour-extraction flowchart for the target region according to the invention;
Fig. 3 is the contour-extraction flowchart for the important background region according to the invention;
Fig. 4 is the contour-extraction flowchart for the general background region according to the invention;
Fig. 5 is the flowchart of region extraction on the three levels according to the invention;
Fig. 6 is the fusion-evaluation flowchart for the target region under the invention;
Fig. 7 is the fusion-evaluation flowchart for the important background region under the invention;
Fig. 8 is the fusion-evaluation flowchart for the general background region under the invention;
Fig. 9 is the flowchart of hierarchical fusion evaluation under the invention;
Embodiment
The model according to the invention and its construction method are described in detail below with reference to the accompanying drawings and a practical implementation.
A region-based compressed sensing image fusion method comprises the following steps:
Step 1: compress each image so that it gains a degree of confidentiality and a file size suitable for transmission;
Step 2: partition each image into regions so that it has an intelligent representation convenient for subsequent intelligent fusion;
Step 3: perform joint region partitioning, combining the region-partitioning results of the individual images into a joint partition;
Step 4: fuse the compressed sensing images according to the region partition, so that the fused compressed sensing image is intelligent, namely continuous and smooth within each region;
Step 5: apply the inverse transform to the fused compressed sensing image so that it is convenient for the human eye to observe and the integrity of the signal is guaranteed.
Step 1 includes a description of the method by which the image is compressively sensed.
Step 2 includes a description of the method for partitioning the original image into regions.
Step 3 includes a description of combining the partitioned regions of the images to form the joint region partition.
Step 4 includes a description of the method for fusing the images according to the joint region partition.
Step 5 includes applying the inverse compressed sensing transform to the fused image to restore its original appearance.
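The compressed sensing step can be sketched in a few lines. This is a minimal illustration under assumptions the patent does not state: the measurement matrix, sampling ratio, and the function name `cs_measure` are all hypothetical, with a random Gaussian matrix chosen as the textbook option.

```python
import numpy as np

def cs_measure(image, ratio=0.5, seed=0):
    """Compressively sample each column of a grayscale image:
    y = Phi @ x, with Phi a random Gaussian measurement matrix.
    Hypothetical sketch -- the patent does not specify Phi."""
    rng = np.random.default_rng(seed)
    n = image.shape[0]                    # length of each column signal
    m = int(ratio * n)                    # number of measurements (m < n)
    phi = rng.standard_normal((m, n)) / np.sqrt(m)
    return phi @ image.astype(float), phi

measurements, phi = cs_measure(np.ones((64, 64)))
print(measurements.shape)  # (32, 64): half as many rows as the input
```

Without `phi`, which acts like a key, the original image cannot be recovered from the measurements; that is one plausible reading of the confidentiality claimed in step 1.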
The embodiment is as follows, with reference to the model diagram of Fig. 1.
1. Target contour extraction
Target extraction exploits the target-detection capability of infrared imaging to pull the target out of the infrared image. In general, the higher the gray value of a part of the infrared image, the more important the target there; a threshold method can therefore separate the bright parts of the image.
As shown in Fig. 2, a threshold is first set for the infrared image, and the pixels whose gray value exceeds the threshold are extracted to obtain the highlighted region. Contour extraction on this bright part then yields the contour of the target region.
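The thresholding step above can be sketched directly. The threshold value is an assumed input here; the patent leaves its selection open.

```python
import numpy as np

def extract_highlight_mask(ir_image, threshold):
    """Binary mask of infrared pixels brighter than the threshold --
    the candidate target region of Fig. 2. The threshold is assumed
    to be chosen externally."""
    return ir_image > threshold

ir = np.array([[ 10, 200,  30],
               [220, 240,  20],
               [ 15,  25,  35]], dtype=np.uint8)
mask = extract_highlight_mask(ir, 180)
print(mask.astype(int))
# [[0 1 0]
#  [1 1 0]
#  [0 0 0]]
```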
2. Background contour extraction
Background contour extraction is divided into (1) important-background contour extraction and (2) general-background contour extraction.
2.1 Important-background contour extraction
As shown in Fig. 3, the target-region contour is expanded by a specified extent to obtain the expanded contour. This contour then guides the extraction of a closed contour from the visible-light image (cf. the flowchart of Fig. 2). Taking the continuous closed contour as the outer contour and the target-region contour as the inner contour removes the target region and yields the important-background contour. The important-background contour thus consists of an inner and an outer contour; that is, the important background region does not include the target region.
2.2 General background region
As shown in Fig. 4, the method is the same as for the important background: first extract the outer contour of the important-background contour; that outer contour then serves as the inner contour, while the contour of the image (in general, its peripheral frame) serves as the outer contour. The part between the inner and outer contours is the general background region.
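Sections 2.1 and 2.2 amount to building three nested masks from the target mask. The sketch below stands in for the patent's contour expansion with a simple 4-neighbourhood dilation; the expansion extent (`expand_iters`) and both function names are assumptions for illustration.

```python
import numpy as np

def dilate_cross(mask):
    """One dilation step with a plus-shaped (4-neighbour) structuring
    element, implemented by padding and shifting."""
    p = np.pad(mask, 1)
    return (p[1:-1, 1:-1] | p[:-2, 1:-1] | p[2:, 1:-1]
            | p[1:-1, :-2] | p[1:-1, 2:])

def three_level_masks(target_mask, expand_iters=2):
    """Target / important-background / general-background masks:
    the important background is the dilated band around the target
    (target excluded), and the general background is everything else."""
    expanded = target_mask
    for _ in range(expand_iters):
        expanded = dilate_cross(expanded)
    important_bg = expanded & ~target_mask
    general_bg = ~expanded
    return target_mask, important_bg, general_bg

t = np.zeros((7, 7), dtype=bool)
t[3, 3] = True                          # a single-pixel "target"
tgt, imp, gen = three_level_masks(t, expand_iters=1)
print(tgt.sum(), imp.sum(), gen.sum())  # 1 4 44
```

The three masks partition the image, matching the text's requirement that the important background exclude the target region.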
3. Region extraction of the three images on the three levels
As shown in Fig. 5, the target-region contour, important-background contour, and general-background contour obtained in steps 1 and 2 are applied to the infrared image, the visible-light image, and the fused image respectively, yielding the region images of these three images on the three levels.
4. Hierarchical assessment method
Several image evaluation measures are in common use, such as: mean square error (MSE), root mean square error (RMSE), signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR), the correlation coefficient, information entropy, cross entropy, joint entropy, mutual information, and the Renyi entropy.
Any of these measures can serve as the regional assessment standard for each level; below, the correlation coefficient is taken as the example.
4.1 Target region fusion evaluation:
Taking the correlation coefficient as the fusion evaluation criterion for each level, the correlation coefficient C(P, Q) of two images P and Q is defined as:

$$C(P,Q)=\frac{\sum_{x=1}^{M}\sum_{y=1}^{N}\left[P(x,y)-\bar{P}\right]\left[Q(x,y)-\bar{Q}\right]}{\sqrt{\sum_{x=1}^{M}\sum_{y=1}^{N}\left[P(x,y)-\bar{P}\right]^{2}\,\sum_{x=1}^{M}\sum_{y=1}^{N}\left[Q(x,y)-\bar{Q}\right]^{2}}}$$

where M and N are the width and height of the images and P̄, Q̄ are their mean gray values. |C(P, Q)| ≤ 1; the closer C is to 1, the more similar the two images.
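The definition above translates directly into code, term by term:

```python
import numpy as np

def correlation_coefficient(P, Q):
    """C(P, Q): mean-centred cross sum over the square root of the
    product of the two centred square sums, as defined above."""
    p = P.astype(float) - P.mean()
    q = Q.astype(float) - Q.mean()
    return (p * q).sum() / np.sqrt((p * p).sum() * (q * q).sum())

a = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(correlation_coefficient(a, a))      # 1.0 for identical images
print(correlation_coefficient(a, 2 * a))  # also 1.0: C is scale-invariant
```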
The emphasis of hierarchical fusion is to give the infrared and visible-light feature representations different weights on the three levels, normalizing the two fusion assessments by weighted combination. As shown in Fig. 6, let the correlation between the target region of the fusion result and the corresponding infrared region be C1 with weight w1, and the correlation between the target region of the fusion result and the corresponding visible-light region be C2 with weight w2. The comprehensive target-region fusion evaluation is then:
C_target = C1·w1 + C2·w2 (w1 + w2 = 1)
In general, the infrared weight w1 is taken between 0.9 and 1.0, and the corresponding visible-light weight w2 between 0.1 and 0.
4.2 Important background fusion evaluation:
As shown in Fig. 7, the fusion evaluation of the important background uses the same criterion as the target-region fusion evaluation:
C_important = C1·w1 + C2·w2 (w1 + w2 = 1)
In general, the visible-light weight w2 for the important background is taken between 0.7 and 1.0, and the corresponding infrared weight between 0.3 and 0.
4.3 General background fusion evaluation:
As shown in Fig. 8, the fusion evaluation of the general background uses the same criterion as the target-region fusion evaluation:
C_general = C1·w1 + C2·w2 (w1 + w2 = 1)
In general, the visible-light weight w2 for the general background is taken between 0.7 and 1.0, and the corresponding infrared weight between 0.3 and 0.
4.4 Hierarchical evaluation result
As shown in Fig. 9, combining the evaluations of the three levels gives the overall fusion evaluation. Let the target fusion evaluation be C_target with weight w_target, the important-background evaluation be C_important with weight w_important, and the general-background evaluation be C_general with weight w_general. The total fusion evaluation is then:
C = C_target·w_target + C_important·w_important + C_general·w_general
(w_target + w_important + w_general = 1)
In general, the importance of the target exceeds that of the important background, which in turn exceeds that of the general background: w_target > w_important > w_general. Typical values are w_target = 0.6, w_important = 0.3, w_general = 0.1.

Claims (6)

1. A region-based compressed sensing image fusion method, characterized in that it comprises the following steps:
Step 1: compress each image so that it gains a degree of confidentiality and a file size suitable for transmission;
Step 2: partition each image into regions so that it has an intelligent representation convenient for subsequent intelligent fusion;
Step 3: perform joint region partitioning, combining the region-partitioning results of the individual images into a joint partition;
Step 4: fuse the compressed sensing images according to the region partition, so that the fused compressed sensing image is intelligent, namely continuous and smooth within each region;
Step 5: apply the inverse transform to the fused compressed sensing image so that it is convenient for the human eye to observe and the integrity of the signal is guaranteed.
2. The region-based compressed sensing image fusion method according to claim 1, characterized in that step 1 includes a description of the method by which the image is compressively sensed.
3. The region-based compressed sensing image fusion method according to claim 1, characterized in that step 2 includes a description of the method for partitioning the original image into regions.
4. The region-based compressed sensing image fusion method according to claim 1, characterized in that step 3 includes combining the partitioned regions of the images to form a joint region partition.
5. The region-based compressed sensing image fusion method according to claim 1, characterized in that step 4 includes a description of the method for fusing the images according to the joint region partition.
6. The region-based compressed sensing image fusion method according to claim 1, characterized in that step 5 includes applying the inverse compressed sensing transform to the fused image to restore its original appearance.
CN2012100346621A 2012-02-16 2012-02-16 Region-based compressed sensing image fusion method Pending CN102609919A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2012100346621A CN102609919A (en) 2012-02-16 2012-02-16 Region-based compressed sensing image fusion method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2012100346621A CN102609919A (en) 2012-02-16 2012-02-16 Region-based compressed sensing image fusion method

Publications (1)

Publication Number Publication Date
CN102609919A true CN102609919A (en) 2012-07-25

Family

ID=46527265

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2012100346621A Pending CN102609919A (en) 2012-02-16 2012-02-16 Region-based compressed sensing image fusion method

Country Status (1)

Country Link
CN (1) CN102609919A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105118053A * 2015-08-06 2015-12-02 浙江科技学院 Full-reference objective image quality evaluation method based on compressed sensing
CN106952246A * 2017-03-14 2017-07-14 北京理工大学 Visible-light/infrared image enhancement and color fusion method based on visual attention characteristics

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101002682A * 2007-01-19 2007-07-25 哈尔滨工程大学 Method for retrieving and matching hand-back vein features for identity recognition
CN101621634A * 2009-07-24 2010-01-06 北京工业大学 Method for stitching large-scale video with separated dynamic foreground
KR20110084025A * 2010-01-15 2011-07-21 삼성전자주식회사 Apparatus and method for image fusion
CN101546428B * 2009-05-07 2011-08-17 西北工业大学 Region-segmentation-based fusion of infrared and visible-light image sequences

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101002682A * 2007-01-19 2007-07-25 哈尔滨工程大学 Method for retrieving and matching hand-back vein features for identity recognition
CN101546428B * 2009-05-07 2011-08-17 西北工业大学 Region-segmentation-based fusion of infrared and visible-light image sequences
CN101621634A * 2009-07-24 2010-01-06 北京工业大学 Method for stitching large-scale video with separated dynamic foreground
KR20110084025A * 2010-01-15 2011-07-21 삼성전자주식회사 Apparatus and method for image fusion

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LI TAO et al.: "A multi-sensor image fusion and enhancement system for assisting drivers in poor lighting conditions", Applied Imagery and Pattern Recognition Workshop, 2005. Proceedings. 34th, 1 December 2005 (2005-12-01), pages 1-6 *
袁晓辉 (YUAN Xiaohui) et al.: "Target image segmentation based on morphological filtering and the watershed algorithm" [基于形态学滤波和分水线算法的目标图像分割], Journal of Data Acquisition and Processing [《数据采集与处理》], vol. 18, no. 4, 15 December 2003 (2003-12-15), pages 455-459 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105118053A * 2015-08-06 2015-12-02 浙江科技学院 Full-reference objective image quality evaluation method based on compressed sensing
CN105118053B * 2015-08-06 2018-02-23 浙江科技学院 Full-reference image quality assessment method based on compressed sensing
CN106952246A * 2017-03-14 2017-07-14 北京理工大学 Visible-light/infrared image enhancement and color fusion method based on visual attention characteristics

Similar Documents

Publication Publication Date Title
CN103902976B Pedestrian detection method based on infrared images
CN101872473B Multiscale image natural color fusion method and device based on over-segmentation and optimization
CN101950426B Vehicle relay tracking method in multi-camera scenes
CN103578110B Multiband high-resolution remote sensing image segmentation method based on gray-level co-occurrence matrices
CN105321172A SAR, infrared and visible light image fusion method
CN103164695B Fruit identification method based on multi-source image information fusion
CN104715238A Pedestrian detection method based on multi-feature fusion
CN103440499B Real-time detection and tracking of traffic shock waves based on information fusion
CN104966286A 3D video saliency detection method
CN109410171B Target saliency detection method for rainy images
CN104517095B Head segmentation method based on depth images
CN104376551A Color image segmentation method integrating region growing and edge detection
CN102637297A Visible light and infrared image fusion method based on the Curvelet transform
CN101271578A Depth sequence generation method for converting planar video into stereo video
CN101527043B Video picture segmentation method based on moving-target contour information
CN105389797A Small-object detection method for unmanned aerial vehicle video based on super-resolution reconstruction
CN104616274A Multi-focus image fusion algorithm based on salient region extraction
CN111161160B Foggy-weather obstacle detection method and device, electronic equipment and storage medium
CN104867133A Fast stepped stereo matching method
CN104517317A Three-dimensional reconstruction method for vehicle-borne infrared images
CN105894513B Remote sensing image change detection method and system accounting for spatio-temporal change of imaged objects
CN102915523A Improved wavelet-transform remote-sensing image fusion method and system
CN104408746A Passenger flow statistics system based on depth information
CN106887002B Infrared image sequence saliency detection method
CN104182983B Highway surveillance video definition detection method based on corner features

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20120725