CN109360179A - Image fusion method and device, and readable storage medium - Google Patents

Image fusion method and device, and readable storage medium

Info

Publication number
CN109360179A
Authority
CN
China
Prior art keywords
image
pixel
fusion
score map
convolutional neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811214128.2A
Other languages
Chinese (zh)
Other versions
CN109360179B (en)
Inventor
程永翔
刘坤
于晟焘
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Maritime University
Original Assignee
Shanghai Maritime University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Maritime University filed Critical Shanghai Maritime University
Priority to CN201811214128.2A priority Critical patent/CN109360179B/en
Publication of CN109360179A publication Critical patent/CN109360179A/en
Application granted granted Critical
Publication of CN109360179B publication Critical patent/CN109360179B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10048 Infrared image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Abstract

The invention discloses an image fusion method, a device and a readable storage medium, applied in the field of image processing. The image fusion method includes: first obtaining a registered first image and second image; outputting a first score map and a second score map after classification by a trained convolutional neural network; comparing corresponding pixels of the first score map and the second score map to obtain a binary map; obtaining a first fused image; computing a first structural similarity map and a second structural similarity map; obtaining a difference map of the first and second structural similarity maps; and obtaining a second fused image based on the difference map, the first image and the second image. With embodiments of the invention, a fused infrared and visible image is obtained through a dual-channel convolutional neural network. As a deep learning algorithm, the convolutional neural network selects image features automatically, remedying the single-feature limitation of feature extraction and avoiding the defects of existing infrared and visible image fusion methods.

Description

Image fusion method and device, and readable storage medium
Technical field
The present invention relates to the technical field of image fusion, and more particularly to an image fusion method, a device and a readable storage medium.
Background technique
Infrared sensors are sensitive to the thermal radiation of a target area: they work around the clock and can find targets despite poor illumination, but infrared images often lack rich detail and have blurred backgrounds. Visible images, by contrast, contain richer texture features and detail, but their imaging conditions demand good illumination. If the complementary information of the infrared image and the visible image is fused effectively, the resulting fused image carries richer information and is more robust, laying a good foundation for subsequent image segmentation, detection and recognition. Infrared and visible image fusion technology is therefore widely used in military and security surveillance.
Image fusion operates at three levels: pixel level, feature level and decision level. Pixel-level fusion is the most basic and preserves the richest image information. Methods based on multi-scale transform (MST) and sparse representation (SR) are the most common pixel-level approaches, but their feature extractors must be designed by hand and run inefficiently; moreover, the single type of image feature they extract does not adapt well to all kinds of complex scenes, so misjudgments occur easily in regions of uniform gray level.
Summary of the invention
Embodiments of the present invention provide an image fusion method, a device and a readable storage medium that obtain a fused infrared and visible image through a dual-channel convolutional neural network. As a deep learning algorithm, the convolutional neural network selects image features automatically, remedying the single-feature limitation of feature extraction and avoiding the defects of existing infrared and visible image fusion methods. The specific technical solution is as follows:
To achieve the above object, an embodiment of the invention provides an image fusion method, comprising:
registering an infrared image with a visible image to obtain a registered first image and second image, wherein the first image is a partial image of the infrared image and the second image is a partial image of the visible image;
inputting the first image and the second image into a trained convolutional neural network, which outputs, after classification, a first score map and a second score map;
comparing corresponding pixels of the first score map and the second score map to obtain a binary map;
obtaining a first fused image based on the binary map, the first image and the second image;
computing a first structural similarity map between the first image and the first fused image, and computing a second structural similarity map between the second image and the first fused image;
obtaining a difference map of the first structural similarity map and the second structural similarity map;
obtaining a second fused image based on the difference map, the first image and the second image.
In one implementation, the step of comparing corresponding pixels of the first score map and the second score map to obtain a binary map comprises:
for a first pixel on the first score map, judging whether its value is greater than that of a second pixel, wherein the first pixel is any pixel on the first score map and the second pixel is the pixel on the second score map corresponding to the first pixel;
if so, setting the value of a third pixel in the binary map to 1, and otherwise to 0, wherein the third pixel is the pixel in the binary map at the position corresponding to the first pixel.
In one implementation, the first fused image is given by:
F1(x, y) = D1(x, y)A(x, y) + (1 - D1(x, y))B(x, y)
where D1 is the binary map, A is the first image, B is the second image, F1 is the first fused image, and x, y are the coordinates of a pixel.
In one implementation, the step of obtaining the difference map of the first structural similarity map and the second structural similarity map comprises:
obtaining the difference between the first structural similarity map and the second structural similarity map;
taking the absolute value of the difference as the difference map of the first structural similarity map and the second structural similarity map.
In one implementation, the step of obtaining the second fused image based on the difference map, the first image and the second image comprises:
removing, based on the target region, the regions of the difference map unrelated to the target, to obtain a target feature extraction image;
obtaining the second fused image from the target feature extraction image, the first image and the second image.
In one implementation, the second fused image is given by:
F2(x, y) = D2(x, y)A(x, y) + (1 - D2(x, y))B(x, y)
where D2 is the target feature extraction image, A is the first image, B is the second image, x, y are the coordinates of a pixel, and F2 is the second fused image.
With the binary map as decision map, an initial fused image is obtained using the weighted fusion rule; finally, a saliency map of the target region is extracted using SSIM and fused again to obtain the final fused image.
In one implementation, the training of the convolutional neural network comprises:
extracting a first number of original images of size 32 × 32 from a first image set, and adding a second number of visible images from a second image set;
converting the original images and the visible images to grayscale and cutting all of them into 16 × 16 sub-blocks, forming a high-resolution image set;
applying Gaussian blur to the first number of original images from the first image set, adding a second number of infrared images from the second image set, and cutting the first number of original images and the second number of infrared images into 16 × 16 sub-blocks, forming a blurred image set;
training the convolutional neural network structure on the blurred image set and the high-resolution image set thus produced.
In one implementation, the convolutional neural network is a dual-channel network; each channel is a 5-layer convolutional neural network comprising 3 convolutional layers, 1 max-pooling layer and 1 fully connected layer, and the final output layer is a softmax classifier.
In addition, an embodiment of the invention further provides an image fusion device, comprising:
a registration module, configured to register an infrared image with a visible image to obtain a registered first image and second image, wherein the first image is a partial image of the infrared image and the second image is a partial image of the visible image;
a classification module, configured to input the first image and the second image into a trained convolutional neural network, which outputs, after classification, a first score map and a second score map;
a comparison module, configured to compare corresponding pixels of the first score map and the second score map to obtain a binary map;
a first fusion module, configured to obtain a first fused image based on the binary map, the first image and the second image;
a computing module, configured to compute a first structural similarity map between the first image and the first fused image, and a second structural similarity map between the second image and the first fused image;
an obtaining module, configured to obtain a difference map of the first structural similarity map and the second structural similarity map;
a second fusion module, configured to obtain a second fused image based on the difference map, the first image and the second image.
An embodiment further provides a readable storage medium on which a computer program is stored; when executed by a processor, the program implements the steps of any of the image fusion methods above.
With the image fusion method, device and readable storage medium provided by embodiments of the invention, a fused infrared and visible image is obtained through a convolutional neural network that selects image features automatically, remedying the single-feature limitation of feature extraction and avoiding the defects of existing infrared and visible image fusion methods. Because binary segmentation does not divide the target region from the background region with complete accuracy, shadows can appear in the later fused image; a salient target region map is therefore obtained from the difference between the structural similarities of the infrared and visible source images with the initial fused image, and a second fusion step is taken to improve fused image quality. The saliency-based fusion preserves the integrity of the salient target region and improves the visual quality of the fused image, better serving subsequent image understanding and recognition.
Brief description of the drawings
Fig. 1 is a schematic flowchart of the image fusion method provided by an embodiment of the present invention;
Fig. 2 is a first effect diagram provided by an embodiment of the present invention;
Fig. 3 is a second effect diagram provided by an embodiment of the present invention;
Fig. 4 is a third effect diagram provided by an embodiment of the present invention;
Fig. 5 is a fourth effect diagram provided by an embodiment of the present invention;
Fig. 6 is a fifth effect diagram provided by an embodiment of the present invention;
Fig. 7 is a sixth effect diagram provided by an embodiment of the present invention;
Fig. 8 is a seventh effect diagram provided by an embodiment of the present invention;
Fig. 9 is an eighth effect diagram provided by an embodiment of the present invention;
Fig. 10 is a ninth effect diagram provided by an embodiment of the present invention;
Fig. 11 is a tenth effect diagram provided by an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will now be described clearly and completely with reference to the accompanying drawings. The described embodiments are obviously only a part of the embodiments of the invention, not all of them. All other embodiments obtained by those of ordinary skill in the art from the embodiments of the invention without creative effort fall within the protection scope of the invention.
It should be noted that, in image processing, the gray level of an infrared image differs greatly from, and is sometimes opposite to, that of the visible image where the target's thermal radiation is strong, while the infrared background has low gray level and no pronounced thermal contrast. Compared with the visible image, the infrared image lacks spectral information but still contains detail. The fusion effect therefore improves only when as much information of the original images as possible is retained.
Referring to Fig. 1, an embodiment of the invention provides an image fusion method comprising the following steps:
S101: register an infrared image with a visible image to obtain a registered first image and second image, wherein the first image is a partial image of the infrared image and the second image is a partial image of the visible image.
It should be noted that geometric registration refers to the operation of geometrically transforming images (data) of the same area acquired at different times, in different wavebands or by different remote sensor systems so that corresponding image points coincide exactly in position and orientation. The specific registration process is prior art and is not repeated in the embodiments of the invention.
It is understood that the sliding window is a commonly used tool in image processing; its size may be 3 × 3, 5 × 5, 16 × 16, etc., and is not specifically limited in the embodiments of the invention.
Illustratively, taking the first image as an example, a 16 × 16 sliding window may start from the first pixel in the upper-left corner as the window's first central pixel and then be moved step by step, so that every pixel of the first image in turn gets the chance to be a central pixel; the same holds for the second image. On this principle, the structural similarity between any central pixel in the first image and the corresponding central pixel in the second image can be computed.
A sliding window of size 16 × 16 with stride 1 is defined. Sliding it from left to right and top to bottom over the registered infrared image and visible image yields the infrared sub-image blocks, i.e. the first image VA, as shown in Fig. 2, and the visible sub-image blocks, i.e. the second image VB, as shown in Fig. 3.
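As an illustration only, and not part of the disclosure, this 16 × 16, stride-1 sliding-window decomposition can be sketched in Python/NumPy as follows; the function name extract_patches and the variable names are hypothetical:

    import numpy as np

    def extract_patches(img, size=16, stride=1):
        # Slide a size x size window over a grayscale image from left to
        # right and top to bottom, collecting one patch per position.
        h, w = img.shape
        patches = []
        for y in range(0, h - size + 1, stride):
            for x in range(0, w - size + 1, stride):
                patches.append(img[y:y + size, x:x + size])
        return np.stack(patches)

    # va = extract_patches(ir_registered)   # infrared sub-blocks VA (Fig. 2)
    # vb = extract_patches(vis_registered)  # visible sub-blocks VB (Fig. 3)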
S102: input the first image and the second image into the trained convolutional neural network, which outputs, after classification, the first score map and the second score map.
It should be noted that the convolutional neural network is a kind of deep feed-forward network in machine learning that has been applied successfully to image recognition. As a feed-forward neural network whose artificial neurons respond to surrounding units, it can perform large-scale image processing and comprises convolutional layers and pooling layers.
In one implementation, the training of the convolutional neural network comprises: extracting a first number of original images of size 32 × 32 from a first image set, and adding a second number of visible images from a second image set; converting the original images and the visible images to grayscale and cutting them into 16 × 16 sub-blocks as the high-resolution image set; applying Gaussian blur to the first number of original images from the first image set, adding a second number of infrared images from the second image set, and cutting the first number of original images and the second number of infrared images into 16 × 16 sub-blocks as the blurred image set.
Illustratively, 2000 original clear images of size 32 × 32 are extracted from the Cifar-10 image set, 200 visible images from the TNO_Image_Fusion_Dataset image set are added, and all images are converted to grayscale and cut into 16 × 16 sub-blocks, forming the high-resolution image set. Next, Gaussian blur is applied to all the Cifar-10 sub-blocks (since the background of infrared images has lower resolution than visible images), and 200 infrared images from the TNO_Image_Fusion_Dataset image set (likewise cut entirely into 16 × 16 sub-blocks) are added, forming the blurred image set.
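A minimal sketch of this dataset preparation, assuming scikit-image for the Gaussian blur; the loaders load_cifar10_gray and load_tno_images are hypothetical placeholders for routines returning lists of 2-D grayscale arrays, and the blur strength sigma is an assumption, since the text does not specify it:

    import numpy as np
    from skimage.filters import gaussian

    def to_subblocks(imgs, size=16):
        # Cut each image into non-overlapping size x size sub-blocks.
        blocks = []
        for im in imgs:
            h, w = im.shape
            for y in range(0, h - size + 1, size):
                for x in range(0, w - size + 1, size):
                    blocks.append(im[y:y + size, x:x + size])
        return np.stack(blocks)

    clear = load_cifar10_gray(n=2000)             # 2000 clear 32 x 32 originals
    vis = load_tno_images(kind="visible", n=200)  # 200 visible images
    ir = load_tno_images(kind="infrared", n=200)  # 200 infrared images

    high_res_set = to_subblocks(clear + vis)      # high-resolution image set
    blurred = [gaussian(im, sigma=2.0) for im in clear]
    blurred_set = np.concatenate([to_subblocks(blurred), to_subblocks(ir)])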
A dual-channel network is used; each channel consists of a 5-layer convolutional neural network with 3 convolutional layers, 1 max-pooling layer and 1 fully connected layer, and the final output layer is a softmax classifier. The input image blocks are 16 × 16; the convolution kernels are 3 × 3 with stride 1; the max-pooling kernel is 2 × 2 with stride 2; the activation function is ReLU. Momentum and weight decay are set to 0.9 and 0.0005, and the learning rate to 0.0001.
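The following PyTorch sketch is one plausible reading of a single channel of that network; the channel widths (32, 32, 64) are assumptions not stated in the text, while the layer counts, kernel sizes, strides, activation and optimizer settings follow the description above:

    import torch
    import torch.nn as nn

    class Branch(nn.Module):
        # One channel: 3 convolutional layers, 1 max-pooling layer,
        # 1 fully connected layer; softmax is applied via the loss.
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 32, kernel_size=3, stride=1, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, kernel_size=3, stride=1, padding=1), nn.ReLU(),
                nn.MaxPool2d(kernel_size=2, stride=2),   # 16 x 16 -> 8 x 8
                nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1), nn.ReLU(),
            )
            self.fc = nn.Linear(64 * 8 * 8, 2)           # 2-way softmax output

        def forward(self, x):                            # x: (N, 1, 16, 16)
            return self.fc(self.features(x).flatten(1))

    model = Branch()
    opt = torch.optim.SGD(model.parameters(), lr=1e-4,
                          momentum=0.9, weight_decay=5e-4)
    loss_fn = nn.CrossEntropyLoss()   # cross-entropy over the softmax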
It is understood that the first image is input into the trained convolutional neural network, which scores each pixel of the first image; after all pixels of the first image have been processed, their scores form the first score map SA, and the second score map SB is obtained for the second image in the same way. The detailed process is shown in Fig. 4: the convolutional neural network outputs the processed image after two convolutions, max pooling, a further convolution and a full connection.
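One way to read this assembly of per-pixel scores into a map, assuming the Branch network sketched above, is that each 16 × 16 patch is scored and the score written at the patch's central pixel; this assembly scheme, the use of class index 1 as the positive class, and leaving border pixels without a full window at 0 are all assumptions:

    import numpy as np
    import torch
    import torch.nn.functional as F

    def score_map(model, img, size=16):
        # Score every 16 x 16 patch with the trained branch and place the
        # softmax score at the patch centre, yielding S_A or S_B.
        h, w = img.shape
        s = np.zeros((h, w), dtype=np.float32)
        model.eval()
        with torch.no_grad():
            for y in range(0, h - size + 1):
                for x in range(0, w - size + 1):
                    patch = torch.from_numpy(img[y:y + size, x:x + size]).float()
                    logits = model(patch.view(1, 1, size, size))
                    s[y + size // 2, x + size // 2] = \
                        F.softmax(logits, dim=1)[0, 1].item()
        return s

    # s_a = score_map(model, ir)    # first score map S_A
    # s_b = score_map(model, vis)   # second score map S_B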
S103: compare corresponding pixels of the first score map and the second score map to obtain a binary map.
Specifically, for a first pixel on the first score map, judge whether its value is greater than that of a second pixel, where the first pixel is any pixel on the first score map and the second pixel is the pixel on the second score map corresponding to the first pixel. If so, the value of the third pixel in the binary map is 1; otherwise it is 0, where the third pixel is the pixel in the binary map at the position corresponding to the first pixel.
For the binary map T, the first score map and the second score map are compared pixel by pixel: for any pixel at position (m, n), if the value of SA at that pixel is greater than the corresponding value of SB, the binary map takes the value 1 at (m, n); otherwise it takes the value 0. That is:
T(m, n) = 1 if SA(m, n) > SB(m, n), otherwise T(m, n) = 0
Illustratively, based on Fig. 2 and Fig. 3, the binary map obtained through the neural network shown in Fig. 4 is as shown in Fig. 5.
A binary map of target region and background region is thus obtained, in which the white area denotes the target region of the infrared image and the black area the background region; this map can serve as the decision map for image fusion.
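Read as code, the pixel-wise comparison is a single thresholded difference; a NumPy sketch, assuming the two score maps s_a and s_b are arrays of equal shape:

    import numpy as np

    def binary_map(s_a, s_b):
        # T(m, n) = 1 where S_A(m, n) > S_B(m, n), else 0; white marks the
        # infrared target region, black the background (Fig. 5).
        return (s_a > s_b).astype(np.float32)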
S104: obtain the first fused image based on the binary map, the first image and the second image.
Weighting the first image and the second image according to the binary map yields the initial fusion result; the aim of this initial fusion is to integrate the target region of the infrared image and the background region of the high-resolution visible image into one image. Based on Fig. 2, Fig. 3 and Fig. 5, the first fused image shown in Fig. 6 is obtained.
In one implementation, the first fused image is given by:
F1(x, y) = D1(x, y)A(x, y) + (1 - D1(x, y))B(x, y)
where D1 is the binary map, A is the first image, B is the second image, F1 is the first fused image, and x, y are the coordinates of a pixel.
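The weighted fusion rule is a per-pixel convex combination of the two source images; a minimal sketch, assuming ir and vis are float arrays of equal shape:

    def weighted_fusion(decision, a, b):
        # F(x, y) = D(x, y) * A(x, y) + (1 - D(x, y)) * B(x, y)
        return decision * a + (1.0 - decision) * b

    # f1 = weighted_fusion(binary_map(s_a, s_b), ir, vis)  # first fused image (Fig. 6)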
S105: compute the first structural similarity map between the first image and the first fused image, and the second structural similarity map between the second image and the first fused image.
There is strong correlation between the pixels of the infrared image and the visible image, and this correlation carries a large amount of structural information. The structural similarity index SSIM is an index used to assess image quality. From the standpoint of image composition, the structural similarity index defines structural information through luminance, contrast and structure, so as to reflect the structure of objects in the image. For two images C and D, the similarity measure function of the two images is defined as:
S(C, D) = [l(C, D)]^α [c(C, D)]^β [s(C, D)]^γ
l(C, D) = (2μaμb + C1) / (μa² + μb² + C1)
c(C, D) = (2σaσb + C2) / (σa² + σb² + C2)
s(C, D) = (σab + C3) / (σaσb + C3)
where μa, μb are the mean gray levels of images C and D, σa, σb are the standard deviations of images C and D, and σab is the covariance of images C and D; C1, C2, C3 are small positive constants whose purpose is to avoid instability when a denominator approaches 0; and α, β, γ > 0 are weights used to adjust the luminance, contrast and structure functions.
Accordingly, the first structural similarity map SAF between the first image A and the first fused image F1 is computed; illustratively, based on Fig. 2 and Fig. 6, the first structural similarity map shown in Fig. 7 is obtained. The second structural similarity map SBF between the second image B and the first fused image F1 is computed; based on Fig. 3 and Fig. 6, the second structural similarity map shown in Fig. 8 is obtained.
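A minimal sketch of the two structural similarity maps and the subsequent difference map, using scikit-image's structural_similarity with full=True to obtain a per-pixel SSIM map; the odd window size 15 approximates the 16 × 16 sliding window above (the library requires an odd window), and images are assumed scaled to [0, 1]:

    import numpy as np
    from skimage.metrics import structural_similarity

    def ssim_map(x, y, win=15):
        # full=True returns (mean SSIM, per-pixel SSIM map); only the map
        # is needed here.
        _, smap = structural_similarity(x, y, win_size=win,
                                        data_range=1.0, full=True)
        return smap

    # s_af = ssim_map(ir, f1)    # first structural similarity map S_AF (Fig. 7)
    # s_bf = ssim_map(vis, f1)   # second structural similarity map S_BF (Fig. 8)
    # s = np.abs(s_af - s_bf)    # difference map S (Fig. 9)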
S106: obtain the difference map of the first structural similarity map and the second structural similarity map.
In one implementation, the step of obtaining the difference map of the first structural similarity map and the second structural similarity map comprises: obtaining the difference between the first structural similarity map and the second structural similarity map, and taking the absolute value of that difference as the difference map. Specifically, the difference map of the first and second structural similarity maps is:
S=| SAF-SBF|
where SAF is the first structural similarity map, SBF the second structural similarity map, and S the difference map; illustratively, the difference map obtained from Fig. 7 and Fig. 8 is shown in Fig. 9.
S107: obtain the second fused image based on the difference map, the first image and the second image.
Since the first fused image obtained by the initial fusion does not divide the target region from the background region with complete accuracy, shadows appear in the later fused image; a second fusion step is therefore taken to improve fused image quality.
In one implementation, the step of obtaining the second fused image based on the difference map, the first image and the second image comprises: removing, based on the target region, the regions of the difference map unrelated to the target, to obtain a target feature extraction image; and obtaining the second fused image from the target feature extraction image, the first image and the second image.
Illustratively, based on the difference map shown in Fig. 9, the target feature extraction image shown in Fig. 10 is obtained.
In one implementation, the second fused image is given by:
F2(x, y) = D2(x, y)A(x, y) + (1 - D2(x, y))B(x, y)
where D2 is the target feature extraction image, A is the first image, B is the second image, x, y are the coordinates of a pixel, and F2 is the second fused image.
The second fusion can be regarded as infrared and visible image fusion based on saliency target extraction. The difference map S contains the salient region of the infrared image. Using morphological image processing, the regions of the difference map unrelated to the target are removed, yielding the target feature extraction map. It is understood that the target region is the infrared silhouette of the target person extracted by the infrared sensor; enhancing the saliency of the target region therefore improves the detail retained in the fused image. As shown in Fig. 11, the second fused image is obtained from Fig. 10 together with Fig. 2 and Fig. 3.
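The text does not pin down the morphological operations; the following scikit-image sketch is one plausible cleanup, in which the relative threshold, the opening with a disk of radius 3 and the small-object removal are all assumptions:

    import numpy as np
    from skimage import morphology

    def target_feature_map(s, rel_thresh=0.5):
        # Keep only the salient target region of the difference map S,
        # removing regions unrelated to the target (Fig. 10).
        mask = s > rel_thresh * s.max()              # assumed threshold
        mask = morphology.binary_opening(mask, morphology.disk(3))
        mask = morphology.remove_small_objects(mask, min_size=64)
        return mask.astype(np.float32)               # D2

    # f2 = weighted_fusion(target_feature_map(s), ir, vis)  # final fused image (Fig. 11)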
Using the idea of binary segmentation, a fused infrared and visible image is obtained through the dual-channel convolutional neural network. As a deep learning algorithm, the convolutional neural network selects image features automatically, remedying the single-feature limitation of feature extraction and avoiding the defects of existing infrared and visible image fusion methods (most require hand-designed feature extraction, and the single feature extracted is easily lost). Furthermore, because binary segmentation does not divide the target region from the background region with complete accuracy, shadows appear in the later fused image; a salient target region map is obtained from the difference between the structural similarities of the infrared and visible source images with the initial fused image, and a second fusion step is taken to improve fused image quality. The saliency-based fusion preserves the integrity of the salient target region and improves the visual quality of the fused image, better serving subsequent image understanding and recognition.
In addition, an embodiment of the invention further provides an image fusion device, comprising:
a registration module, configured to register an infrared image with a visible image to obtain a registered first image and second image, wherein the first image is a partial image of the infrared image and the second image is a partial image of the visible image;
a classification module, configured to input the first image and the second image into a trained convolutional neural network, which outputs, after classification, a first score map and a second score map;
a comparison module, configured to compare corresponding pixels of the first score map and the second score map to obtain a binary map;
a first fusion module, configured to obtain a first fused image based on the binary map, the first image and the second image;
a computing module, configured to compute a first structural similarity map between the first image and the first fused image, and a second structural similarity map between the second image and the first fused image;
an obtaining module, configured to obtain a difference map of the first structural similarity map and the second structural similarity map;
a second fusion module, configured to obtain a second fused image based on the difference map, the first image and the second image.
An embodiment further provides a readable storage medium on which a computer program is stored; when executed by a processor, the program implements the steps of any of the image fusion methods above.
The above are only preferred embodiments of the present invention and are not intended to limit its protection scope. Any modification, equivalent replacement or improvement made within the spirit and principles of the invention is included in the protection scope of the invention.

Claims (10)

1. An image fusion method, characterized by comprising:
registering an infrared image with a visible image to obtain a registered first image and second image, wherein the first image is a partial image of the infrared image and the second image is a partial image of the visible image;
inputting the first image and the second image into a trained convolutional neural network, which outputs, after classification, a first score map and a second score map;
comparing corresponding pixels of the first score map and the second score map to obtain a binary map;
obtaining a first fused image based on the binary map, the first image and the second image;
computing a first structural similarity map between the first image and the first fused image, and computing a second structural similarity map between the second image and the first fused image;
obtaining a difference map of the first structural similarity map and the second structural similarity map;
obtaining a second fused image based on the difference map, the first image and the second image.
2. The image fusion method according to claim 1, characterized in that the step of comparing corresponding pixels of the first score map and the second score map to obtain a binary map comprises:
for a first pixel on the first score map, judging whether its value is greater than that of a second pixel, wherein the first pixel is any pixel on the first score map and the second pixel is the pixel on the second score map corresponding to the first pixel;
if so, setting the value of a third pixel in the binary map to 1, and otherwise to 0, wherein the third pixel is the pixel in the binary map at the position corresponding to the first pixel.
3. The image fusion method according to claim 1 or 2, characterized in that the first fused image is given by:
F1(x, y) = D1(x, y)A(x, y) + (1 - D1(x, y))B(x, y)
where D1 is the binary map, A is the first image, B is the second image, F1 is the first fused image, and x, y are the coordinates of a pixel.
4. The image fusion method according to claim 1 or 2, characterized in that the step of obtaining the difference map of the first structural similarity map and the second structural similarity map comprises:
obtaining the difference between the first structural similarity map and the second structural similarity map;
taking the absolute value of the difference as the difference map of the first structural similarity map and the second structural similarity map.
5. The image fusion method according to claim 1 or 2, characterized in that the step of obtaining the second fused image based on the difference map, the first image and the second image comprises:
removing, based on the target region, the regions of the difference map unrelated to the target, to obtain a target feature extraction image;
obtaining the second fused image from the target feature extraction image, the first image and the second image.
6. The image fusion method according to claim 5, characterized in that the second fused image is given by:
F2(x, y) = D2(x, y)A(x, y) + (1 - D2(x, y))B(x, y)
where D2 is the target feature extraction image, A is the first image, B is the second image, x, y are the coordinates of a pixel, and F2 is the second fused image;
and in that, with the binary map as decision map, an initial fused image is obtained using the weighted fusion rule, and finally a saliency map of the target region is extracted using SSIM and fused again to obtain the final fused image.
7. The image fusion method according to claim 1, characterized in that the training of the convolutional neural network comprises:
extracting a first number of original images of size 32 × 32 from a first image set, and adding a second number of visible images from a second image set;
converting the original images and the visible images to grayscale and cutting them into 16 × 16 sub-blocks as a high-resolution image set;
applying Gaussian blur to the first number of original images from the first image set, adding a second number of infrared images from the second image set, and cutting the first number of original images and the second number of infrared images into 16 × 16 sub-blocks as a blurred image set.
8. The image fusion method according to claim 1 or 7, characterized in that the convolutional neural network is a dual-channel network, each channel consisting of a 5-layer convolutional neural network comprising 3 convolutional layers, 1 max-pooling layer and 1 fully connected layer, the final output layer being a softmax classifier.
9. An image fusion device, characterized by comprising:
a registration module, configured to register an infrared image with a visible image to obtain a registered first image and second image, wherein the first image is a partial image of the infrared image and the second image is a partial image of the visible image;
a classification module, configured to input the first image and the second image into a trained convolutional neural network, which outputs, after classification, a first score map and a second score map;
a comparison module, configured to compare corresponding pixels of the first score map and the second score map to obtain a binary map;
a first fusion module, configured to obtain a first fused image based on the binary map, the first image and the second image;
a computing module, configured to compute a first structural similarity map between the first image and the first fused image, and a second structural similarity map between the second image and the first fused image;
an obtaining module, configured to obtain a difference map of the first structural similarity map and the second structural similarity map;
a second fusion module, configured to obtain a second fused image based on the difference map, the first image and the second image.
10. A readable storage medium on which a computer program is stored, characterized in that, when executed by a processor, the program implements the steps of the image fusion method according to any one of claims 1 to 8.
CN201811214128.2A 2018-10-18 2018-10-18 Image fusion method and device and readable storage medium Active CN109360179B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811214128.2A CN109360179B (en) 2018-10-18 2018-10-18 Image fusion method and device and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811214128.2A CN109360179B (en) 2018-10-18 2018-10-18 Image fusion method and device and readable storage medium

Publications (2)

Publication Number Publication Date
CN109360179A true CN109360179A (en) 2019-02-19
CN109360179B CN109360179B (en) 2022-09-02

Family

ID=65345711

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811214128.2A Active CN109360179B (en) 2018-10-18 2018-10-18 Image fusion method and device and readable storage medium

Country Status (1)

Country Link
CN (1) CN109360179B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110415200A (en) * 2019-07-26 2019-11-05 西南科技大学 A kind of bone cement implant CT image layer interpolation method
CN110555820A (en) * 2019-08-28 2019-12-10 西北工业大学 Image fusion method based on convolutional neural network and dynamic guide filtering
CN112686274A (en) * 2020-12-31 2021-04-20 上海智臻智能网络科技股份有限公司 Target object detection method and device
CN113378009A (en) * 2021-06-03 2021-09-10 上海科技大学 Binary neural network quantitative analysis method based on binary decision diagram
CN114782296A (en) * 2022-04-08 2022-07-22 荣耀终端有限公司 Image fusion method, device and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101673396A (en) * 2009-09-07 2010-03-17 南京理工大学 Image fusion method based on dynamic object detection
CN103578092A (en) * 2013-11-11 2014-02-12 西北大学 Multi-focus image fusion method
CN103700075A (en) * 2013-12-25 2014-04-02 浙江师范大学 Tetrolet transform-based multichannel satellite cloud picture fusing method
CN103793896A (en) * 2014-01-13 2014-05-14 哈尔滨工程大学 Method for real-time fusion of infrared image and visible image
US8755597B1 (en) * 2011-02-24 2014-06-17 Exelis, Inc. Smart fusion of visible and infrared image data
CN106530266A (en) * 2016-11-11 2017-03-22 华东理工大学 Infrared and visible light image fusion method based on area sparse representation
CN106709477A (en) * 2017-02-23 2017-05-24 哈尔滨工业大学深圳研究生院 Face recognition method and system based on adaptive score fusion and deep learning
CN107194904A (en) * 2017-05-09 2017-09-22 西北工业大学 NSCT area image fusion methods based on supplement mechanism and PCNN
CN107578432A (en) * 2017-08-16 2018-01-12 南京航空航天大学 Merge visible ray and the target identification method of infrared two band images target signature

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101673396A (en) * 2009-09-07 2010-03-17 南京理工大学 Image fusion method based on dynamic object detection
US8755597B1 (en) * 2011-02-24 2014-06-17 Exelis, Inc. Smart fusion of visible and infrared image data
CN103578092A (en) * 2013-11-11 2014-02-12 西北大学 Multi-focus image fusion method
CN103700075A (en) * 2013-12-25 2014-04-02 浙江师范大学 Tetrolet transform-based multichannel satellite cloud picture fusing method
CN103793896A (en) * 2014-01-13 2014-05-14 哈尔滨工程大学 Method for real-time fusion of infrared image and visible image
CN106530266A (en) * 2016-11-11 2017-03-22 华东理工大学 Infrared and visible light image fusion method based on area sparse representation
CN106709477A (en) * 2017-02-23 2017-05-24 哈尔滨工业大学深圳研究生院 Face recognition method and system based on adaptive score fusion and deep learning
CN107194904A (en) * 2017-05-09 2017-09-22 西北工业大学 NSCT area image fusion methods based on supplement mechanism and PCNN
CN107578432A (en) * 2017-08-16 2018-01-12 南京航空航天大学 Merge visible ray and the target identification method of infrared two band images target signature

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
DEJAN DRAJIC et al.: "Adaptive Fusion of Multimodal Surveillance Image Sequences in Visual Sensor Networks", IEEE Transactions on Consumer Electronics *
YU LIU et al.: "Infrared and visible image fusion with convolutional neural networks", International Journal of Wavelets, Multiresolution and Information Processing *
YU LIU et al.: "Multi-focus image fusion with a deep convolutional neural network", Information Fusion *
ZHANG LEI et al.: "Fusion of infrared and visible light images using nonsubsampled Contourlet transform and region classification", Optics and Precision Engineering *
WANG JIAN et al.: "Image fusion method based on Contourlet", Microprocessors *
MA LIJUAN: "Research on image fusion technology based on multi-scale analysis", China Masters' Theses Full-text Database, Information Science and Technology Series *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110415200A (en) * 2019-07-26 2019-11-05 西南科技大学 A kind of bone cement implant CT image layer interpolation method
CN110415200B (en) * 2019-07-26 2022-03-08 西南科技大学 Method for interpolating among CT (computed tomography) image layers of bone cement implant
CN110555820A (en) * 2019-08-28 2019-12-10 西北工业大学 Image fusion method based on convolutional neural network and dynamic guide filtering
CN112686274A (en) * 2020-12-31 2021-04-20 上海智臻智能网络科技股份有限公司 Target object detection method and device
CN112686274B (en) * 2020-12-31 2023-04-18 上海智臻智能网络科技股份有限公司 Target object detection method and device
CN113378009A (en) * 2021-06-03 2021-09-10 上海科技大学 Binary neural network quantitative analysis method based on binary decision diagram
CN113378009B (en) * 2021-06-03 2023-12-01 上海科技大学 Binary decision diagram-based binary neural network quantitative analysis method
CN114782296A (en) * 2022-04-08 2022-07-22 荣耀终端有限公司 Image fusion method, device and storage medium

Also Published As

Publication number Publication date
CN109360179B (en) 2022-09-02

Similar Documents

Publication Publication Date Title
CN108446617B Rapid face detection method resistant to side-face interference
CN109299274B Natural scene text detection method based on a fully convolutional neural network
CN110348319B Face anti-counterfeiting method based on fusion of face depth information and edge images
CN107316307B Automatic segmentation method for traditional Chinese medicine tongue images based on a deep convolutional neural network
CN109360179A Image fusion method and device, and readable storage medium
US11887362B2 Sky filter method for panoramic images and portable terminal
CN108717524B Gesture recognition system based on a dual-camera mobile phone and artificial intelligence system
CN110210276A Motion track acquisition method and device, storage medium, and terminal
CN108268859A Facial expression recognition method based on deep learning
CN107808132A Scene image classification method fusing a topic model
CN109858466A Face key point detection method and device based on a convolutional neural network
CN112766160A Face replacement method based on a multi-stage attribute encoder and attention mechanism
CN106980852B Medicine identification system based on corner detection and matching, and identification method thereof
CN107609459A Face recognition method and device based on deep learning
CN106372581A Method for constructing and training a face recognition feature extraction network
CN108564120B Feature point extraction method based on a deep neural network
CN104850825A Facial image attractiveness score calculation method based on a convolutional neural network
CN108052884A Gesture recognition method based on an improved residual neural network
CN103902958A Face recognition method
CN109543632A Deep network pedestrian detection method guided by shallow feature fusion
CN107563388A Convolutional neural network object recognition method based on depth-information pre-segmentation
CN108537782A Building image matching method based on contour extraction and fusion
CN110263768A Face recognition method based on a deep residual network
CN107808376A Hand-raising detection method based on deep learning
CN113592911B Appearance-enhanced deep target tracking method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant