CN109410160B - Infrared polarization image fusion method based on multi-feature and feature difference driving - Google Patents
Infrared polarization image fusion method based on multi-feature and feature difference driving
- Publication number
- CN109410160B (application CN201811180813.8A)
- Authority
- CN
- China
- Prior art keywords
- image
- feature
- local
- images
- polarization
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T5/50 — Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction (G06T — Image data processing or generation, in general)
- G06T2207/10024 — Color image (G06T2207/10 — Image acquisition modality)
- G06T2207/10048 — Infrared image (G06T2207/10 — Image acquisition modality)
- G06T2207/20192 — Edge enhancement; Edge preservation (G06T2207/20 — Special algorithmic details)
- G06T2207/20221 — Image fusion; Image merging (G06T2207/20 — Special algorithmic details)
Abstract
The invention provides an infrared polarization image fusion method based on multi-feature and feature difference driving, comprising the following steps: represent the polarization of light by a Stokes vector, and calculate a polarization degree image P and a polarization angle image R; linearly weight the polarization angle image R and the image U to obtain an image R'; calculate the unique portions of the images R', I and P, excluding their common portion, denoted R1, I1 and P1; map P1, I1 and R1 to the R, G and B channels of RGB space to obtain an RGB image, convert the RGB image into a YUV image, and extract the brightness component Y; fuse the images I1 and P1 by a method based on multi-feature separation to obtain F; replace Y with F to obtain a replaced YUV image, and inversely transform the replaced YUV image to obtain an RGB image, which is the polarization image fusion result. Multiple polarization images of the infrared polarization image are fused, so that the fused image scene is richer and camouflaged targets can be identified. The invention is applied to the field of computer vision.
Description
Technical Field
The invention relates to the field of computer vision, in particular to an infrared polarization image fusion method based on multi-feature and feature difference driving.
Background
At present, with the rapid growth of demand for infrared imaging detection technology in application fields such as military, medical, security and earth observation, the traditional infrared detection technology falls short to some extent as guidance precision, environmental complexity and camouflage technology advance. A traditional infrared imaging system mainly images the infrared radiation intensity of a scene, which is chiefly related to the temperature, radiance and similar properties of the scene. When a noise source with the same temperature is placed around a target object, an existing thermal infrared imager cannot identify the target, so infrared imaging technology faces serious limitations and challenges.
Compared with traditional infrared imaging, polarization imaging of light can reduce the degrading influence of a complex scene and, at the same time, obtain structure and distance information of the scene. Infrared polarization imaging technology can detect not only the infrared intensity information of a target scene but also its polarization information; it can obviously improve the contrast between a target object and the natural background and can reflect the outline and details of the object, thereby improving the quality of the infrared image, and a useful signal can be detected against a complex background by polarization means.
Polarization is a basic characteristic of light that cannot be directly observed by human eyes, so polarization information must be displayed in some form that human eyes can perceive or computers can process. The polarization state of light is represented by a Stokes vector, which describes the polarization state and intensity of light through four Stokes parameters; all four are time-averaged light intensities, have the dimension of intensity, and can be directly detected by a detector. The Stokes vector S is represented as:

S = [I, Q, U, V]^T = [I1 + I3, I1 − I3, I2 − I4, V]^T

In the formula, I1, I2, I3 and I4 respectively represent the acquired light intensity images with polarization directions of 0°, 45°, 90° and 135°; I represents the total intensity of light; Q represents the intensity difference between horizontal and vertical polarization; U represents the intensity difference between the 45° and 135° polarization directions; V represents the intensity difference between the left- and right-hand circularly polarized components of the light.
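For illustration, a minimal Python sketch, not part of the patent text, of assembling the Stokes components from four linear-polarizer intensity images; the averaged total-intensity estimate and the zeroed V component are assumptions, since V cannot be measured with a linear polarizer alone:

```python
# Minimal sketch (assumptions: averaged total-intensity estimate, V = 0
# because a linear polarizer alone cannot measure the circular component).
import numpy as np

def stokes_from_polarizer_images(i0, i45, i90, i135):
    I = 0.5 * (i0 + i45 + i90 + i135)  # total intensity (averaged estimate)
    Q = i0 - i90                       # horizontal vs. vertical difference
    U = i45 - i135                     # 45° vs. 135° difference
    V = np.zeros_like(I)               # circular component, unmeasured here
    return I, Q, U, V
```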
The polarization angle image describes different surface orientations well, while the polarization degree image contains the polarization information of an object, represents artificial targets well, and improves the contrast between target and background; the total light intensity image reflects the intensity information of the scene. Existing polarization image fusion methods only consider image fusion algorithms built on a single difference feature and cannot effectively describe all the uncertain and randomly changing image features in an image, so valuable information is lost in the fusion process and fusion and identification fail; meanwhile, the fusion process struggles to take contrast, bright features and edge detail features into account simultaneously.
Disclosure of Invention
Aiming at the problem in the prior art that only a single-difference-feature image fusion algorithm is considered, so that all the uncertain and randomly changing image features in an image cannot be effectively described, the invention provides an infrared polarization image fusion method based on multi-feature and feature difference driving. Several polarization quantities of the infrared polarization image are fused, including the image Q, the image U, the image V, the total light intensity image I, the polarization degree image P and the polarization angle image R, so that the fused image scene is richer. Several uncertain and randomly changing image features are integrated simultaneously, so that multiple characteristics of the image are taken into account during fusion, the edge detail information of the image is enhanced, and the contrast of the image is improved; the fusion result is favorable for identifying camouflaged targets. Meanwhile, by taking the unique part of each polarization image during fusion, the problem of information redundancy among the polarization quantities is effectively solved.
The technical scheme adopted by the invention is as follows:
an infrared polarization image fusion method based on multi-feature and feature difference driving specifically comprises the following steps:
S1, representing the polarization of light by a Stokes vector, i.e., S = (I, Q, U, V), and calculating a polarization degree image P and a polarization angle image R from the S vector;
S2, linearly weighting the polarization angle image R and the image U to obtain an image R';
S3, calculating the unique parts, excluding the common part, among the image R', the total light intensity image I and the polarization degree image P, denoted R1, I1 and P1 respectively;
S4, mapping the images P1, I1 and R1 to the R channel, G channel and B channel of RGB space respectively to obtain an RGB image, converting the RGB image into a YUV image, and extracting the brightness component Y;
S5, fusing the images I1 and P1 through a method based on multi-feature separation to obtain a fusion result F;
S6, replacing the brightness component Y of step S4 with the fusion result F of step S5 to obtain a replaced YUV image, and then inversely transforming the replaced YUV image to obtain an RGB image, i.e., the final polarization image fusion result.
As a further improvement of the above technical solution, step S1 specifically includes:
S11, calculating the polarization degree image P:
P = sqrt(Q² + U²) / I
where Q represents the intensity difference between horizontal and vertical polarization, U represents the intensity difference between the 45° and 135° polarization directions, and I represents the total light intensity image;
S12, calculating the polarization angle image R:
R = (1/2)·arctan(U/Q).
As a further improvement of the above technical solution, step S3 specifically includes:
S31, calculating the common portion Co among the image R', the total light intensity image I and the polarization degree image P:
Co = R' ∩ I ∩ P = min{R', I, P};
S32, calculating the unique parts R1, I1 and P1 of the image R', the total light intensity image I and the polarization degree image P:
R1 = R' − Co, I1 = I − Co, P1 = P − Co.
As a further improvement of the above technical solution, step S4 specifically includes:
S41, mapping the images P1, I1 and R1 to the R channel, G channel and B channel of RGB space respectively to obtain an RGB image;
S42, converting the RGB image into a YUV image:
Y = 0.299R + 0.587G + 0.114B
U = −0.147R − 0.289G + 0.436B
V = 0.615R − 0.515G − 0.100B
S43, extracting the luminance component Y:
Y = 0.299R + 0.587G + 0.114B.
As a further improvement of the above technical solution, step S5 specifically includes:
S51, performing multi-feature separation on the images I1 and P1 to obtain the bright feature image, dark feature image and detail feature image of image I1 and the bright feature image, dark feature image and detail feature image of image P1;
S52, fusing the bright feature image of image I1 and the bright feature image of image P1 to obtain a bright feature fusion result FL;
S53, fusing the dark feature image of image I1 and the dark feature image of image P1 to obtain a dark feature fusion result FD;
S54, fusing the detail feature image of image I1 and the detail feature image of image P1 to obtain a detail feature fusion result FDIF;
S55, fusing FL, FD and FDIF to obtain the fusion result F.
As a further improvement of the above technical solution, in step S51, a multi-feature separation method based on the dark primary color theory is used to perform multi-feature separation on the images I1 and P1, which specifically includes:
S511, obtaining the dark primary color images of I1 and P1:

I1_dark(x) = min_{y∈N(x)} [ min_{C∈{R,G,B}} (I1)^C(y) ]
P1_dark(x) = min_{y∈N(x)} [ min_{C∈{R,G,B}} (P1)^C(y) ]

where I1_dark is the dark primary color image of I1, P1_dark is the dark primary color image of P1, C denotes one of the three color channels R, G, B of I1 or P1, N(x) denotes the pixels in the window area centered on pixel point x, and (I1)^C(y) and (P1)^C(y) denote the color channel maps of I1 and P1 respectively;
S512, negating the images I1 and P1 respectively to obtain the images I1_inv and P1_inv; the dark primary color images I1_dark and P1_dark are fused with I1_inv and P1_inv respectively under the rule of taking the smaller absolute value, obtaining the dark feature image D_I1 of image I1 and the dark feature image D_P1 of image P1;
S513, subtracting the dark feature images D_I1 and D_P1 from the corresponding dark primary color images I1_dark and P1_dark, obtaining the bright feature image L_I1 of image I1 and the bright feature image L_P1 of image P1;
S514, subtracting the dark primary color images I1_dark and P1_dark from the images I1 and P1 respectively, obtaining the detail feature image P_I1 of image I1 and the detail feature image P_P1 of image P1.
As a further improvement of the above technical solution, in step S52, the bright feature image of image I1 and the bright feature image of image P1 are fused using a matching method based on local region energy features, which specifically includes:
S521, calculating the Gaussian-weighted local energy of each bright feature image:

E_k(m,n) = Σ_{i=−t..t} Σ_{j=−t..t} w(i,j)·[L_k(m+i,n+j)]², k = I1 or P1

where E_k(m,n) represents the Gaussian-weighted local energy of the bright feature image L_I1 or L_P1 centered at point (m,n), w(i,j) is a Gaussian filter matrix, N is the size of the region, and t = (N−1)/2;
S522, obtaining the matching degree of the Gaussian-weighted local energies of the bright feature images L_I1 and L_P1:

M_E(m,n) = 2·Σ_{i=−t..t} Σ_{j=−t..t} w(i,j)·L_I1(m+i,n+j)·L_P1(m+i,n+j) / [E_I1(m,n) + E_P1(m,n)]

where M_E(m,n) represents the matching degree of the Gaussian-weighted local energies, and E_I1(m,n) and E_P1(m,n) represent the Gaussian-weighted local energies of L_I1 and L_P1 centered at point (m,n);
S523, fusing the bright feature images L_I1 and L_P1 through the Gaussian-weighted local energy and the matching degree: FL(m,n) is the fusion result of L_I1 and L_P1, and T_l is the bright feature fusion threshold for judging similarity; if M_E(m,n) < T_l, the regions of L_I1 and L_P1 centered on point (m,n) are dissimilar and the fusion result selects the image with the larger Gaussian-weighted local energy; otherwise, the fusion result is a coefficient-weighted average of the two.
As a further improvement of the above technical solution, in step S53, the dark feature image of image I1 and the dark feature image of image P1 are fused using a matching method based on local region weighted variance features, which specifically includes:
S531, calculating the local region weighted variance energy of each dark feature image:

V_k(m,n) = Σ_{i=−t..t} Σ_{j=−t..t} w(i,j)·[D_k(m+i,n+j) − m̄_k(m,n)]², k = I1 or P1

where V_k(m,n) represents the local region weighted variance energy of the dark feature image D_I1 or D_P1 centered at point (m,n), w(i,j) is a Gaussian filter matrix, N is the size of the region, t = (N−1)/2, and m̄_k(m,n) represents the local area average centered at point (m,n);
S532, obtaining the matching degree M_V(m,n) of the local region weighted variance energies of the dark feature images D_I1 and D_P1, defined analogously to M_E(m,n) in step S522, where V_I1(m,n) and V_P1(m,n) represent the local region weighted variance energies of D_I1 and D_P1 centered at point (m,n);
S533, fusing the two dark feature images D_I1 and D_P1 through the local region weighted variance energy and the matching degree: FD(m,n) is the fusion result of D_I1 and D_P1, and T_h is the dark feature fusion threshold for judging similarity; if M_V(m,n) < T_h, the regions of the two images centered on point (m,n) are dissimilar and the fusion result selects the image with the larger local region weighted variance energy; otherwise, the fusion result is a coefficient-weighted average of the two.
As a further improvement of the above technical solution, in step S54, the detail feature image of image I1 and the detail feature image of image P1 are fused driven by fuzzy logic and feature difference, which specifically includes:
S541, calculating the local gradient of each detail feature image:

T_k(m,n) = sqrt(Gx_k(m,n)² + Gy_k(m,n)²), k = I1 or P1

where T_k(m,n) represents the local gradient of the detail feature image P_I1 or P_P1 at pixel (m,n), and Gx_k and Gy_k respectively represent the horizontal and vertical edge images obtained by convolving the horizontal and vertical templates of the Sobel operator with the detail feature image;
S542, calculating the local region weighted variance energy of each detail feature image:

V_k(m,n) = Σ_{i=−t..t} Σ_{j=−t..t} w(i,j)·[P_k(m+i,n+j) − m̄_k(m,n)]², k = I1 or P1

where V_k(m,n) represents the local region weighted variance energy centered at point (m,n), w(i,j) is a Gaussian filter matrix, N is the size of the region, t = (N−1)/2, and m̄_k(m,n) represents the local area average centered at point (m,n);
S543, obtaining the local difference gradient ΔT(m,n), the local difference variance ΔV(m,n), the local gradient matching degree M_T(m,n) and the local weighted variance matching degree M_V1(m,n) of the detail feature images P_I1 and P_P1:

ΔT(m,n) = T_I1(m,n) − T_P1(m,n)
ΔV(m,n) = V_I1(m,n) − V_P1(m,n)

with M_T(m,n) and M_V1(m,n) defined analogously to the matching degrees of steps S52 and S53;
S544, obtaining a pixel-based decision map from the local difference gradient and the local difference variance, and a feature difference degree decision map from the local gradient matching degree and the local weighted variance matching degree, where PDG(m,n) is the pixel-based decision map, g1~g9 are decision maps whose value is 1 at the pixel positions (m,n) satisfying the corresponding condition and 0 elsewhere, DDG(m,n) is the feature difference degree decision map, and d1 and d2 are decision maps whose value is 1 at the pixel positions satisfying the corresponding condition and 0 elsewhere;
S545, judging the determined regions and uncertain regions of the detail feature images P_I1 and P_P1 according to the pixel-based decision map PDG(m,n) and the feature difference degree decision map DDG(m,n): g1, g2, g3, g4, g5, g6, g7 and g8 belong to the determined region, and g9 belongs to the uncertain region;
S546, fusing the determined regions of the detail feature images P_I1 and P_P1 using feature difference driving:

DIF(m,n) = ΔT(m,n)·ΔV(m,n)

where DIF(m,n) represents the fusion driving factor of the determined region and "·" represents the product of the values at corresponding pixel positions in the matrix;
S547, fusing the uncertain regions of the detail feature images P_I1 and P_P1 using fuzzy logic theory:

μ_{T∩V}(P_k(m,n)) = min[μ_T(P_k(m,n)), μ_V(P_k(m,n))], k = I1 or P1

where μ_T(P_k(m,n)) is the membership function of "the local gradient of detail feature image P_k is large", μ_V(P_k(m,n)) is the membership function of "the local weighted variance of detail feature image P_k is large", and μ_{T∩V}(P_k(m,n)) measures the importance of the pixel value at position (m,n) to the fused image of the uncertain region; the fused image of the uncertain region is the membership-weighted combination of P_I1 and P_P1, where ".*" represents the product of the values at corresponding pixel positions in the matrix and "./" represents the division of the values at corresponding pixel positions in the matrix;
S549, performing a consistency check on F_DIF(m,n):
a 3×3 window is moved over the image F_DIF(m,n) and the center pixel is verified against the surrounding pixels of the window; if the center pixel comes from one of the images P_I1 and P_P1 while s of its surrounding pixels, 4 < s < 8, come from the other image, the center pixel value is changed to the pixel value of the other image at that location; the window traverses the entire image F_DIF(m,n) to obtain the corrected F_DIF(m,n).
As a further improvement of the above technical solution, in step S55, the fusion result F is obtained by:
F=αFL+βFD+γFDIF
wherein α, β, and γ are fusion weight coefficients.
The invention has the beneficial technical effects that:
1. The method of the invention fuses several polarization quantities of the infrared polarization image, including the image Q, the image U, the image V, the total light intensity image I, the polarization degree image P and the polarization angle image R, so that the fused image scene is richer and camouflaged targets are easier to identify; meanwhile, by taking the unique part of each polarization image, excluding their common part, during fusion, the problem of information redundancy among the polarization quantities is effectively solved.
2. The method of the invention separates several features of the image and integrates several uncertain and randomly changing image features, so that multiple characteristics of the image are taken into account during fusion, the edge detail information of the image is enhanced, and the contrast of the image is improved.
Drawings
FIG. 1 is a flowchart of an infrared polarization image fusion method based on multi-feature and feature difference driving according to the present embodiment;
fig. 2 is a flowchart of a fusion method based on multi-feature separation according to this embodiment.
Detailed Description
In order to facilitate the practice of the invention, further description is provided below with reference to specific examples.
As shown in fig. 1, an infrared polarization image fusion method based on multi-feature and feature difference driving specifically includes the following steps:
S1. The polarization of light is represented by a Stokes vector, i.e., S = (I, Q, U, V), and the polarization degree image P and the polarization angle image R are calculated from the S vector:
In practical polarization measurement, a phase retarder is not needed; the Stokes parameters can be obtained simply by rotating a linear polarizer. The polarization degree image P and the polarization angle image R of the polarized light can be expressed as:

P = sqrt(Q² + U²) / I
R = (1/2)·arctan(U/Q)
S2. The polarization angle image R and the image U are linearly weighted to obtain an image R':
R' = (R + U)/2.
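A short Python sketch of steps S11, S12 and S2 under the formulas above; the eps guard against division by zero and the use of the quadrant-aware arctan2 are implementation assumptions:

```python
# Sketch of S11/S12/S2 (assumptions: eps guard, arctan2, and that R and U
# are normalized to a common value range before the linear weighting).
import numpy as np

def polarization_images(I, Q, U, eps=1e-8):
    P = np.sqrt(Q**2 + U**2) / (I + eps)  # polarization degree image (S11)
    R = 0.5 * np.arctan2(U, Q)            # polarization angle image (S12)
    return P, R

def weighted_angle_image(R, U):
    # S2: equal-weight linear combination R' = (R + U) / 2
    return (R + U) / 2.0
```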
S3. The unique parts, excluding the common part, among the image R', the total light intensity image I and the polarization degree image P are calculated and denoted R1, I1 and P1 respectively, which specifically includes:
S31, the image R', the total intensity image I and the degree-of-polarization image P contain redundant and complementary information among them; the common portion Co among the image R', the total light intensity image I and the polarization degree image P is calculated using the following formula:
Co = R' ∩ I ∩ P = min{R', I, P};
S32, calculating the unique parts R1, I1 and P1 of the image R', the total light intensity image I and the polarization degree image P:
R1 = R' − Co, I1 = I − Co, P1 = P − Co.
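A minimal sketch of S31/S32 in Python; the inputs are assumed to be co-registered arrays normalized to a common range, and the "unique part" is taken as the image minus the common part, consistent with the formulas above:

```python
# Sketch of S31/S32 (assumption: co-registered, commonly-normalized inputs).
import numpy as np

def unique_parts(Rp, I, P):
    Co = np.minimum(np.minimum(Rp, I), P)  # common part: pixel-wise minimum
    return Rp - Co, I - Co, P - Co         # R1, I1, P1
```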
S4. The images P1, I1 and R1 are mapped to the R channel, G channel and B channel of RGB space respectively to obtain an RGB image, the RGB image is converted into a YUV image, and the brightness component Y is extracted; in the YUV image, U and V are the color components and Y is the brightness component:
S41, the image P1 is mapped to the R channel of RGB space, the image I1 to the G channel, and the image R1 to the B channel, obtaining an RGB image;
S42, converting the RGB image into a YUV image:
Y = 0.299R + 0.587G + 0.114B
U = −0.147R − 0.289G + 0.436B
V = 0.615R − 0.515G − 0.100B
S43, extracting the luminance component Y:
Y=0.299R+0.587G+0.114B。
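A Python sketch of S41–S43; the Y row matches the patent, while the U and V rows use the standard BT.601 constants and are an assumption here, as is the helper name:

```python
# Sketch of S41-S43 (assumptions: BT.601 chroma constants, function name).
import numpy as np

def map_and_extract_luminance(P1, I1, R1):
    R, G, B = P1, I1, R1                    # S41: P1 -> R, I1 -> G, R1 -> B
    Y = 0.299 * R + 0.587 * G + 0.114 * B   # S43: brightness component
    U = -0.147 * R - 0.289 * G + 0.436 * B  # chroma (assumed constants)
    V = 0.615 * R - 0.515 * G - 0.100 * B   # chroma (assumed constants)
    return Y, U, V
```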
S5. Referring to FIG. 2, the images I1 and P1 are fused by a method based on multi-feature separation to obtain the fusion result F. The method first performs multi-feature separation on the images I1 and P1, then fuses the same feature of I1 and P1, and finally fuses the fusion results of the different features again, completing the fusion of I1 and P1. The method based on multi-feature separation in this embodiment separates the dark features, bright features and detail features of the images, integrates several uncertain and randomly changing image features — local region energy features, local region variance features and local region gradients — and considers the relationships between image pixels, so that bright and dark features are both taken into account in the fusion process, the edge detail information of the image is enhanced, and the contrast of the image is improved. The method specifically includes the following steps:
S51. A multi-feature separation method based on the dark primary color theory is used to perform multi-feature separation on the images I1 and P1 respectively, obtaining the bright feature image, dark feature image and detail feature image of image I1 and the bright feature image, dark feature image and detail feature image of image P1. The dark primary color (dark channel) was proposed by He et al. for estimating the transmissivity in the atmospheric scattering model and quickly defogging natural images. For a gray-level image, the dark primary color image contains the bright areas of the original image and embodies its low-frequency part, i.e., the areas with relatively smooth gray-level change are retained, making the difference between bright and dark features more prominent, while the information of local areas with relatively violent gray-level change and high contrast, especially the edge detail information, is lost. The obtaining process specifically includes:
S511, obtaining the dark primary color images of I1 and P1:

I1_dark(x) = min_{y∈N(x)} [ min_{C∈{R,G,B}} (I1)^C(y) ]
P1_dark(x) = min_{y∈N(x)} [ min_{C∈{R,G,B}} (P1)^C(y) ]

where I1_dark is the dark primary color image of I1, P1_dark is the dark primary color image of P1, C denotes one of the three color channels R, G, B of I1 or P1, N(x) denotes the pixels in the window area centered on pixel point x, and (I1)^C(y) and (P1)^C(y) denote the color channel maps of I1 and P1 respectively;
S512, negating the images I1 and P1 respectively to obtain the images I1_inv and P1_inv; the dark primary color images I1_dark and P1_dark are fused with I1_inv and P1_inv respectively under the rule of taking the smaller absolute value, obtaining the dark feature image D_I1 of image I1 and the dark feature image D_P1 of image P1;
S513, subtracting the dark feature images D_I1 and D_P1 from the corresponding dark primary color images I1_dark and P1_dark, obtaining the bright feature image L_I1 of image I1 and the bright feature image L_P1 of image P1;
S514, subtracting the dark primary color images I1_dark and P1_dark from the images I1 and P1 respectively, obtaining the detail feature image P_I1 of image I1 and the detail feature image P_P1 of image P1.
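A Python sketch of S511–S514 for single-channel (grayscale) inputs, where the channel minimum reduces to the image itself and the dark primary becomes a local minimum filter (erosion); the window size and the [0, 1] value range are assumptions:

```python
# Sketch of S511-S514 (assumptions: grayscale input in [0, 1], 15x15 window).
import numpy as np
import cv2

def multi_feature_separation(img, win=15):
    kernel = np.ones((win, win), np.uint8)
    dark_primary = cv2.erode(img, kernel)     # S511: dark primary color image
    inverted = 1.0 - img                      # S512: negated image
    # fuse under the smaller-absolute-value rule -> dark feature image
    dark = np.where(np.abs(dark_primary) <= np.abs(inverted),
                    dark_primary, inverted)
    bright = dark_primary - dark              # S513: bright feature image
    detail = img - dark_primary               # S514: detail feature image
    return dark, bright, detail
```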
S52. The bright feature image of image I1 and the bright feature image of image P1 are fused using a matching method based on local region energy features, obtaining the bright feature fusion result FL. The bright feature information concentrates on the bright areas of the original image and reflects its low-frequency components. The calculation process specifically includes:
S521, calculating the Gaussian-weighted local energy of each bright feature image:

E_k(m,n) = Σ_{i=−t..t} Σ_{j=−t..t} w(i,j)·[L_k(m+i,n+j)]², k = I1 or P1

where E_k(m,n) represents the Gaussian-weighted local energy of the bright feature image L_I1 or L_P1 centered at point (m,n), w(i,j) is a Gaussian filter matrix, N is the size of the region, and t = (N−1)/2;
S522, obtaining the matching degree of the Gaussian-weighted local energies of the bright feature images L_I1 and L_P1:

M_E(m,n) = 2·Σ_{i=−t..t} Σ_{j=−t..t} w(i,j)·L_I1(m+i,n+j)·L_P1(m+i,n+j) / [E_I1(m,n) + E_P1(m,n)]

where M_E(m,n) represents the matching degree of the Gaussian-weighted local energies, and E_I1(m,n) and E_P1(m,n) represent the Gaussian-weighted local energies of L_I1 and L_P1 centered at point (m,n);
S523, fusing the bright feature images L_I1 and L_P1 through the Gaussian-weighted local energy and the matching degree: FL(m,n) is the fusion result of L_I1 and L_P1, and T_l is the bright feature fusion threshold for judging similarity, taking a value between 0 and 0.5. If M_E(m,n) < T_l, the regions of L_I1 and L_P1 centered on point (m,n) are dissimilar and the fusion result selects the image with the larger Gaussian-weighted local energy; otherwise, the fusion result is a coefficient-weighted average of the two.
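A Python sketch of S521–S523. The cross-term matching expression and the coefficient-weighted-average weights follow the common region-energy fusion rule and are assumptions where the patent's formula images are not reproduced:

```python
# Sketch of S521-S523 (assumptions: matching-degree form, averaging weights).
import numpy as np
import cv2

def fuse_bright(LA, LB, N=7, Tl=0.4):
    g = cv2.getGaussianKernel(N, -1)
    w = g @ g.T                                    # Gaussian filter matrix
    EA = cv2.filter2D(LA**2, -1, w)                # weighted local energy of LA
    EB = cv2.filter2D(LB**2, -1, w)                # weighted local energy of LB
    M = 2 * cv2.filter2D(LA * LB, -1, w) / (EA + EB + 1e-8)  # matching degree
    wmin = 0.5 - 0.5 * (1 - M) / (1 - Tl)          # assumed averaging weight
    bigger_A = EA >= EB
    select = np.where(bigger_A, LA, LB)            # dissimilar: larger energy
    average = np.where(bigger_A, (1 - wmin) * LA + wmin * LB,
                                 (1 - wmin) * LB + wmin * LA)
    return np.where(M < Tl, select, average)
```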
S53. The dark feature image of image I1 and the dark feature image of image P1 are fused using a matching method based on local region weighted variance features, obtaining the dark feature fusion result FD. Dark feature images lack the bright areas of the source image but can still be regarded as an approximation of the source image: they contain the main energy of the image and embody its basic outline. The calculation process specifically includes:
S531, calculating the local region weighted variance energy of each dark feature image:

V_k(m,n) = Σ_{i=−t..t} Σ_{j=−t..t} w(i,j)·[D_k(m+i,n+j) − m̄_k(m,n)]², k = I1 or P1

where V_k(m,n) represents the local region weighted variance energy of the dark feature image D_I1 or D_P1 centered at point (m,n), w(i,j) is a Gaussian filter matrix, N is the size of the region, t = (N−1)/2, and m̄_k(m,n) represents the local area average centered at point (m,n);
S532, obtaining the matching degree M_V(m,n) of the local region weighted variance energies of D_I1 and D_P1, defined analogously to M_E(m,n) in step S522, where V_I1(m,n) and V_P1(m,n) represent the local region weighted variance energies of D_I1 and D_P1 centered at point (m,n);
S533, fusing the two dark feature images D_I1 and D_P1 through the local region weighted variance energy and the matching degree: FD(m,n) is the fusion result of D_I1 and D_P1, and T_h is the dark feature fusion threshold for judging similarity, taking a value between 0.5 and 1. If M_V(m,n) < T_h, the regions of the two images centered on point (m,n) are dissimilar and the fusion result selects the image with the larger local region weighted variance energy; otherwise, the fusion result is a coefficient-weighted average of the two.
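A Python sketch of S531–S533, mirroring the bright-feature rule but with Gaussian-weighted local variance in place of local energy; the covariance-based matching degree and the averaging weights are assumptions:

```python
# Sketch of S531-S533 (assumptions: matching-degree form, averaging weights).
import numpy as np
import cv2

def fuse_dark(DA, DB, N=7, Th=0.7):
    g = cv2.getGaussianKernel(N, -1)
    w = g @ g.T
    muA = cv2.filter2D(DA, -1, w)                  # local weighted means
    muB = cv2.filter2D(DB, -1, w)
    VA = cv2.filter2D(DA**2, -1, w) - muA**2       # weighted variance energy
    VB = cv2.filter2D(DB**2, -1, w) - muB**2
    cov = cv2.filter2D(DA * DB, -1, w) - muA * muB
    M = 2 * cov / (VA + VB + 1e-8)                 # matching degree (assumed)
    wmin = 0.5 - 0.5 * (1 - M) / (1 - Th)
    bigger_A = VA >= VB
    select = np.where(bigger_A, DA, DB)            # dissimilar: larger variance
    average = np.where(bigger_A, (1 - wmin) * DA + wmin * DB,
                                 (1 - wmin) * DB + wmin * DA)
    return np.where(M < Th, select, average)
```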
S54. The local gradient and the local variance reflect the detail information of an image well and express its definition. In order to retain as much of the detail information of the detail feature images as possible and to improve definition, the detail feature image of image I1 and the detail feature image of image P1 are fused driven by fuzzy logic and feature difference, obtaining the detail feature fusion result FDIF. The calculation process specifically includes:
S541, calculating the local gradient of each detail feature image:

T_k(m,n) = sqrt(Gx_k(m,n)² + Gy_k(m,n)²), k = I1 or P1

where T_k(m,n) represents the local gradient of the detail feature image P_I1 or P_P1 at pixel (m,n), and Gx_k and Gy_k respectively represent the horizontal and vertical edge images obtained by convolving the horizontal and vertical templates of the Sobel operator with the detail feature image;
S542, calculating the local region weighted variance energy of each detail feature image:

V_k(m,n) = Σ_{i=−t..t} Σ_{j=−t..t} w(i,j)·[P_k(m+i,n+j) − m̄_k(m,n)]², k = I1 or P1

where w(i,j) is a Gaussian filter matrix, N is the size of the region, t = (N−1)/2, and m̄_k(m,n) represents the local area average centered at point (m,n);
S543, obtaining the local difference gradient ΔT(m,n) = T_I1(m,n) − T_P1(m,n) and the local difference variance ΔV(m,n) = V_I1(m,n) − V_P1(m,n) of the detail feature images P_I1 and P_P1, together with the local gradient matching degree M_T(m,n) and the local weighted variance matching degree M_V1(m,n), defined analogously to the matching degrees of steps S52 and S53;
S544, a pixel-based decision map PDG(m,n) is obtained from the local difference gradient and the local difference variance, and a feature difference degree decision map DDG(m,n) is obtained from the local gradient matching degree and the local weighted variance matching degree; g1~g9 are decision maps whose value is 1 at the pixel positions (m,n) satisfying the corresponding condition and 0 elsewhere, and d1 and d2 are decision maps whose value is 1 at the pixel positions satisfying the corresponding condition and 0 elsewhere;
S545, the determined regions and uncertain regions of the detail feature images P_I1 and P_P1 are judged from the pixel-based decision map PDG(m,n) and the feature difference degree decision map DDG(m,n):
A determined region means that the PDG(m,n) or DDG(m,n) decision map can decide whether the gray value of a pixel is retained in the fused image; an uncertain region means that neither decision map can decide this. Specifically:
For the two cases g1 and g2, the local difference gradient ΔT(m,n) and the local difference variance ΔV(m,n) can both reflect whether the gray value of the corresponding pixel remains in the fused image, so g1 and g2 belong to the determined region;
For the two cases g3 and g4, the feature difference degree decision map DDG can determine the degree of difference of the local features of the two images; the difference feature with the larger degree of difference is then selected, which can reflect whether the gray value of the corresponding pixel remains in the fused image, so g3 and g4 belong to the determined region;
For the four cases g5, g6, g7 and g8, either one of the local difference gradient ΔT(m,n) and the local difference variance ΔV(m,n) can reflect whether the gray value of the corresponding pixel remains in the fused image, so g5, g6, g7 and g8 belong to the determined region;
For the case g9, neither decision map PDG nor DDG can reflect whether the gray value of the corresponding pixel remains in the fused image, so g9 belongs to the uncertain region.
S546, the determined regions of the detail feature images P_I1 and P_P1 are fused using feature difference driving:

DIF(m,n) = ΔT(m,n)·ΔV(m,n)

where DIF(m,n) represents the fusion driving factor of the determined region, expressed as the product of the local difference gradient ΔT(m,n) and the local difference variance ΔV(m,n), and "·" represents the product of the values at corresponding pixel positions in the matrix.
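A Python sketch of the determined-region fusion. The reading that a pixel is "determined" where ΔT and ΔV share a sign (DIF > 0), and that the source favoured by ΔT is then kept, is an assumption consistent with the driving factor DIF = ΔT·ΔV described above:

```python
# Sketch of S546 (assumption: determined where dT and dV agree in sign,
# keeping the source favoured by dT; DIF <= 0 pixels go to S547).
import numpy as np
import cv2

def local_gradient(img):
    gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)   # horizontal template
    gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)   # vertical template
    return np.sqrt(gx**2 + gy**2)

def local_weighted_variance(img, N=7):
    g = cv2.getGaussianKernel(N, -1)
    w = g @ g.T
    mu = cv2.filter2D(img, -1, w)
    return cv2.filter2D(img**2, -1, w) - mu**2

def fuse_determined(PA, PB):
    dT = local_gradient(PA) - local_gradient(PB)           # local diff. gradient
    dV = local_weighted_variance(PA) - local_weighted_variance(PB)
    DIF = dT * dV                                          # fusion driving factor
    determined = DIF > 0            # both features favour the same source
    fused = np.where(dT > 0, PA, PB)
    return fused, determined
```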
S547, the uncertain regions of the detail feature images P_I1 and P_P1 are fused using fuzzy logic theory. For the detail feature images P_I1 and P_P1, both whether the local gradient is large and whether the local weighted variance is large must be considered, and membership functions are constructed from this pair of relations. Suppose the membership functions of "the local gradient of detail feature image P_k is large" and "the local weighted variance of detail feature image P_k is large" are μ_T(P_k(m,n)) and μ_V(P_k(m,n)) respectively, with k = I1 or P1.
Using the intersection operation rule of fuzzy logic, the membership function of the pixel value at position (m,n) to the importance of the fused image of the uncertain region can be calculated for each detail feature image:

μ_{T∩V}(P_k(m,n)) = min[μ_T(P_k(m,n)), μ_V(P_k(m,n))], k = I1 or P1

The fused image of the uncertain regions of the two detail feature images is then the membership-weighted combination of P_I1 and P_P1, where ".*" represents the product of the values at corresponding pixel positions in the matrix and "./" represents the division of the values at corresponding pixel positions in the matrix.
S549, a consistency check is performed on F_DIF(m,n):
A 3×3 window is moved over the image F_DIF(m,n), and the center pixel is verified against the surrounding pixels of the window. If the center pixel comes from one of the images P_I1 and P_P1 while s (4 < s < 8) of its surrounding pixels come from the other image, the center pixel value is changed to the pixel value of the other image at that location. The window traverses the entire image F_DIF(m,n) to obtain the corrected F_DIF(m,n).
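A direct Python sketch of the consistency check, operating on a binary decision map recording which source each fused detail pixel came from:

```python
# Sketch of S549 (decision map: 1 = pixel taken from the first detail
# image, 0 = taken from the second).
import numpy as np

def consistency_check(choice_map):
    h, w = choice_map.shape
    out = choice_map.copy()
    for m in range(1, h - 1):
        for n in range(1, w - 1):
            win = choice_map[m - 1:m + 2, n - 1:n + 2]
            ones = win.sum() - choice_map[m, n]   # neighbours equal to 1
            zeros = 8 - ones                      # neighbours equal to 0
            # flip the centre when s neighbours (4 < s < 8) disagree with it
            if choice_map[m, n] == 1 and 4 < zeros < 8:
                out[m, n] = 0
            elif choice_map[m, n] == 0 and 4 < ones < 8:
                out[m, n] = 1
    return out
```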
S55. FL, FD and FDIF are fused to obtain the fusion result F:
F = αFL + βFD + γFDIF
In the formula, α, β and γ are fusion weight coefficients with values in the range [0,1]. In order to reduce supersaturation of the fused image and improve contrast, α is set to 1, β to 0.3 and γ to 1.
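The recombination itself is a one-liner; a sketch with the weights suggested in this embodiment, assuming FL, FD and FDIF are the fusion results of steps S52–S54:

```python
# Sketch of S55 with the embodiment's suggested weights (1, 0.3, 1).
def recombine(FL, FD, FDIF, alpha=1.0, beta=0.3, gamma=1.0):
    return alpha * FL + beta * FD + gamma * FDIF
```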
S6, replacing the luminance component Y in step S4 with the fusion result F in step S5 to obtain a replaced YUV image, and performing inverse transformation on the replaced YUV image to obtain an RGB image, that is, a final polarization image fusion result:
the brightness contrast of the image is reduced in the RGB color mapping process, so that the brightness component needs to be subjected to gray scale enhancement, and this implementation has already adopted the gray scale fusion image to replace the brightness component to enhance the brightness, that is, the fusion result F of step S5 replaces the brightness component Y, thereby obtaining the final polarization image fusion result.
The foregoing description of the preferred embodiments of the present invention has been included to describe the features of the invention in detail, and is not intended to limit the inventive concepts to the particular forms of the embodiments described, as other modifications and variations within the spirit of the inventive concepts will be protected by this patent. The subject matter of the present disclosure is defined by the claims, not by the detailed description of the embodiments.
Claims (10)
1. An infrared polarization image fusion method based on multi-feature and feature difference driving, characterized by comprising the following steps:
S1, representing the polarization of light by a Stokes vector, i.e., S = (I, Q, U, V), and calculating a polarization degree image P and a polarization angle image R from the S vector;
S2, linearly weighting the polarization angle image R and the image U to obtain an image R';
S3, calculating the unique parts, excluding the common part, among the image R', the total light intensity image I and the polarization degree image P, denoted R1, I1 and P1 respectively;
S4, mapping the images P1, I1 and R1 to the R channel, G channel and B channel of RGB space respectively to obtain an RGB image, converting the RGB image into a YUV image, and extracting the brightness component Y;
S5, fusing the images I1 and P1 through a method based on multi-feature separation to obtain a fusion result F;
S6, replacing the brightness component Y of step S4 with the fusion result F of step S5 to obtain a replaced YUV image, and then inversely transforming the replaced YUV image to obtain an RGB image, i.e., the final polarization image fusion result.
2. The infrared polarization image fusion method based on multi-feature and feature difference driving according to claim 1, wherein step S1 specifically includes:
S11, calculating the polarization degree image P:
P = sqrt(Q² + U²) / I
where Q represents the intensity difference between horizontal and vertical polarization, U represents the intensity difference between the 45° and 135° polarization directions, and I represents the total light intensity image;
S12, calculating the polarization angle image R:
R = (1/2)·arctan(U/Q).
3. The infrared polarization image fusion method based on multi-feature and feature difference driving according to claim 1, wherein step S3 specifically includes:
S31, calculating the common portion Co among the image R', the total light intensity image I and the polarization degree image P:
Co = R' ∩ I ∩ P = min{R', I, P};
S32, calculating the unique parts R1, I1 and P1 of the image R', the total light intensity image I and the polarization degree image P:
R1 = R' − Co, I1 = I − Co, P1 = P − Co.
4. The infrared polarization image fusion method based on multi-feature and feature difference driving according to claim 1, wherein step S4 specifically includes:
S41, mapping the images P1, I1 and R1 to the R channel, G channel and B channel of RGB space respectively to obtain an RGB image;
S42, converting the RGB image into a YUV image:
Y = 0.299R + 0.587G + 0.114B
U = −0.147R − 0.289G + 0.436B
V = 0.615R − 0.515G − 0.100B
S43, extracting the luminance component Y:
Y = 0.299R + 0.587G + 0.114B.
5. The infrared polarization image fusion method based on multi-feature and feature difference driving according to claim 1, wherein step S5 specifically includes:
S51, performing multi-feature separation on the images I1 and P1 to obtain the bright feature image, dark feature image and detail feature image of image I1 and the bright feature image, dark feature image and detail feature image of image P1;
S52, fusing the bright feature image of image I1 and the bright feature image of image P1 to obtain a bright feature fusion result FL;
S53, fusing the dark feature image of image I1 and the dark feature image of image P1 to obtain a dark feature fusion result FD;
S54, fusing the detail feature image of image I1 and the detail feature image of image P1 to obtain a detail feature fusion result FDIF;
S55, fusing FL, FD and FDIF to obtain the fusion result F.
6. The infrared polarization image fusion method based on multi-feature and feature difference driving according to claim 5, characterized in that in step S51, a multi-feature separation method based on the dark primary color theory is used to perform multi-feature separation on the images I1 and P1, specifically comprising:
S511, obtaining the dark primary color images of I1 and P1:
I1_dark(x) = min_{y∈N(x)} [ min_{C∈{R,G,B}} (I1)^C(y) ]
P1_dark(x) = min_{y∈N(x)} [ min_{C∈{R,G,B}} (P1)^C(y) ]
where I1_dark is the dark primary color image of I1, P1_dark is the dark primary color image of P1, C denotes one of the three color channels R, G, B of I1 or P1, N(x) denotes the pixels in the window area centered on pixel point x, and (I1)^C(y) and (P1)^C(y) denote the color channel maps of I1 and P1 respectively;
S512, negating the images I1 and P1 respectively to obtain the images I1_inv and P1_inv, and fusing the dark primary color images I1_dark and P1_dark with I1_inv and P1_inv respectively under the rule of taking the smaller absolute value, obtaining the dark feature image D_I1 of image I1 and the dark feature image D_P1 of image P1;
S513, subtracting the dark feature images D_I1 and D_P1 from the corresponding dark primary color images I1_dark and P1_dark, obtaining the bright feature image L_I1 of image I1 and the bright feature image L_P1 of image P1;
S514, subtracting the dark primary color images I1_dark and P1_dark from the images I1 and P1 respectively, obtaining the detail feature image P_I1 of image I1 and the detail feature image P_P1 of image P1.
7. The infrared polarization image fusion method based on multi-feature and feature difference driving according to claim 5, characterized in that in step S52, the bright feature image of image I1 and the bright feature image of image P1 are fused using a matching method based on local region energy features, specifically comprising:
S521, calculating the Gaussian-weighted local energy of each bright feature image:
E_k(m,n) = Σ_{i=−t..t} Σ_{j=−t..t} w(i,j)·[L_k(m+i,n+j)]², k = I1 or P1
where E_k(m,n) represents the Gaussian-weighted local energy of the bright feature image L_I1 or L_P1 centered at point (m,n), w(i,j) is a Gaussian filter matrix, N is the size of the region, and t = (N−1)/2;
S522, obtaining the matching degree of the Gaussian-weighted local energies of the bright feature images L_I1 and L_P1:
M_E(m,n) = 2·Σ_{i=−t..t} Σ_{j=−t..t} w(i,j)·L_I1(m+i,n+j)·L_P1(m+i,n+j) / [E_I1(m,n) + E_P1(m,n)]
where M_E(m,n) represents the matching degree of the Gaussian-weighted local energies, and E_I1(m,n) and E_P1(m,n) represent the Gaussian-weighted local energies of L_I1 and L_P1 centered at point (m,n);
S523, fusing the bright feature images L_I1 and L_P1 through the Gaussian-weighted local energy and the matching degree: FL(m,n) is the fusion result of L_I1 and L_P1, and T_l is the bright feature fusion threshold for judging similarity; if M_E(m,n) < T_l, the regions of L_I1 and L_P1 centered on point (m,n) are dissimilar and the fusion result selects the image with the larger Gaussian-weighted local energy; otherwise, the fusion result is a coefficient-weighted average of the two.
8. The infrared polarization image fusion method based on multi-feature and feature difference driving according to claim 5, characterized in that in step S53, the dark feature image of image I1 and the dark feature image of image P1 are fused using a matching method based on local region weighted variance features, specifically comprising:
S531, calculating the local region weighted variance energy of each dark feature image:
V_k(m,n) = Σ_{i=−t..t} Σ_{j=−t..t} w(i,j)·[D_k(m+i,n+j) − m̄_k(m,n)]², k = I1 or P1
where V_k(m,n) represents the local region weighted variance energy of the dark feature image D_I1 or D_P1 centered at point (m,n), w(i,j) is a Gaussian filter matrix, N is the size of the region, t = (N−1)/2, and m̄_k(m,n) represents the local area average centered at point (m,n);
S532, obtaining the matching degree M_V(m,n) of the local region weighted variance energies of the dark feature images D_I1 and D_P1, defined analogously to the matching degree of claim 7, where V_I1(m,n) and V_P1(m,n) represent the local region weighted variance energies of D_I1 and D_P1 centered at point (m,n);
S533, fusing the two dark feature images D_I1 and D_P1 through the local region weighted variance energy and the matching degree: FD(m,n) is the fusion result of D_I1 and D_P1, and T_h is the dark feature fusion threshold for judging similarity; if M_V(m,n) < T_h, the regions of the two images centered on point (m,n) are dissimilar and the fusion result selects the image with the larger local region weighted variance energy; otherwise, the fusion result is a coefficient-weighted average of the two.
9. The infrared polarization image fusion method based on multi-feature and feature difference driving according to claim 5, characterized in that in step S54, the detail feature image of image I1 and the detail feature image of image P1 are fused driven by fuzzy logic and feature difference, specifically comprising:
S541, calculating the local gradient of each detail feature image:
T_k(m,n) = sqrt(Gx_k(m,n)² + Gy_k(m,n)²), k = I1 or P1
where T_k(m,n) represents the local gradient of the detail feature image P_I1 or P_P1 at pixel (m,n), and Gx_k and Gy_k respectively represent the horizontal and vertical edge images obtained by convolving the horizontal and vertical templates of the Sobel operator with the detail feature image;
S542, calculating the local region weighted variance energy of each detail feature image:
V_k(m,n) = Σ_{i=−t..t} Σ_{j=−t..t} w(i,j)·[P_k(m+i,n+j) − m̄_k(m,n)]², k = I1 or P1
where V_k(m,n) represents the local region weighted variance energy centered at point (m,n), w(i,j) is a Gaussian filter matrix, N is the size of the region, t = (N−1)/2, and m̄_k(m,n) represents the local area average centered at point (m,n);
S543, obtaining the local difference gradient ΔT(m,n), the local difference variance ΔV(m,n), the local gradient matching degree M_T(m,n) and the local weighted variance matching degree M_V1(m,n) of the detail feature images P_I1 and P_P1:
ΔT(m,n) = T_I1(m,n) − T_P1(m,n)
ΔV(m,n) = V_I1(m,n) − V_P1(m,n)
with M_T(m,n) and M_V1(m,n) defined analogously to the matching degrees of claims 7 and 8;
S544, obtaining a pixel-based decision map from the local difference gradient and the local difference variance, and a feature difference degree decision map from the local gradient matching degree and the local weighted variance matching degree, where PDG(m,n) is the pixel-based decision map, g1~g9 are decision maps whose value is 1 at the pixel positions (m,n) satisfying the corresponding condition and 0 elsewhere, DDG(m,n) is the feature difference degree decision map, and d1 and d2 are decision maps whose value is 1 at the pixel positions satisfying the corresponding condition and 0 elsewhere;
S545, judging the determined regions and uncertain regions of the detail feature images P_I1 and P_P1 according to the pixel-based decision map PDG(m,n) and the feature difference degree decision map DDG(m,n): g1, g2, g3, g4, g5, g6, g7 and g8 belong to the determined region, and g9 belongs to the uncertain region;
S546, fusing the determined regions of the detail feature images P_I1 and P_P1 using feature difference driving:
DIF(m,n) = ΔT(m,n)·ΔV(m,n)
where DIF(m,n) represents the fusion driving factor of the determined region and "·" represents the product of the values at corresponding pixel positions in the matrix;
S547, fusing the uncertain regions of the detail feature images P_I1 and P_P1 using fuzzy logic theory:
μ_{T∩V}(P_k(m,n)) = min[μ_T(P_k(m,n)), μ_V(P_k(m,n))], k = I1 or P1
where μ_T(P_k(m,n)) is the membership function of "the local gradient of detail feature image P_k is large", μ_V(P_k(m,n)) is the membership function of "the local weighted variance of detail feature image P_k is large", and μ_{T∩V}(P_k(m,n)) measures the importance of the pixel value at position (m,n) to the fused image of the uncertain region; the fused image of the uncertain region is the membership-weighted combination of P_I1 and P_P1, where ".*" represents the product of the values at corresponding pixel positions in the matrix and "./" represents the division of the values at corresponding pixel positions in the matrix;
S549, performing a consistency check on F_DIF(m,n):
a 3×3 window is moved over the image F_DIF(m,n) and the center pixel is verified against the surrounding pixels of the window; if the center pixel comes from one of the images P_I1 and P_P1 while s of its surrounding pixels, 4 < s < 8, come from the other image, the center pixel value is changed to the pixel value of the other image at that location; the window traverses the entire image F_DIF(m,n) to obtain the corrected F_DIF(m,n).
10. The method for fusing infrared polarization images based on multi-feature and feature difference driving as claimed in claim 5, wherein in step S55, the fusion result F is obtained by:
F=αFL+βFD+γFDIF
wherein α, β, and γ are fusion weight coefficients.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201811180813.8A | 2018-10-09 | 2018-10-09 | Infrared polarization image fusion method based on multi-feature and feature difference driving
Publications (2)
Publication Number | Publication Date |
---|---|
CN109410160A CN109410160A (en) | 2019-03-01 |
CN109410160B true CN109410160B (en) | 2020-09-22 |
Family
ID=65467599
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811180813.8A Active CN109410160B (en) | 2018-10-09 | 2018-10-09 | Infrared polarization image fusion method based on multi-feature and feature difference driving |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109410160B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111292279B (en) * | 2020-01-17 | 2022-07-29 | 中国科学院上海技术物理研究所 | Polarization image visualization method based on color image fusion |
CN115035210B (en) * | 2022-08-10 | 2022-11-11 | 天津恒宇医疗科技有限公司 | PS-OCT visibility improving method and system based on polarization multi-parameter fusion |
CN116091361B (en) * | 2023-03-23 | 2023-07-21 | 长春理工大学 | Multi-polarization parameter image fusion method, system and terrain exploration monitor |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102682443A (en) * | 2012-05-10 | 2012-09-19 | 合肥工业大学 | Rapid defogging algorithm based on polarization image guide |
CN104835113A (en) * | 2015-04-30 | 2015-08-12 | 北京环境特性研究所 | Polarization image fusion method based on super-resolution image reconstruction |
CN104978724A (en) * | 2015-04-02 | 2015-10-14 | 中国人民解放军63655部队 | Infrared polarization fusion method based on multi-scale transformation and pulse coupled neural network |
CN105139347A (en) * | 2015-07-10 | 2015-12-09 | 中国科学院西安光学精密机械研究所 | Polarized image defogging method combined with dark channel prior principle |
CN105279747A (en) * | 2015-11-25 | 2016-01-27 | 中北大学 | Infrared polarization and light intensity image fusing method guided by multi-feature objective function |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9008457B2 (en) * | 2010-05-31 | 2015-04-14 | Personify, Inc. | Systems and methods for illumination correction of an image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |