CN112330639A - Saliency detection method for color-thermal infrared image - Google Patents
Saliency detection method for color-thermal infrared image
- Publication number
- CN112330639A (application CN202011237976.2A)
- Authority
- CN
- China
- Prior art keywords
- image
- thermal infrared
- matrix
- color
- tensor
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/0004 — Image analysis; inspection of images, e.g. flaw detection; industrial image inspection
- G06T7/11 — Segmentation; edge detection; region-based segmentation
- G06T2207/10024 — Image acquisition modality: color image
- G06T2207/10048 — Image acquisition modality: infrared image
- G06T2207/20081 — Special algorithmic details: training; learning
- G06T2207/20221 — Special algorithmic details: image combination; image fusion; image merging
Abstract
A saliency detection method for color-thermal infrared images, belonging to the field of computer-vision image detection. First, the thermal infrared image is combined with the color image as an additional image channel, and the combined image is partitioned with a superpixel segmentation method. Then multiple types of feature vectors are extracted from each superpixel, graph models are established, and the original graph adjacency matrices are computed. Next, the multi-modal, multi-feature graph adjacency matrices are stacked into a tensor and low-rank tensor graph learning is performed. Finally, an adaptive co-ranking algorithm learns an optimal graph adjacency matrix from the multiple adjacency matrices, and superpixel saliency ranking is carried out from background seed points and from foreground seed points respectively to obtain the final saliency map. The invention adaptively combines image information from the color and thermal infrared modalities, thereby greatly improving the robustness and accuracy of saliency detection.
Description
Technical Field
The invention belongs to the field of computer-vision image detection, and in particular relates to a saliency detection method for color-thermal infrared images based on low-rank tensor learning and a co-ranking algorithm.
Background
Saliency detection aims to automatically locate the part of an image that most attracts attention, and has been widely used in a variety of computer-vision tasks. Over the past decades it has been explored extensively by researchers, and saliency models can be divided into two broad categories according to whether prior knowledge is built into the algorithm: bottom-up models and top-down models. Bottom-up models are driven mainly by sensory stimuli and rely on low-level image features such as color, texture, and position; their main drawbacks are that the detection result may not cover all salient regions and is easily disturbed by the background. Top-down models are task-driven and incorporate high-level knowledge such as environmental, semantic, and background priors. Although research on both has made great progress, saliency detection still faces many difficulties: in scenes with a cluttered background, in severe weather such as rain, snow, or fog, or in low-light conditions such as night, the imaging of a conventional visible-light camera is strongly degraded, which in turn degrades saliency detection. Introducing cameras other than the conventional visible-light camera, such as a thermal infrared camera, can therefore significantly improve the performance of image saliency detection.
A thermal infrared camera converts the temperature distribution of a target object into a visible image by detecting its infrared radiation and applying signal processing and photoelectric conversion. It has good penetrating power, can identify camouflage, is not disturbed by illumination and shadow, and is widely used for night vision and imaging in low-visibility environments. Its drawback is poor resolution: if the temperature differences across the measured object are small, the imaging contrast is very low, and detail such as the geometry and texture of the object cannot be retained. In contrast, images acquired by a visible-light camera have high resolution and contain rich geometric and texture detail for object detection, but are affected by lighting and environmental conditions. The two imaging modalities thus have complementary strengths and weaknesses, and combining them can greatly improve the saliency detection result.
Saliency detection for color-thermal infrared images is currently a very young research field, and only a few researchers have carried out related work. Li et al. proposed a multi-task manifold ranking algorithm for color-thermal infrared saliency detection; Tu et al. established a collaborative graph learning model; Tang et al. proposed a co-ranking algorithm based on graph theory. These algorithms typically suffer from two problems. First, the multiple types of features extracted from a pair of color-thermal infrared images usually carry the same image information; they interfere with each other, introduce redundancy that can obscure useful information, and add considerable computational complexity. Second, when the image of one of the two modalities is severely disturbed, the algorithms of Li, Tu, and Tang et al. often fail to achieve good detection results, because they cannot adapt well across the two modalities. It is therefore necessary to develop an efficient saliency detection algorithm for color-thermal infrared images.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a saliency detection method for color-thermal infrared images based on low-rank tensor learning and a co-ranking algorithm.
The technical scheme of the invention is as follows:
as shown in fig. 1 and 2, a saliency detection method for color-thermal infrared images comprises the following steps:
step 1, performing channel fusion on the color image and the thermal infrared image to form a color-thermal infrared image; the fusion superimposes 0.5 times the color image and 0.5 times the thermal infrared image, and the fused color-thermal infrared image is used for superpixel segmentation;
step 2, performing superpixel segmentation on the color-thermal infrared image obtained in the step 1 by using a SLIC superpixel algorithm to form n superpixel blocks, wherein labels of the superpixel segmentation are respectively mapped into the color image and the thermal infrared image for feature extraction;
step 3, extracting, for each superpixel block, the CIE-LAB color feature, the first-convolution-layer feature of the FCN-32S network, and the fifth-convolution-layer feature of the FCN-32S network from the color image and the thermal infrared image respectively, forming three feature matrices for the color image and three feature matrices for the thermal infrared image;
step 4, respectively establishing a graph model G (S, E) for the three feature matrixes of the color image and the three feature matrixes of the thermal infrared image, wherein S represents a node of the graph model, namely a feature vector corresponding to each super pixel; e represents an edge connected between nodes;
step 5, calculating the adjacency matrix A^{(r,q)} of each graph model, where r ∈ {1, 2, …, R} indexes the image modality, i.e. color or thermal infrared, and q ∈ {1, 2, …, Q} indexes the feature type of each modality image;
step 6, stacking the graph adjacency matrices A^{(r,q)} obtained in step 5 into the tensor variable 𝒜^{(r)} and constructing the low-rank tensor learning model

$$\min_{\mathcal{Z}^{(r)}}\;\frac{\alpha}{2}\,\bigl\|\mathcal{Z}^{(r)}-\mathcal{A}^{(r)}\bigr\|_F^2+\beta\,\bigl\|\mathcal{Z}^{(r)}\bigr\|_{\circledast},$$

where 𝒜^{(r)} is the input graph-adjacency tensor, 𝒵^{(r)} is the adjacency tensor obtained after model learning, ‖·‖_⊛ is the t-SVD tensor nuclear norm, ‖·‖_F is the Frobenius norm, and α and β are parameters of the model;
step 7, unstacking the adjacency tensor 𝒵^{(r)} obtained from the low-rank tensor learning model in step 6 into the adjacency matrices Z^{(r,q)} of the individual graph models;
step 8, using the superpixel blocks on the top, bottom, left, and right boundaries of the segmented color-thermal infrared image as background seed points, constructing the ranking background indication vectors y_t, y_b, y_l, y_r in turn, where y = [y_1, y_2, …, y_n]^T, y_i = 1 indicates that node i is a seed point, and y_i = 0 that it is not;
step 9, inputting the adjacency matrices Z^{(r,q)} obtained in step 7 and the background indication vectors y_t, y_b, y_l, y_r obtained in step 8 into the co-ranking algorithm and computing the corresponding saliency rankings f_t, f_b, f_l, f_r.

In the algorithm, f is the saliency ranking to be computed; I is the identity matrix; W = [w_1, w_2, …, w_n] is the adjacency matrix after graph learning, L_W = D_W − W is the Laplacian matrix of W with (D_W)_{ii} = Σ_j W_{ij}; L_Z^{(r,q)} is the Laplacian matrix of Z^{(r,q)}; tr(·) is the trace of a matrix; ‖·‖_F is the Frobenius norm of a matrix. Learning_1 is the first graph-learning term, in which λ and μ are adaptive parameters that control the weights between the different modality images and features and satisfy normalization constraints. Learning_2 is the second graph-learning term, in which each element of the matrix H^{(r,q)} is computed from T = D_Z^{-1} Z^{(r,q)}, and the vector d satisfies d^T 1_V = 1; θ, γ, δ, η, ε_1, and ε_2 are parameters of the co-ranking algorithm;
step 10, multiplying element-wise the four saliency rankings f_t, f_b, f_l, f_r computed in step 9 to obtain the first-stage saliency values;
step 11, thresholding the first-stage saliency values obtained in step 10: nodes above the mean first-stage saliency value are taken as foreground seed points and the rest as background seed points; a foreground indication vector is then constructed, and the co-ranking algorithm is run again to obtain the final saliency ranking values and hence the saliency map.
The invention has the beneficial effects that:
(1) unlike traditional saliency detection algorithms, the method combines the color image and the thermal infrared image for saliency detection, fully exploits the advantages of the two modalities so that they complement each other, substantially improves the saliency detection result, and can meet detection requirements in adverse conditions such as night, haze, and rainy or overcast weather;
(2) the method processes the adjacency matrices of the graph models with a low-rank tensor learning model, stacking the multiple multi-modal, multi-feature graph adjacency matrices into tensor form for joint optimization; this effectively removes redundant information and noise from them, better highlights foreground information, and reduces computational complexity;
(3) the co-ranking algorithm adaptively combines multi-modal, multi-feature image information and solves for the optimal adjacency matrix using two graph-learning forms. The main learning form is a matrix-trace form, which drives the learning from the multiple matrices toward one optimal matrix; the supplementary learning form is a matrix Frobenius-norm form, which adaptively judges the learning value of each feature and automatically adjusts the learning strength through the parameter d, complementing the main form. In addition, the graph-learning model and the saliency-ranking model are integrated into one algorithm, so that the two processes constrain each other and the saliency detection result is further improved.
Drawings
Fig. 1 is a flowchart of a saliency detection method for color-thermal infrared images based on low rank tensor learning and co-ranking algorithm according to the present invention.
Fig. 2 is a schematic diagram of a specific image detection process of the saliency detection method for color-thermal infrared images based on low rank tensor learning and co-ranking algorithm according to the present invention.
FIG. 3 is a graph of saliency detection results on a particular image by an embodiment of the method of the present invention; (a) color images of four different targets; (b) thermal infrared images of four different targets; (c) saliency maps of four different targets.
Fig. 4 is a comparison graph of detection results of the saliency detection method for a color-thermal infrared image based on the low rank tensor learning and the co-ranking algorithm and the existing saliency detection method.
Detailed Description
The following detailed description of embodiments of the invention refers to the accompanying drawings.
As shown in fig. 1, a saliency detection method for color-thermal infrared images comprises the following steps:
step 1, performing channel fusion on the color image shown in Fig. 3(a) and the thermal infrared image shown in Fig. 3(b) to form a color-thermal infrared image; the fusion superimposes 0.5 times the color image and 0.5 times the thermal infrared image, and the fused color-thermal infrared image is used for superpixel segmentation;
the image channel fusion is expressed as: 0.5 × color (RGB) image + 0.5 × thermal infrared (T) image = color-thermal infrared (RGB-T) image;
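The 0.5/0.5 superposition above can be sketched in NumPy as follows (a minimal sketch; the function name and the replication of a single-channel thermal image across three channels are assumptions, since the text only gives the weighted sum):

```python
import numpy as np

def fuse_rgb_t(rgb, t):
    """RGB-T = 0.5 * RGB + 0.5 * T; a single-channel thermal image is
    replicated across the three color channels before superposition."""
    rgb = rgb.astype(np.float64)
    t = t.astype(np.float64)
    if t.ndim == 2:                      # H x W thermal image
        t = np.stack([t] * 3, axis=-1)   # -> H x W x 3
    return 0.5 * rgb + 0.5 * t
```

The fused array is then handed to the superpixel segmentation of step 2.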
step 2, performing superpixel segmentation on the color-thermal infrared image obtained in the step 1 by using a SLIC superpixel algorithm to form n superpixel blocks, wherein labels of the superpixel segmentation are respectively mapped into the color image and the thermal infrared image for feature extraction;
step 3, extracting, for each superpixel block, the CIE-LAB color feature, the first-convolution-layer feature of the FCN-32S network, and the fifth-convolution-layer feature of the FCN-32S network from the color image and the thermal infrared image respectively, forming three feature matrices for the color image and three feature matrices for the thermal infrared image;
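Pooling a per-pixel feature map into one vector per superpixel, using the segmentation labels mapped into each modality, can be sketched as below. Mean pooling is an assumption here; the text does not specify how per-pixel features are aggregated within a superpixel.

```python
import numpy as np

def superpixel_features(feature_map, labels, n_sp):
    """Aggregate an H x W x C per-pixel feature map into an n_sp x C matrix,
    one mean feature vector per superpixel label in [0, n_sp)."""
    H, W, C = feature_map.shape
    flat_f = feature_map.reshape(-1, C)
    flat_l = labels.reshape(-1)
    sums = np.zeros((n_sp, C))
    np.add.at(sums, flat_l, flat_f)          # scatter-add pixel features
    counts = np.bincount(flat_l, minlength=n_sp).astype(np.float64)
    return sums / np.maximum(counts, 1.0)[:, None]
```

Applied once per feature type and per modality, this yields the three feature matrices for each modality described in step 3.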
step 4, respectively establishing a graph model G (S, E) for the three feature matrixes of the color image and the three feature matrixes of the thermal infrared image, wherein S represents a node of the graph model, namely a feature vector corresponding to each super pixel; e represents an edge connected between nodes;
step 5, calculating the adjacency matrix A^{(r,q)} of each graph model, where r ∈ {1, 2, …, R} indexes the image modality, i.e. color or thermal infrared, and q ∈ {1, 2, …, Q} indexes the feature type of each modality image;
The graph adjacency matrix A^{(r,q)} of each graph model is calculated as

$$A^{(r,q)}_{ij} = \exp\!\left(-\frac{\lVert x_i - x_j\rVert}{\sigma_r}\right),$$

where x_i is the feature vector of node i and σ_r ∈ {σ_1, σ_2} is a balance parameter, σ_1 being used for the adjacency-matrix calculation of the color image and σ_2 for that of the thermal infrared image;
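A fully connected sketch of this Gaussian affinity is given below; in practice such graphs are usually sparsified to spatially neighboring superpixels, so the full connectivity here is a simplifying assumption:

```python
import numpy as np

def graph_adjacency(X, sigma):
    """A_ij = exp(-||x_i - x_j|| / sigma) for a feature matrix X (n x d)."""
    diff = X[:, None, :] - X[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    A = np.exp(-dist / sigma)
    np.fill_diagonal(A, 0.0)   # no self-loops
    return A
```

One such matrix is built per modality r and per feature type q, giving the R × Q adjacency matrices stacked in step 6.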
step 6, stacking the graph adjacency matrices A^{(r,q)} obtained in step 5 into the tensor variable 𝒜^{(r)} and constructing the low-rank tensor learning model

$$\min_{\mathcal{Z}^{(r)}}\;\frac{\alpha}{2}\,\bigl\|\mathcal{Z}^{(r)}-\mathcal{A}^{(r)}\bigr\|_F^2+\beta\,\bigl\|\mathcal{Z}^{(r)}\bigr\|_{\circledast},$$

where 𝒜^{(r)} is the input graph-adjacency tensor, 𝒵^{(r)} is the adjacency tensor obtained after model learning, ‖·‖_⊛ is the t-SVD tensor nuclear norm, ‖·‖_F is the Frobenius norm, and α and β are parameters of the model;
The low-rank tensor learning model is solved in closed form by singular-value thresholding of the t-SVD:

$$\mathcal{Z}^{(r)} = \mathcal{U}^{(r)} * \mathcal{J}^{(r)} * \mathcal{V}^{(r)\mathsf{T}},$$

where 𝒜^{(r)} = 𝒰^{(r)} * 𝒮^{(r)} * 𝒱^{(r)T} is the t-SVD of the input tensor; 𝒥^{(r)} is a Fourier-domain diagonal tensor whose elements are obtained by soft-thresholding the elements of 𝒮^{(r)} with threshold τ; 𝒰^{(r)} and 𝒱^{(r)} are orthogonal tensors of size n × n × Q; 𝒮^{(r)} is a Fourier-domain diagonal tensor of size n × n × Q; the threshold is τ = β/α, and the third dimension is n_3 = Q;
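The shrinkage can be sketched as singular-value soft-thresholding of each frontal slice in the Fourier domain. This is a minimal sketch under the assumption that the threshold τ = β/α is applied uniformly to the Fourier-domain singular values:

```python
import numpy as np

def tensor_svt(A, tau):
    """Soft-threshold the t-SVD singular values of an n x n x Q tensor A:
    FFT along the third mode, per-slice SVD shrinkage, inverse FFT."""
    Af = np.fft.fft(A, axis=2)
    Zf = np.zeros_like(Af)
    for k in range(A.shape[2]):
        U, s, Vh = np.linalg.svd(Af[:, :, k], full_matrices=False)
        s = np.maximum(s - tau, 0.0)        # shrink singular values
        Zf[:, :, k] = (U * s) @ Vh
    return np.real(np.fft.ifft(Zf, axis=2))
```

With τ = 0 the operator reduces to the identity; larger τ removes small singular values, which is what suppresses redundancy and noise across the stacked adjacency matrices.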
step 7, unstacking the adjacency tensor 𝒵^{(r)} obtained from the low-rank tensor learning model in step 6 into the adjacency matrices Z^{(r,q)} of the individual graph models;
step 8, using the superpixel blocks on the top, bottom, left, and right boundaries of the segmented color-thermal infrared image as background seed points, constructing the ranking background indication vectors y_t, y_b, y_l, y_r in turn, where y = [y_1, y_2, …, y_n]^T, y_i = 1 indicates that node i is a seed point, and y_i = 0 that it is not;
step 9, inputting the adjacency matrices Z^{(r,q)} obtained in step 7 and the background indication vectors y_t, y_b, y_l, y_r obtained in step 8 into the co-ranking algorithm and computing the corresponding saliency rankings f_t, f_b, f_l, f_r.

In the algorithm, f is the saliency ranking to be computed; I is the identity matrix; W = [w_1, w_2, …, w_n] is the adjacency matrix after graph learning, L_W = D_W − W is the Laplacian matrix of W with (D_W)_{ii} = Σ_j W_{ij}; L_Z^{(r,q)} is the Laplacian matrix of Z^{(r,q)}; tr(·) is the trace of a matrix; ‖·‖_F is the Frobenius norm of a matrix. Learning_1 is the first graph-learning term, in which λ and μ are adaptive parameters that control the weights between the different modality images and features and satisfy normalization constraints. Learning_2 is the second graph-learning term, in which each element of the matrix H^{(r,q)} is computed from T = D_Z^{-1} Z^{(r,q)}, and the vector d satisfies d^T 1_V = 1; θ, γ, δ, η, ε_1, and ε_2 are parameters of the co-ranking algorithm;
the iterative solution form of the co-ranking algorithm is as follows:
step one, updating W:
where each element of the matrix F is computed as F_{ij} = (f_i − f_j)^2;
Step two, updating mu:
and step three, updating lambda:
step four, updating d:
where w_n is the nth row vector of the matrix W and z_n^{(r,q)} is the nth row vector of Z^{(r,q)}; this is a standard quadratic-programming form and is easily solved with the MATLAB optimization toolbox;
step five, updating f:
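With the learned graph held fixed, the f-update has the flavor of a regularized manifold-ranking solve. The closed form below, f = (I + θ L_W)^{-1} y on a single graph, is an assumption for illustration; the patent's exact multi-graph update is not reproduced here:

```python
import numpy as np

def rank_on_graph(W, y, theta=0.5):
    """Solve f = (I + theta * L_W)^{-1} y with L_W = D_W - W,
    propagating the seed indicator y over the graph W."""
    D = np.diag(W.sum(axis=1))
    L = D - W
    n = W.shape[0]
    return np.linalg.solve(np.eye(n) + theta * L, y)
```

Running this once per boundary indicator y_t, y_b, y_l, y_r yields the four rankings combined in step 10.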
step 10, multiplying element-wise the four saliency rankings f_t, f_b, f_l, f_r computed in step 9 to obtain the first-stage saliency values;
the first-stage saliency ranking is computed as s_1 = f_t ⊙ f_b ⊙ f_l ⊙ f_r, where ⊙ denotes the element-wise multiplication of two vectors;
step 11, thresholding the first-stage saliency values obtained in step 10: nodes above the mean first-stage saliency value are taken as foreground seed points and the rest as background seed points; a foreground indication vector is then constructed and the co-ranking algorithm is run again to obtain the final saliency ranking values, yielding the saliency map shown in Fig. 3(c).
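Steps 10-11 can be sketched directly (function names are illustrative; the product follows the element-wise multiplication of the four rankings stated above, and the seed rule is the mean threshold):

```python
import numpy as np

def first_stage_saliency(ft, fb, fl, fr):
    """Element-wise product of the four boundary saliency rankings."""
    return ft * fb * fl * fr

def foreground_seeds(s1):
    """Indicator vector: nodes above the mean first-stage saliency
    become foreground seed points (1), the rest background (0)."""
    return (s1 > s1.mean()).astype(np.float64)
```

The resulting indicator vector plays the role of y in the second co-ranking pass that produces the final saliency map.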
In this embodiment, the parameters of the color-thermal infrared image saliency detection method based on low-rank tensor learning and a co-ranking algorithm are set as follows:
as shown in fig. 3, the saliency detection result of the method of the present invention on a specific image is visually demonstrated. Fig. 3(a) is an input color image, fig. 3(b) is an input thermal infrared image, and fig. 3(c) is a saliency detection result obtained by the method of the present invention. The result shows that the method can effectively combine the multi-modal image information to detect the multi-modal image significance, and obtains good effect, thereby having good application value.
FIG. 4 compares the detection results of the method of the present invention with other currently advanced saliency detection methods. Fig. 4(a) shows the detection results of the MR method, fig. 4(b) the RBD method, fig. 4(c) the CA method, fig. 4(d) the RRWR method, fig. 4(e) the FCNN method, fig. 4(f) the DSS method, fig. 4(g) the MILPS method, fig. 4(h) the MTMR method, fig. 4(i) the method of the present invention, and fig. 4(j) the ground-truth saliency annotation. Compared with the other advanced algorithms, the method better extracts the salient region of the image and effectively suppresses the non-salient background region, further verifying its effectiveness for saliency detection on color-thermal infrared images.
Claims (4)
1. A saliency detection method for color-thermal infrared images, characterized in that it comprises the following steps:
step 1, performing channel fusion on the color image and the thermal infrared image to form a color-thermal infrared image; the fusion superimposes 0.5 times the color image and 0.5 times the thermal infrared image, and the fused color-thermal infrared image is used for superpixel segmentation;
step 2, performing superpixel segmentation on the color-thermal infrared image obtained in the step 1 by using a SLIC superpixel algorithm to form n superpixel blocks, wherein labels of the superpixel segmentation are respectively mapped into the color image and the thermal infrared image for feature extraction;
step 3, extracting, for each superpixel block, the CIE-LAB color feature, the first-convolution-layer feature of the FCN-32S network, and the fifth-convolution-layer feature of the FCN-32S network from the color image and the thermal infrared image respectively, forming three feature matrices for the color image and three feature matrices for the thermal infrared image;
step 4, respectively establishing a graph model G (S, E) for the three feature matrixes of the color image and the three feature matrixes of the thermal infrared image, wherein S represents a node of the graph model, namely a feature vector corresponding to each super pixel; e represents an edge connected between nodes;
step 5, calculating the adjacency matrix A^{(r,q)} of each graph model, where r ∈ {1, 2, …, R} indexes the image modality, i.e. color or thermal infrared, and q ∈ {1, 2, …, Q} indexes the feature type of each modality image;
step 6, stacking the graph adjacency matrices A^{(r,q)} obtained in step 5 into the tensor variable 𝒜^{(r)} and constructing the low-rank tensor learning model

$$\min_{\mathcal{Z}^{(r)}}\;\frac{\alpha}{2}\,\bigl\|\mathcal{Z}^{(r)}-\mathcal{A}^{(r)}\bigr\|_F^2+\beta\,\bigl\|\mathcal{Z}^{(r)}\bigr\|_{\circledast},$$

where 𝒜^{(r)} is the input graph-adjacency tensor, 𝒵^{(r)} is the adjacency tensor obtained after model learning, ‖·‖_⊛ is the t-SVD tensor nuclear norm, ‖·‖_F is the Frobenius norm, and α and β are parameters of the model;
step 7, unstacking the adjacency tensor 𝒵^{(r)} obtained from the low-rank tensor learning model in step 6 into the adjacency matrices Z^{(r,q)} of the individual graph models;
step 8, using the superpixel blocks on the top, bottom, left, and right boundaries of the segmented color-thermal infrared image as background seed points, constructing the ranking background indication vectors y_t, y_b, y_l, y_r in turn, where y = [y_1, y_2, …, y_n]^T, y_i = 1 indicates that node i is a seed point, and y_i = 0 that it is not;
step 9, inputting the adjacency matrices Z^{(r,q)} obtained in step 7 and the background indication vectors y_t, y_b, y_l, y_r obtained in step 8 into the co-ranking algorithm, and calculating the corresponding saliency rankings f_t, f_b, f_l, f_r;
step 10, multiplying element-wise the four saliency rankings f_t, f_b, f_l, f_r computed in step 9 to obtain the first-stage saliency values;
step 11, thresholding the first-stage saliency values obtained in step 10: nodes above the mean first-stage saliency value are taken as foreground seed points and the rest as background seed points; a foreground indication vector is then constructed, and the co-ranking algorithm is run again to obtain the final saliency ranking values and hence the saliency map.
2. The saliency detection method for color-thermal infrared images according to claim 1, characterized in that the graph adjacency matrix of each graph model in step 5 is calculated as

$$A^{(r,q)}_{ij} = \exp\!\left(-\frac{\lVert x_i - x_j\rVert}{\sigma_r}\right),$$

where x_i is the feature vector of node i and σ_r ∈ {σ_1, σ_2} is a balance parameter, σ_1 being used for the adjacency-matrix calculation of the color image and σ_2 for that of the thermal infrared image.
3. The saliency detection method for color-thermal infrared images according to claim 1, characterized in that the low-rank tensor learning model in step 6 is solved in closed form by singular-value thresholding of the t-SVD:

$$\mathcal{Z}^{(r)} = \mathcal{U}^{(r)} * \mathcal{J}^{(r)} * \mathcal{V}^{(r)\mathsf{T}},$$

where 𝒜^{(r)} = 𝒰^{(r)} * 𝒮^{(r)} * 𝒱^{(r)T} is the t-SVD of the input tensor; 𝒥^{(r)} is a Fourier-domain diagonal tensor whose elements are obtained by soft-thresholding the elements of 𝒮^{(r)} with threshold τ; 𝒰^{(r)} and 𝒱^{(r)} are orthogonal tensors of size n × n × Q; 𝒮^{(r)} is a Fourier-domain diagonal tensor of size n × n × Q; the threshold is τ = β/α, and the third dimension is n_3 = Q.
4. The saliency detection method for color-thermal infrared images according to claim 1, characterized in that in the co-ranking algorithm of step 9, f is the saliency ranking to be computed; I is the identity matrix; W = [w_1, w_2, …, w_n] is the adjacency matrix after graph learning, L_W = D_W − W is the Laplacian matrix of W with (D_W)_{ii} = Σ_j W_{ij}; L_Z^{(r,q)} is the Laplacian matrix of the matrix Z^{(r,q)}; tr(·) is the trace of a matrix; ‖·‖_F is the Frobenius norm of a matrix; Learning_1 is the first graph-learning term, in which λ and μ are adaptive parameters that control the weights between the different modality images and features and satisfy normalization constraints; Learning_2 is the second graph-learning term, in which each element of the matrix H^{(r,q)} is computed from T = D_Z^{-1} Z^{(r,q)}, and the vector d satisfies d^T 1_V = 1; θ, γ, δ, η, ε_1, and ε_2 are parameters of the co-ranking algorithm;
the iterative solution form of the co-ranking algorithm is as follows:
step one, updating W:
where each element of the matrix F is computed as F_{ij} = (f_i − f_j)^2;
Step two, updating mu:
and step three, updating lambda:
step four, updating d:
where w_n is the nth row vector of the matrix W and z_n^{(r,q)} is the nth row vector of Z^{(r,q)}; this is a standard quadratic-programming form and is easily solved with the MATLAB optimization toolbox;
step five, updating f:
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011237976.2A CN112330639A (en) | 2020-11-09 | 2020-11-09 | Significance detection method for color-thermal infrared image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011237976.2A CN112330639A (en) | 2020-11-09 | 2020-11-09 | Significance detection method for color-thermal infrared image |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112330639A true CN112330639A (en) | 2021-02-05 |
Family
ID=74316604
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011237976.2A Pending CN112330639A (en) | 2020-11-09 | 2020-11-09 | Significance detection method for color-thermal infrared image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112330639A (en) |
Non-Patent Citations (1)
Title |
---|
LIMING HUANG et al.: "RGB-T Saliency Detection via Low-Rank Tensor Learning and Unified Collaborative Ranking", IEEE Signal Processing Letters, vol. 27 * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113011438A (en) * | 2021-03-16 | 2021-06-22 | 东北大学 | Node classification and sparse graph learning-based bimodal image saliency detection method |
CN113011438B (en) * | 2021-03-16 | 2023-09-05 | 东北大学 | Bimodal image significance detection method based on node classification and sparse graph learning |
CN115240042A (en) * | 2022-07-05 | 2022-10-25 | 抖音视界有限公司 | Multi-modal image recognition method and device, readable medium and electronic equipment |
CN115240042B (en) * | 2022-07-05 | 2023-05-16 | 抖音视界有限公司 | Multi-mode image recognition method and device, readable medium and electronic equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105930868B (en) | A kind of low resolution airport target detection method based on stratification enhancing study | |
CN107578418B (en) | Indoor scene contour detection method fusing color and depth information | |
CN108537239B (en) | Method for detecting image saliency target | |
CN108121991B (en) | Deep learning ship target detection method based on edge candidate region extraction | |
CN108573276A (en) | A kind of change detecting method based on high-resolution remote sensing image | |
CN109376591B (en) | Ship target detection method for deep learning feature and visual feature combined training | |
CN108038846A (en) | Transmission line equipment image defect detection method and system based on multilayer convolutional neural networks | |
CN110110755B (en) | Pedestrian re-identification detection method and device based on PTGAN region difference and multiple branches | |
CN108846404B (en) | Image significance detection method and device based on related constraint graph sorting | |
CN111401380B (en) | RGB-D image semantic segmentation method based on depth feature enhancement and edge optimization | |
CN107146219B (en) | Image significance detection method based on manifold regularization support vector machine | |
CN110969171A (en) | Image classification model, method and application based on improved convolutional neural network | |
CN116188999B (en) | Small target detection method based on visible light and infrared image data fusion | |
CN105760898A (en) | Vision mapping method based on mixed group regression method | |
CN112883850A (en) | Multi-view aerospace remote sensing image matching method based on convolutional neural network | |
CN111368637B (en) | Transfer robot target identification method based on multi-mask convolutional neural network | |
CN112330639A (en) | Significance detection method for color-thermal infrared image | |
CN108345835B (en) | Target identification method based on compound eye imitation perception | |
CN115937552A (en) | Image matching method based on fusion of manual features and depth features | |
CN115908924A (en) | Multi-classifier-based small sample hyperspectral image semantic segmentation method and system | |
CN113011438B (en) | Bimodal image significance detection method based on node classification and sparse graph learning | |
CN114067273A (en) | Night airport terminal thermal imaging remarkable human body segmentation detection method | |
CN117541645A (en) | Ore blocking pose detection method based on double-attention mechanism feature fusion | |
CN104778683A (en) | Multi-modal image segmenting method based on functional mapping | |
CN117351078A (en) | Target size and 6D gesture estimation method based on shape priori |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
Application publication date: 20210205 |