CN108765460B - Hyperspectral image-based space-time joint anomaly detection method and electronic equipment - Google Patents


Info

Publication number
CN108765460B
CN108765460B (application CN201810493266.2A)
Authority
CN
China
Prior art keywords
hyperspectral image
frame
image sequence
graph
stp
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810493266.2A
Other languages
Chinese (zh)
Other versions
CN108765460A (en)
Inventor
王津申
李阳
刘翔
谢启明
鲜宁
龙华保
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Shanghai Aerospace Control Technology Institute
Original Assignee
Beihang University
Shanghai Aerospace Control Technology Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University and Shanghai Aerospace Control Technology Institute
Priority to CN201810493266.2A
Publication of CN108765460A
Application granted
Publication of CN108765460B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/20: Analysis of motion
    • G06T 7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/194: Segmentation; Edge detection involving foreground-background segmentation
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10032: Satellite or aerial image; Remote sensing

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a space-time joint anomaly detection method based on hyperspectral images, and an electronic device. The method comprises: for a hyperspectral image sequence, acquiring a spatial anomaly map of each frame of the sequence; acquiring a temporal anomaly map of each frame of the sequence; acquiring a trajectory prediction map for target detection in the current frame from the joint anomaly map of the previous frame, the previous frame being the frame adjacent to the current frame; and acquiring a joint anomaly map of the target in the hyperspectral image sequence from the spatial anomaly map, the temporal anomaly map and the trajectory prediction map of each frame. When applied to the detection of an airborne aircraft target, the method achieves a lower false-alarm rate and a higher detection probability.

Description

Hyperspectral image-based space-time joint anomaly detection method and electronic equipment
Technical Field
The invention belongs to the field of image recognition technology, and particularly relates to a space-time joint anomaly detection method based on hyperspectral images and an electronic device.
Background
With the development of hyperspectral sensors, hyperspectral images have been applied to many classical image processing problems. Compared with a single-waveband sensor, the hyperspectral sensor can acquire spectral characteristic information and spatial characteristic information of an image at the same time. This is also an important advantage of hyperspectral images in the field of image processing.
A weak and small target refers to an object of interest that is small in size, weak in intensity and low in signal-to-noise ratio in the image. The detection of weak and small targets has wide applications in both military and civil fields and has attracted the interest of many researchers. Because of complex backgrounds, interference from noise clutter, and attenuation over long transmission distances in hyperspectral images, the signal-to-noise ratio of the object of interest can be very low. Moreover, different targets have different signal-to-noise ratios, which may cause missed detections or false alarms. Accurate and reliable detection results cannot be obtained from single-band infrared images alone. The hyperspectral image contains the spectral information of the target, so better detection results can be obtained. Therefore, how to detect and track weak and small targets has become one of the technical problems urgently awaiting solution in the field of hyperspectral image processing.
Disclosure of Invention
To address the problems in the prior art, the invention provides a space-time joint anomaly detection method based on hyperspectral images, and an electronic device.
In a first aspect, the present invention provides a spatio-temporal joint anomaly detection method based on hyperspectral images, including:
101. for a hyperspectral image sequence, acquiring a spatial anomaly map of each frame of the hyperspectral image sequence;
102. acquiring a temporal anomaly map of each frame of the hyperspectral image sequence;
103. acquiring a trajectory prediction map for target detection in the current frame according to the target detection result of the previous frame; the previous frame is the frame adjacent to the current frame, and the target detection result is generated from the spatial anomaly map, the temporal anomaly map and the trajectory prediction map of the previous frame;
104. acquiring a joint anomaly map of the target in the hyperspectral image sequence according to the spatial anomaly map, the temporal anomaly map and the trajectory prediction map of each frame.
Optionally, before the step 101, the method further includes:
100. performing dimensionality reduction on the hyperspectral image sequence to obtain a dimensionality-reduced hyperspectral image sequence;
correspondingly, the step 101 specifically includes:
obtaining a spatial anomaly map of each frame in the hyperspectral image sequence after dimension reduction;
the step 102 specifically includes:
and acquiring a time anomaly map of each frame in the hyperspectral image sequence after dimension reduction.
Optionally, the step 101 includes:
using the formula S(x, y, t) = (V_t − μ_t)^T · (Φ_t)^{-1} · (V_t − μ_t), acquiring a spatial anomaly map S(x, y, t);
where V_t ∈ R^{1×k} is the feature vector of the pixel to be detected in PC(x, y, k, t), μ_t ∈ R^{1×k} is the mean of PC(x, y, k, t), and Φ_t is the auto-covariance of PC(x, y, k, t); PC(x, y, k, t) is the dimension-reduced hyperspectral image sequence or the original hyperspectral image sequence.
Optionally, the step 102 includes:
according to the formula T(x, y, t) = (V_t − μ_t)^T · (Φ_{t+1})^{-1} · (V_t − μ_t), acquiring a temporal anomaly map T(x, y, t);
where V_t ∈ R^{1×k} is the feature vector of the pixel to be detected in PC(x, y, k, t), μ_t ∈ R^{1×k} is the mean of PC(x, y, k, t), and Φ_{t+1} is the auto-covariance of PC(x, y, k, t+1);
PC(x, y, k, t) is the current frame of the dimension-reduced hyperspectral image sequence or of the original hyperspectral image sequence, and PC(x, y, k, t+1) is the frame following the current frame in the dimension-reduced hyperspectral image sequence or the original hyperspectral image sequence.
Optionally, the step 103 includes:
according to the formula
P(x, y, t) = (K_ζ * STP)(x, y, t−1),
acquiring a trajectory prediction map P(x, y, t) for target detection in the current frame; K_ζ is a ζ×ζ convolution kernel and * denotes two-dimensional convolution over the spatial coordinates;
STP(x, y, t−1) denotes the joint anomaly map of the previous frame;
STP(x, y, t) = N(ST(x, y, t) + C) · N(P(x, y, t) + C), where N(·) denotes the normalization operation, C is an empirical constant, and ST(x, y, t) = N(S(x, y, t)) · N(T(x, y, t)).
Optionally, the step 104 includes:
acquiring a joint anomaly map STP(x, y, t) of the target in the hyperspectral image sequence according to the formula STP(x, y, t) = N(ST(x, y, t) + C) · N(P(x, y, t) + C);
where ST(x, y, t) = N(S(x, y, t)) · N(T(x, y, t)), P(x, y, t) is the trajectory prediction map, T(x, y, t) is the temporal anomaly map, and S(x, y, t) is the spatial anomaly map.
Optionally, the method further comprises:
105. post-processing the obtained joint anomaly map of the target in the hyperspectral image sequence;
the post-processing comprises: anti-interference processing and adaptive threshold processing;
the anti-interference processing comprises: exponentially enhancing the joint anomaly map STP(x, y, t) using the formula STP(x, y, t) = (STP(x, y, t))^r, where r is an empirical constant;
the adaptive threshold processing comprises: obtaining an adaptive threshold Th using the formula Th = μ_STP + k·σ_STP, and processing each point in the joint anomaly map with the adaptive threshold to obtain the processed joint anomaly map;
where μ_STP is the mean of the joint anomaly map STP(x, y, t), σ_STP is the variance of the joint anomaly map STP(x, y, t), and k is an empirical constant.
Optionally, the spectral range of the hyperspectral image sequence comprises all or part of the following bands:
0.39-0.7 μm visible light, 0.76-2.5 μm short-wave infrared light, 3-5 μm medium-wave infrared light and 8-12 μm long-wave infrared light.
The partial bands of this embodiment may be sub-bands of visible light, of short-wave infrared, of medium-wave infrared or of long-wave infrared, or may be a subset of bands selected from visible light, short-wave infrared and so on; the bands may be chosen according to actual needs, and the division of the bands is not limited.
Optionally, the step 100 includes:
and performing dimensionality reduction on the hyperspectral image sequence by adopting a principal component analysis mode.
The invention has the following beneficial effects:
the hyperspectral image sequence not only contains spatial information, but also contains time information and spectral information. In order to detect the motion consistency characteristics of the target, the method generates a track prediction graph. And the spatial anomaly map, the time anomaly map and the track prediction map are fused, so that an interested target can be conveniently detected from the background. The method is applied to a test data set of an aircraft target in a cloud clutter background. Experimental results show that the method has lower false alarm rate and higher detection probability. Further, small target detection plays an important role in both civilian and military fields.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without inventive exercise.
FIG. 1 is a schematic diagram of a generation process of a hyperspectral data cube according to an embodiment of the invention;
FIG. 2 is a schematic diagram illustrating a dimension reduction process of a hyperspectral data cube according to an embodiment of the invention;
fig. 3 is a schematic diagram of a space-time joint anomaly detection method according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating a frame in a hyperspectral image sequence according to an embodiment of the invention;
FIG. 5 is a schematic three-dimensional projection of FIG. 4;
FIG. 6 is a schematic representation of the experimental results of the method of the present invention.
Detailed Description
For the purpose of better explaining the present invention and to facilitate understanding, the present invention will be described in detail by way of specific embodiments with reference to the accompanying drawings.
In the following description, various aspects of the invention will be described; however, it will be apparent to those skilled in the art that the invention may be practiced with only some of the structures or processes described. Specific numbers, configurations and sequences are set forth in order to provide clarity of explanation, but it will be apparent that the invention may be practiced without these specific details. In other instances, well-known features have not been set forth in detail in order not to obscure the invention.
The method provided by the embodiment of the invention is used for detecting the weak and small moving targets of the hyperspectral image sequence. A spatial anomaly map is first computed. The spatial anomaly map can be obtained by extracting spatial significance information of a hyperspectral image sequence through an RXD algorithm. Then, based on the assumption that two adjacent frames of images have similar backgrounds, the time domain hyperspectral anomaly map can be obtained by calculating two continuous frames of hyperspectral images. In addition, the invention also introduces a track prediction graph to extract the motion continuity characteristics of the target. And finally, fusing the spatial anomaly map, the time domain anomaly map and the track prediction map to obtain the target joint anomaly map.
The hyperspectral image generated by a hyperspectral sensor can contain dozens of spectral bands, such as visible light (0.39-0.7 μm), short-wave infrared (SWIR: 0.76-2.5 μm), medium-wave infrared (MWIR: 3-5 μm) and long-wave infrared (LWIR: 8-12 μm). The spectral range of the hyperspectral image sequence used in the embodiment of the invention comprises visible light and part of the short-wave infrared band (0.76-0.96 μm). The set of generated images, called a hyperspectral image cube, is taken by a hyperspectral camera over the corresponding spectral range with a sampling interval of about 10 nm. Each cube contains 25 bands, and 25 images are generated by the hyperspectral camera over the entire spectral range, as shown in FIG. 1.
The data volume of the hyperspectral image generated by a hyperspectral sensor is usually several times that of an ordinary infrared image. Therefore, the subsequent image analysis system must provide sufficient computing power to meet the requirements of image transmission, storage and computation. In order to reduce the amount of computation, the hyperspectral image is subjected to dimensionality-reduction preprocessing in the embodiment of the invention. For example, Principal Component Analysis (PCA) may be used to reduce the dimensionality of the hyperspectral image data. In hyperspectral image processing, PCA provides a simple method that not only reduces the dimensionality of the data but also suppresses noise. In addition, the data obtained by PCA can be regarded as features of the raw hyperspectral image data. In the description below, the terms hyperspectral image data and hyperspectral image are used interchangeably.
As shown in FIG. 2, it is assumed that each hyperspectral image contains L bands and that the image I_n (n = 1, 2, ..., L) of each band has the same size m×n. Each hyperspectral image can then be denoted as I = [I_1; I_2; ...; I_L]. The purpose of PCA is to reduce the dimensionality of the data from L to k and to generate a reduced-dimension matrix PC(x, y, k) of size m×n×k, as shown in FIG. 2. After PCA dimensionality reduction, each pixel of the hyperspectral image has a 1×k feature vector.
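By way of illustration only, the following Python sketch (not part of the original disclosure; the function name pca_reduce and the use of numpy are assumptions) shows one way the PCA reduction described above could be applied to a single hyperspectral frame stored as an array of shape (m, n, L):

import numpy as np

def pca_reduce(cube, k):
    # cube: one hyperspectral frame of shape (m, n, L); returns PC(x, y, k) of shape (m, n, k)
    m, n, L = cube.shape
    X = cube.reshape(-1, L).astype(np.float64)        # one L-band spectrum per row
    X = X - X.mean(axis=0, keepdims=True)             # remove the per-band mean
    cov = np.cov(X, rowvar=False)                     # L x L band covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
    top = eigvecs[:, np.argsort(eigvals)[::-1][:k]]   # k principal axes
    return (X @ top).reshape(m, n, k)                 # 1 x k feature vector per pixel

After this step each pixel carries a 1×k feature vector, matching the description of PC(x, y, k) above.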
Based on the hyperspectral image/hyperspectral image data/hyperspectral image sequence after the dimensionality reduction treatment, the method provided by the embodiment of the invention can comprise the following steps:
101. and acquiring a spatial anomaly map of each frame of image in the hyperspectral image after dimension reduction.
As shown in fig. 3, (x, y) indicates the image pixel coordinates, and t indicates the time coordinates in fig. 3.
The spatial anomaly map is suitable for mining spatial singularity features. Since no prior knowledge is available, target detection in the dimension-reduced hyperspectral image is based on finding pixels that differ from the surrounding background. The RX algorithm is an anomaly detection algorithm and may be expressed as follows:
δ_RXD(x) = (x − μ)^T · Φ^{-1} · (x − μ)  (1)
where x is the pixel (spectral vector) to be detected, μ is the mean of the image, and Φ is the auto-covariance of the image.
In this embodiment, in order to find abnormal pixel points in the spatial domain, the RX algorithm is used to calculate the spatial anomaly map S (x, y, t). The spatial anomaly map S (x, y, t) can be represented by the following equation:
S(x, y, t) = (V_t − μ_t)^T · (Φ_t)^{-1} · (V_t − μ_t)  (2)
where V_t ∈ R^{1×k} is the feature vector of the pixel to be detected in PC(x, y, k, t), μ_t ∈ R^{1×k} is the mean of PC(x, y, k, t), and Φ_t is the auto-covariance of PC(x, y, k, t).
The spatial anomaly map in this embodiment can be understood as an m×n matrix whose elements take values between 0 and 1. The spatial anomaly map describes the degree of anomaly of each pixel in the hyperspectral image: the smaller the value at a pixel, the lower its degree of anomaly and the more likely it belongs to the background in target detection; the larger the value, the higher its degree of anomaly and the more likely it is the target. The spatial anomaly map is obtained from the spatial characteristics of the hyperspectral image and is calculated by the RXD algorithm in this embodiment.
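A minimal numpy sketch of equation (2) is given below, assuming the reduced frame PC(x, y, k, t) is available as an array PC_t of shape (m, n, k); the function name spatial_anomaly_map is illustrative only:

import numpy as np

def spatial_anomaly_map(PC_t):
    # PC_t: reduced frame of shape (m, n, k); returns S(x, y, t) of shape (m, n)
    m, n, k = PC_t.shape
    V = PC_t.reshape(-1, k)                             # feature vector V_t per pixel
    mu = V.mean(axis=0)                                 # mean mu_t of PC(x, y, k, t)
    cov_inv = np.linalg.pinv(np.cov(V, rowvar=False))   # inverse auto-covariance Phi_t
    d = V - mu
    S = np.einsum('ij,jk,ik->i', d, cov_inv, d)         # (V_t - mu_t)^T Phi_t^-1 (V_t - mu_t)
    return S.reshape(m, n)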
102. And acquiring a time anomaly map of each frame of image in the hyperspectral image after dimension reduction.
The temporal anomaly map is used to detect temporal singularity features in this embodiment. It is assumed that the background samples of the previous frame are still valid in the current frame and in this case a very accurate background estimate can be established over time. The calculation of the time anomaly map T (x, y, T) depends on the principal components of the current frame hyperspectral image and the next frame hyperspectral image. Unlike the spatial anomaly map, the temporal anomaly map uses information of both the current frame and the next frame of image, as shown in equation (3):
T(x, y, t) = (V_t − μ_t)^T · (Φ_{t+1})^{-1} · (V_t − μ_t)  (3)
where V_t ∈ R^{1×k} is the feature vector of the pixel to be detected in PC(x, y, k, t), μ_t ∈ R^{1×k} is the mean of PC(x, y, k, t), and Φ_{t+1} is the auto-covariance of PC(x, y, k, t+1).
The temporal anomaly map in this embodiment can be understood as an m×n matrix whose elements take values between 0 and 1. Similar to the spatial anomaly map, the temporal anomaly map describes the degree of anomaly of each pixel in the hyperspectral image: the smaller the value at a pixel, the lower its degree of anomaly and the more likely it belongs to the background in target detection; the larger the value, the higher its degree of anomaly and the more likely it is the target. Unlike the spatial anomaly map, which considers only the singularity of the current frame, the temporal anomaly map takes into account both the current frame and the next frame: it treats the next frame as the background and detects anomalous points in the current frame. The temporal anomaly map is obtained from the temporal characteristics of the hyperspectral image and is calculated by a modified RXD algorithm in this embodiment.
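Correspondingly, equation (3) can be sketched as follows (again an illustrative sketch, assuming the reduced frames at t and t+1 are available as arrays of shape (m, n, k)):

import numpy as np

def temporal_anomaly_map(PC_t, PC_t_next):
    # Deviations of the current frame are scored against the covariance of the next frame.
    m, n, k = PC_t.shape
    V = PC_t.reshape(-1, k)
    mu = V.mean(axis=0)                                         # mean of the current frame
    cov_next = np.cov(PC_t_next.reshape(-1, k), rowvar=False)   # Phi_{t+1} from the next frame
    cov_inv = np.linalg.pinv(cov_next)
    d = V - mu
    T = np.einsum('ij,jk,ik->i', d, cov_inv, d)
    return T.reshape(m, n)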
103. Acquiring a track prediction image of the target detection of the current frame according to the target detection result of the previous frame of image; the previous frame is a frame adjacent to the current frame.
The target detection result of the previous frame image is generated by the spatial anomaly map, the temporal anomaly map and the trajectory prediction map of the previous frame image.
It can be understood that the generation of the trajectory prediction map of the current frame in the present embodiment includes not only the spatial and temporal anomaly maps of the previous frame, but also the trajectory prediction map of the previous frame. The generation of the trajectory prediction graph is actually a continuously iterative process. The trajectory prediction map of the first frame in the hyperspectral image sequence may be preset, and the trajectory prediction map of the subsequent frame may be obtained in an iterative manner.
For example, for most hyperspectral image sequences, even if the object is small in size and weak in intensity, it still shows some similarity from frame to frame, whereas noise in the image sequence is often discontinuous. The property that the object appears consecutively in adjacent frames of the image sequence is referred to as inter-frame continuity. Based on this inter-frame continuity assumption, the motion continuity feature of the target can be extracted in this embodiment using the trajectory prediction map. Assuming that the target does not move over a large scale between frames, the trajectory prediction map can be calculated with a ζ×ζ convolution kernel. The calculation of the trajectory prediction map P(x, y, t) requires the joint anomaly map of the previous frame and can be expressed as follows:
P(x, y, t) = (K_ζ * STP)(x, y, t−1)  (4)
where K_ζ is a ζ×ζ convolution kernel, * denotes two-dimensional convolution over the spatial coordinates, and STP(x, y, t−1) refers to the joint anomaly map calculated for the previous frame. STP(x, y, t−1) is computed as shown in equation (6) below.
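The following sketch realizes the ζ×ζ convolution using an all-ones (box) kernel; the box kernel is an assumption, since the patent only states that a ζ×ζ convolution kernel is applied to the previous joint anomaly map:

import numpy as np
from scipy.ndimage import uniform_filter

def trajectory_prediction_map(STP_prev, zeta=3):
    # STP_prev: joint anomaly map STP(x, y, t-1) of shape (m, n)
    # uniform_filter averages over a zeta x zeta window; multiplying by zeta^2
    # turns the average into the neighbourhood sum, i.e. convolution with an all-ones kernel.
    return uniform_filter(STP_prev, size=zeta, mode='nearest') * (zeta * zeta)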
104. Acquiring a joint anomaly map of the target in the hyperspectral image according to the spatial anomaly map, temporal anomaly map and trajectory prediction map of each frame.
Specifically, the spatial anomaly map, the temporal anomaly map and the trajectory prediction map are fused;
first, the spatial anomaly map and the temporal anomaly map are fused as shown in the following formula:
ST(x,y,t)=N(S(x,y,t))·N(T(x,y,t)) (5)
in the formula, N(·) denotes the normalization operation, and ST(x, y, t) refers to the singular features of the image in the joint spatio-temporal domain.
In addition, in order to continuously detect the weak and small targets, the final joint anomaly map also fuses a trajectory prediction map P (x, y, t), as shown in the following formula:
STP(x,y,t)=N(ST(x,y,t)+C)·N(P(x,y,t)+C) (6)
where N(·) denotes normalization, and C is an empirical constant set to 1×10^-8 in this experiment. The constant C enhances the robustness of the method in the iterative process and ensures that every element of the anomaly map ST(x, y, t) and of the trajectory map P(x, y, t) is non-zero.
The joint anomaly map in this embodiment can be understood as an m×n matrix whose elements take values between 0 and 1. As can be seen from equation (6), the joint anomaly map is obtained by fusing the spatial anomaly map, the temporal anomaly map and the trajectory prediction map, so it jointly considers the spatial singularity, temporal singularity and motion continuity in the hyperspectral image. Therefore, compared with target detection based on a single anomaly map, joint anomaly detection can better detect the target against a complex background and reduces the probability of false alarms.
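Equations (5) and (6) can be combined as in the following sketch; the specific form of the normalization N(·), taken here as division by the maximum value of the map, is an assumption, since the patent only names it a normalization operation:

import numpy as np

C = 1e-8  # empirical constant from equation (6)

def normalize(A):
    # N(.): assumed here to scale by the maximum so values lie in [0, 1]
    return A / (A.max() + 1e-12)

def joint_anomaly_map(S, T, P):
    ST = normalize(S) * normalize(T)              # equation (5)
    return normalize(ST + C) * normalize(P + C)   # equation (6)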
Optionally, the method may further include:
105. Post-processing the obtained joint anomaly map, for example anti-interference processing and adaptive thresholding.
Anti-interference: in the target detection of the hyperspectral image, the signal-to-noise ratios of the target to be detected and the interference are different. The value of the interference in the joint anomaly map STP (x, y, t) is often tens of times that of the target. In order to enhance the detection effect of the target, STP (x, y, t) is exponentially enhanced as represented by the following formula:
STP(x, y, t) = (STP(x, y, t))^r  (7)
where r is an empirical constant whose value is set empirically in this experiment.
Adaptive threshold: the resulting joint anomaly map shows the inherent difference between a blurred moving object and a complex background. To maximize the signal-to-noise ratio (SNR), the present embodiment sets an adaptive threshold, as follows:
Th = μ_STP + k·σ_STP  (8)
in the formula, μ_STP is the mean of the joint anomaly map STP(x, y, t), σ_STP is the variance of the joint anomaly map STP(x, y, t), and k is an empirical constant, set to 10 in this experiment.
In this embodiment, an adaptive threshold is used to process each point in the joint anomaly map, so as to obtain a processed joint anomaly map.
For example, each point in the joint anomaly map is a probability value, and each point in the map is classified as a target or a background by an adaptive threshold. For example, assuming that the adaptive threshold is set to 0.5, a point value in the joint anomaly map is 0.3, less than the threshold, and set to 0; another point value is 0.8, greater than the threshold, set to 1. The self-adaptive threshold value converts the joint abnormal image into a binary image, and the pixel point with the value of 0 in the image represents that the pixel point is a background in the original image; the pixel point with the value of 1 in the figure represents that the point is the target in the original image.
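The two post-processing steps can be sketched as follows; the exponent r = 2 is only a placeholder (the patent leaves its value to experiment), k = 10 follows the description, and the standard deviation is used here as the reading of σ_STP:

import numpy as np

def post_process(STP, r=2.0, k=10.0):
    # Exponential enhancement, equation (7): suppresses low-valued interference.
    enhanced = STP ** r
    # Adaptive threshold, equation (8): Th = mu_STP + k * sigma_STP
    th = enhanced.mean() + k * enhanced.std()
    # Binary map: 1 = target, 0 = background
    return (enhanced > th).astype(np.uint8)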
Further, in a specific implementation, the above steps 101 to 105 are applied to each frame of the hyperspectral image sequence, i.e. the image sequence is traversed frame by frame.
In addition, in a preferred embodiment, computation time can be saved by performing the above steps 101 to 105 only on key frames of the hyperspectral image sequence. The key frames can be selected using existing key-frame identification techniques, or one key frame can be taken every N frames, for example N = 3, 4 or 5.
In the following experimental examples, the inventors select one image cube as a key frame every five frames solely to reduce the amount of computation in the experiment. The result is the same even if no key frames are selected and every frame is used in the calculation.
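Putting the pieces together, a minimal end-to-end sketch of the key-frame processing loop might look as follows; it reuses the illustrative helper functions sketched earlier, and initializing the first trajectory prior to all ones is an assumption, since the patent only says the trajectory prediction map of the first frame may be preset:

import numpy as np

def detect_sequence(frames, k=10, zeta=3, step=5):
    # frames: list of hyperspectral cubes, each of shape (m, n, L)
    keyframes = frames[::step]                        # one key frame every `step` frames
    reduced = [pca_reduce(f, k) for f in keyframes]   # PCA to k principal components
    masks, STP_prev = [], None
    for t in range(len(reduced) - 1):                 # T(x, y, t) needs frame t + 1
        S = spatial_anomaly_map(reduced[t])
        T = temporal_anomaly_map(reduced[t], reduced[t + 1])
        if STP_prev is None:
            STP_prev = np.ones_like(S)                # preset trajectory prior for the first frame
        P = trajectory_prediction_map(STP_prev, zeta)
        STP_prev = joint_anomaly_map(S, T, P)
        masks.append(post_process(STP_prev))
    return masks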
Examples of the experiments
To evaluate the performance of the dim and small moving target detection method, hyperspectral image sequences under a complex cloud background were assembled into a test data set. As shown in FIG. 4 and FIG. 5, FIG. 4 shows one band (wavelength 0.68 μm) of the hyperspectral image data sequence, and FIG. 5 shows the three-dimensional projection of the image at that band. Because the dim and small object occupies only a few pixels, the object of interest is difficult to distinguish from the background, while the cloud background and noise clutter result in a low signal-to-noise ratio.
Using the method of this embodiment, the key frames to be detected are selected from the original sequence at a fixed interval l = 5, and the moving targets in the key frames are labeled for quantitative evaluation. The PCA principal component dimension k and the size ζ of the convolution kernel in the trajectory prediction map are set to 10 and 3, respectively, in this experiment.
FIG. 6 shows the effect of the method at the key steps. Under the above parameter configuration, all images in the constructed data set were tested, and the visualization results are shown in FIG. 6. It should be noted that the space-time joint method proposed in this embodiment reduces the false-alarm rate better than other methods. The experimental results demonstrate the feasibility and performance of hyperspectral dim target detection, as shown in FIG. 6.
The embodiment of the invention addresses the problem of detecting dim and small moving targets in the prior art. The experimental results show that the space-time joint anomaly method provided by the invention achieves a good detection effect for hyperspectral dim and small moving targets. The method can also be extended to scenarios such as target recognition and target tracking.
According to another aspect of embodiments of the present invention, there is also provided an electronic device, which includes a memory, a processor, a bus, and a computer program stored on the memory and executable on the processor, and when the processor executes the program, the method steps of any of the embodiments described above are implemented. The electronic device of the embodiment may be a mobile terminal, a fixed terminal, or the like.
Further, the present embodiment also provides a computer storage medium having stored thereon a computer program which, when being executed by a processor, carries out the method steps of any of the embodiments as described above.
Finally, it should be noted that: the above-mentioned embodiments are only used for illustrating the technical solution of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (7)

1. A space-time joint anomaly detection method based on hyperspectral images is characterized by comprising the following steps:
100. for a hyperspectral image sequence, performing dimensionality reduction to obtain a dimension-reduced hyperspectral image sequence; the spectral range of the hyperspectral image sequence comprises: visible light 0.39-0.7 μm, short-wave infrared 0.76-2.5 μm, medium-wave infrared 3-5 μm, and long-wave infrared 8-12 μm;
101. obtaining a spatial anomaly map of each frame of the dimension-reduced hyperspectral image sequence; the spatial anomaly map of each frame of the dimension-reduced hyperspectral image sequence is calculated by an RXD algorithm according to the spatial characteristics of the hyperspectral image;
102. acquiring a temporal anomaly map of each frame of the dimension-reduced hyperspectral image sequence;
103. acquiring a trajectory prediction map for target detection in the current frame according to the target detection result of the previous frame; the previous frame is the frame adjacent to the current frame, and the target detection result is generated from the spatial anomaly map, temporal anomaly map and trajectory prediction map of the previous frame; the generation of the trajectory prediction map of the current frame uses not only the spatial anomaly map and temporal anomaly map of the previous frame but also the trajectory prediction map of the previous frame; the trajectory prediction map is obtained iteratively and is used for extracting the motion continuity features of the target;
104. acquiring a joint anomaly map of the target in the hyperspectral image sequence according to the spatial anomaly map, temporal anomaly map and trajectory prediction map of each frame;
the step 101 comprises:
using the formula S(x, y, t) = (V_t − μ_t)^T · (Φ_t)^{-1} · (V_t − μ_t), acquiring the spatial anomaly map S(x, y, t);
where V_t ∈ R^{1×k} is the feature vector of the pixel to be detected in PC(x, y, k, t), μ_t ∈ R^{1×k} is the mean of PC(x, y, k, t), and Φ_t is the auto-covariance of PC(x, y, k, t); PC(x, y, k, t) is the dimension-reduced hyperspectral image sequence or the original hyperspectral image sequence; the step 102 comprises:
according to the formula T(x, y, t) = (V_t − μ_t)^T · (Φ_{t+1})^{-1} · (V_t − μ_t), acquiring the temporal anomaly map T(x, y, t);
where V_t ∈ R^{1×k} is the feature vector of the pixel to be detected in PC(x, y, k, t), μ_t ∈ R^{1×k} is the mean of PC(x, y, k, t), and Φ_{t+1} is the auto-covariance of PC(x, y, k, t+1);
PC(x, y, k, t) is the current frame of the dimension-reduced hyperspectral image sequence or of the original hyperspectral image sequence, and PC(x, y, k, t+1) is the frame following the current frame in the dimension-reduced hyperspectral image sequence or the original hyperspectral image sequence.
2. The method of claim 1, wherein the step 103 comprises:
according to the formula
P(x, y, t) = (K_ζ * STP)(x, y, t−1),
acquiring the trajectory prediction map P(x, y, t) for target detection in the current frame; K_ζ is a ζ×ζ convolution kernel and ζ represents the size of the convolution kernel;
STP(x, y, t−1) denotes the joint anomaly map of the previous frame;
STP(x, y, t) = N(ST(x, y, t) + C) · N(P(x, y, t) + C), where N(·) denotes the normalization operation, C is an empirical constant, and ST(x, y, t) = N(S(x, y, t)) · N(T(x, y, t)).
3. The method of claim 2, wherein the step 104 comprises:
acquiring the joint anomaly map STP(x, y, t) of the target in the hyperspectral image sequence according to the formula STP(x, y, t) = N(ST(x, y, t) + C) · N(P(x, y, t) + C);
where P(x, y, t) is the trajectory prediction map, T(x, y, t) is the temporal anomaly map, and S(x, y, t) is the spatial anomaly map.
4. The method according to claim 1 or 2, characterized in that the method further comprises:
105. carrying out post-processing on the obtained combined abnormal graph of the target in the hyperspectral image sequence;
the post-processing comprises: anti-interference processing and adaptive threshold processing;
the anti-interference processing comprises: exponentially enhancing the joint anomaly map STP(x, y, t) using the formula STP(x, y, t) = (STP(x, y, t))^r, where r is an empirical constant;
obtaining an adaptive threshold Th using the formula Th = μ_STP + k·σ_STP, and processing each point in the joint anomaly map with the adaptive threshold to obtain the processed joint anomaly map;
where μ_STP is the mean of the joint anomaly map STP(x, y, t), σ_STP is the variance of the joint anomaly map STP(x, y, t), and k is an empirical constant.
5. The method of claim 1, wherein the step 100 comprises:
and performing dimensionality reduction on the hyperspectral image sequence by adopting a principal component analysis mode.
6. An electronic device comprising a memory, a processor, a bus and a computer program stored on the memory and executable on the processor, the processor implementing the steps of any of claims 1-5 when executing the program.
7. A computer storage medium having a computer program stored thereon, characterized in that: the program when executed by a processor implementing the steps of any of claims 1-5.
CN201810493266.2A 2018-05-22 2018-05-22 Hyperspectral image-based space-time joint anomaly detection method and electronic equipment Active CN108765460B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810493266.2A CN108765460B (en) 2018-05-22 2018-05-22 Hyperspectral image-based space-time joint anomaly detection method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810493266.2A CN108765460B (en) 2018-05-22 2018-05-22 Hyperspectral image-based space-time joint anomaly detection method and electronic equipment

Publications (2)

Publication Number Publication Date
CN108765460A CN108765460A (en) 2018-11-06
CN108765460B true CN108765460B (en) 2021-03-30

Family

ID=64008377

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810493266.2A Active CN108765460B (en) 2018-05-22 2018-05-22 Hyperspectral image-based space-time joint anomaly detection method and electronic equipment

Country Status (1)

Country Link
CN (1) CN108765460B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109697431B (en) * 2018-12-29 2021-11-23 哈尔滨工业大学 Hyperspectral image-based weak and small target detection method
CN109949278B (en) * 2019-03-06 2021-10-29 西安电子科技大学 Hyperspectral anomaly detection method based on antagonistic self-coding network
CN112990106B (en) * 2021-04-19 2022-09-09 中国人民解放军国防科技大学 Underwater object detection method, device, computer equipment and storage medium
CN113553914B (en) * 2021-06-30 2024-03-19 核工业北京地质研究院 CASI hyperspectral data abnormal target detection method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104616299A (en) * 2015-01-30 2015-05-13 南京邮电大学 Method for detecting weak and small target based on space-time partial differential equation
CN105825200A (en) * 2016-03-31 2016-08-03 西北工业大学 High-spectrum abnormal object detection method based on background dictionary learning and structure sparse expression
CN106600602A (en) * 2016-12-30 2017-04-26 哈尔滨工业大学 Clustered adaptive window based hyperspectral image abnormality detection method

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103426167A (en) * 2013-07-09 2013-12-04 哈尔滨工程大学 Hyperspectral real-time detection method based on recursive analysis
CN103606157B (en) * 2013-11-28 2016-09-28 中国科学院光电研究院 Hyperspectral imagery processing method and device
CN106101732B (en) * 2016-07-05 2019-04-09 重庆邮电大学 The vector quantization scheme of Fast Compression bloom spectrum signal
CN107704835B (en) * 2017-10-16 2020-10-30 北京市遥感信息研究所 Method for identifying offshore artificial facilities by using spectrum remote sensing images

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104616299A (en) * 2015-01-30 2015-05-13 南京邮电大学 Method for detecting weak and small target based on space-time partial differential equation
CN105825200A (en) * 2016-03-31 2016-08-03 西北工业大学 High-spectrum abnormal object detection method based on background dictionary learning and structure sparse expression
CN106600602A (en) * 2016-12-30 2017-04-26 哈尔滨工业大学 Clustered adaptive window based hyperspectral image abnormality detection method

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
A novel spatio-temporal saliency approach for robust dim moving target detection from airborne infrared image sequences; Yansheng Li; Information Sciences; 2016-07-18; Abstract, Sections 1-3 *
A survey of landmine detection using hyperspectral imaging; Ihab Makki; ISPRS Journal of Photogrammetry and Remote Sensing; 2016-12-29; full text *
On the impact of PCA dimension reduction for hyperspectral detection of difficult targets; M. D. Farrell; IEEE Geoscience and Remote Sensing Letters; 2005-04-18; Abstract, Sections 1-3 *
Anomaly detection in hyperspectral imagery using RX and its variants; Shi Zhenwei; Infrared and Laser Engineering; 2012-03-31; full text *
Collaborative sparse hyperspectral anomaly detection combining spatial preprocessing and spectral clustering; Cheng Baozhi; Acta Optica Sinica; 2017-04-30; Abstract, Sections 1-3 *

Also Published As

Publication number Publication date
CN108765460A (en) 2018-11-06

Similar Documents

Publication Publication Date Title
Geetha et al. Machine vision based fire detection techniques: A survey
CN108765460B (en) Hyperspectral image-based space-time joint anomaly detection method and electronic equipment
Jia et al. A saliency-based method for early smoke detection in video sequences
Chen et al. Visual depth guided color image rain streaks removal using sparse coding
US8243991B2 (en) Method and apparatus for detecting targets through temporal scene changes
Kim et al. Illumination-invariant background subtraction: Comparative review, models, and prospects
EP3438929B1 (en) Foreground and background detection method
US20140307917A1 (en) Robust feature fusion for multi-view object tracking
Fendri et al. Fusion of thermal infrared and visible spectra for robust moving object detection
CN109859246B (en) Low-altitude slow unmanned aerial vehicle tracking method combining correlation filtering and visual saliency
Zhou et al. Entropy distribution and coverage rate-based birth intensity estimation in GM-PHD filter for multi-target visual tracking
Xue et al. Low-rank approximation and multiple sparse constraint modeling for infrared low-flying fixed-wing UAV detection
CN109389609B (en) Interactive self-feedback infrared target detection method based on FART neural network
Tiwari et al. A survey on shadow detection and removal in images and video sequences
Xu et al. A robust background initialization algorithm with superpixel motion detection
Xu et al. Robust moving objects detection in long-distance imaging through turbulent medium
Li et al. DIM moving target detection using spatio-temporal anomaly detection for hyperspectral image sequences
Angelo A novel approach on object detection and tracking using adaptive background subtraction method
Yavari et al. Small infrared target detection using minimum variation direction interpolation
Miller et al. Person tracking in UAV video
Li et al. Fast forest fire detection and segmentation application for uav-assisted mobile edge computing system
Wang et al. Saliency detection using mutual consistency-guided spatial cues combination
US20190251695A1 (en) Foreground and background detection method
Zhu Image quality assessment model based on multi-feature fusion of energy Internet of Things
CN114429593A (en) Infrared small target detection method based on rapid guided filtering and application thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant