CN107273803B - Cloud layer image detection method - Google Patents
- Publication number
- CN107273803B (application CN201710345541.1A)
- Authority
- CN
- China
- Prior art keywords
- detection device
- remote sensing
- image
- spectrum
- detection
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Astronomy & Astrophysics (AREA)
- Remote Sensing (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention provides a cloud layer image detection method, which comprises the following steps: the detection device receives a remote sensing image sequence containing N frames of remote sensing images; the detection device extracts energy features of the remote sensing images from the sequence to obtain an energy saliency map; the detection device calculates the brightness contrast of the energy saliency map to obtain a brightness contrast image; the detection device extracts texture features of the brightness contrast image to obtain a texture feature map; the detection device performs inter-frame association on the texture feature map using image motion information, extracts a region of interest, and obtains a cloud layer detection result. The requirements on the sensor are reduced, while the position and size of the cirrus cloud can be detected accurately; moreover, the adopted algorithm is simple, the calculation efficiency is high, and real-time requirements can be met.
Description
Technical Field
The invention relates to the technical field of remote sensing, in particular to a cloud layer image detection method based on energy characteristics, textural characteristics and motion characteristics.
Background
In recent years, remote sensing technology has been widely applied in fields such as modern military affairs, space-based detection and meteorological analysis, and remote sensing image interpretation is one of its key technologies. Remote sensing image interpretation refers to the comprehensive analysis of the geometric characteristics and physical properties of images so as to reveal the qualitative and quantitative characteristics of objects or phenomena and the interrelations among those characteristics, and further to study their development processes and distribution rules; that is, objects or phenomena are identified from the image features that represent them. False-alarm sources have a great influence on the interpretation of remote sensing images. Various false-alarm sources often exist in remote sensing images, characterized by high radiation intensity, time variation and the like. For example, high-altitude cirrus clouds are an important false-alarm source. Between one third and one half of the Earth's surface is covered by clouds, and high-altitude cirrus is a main source of clutter for remote sensing imaging and target detection systems. Because its shape changes quickly and its motion is variable, cirrus cloud brings considerable difficulty to the interpretation of remote sensing images. Research into a suitable cirrus cloud detection algorithm can improve the precision of remote sensing imaging and detection systems and is beneficial to military and space applications.
Generally, cloud detection adopts a spectral analysis method: multi-channel data such as visible and infrared light are collected, and the radiation difference between the cloud layer and other ground objects is used for detection. However, this method has poor real-time performance and places high requirements on the imaging equipment, so cirrus detection is not easy to implement with it.
Disclosure of Invention
The invention aims to provide a cloud layer image detection method based on energy, texture and motion characteristics, so as to solve the problem in the prior art that high-altitude cirrus clouds are difficult to detect accurately.
In order to solve the above technical problem, an embodiment of the present invention provides a cloud image detection method, including: the detection device receives a remote sensing image sequence f_n(x, y) containing N frames of remote sensing images, where n = 1, ..., N is the frame number and N is the total frame number; the detection device extracts the energy features of the remote sensing images from the sequence to obtain an energy saliency map E_n(x, y); the detection device calculates the brightness contrast of the energy saliency map to obtain a brightness contrast image C_n(x, y); the detection device extracts the texture features of the brightness contrast image to obtain a texture feature map G_n(x, y); the detection device performs inter-frame association on the texture feature map using image motion information, extracts a region of interest (ROI), and obtains the cloud layer detection result.
Optionally, the step in which the detection device performs inter-frame association on the texture feature map using image motion information, extracts the region of interest (ROI), and obtains the cloud layer detection result specifically comprises:
the detection device sets a brightness threshold T and performs threshold segmentation on the brightness contrast result to obtain a segmentation result Th_n(x, y): Th_n(x, y) = 1 where the brightness contrast exceeds T, and Th_n(x, y) = 0 elsewhere; the detection device performs a morphological opening operation on the segmentation result Th_n(x, y) to eliminate isolated bright spots, remove part of the clutter and fill holes, obtaining a processed segmentation result Th′_n(x, y); the detection device marks the regions with pixel value 1 in Th′_n(x, y) as regions of interest (ROI) and calculates the centre coordinates (x_n, y_n) of the current ROI; the detection device sets a motion threshold Mov and calculates the distance between the ROI centres of the n-th and (n+1)-th frames: D_n = sqrt((x_{n+1} − x_n)² + (y_{n+1} − y_n)²), n = 1, ..., N−1; if D_n < Mov, the detection device associates the ROI regions of the n-th and (n+1)-th frames; the detection device extracts the mutually associated ROI regions, which constitute the cloud layer detection result.
Optionally, the step in which the detection device extracts the energy features of the remote sensing images from the remote sensing image sequence to obtain an energy saliency map E_n(x, y) specifically comprises the following steps:
the detection device performs a Fourier transform on the remote sensing image sequence: s_n(ω_x, ω_y) = F[f_n(x, y)], n = 1, ..., N, where F denotes the Fourier transform operator and (ω_x, ω_y) are the coordinates after transformation to the frequency domain; the detection device calculates the amplitude of the Fourier transform and takes its logarithm to obtain the log spectrum: L_n(ω_x, ω_y) = log[|s_n(ω_x, ω_y)|], where |·| denotes the magnitude operator, and calculates the phase spectrum P(ω_x, ω_y) = angle[s_n(ω_x, ω_y)], where angle[·] denotes the phase operator; the detection device convolves the log spectrum with a mean-filter template of size m × m to obtain a smoothed spectrum: V(ω_x, ω_y) = L_n(ω_x, ω_y) * h_m(ω_x, ω_y), where the mean-filter template h_m is an m × m matrix with every entry equal to 1/m²; the detection device subtracts the smoothed spectrum from the log spectrum to obtain the spectral residual R(ω_x, ω_y): R(ω_x, ω_y) = L_n(ω_x, ω_y) − V(ω_x, ω_y); the detection device applies a two-dimensional inverse discrete Fourier transform to the spectral residual R(ω_x, ω_y) together with the phase spectrum P(ω_x, ω_y) to obtain the energy saliency map E_n(x, y) = |F⁻¹{exp[R(ω_x, ω_y) + i·P(ω_x, ω_y)]}|².
Optionally, the step in which the detection device extracts the texture features of the brightness contrast image to obtain a texture feature map G_n(x, y) specifically comprises: the detection device constructs a Gabor filter g(x, y) = exp[−(x′² + γ²y′²)/(2σ²)] · cos(2πx′/λ + ψ), where x′ = a^(−m)(x cos θ + y sin θ) and y′ = a^(−m)(−x sin θ + y cos θ); a^(−m) is the scale factor, θ denotes the direction of the kernel function, λ the wavelength of the sinusoidal function, ψ the phase shift, σ the standard deviation of the Gaussian function, and γ the aspect ratio of the function; the detection device convolves the filter with the brightness contrast image to obtain the filtering result, which is the texture feature map: G_n(x, y) = g(x, y) * C_n(x, y).
the technical scheme of the invention has the following beneficial effects: in the scheme, the detection device adopts an image processing mode to detect the cloud layer in the remote sensing image, so that the requirement on the sensor is reduced, and the position and the size of the cloud can be accurately detected; and the adopted algorithm is simple, the calculation efficiency is high, and the real-time requirement can be met.
The foregoing is a summary of the present invention. In order that the technical means of the invention may be understood more clearly and implemented in accordance with this specification, preferred embodiments of the invention are described in detail below with reference to the accompanying drawings.
Drawings
Fig. 1 is a flowchart of a cloud image detection method according to the present invention.
FIG. 2 is a set of infrared images containing cirrus clouds in accordance with an example of the invention.
Fig. 3 is an energy saliency map for the infrared image of fig. 2.
Fig. 4 is a luminance contrast image of the energy saliency map of fig. 3.
Fig. 5 is a texture feature map of the luminance contrast image of fig. 4.
Fig. 6 shows the cloud layer detection results.
Detailed Description
In order to make the technical problems, technical solutions and advantages of the present invention more apparent, the following detailed description is given with reference to the accompanying drawings and specific embodiments.
The invention provides a cloud layer image detection method based on energy, texture and motion characteristics, aiming at the problem that the existing high-altitude cirrus cloud is difficult to accurately detect.
As shown in fig. 1, an embodiment of the present invention provides a cloud image detection method based on energy, texture, and motion characteristics, where the method is applied to a detection device, and specifically includes:
Step 1: the detection device receives N frames of remote sensing images, which are recorded as a remote sensing image sequence. The sequence can be represented by the function f_n(x, y), where n = 1, ..., N. For example: fig. 2 is a group of infrared remote sensing images containing cirrus clouds.
Step 2: the detection device extracts the energy features of each frame of remote sensing image from the sequence and obtains an energy saliency map from them, denoted E_n(x, y), n = 1, ..., N. For example: fig. 3 is the energy saliency map obtained by performing energy feature extraction on the cirrus-containing infrared image of fig. 2.
The energy feature of each frame of remote sensing image is the spectral residual feature. The specific steps are as follows:
21, the detection device performs Fourier transform on the remote sensing image sequence:
s_n(ω_x, ω_y) = F[f_n(x, y)], n = 1, ..., N
where F denotes the Fourier transform operator and (ω_x, ω_y) are the coordinates after transformation to the frequency domain.
22, the detection device calculates the amplitude of the Fourier transform and takes the logarithm to obtain a logarithmic spectrum:
L_n(ω_x, ω_y) = log[|s_n(ω_x, ω_y)|]
where |·| denotes the magnitude operator. Simultaneously, the phase spectrum is calculated: P(ω_x, ω_y) = angle[s_n(ω_x, ω_y)], where angle[·] denotes the phase operator.
23, the detection device convolves the log spectrum obtained in the previous step with a mean-filter template of size m × m to obtain a smoothed spectrum:
V(ω_x, ω_y) = L_n(ω_x, ω_y) * h_m(ω_x, ω_y)
where the mean-filter template h_m is an m × m matrix with every entry equal to 1/m².
24, the detection device subtracts the smoothed spectrum from the log spectrum to obtain the spectral residual:
R(ω_x, ω_y) = L_n(ω_x, ω_y) − V(ω_x, ω_y)
25, the detection device applies a two-dimensional inverse discrete Fourier transform to the spectral residual R(ω_x, ω_y) together with the phase spectrum P(ω_x, ω_y) to obtain the energy saliency map:
E_n(x, y) = |F⁻¹{exp[R(ω_x, ω_y) + i·P(ω_x, ω_y)]}|²
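For illustration, steps 21 to 25 can be sketched in NumPy. This is a minimal sketch, not the patented implementation: the function name, the small epsilon inside the logarithm, and the FFT-based circular mean filtering are assumptions, and the final squared magnitude follows the standard spectral-residual saliency formulation.

```python
import numpy as np

def spectral_residual_saliency(frame, m=3):
    """Spectral-residual energy saliency for one frame f_n (steps 21-25).

    frame : 2-D float array (one remote-sensing frame).
    m     : side length of the m x m mean-filter template h_m.
    """
    s = np.fft.fft2(frame)                      # step 21: Fourier transform
    log_amp = np.log(np.abs(s) + 1e-8)          # step 22: log spectrum L_n
    phase = np.angle(s)                         # step 22: phase spectrum P
    # step 23: smooth the log spectrum with an m x m mean filter
    # (circular convolution via FFT, to stay dependency-free)
    kernel = np.zeros_like(log_amp)
    kernel[:m, :m] = 1.0 / (m * m)              # h_m: every entry 1/m^2
    smooth = np.real(np.fft.ifft2(np.fft.fft2(log_amp) * np.fft.fft2(kernel)))
    smooth = np.roll(smooth, (-(m // 2), -(m // 2)), axis=(0, 1))  # recentre
    residual = log_amp - smooth                 # step 24: spectral residual R
    # step 25: inverse transform of residual + phase gives the saliency map
    return np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
```

A uniform background largely cancels in the residual, so compact bright structures such as cloud patches stand out in the returned map.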
and step 3: the detection device calculates the brightness contrast of the energy significance map to obtain a brightness contrast image, and the contrast image is recorded asN1., N; for example: fig. 4 is a luminance contrast image obtained after performing a luminance contrast calculation on the energy saliency map of fig. 3.
Step 4: the detection device extracts the texture features of the brightness contrast image to obtain a texture feature map, denoted G_n(x, y), n = 1, ..., N. For example: fig. 5 is the texture feature map obtained by extracting texture features from the brightness contrast image of fig. 4.
The texture feature can be represented by a directional Gabor feature. The specific process for obtaining the texture feature map comprises the following steps:
41, the detection device constructs a Gabor filter:
g(x, y) = exp[−(x′² + γ²y′²)/(2σ²)] · cos(2πx′/λ + ψ)
where x′ = a^(−m)(x cos θ + y sin θ) and y′ = a^(−m)(−x sin θ + y cos θ). a^(−m) is the scale factor, θ denotes the direction of the Gabor kernel, λ the wavelength of the sinusoidal function, ψ the phase shift, σ the standard deviation of the Gaussian function, and γ the aspect ratio of the function. Selecting different directions θ yields a multi-direction Gabor filter bank.
42, the detection device convolves the Gabor filter with the brightness contrast image to obtain the filtering result, which is the texture feature map:
G_n(x, y) = g(x, y) * C_n(x, y)
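Steps 41 and 42 can be sketched as follows. The kernel is the standard real-valued Gabor function of step 41; the kernel size, the default parameter values, the FFT-based circular convolution, and the choice of keeping the maximum response over four directions are illustrative assumptions.

```python
import numpy as np

def gabor_kernel(size=15, theta=0.0, lam=4.0, psi=0.0, sigma=2.0,
                 gamma=0.5, scale=1.0):
    """Real Gabor kernel g(x, y) of step 41; `scale` plays the role of
    the factor a^(-m) in x' and y'."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xp = scale * (x * np.cos(theta) + y * np.sin(theta))
    yp = scale * (-x * np.sin(theta) + y * np.cos(theta))
    return (np.exp(-(xp**2 + gamma**2 * yp**2) / (2 * sigma**2))
            * np.cos(2 * np.pi * xp / lam + psi))

def gabor_texture(image, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Step 42: convolve a multi-direction Gabor bank with the brightness
    contrast image and keep the maximum absolute response per pixel."""
    resp = np.zeros_like(image, dtype=float)
    for theta in thetas:
        k = gabor_kernel(theta=theta)
        kh, kw = k.shape
        pad_k = np.zeros_like(image, dtype=float)
        pad_k[:kh, :kw] = k                    # kernel at the origin
        conv = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(pad_k)))
        conv = np.roll(conv, (-(kh // 2), -(kw // 2)), axis=(0, 1))  # recentre
        resp = np.maximum(resp, np.abs(conv))
    return resp
```

Taking the maximum over directions is one common way to fuse a Gabor bank into a single texture map; the patent leaves the fusion rule unspecified.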
and 5: the detection device performs interframe correlation on the texture feature map by using the image motion information, extracts the region of interest and obtains a cloud layer detection result. For example: fig. 6 shows the final cloud detection result.
The process of step 5 above is described in detail below:
51, the detection device sets a brightness threshold T and performs threshold segmentation on the brightness contrast result to obtain a segmentation result, recorded as Th_n(x, y):
Th_n(x, y) = 1 where the brightness contrast exceeds T, and Th_n(x, y) = 0 elsewhere.
52, the detection device performs a morphological opening on the segmentation result to eliminate isolated bright spots, remove part of the clutter and fill holes; the processed segmentation result is recorded as Th′_n(x, y). The regions with pixel value 1 in Th′_n(x, y) are marked as regions of interest (ROI), and the centre coordinates of the current ROI are calculated and recorded as (x_n, y_n).
53, the detection device sets a motion threshold Mov and calculates the distance between the ROI centres of the n-th frame and the (n+1)-th frame:
D_n = sqrt((x_{n+1} − x_n)² + (y_{n+1} − y_n)²), n = 1, ..., N−1
If D_n < Mov, the detection device associates the ROI regions of the n-th and (n+1)-th frames; if D_n ≥ Mov, the association fails and the detection device does not associate the ROI regions of the n-th and (n+1)-th frames.
54, the detection device extracts the mutually associated ROI regions; these associated regions are the cirrus cloud detection result.
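Steps 51 to 54 can be sketched as follows. The 3 × 3 structuring element, the single-ROI centroid, and the function names are illustrative assumptions; the association test is the distance comparison D_n < Mov of step 53.

```python
import numpy as np

def segment_and_centre(contrast_map, T):
    """Steps 51-52: threshold at T, apply a crude 3x3 binary opening
    (erosion then dilation) to drop isolated bright points, and return
    the opened mask plus the centroid (x, y) of the surviving ROI pixels."""
    mask = (contrast_map > T).astype(np.uint8)
    # 3x3 erosion: a pixel survives only if its whole neighbourhood is 1
    pad = np.pad(mask, 1)
    er = np.ones_like(mask)
    for di in range(3):
        for dj in range(3):
            er &= pad[di:di + mask.shape[0], dj:dj + mask.shape[1]]
    # 3x3 dilation of the eroded mask completes the opening
    pad = np.pad(er, 1)
    op = np.zeros_like(mask)
    for di in range(3):
        for dj in range(3):
            op |= pad[di:di + mask.shape[0], dj:dj + mask.shape[1]]
    ys, xs = np.nonzero(op)
    centre = (xs.mean(), ys.mean()) if xs.size else None
    return op, centre

def associate(centre_a, centre_b, mov):
    """Step 53: ROIs of consecutive frames are associated when the
    distance D_n between their centres is below the motion threshold Mov."""
    d = np.hypot(centre_b[0] - centre_a[0], centre_b[1] - centre_a[1])
    return d < mov
```

Chains of frame-to-frame associations then yield the cirrus track of step 54, while unassociated single-frame detections can be discarded as clutter.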
In summary, due to the adoption of the technical scheme, the invention has the beneficial effects that:
The detection device detects the cirrus cloud in the remote sensing image by image processing, which reduces the requirements on the sensor while allowing the position and size of the cirrus cloud to be detected accurately; moreover, the adopted algorithm is simple, the calculation efficiency is high, and real-time requirements can be met.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the appended claims.
Claims (3)
1. A cloud layer image detection method is characterized by comprising the following steps:
the detection device receives a remote sensing image sequence f_n(x, y) containing N frames of remote sensing images, where n = 1, ..., N is the frame number and N is the total frame number;
the detection device extracts the energy features of the remote sensing images from the remote sensing image sequence to obtain an energy saliency map E_n(x, y);
The detection device calculates the brightness contrast of the energy saliency map to obtain a brightness contrast image
The detection device extracts the texture features of the brightness contrast image to obtain a texture feature map
The detection device performs interframe correlation on the texture feature map by using image motion information, extracts a region of interest (ROI) and obtains a cloud layer detection result, the detection device performs interframe correlation on the texture feature map by using the image motion information and extracts the region of interest (ROI), and the obtained cloud layer detection result specifically comprises the following steps:
the detection device sets a brightness threshold T and performs threshold segmentation on the brightness contrast result to obtain a segmentation result Th_n(x, y): Th_n(x, y) = 1 where the brightness contrast exceeds T, and Th_n(x, y) = 0 elsewhere;
the detection device performs a morphological opening on the segmentation result Th_n(x, y) to eliminate isolated bright spots, remove part of the clutter and fill holes, obtaining a processed segmentation result Th′_n(x, y);
the detection device marks the regions with pixel value 1 in Th′_n(x, y) as regions of interest (ROI) and obtains the centre coordinates (x_n, y_n) of the current ROI;
the detection device sets a motion threshold Mov and calculates the distance between the ROI centres of the n-th frame and the (n+1)-th frame: D_n = sqrt((x_{n+1} − x_n)² + (y_{n+1} − y_n)²), n = 1, ..., N−1;
if D_n < Mov, the detection device associates the ROI regions of the n-th and (n+1)-th frames;
the detection device extracts the mutually associated ROI regions, which are the cloud layer detection result.
2. The method of claim 1, wherein the step in which the detection device extracts the energy features of the remote sensing images from the sequence to obtain an energy saliency map E_n(x, y) specifically comprises:
the detection device performs Fourier transform on the remote sensing image sequence:
s_n(ω_x, ω_y) = F[f_n(x, y)], n = 1, ..., N, where F denotes the Fourier transform operator and (ω_x, ω_y) are the coordinates after transformation to the frequency domain;
the detection device calculates the amplitude of Fourier transform, and takes logarithm to obtain a log spectrum:
L_n(ω_x, ω_y) = log[|s_n(ω_x, ω_y)|], where |·| denotes the magnitude operator,
calculating the phase spectrum P(ω_x, ω_y) = angle[s_n(ω_x, ω_y)], where angle[·] denotes the phase operator;
the detection device convolves the log spectrum with a mean-filter template of size m × m to obtain a smoothed spectrum:
V(ω_x, ω_y) = L_n(ω_x, ω_y) * h_m(ω_x, ω_y), where the mean-filter template h_m is an m × m matrix with every entry equal to 1/m²;
the detection device subtracts the smoothed spectrum from the log spectrum to obtain the spectral residual R(ω_x, ω_y):
R(ω_x, ω_y) = L_n(ω_x, ω_y) − V(ω_x, ω_y);
the detection device applies a two-dimensional inverse discrete Fourier transform to the spectral residual R(ω_x, ω_y) together with the phase spectrum P(ω_x, ω_y) to obtain the energy saliency map E_n(x, y) = |F⁻¹{exp[R(ω_x, ω_y) + i·P(ω_x, ω_y)]}|².
3. The method of claim 1, wherein the step in which the detection device extracts the texture features of the brightness contrast image to obtain a texture feature map G_n(x, y) specifically comprises:
the detection device constructs a Gabor filter:
g(x, y) = exp[−(x′² + γ²y′²)/(2σ²)] · cos(2πx′/λ + ψ),
where x′ = a^(−m)(x cos θ + y sin θ) and y′ = a^(−m)(−x sin θ + y cos θ); a^(−m) is the scale factor, θ denotes the direction of the kernel function, λ the wavelength of the sinusoidal function, ψ the phase shift, σ the standard deviation of the Gaussian function, and γ the aspect ratio of the function;
the detection device convolves the filter with the brightness contrast image to obtain the filtering result, which is the texture feature map: G_n(x, y) = g(x, y) * C_n(x, y).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710345541.1A CN107273803B (en) | 2017-05-16 | 2017-05-16 | Cloud layer image detection method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107273803A CN107273803A (en) | 2017-10-20 |
CN107273803B true CN107273803B (en) | 2020-04-24 |
Family
ID=60065156
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710345541.1A Active CN107273803B (en) | 2017-05-16 | 2017-05-16 | Cloud layer image detection method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107273803B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108648184A (en) * | 2018-05-10 | 2018-10-12 | 电子科技大学 | A kind of detection method of remote sensing images high-altitude cirrus |
CN110667847B (en) * | 2019-10-17 | 2020-08-18 | 安徽省徽腾智能交通科技有限公司泗县分公司 | Unmanned aerial vehicle intelligent flying height control platform |
CN111967508A (en) * | 2020-07-31 | 2020-11-20 | 复旦大学 | Time series abnormal point detection method based on saliency map |
CN111812106B (en) * | 2020-09-15 | 2020-12-08 | 沈阳风驰软件股份有限公司 | Method and system for detecting glue overflow of appearance surface of wireless earphone |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103093241A (en) * | 2013-01-23 | 2013-05-08 | 北京理工大学 | Optical remote sensing image non-homogeneous cloud layer discriminating method based on homogenization processing |
CN105868745A (en) * | 2016-06-20 | 2016-08-17 | 重庆大学 | Weather identifying method based on dynamic scene perception |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103052962B (en) * | 2010-11-24 | 2016-01-27 | 印度统计学院 | The classification of rough wavelet granular space and multi-spectral remote sensing image |
Also Published As
Publication number | Publication date |
---|---|
CN107273803A (en) | 2017-10-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107273803B (en) | Cloud layer image detection method | |
US20210049769A1 (en) | Vibe-based three-dimensional sonar point cloud image segmentation method | |
CN103325112B (en) | Moving target method for quick in dynamic scene | |
WO2015180527A1 (en) | Image saliency detection method | |
CN109919960B (en) | Image continuous edge detection method based on multi-scale Gabor filter | |
US20120328161A1 (en) | Method and multi-scale attention system for spatiotemporal change determination and object detection | |
CN105184804B (en) | Small targets detection in sea clutter method based on Airborne IR camera Aerial Images | |
CN103871039B (en) | Generation method for difference chart in SAR (Synthetic Aperture Radar) image change detection | |
CN108765327B (en) | Image rain removing method based on depth of field and sparse coding | |
CN102360503B (en) | SAR (Specific Absorption Rate) image change detection method based on space approach degree and pixel similarity | |
CN108257153B (en) | Target tracking method based on direction gradient statistical characteristics | |
CN110110618B (en) | SAR target detection method based on PCA and global contrast | |
CN111161222A (en) | Printing roller defect detection method based on visual saliency | |
CN111145121B (en) | Confidence term filter target tracking method for strengthening multi-feature fusion | |
CN107516322A (en) | A kind of image object size based on logarithm pole space and rotation estimation computational methods | |
CN102175993A (en) | Radar scene matching feature reference map preparation method based on satellite SAR (synthetic aperture radar) images | |
CN109800713A (en) | The remote sensing images cloud detection method of optic increased based on region | |
CN104951765A (en) | Remote sensing image target division method based on shape priori information and vision contrast ratio | |
CN116863357A (en) | Unmanned aerial vehicle remote sensing dyke image calibration and intelligent segmentation change detection method | |
CN103646397B (en) | Real-time synthetic aperture perspective imaging method based on multisource data fusion | |
CN109767442B (en) | Remote sensing image airplane target detection method based on rotation invariant features | |
CN112884795A (en) | Power transmission line inspection foreground and background segmentation method based on multi-feature significance fusion | |
Fu et al. | A noise-resistant superpixel segmentation algorithm for hyperspectral images | |
CN106778822B (en) | Image straight line detection method based on funnel transformation | |
Wu et al. | Research on crack detection algorithm of asphalt pavement |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||