CN107273803A - Cloud layer image detecting method - Google Patents

Cloud layer image detecting method

Info

Publication number
CN107273803A
CN107273803A (application CN201710345541.1A)
Authority
CN
China
Prior art keywords
detection means
remote sensing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710345541.1A
Other languages
Chinese (zh)
Other versions
CN107273803B (en)
Inventor
王俊
刘延利
彭真明
王晓阳
杨春平
李霞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Environmental Features
Original Assignee
Beijing Institute of Environmental Features
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Environmental Features filed Critical Beijing Institute of Environmental Features
Priority to CN201710345541.1A priority Critical patent/CN107273803B/en
Publication of CN107273803A publication Critical patent/CN107273803A/en
Application granted granted Critical
Publication of CN107273803B publication Critical patent/CN107273803B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes
    • G06V 20/13 Satellite images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462 Salient features, e.g. scale invariant feature transforms [SIFT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention provides a cloud layer image detection method, including: a detection means receives a remote sensing image sequence comprising N frames of remote sensing images; the detection means extracts the energy features of the remote sensing images from the sequence to obtain an energy saliency map; the detection means calculates the luminance contrast of the energy saliency map to obtain a luminance contrast image; the detection means extracts the texture features of the luminance contrast image to obtain a texture feature map; using image motion information, the detection means performs inter-frame association on the texture feature map, extracts regions of interest, and obtains the cirrus detection result. The method reduces the requirements on the sensor while accurately detecting the position and size of cirrus clouds; moreover, the algorithm used is simple and computationally efficient, and can meet real-time requirements.

Description

Cloud layer image detecting method
Technical field
The present invention relates to the field of remote sensing technology, and in particular to a cloud layer image detection method based on energy features, texture features and motion features.
Background technology
In recent years, remote sensing technology has been widely applied in fields such as modern military affairs, space-based detection and meteorological analysis, and remote sensing image interpretation is one of its key technologies. Remote sensing image interpretation refers to the comprehensive analysis of an image according to its geometric features and physical properties, so as to reveal the qualitative and quantitative attributes of objects or phenomena and the relationships between them, and further to study their formation, evolution and distribution; in other words, it identifies the nature of the objects or phenomena represented by image features. False alarm sources have a large influence on remote sensing image interpretation. Remote sensing images often contain various false alarm sources, which are characterised by high radiation intensity and variation over time. For example, high-altitude cirrus is an important false alarm source. One third to one half of the earth's surface is covered by cloud, and for remote sensing and target detection systems the upper cloud layer is a main source of clutter. Because cirrus changes shape quickly and moves unpredictably, it brings considerable difficulty to the interpretation of remote sensing images. Studying a suitable cirrus detection algorithm can improve the precision of remote sensing imaging and detection systems, which is advantageous for military and space applications.
In general, cirrus detection often uses spectral methods: multi-channel data such as visible and infrared are collected, and the cloud layer is detected using the radiation difference between the cloud and other ground objects. However, this approach has poor real-time performance and places high demands on the imaging device, which is not conducive to the development of cirrus detection.
Summary of the invention
The technical problem to be solved by the present invention is to provide a cloud layer image detection method based on energy, texture and motion features, solving the problem that high-altitude cirrus is difficult to detect accurately in the prior art.
To solve the above technical problem, an embodiment of the present invention provides a cloud layer image detection method, including: a detection means receives a remote sensing image sequence f_n(x, y) comprising N frames of remote sensing images, where n = 1, …, N is the frame index and N is the total number of frames; the detection means extracts the energy features of the remote sensing images from the sequence to obtain an energy saliency map f_n^1(x, y); the detection means calculates the luminance contrast of the energy saliency map to obtain a luminance contrast image f_n^2(x, y); the detection means extracts the texture features of the luminance contrast image to obtain a texture feature map f_n^3(x, y); using image motion information, the detection means performs inter-frame association on the texture feature map, extracts regions of interest (ROI), and obtains the cirrus detection result.
Optionally, the step in which the detection means uses image motion information, performs inter-frame association on the texture feature map, extracts regions of interest (ROI) and obtains the cirrus detection result specifically comprises:
The detection means sets a luminance threshold T and performs threshold segmentation on the luminance contrast result, obtaining the segmentation result Th_n(x, y); the detection means applies an opening operation to the segmentation result Th_n(x, y), eliminating isolated bright spots and removing part of the clutter while filling holes, obtaining the processed segmentation result Th'_n(x, y); the detection means marks the regions with pixel value 1 in Th'_n(x, y) as regions of interest (ROI) and calculates the centre coordinate (x_n, y_n) of the current ROI; the detection means sets a motion threshold Mov and calculates the distance D_n between the ROI centres of the n-th and (n+1)-th frames; if D_n < Mov, the detection means associates the ROI regions of the n-th and (n+1)-th frames; the detection means extracts the mutually associated ROI regions, which constitute the cirrus detection result.
Optionally, the step in which the detection means extracts the energy features of the remote sensing images from the remote sensing image sequence and obtains the energy saliency map f_n^1(x, y) specifically comprises:
The detection means applies a Fourier transform to the remote sensing image sequence: s_n(ω_x, ω_y) = F[f_n(x, y)], n = 1, …, N, where F denotes the Fourier transform operator and (ω_x, ω_y) are the frequency-domain coordinates; the detection means calculates the amplitude of the Fourier transform and takes the logarithm to obtain the log spectrum L_n(ω_x, ω_y) = log|s_n(ω_x, ω_y)|, where |·| denotes the amplitude operator; it calculates the phase spectrum P(ω_x, ω_y) = φ[s_n(ω_x, ω_y)], where φ denotes the phase operator; the detection means convolves the log spectrum with an m × m mean filter template to obtain the smoothed spectrum V(ω_x, ω_y) = L_n(ω_x, ω_y) * h_m(ω_x, ω_y), where the mean filter template h_m is an m × m matrix with all entries equal to 1/m²; the detection means subtracts the smoothed spectrum from the log spectrum to obtain the spectral residual R(ω_x, ω_y) = L_n(ω_x, ω_y) − V(ω_x, ω_y); the detection means applies a two-dimensional inverse discrete Fourier transform to the spectral residual R(ω_x, ω_y) and the phase spectrum P(ω_x, ω_y) to obtain the energy saliency map f_n^1(x, y) = |F⁻¹[exp{R(ω_x, ω_y) + iP(ω_x, ω_y)}]|².
Optionally, the step in which the detection means extracts the texture features of the luminance contrast image and obtains the texture feature map f_n^3(x, y) specifically comprises: the detection means constructs a Gabor filter g(x, y) = exp(−(x'² + γ²y'²)/(2σ²)) cos(2πx'/λ + ψ), where x' = a⁻ᵐ(x cos θ + y sin θ), y' = a⁻ᵐ(−x sin θ + y cos θ), a⁻ᵐ is the scale factor, θ denotes the direction of the kernel function, λ the wavelength of the sinusoidal function, ψ the phase offset, σ the standard deviation of the Gaussian function, and γ the aspect ratio of the function; the detection means convolves the filter with the remote sensing image to obtain the filter result, which is the texture feature map: f_n^3(x, y) = f_n^2(x, y) * g(x, y), n = 1, …, N.
The above technical solution of the present invention has the following beneficial effects: because the detection means performs cirrus detection in remote sensing images by means of image processing, the requirements on the sensor are reduced while the position and size of cirrus can be accurately detected; moreover, the algorithm used is simple and computationally efficient, and can meet real-time requirements.
The above is only an overview of the technical solution of the present invention. In order to understand the technical means of the invention more clearly so that it can be implemented according to the contents of the specification, preferred embodiments of the invention are described in detail below with reference to the accompanying drawings.
Brief description of the drawings
Fig. 1 is the flow chart of the cloud layer image detection method of the present invention.
Fig. 2 is a group of infrared images containing cirrus, as an example of the present invention.
Fig. 3 is the energy saliency map of the infrared image of Fig. 2.
Fig. 4 is the luminance contrast image of the energy saliency map of Fig. 3.
Fig. 5 is the texture feature map of the luminance contrast image of Fig. 4.
Fig. 6 is the cirrus detection result.
Embodiment
To make the technical problem to be solved, the technical solution and the advantages of the present invention clearer, a detailed description is given below with reference to the accompanying drawings and specific embodiments.
To address the problem that existing high-altitude cirrus is difficult to detect accurately, the present invention provides a cloud layer image detection method based on energy, texture and motion features.
As shown in Fig. 1, an embodiment of the present invention proposes a cloud layer image detection method based on energy, texture and motion features. The method is applied to a detection means and specifically includes:
Step 1: the detection means receives N frames of remote sensing images, recorded as a remote sensing image sequence, which can be represented by the function f_n(x, y), where n = 1, …, N is the frame index and N is the total number of frames. For example, Fig. 2 shows a group of infrared remote sensing images containing cirrus.
Step 2: the detection means extracts the energy feature of each frame from the remote sensing image sequence and, according to the energy feature of each frame, obtains an energy saliency map, denoted f_n^1(x, y). For example, Fig. 3 is the energy saliency map obtained by performing energy feature extraction on the infrared image containing cirrus of Fig. 2.
Here the energy feature of each frame uses the spectral residual feature. The specific steps are as follows:
21. The detection means applies a Fourier transform to the remote sensing image sequence:
s_n(ω_x, ω_y) = F[f_n(x, y)], n = 1, …, N
where F denotes the Fourier transform operator and (ω_x, ω_y) are the frequency-domain coordinates.
22. The detection means calculates the amplitude of the Fourier transform and takes the logarithm to obtain the log spectrum:
L_n(ω_x, ω_y) = log|s_n(ω_x, ω_y)|
where |·| denotes the amplitude operator. The phase spectrum is calculated at the same time:
P(ω_x, ω_y) = φ[s_n(ω_x, ω_y)]
where φ denotes the phase operator.
23. The detection means convolves the log spectrum obtained in the previous step with an m × m mean filter template to obtain the smoothed spectrum:
V(ω_x, ω_y) = L_n(ω_x, ω_y) * h_m(ω_x, ω_y)
where the mean filter template h_m is an m × m matrix with all entries equal to 1/m².
24. The detection means subtracts the smoothed spectrum from the log spectrum to obtain the spectral residual:
R(ω_x, ω_y) = L_n(ω_x, ω_y) − V(ω_x, ω_y)
25. The detection means applies a two-dimensional inverse discrete Fourier transform to the spectral residual R(ω_x, ω_y) and the phase spectrum P(ω_x, ω_y) to obtain the energy saliency map:
f_n^1(x, y) = |F⁻¹[exp{R(ω_x, ω_y) + iP(ω_x, ω_y)}]|²
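Steps 21 to 25 above can be sketched compactly with NumPy. This is an illustrative reading of the spectral residual method, not the patented implementation; the function name, the epsilon inside the logarithm and the wrap-around padding of the mean filter are assumptions.

```python
import numpy as np

def spectral_residual_saliency(frame, m=3):
    """Energy saliency map f_n^1 via the spectral residual of one frame f_n."""
    s = np.fft.fft2(frame)                 # step 21: Fourier transform s_n
    L = np.log(np.abs(s) + 1e-12)          # step 22: log amplitude spectrum L_n
    P = np.angle(s)                        # step 22: phase spectrum P

    # Step 23: smooth L with an m x m mean filter h_m (wrap padding assumed).
    pad = m // 2
    Lp = np.pad(L, pad, mode="wrap")
    V = np.zeros_like(L)
    for i in range(m):
        for j in range(m):
            V += Lp[i:i + L.shape[0], j:j + L.shape[1]]
    V /= m * m

    R = L - V                              # step 24: spectral residual R
    # Step 25: inverse transform of exp(R + iP); squared magnitude is f_n^1.
    return np.abs(np.fft.ifft2(np.exp(R + 1j * P))) ** 2
```

Applied per frame, this yields a non-negative map of the same size as the input, with high values where the spectrum deviates from its smoothed background.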
Step 3: the detection means calculates the luminance contrast of the energy saliency map and obtains a luminance contrast image, denoted f_n^2(x, y). For example, Fig. 4 is the luminance contrast image obtained after performing luminance contrast calculation on the energy saliency map of Fig. 3.
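The patent does not spell out how the luminance contrast f_n^2 is computed. As a placeholder only, the sketch below uses a common local measure, standard deviation over mean in a sliding window (a Weber-style contrast); the window size and the measure itself are assumptions, not the patented definition.

```python
import numpy as np

def luminance_contrast(f1, win=5):
    """Hypothetical luminance-contrast image: local std / local mean of f_n^1."""
    pad = win // 2
    fp = np.pad(f1, pad, mode="edge")
    h, w = f1.shape
    s = np.zeros_like(f1)
    s2 = np.zeros_like(f1)
    # Accumulate windowed sums and sums of squares via shifted views.
    for i in range(win):
        for j in range(win):
            v = fp[i:i + h, j:j + w]
            s += v
            s2 += v * v
    n = win * win
    mean = s / n
    var = np.maximum(s2 / n - mean ** 2, 0.0)   # clamp tiny negative rounding
    return np.sqrt(var) / (mean + 1e-12)
```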
Step 4: the detection means extracts the texture features of the luminance contrast image and obtains a texture feature map, denoted f_n^3(x, y). For example, Fig. 5 is the texture feature map obtained after extracting texture features from the luminance contrast image of Fig. 4.
Here the texture features can be represented using directional Gabor features. The process of obtaining the texture feature map specifically includes:
41. The detection means constructs a Gabor filter:
g(x, y) = exp(−(x'² + γ²y'²)/(2σ²)) cos(2πx'/λ + ψ)
where x' = a⁻ᵐ(x cos θ + y sin θ) and y' = a⁻ᵐ(−x sin θ + y cos θ). Here a⁻ᵐ is the scale factor, θ denotes the direction of the Gabor kernel function, λ the wavelength of the sinusoidal function, ψ the phase offset, σ the standard deviation of the Gaussian function, and γ the aspect ratio of the function. By choosing different directions θ, Gabor filters for multiple directions can be obtained.
42. The detection means convolves the Gabor filter with the input image to obtain the filter result, which is the texture feature map:
f_n^3(x, y) = f_n^2(x, y) * g(x, y), n = 1, …, N
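Steps 41 and 42 can be sketched as follows. The kernel matches the standard real Gabor form with the patent's parameters (θ, λ, ψ, σ, γ and the scale factor a⁻ᵐ, here called `scale`); the kernel size, the example parameter values and the choice to keep the maximum response over directions are assumptions.

```python
import numpy as np

def gabor_kernel(size=15, scale=1.0, theta=0.0, lam=8.0, psi=0.0,
                 sigma=4.0, gamma=0.5):
    """Directional Gabor kernel g(x, y); `scale` plays the role of a^(-m)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xp = scale * (x * np.cos(theta) + y * np.sin(theta))
    yp = scale * (-x * np.sin(theta) + y * np.cos(theta))
    return (np.exp(-(xp ** 2 + gamma ** 2 * yp ** 2) / (2 * sigma ** 2))
            * np.cos(2 * np.pi * xp / lam + psi))

def texture_feature(f2, thetas=(0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """f_n^3 = f_n^2 * g: max Gabor response over several directions."""
    h, w = f2.shape
    out = np.full((h, w), -np.inf)
    for th in thetas:
        g = gabor_kernel(theta=th)
        half = g.shape[0] // 2
        fp = np.pad(f2, half, mode="edge")
        resp = np.zeros((h, w))
        # Direct 2-D correlation (equal to convolution for the
        # even-symmetric psi = 0 kernel used here).
        for i in range(g.shape[0]):
            for j in range(g.shape[1]):
                resp += g[i, j] * fp[i:i + h, j:j + w]
        out = np.maximum(out, resp)
    return out
```

In practice a library kernel such as OpenCV's `cv2.getGaborKernel` with FFT-based convolution would replace the nested loops.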
Step 5: using image motion information, the detection means performs inter-frame association on the texture feature map, extracts regions of interest, and obtains the cirrus detection result. For example, Fig. 6 is the final cirrus detection result.
The process of step 5 above is described in detail below:
51. The detection means sets a luminance threshold T and performs threshold segmentation on the luminance contrast result, obtaining the segmentation result, denoted Th_n(x, y):
Th_n(x, y) = 1 if f_n^3(x, y) ≥ T, and 0 if f_n^3(x, y) < T, for n = 1, …, N.
52. The detection means applies an opening operation to the segmentation result, eliminating isolated bright spots and removing part of the clutter while filling holes; the processed segmentation result is denoted Th'_n(x, y). The regions with pixel value 1 in Th'_n(x, y) are marked as regions of interest (ROI), and the centre coordinate of the current ROI, denoted (x_n, y_n), is calculated.
53. The detection means sets a motion threshold Mov and calculates the distance between the ROI centres of the n-th and (n+1)-th frames:
D_n = √((x_n − x_{n+1})² + (y_n − y_{n+1})²), n = 1, …, N − 1.
If D_n < Mov, the detection means associates the ROI regions of the n-th and (n+1)-th frames; if D_n > Mov, the association fails and the detection means does not associate the ROI regions of the n-th and (n+1)-th frames.
54. The detection means extracts the mutually associated ROI regions; the mutually associated ROI regions are the cirrus detection result.
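Steps 51 to 54 map directly onto SciPy's morphology routines. A minimal sketch, assuming a single ROI per frame and placeholder values for the thresholds T and Mov:

```python
import numpy as np
from scipy import ndimage

def detect_cirrus(f3_frames, T=0.5, mov=10.0):
    """Threshold each texture map, clean it by opening + hole filling,
    then associate ROI centres of consecutive frames when D_n < Mov."""
    centres, masks = [], []
    for f3 in f3_frames:
        th = f3 >= T                                # Th_n(x, y)
        th = ndimage.binary_opening(th)             # drop isolated bright spots
        th = ndimage.binary_fill_holes(th)          # fill holes -> Th'_n(x, y)
        masks.append(th)
        ys, xs = np.nonzero(th)
        centres.append((xs.mean(), ys.mean()) if xs.size else None)

    associated = []                                 # frame indices n with D_n < Mov
    for n in range(len(centres) - 1):
        if centres[n] is None or centres[n + 1] is None:
            continue
        dn = np.hypot(centres[n][0] - centres[n + 1][0],
                      centres[n][1] - centres[n + 1][1])
        if dn < mov:
            associated.append(n)
    # The mutually associated ROI masks are the cirrus detection result.
    return [masks[n] for n in associated], associated
```

For scenes with several disjoint cloud regions, `ndimage.label` would be applied first so that each connected component gets its own centre, but the patent text describes a single ROI centre per frame.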
In summary, by adopting the above technical solution, the beneficial effects of the invention are as follows:
Because the detection means performs cirrus detection in remote sensing images by means of image processing, the requirements on the sensor are reduced while the position and size of cirrus can be accurately detected; moreover, the algorithm used is simple and computationally efficient, and can meet real-time requirements.
The above is a preferred embodiment of the present invention. It should be noted that those skilled in the art can make several improvements and modifications without departing from the principle of the invention, and these improvements and modifications should also be regarded as falling within the protection scope of the invention.

Claims (4)

1. A cloud layer image detection method, characterised by comprising:
a detection means receives a remote sensing image sequence f_n(x, y) comprising N frames of remote sensing images, where n = 1, …, N is the frame index and N is the total number of frames;
the detection means extracts the energy features of the remote sensing images from the remote sensing image sequence and obtains an energy saliency map f_n^1(x, y);
the detection means calculates the luminance contrast of the energy saliency map and obtains a luminance contrast image f_n^2(x, y);
the detection means extracts the texture features of the luminance contrast image and obtains a texture feature map f_n^3(x, y);
using image motion information, the detection means performs inter-frame association on the texture feature map, extracts regions of interest (ROI), and obtains the cirrus detection result.
2. The method as claimed in claim 1, characterised in that the step in which the detection means uses image motion information, performs inter-frame association on the texture feature map, extracts regions of interest (ROI) and obtains the cirrus detection result specifically comprises:
the detection means sets a luminance threshold T and performs threshold segmentation on the luminance contrast result, obtaining the segmentation result Th_n(x, y):
Th_n(x, y) = 1 if f_n^3(x, y) ≥ T, and 0 if f_n^3(x, y) < T, for n = 1, …, N;
the detection means applies an opening operation to the segmentation result Th_n(x, y), eliminating isolated bright spots and removing part of the clutter while filling holes, obtaining the processed segmentation result Th'_n(x, y);
the detection means marks the regions with pixel value 1 in Th'_n(x, y) as regions of interest (ROI) and obtains the centre coordinate (x_n, y_n) of the current ROI;
the detection means sets a motion threshold Mov and calculates the distance between the ROI centres of the n-th and (n+1)-th frames:
D_n = √((x_n − x_{n+1})² + (y_n − y_{n+1})²), n = 1, …, N − 1;
if D_n < Mov, the detection means associates the ROI regions of the n-th and (n+1)-th frames;
the detection means extracts the mutually associated ROI regions, and the mutually associated ROI regions are the cirrus detection result.
3. The method as claimed in claim 1, characterised in that the step in which the detection means extracts the energy features of the remote sensing images from the remote sensing image sequence and obtains the energy saliency map f_n^1(x, y) specifically comprises:
the detection means applies a Fourier transform to the remote sensing image sequence:
s_n(ω_x, ω_y) = F[f_n(x, y)], n = 1, …, N, where F denotes the Fourier transform operator and (ω_x, ω_y) are the frequency-domain coordinates;
the detection means calculates the amplitude of the Fourier transform and takes the logarithm to obtain the log spectrum:
L_n(ω_x, ω_y) = log|s_n(ω_x, ω_y)|, where |·| denotes the amplitude operator,
and calculates the phase spectrum P(ω_x, ω_y):
P(ω_x, ω_y) = φ[s_n(ω_x, ω_y)], where φ denotes the phase operator;
the detection means convolves the log spectrum with an m × m mean filter template to obtain the smoothed spectrum:
V(ω_x, ω_y) = L_n(ω_x, ω_y) * h_m(ω_x, ω_y), where the mean filter template h_m is an m × m matrix with all entries equal to 1/m²;
the detection means subtracts the smoothed spectrum from the log spectrum to obtain the spectral residual R(ω_x, ω_y):
R(ω_x, ω_y) = L_n(ω_x, ω_y) − V(ω_x, ω_y);
the detection means applies a two-dimensional inverse discrete Fourier transform to the spectral residual R(ω_x, ω_y) and the phase spectrum P(ω_x, ω_y) to obtain the energy saliency map f_n^1(x, y):
f_n^1(x, y) = |F⁻¹[exp{R(ω_x, ω_y) + iP(ω_x, ω_y)}]|².
4. The method as claimed in claim 1, characterised in that the step in which the detection means extracts the texture features of the luminance contrast image and obtains the texture feature map f_n^3(x, y) specifically comprises:
the detection means constructs a Gabor filter:
g(x, y) = exp(−(x'² + γ²y'²)/(2σ²)) cos(2πx'/λ + ψ),
where x' = a⁻ᵐ(x cos θ + y sin θ) and y' = a⁻ᵐ(−x sin θ + y cos θ); a⁻ᵐ is the scale factor, θ denotes the direction of the kernel function, λ the wavelength of the sinusoidal function, ψ the phase offset, σ the standard deviation of the Gaussian function, and γ the aspect ratio of the function;
the detection means convolves the filter with the remote sensing image to obtain the filter result, which is the texture feature map:
f_n^3(x, y) = f_n^2(x, y) * g(x, y), n = 1, …, N.
CN201710345541.1A 2017-05-16 2017-05-16 Cloud layer image detection method Active CN107273803B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710345541.1A CN107273803B (en) 2017-05-16 2017-05-16 Cloud layer image detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710345541.1A CN107273803B (en) 2017-05-16 2017-05-16 Cloud layer image detection method

Publications (2)

Publication Number Publication Date
CN107273803A true CN107273803A (en) 2017-10-20
CN107273803B CN107273803B (en) 2020-04-24

Family

ID=60065156

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710345541.1A Active CN107273803B (en) 2017-05-16 2017-05-16 Cloud layer image detection method

Country Status (1)

Country Link
CN (1) CN107273803B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120183225A1 (en) * 2010-11-24 2012-07-19 Indian Statistical Institute Rough wavelet granular space and classification of multispectral remote sensing image
CN103093241A (en) * 2013-01-23 2013-05-08 北京理工大学 Optical remote sensing image non-homogeneous cloud layer discriminating method based on homogenization processing
CN105868745A (en) * 2016-06-20 2016-08-17 重庆大学 Weather identifying method based on dynamic scene perception

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108648184A (en) * 2018-05-10 2018-10-12 University of Electronic Science and Technology of China Detection method for high-altitude cirrus in remote sensing images
CN110667847A (en) * 2019-10-17 2020-01-10 Yu Gang Unmanned aerial vehicle intelligent flying height control platform
CN110667847B (en) * 2019-10-17 2020-08-18 Sixian Branch of Anhui Huiteng Intelligent Transportation Technology Co., Ltd. Unmanned aerial vehicle intelligent flying height control platform
CN111967508A (en) * 2020-07-31 2020-11-20 Fudan University Time series abnormal point detection method based on saliency map
CN111812106A (en) * 2020-09-15 2020-10-23 Shenyang Fengchi Software Co., Ltd. Method and system for detecting glue overflow of appearance surface of wireless earphone
CN111812106B (en) * 2020-09-15 2020-12-08 Shenyang Fengchi Software Co., Ltd. Method and system for detecting glue overflow of appearance surface of wireless earphone

Also Published As

Publication number Publication date
CN107273803B (en) 2020-04-24

Similar Documents

Publication Publication Date Title
US11244197B2 (en) Fast and robust multimodal remote sensing image matching method and system
CN103729848B (en) High-spectrum remote sensing small target detecting method based on spectrum saliency
CN103971115B (en) Automatic extraction method for newly-increased construction land image spots based on NDVI and PanTex index
CN104376330B (en) Polarimetric SAR Image Ship Target Detection method based on super-pixel scattering mechanism
CN104835175B (en) Object detection method in a kind of nuclear environment of view-based access control model attention mechanism
CN107273803A (en) Cloud layer image detecting method
CN109684925B (en) Depth image-based human face living body detection method and device
CN103914847A (en) SAR image registration method based on phase congruency and SIFT
CN105205858A (en) Indoor scene three-dimensional reconstruction method based on single depth vision sensor
CN105139412A (en) Hyperspectral image corner detection method and system
CN106296638A (en) Significance information acquisition device and significance information acquisition method
CN105184779A (en) Rapid-feature-pyramid-based multi-dimensioned tracking method of vehicle
CN102129573A (en) SAR (Synthetic Aperture Radar) image segmentation method based on dictionary learning and sparse representation
CN103593832A (en) Method for image mosaic based on feature detection operator of second order difference of Gaussian
CN103955701A (en) Multi-level-combined multi-look synthetic aperture radar image target recognition method
CN102629380B (en) Remote sensing image change detection method based on multi-group filtering and dimension reduction
Zhu et al. Robust registration of aerial images and LiDAR data using spatial constraints and Gabor structural features
CN105488541A (en) Natural feature point identification method based on machine learning in augmented reality system
CN110110618A (en) A kind of SAR target detection method based on PCA and global contrast
CN110533025A (en) The millimeter wave human body image detection method of network is extracted based on candidate region
CN103533332B (en) A kind of 2D video turns the image processing method of 3D video
CN108510531A (en) SAR image registration method based on PCNCC and neighborhood information
Zhang et al. Optical and SAR image dense registration using a robust deep optical flow framework
Misra et al. SPRINT: Spectra Preserving Radiance Image Fusion Technique using holistic deep edge spatial attention and Minnaert guided Bayesian probabilistic model
CN102136060A (en) Method for detecting population density

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant