CN102902956B - Ground-based visible cloud image recognition and processing method - Google Patents

Ground-based visible cloud image recognition and processing method

Info

Publication number: CN102902956B (application CN201210333246.1A)
Authority: CN (China)
Legal status: Active (granted)
Original language: Chinese (zh)
Other versions: CN102902956A
Inventors: 王敏, 周树道, 陈晓颖, 黄峰
Assignee (original and current): METEOROLOGICAL COLLEGE UNIV OF TECHNOLOGY PLA
Application filed by METEOROLOGICAL COLLEGE UNIV OF TECHNOLOGY PLA; priority to CN201210333246.1A

Abstract

The present invention discloses a ground-based visible cloud image recognition and processing method. It comprises: acquiring a cloud image; pre-processing the input colour image (denoising and enhancement); segmenting the sky background from the cloud foreground using a transparency (alpha) matting method based on a perceptual colour space; extracting the transparency values of the cloud foreground, supplemented by texture features, as joint discriminating features of the different cloud genera; and finally classifying the cloud type with a neural network against a cloud-genus feature database, then storing and displaying the corresponding features and results. The method is suitable for the automated processing of ground-based all-sky visible cloud images and overcomes the limitations of visual observation by human observers. It is easy to implement, structurally simple and low-cost, and performs well for common cloud classification; in particular, when the background sky need not be very clean, its cloud/sky separation is better than that of the usual threshold-based discrimination methods.

Description

Ground-based visible cloud image recognition and processing method
Technical field
The present invention relates to the field of digital image processing, and in particular to a ground-based visible cloud image recognition and processing method suitable for meteorological analysis at weather stations.
Background technology
Clouds are an important link in the Earth's hydrological cycle; together with terrestrial radiation they jointly influence the energy balance at local and global scales. The radiative properties and distribution of different cloud types are significant for the accuracy of weather forecasting, for global climate change and for flight support. Because clouds change from moment to moment, ground-based cloud-genus observation at home and abroad still generally relies on human visual inspection, and automatic observation remains at the exploratory stage. Among digital-image-processing approaches to cloud-genus recognition, the widely used basis for segmentation and classification is the blue/red gray-level ratio (or radiance ratio) together with the different textures of different clouds, but the final discrimination results are not satisfactory, for the following main reasons:
First, under low visibility the increase in aerosols weakens the blue component of the sky; an aerosol-laden sky turns greyish white, so if the blue/red gray-level ratio is used to separate cloud from sky, aerosol-laden air can be mistaken for cloud.
Second, cloud genera are numerous, their forms differ yet can be similar, and they change continuously, while a single image captures only one appearance; discrimination based on just one simple feature therefore cannot achieve a high recognition rate.
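As context for the blue/red ratio criterion described above, the following is a minimal Python sketch of this traditional cloud/sky separation; the threshold value is illustrative and is not specified by the patent.

```python
import numpy as np

def blue_red_ratio_mask(rgb, threshold=1.3):
    """Classical cloud/sky separation: clear-sky pixels have a large
    blue/red gray-level ratio, cloud pixels a ratio near 1.
    The threshold value is an illustrative assumption."""
    rgb = rgb.astype(np.float64)
    ratio = (rgb[..., 2] + 1e-6) / (rgb[..., 0] + 1e-6)  # B / R
    return ratio < threshold  # True where the pixel looks like cloud

# A saturated blue sky pixel vs. a grey (cloud-like) pixel
img = np.array([[[60, 120, 220], [200, 200, 200]]], dtype=np.uint8)
mask = blue_red_ratio_mask(img)
```

A greyish-white aerosol-laden sky pixel such as (180, 185, 190) also has a ratio near 1 and would be flagged as cloud, which is exactly the failure mode described above.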
Summary of the invention
To address the low cloud recognition rate of the prior art, the present invention provides a ground-based visible cloud image recognition and processing method that recognizes effectively and is easy to implement and manage. Since clouds are semi-transparent objects and different classes of cloud have different transparency values, the present invention combines traditional cloud texture features with transparency, which characterizes different clouds more completely and thereby markedly improves the recognition rate.
To achieve the above object, the technical solution adopted by the present invention is a ground-based visible cloud image recognition and processing method comprising the following steps:
1) acquiring an original image containing a ground-based visible cloud image;
2) converting the acquired colour original image from RGB space to gray space to obtain the corresponding gray-level image; then filtering the gray-level image to obtain a denoised image; then applying a nonlinear gray-level transform to the denoised image to obtain an enhanced image; and converting the enhanced image from gray space back to RGB space to obtain a colour enhanced image;
3) separating the sky background from the cloud foreground in the colour enhanced image using a natural image matting method based on a perceptual colour space;
4) extracting a feature set from the separated cloud foreground;
5) classifying the cloud genus with a trained neural-network classifier according to the data in the extracted feature set, to obtain the cloud-type discrimination result.
To make the recognition result easy to consult and to allow later verification against history, the present invention further comprises step 6): displaying and storing the cloud-type discrimination result and the cloud feature set.
Further, in step 2 of the present invention the gray-level image is denoised with an adaptive Wiener filter, the nonlinear gray-transform enhancement parameters of the image are determined with an immune genetic algorithm, and the image is then enhanced by the nonlinear gray-level transform.
Preferably, in step 4 of the present invention the feature set comprises the mean, maximum and minimum transparency of the three RGB channels of the cloud-foreground colour image, together with the per-direction means of the second moment, contrast, correlation and entropy of the cloud-foreground gray image in the four directions 0°, 45°, 90° and 135°. These quantities serve as the elementary features of cloud-genus recognition; combining multiple factors in this way improves recognition accuracy.
Further, the present invention also comprises collecting the features corresponding to different cloud genera into a cloud-genus feature database, and using this database to train the neural-network classifier. The feature types in the cloud-genus feature database correspond to the feature types of the feature set extracted in step 4; a classifier trained on such a database can quickly produce a fairly accurate cloud-genus recognition result when an unknown cloud is presented.
Beneficial effect
The present invention separates the cloud image with a transparency (alpha) matting algorithm and, on the basis of the separation, extracts the feature values of the cloud image for cloud-genus recognition, which greatly improves recognition accuracy. The structure of the invention is simple: only an existing image acquisition device and a computer are required, providing technical support for the accurate prediction of hazardous weather at weather stations. The storage and display functions of the invention also facilitate the management of meteorological information.
Brief description of the drawings
Figure 1 is the method flow chart of a specific embodiment of the invention;
Figure 2 is the flow chart of the nonlinear gray-transform image enhancement method of the invention;
Figure 3 is the flow chart of the immune genetic algorithm in the image enhancement method of Fig. 2;
Figure 4 is a schematic diagram of transparency-statistics image segmentation by the perceptual-colour-space matting method;
Figure 5 is a schematic diagram of the structural model of the cloud-type recognition BP neural network.
Detailed description of embodiments
To make the content of the present invention clearer, it is further described below with reference to the drawings and specific embodiments.
As shown in Fig. 1, the method of a specific embodiment of the invention comprises the following steps:
1) acquiring an image and converting the original image from RGB space to gray space to obtain the corresponding gray-level image; the conversions between RGB space and gray space can be realized with existing techniques.
2) applying adaptive Wiener filtering to the gray-level image obtained in step 1 to obtain a denoised image, and applying a nonlinear gray-level transform to the denoised image to obtain an enhanced image. Adaptive Wiener filtering is prior art: it adjusts its parameters according to the local statistics of the image, smoothing lightly where the local variance is large and heavily where the local variance is small.
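The adaptive Wiener filtering step can be sketched with SciPy's built-in `scipy.signal.wiener`; the window size of 5 and the synthetic test image are illustrative choices, not from the patent.

```python
import numpy as np
from scipy.signal import wiener

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))   # smooth synthetic image
noisy = clean + rng.normal(0.0, 0.1, clean.shape)     # additive Gaussian noise

# Adaptive Wiener filtering: smooths heavily where the local variance is
# small and lightly where it is large, as described above
denoised = wiener(noisy, mysize=5)

mse_noisy = float(np.mean((noisy - clean) ** 2))
mse_denoised = float(np.mean((denoised - clean) ** 2))
```

On a smooth image with additive noise, the filtered result has a lower mean-squared error against the clean image than the noisy input.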
As shown in Fig. 2, the image enhancement in the above step proceeds as follows:
2.1) normalizing the denoised image f(i, j) with formula (1):

$$g(i,j) = \frac{f(i,j) - \min(f(i,j))}{\max(f(i,j)) - \min(f(i,j))} \qquad (1)$$

In formula (1), i and j are the row and column coordinates of an image pixel, f(i, j) is the original gray value at coordinate (i, j), and g(i, j) is the gray value after processing.
2.2) to the normalized image, applying the existing basic immune genetic algorithm shown in Fig. 3 to search for the optimal transform parameters α and β (0 < α, β < 10). For different images the optimal values of α and β differ and are found automatically by the immune genetic algorithm. The fitness function of the algorithm involves four performance-evaluation parameters closely related to image quality: the image variance F_ac, the information entropy E, the pixel difference F_br and the noise change I_nc. These four indices drive the immune genetic algorithm toward the optimal parameters, i.e. they are the constituents of its fitness function; the larger the fitness value, the better the image quality. The fitness function is:

Fitness = E · I_nc · F_ac + F_br    (2)

The image variance is

$$F_{ac} = \frac{1}{n}\sum_{i=1}^{M}\sum_{j=1}^{N} g^2(i,j) - \left(\frac{1}{n}\sum_{i=1}^{M}\sum_{j=1}^{N} g(i,j)\right)^2$$

where g(i, j) is the pixel value at (i, j) and n = M × N is the total number of image pixels. The larger F_ac, the better the global contrast of the image.

The information entropy is

$$E = -\sum_{k=0}^{L-1} p_k \log_2 p_k$$

where p_k is the probability of the k-th gray level; when p_k = 0, p_k log_2 p_k is defined as 0.

The pixel difference F_br is the sum, over the whole image, of the squared differences between each pixel and the corresponding pixel two pixels away; an image with better contrast has a larger F_br:

$$F_{br} = \sum_{i=1}^{M-2}\sum_{j=1}^{N} \bigl(g(i,j) - g(i+2,j)\bigr)^2$$

The noise change after enhancement,

$$I_{nc} = \sum_{n(h) > t} 1,$$

is the number of gray levels h whose pixel count n(h) exceeds a given threshold t.
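The four fitness components and the combined fitness of formula (2) can be sketched as follows; the 256-level histogram and the threshold t = 4 are illustrative assumptions.

```python
import numpy as np

def enhancement_fitness(g, t=4):
    """Fitness = E * I_nc * F_ac + F_br (Eq. 2) for a gray image g scaled
    to [0, 1]; 256 gray levels and the threshold t are illustrative."""
    n = g.size
    # F_ac: global variance of the image
    f_ac = float(np.mean(g ** 2) - np.mean(g) ** 2)
    # Histogram over 256 gray levels, used for both E and I_nc
    counts, _ = np.histogram(g, bins=256, range=(0.0, 1.0))
    p = counts / n
    p = p[p > 0]
    entropy = float(-np.sum(p * np.log2(p)))        # information entropy E
    # F_br: squared differences between pixels two rows apart
    f_br = float(np.sum((g[:-2, :] - g[2:, :]) ** 2))
    # I_nc: number of gray levels whose pixel count exceeds the threshold t
    i_nc = int(np.sum(counts > t))
    return entropy * i_nc * f_ac + f_br

rng = np.random.default_rng(1)
img = rng.random((32, 32))
fit = enhancement_fitness(img)
```

In the immune genetic algorithm, this fitness would be evaluated on the transformed image for each candidate (α, β) pair.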
2.3) using the optimal transform parameters α and β obtained in step 2.2 to form the nonlinear transform function F(u), 0 ≤ u ≤ 1:

$$F(u) = B^{-1}(\alpha,\beta) \times \int_0^u t^{\alpha-1}(1-t)^{\beta-1}\,dt, \qquad 0 < \alpha, \beta < 10 \qquad (3)$$

where B(α, β) is the Beta function:

$$B(\alpha,\beta) = \int_0^1 t^{\alpha-1}(1-t)^{\beta-1}\,dt$$

The image after the incomplete-Beta-function gray transform is then:

g'(i,j) = F(g(i,j)),  0 ≤ g'(i,j) ≤ 1    (4)
2.4) de-normalizing the transformed image to obtain the gray-level spatial-domain enhanced image:

f'(i,j) = (Lmax − Lmin) g'(i,j) + Lmin    (5)

where f'(i, j) is the enhanced gray value at coordinate (i, j), and Lmax and Lmin are the maximum and minimum gray values of the image before normalization, so that the transformed image g'(i, j) obtained in step 2.3 is mapped back to the original gray range.
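Steps 2.1-2.4 amount to applying the regularized incomplete Beta function to the normalized gray values: SciPy's `betainc` computes exactly B⁻¹(α, β) times the integral of formula (3). A minimal sketch with fixed α = β = 2 (in practice these values come from the immune-genetic search):

```python
import numpy as np
from scipy.special import betainc

def beta_enhance(f, alpha=2.0, beta=2.0):
    """Steps 2.1-2.4: normalize (Eq. 1), apply the incomplete-Beta
    transform F(u) (Eq. 3), then map back to the original gray range
    (Eq. 5). betainc is the regularized incomplete Beta function."""
    lmin, lmax = float(f.min()), float(f.max())
    g = (f - lmin) / (lmax - lmin)       # step 2.1: normalize to [0, 1]
    g2 = betainc(alpha, beta, g)         # step 2.3: F(g) in [0, 1]
    return (lmax - lmin) * g2 + lmin     # step 2.4: de-normalize

img = np.array([[10.0, 60.0, 128.0, 200.0, 250.0]])
out = beta_enhance(img)
```

With α = β = 2 the transform reduces to the S-curve 3u² − 2u³, which stretches mid-tone contrast while preserving the extreme gray values.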
3) Since cloud is a semi-transparent object, the pixels inside the target mix the colours of the foreground and background targets; a transparency (alpha) matting algorithm can therefore effectively separate the colour contained in a single cloud pixel into foreground and background. The present invention adopts a natural matting method based on a perceptual colour space to separate the single sky background from the cloud foreground, as shown in Fig. 4. The concrete steps of the algorithm are:
3.1) Suppose the foreground/background separation is performed for a pixel P of the image. First find the point f₁ on the foreground contour closest to P, and let l denote this shortest distance. Then draw a circle F centred at f₁ with radius γ₁·l, where γ₁ is a radius factor (1 < γ₁ < 10). The foreground colour F of point P can then be computed as a weighted mean of the colour values of all known foreground points inside circle F, using a spatial Gaussian distribution as the weight to emphasize nearby points. For a point i inside the circle,

$$\omega_F^i = \frac{1}{2\pi}\, e^{-\frac{(\xi_F^i)^2}{2\, l_F^2}} \qquad (6)$$

where ξ_F^i is the two-dimensional image-space distance between foreground point i and the point f₁. The foreground colour is then

$$F \approx \frac{1}{\omega}\sum_{i\in N} \omega_F^i F_i \qquad (7)$$

where F_i is the colour value of foreground point i in circle F, N is the set of foreground pixels in circle F, and ω is the sum of the weights.

The background colour B of P is computed in the same way.
3.2) Next, compute the chromaticity alpha α_CH and luminance alpha α_IN of P, and from them the transparency value α of the point:

$$\alpha_{CH} = \frac{(C'-B')\cdot(F'-B')}{\lVert F'-B' \rVert^2} \qquad (8)$$

$$\alpha_{IN} = \frac{L_C - L_B}{L_F - L_B} \qquad (9)$$

where C', F', B' are the projections of C, F, B in RGB colour space onto the plane determined by (1,0,0), (0,1,0), (0,0,1), and L_C, L_B, L_F are respectively the luminance of the mixed colour of P, of the background colour and of the foreground colour. Define

ρ = min(L_F, L_B) / max(L_F, L_B)    (10)

d = |F′B′|    (11)

where d is the distance between F' and B' in the unit plane. The weights of α_CH and α_IN are then

$$W_{CH} = s\,d^3 + t\,\rho^3, \qquad W_{IN} = u\,d^3 + v\,\rho^3 \qquad (12)$$

where u, v, s, t are constants; in the perceptual colour space they can empirically be taken as u = 1/8000, v = t = 3.0, s = 8000. The transparency value α of P is then computed as

$$\alpha = \frac{W_{CH}\,\alpha_{CH} + W_{IN}\,\alpha_{IN}}{W_{CH} + W_{IN}} \qquad (13)$$

The cloud image is matted according to the transparency values α, thereby separating the cloud foreground from the sky background.
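For a single pixel with estimated foreground and background colours, formulas (8)-(13) can be sketched as below. Two simplifications are assumptions of this sketch: the plane projection is implemented by subtracting the channel mean (which leaves the dot products of formula (8) unchanged), and the luminance is taken as the channel mean; the weight formulas are used as printed in formula (12).

```python
import numpy as np

def alpha_from_fb(C, F, B, s=8000.0, t=3.0, u=1.0 / 8000, v=3.0):
    """Per-pixel transparency from estimated foreground F and background B
    colours (RGB in [0, 1]), following Eqs. (8)-(13)."""
    C, F, B = (np.asarray(x, dtype=np.float64) for x in (C, F, B))
    # Chromaticity: remove the mean (luminance-like) component; the
    # differences C'-B' and F'-B' are then the same as after projecting
    # onto the plane through (1,0,0), (0,1,0), (0,0,1)
    Cp, Fp, Bp = (x - x.mean() for x in (C, F, B))
    a_ch = np.dot(Cp - Bp, Fp - Bp) / (np.dot(Fp - Bp, Fp - Bp) + 1e-12)
    LC, LF, LB = C.mean(), F.mean(), B.mean()   # luminance as channel mean
    a_in = (LC - LB) / (LF - LB + 1e-12)        # Eq. (9)
    rho = min(LF, LB) / max(LF, LB)             # Eq. (10)
    d = np.linalg.norm(Fp - Bp)                 # Eq. (11)
    w_ch = s * d ** 3 + t * rho ** 3            # Eq. (12)
    w_in = u * d ** 3 + v * rho ** 3
    alpha = (w_ch * a_ch + w_in * a_in) / (w_ch + w_in)  # Eq. (13)
    return float(np.clip(alpha, 0.0, 1.0))

F = [0.9, 0.9, 0.9]     # white cloud foreground
B = [0.2, 0.4, 0.8]     # blue sky background
C = [0.55, 0.65, 0.85]  # pixel colour: a 50/50 blend of F and B
a = alpha_from_fb(C, F, B)
```

A pixel whose colour is an equal mix of foreground and background should come out with a transparency close to 0.5.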
4) Obtaining the feature set corresponding to the cloud foreground. First compute, from all the R, G, B transparency values α obtained by formula (13) for the cloud foreground, the corresponding transparency mean, maximum and minimum:

$$\alpha_1 = \frac{\sum_{i=1}^{n} \alpha_i}{n} \qquad (14)$$

α₂ = max(αᵢ)    (15)

α₃ = min(αᵢ)    (16)

Then convert the cloud foreground to gray scale and obtain, for its gray cloud image, the per-direction means of the four features derived from the gray-level co-occurrence matrices in the four directions 0°, 45°, 90° and 135°:
Gray-level co-occurrence matrix: starting from a pixel with gray level x at position (i, j), count the probability that the pixel (i + d cos θ, j + d sin θ), at distance d in direction θ, simultaneously has gray level y:

$$P_\theta(x,y) = P\bigl(f(i,j) = x,\; f(i + d\cos\theta,\, j + d\sin\theta) = y\bigr) \qquad (17)$$

Second moment, a measure of the homogeneity of the image's gray distribution:

$$\sum_x \sum_y P_\theta^2(x,y) \qquad (18)$$

Contrast, a measure of the sharpness of the image:

$$\sum_x \sum_y (x-y)^2\, P_\theta(x,y) \qquad (19)$$

Correlation, a measure of the similarity between the row direction and the column direction of the gray-level co-occurrence matrix:

$$\frac{\sum_x \sum_y x\,y\,P_\theta(x,y) - \mu_x \mu_y}{\sigma_x \sigma_y} \qquad (20)$$

In formula (20), μ_x, μ_y and σ_x, σ_y are the means and standard deviations of the row and column marginal distributions of P_θ.

Entropy, a measure of the amount of information in the image:

$$-\sum_x \sum_y P_\theta(x,y)\,\log_2 P_\theta(x,y) \qquad (21)$$
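The co-occurrence matrix of formula (17) and the four texture features, averaged over the four directions as the text prescribes, can be sketched with plain NumPy; 16 gray levels and distance d = 1 are illustrative choices.

```python
import numpy as np

def glcm(q, dy, dx, levels):
    # Normalized gray-level co-occurrence counts for offset (dy, dx), Eq. (17)
    M, N = q.shape
    a = q[max(0, -dy):M - max(0, dy), max(0, -dx):N - max(0, dx)]
    b = q[max(0, dy):M - max(0, -dy), max(0, dx):N - max(0, -dx)]
    P = np.zeros((levels, levels))
    np.add.at(P, (a.ravel(), b.ravel()), 1.0)
    return P / P.sum()

def texture_features(gray, levels=16):
    """Second moment, contrast, correlation and entropy, averaged over
    the four directions 0/45/90/135 degrees (distance d = 1)."""
    q = (gray.astype(np.int64) * levels // 256).clip(0, levels - 1)
    X, Y = np.meshgrid(np.arange(levels), np.arange(levels), indexing="ij")
    feats = np.zeros(4)
    for dy, dx in [(0, 1), (-1, 1), (-1, 0), (-1, -1)]:  # 0,45,90,135 deg
        P = glcm(q, dy, dx, levels)
        asm = np.sum(P ** 2)                       # second moment, Eq. (18)
        contrast = np.sum((X - Y) ** 2 * P)        # contrast, Eq. (19)
        mx, my = np.sum(X * P), np.sum(Y * P)
        sx = np.sqrt(np.sum((X - mx) ** 2 * P))
        sy = np.sqrt(np.sum((Y - my) ** 2 * P))
        corr = np.sum((X - mx) * (Y - my) * P) / (sx * sy)  # Eq. (20)
        ent = -np.sum(P[P > 0] * np.log2(P[P > 0]))         # Eq. (21)
        feats += np.array([asm, contrast, corr, ent])
    return feats / 4.0  # per-feature mean over the four directions

rng = np.random.default_rng(2)
img = rng.integers(0, 256, (64, 64), dtype=np.uint8)
asm, contrast, corr, ent = texture_features(img)
```

On a random-noise image the second moment is small, contrast and entropy are large, and the correlation is close to zero.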
5) Classifying the newly obtained feature data with the trained BP neural-network classifier:
The neural-network model adopts the most commonly used standard three-layer BP neural network, whose model is shown in Fig. 5. The number of input-layer nodes of the network equals the feature dimension of the samples; the output layer is the recognition/classification layer. The number of hidden nodes is chosen as the geometric mean of the number of input nodes M and the number of output nodes N. Network training adjusts the connection weights between nodes by the error back-propagation method.
The present invention also comprises collecting the features corresponding to different cloud genera into a cloud-genus feature database, and using this database to train the neural-network classifier. The feature types in the cloud-genus feature database correspond to the feature types of the feature set extracted in step 4.
As shown in Fig. 5, when training the BP network, if the input feature vector is extracted from the m-th class of cloud image, the desired output of the output layer of the corresponding BP network is that the m-th neuron outputs 1 and all other neurons output 0, so the output-layer vector can be expressed as:

$$\mathrm{out} = [\,0, 0, \ldots, 1_{(m)}, 0, \ldots, 0\,]^T$$

(the m-th element of the vector is 1).
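The desired-output construction and a forward pass of the three-layer network can be sketched as follows. The feature dimension M = 13, the class count N = 10, the weight initialization and the softmax output normalization are illustrative assumptions of this sketch; the hidden-layer size follows the geometric-mean rule stated in the text.

```python
import numpy as np

rng = np.random.default_rng(3)

def one_hot(m, n_classes):
    # Desired output vector: the m-th neuron outputs 1, all others 0
    out = np.zeros(n_classes)
    out[m] = 1.0
    return out

M, N = 13, 10                        # assumed feature dimension / class count
H = int(round(np.sqrt(M * N)))       # hidden size: geometric mean of M and N
W1 = rng.normal(0.0, 0.1, (H, M))    # input-to-hidden weights
W2 = rng.normal(0.0, 0.1, (N, H))    # hidden-to-output weights

def forward(x):
    h = np.tanh(W1 @ x)              # hidden-layer activation
    z = W2 @ h
    e = np.exp(z - z.max())
    return e / e.sum()               # normalized class scores

x = rng.random(M)                    # a feature vector as produced by step 4
y = forward(x)
target = one_hot(3, N)               # training target for the 4th cloud class
```

Training would then back-propagate the error between `y` and `target` to adjust W1 and W2, as the text describes.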
After the recognition result is obtained, the present invention can display it on a visual interface using the display function of the computer, so that staff can learn it promptly and analyse the weather according to the recognition result for accurate forecasting. At the same time, the recognition result and the corresponding cloud-genus feature parameters are stored for later reference.
The above embodiments are intended only to illustrate, not to limit, the technical solution of the present invention. Although the invention has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art will appreciate that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications or replacements do not make the essence of the corresponding technical solution depart from the spirit and scope of the claimed technical solutions of the present invention.

Claims (5)

1. A ground-based visible cloud image recognition and processing method, characterized by comprising the following steps:
1) acquiring an original image containing a ground-based visible cloud image;
2) converting the acquired colour original image from RGB space to gray space to obtain the corresponding gray-level image; then filtering the gray-level image to obtain a denoised image; then applying a nonlinear gray-level transform to the denoised image to obtain an enhanced image; and converting the enhanced image from gray space back to RGB space to obtain a colour enhanced image;
The above nonlinear gray-level transform of the denoised image to obtain an enhanced image comprises the steps of:
2.1) normalizing the denoised image f(i, j) with formula (1):

$$g(i,j) = \frac{f(i,j) - \min(f(i,j))}{\max(f(i,j)) - \min(f(i,j))} \qquad (1)$$

In formula (1), i and j are the row and column coordinates of an image pixel, f(i, j) is the original gray value at coordinate (i, j), and g(i, j) is the gray value after processing;
2.2) applying the basic immune genetic algorithm to the normalized image to search for the optimal transform parameters α and β, where 0 < α, β < 10; the fitness function of the immune genetic algorithm is:

Fitness = E · I_nc · F_ac + F_br    (2)

In the above formula, F_ac is the image variance, E the information entropy, F_br the pixel difference and I_nc the noise change.

The image variance is:

$$F_{ac} = \frac{1}{n}\sum_{i=1}^{M}\sum_{j=1}^{N} g^2(i,j) - \left(\frac{1}{n}\sum_{i=1}^{M}\sum_{j=1}^{N} g(i,j)\right)^2$$

where n is the total number of image pixels and n = M × N.

The information entropy is:

$$E = -\sum_{k=0}^{L-1} p_k \log_2 p_k$$

where p_k is the probability of the k-th gray level; when p_k = 0, p_k log_2 p_k is defined as 0.

The pixel difference is:

$$F_{br} = \sum_{i=1}^{M-2}\sum_{j=1}^{N} \bigl(g(i,j) - g(i+2,j)\bigr)^2$$

The noise change is:

$$I_{nc} = \sum_{n(h) > t} 1$$

i.e. the number of gray levels h whose pixel count n(h) after enhancement exceeds a given threshold t;
2.3) using the optimal transform parameters α and β obtained in step 2.2 to form the nonlinear transform function F(u), 0 ≤ u ≤ 1:

$$F(u) = B^{-1}(\alpha,\beta) \times \int_0^u t^{\alpha-1}(1-t)^{\beta-1}\,dt, \qquad 0 < \alpha, \beta < 10 \qquad (3)$$

where B(α, β) is the Beta function:

$$B(\alpha,\beta) = \int_0^1 t^{\alpha-1}(1-t)^{\beta-1}\,dt$$

The image after the incomplete-Beta-function gray transform is then:

g'(i,j) = F(g(i,j)),  0 ≤ g'(i,j) ≤ 1    (4)

2.4) de-normalizing the transformed image to obtain the gray-level spatial-domain enhanced image:

f'(i,j) = (Lmax − Lmin) g'(i,j) + Lmin    (5)

where f'(i, j) is the enhanced gray value at coordinate (i, j), and Lmax and Lmin are the maximum and minimum gray values of the image before normalization, to which the transformed image g'(i, j) obtained in step 2.3 is rescaled;
3) separating the sky background from the cloud foreground in the colour enhanced image using a natural image matting method based on a perceptual colour space;
4) extracting a feature set from the separated cloud foreground;
5) classifying the cloud genus with a trained neural-network classifier according to the data in the extracted feature set, to obtain the cloud-type discrimination result.
2. The ground-based visible cloud image recognition and processing method according to claim 1, characterized by further comprising the step of:
6) displaying and storing the cloud-type discrimination result and the cloud feature set.
3. The ground-based visible cloud image recognition and processing method according to claim 1, characterized in that in step 2 the gray-level image is denoised with an adaptive Wiener filter, the nonlinear gray-transform enhancement parameters of the image are determined with the basic immune genetic algorithm, and the image is then enhanced by the nonlinear gray-level transform.
4. The ground-based visible cloud image recognition and processing method according to claim 1, characterized in that in step 4 the feature set comprises the mean, maximum and minimum transparency of the three RGB channels of the cloud-foreground colour image, together with the per-direction means of the four features second moment, contrast, correlation and entropy derived from the gray-level co-occurrence matrices of the cloud-foreground gray image in the four directions 0°, 45°, 90° and 135°.
5. The ground-based visible cloud image recognition and processing method according to claim 1, characterized by further comprising collecting the features corresponding to different cloud genera into a cloud-genus feature database, and using the cloud-genus feature database to train the neural-network classifier.
CN201210333246.1A — Ground-based visible cloud image recognition and processing method — granted as CN102902956B (Active) to METEOROLOGICAL COLLEGE UNIV OF TECHNOLOGY PLA.

Priority application: CN201210333246.1A, priority and filing date 2012-09-10.

Publications:
CN102902956A (application publication), 2013-01-30
CN102902956B (granted patent), 2016-04-13

Family: ID 47575178; country: CN (China).





Legal events: publication (C06/PB01); entry into substantive examination (C10/SE01); grant of patent (C14/GR01).