CN101876993B - Method for extracting and retrieving textural features from ground digital nephograms
- Publication number
- CN101876993B CN2009102385224A CN200910238522A
- Authority
- CN
- China
- Legal status
- Expired - Fee Related
Abstract
The invention discloses a method for extracting and retrieving texture features of ground-based digital cloud images, comprising an extraction method and a retrieval method. The extraction method includes: converting a color three-channel (RGB) ground-based digital cloud image into a single-channel pixel category map; analyzing the pixel category map and building a co-occurrence matrix to obtain the histogram vector of the co-occurrence matrix; merging the histogram vectors of multiple pixel-category co-occurrence matrices to construct the texture feature vector of the ground-based digital cloud image; and saving the texture feature vector to a cloud image database. The retrieval method includes: extracting the texture features of an example cloud image with the above extraction method; computing, in turn, the similarity between the texture features of the example cloud image and those of each cloud image in the cloud image database; and displaying the most similar cloud images as the retrieval result. The invention can automatically analyze and extract effective texture features of ground-based digital cloud images and automatically retrieve cloud images similar to the example cloud image from the cloud image database.
Description
Technical Field
The invention relates to the fields of meteorological detection technology, digital image processing, image retrieval and pattern recognition, and in particular to a method for analyzing and retrieving texture features of ground-based digital cloud images.
Background Art
Clouds play a very significant regulating role in the Earth's atmospheric energy budget and are an important factor in climate change. On the other hand, the formation, development and evolution of clouds not only reflect the stability, motion and water vapor conditions of the atmosphere at the time, but are also among the important signs foretelling future weather changes. Cloud observation therefore plays an extremely important role. At present, cloud observation mainly comprises space-based observation (i.e., satellite remote sensing) and ground-based observation. Considerable progress has been made in the feature analysis and automated processing of satellite cloud images; however, ground-based cloud observation and cloud image analysis have long depended on the visual judgment of meteorological observers, which has become a bottleneck in the automation of meteorological operations.
At present, several ground-based all-sky cloud observation devices have been developed at home and abroad, such as the Total Sky Imager (TSI) developed by Yankee Environmental System Inc. in the United States, the Whole Sky Imager (WSI) developed by the University of California, the All Sky Imager (ASI) developed by the Institute of Atmospheric Physics of the Chinese Academy of Sciences, and the Ground-based Total-sky Cloud Imager (TCI) developed by the Chinese Academy of Meteorological Sciences. These devices can automatically photograph the whole sky and generate color digital images, i.e., ground-based digital cloud images. Although ground-based digital cloud images can now be acquired automatically and cloud cover can basically be calculated automatically, cloud-type analysis of ground-based digital cloud images still relies mainly on manual analysis by experienced observers. Manual analysis clearly has many defects. First, observers must be thoroughly familiar with the complex observation specifications and able to apply them proficiently. Second, observation results are affected by the observer's physiological and psychological state and sense of responsibility, so that even the same observer may give different descriptions of the same ground-based cloud image when analyzing it at different times. In addition, staff turnover and insufficient continuity of observation also affect accuracy, and the same ground-based cloud image analyzed by different observers often receives different descriptions. Objectively and automatically analyzing and extracting effective features of ground-based digital cloud images is therefore of great significance for the automation and intelligence of ground-based cloud observation. Texture is an important feature of ground-based digital cloud images and can objectively describe characteristics of clouds and their sky background. With digital image processing and artificial intelligence techniques, a ground-based digital cloud image can be represented by a digital, effective texture feature vector.
In addition, with the development of digital imaging technology and its application in ground-based cloud observation, more and more ground-based digital cloud images are being acquired; accumulated over time, they form a very large image library, often containing tens of thousands of images. In practical applications, meteorological observers or researchers frequently need to retrieve particular cloud images from a ground-based cloud image database. Traditionally, a searcher has two ways to obtain a cloud image. The most primitive is to browse the cloud image database manually to find the desired image. The other is keyword-based retrieval: the database administrator must first describe every cloud image in the database in words (keywords), associate the keywords with the cloud images and store them in the database; at retrieval time, the searcher enters keywords and the system matches them against the stored descriptions. Clearly, once the database reaches a certain size, neither approach is adequate. Manually browsing a large cloud image database is time-consuming, labor-intensive and inefficient. Keyword-based retrieval is more efficient, but it presupposes that every cloud image in the database is associated with correct keywords; with current technology, a machine cannot add correct keywords to cloud images automatically, so skilled observers with professional knowledge must add the keyword descriptions by hand. Manually adding keywords is likewise time-consuming and labor-intensive, and the added keywords are highly subjective and frequently inconsistent. Therefore, digital and effective texture feature extraction, together with a cloud image retrieval method based on such texture features, can to a certain extent solve the problems of manual cloud image analysis and retrieval.
Summary of the Invention
(1) Purpose of the Invention
The purpose of the present invention is to provide a method for extracting and retrieving texture features of ground-based digital cloud images, which objectively and automatically analyzes and extracts effective texture features of ground-based digital cloud images and solves the above problems of manual cloud image analysis and retrieval.
(2) Content of the Invention
A method for extracting texture features of a ground-based digital cloud image comprises the following steps:
S101: Convert the color three-channel (RGB) ground-based digital cloud image into a single-channel pixel category map according to the following rule:

C(x, y) = 0, if I_B(x, y) / I_R(x, y) > α_1;
C(x, y) = 1, if α_2 ≤ I_B(x, y) / I_R(x, y) ≤ α_1;
C(x, y) = 2, if I_B(x, y) / I_R(x, y) < α_2 and I_V(x, y) < β;
C(x, y) = 3, if I_B(x, y) / I_R(x, y) < α_2 and I_V(x, y) ≥ β;

where I_B(x, y) and I_R(x, y) denote, respectively, the blue (Blue) and red (Red) component values of the pixel at coordinates (x, y) in the input color ground-based digital cloud image, read directly from the color image file; I_V(x, y) denotes the brightness value of the pixel at (x, y); C(x, y) denotes the category label of the pixel at (x, y); α_1 and α_2 are threshold parameters on the blue/red band ratio; and β is a brightness threshold parameter.
S102: Analyze the pixel category map and build co-occurrence matrices to obtain their histogram vectors;
S103: Merge the histogram vectors of the multiple pixel-category co-occurrence matrices to construct the texture feature vector of the ground-based digital cloud image;
S104: Save the texture feature vector constructed in S103 to the cloud image database.
The value of I_V(x, y) in step S101 is computed from the blue (Blue), red (Red) and green (Green) component values of the pixel at coordinates (x, y) in the input color ground-based digital cloud image as I_V(x, y) = 100 × Max(I_R(x, y), I_G(x, y), I_B(x, y)) / 255, where I_G(x, y) is the green component value of the pixel at (x, y).
The value of α_1 is 1.5, the value of α_2 is 1.3, and the value of β is 80.
Step S102 comprises the following steps:
S1021: Analyze the co-occurrence relationship between any two pixel categories in the pixel category map and construct the co-occurrence matrix CCM:

CCM(i, j) = Σ_{x=1..w} Σ_{y=1..h} [C(x, y) = i and C(x + Δx, y + Δy) = j],

where i and j denote pixel categories taking values in {0, 1, 2, 3}, (Δx, Δy) denotes the offset, w and h denote the length and width of the ground-based digital cloud image, [·] is 1 when the condition holds and 0 otherwise, and CCM(i, j) is the number of position pairs for which the pixel category at (x, y) in the pixel category map is i while the pixel category at (x + Δx, y + Δy) is j. Computing each CCM(i, j) yields a 4×4 co-occurrence matrix.
S1022: Normalize the co-occurrence matrix CCM(i, j) according to the following normalization formula to obtain the normalized co-occurrence matrix CCM_N:

CCM_N(i, j) = CCM(i, j) / (w·h).

S1023: Concatenate the elements of the normalized co-occurrence matrix CCM_N row by row according to the following formula to obtain a 16-dimensional histogram vector F_S:

F_S = (CCM_N(0, 0), CCM_N(0, 1), ..., CCM_N(3, 3)).
Step S103 comprises the following steps:
S1031: Given L position offsets for the co-occurrence relationship between two pixel categories, namely (Δx_1, Δy_1), (Δx_2, Δy_2), ..., (Δx_{L-1}, Δy_{L-1}), (Δx_L, Δy_L), obtain L different co-occurrence matrices and L corresponding histogram vectors F_S^(1), ..., F_S^(L) according to step S102;
S1032: Linearly superimpose and average the L histogram vectors according to the following formula:

F = (1/L) Σ_{l=1..L} F_S^(l),

which yields the 16-dimensional texture feature vector F of the ground-based digital cloud image.
A method for retrieving ground-based digital cloud images by texture features comprises the following steps:
S201: Compute the texture feature vector of the example digital cloud image according to steps S101, S102 and S103 of the extraction method above;
S202: Compute the similarity D(F_1, F_2) between the texture feature vector of the example digital cloud image and the texture feature vector of each cloud image in the cloud image database, where F_1 and F_2 denote the feature vectors of two ground-based digital cloud images; the larger the value of D, the more similar the two cloud images are, and conversely, the smaller the value of D, the greater the difference between them;
S203: Sort the cloud images by D in descending order and select the M most similar ones as the retrieval result, where M is any integer greater than 0 and less than the total number of cloud images in the database;
S204: Display the retrieved cloud images.
Before step S201, the method includes a step of selecting an example digital cloud image.
(3) Beneficial Effects
The method for extracting and retrieving texture features of ground-based digital cloud images proposed by the present invention has the following beneficial effects:
(1) In the method of the present invention, the features of a ground-based digital cloud image are described by an objective 16-dimensional numerical vector rather than a subjective textual description, and the extraction of the feature vector can be carried out fully automatically by computer, which greatly improves the efficiency of ground-based cloud image analysis and management. This texture feature vector implicitly captures the color, texture and structure of the cloud image and provides a mathematical basis for automated cloud image analysis, recognition and retrieval.
(2) In the method of the present invention, the back-end management of the cloud image library for retrieval is fully automated and requires no manual work: the library administrator only needs to specify the set of cloud images to be imported or the directory in which they reside, and the invention automatically analyzes the texture features of the cloud images and saves them to the database for front-end retrieval. The invention also provides a new retrieval mode based on an example cloud image: the user only needs to specify one example cloud image, and the invention automatically retrieves similar cloud images from the library. Finally, the retrieval method based on the texture features of the present invention performs well and achieves high retrieval accuracy.
(3) The pixel category map of the method classifies pixels according to the blue/red band ratio and the pixel brightness; the classification is simple and efficient, and the number of categories is small.
(4) The co-occurrence matrix of the method is computed on the pixel category map. Compared with traditional gray-level based methods, the co-occurrence relationships of pixel pairs have a clear and meaningful interpretation: they describe the co-occurrence, under different positional relationships, of pixel pairs such as cloud pixels with cloud pixels, sky pixels with sky pixels, and cloud pixels with sky pixels. Moreover, the co-occurrence matrix of the invention has a very low dimension (4×4), which greatly reduces storage overhead.
(5) The co-occurrence matrix features of the method are represented as a histogram, which avoids the computation of high-order statistics required by traditional methods. The conversion is simple, and each element of the histogram vector has a clear meaning: it represents the frequency with which a particular pair of pixel categories appears in the ground-based cloud image, and such frequency features discriminate well in image retrieval.
(6) When merging multiple histogram vectors, the method uses an averaging strategy, which is efficient, avoids losing the characteristics of any single histogram, and keeps the dimension of the final texture feature vector small.
Description of the Drawings
Figure 1 is a flow chart of the texture feature extraction and retrieval method for ground-based digital cloud images;
Figure 2 is a schematic diagram of the conversion of a color ground-based digital cloud image into a pixel category map;
Figure 3 is a schematic diagram of two ground-based digital cloud images and their corresponding texture feature vectors;
Figure 4 is the interface of a demonstration system for texture-feature-based cloud image retrieval.
Detailed Description of the Embodiments
The method for extracting and retrieving texture features of ground-based digital cloud images proposed by the present invention is described below with reference to the accompanying drawings and embodiments.
As shown in Figure 1, the method includes a storage process (i.e., the texture feature extraction process) and a retrieval process.
The texture extraction steps are as follows:
First, a digital cloud image is acquired. The state of the sky is generally captured automatically by an all-sky imager, which generates the digital cloud image; digital cloud images can also be collected with electronic devices such as digital cameras or digital video cameras.
The digital cloud image acquired at this point is an RGB three-channel image. A digital cloud image usually contains both cloud and sky, and the brightness of clouds varies considerably under different weather conditions. The task of step S101 is to classify the pixels of the digital cloud image and replace the RGB values of the pixels in the original image with category label values, yielding the pixel category map; that is, the color three-channel ground-based digital cloud image is converted into a single-channel pixel category map. The pixel category map simplifies the image, discarding some secondary information in the original RGB three-channel digital cloud image while retaining its essential content.
For the pixel at position (x, y) in the original cloud image I, the color ground-based digital cloud image file is first read to obtain I_B(x, y) and I_R(x, y), and the ratio between the blue (B) and red (R) channel values, I_B(x, y) / I_R(x, y), is computed. If the ratio is greater than 1.5, the pixel is a sky pixel and its category label is C(x, y) = 0. If the ratio is between 1.3 and 1.5, the pixel is a cloud/sky transition pixel and C(x, y) = 1. If the ratio is less than 1.3, the pixel is a cloud pixel, and cloud pixels are further divided into dark clouds and bright clouds according to the brightness of the original pixel: the brightness I_V(x, y) is computed from the blue (Blue), red (Red) and green (Green) component values of the pixel at (x, y) as I_V(x, y) = 100 × Max(I_R(x, y), I_G(x, y), I_B(x, y)) / 255; if I_V(x, y) < 80 the pixel is a dark cloud and C(x, y) = 2, otherwise it is a bright cloud and C(x, y) = 3. That is, α_1 = 1.5, α_2 = 1.3 and β = 80. The choice of these three thresholds depends on the shooting parameters used when the color ground-based digital cloud image is acquired: for cloud images with moderate exposure and correct color reproduction, the above thresholds can be used, whereas for cloud images whose exposure time is too long or too short (images that are too bright or too dark) or whose colors are abnormal, the cloud image must first be preprocessed to correct its brightness and color before the above thresholds can be applied; if no such correction is performed, the three thresholds should be adjusted accordingly in this step to ensure accurate and reasonable extraction of the image features. The specific computation follows the classification rule given in step S101.
Following this rule, the category of every pixel in cloud image I is computed in turn, from left to right and from top to bottom, and the category label values are saved in the pixel category map C. Figure 2(a) shows an original ground-based digital cloud image and Figure 2(b) shows the pixel category map generated by the above steps; for ease of display, in Figure 2(b) blue indicates category 0, red indicates category 1, white indicates category 2 and gray indicates category 3.
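As an illustration of step S101, the following sketch converts an RGB image array into a pixel category map using the thresholds described above (α_1 = 1.5, α_2 = 1.3, β = 80). The use of Python with NumPy, the function name and the array layout are assumptions made here for illustration only; the patent does not prescribe an implementation language.

```python
import numpy as np

def pixel_category_map(rgb, alpha1=1.5, alpha2=1.3, beta=80.0):
    """Convert an H x W x 3 uint8 RGB cloud image into an H x W pixel category map.

    Categories: 0 = sky, 1 = cloud/sky transition, 2 = dark cloud, 3 = bright cloud.
    """
    rgb = rgb.astype(np.float64)
    red, blue = rgb[..., 0], rgb[..., 2]
    ratio = blue / np.maximum(red, 1e-6)          # blue/red band ratio I_B / I_R
    v = 100.0 * rgb.max(axis=-1) / 255.0          # brightness I_V = 100 * Max(R, G, B) / 255

    c = np.full(ratio.shape, 3, dtype=np.uint8)   # default: bright cloud (ratio < alpha2, v >= beta)
    c[(ratio < alpha2) & (v < beta)] = 2          # dark cloud
    c[(ratio >= alpha2) & (ratio <= alpha1)] = 1  # cloud/sky transition
    c[ratio > alpha1] = 0                         # sky
    return c
```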
Step S102 obtains the histogram vector of the co-occurrence matrix by analyzing the pixel category map and building the co-occurrence matrix. Obviously, pixels of different categories arranged at different spatial positions produce different cloud images: a large number of adjacent sky pixels form a sky region, while a large number of adjacent cloud pixels may form a cloud. The category co-occurrence matrix is the result of counting, for pairs of pixels satisfying a particular spatial positional relationship (position pairs), the pixel categories to which the two pixels belong. The category co-occurrence matrix analysis comprises three sub-steps: generating the category co-occurrence matrix, normalizing the co-occurrence matrix, and representing the features of the category co-occurrence matrix.
The co-occurrence relationship between any two pixel categories in the pixel category map is analyzed to construct the co-occurrence matrix CCM:

CCM(i, j) = Σ_{x=1..w} Σ_{y=1..h} [C(x, y) = i and C(x + Δx, y + Δy) = j],

where i and j denote pixel categories taking values in {0, 1, 2, 3}, (Δx, Δy) denotes the offset, w and h denote the length and width of the ground-based digital cloud image, and CCM(i, j) is the number of position pairs for which the pixel category at (x, y) in the pixel category map is i while the pixel category at (x + Δx, y + Δy) is j. Computing each CCM(i, j) yields a 4×4 co-occurrence matrix.
The co-occurrence matrix CCM(i, j) is normalized according to the following normalization formula to obtain the normalized co-occurrence matrix CCM_N:

CCM_N(i, j) = CCM(i, j) / (w·h).

The elements of the normalized co-occurrence matrix CCM_N are concatenated row by row according to the following formula to obtain a 16-dimensional histogram vector F_S:

F_S = (CCM_N(0, 0), CCM_N(0, 1), ..., CCM_N(3, 3)).

The histogram vector F_S then serves as the feature representation of the category co-occurrence matrix for the positional relationship (Δx, Δy).
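A sketch of these sub-steps under the same assumptions (Python/NumPy, illustrative function name): for a single offset (Δx, Δy) it counts category pairs, normalizes by w·h, and flattens the 4×4 matrix row by row into the 16-dimensional histogram vector F_S.

```python
import numpy as np

def class_cooccurrence_histogram(cat_map, dx, dy, num_classes=4):
    """Normalized class co-occurrence histogram F_S for a single offset (dx, dy)."""
    h, w = cat_map.shape
    ccm = np.zeros((num_classes, num_classes), dtype=np.float64)
    for y in range(h):
        for x in range(w):
            x2, y2 = x + dx, y + dy
            if 0 <= x2 < w and 0 <= y2 < h:      # count only position pairs inside the image
                ccm[cat_map[y, x], cat_map[y2, x2]] += 1.0
    ccm /= float(w * h)                          # CCM_N(i, j) = CCM(i, j) / (w * h)
    return ccm.reshape(-1)                       # row-wise concatenation -> 16-dimensional F_S
```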
Step S103 merges the histogram vectors of multiple pixel-category co-occurrence matrices to construct the texture feature vector of the ground-based digital cloud image. A single positional relationship is often not sufficient to describe the texture of a ground-based digital cloud image; therefore, multiple positional relationships are constructed and the corresponding category co-occurrence matrices and their feature representations are generated. Four positional relationships are used here, namely {(1, 0), (0, 1), (-1, 0), (0, -1)}. Running step S102 for each positional relationship yields four different co-occurrence matrices and four corresponding histogram vectors, which are linearly superimposed and averaged, finally merging into a single 16-dimensional vector:

F = (1/4) Σ_{l=1..4} F_S^(l).

The merged 16-dimensional histogram vector F constitutes the texture feature vector of the ground-based digital cloud image. Figure 3(a) shows a digital cloud image of cumulus fractus and Figure 3(b) shows the corresponding feature vector (as a histogram); Figure 3(c) shows a digital cloud image of cumulus humilis and Figure 3(d) shows the corresponding feature vector (as a histogram).
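Continuing the sketch and reusing the helper above, the four directional histograms are averaged into the final 16-dimensional texture feature vector:

```python
import numpy as np

def texture_feature_vector(cat_map):
    """Average the co-occurrence histograms of the four offsets into one 16-d texture feature."""
    offsets = [(1, 0), (0, 1), (-1, 0), (0, -1)]
    hists = [class_cooccurrence_histogram(cat_map, dx, dy) for dx, dy in offsets]
    return np.mean(hists, axis=0)                # F = (1/4) * sum of the four F_S vectors
```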
Step S104 saves the texture feature vector constructed in step S103 to the cloud image database for use in retrieval. A program can be written to write the feature vectors into the database through the ADO interface of SQL Server 2000; this can be implemented with mainstream database management software (such as SQL Server or Oracle) and programming languages (C++, Java).
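Purely as an illustration of this storage step, the sketch below uses SQLite from Python instead of the SQL Server/ADO setup mentioned above; the table name, schema and byte-string encoding of the feature vector are assumptions, not part of the patent.

```python
import sqlite3
import numpy as np

def save_feature(db_path, image_path, feature):
    """Store a cloud image path and its 16-d texture feature vector in a relational table."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS cloud_features (image_path TEXT PRIMARY KEY, feature BLOB)"
    )
    conn.execute(
        "INSERT OR REPLACE INTO cloud_features VALUES (?, ?)",
        (image_path, np.asarray(feature, dtype=np.float64).tobytes()),
    )
    conn.commit()
    conn.close()
```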
If multiple ground-based digital cloud images are to be imported, steps S101–S104 are repeated until all the digital cloud images have been processed.
The steps for retrieving cloud images from the cloud image database are as follows:
First, an example digital cloud image is selected as the query, and its texture feature vector is computed (step S201); this step follows steps S101, S102 and S103 of the texture feature extraction method.
Step S202 measures the similarity between the example digital cloud image to be retrieved and the cloud images in the database, i.e., it computes the similarity between the texture feature vector of the example cloud image and the texture feature vector of each cloud image in the database. The similarity distance between the feature vector F_e of the example cloud image and the feature vector F_i of each cloud image in the database, i = 1, ..., N, is computed in turn, where N denotes the number of cloud images in the database. The similarity between feature vectors is expressed by the cross distance D(F_e, F_i).
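The exact formula of the cross distance is not reproduced in the text above; the sketch below assumes the common histogram-intersection form, D(F_1, F_2) = Σ_k min(F_1(k), F_2(k)), which is consistent with the statement that a larger D means more similar cloud images. If the patented formula differs, only this function needs to change.

```python
import numpy as np

def cross_distance(f1, f2):
    """Similarity between two 16-d texture feature vectors; larger values mean more similar images."""
    return float(np.minimum(f1, f2).sum())       # assumed histogram-intersection form
```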
Step S203 sorts the cloud images in the database in descending order of the similarity distance D(F_e, F_i) and selects the M most similar images as the retrieval result, where M can be any positive integer specified by the user that is greater than 0 and less than the total number N of cloud images in the database.
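A sketch of this ranking step, reusing the assumed cross_distance above, with an in-memory list standing in for the cloud image database:

```python
def retrieve_top_m(example_feature, database, m):
    """database: list of (image_path, feature) pairs; returns the M most similar image paths."""
    scored = [(cross_distance(example_feature, feature), path) for path, feature in database]
    scored.sort(key=lambda item: item[0], reverse=True)   # sort by D from large to small
    return [path for _, path in scored[:m]]
```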
Step S204 presents the M retrieved cloud images to the user through a graphical user interface; Figure 4 shows an example of the display interface of a cloud image retrieval system.
Application of the invention to stand-alone cloud image management and retrieval:
The invention is particularly suitable for managing large collections of cloud images and provides a convenient retrieval interface. For example, Professor Zhang, a meteorological scientist, has accumulated tens of thousands of ground-based digital cloud images over years of meteorological observation and research and stores them in one or several directories on the hard disk of his personal computer. Whenever he obtains a new cloud image, he needs to find similar images in his collection of typical cloud images for comparative study, but faced with tens of thousands of digital cloud images he cannot do so quickly and effectively. With the invention, Professor Zhang can quickly extract texture features from the tens of thousands of digital cloud images and import them into a database, and this import needs to be done only once; whenever he obtains a new cloud image, the retrieval method of the invention finds cloud images with similar content in the database within a few seconds.
Application of the invention to network cloud image retrieval:
A meteorological administration (such as a meteorological bureau) or a research institution (such as an academy of meteorological sciences) builds a large-scale ground-based cloud image database, uses the invention to extract the texture features of the cloud images and import them into the database, provides an Internet-based cloud image retrieval service, and publishes the URL for accessing it. When an observer takes a cloud image and wants to retrieve similar cloud images from the database for comparative study, he can log on to the website, upload the cloud image and search; the system returns similar cloud images from the database within a few seconds.
Application of the invention to an integrated management system for ground-based digital cloud images at automatic weather stations:
The all-sky cloud imager of a ground-based automatic weather station captures the state of the sky and generates digital cloud images, which are then sent to a remote host over a transmission channel. The host runs the texture feature extraction algorithm of the invention, extracts the texture features of each cloud image, and imports the cloud images and their feature vectors into the database. The host can also publish a cloud image retrieval service interface so that users can retrieve cloud images from the weather station at any time. In this way, the acquisition, analysis, management and retrieval of ground-based digital cloud images can be carried out fully automatically without human intervention, forming an intelligent, automated integrated management system for ground-based digital cloud images.
The above embodiments are intended only to illustrate the present invention and not to limit it. Those of ordinary skill in the relevant technical field may make various changes and modifications without departing from the spirit and scope of the present invention; therefore all equivalent technical solutions also fall within the scope of the present invention, whose patent protection shall be defined by the claims.
Claims (7)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2009102385224A CN101876993B (en) | 2009-11-26 | 2009-11-26 | Method for extracting and retrieving textural features from ground digital nephograms |
Publications (2)
Publication Number | Publication Date |
---|---|
CN101876993A CN101876993A (en) | 2010-11-03 |
CN101876993B true CN101876993B (en) | 2011-12-14 |
Family
ID=43019551
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2009102385224A Expired - Fee Related CN101876993B (en) | 2009-11-26 | 2009-11-26 | Method for extracting and retrieving textural features from ground digital nephograms |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN101876993B (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102339388B (en) * | 2011-06-27 | 2012-12-19 | 华中科技大学 | Method for identifying classification of image-based ground state |
CN103413148B (en) * | 2013-08-30 | 2017-05-24 | 中国科学院自动化研究所 | Ground-based cloud image classifying method based on random self-adaptive symbol sparse codes |
CN105783861B (en) * | 2014-12-22 | 2018-08-28 | 国家电网公司 | Cloud cluster height measurement method based on double ground cloud atlas |
CN110806582A (en) * | 2019-11-06 | 2020-02-18 | 上海眼控科技股份有限公司 | Method, device and equipment for evaluating accuracy of cloud image prediction and storage medium |
CN111046911A (en) * | 2019-11-13 | 2020-04-21 | 泰康保险集团股份有限公司 | Image processing method and device |
CN115131494A (en) * | 2022-08-03 | 2022-09-30 | 北京开运联合信息技术集团股份有限公司 | Optical remote sensing satellite imaging simulation method and device |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4897881A (en) * | 1988-03-23 | 1990-01-30 | Centre De Recherche Industrielle Du Quebec | Optimum fast textural feature extractor |
CN1945353A (en) * | 2006-10-26 | 2007-04-11 | 国家卫星气象中心 | Method for processing meteorological satellite remote sensing cloud chart |
Also Published As
Publication number | Publication date |
---|---|
CN101876993A (en) | 2010-11-03 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20111214; Termination date: 20151126 |