CN109829511B - Texture classification-based method for detecting cloud layer area in downward-looking infrared image


Info

Publication number
CN109829511B
CN109829511B (application CN201910148702.7A)
Authority
CN
China
Prior art keywords
image
sub
cloud layer
pixel
gray level
Prior art date
Legal status
Active
Application number
CN201910148702.7A
Other languages
Chinese (zh)
Other versions
CN109829511A (en)
Inventor
于起峰 (Yu Qifeng)
尚洋 (Shang Yang)
刘肖琳 (Liu Xiaolin)
孙晓亮 (Sun Xiaoliang)
Current Assignee
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date
Filing date
Publication date
Application filed by National University of Defense Technology
Priority to CN201910148702.7A
Publication of CN109829511A
Application granted
Publication of CN109829511B
Legal status: Active

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to a texture classification-based method for detecting a cloud layer area in a downward-looking infrared image. The method treats the detection of cloud layer regions in the downward-looking infrared image as a texture classification problem between cloud layer regions and underlying surface ground object regions. First, the input image is preprocessed with median filtering and a contrast enhancement algorithm to eliminate noise interference and improve image contrast. Second, image features are extracted: to adapt to the complexity of cloud layer shapes and improve the accuracy of judgment, the input image is adaptively divided into sub-images, each sub-image is processed independently, and the gray level co-occurrence matrix correlation features and the rotation-invariant uniform local binary pattern features of the image are extracted. Finally, a pre-trained support vector machine judges the category of each sub-image according to the extracted features, thereby detecting the cloud layer area in the image.

Description

Texture classification-based method for detecting cloud layer area in downward-looking infrared image
Technical Field
The invention mainly relates to the fields of computer vision and machine learning, and in particular to a method for detecting cloud layer areas by exploiting the texture difference between cloud layer regions and underlying surface ground object regions in a downward-looking infrared image.
Background Art
When an infrared terminal guidance system is in its working state, a cloud layer between the missile body and the target occludes the ground target area and forms a blind area, which easily causes matching errors and thus failure of the terminal guidance task. Therefore, cloud layer detection must be introduced into the terminal guidance process, and the terminal guidance workflow formulated according to the distribution of cloud layers in the image, so as to minimize their adverse effects. For example, if most regions of the image are covered by cloud and the remaining effective regions cannot support stable matching and positioning, the current data are not used for matching and guidance proceeds in another mode; if only a few regions are covered, the cloud-covered regions can be masked out and matching performed with the regions not polluted by cloud. How to accurately and efficiently detect the cloud layer area in the downward-looking infrared image is thus an important problem for infrared terminal guidance systems, and solving it is of great significance for improving guidance performance.
The invention aims at detecting cloud layer areas in downward-looking infrared images acquired during infrared terminal guidance, for which few related research results exist. The cloud detection techniques that are widely studied and mature today concentrate on the remote sensing and meteorological fields. The remote sensing field mainly focuses on detecting and removing cloud regions and their corresponding shadow regions in remote sensing images, eliminating the occlusion of ground objects and providing a basis for subsequent analysis and application. Cloud detection in the meteorological field mainly serves to distinguish weather-relevant characteristics such as cloud amount and cloud type, providing a basis for weather analysis and forecasting; the image data are mainly acquired by meteorological stations, although imagery from remote sensing satellites is also applied in this field. The detection of cloud layer regions in the downward-looking infrared image considered here can essentially be regarded as a texture classification problem, i.e., distinguishing sub-image regions as underlying ground scene or cloud layer according to texture attributes, which is quite similar to the image-feature-based cloud detection ideas of the remote sensing and meteorological fields. Cloud detection in remote sensing essentially determines the cloud part of an image from the texture difference between cloud regions and ground object regions; the cloud detection module in meteorology must judge cloud amount and cloud type, first locating cloud regions in the image, i.e., distinguishing cloud from non-cloud regions, and then judging the cloud type from the texture characteristics of the cloud regions.
Disclosure of Invention
Aiming at the problem of cloud layer area detection in a downward-looking infrared image, the patent discloses a cloud layer area detection method based on texture feature classification: the detection of the cloud layer area is completed by using the texture difference between the cloud layer region and the underlying surface ground object region in the downward-looking infrared image. In order to adapt to the complexity of the cloud layer shape and improve the accuracy of judgment, the invention performs adaptive sub-image division on the input image and processes each sub-image block independently; texture features are described by gray level co-occurrence matrix correlation features and rotation-invariant uniform local binary pattern features; according to the extracted texture feature description of each sub-image block, a pre-trained support vector machine model judges whether the sub-image block belongs to the cloud layer region, thereby completing the detection of the cloud layer area in the image.
1. Implementation process of cloud layer region detection method in downward-looking infrared image based on texture classification
The implementation process of the invention is shown in FIG. 1 and comprises the following steps:
(1) image preprocessing: the input image is preprocessed with median filtering and a contrast enhancement algorithm to eliminate noise interference and improve image contrast, and is divided into sub-images to adapt to the complexity of cloud layer shapes and improve the accuracy of judgment; the sub-image blocks serve as the primitives for subsequent processing;
(2) extracting texture features: processing each sub-image block independently, and extracting the gray level co-occurrence matrix correlation features and the rotation-invariant uniform local binary pattern features of the sub-image block as its texture feature description;
(3) classification and judgment: for each sub-image block, judging its type with the pre-trained support vector machine model according to the extracted texture feature description, thereby detecting the cloud layer area in the image.
2. Details of the texture classification-based method for detecting cloud layer areas in a downward-looking infrared image
(1) Image pre-processing
The image preprocessing comprises filtering, image dynamic range adjustment and adaptive sub-image block division, which are introduced one by one as follows:
In comprehensive consideration of the noise characteristics of the downward-looking infrared image in this application background, as well as filtering performance and efficiency, the invention selects a median filtering algorithm to filter the downward-looking infrared image. The median filter is a nonlinear filter: as shown in equation (1), when a pixel in the image is median-filtered, the pixels inside the filter mask centered on it are sorted by gray value and the median is assigned to that pixel,

$$f(u,v) = \underset{(s,t)\in S_{uv}}{\mathrm{median}}\; I(u+s,\, v+t) \qquad (1)$$

where I is the input image, f is the filtered image, S_{uv} is the mask region corresponding to the pixel (u, v) to be filtered, and s and t are the relative abscissa and ordinate of pixels within S_{uv};
In order to eliminate the adverse interference caused by dynamic range differences, the invention adjusts the dynamic range of the input image using a gray level normalization operation, computed as:

$$\hat{f}(u,v) = 128 + \frac{128}{k\,\sigma}\bigl(f(u,v) - \bar{f}\bigr) \qquad (2)$$

$$\bar{f} = \frac{1}{MN}\sum_{u=1}^{M}\sum_{v=1}^{N} f(u,v), \qquad \sigma = \sqrt{\frac{1}{MN}\sum_{u=1}^{M}\sum_{v=1}^{N}\bigl(f(u,v) - \bar{f}\bigr)^{2}} \qquad (3)$$

where f is the median-filtered image, and $\bar{f}$ and $\sigma$ are the gray level mean and mean square deviation of f. Equation (2) maps the input image to $\hat{f}$, whose gray levels have mean 128 and variance $(128/k)^2$; k is a constant, usually selected in the range [3, 5];
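As an illustration of this preprocessing step, the following is a minimal Python sketch (not part of the original disclosure), assuming OpenCV and NumPy; the 3 × 3 mask size and the exact scaling of the normalization in equation (2) are assumptions on top of the stated targets (mean 128, k ∈ [3, 5]):

```python
import cv2
import numpy as np

def preprocess(image: np.ndarray, k: float = 4.0) -> np.ndarray:
    """Median filtering (eq. 1) followed by gray level normalization (eqs. 2-3)."""
    f = cv2.medianBlur(image, 3)                 # eq. (1); 3x3 mask is an assumed size
    f = f.astype(np.float64)
    mean, sigma = f.mean(), f.std()              # eq. (3): gray level mean and deviation
    # eq. (2), assumed form: map the +/- k*sigma band around the mean onto [0, 255]
    f_hat = 128.0 + (128.0 / (k * sigma)) * (f - mean)
    return np.clip(f_hat, 0, 255).astype(np.uint8)
```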
The shape of a cloud layer is irregular and highly variable. To detect the cloud layer area in the image more accurately, the invention performs an adaptive division of the image into sub-image blocks and uses the sub-image as the basic processing unit, judging whether each sub-image belongs to the cloud layer area. In image partitioning, the sub-image size must be determined according to the image resolution: if the sub-image is too large, the accuracy of the result cannot be guaranteed; if it is too small, the number of pixels is insufficient to obtain a stable feature extraction result. For an input image of size M × N, the sub-image size M_b × N_b and the classification accuracy d% that needs to be achieved satisfy the relationship:

$$M_b \times N_b \le \bigl\lceil (1 - d\%)\, M \times N \bigr\rceil \qquad (4)$$

where ⌈·⌉ is a round-up operation. For an input image of size 320 × 256, if the required classification accuracy is 95% and the sub-images are square, the sub-image size is 64 × 64 and the total number of sub-images is 20; the partitioning result is shown in FIG. 2.
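The reading of equation (4) used above — bounding the sub-image area by the (1 − d%) share of the image area — can be checked against the patent's 320 × 256 example with a short sketch (hypothetical helper names):

```python
import math

def max_subimage_side(M: int, N: int, d: float) -> int:
    """Largest square sub-image side whose area stays within eq. (4)'s bound."""
    budget = math.ceil((1.0 - d / 100.0) * M * N)   # ceil((1 - d%) * M * N)
    return math.isqrt(budget)

def partition(M: int, N: int, side: int):
    """Origins and sizes of the side x side blocks tiling an M x N image."""
    return [(u, v, side, side)
            for u in range(0, M - side + 1, side)
            for v in range(0, N - side + 1, side)]

# The example in the text: 320 x 256 image, 95% accuracy -> 64 x 64 blocks, 20 sub-images.
assert max_subimage_side(320, 256, 95) == 64
assert len(partition(320, 256, 64)) == 20
```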
(2) Texture feature extraction
The invention adopts gray level co-occurrence matrix correlation characteristics and rotation invariant uniform local binary pattern characteristics to extract texture characteristics:
the gray level co-occurrence matrix is defined based on the joint probability density of the gray level values of the pixels at two positions in the image, the gray level co-occurrence matrix describes the distribution characteristic of the gray level and the position distribution characteristic between the pixels with similar gray level values, and belongs to the second-order statistical characteristic for describing the gray level change in the image. Let the gray level co-occurrence matrix be P ═ P (i, j | d, θ)]L×LIn the direction θ, the distance between the pixel pairs is d, and the probability p (i, j) of the occurrence of the pixel pairs with gray levels i and j is described, where i, j is 0,1,2, L, and L-1 are the gray levels corresponding to the pixels, and L is the gray level number of the image. The direction θ is typically taken to be four directions, 0 °, 45 °, 90 ° and 135 °, as shown in fig. 3. Each direction corresponds to a matrix, and the four directions are totally fourFor the input image I, calculating the gray level co-occurrence matrix elements in all directions as shown in formula (5);
Figure BDA0001980878950000041
wherein, (k, l), (m, n) are respectively the abscissa and ordinate of the corresponding position of the two pixels to be compared.
During texture feature analysis, the computed gray level co-occurrence matrix is not used directly; instead, statistics derived from it serve as the texture feature quantities. The invention computes 10 statistics: entropy, energy, homogeneity, contrast, cluster tendency, cluster shade, correlation, information measure of correlation 1, information measure of correlation 2, and maximum probability;
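For illustration, a sketch of the co-occurrence computation of equation (5) and five of the ten statistics; quantizing to L = 16 gray levels is an assumption, as the patent does not fix L:

```python
import numpy as np

def glcm(img: np.ndarray, dx: int, dy: int, levels: int = 16) -> np.ndarray:
    """Normalized gray level co-occurrence matrix (eq. 5) for pixel offset (dx, dy);
    0/45/90/135 degrees at step d correspond to offsets (0,d), (-d,d), (-d,0), (-d,-d)."""
    q = (img.astype(np.float64) * levels / 256.0).astype(int).clip(0, levels - 1)
    P = np.zeros((levels, levels))
    H, W = q.shape
    for u in range(max(0, -dx), min(H, H - dx)):      # keep (u+dx, v+dy) inside the image
        for v in range(max(0, -dy), min(W, W - dy)):
            P[q[u, v], q[u + dx, v + dy]] += 1
    P = P + P.T                                       # symmetric co-occurrence counts
    return P / P.sum()

def glcm_stats(P: np.ndarray) -> dict:
    """Entropy, energy, homogeneity, contrast and correlation of a normalized GLCM."""
    i, j = np.indices(P.shape)
    mu_i, mu_j = (i * P).sum(), (j * P).sum()
    sd_i = np.sqrt((((i - mu_i) ** 2) * P).sum())
    sd_j = np.sqrt((((j - mu_j) ** 2) * P).sum())
    nz = P[P > 0]
    return {
        "entropy": -(nz * np.log2(nz)).sum(),
        "energy": (P ** 2).sum(),
        "homogeneity": (P / (1.0 + np.abs(i - j))).sum(),
        "contrast": (((i - j) ** 2) * P).sum(),
        "correlation": ((i - mu_i) * (j - mu_j) * P).sum() / (sd_i * sd_j + 1e-12),
    }
```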
the basic principle of the local binary pattern is to compare the threshold value of the pixel in the image with the pixel in the neighborhood to obtain a binary coding sequence, and to take the decimal number corresponding to the binary coding sequence as the local binary pattern value LBP of the pointP,RAs shown in the formula (6),
Figure BDA0001980878950000042
wherein g iscIs the central pixel (u)c,vc) Gray value of gpIs the gray value of its neighborhood pixels, p is the pixel in the neighborhood, s (-) is a sign function,
Figure BDA0001980878950000043
FIG. 4 shows an example of local binary pattern extraction in a 3 × 3 neighborhood. In order to adapt to rotation, the binary code sequence obtained from the circular neighborhood is joined end to end into a cyclic code; rotating the circular neighborhood yields a series of corresponding local binary pattern values, and the minimum is taken as the rotation-invariant local binary pattern feature LBP^{ri}_{P,R} of the neighborhood, see equation (8),

$$\mathrm{LBP}^{ri}_{P,R} = \min\bigl\{\, \mathrm{ROR}\bigl(\mathrm{LBP}_{P,R},\, l\bigr) \;\big|\; l = 0, 1, \ldots, P-1 \,\bigr\} \qquad (8)$$

where ri denotes rotation invariance, ROR(LBP_{P,R}, l) denotes rotating the P-bit number LBP_{P,R} clockwise by l bits, P is the number of sampling points, and R is the radius of the circular neighborhood.
The number of transitions U between 0 and 1 in the binary representation of the local binary pattern is defined as

$$U\bigl(\mathrm{LBP}_{P,R}\bigr) = \bigl| s(g_{P-1} - g_c) - s(g_0 - g_c) \bigr| + \sum_{p=1}^{P-1} \bigl| s(g_p - g_c) - s(g_{p-1} - g_c) \bigr| \qquad (9)$$

where g_0 denotes the gray value of pixel 0 in the circular neighborhood of the central pixel, g_{p-1} that of pixel p − 1, and g_{P-1} that of pixel P − 1.
When the cyclic binary code corresponding to a local binary pattern contains at most 2 transitions between 0 and 1, the pattern is marked as a uniform ("equivalent") pattern class; all patterns outside the uniform classes are grouped into a single class, called the mixed pattern class. Combining the two improvements of rotation invariance and the uniform pattern, the rotation-invariant uniform local binary pattern LBP^{riu2}_{P,R} is defined as in equation (10),

$$\mathrm{LBP}^{riu2}_{P,R} = \begin{cases} \displaystyle\sum_{p=0}^{P-1} s\bigl(g_p - g_c\bigr), & U\bigl(\mathrm{LBP}_{P,R}\bigr) \le 2 \\ P + 1, & \text{otherwise} \end{cases} \qquad (10)$$
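Equations (6)–(10) combine into a short per-pixel routine; the sketch below assumes P = 8, R = 1 with the circle sampled at the 8 integer neighbors (the patent leaves P and R free):

```python
import numpy as np

def lbp_riu2(img: np.ndarray, u: int, v: int) -> int:
    """Rotation-invariant uniform LBP (eq. 10) at interior pixel (u, v), P=8, R=1."""
    # 8 neighbors ordered around the circle (row, col offsets)
    offs = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]
    gc = float(img[u, v])
    s = [1 if float(img[u + du, v + dv]) >= gc else 0 for du, dv in offs]  # eqs. (6)-(7)
    U = sum(abs(s[p] - s[p - 1]) for p in range(len(s)))  # eq. (9); p=0 wraps to s[-1]
    return sum(s) if U <= 2 else len(s) + 1               # eq. (10): P+1 for mixed patterns
```

For uniform patterns the value is simply the number of 1 bits, which is unchanged by rotating the circular code, so the minimum over rotations of equation (8) is implicit here.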
(3) classification and discrimination
Based on the texture feature description obtained for each sub-image block, the invention adopts a support vector machine as the classifier to judge the sub-image type. The linear support vector machine is defined as

$$y = g\bigl(\mathbf{w}^{T}\mathbf{x} + b\bigr) \qquad (11)$$

where y ∈ {−1, 1} is the output class label, x is the input feature vector, w is the weight vector, b is the offset, and the threshold function g(·) outputs −1 when its argument is negative and 1 when it is positive. The linear support vector machine determines the optimal classification plane by maximizing the classification margin, as shown in FIG. 5. Based on labeled training samples, the parameters w and b are solved by the Lagrange multiplier method and the sequential minimal optimization algorithm;
In order to handle nonlinear classification problems, a kernel function method is introduced to generalize the linear support vector machine to the nonlinear case; the invention adopts the radial basis kernel function

$$K\bigl(\mathbf{x}, \mathbf{x}_z\bigr) = \exp\!\left(-\frac{\lVert \mathbf{x} - \mathbf{x}_z \rVert^{2}}{2\sigma^{2}}\right) \qquad (12)$$

where σ is the standard deviation parameter of the radial basis function, x is the input feature vector, and x_z is the feature vector corresponding to the z-th support vector. To adapt to noise interference, slack variables ξ_z ≥ 0 are introduced on top of the basic support vector machine, and w, b and ξ_z are solved jointly in the optimization;
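A sketch of the classifier stage with scikit-learn, whose soft-margin C-SVC implements the slack-variable formulation and whose RBF kernel matches equation (12) with gamma = 1/(2σ²); the feature scaling step and the values of σ and C are assumptions:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_cloud_classifier(X: np.ndarray, y: np.ndarray,
                           sigma: float = 1.0, C: float = 10.0):
    """Train an RBF-kernel SVM (eqs. 11-12) on labeled sub-image texture features.
    X: (n_samples, n_features) descriptors; y: +1 for cloud, -1 for ground objects."""
    gamma = 1.0 / (2.0 * sigma ** 2)   # sklearn's gamma expressed via the sigma of eq. (12)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma=gamma, C=C))
    clf.fit(X, y)
    return clf
```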
Compared with the prior art, the invention has the advantages that:
(1) based on the idea of feature classification, the method realizes cloud layer region detection by using the texture difference between the cloud layer region and the underlying surface ground object region in the downward-looking infrared image; the algorithm steps are simple and clear, the computational complexity is low, and the method is easy to apply;
(2) in order to adapt to the complexity of the cloud layer shape and improve the accuracy and efficiency of judgment, the invention performs adaptive sub-image division on the input image and takes the sub-image block as the primitive for subsequent processing.
Drawings
FIG. 1 is a flow chart of the texture classification-based method for detecting cloud layer regions in a downward-looking infrared image;
FIG. 2 is a schematic diagram of image sub-graph partitioning;
FIG. 3 is a schematic view of gray level co-occurrence matrix angles;
FIG. 4 is an example of local binary pattern extraction in a 3 × 3 neighborhood;
FIG. 5 is a linear support vector machine classification schematic.
Detailed Description
The embodiments of the present invention are described in further detail below:
(1) image pre-processing
The input image is preprocessed by sequentially performing median filtering, dynamic range adjustment and block division;
The median filter is a nonlinear filter: as shown in equation (1), when a pixel in the image is median-filtered, the pixels inside the filter mask centered on it are sorted by gray value and the median is assigned to that pixel,

$$f(u,v) = \underset{(s,t)\in S_{uv}}{\mathrm{median}}\; I(u+s,\, v+t) \qquad (1)$$

where I is the input image, f is the filtered image, S_{uv} is the mask region corresponding to the pixel (u, v) to be filtered, and s and t are the relative abscissa and ordinate of pixels within S_{uv};
In order to eliminate the adverse interference caused by dynamic range differences, the invention adjusts the dynamic range of the input image using a gray level normalization operation, computed as:

$$\hat{f}(u,v) = 128 + \frac{128}{k\,\sigma}\bigl(f(u,v) - \bar{f}\bigr) \qquad (2)$$

$$\bar{f} = \frac{1}{MN}\sum_{u=1}^{M}\sum_{v=1}^{N} f(u,v), \qquad \sigma = \sqrt{\frac{1}{MN}\sum_{u=1}^{M}\sum_{v=1}^{N}\bigl(f(u,v) - \bar{f}\bigr)^{2}} \qquad (3)$$

where f is the median-filtered image, and $\bar{f}$ and $\sigma$ are the gray level mean and mean square deviation of f; equation (2) maps the input image to $\hat{f}$, whose gray levels have mean 128 and variance $(128/k)^2$; k can range over [3, 5];
In block division, the sub-image size must be determined according to the image resolution: if it is too large, the accuracy of the result cannot be guaranteed; if it is too small, the number of pixels is insufficient to obtain a stable feature extraction result. For an input image of size M × N, the sub-image size M_b × N_b and the classification accuracy d% that needs to be achieved satisfy the relationship:

$$M_b \times N_b \le \bigl\lceil (1 - d\%)\, M \times N \bigr\rceil \qquad (4)$$

where ⌈·⌉ is a round-up operation. For an input image of size 320 × 256, if the required classification accuracy is 95% and the sub-images are square, the sub-image size is 64 × 64 and the total number of sub-images is 20; the partitioning result is shown in FIG. 2.
(2) Texture feature extraction
For each image sub-block, the gray level co-occurrence matrix correlation features and the rotation-invariant uniform local binary pattern features are extracted for subsequent classification judgment;
The parameters involved in the gray level co-occurrence matrix computation are the direction θ and the step length d; θ is generally taken as one of the four directions 0°, 45°, 90° and 135°, as shown in FIG. 3. Each direction corresponds to one matrix, giving four matrices in total; for the input image I, the gray level co-occurrence matrix elements in each direction are computed as in equation (5):

$$p(i, j \mid d, \theta) = \#\bigl\{\,[(k,l),(m,n)]\;\big|\; I(k,l) = i,\; I(m,n) = j,\; (m,n)\ \text{at distance}\ d\ \text{in direction}\ \theta\ \text{from}\ (k,l) \,\bigr\} \qquad (5)$$
The relevant statistics of the gray level co-occurrence matrix are then computed: entropy, energy, homogeneity, contrast, cluster tendency, cluster shade, correlation, information measure of correlation 1, information measure of correlation 2, and maximum probability;
The rotation-invariant uniform local binary pattern is defined as shown in equation (10),

$$\mathrm{LBP}^{riu2}_{P,R} = \begin{cases} \displaystyle\sum_{p=0}^{P-1} s\bigl(g_p - g_c\bigr), & U\bigl(\mathrm{LBP}_{P,R}\bigr) \le 2 \\ P + 1, & \text{otherwise} \end{cases} \qquad (10)$$
(3) classification and discrimination
The category of each image sub-block is judged with the pre-trained support vector machine model according to the texture feature description extracted in step (2), thereby completing the detection of the cloud layer region in the image.
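Tying the three steps together, a minimal end-to-end sketch of the detection flow; preprocess, partition, glcm, glcm_stats and lbp_riu2 are the hypothetical helpers sketched earlier, clf is a trained classifier, and the concatenated feature layout is an assumption:

```python
import numpy as np

def detect_cloud_blocks(image: np.ndarray, clf, side: int = 64):
    """Return the (u, v) origins of sub-image blocks classified as cloud (+1)."""
    img = preprocess(image)                                   # step (1): preprocessing
    cloud_blocks = []
    for (u, v, h, w) in partition(*img.shape, side):
        block = img[u:u + h, v:v + w]
        feats = []                                            # step (2): texture features
        for dx, dy in [(0, 1), (-1, 1), (-1, 0), (-1, -1)]:   # 0/45/90/135 deg, d = 1
            feats.extend(glcm_stats(glcm(block, dx, dy)).values())
        codes = [lbp_riu2(block, i, j)                        # riu2 LBP histogram, P = 8
                 for i in range(1, h - 1) for j in range(1, w - 1)]
        hist = np.bincount(codes, minlength=10)
        feats.extend(hist / hist.sum())
        if clf.predict([feats])[0] == 1:                      # step (3): SVM judgment
            cloud_blocks.append((u, v))
    return cloud_blocks
```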
The above is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above-mentioned embodiments, and all technical solutions belonging to the idea of the present invention belong to the protection scope of the present invention. It should be noted that modifications and embellishments within the scope of the invention may be made by those skilled in the art without departing from the principle of the invention.

Claims (3)

1. A method for detecting a cloud layer region in a downward-looking infrared image based on texture classification, which completes cloud layer region detection by using the texture difference between the cloud layer region and the underlying surface ground object region in the downward-looking infrared image, characterized in that:
performing adaptive sub-image division on an input image and processing each sub-image block independently; describing texture features by gray level co-occurrence matrix correlation features and rotation-invariant uniform local binary pattern features; judging, according to the extracted texture feature description of each sub-image block and based on a pre-trained support vector machine model, whether the sub-image block is a cloud layer region, thereby completing the detection of the cloud layer region in the image, wherein the steps comprise:
(1) image preprocessing: comprises filtering processing, image dynamic range adjustment and sub-image block self-adaptive division,
filtering the downward-looking infrared image with a median filtering algorithm: when a pixel in the image is median-filtered, the pixels inside the filter mask centered on it are sorted by gray value and the median is assigned to that pixel,

$$f(u,v) = \underset{(s,t)\in S_{uv}}{\mathrm{median}}\; I(u+s,\, v+t) \qquad (1)$$

where I is the input image, f is the filtered image, S_{uv} is the mask region corresponding to the pixel (u, v) to be filtered, and s and t are the relative abscissa and ordinate of pixels within S_{uv};
in order to eliminate the interference caused by dynamic range differences, the dynamic range of the input image is adjusted using a gray level normalization operation, computed as:

$$\hat{f}(u,v) = 128 + \frac{128}{k\,\sigma}\bigl(f(u,v) - \bar{f}\bigr) \qquad (2)$$

$$\bar{f} = \frac{1}{MN}\sum_{u=1}^{M}\sum_{v=1}^{N} f(u,v), \qquad \sigma = \sqrt{\frac{1}{MN}\sum_{u=1}^{M}\sum_{v=1}^{N}\bigl(f(u,v) - \bar{f}\bigr)^{2}} \qquad (3)$$

where f is the median-filtered image, and $\bar{f}$ and $\sigma$ are the gray level mean and mean square deviation of f; equation (2) maps the input image to $\hat{f}$, whose gray levels have mean 128 and variance $(128/k)^2$; k is a constant usually selected in the range [3, 5];
performing an adaptive division operation into sub-image blocks and using the sub-image as the basic processing unit to judge whether each sub-image belongs to the cloud layer area; in image blocking, if the sub-image size is too large the accuracy of the result cannot be guaranteed, and if it is too small the number of pixels is insufficient to obtain a stable feature extraction result; for an input image of size M × N, the sub-image size M_b × N_b and the classification accuracy d% that needs to be achieved satisfy:

$$M_b \times N_b \le \bigl\lceil (1 - d\%)\, M \times N \bigr\rceil \qquad (4)$$

where ⌈·⌉ is a round-up operation;
(2) extracting texture features: processing each sub-image block independently, and extracting the gray level co-occurrence matrix correlation features and the rotation-invariant uniform local binary pattern features of the sub-image block as its texture feature description;
(3) classification and judgment: for each sub-image block, judging its type with the pre-trained support vector machine model according to the extracted texture feature description, thereby detecting the cloud layer area in the image.
2. The method for detecting the cloud layer region in the downward-looking infrared image based on the texture classification as claimed in claim 1, wherein the texture feature extraction is performed by using a gray level co-occurrence matrix correlation feature and a rotation-invariant uniform local binary pattern feature:
the gray level co-occurrence matrix is defined from the joint probability density of the gray values of pixels at two positions in the image; denote it by P = [p(i, j | d, θ)]_{L×L}: it describes the probability p(i, j) that a pixel pair with gray levels i and j occurs at distance step d in direction θ, where i, j = 0, 1, 2, …, L − 1 are the gray values of the pixels and L is the number of gray levels of the image; the solving process shows that the gray level co-occurrence matrix is a symmetric matrix; the direction θ is generally taken as one of the four directions 0°, 45°, 90° and 135°, each direction corresponding to one matrix, giving four matrices for the four directions; for the input image I, the gray level co-occurrence matrix elements in each direction are computed as in equation (5):

$$p(i, j \mid d, \theta) = \#\bigl\{\,[(k,l),(m,n)]\;\big|\; I(k,l) = i,\; I(m,n) = j,\; (m,n)\ \text{at distance}\ d\ \text{in direction}\ \theta\ \text{from}\ (k,l) \,\bigr\} \qquad (5)$$

where # denotes the number of elements in the set, and (k, l) and (m, n) are the horizontal and vertical coordinates of the positions of the two compared pixels;
the basic principle of the local binary pattern is to threshold the pixels in a neighborhood against the central pixel, obtaining a binary code sequence, and to take the decimal number corresponding to this sequence as the local binary pattern value LBP_{P,R} of the point, as shown in equation (6),

$$\mathrm{LBP}_{P,R} = \sum_{p=0}^{P-1} s\bigl(g_p - g_c\bigr)\, 2^{p} \qquad (6)$$

where g_c is the gray value of the central pixel (u_c, v_c), g_p is the gray value of its p-th neighborhood pixel, p indexes the pixels in the neighborhood, and s(·) is the sign function

$$s(x) = \begin{cases} 1, & x \ge 0 \\ 0, & x < 0 \end{cases} \qquad (7)$$
in order to adapt to rotation, the binary code sequence obtained from the circular neighborhood is joined end to end into a cyclic code; rotating the circular neighborhood yields a series of corresponding local binary pattern values, and the minimum is taken as the rotation-invariant local binary pattern feature LBP^{ri}_{P,R} of the neighborhood, see equation (8),

$$\mathrm{LBP}^{ri}_{P,R} = \min\bigl\{\, \mathrm{ROR}\bigl(\mathrm{LBP}_{P,R},\, l\bigr) \;\big|\; l = 0, 1, \ldots, P-1 \,\bigr\} \qquad (8)$$

where ri denotes rotation invariance, ROR(LBP_{P,R}, l) denotes rotating the P-bit number LBP_{P,R} clockwise by l bits, P is the number of sampling points, and R is the radius of the circular neighborhood;
the number of transitions U between 0 and 1 in the binary representation of the local binary pattern is defined as

$$U\bigl(\mathrm{LBP}_{P,R}\bigr) = \bigl| s(g_{P-1} - g_c) - s(g_0 - g_c) \bigr| + \sum_{p=1}^{P-1} \bigl| s(g_p - g_c) - s(g_{p-1} - g_c) \bigr| \qquad (9)$$

where g_0 denotes the gray value of pixel 0 in the circular neighborhood of the central pixel, g_{p-1} that of pixel p − 1, and g_{P-1} that of pixel P − 1;
when the cyclic binary code corresponding to a local binary pattern contains at most 2 transitions between 0 and 1, the pattern is marked as a uniform ("equivalent") pattern class, and all patterns outside the uniform classes are grouped into another class, called the mixed pattern class; combining the two improvements of rotation invariance and the uniform pattern, the rotation-invariant uniform local binary pattern LBP^{riu2}_{P,R} is defined as in equation (10),

$$\mathrm{LBP}^{riu2}_{P,R} = \begin{cases} \displaystyle\sum_{p=0}^{P-1} s\bigl(g_p - g_c\bigr), & U\bigl(\mathrm{LBP}_{P,R}\bigr) \le 2 \\ P + 1, & \text{otherwise} \end{cases} \qquad (10)$$
3. The method of claim 1, wherein the classification decision uses a support vector machine as the classifier to judge the sub-image type, the linear support vector machine being defined as

$$y = g\bigl(\mathbf{w}^{T}\mathbf{x} + b\bigr) \qquad (11)$$

where y ∈ {−1, 1} is the output class label, x is the input feature vector, w is the weight vector, b is the offset, and the threshold function g(·) outputs −1 when its argument is negative and 1 when it is positive; the linear support vector machine determines the optimal classification surface by maximizing the classification margin, and based on labeled training samples the parameters w and b are solved by the Lagrange multiplier method and the sequential minimal optimization algorithm;
in order to handle the nonlinear classification problem, a kernel function method is introduced to generalize the linear support vector machine to the nonlinear case, adopting the radial basis kernel function

$$K\bigl(\mathbf{x}, \mathbf{x}_z\bigr) = \exp\!\left(-\frac{\lVert \mathbf{x} - \mathbf{x}_z \rVert^{2}}{2\sigma^{2}}\right) \qquad (12)$$

where σ is the standard deviation parameter of the radial basis function, x is the input feature vector, and x_z is the feature vector corresponding to the z-th support vector; to adapt to noise interference, slack variables ξ_z ≥ 0 are introduced on top of the basic support vector machine, and w, b and ξ_z are solved jointly in the optimization.
CN201910148702.7A 2019-02-28 2019-02-28 Texture classification-based method for detecting cloud layer area in downward-looking infrared image Active CN109829511B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910148702.7A CN109829511B (en) 2019-02-28 2019-02-28 Texture classification-based method for detecting cloud layer area in downward-looking infrared image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910148702.7A CN109829511B (en) 2019-02-28 2019-02-28 Texture classification-based method for detecting cloud layer area in downward-looking infrared image

Publications (2)

Publication Number Publication Date
CN109829511A (en) 2019-05-31
CN109829511B (en) 2022-04-08

Family

ID=66864721

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910148702.7A Active CN109829511B (en) 2019-02-28 2019-02-28 Texture classification-based method for detecting cloud layer area in downward-looking infrared image

Country Status (1)

Country Link
CN (1) CN109829511B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111753689A (en) * 2020-06-15 2020-10-09 珠海格力电器股份有限公司 Image texture feature extraction method and image identification method
CN114724188B (en) * 2022-05-23 2022-08-16 北京圣点云信息技术有限公司 Vein identification method and device based on gray level co-occurrence matrix

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103034858A (en) * 2012-11-30 2013-04-10 宁波大学 Secondary clustering segmentation method for satellite cloud picture
CN107886490A (en) * 2018-01-14 2018-04-06 中国人民解放军国防科技大学 Offshore sea area azimuth ambiguity removing method based on double-temporal SAR image

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8532360B2 (en) * 2010-04-20 2013-09-10 Atheropoint Llc Imaging based symptomatic classification using a combination of trace transform, fuzzy technique and multitude of features

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103034858A (en) * 2012-11-30 2013-04-10 宁波大学 Secondary clustering segmentation method for satellite cloud picture
CN107886490A (en) * 2018-01-14 2018-04-06 中国人民解放军国防科技大学 Offshore sea area azimuth ambiguity removing method based on double-temporal SAR image

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Fabric weave structure classification based on fused LBP and GLCM features; Jing Junfeng et al.; Journal of Electronic Measurement and Instrumentation; 2015-09-15 (No. 09); full text *
Study of Enteromorpha Idendification Based on Machine Learning Technology; Xueying MA et al.; 2018 OCEANS - MTS/IEEE Kobe Techno-Oceans (OTO); 2018-12-06; full text *
Cloud type recognition method based on visible-infrared image information fusion; Zhang Chi et al.; Journal of Meteorology and Environment; 2018-02-15 (No. 01); full text *

Also Published As

Publication number Publication date
CN109829511A (en) 2019-05-31

Similar Documents

Publication Publication Date Title
CN108961235B (en) Defective insulator identification method based on YOLOv3 network and particle filter algorithm
CN108573276B (en) Change detection method based on high-resolution remote sensing image
CN113160192B (en) Visual sense-based snow pressing vehicle appearance defect detection method and device under complex background
CN109271991B (en) License plate detection method based on deep learning
CN103049763B (en) Context-constraint-based target identification method
CN111753828B (en) Natural scene horizontal character detection method based on deep convolutional neural network
CN110264448B (en) Insulator fault detection method based on machine vision
CN104182763B (en) A kind of floristics identifying system based on flower feature
CN107633226B (en) Human body motion tracking feature processing method
CN107038416B (en) Pedestrian detection method based on binary image improved HOG characteristics
CN109801305B (en) SAR image change detection method based on deep capsule network
CN111539330B (en) Transformer substation digital display instrument identification method based on double-SVM multi-classifier
CN112307919B (en) Improved YOLOv 3-based digital information area identification method in document image
CN110766016A (en) Code spraying character recognition method based on probabilistic neural network
CN109829511B (en) Texture classification-based method for detecting cloud layer area in downward-looking infrared image
CN111091071B (en) Underground target detection method and system based on ground penetrating radar hyperbolic wave fitting
CN115272225A (en) Strip steel surface defect detection method and system based on countermeasure learning network
CN111046838A (en) Method and device for identifying wetland remote sensing information
CN114494704A (en) Method and system for extracting framework from binary image in anti-noise manner
Shire et al. A review paper on: agricultural plant leaf disease detection using image processing
CN112101283A (en) Intelligent identification method and system for traffic signs
CN105512682B (en) A kind of security level identification recognition methods based on Krawtchouk square and KNN-SMO classifier
Günen et al. A novel edge detection approach based on backtracking search optimization algorithm (BSA) clustering
CN109785318B (en) Remote sensing image change detection method based on facial line primitive association constraint
CN115063679B (en) Pavement quality assessment method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant