CN109829511A - Texture classification-based method for detecting cloud layer area in downward-looking infrared image - Google Patents


Info

Publication number
CN109829511A
CN109829511A (application CN201910148702.7A)
Authority
CN
China
Prior art keywords
pixel
image
cloud layer
formula
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910148702.7A
Other languages
Chinese (zh)
Other versions
CN109829511B (en)
Inventor
于起峰
尚洋
刘肖琳
孙晓亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University of Defense Technology filed Critical National University of Defense Technology
Priority to CN201910148702.7A priority Critical patent/CN109829511B/en
Publication of CN109829511A publication Critical patent/CN109829511A/en
Application granted granted Critical
Publication of CN109829511B publication Critical patent/CN109829511B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to a texture-classification-based method for detecting cloud regions in a downward-looking infrared image. The method treats the detection of cloud regions in a downward-looking infrared image as a texture-classification problem between cloud regions and underlying-surface ground-object regions. First, the input image is preprocessed with median filtering and a contrast-enhancement algorithm to suppress noise interference and improve image contrast. Second, image features are extracted: to cope with the complexity of cloud shapes and improve discrimination accuracy, the invention divides the input image into adaptive sub-images, processes each sub-image independently, and extracts gray-level co-occurrence matrix features and rotation-invariant uniform local binary patterns. Finally, a pre-trained support vector machine judges the type of each image sub-graph according to the extracted features, thereby detecting the cloud regions in the image.

Description

Texture-classification-based method for detecting cloud regions in downward-looking infrared images
Technical field
The invention relates mainly to the fields of computer vision and machine learning, and in particular to a method that detects cloud regions in downward-looking infrared images by exploiting the difference in texture properties between cloud regions and underlying-surface ground-object regions.
Technical background
In a terminal infrared guidance system, when the system is operating, a cloud layer lying between the vehicle and the target can occlude the ground target region and form a "blind area", which easily causes matching errors and can lead to failure of the terminal-guidance mission. Cloud detection must therefore be introduced during terminal guidance, and the guidance workflow formulated according to the distribution of clouds in the image, so as to minimize the adverse effect of clouds on the terminal matching process. For example, if most of the image is covered by cloud and the remaining valid area is insufficient to support stable matching and localization, no matching is performed on the current data and guidance relies on other means; if only a small part of the image is cloud-covered, the covered region can be masked out and matching performed on the region not contaminated by cloud. Accurately and efficiently detecting cloud regions in downward-looking infrared images is thus an important problem in terminal infrared guidance systems and is of great significance for improving guidance performance.
The present invention targets cloud-region detection in the downward-looking infrared images acquired during terminal infrared guidance, for which few related results have been published. The cloud-detection techniques that are currently widely studied and relatively mature are concentrated in the remote-sensing and meteorological fields. Remote sensing is mainly concerned with detecting and removing cloud regions and their corresponding shadow regions in remote-sensing images, eliminating the occlusion of ground objects by cloud and shadow and providing a basis for subsequent analysis and applications. In meteorology, cloud detection is mainly used to determine cloud amount, cloud type and other meteorological properties, providing inputs for climate analysis and weather forecasting; the image data are obtained mainly by meteorological stations, although imagery from remote-sensing satellites is also applied in this field. The detection of cloud regions in downward-looking infrared images addressed here can essentially be regarded as a texture-classification problem, i.e. sub-graph regions of the image are judged to be underlying-surface ground scene or cloud according to their texture properties; in this respect it is quite similar to the image-feature-based cloud-detection approaches of remote sensing and meteorology. In remote sensing, cloud detection essentially determines the cloud part of the image from the difference in texture between cloud regions and ground-object regions; in meteorology, the cloud-detection module judges cloud amount and cloud type, first locating the cloud regions in the image, i.e. separating cloud regions from non-cloud regions, and then discriminating cloud type from the texture of the cloud regions.
Summary of the invention
This patent addresses the problem of detecting cloud regions in downward-looking infrared images and proposes a cloud-region detection method based on texture-feature classification: cloud-region detection is completed by exploiting the texture difference between cloud regions and underlying-surface ground-object regions in the downward-looking infrared image. To cope with the complexity and variability of cloud shapes and improve discrimination accuracy, the invention adaptively divides the input image into sub-images and processes each sub-image block independently; texture is described with gray-level co-occurrence matrix features and rotation-invariant uniform local binary pattern features; according to the texture description extracted for each sub-image block, a pre-trained support vector machine model judges whether the block is a cloud region, thereby completing the detection of cloud regions in the image.
1. Implementation process of the texture-classification-based cloud-region detection method for downward-looking infrared images
The implementation process of the invention is shown in Fig. 1 and proceeds as follows:
(1) Image preprocessing: the input image is preprocessed with median filtering and a contrast-enhancement algorithm to suppress noise interference and improve image contrast; to cope with the variability of cloud shapes and improve discrimination accuracy, the invention divides the input image into sub-images, with the sub-image blocks serving as the primitives for subsequent processing;
(2) Texture feature extraction: each sub-image block is processed independently, and the gray-level co-occurrence matrix features and rotation-invariant uniform local binary patterns of the block are extracted as its texture description;
(3) Classification: for each sub-image block, according to the extracted texture description, a pre-trained support vector machine model judges the block type, thereby detecting the cloud regions in the image.
2. The texture-classification-based cloud-region detection method for downward-looking infrared images
(1) image preprocessing
Image preprocessing comprises filtering, image dynamic-range adjustment and adaptive sub-image division, introduced in turn below:
Considering the noise characteristics of downward-looking infrared images in the targeted application, together with filtering performance and efficiency, the present invention selects a median filter to filter the downward-looking infrared image. The median filter is a nonlinear filter: when median-filtering a pixel of an image, the pixel being processed and its neighborhood pixels within the mask are sorted by gray value, the median is found, and it is assigned to that pixel, as in formula (1):
f(u, v) = median{ I(s, t) : (s, t) ∈ S_uv }    (1)
where I is the input image, f is the filtered image, S_uv is the mask region of the pixel (u, v) being filtered, and s and t are the horizontal and vertical coordinates of pixels within S_uv;
To eliminate the adverse interference caused by differences in dynamic range, the present invention adjusts the dynamic range of the input image. A gray-level normalization operation is used to adjust the image dynamic range, computed as in formula (2),
where f is the image after median filtering, and f̄ and σ are the gray mean and mean-square deviation of f; formula (2) maps the input image to a normalized image whose gray mean is 128 and whose variance is controlled by k; k is a constant, commonly taking values in the range [3, 5];
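As a concrete illustration of the two preprocessing operations, the sketch below implements a 3*3 median filter and one plausible form of the gray normalization. Since the exact expression of formula (2) is not reproduced in this text, the mapping used here (linear rescaling to mean 128 with spread set by k) is an assumption, not the patented formula:

```python
import numpy as np

def median_filter_3x3(image):
    # Formula (1): replace each pixel by the median of its 3x3 mask S_uv.
    p = np.pad(image.astype(np.float64), 1, mode="edge")
    h, w = image.shape
    stack = np.stack([p[i:i + h, j:j + w] for i in range(3) for j in range(3)])
    return np.median(stack, axis=0)

def gray_normalize(f, k=4):
    # One plausible reading of formula (2): map the filtered image to gray
    # mean 128, with k (commonly in [3, 5]) controlling the output spread.
    mean, std = f.mean(), f.std()
    g = 128.0 + 128.0 * (f - mean) / (k * std + 1e-12)
    return np.clip(g, 0.0, 255.0)
```

With k = 4, pixels within four standard deviations of the mean stay inside [0, 255], so clipping rarely triggers and the output mean stays near 128.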
Cloud shapes are irregular and highly variable. To detect cloud regions in the image more accurately, the present invention adaptively divides the image into sub-image blocks and judges each sub-image as cloud or non-cloud, with the sub-image as the basic processing unit. In blocking the image, the sub-image size must be determined from the image resolution: a sub-image that is too large cannot guarantee the accuracy of the result, while one that is too small contains too few pixels to yield a stable feature-extraction result. For an input image of size M*N, the sub-image size Mb*Nb and the required classification accuracy d% satisfy the following relationship:
where ⌈·⌉ denotes the round-up (ceiling) operation. For an input image of size 320*256, to reach a classification accuracy of 95% while keeping the sub-image width and height comparable, a sub-image size of 64*64 can be used, giving 20 sub-images in total; the division result is shown in Fig. 2.
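Under the sizes quoted above (a 320*256 image cut into 64*64 sub-images, giving 20 blocks), the division can be sketched as follows; the ceiling-based handling of image sizes that are not exact multiples of the block size is not reproduced here, since that formula is not shown in this text:

```python
import numpy as np

def split_into_subgraphs(image, mb=64, nb=64):
    # Divide an M x N image into non-overlapping Mb x Nb sub-image blocks;
    # each block is the basic unit for feature extraction and classification.
    m, n = image.shape
    return [image[u:u + mb, v:v + nb]
            for u in range(0, m - mb + 1, mb)
            for v in range(0, n - nb + 1, nb)]
```

For a 320*256 image this yields 5 * 4 = 20 blocks, matching the example in the text.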
(2) texture feature extraction
The present invention extracts texture features using gray-level co-occurrence matrix features and rotation-invariant uniform local binary pattern features:
The gray-level co-occurrence matrix is defined from the joint probability density of the gray values of pixels at two positions in the image; it describes both the distribution of gray levels and the spatial distribution of pixels with similar gray values, and belongs to the second-order statistical description of gray-level variation in the image. Denote the gray-level co-occurrence matrix P = [p(i, j | d, θ)]_{L×L}: it records the probability p(i, j) that a pixel pair with gray levels i and j occurs in direction θ at distance (step length) d, where i, j = 0, 1, 2, ..., L−1 are pixel gray levels and L is the number of gray levels of the image. From its construction, the gray-level co-occurrence matrix is a symmetric matrix. The direction θ usually takes the four directions 0°, 45°, 90° and 135°, as shown in Fig. 3. Each direction corresponds to one matrix, so four matrices are obtained for the four directions. For an input image I, the co-occurrence matrix elements for each direction are computed as in formula (5), i.e. by counting the qualifying pixel pairs over the image,
where (k, l) and (m, n) are the horizontal and vertical coordinates of the two pixels being compared.
In texture analysis, the computed gray-level co-occurrence matrix is not used directly; rather, statistics computed from it serve as the texture descriptors. The present invention uses ten statistics in total: entropy, energy, homogeneity, contrast, cluster tendency, cluster shape, correlation, information measure of correlation 1, information measure of correlation 2, and maximum probability;
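A minimal numpy sketch of the co-occurrence counting of formula (5) and five of the ten statistics (entropy, energy, contrast, homogeneity, maximum probability); quantizing the gray values to 8 levels is an illustrative choice, not a value taken from the text, and the remaining five statistics follow the same pattern:

```python
import numpy as np

def glcm(image, dy, dx, levels=8):
    # Formula (5): count pixel pairs with gray levels (i, j) at offset
    # (dy, dx), then symmetrize and normalize to probabilities p(i, j).
    q = np.clip((image.astype(int) * levels) // 256, 0, levels - 1)
    h, w = q.shape
    a = q[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    b = q[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    P = np.zeros((levels, levels))
    np.add.at(P, (a.ravel(), b.ravel()), 1.0)
    P = P + P.T                      # the GLCM is symmetric, as noted above
    return P / P.sum()

def glcm_stats(P):
    # Five of the ten statistics used by the method, as examples.
    i, j = np.indices(P.shape)
    nz = P[P > 0]
    return {
        "entropy": float(-np.sum(nz * np.log(nz))),
        "energy": float(np.sum(P ** 2)),
        "contrast": float(np.sum((i - j) ** 2 * P)),
        "homogeneity": float(np.sum(P / (1.0 + np.abs(i - j)))),
        "max_probability": float(P.max()),
    }
```

The four directions of Fig. 3 correspond to offsets (0, 1), (-1, 1), (-1, 0) and (-1, -1) at step length 1.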
The basic principle of local binary patterns is to threshold-compare a pixel of the image with the pixels in its neighborhood, obtaining a binary coding sequence whose corresponding decimal number is taken as the local binary pattern value LBP_{P,R} of that point, as in formula (6):
LBP_{P,R} = Σ_{p=0..P−1} s(g_p − g_c) · 2^p    (6)
where g_c is the gray value of the center pixel (u_c, v_c), g_p is the gray value of its p-th neighborhood pixel, p indexes the neighborhood pixels, and s(·) is the sign function:
s(x) = 1 if x ≥ 0, and 0 otherwise    (7)
Fig. 4 shows an example of computing the local binary pattern in a 3*3 neighborhood. To adapt to rotation, the binary code obtained on the circular neighborhood is joined head-to-tail into a cyclic code; rotating the circular neighborhood yields a series of corresponding local binary pattern values, and the minimum among them is taken as the rotation-invariant local binary pattern feature of the neighborhood, see formula (8):
LBP^{ri}_{P,R} = min{ ROR(LBP_{P,R}, l) : l = 0, 1, ..., P−1 }    (8)
where ri denotes rotation invariance, ROR(LBP_{P,R}, l) denotes circularly rotating the P-bit number LBP_{P,R} clockwise by l bits, P is the number of sampling points, and R is the radius of the circular neighborhood.
The number of jumps U between 0 and 1 in the binary representation of a local binary pattern is defined as follows:
U(LBP_{P,R}) = |s(g_{P−1} − g_c) − s(g_0 − g_c)| + Σ_{p=1..P−1} |s(g_p − g_c) − s(g_{p−1} − g_c)|    (9)
where g_0 is the gray value of pixel 0 in the circular neighborhood of the center pixel, g_{p−1} is that of pixel (p−1), and g_{P−1} is that of pixel (P−1).
When the cyclic binary code corresponding to a local binary pattern contains no more than 2 transitions between 0 and 1, the pattern is recorded as a "uniform pattern class"; all patterns other than the uniform pattern classes are grouped into one additional class, the "mixed pattern class". Combining the rotation-invariance and "uniform pattern" improvements, the rotation-invariant uniform local binary pattern is defined as in formula (10):
LBP^{riu2}_{P,R} = Σ_{p=0..P−1} s(g_p − g_c) if U(LBP_{P,R}) ≤ 2, and P + 1 otherwise    (10)
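The chain from formula (6) to formula (10) can be sketched for the 3*3 case (P = 8, R = 1) as below; the circular ordering of the eight neighbors is an implementation choice:

```python
import numpy as np

def lbp_riu2_8_1(block):
    # Rotation-invariant uniform LBP for P = 8, R = 1 (3x3 neighborhood):
    # threshold the 8 neighbors against the center (the s(.) of formula (6)),
    # count the 0/1 jumps U around the circle (formula (9)), then map
    # uniform patterns (U <= 2) to their number of set bits and all
    # "mixed mode" patterns to P + 1 = 9 (formula (10)).
    h, w = block.shape
    c = block[1:h - 1, 1:w - 1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]   # circular order
    bits = np.stack([(block[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx] >= c)
                     .astype(int) for dy, dx in offsets])
    u = np.abs(bits - np.roll(bits, 1, axis=0)).sum(axis=0)
    return np.where(u <= 2, bits.sum(axis=0), 9)
```

A histogram of these values over a sub-image block (P + 2 = 10 bins) would give the LBP part of the block's feature vector.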
(3) discriminant classification
Based on the obtained texture description of the sub-image blocks, the present invention uses a support vector machine as the classifier to judge the sub-image type. A linear support vector machine is defined as
y = g(w^T x + b)    (11)
where y ∈ {−1, 1} is the output class label, x is the input feature vector, w is the weight vector, b is the offset, and the threshold function g(·) outputs −1 (argument < 0) or 1 (argument > 0). The linear support vector machine determines the optimal separating hyperplane by maximizing the margin between classes, as shown in Fig. 5. Based on labeled training samples, the parameters w and b are obtained using Lagrange multipliers and the sequential minimal optimization (SMO) algorithm;
To handle nonlinear classification problems, the linear support vector machine is generalized to the nonlinear case by introducing a kernel function; the present invention uses the radial basis function (RBF) kernel:
K(x, x_z) = exp(−‖x − x_z‖² / (2σ²))    (12)
where σ is the standard-deviation parameter of the RBF kernel, x is the input feature vector, and x_z is the feature vector of the z-th support vector. In addition, to tolerate the interference of noise, slack variables ξ_z ≥ 0 are introduced into the basic support vector machine, and w, b and ξ_z are optimized and solved jointly.
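The kernelized decision rule combining formulas (11) and (12) can be sketched as below; training the coefficients with SMO, as described above, is not shown, so the support vectors and weights in the usage example are illustrative placeholders rather than trained values:

```python
import numpy as np

def rbf_kernel(x, xz, sigma=1.0):
    # Formula (12): K(x, x_z) = exp(-||x - x_z||^2 / (2 * sigma^2)).
    return float(np.exp(-np.sum((np.asarray(x) - np.asarray(xz)) ** 2)
                        / (2.0 * sigma ** 2)))

def svm_decide(x, support_vectors, weights, b, sigma=1.0):
    # Kernel form of formula (11): sign of the weighted kernel sum plus
    # the offset b; the weights stand for trained alpha_z * y_z terms.
    s = sum(a * rbf_kernel(x, xz, sigma)
            for a, xz in zip(weights, support_vectors))
    return 1 if s + b > 0 else -1
```

For example, with two placeholder support vectors of opposite weight, a feature vector near the positive support vector is labeled +1 and one near the negative support vector is labeled -1.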
Compared with the prior art, the advantages of the present invention are as follows:
(1) The invention uses the difference in texture between cloud regions and underlying-surface ground-object regions in downward-looking infrared images to realize cloud-region detection through feature-based classification; the algorithm steps are simple and clear, the computational complexity is low, and the method is easy to apply;
(2) To cope with the variability of cloud shapes and improve discrimination accuracy and efficiency, the invention adaptively divides the input image into sub-images, with the sub-image blocks as the primitives for subsequent processing.
Brief description of the drawings
Fig. 1: flow chart of the texture-classification-based cloud-region detection method for downward-looking infrared images;
Fig. 2: schematic of the image sub-image division;
Fig. 3: schematic of the gray-level co-occurrence matrix directions;
Fig. 4: example of computing the local binary pattern in a 3*3 neighborhood;
Fig. 5: classification schematic of a linear support vector machine.
Specific embodiment
Embodiments of the present invention are described in further detail below:
(1) image preprocessing
The input image is preprocessed by applying, in turn, median filtering, dynamic-range adjustment and block division;
The median filter is a nonlinear filter, as shown in formula (1): when median-filtering a pixel of an image, the pixel being processed and its neighborhood pixels within the mask are sorted by gray value, the median is found and assigned to that pixel,
where I is the input image, f is the filtered image, S_uv is the mask region of the pixel (u, v) being filtered, and s and t are the horizontal and vertical coordinates of pixels within S_uv;
To eliminate the adverse interference caused by differences in dynamic range, the dynamic range of the input image is adjusted using a gray-level normalization operation, computed as in formula (2),
where f is the image after median filtering, and f̄ and σ are the gray mean and mean-square deviation of f; formula (2) maps the input image to a normalized image whose gray mean is 128 and whose variance is controlled by k; the value of k can range over [3, 5];
In blocking the image, the sub-image size must be determined from the image resolution: a sub-image that is too large cannot guarantee the accuracy of the result, while one that is too small contains too few pixels to yield a stable feature-extraction result. For an input image of size M*N, the sub-image size Mb*Nb and the required classification accuracy d% satisfy the following relationship:
where ⌈·⌉ denotes the round-up (ceiling) operation. For an input image of size 320*256, to reach a classification accuracy of 95% while keeping the sub-image width and height comparable, a sub-image size of 64*64 can be used, giving 20 sub-images in total; the division result is shown in Fig. 2.
(2) texture feature extraction
For each image sub-block, the gray-level co-occurrence matrix features and the rotation-invariant uniform binary pattern features are extracted for the subsequent classification;
The parameters involved in computing the gray-level co-occurrence matrix are the direction θ and the step length d; θ usually takes the four directions 0°, 45°, 90° and 135°, as shown in Fig. 3. Each direction corresponds to one matrix, so four matrices are obtained for the four directions; for an input image I, the co-occurrence matrix elements for each direction are computed as in formula (5);
The gray-level co-occurrence matrix statistics are then computed: entropy, energy, homogeneity, contrast, cluster tendency, cluster shape, correlation, information measure of correlation 1, information measure of correlation 2 and maximum probability;
The rotation-invariant uniform binary pattern is defined as in formula (10);
(3) discriminant classification
According to the texture description extracted in step (2), each image sub-block is classified by the pre-trained support vector machine model, thereby completing the detection of cloud regions in the image.
The above are only preferred embodiments of the present invention; the protection scope of the present invention is not limited to the above embodiments, and all technical solutions falling under the concept of the present invention belong to its protection scope. It should be pointed out that, for those of ordinary skill in the art, several improvements and modifications made without departing from the principles of the present invention shall also be regarded as falling within the protection scope of the present invention.

Claims (3)

1. A texture-classification-based method for detecting cloud regions in a downward-looking infrared image, which completes cloud-region detection using the texture difference between cloud regions and underlying-surface ground-object regions in the downward-looking infrared image, characterized in that:
the input image is adaptively divided into sub-images and each sub-image block is processed independently; texture is described with gray-level co-occurrence matrix features and rotation-invariant uniform local binary pattern features; according to the texture description extracted for each sub-image block, a pre-trained support vector machine model judges whether the sub-image block is a cloud region, thereby completing the detection of cloud regions in the image, the steps comprising:
(1) image preprocessing, comprising filtering, image dynamic-range adjustment and adaptive sub-image division:
the downward-looking infrared image is filtered with a median filter: when median-filtering a pixel of the image, the pixel being processed and its neighborhood pixels within the mask are sorted by gray value, the median is found and assigned to that pixel,
where I is the input image, f is the filtered image, S_uv is the mask region of the pixel (u, v) being filtered, and s and t are the horizontal and vertical coordinates of pixels within S_uv;
to eliminate the interference caused by differences in dynamic range, the dynamic range of the input image is adjusted using a gray-level normalization operation, computed as in formula (2),
where f is the image after median filtering, and f̄ and σ are the gray mean and mean-square deviation of f; formula (2) maps the input image to a normalized image whose gray mean is 128; k is a constant, commonly taking values in [3, 5];
the image is adaptively divided into sub-image blocks, and each sub-image is judged as cloud or non-cloud with the sub-image as the basic processing unit; in blocking the image, a sub-image that is too large cannot guarantee the accuracy of the result, while one that is too small contains too few pixels to yield a stable feature-extraction result; for an input image of size M*N, the sub-image size Mb*Nb and the required classification accuracy d% satisfy the following relationship:
where ⌈·⌉ denotes the round-up (ceiling) operation;
(2) texture feature extraction: each sub-image block is processed independently, and the gray-level co-occurrence matrix features and the rotation-invariant uniform local binary patterns of the block are extracted as its texture description;
(3) classification: for each sub-image block, according to the extracted texture description, a pre-trained support vector machine model judges the block type, thereby detecting the cloud regions in the image.
2. The texture-classification-based method for detecting cloud regions in a downward-looking infrared image according to claim 1, characterized in that, in the texture feature extraction, gray-level co-occurrence matrix features and rotation-invariant uniform local binary pattern features are used to extract the texture features:
the gray-level co-occurrence matrix is defined from the joint probability density of the gray values of pixels at two positions in the image; denote it P = [p(i, j | d, θ)]_{L×L}, which records the probability p(i, j) that a pixel pair with gray levels i and j occurs in direction θ at distance (step length) d, where i, j = 0, 1, 2, ..., L−1 are pixel gray levels and L is the number of gray levels of the image; from its construction, the matrix is symmetric; the direction θ usually takes the four directions 0°, 45°, 90° and 135°, each direction corresponding to one matrix, so four matrices are obtained for the four directions; for an input image I, the matrix elements for each direction are computed as in formula (5),
where (k, l) and (m, n) are the horizontal and vertical coordinates of the two pixels being compared;
the basic principle of local binary patterns is to threshold-compare a pixel of the image with the pixels in its neighborhood, obtaining a binary coding sequence whose corresponding decimal number is taken as the local binary pattern value LBP_{P,R} of that point, as in formula (6),
where g_c is the gray value of the center pixel (u_c, v_c), g_p is the gray value of its p-th neighborhood pixel, p indexes the neighborhood pixels, and s(·) is the sign function;
to adapt to rotation, the binary code obtained on the circular neighborhood is joined head-to-tail into a cyclic code; rotating the circular neighborhood yields a series of corresponding local binary pattern values, and the minimum among them is taken as the rotation-invariant local binary pattern feature of the neighborhood, see formula (8),
where ri denotes rotation invariance, ROR(LBP_{P,R}, l) denotes circularly rotating the P-bit number LBP_{P,R} clockwise by l bits, P is the number of sampling points, and R is the radius of the circular neighborhood;
the number of jumps U between 0 and 1 in the binary representation of a local binary pattern is defined as follows,
where g_0 is the gray value of pixel 0 in the circular neighborhood of the center pixel, g_{p−1} is that of pixel (p−1), and g_{P−1} is that of pixel (P−1);
when the cyclic binary code corresponding to a local binary pattern contains no more than 2 transitions between 0 and 1, the pattern is recorded as a "uniform pattern class", and all patterns other than the uniform pattern classes are grouped into one additional class, the "mixed pattern class"; combining the rotation-invariance and "uniform pattern" improvements, the rotation-invariant uniform local binary pattern is defined as in formula (10).
3. The texture-classification-based method for detecting cloud regions in a downward-looking infrared image according to claim 1, characterized in that, in the classification, a support vector machine is used as the classifier to judge the sub-image type; a linear support vector machine is defined as
y = g(w^T x + b)    (11)
where y ∈ {−1, 1} is the output class label, x is the input feature vector, w is the weight vector, b is the offset, and the threshold function g(·) outputs −1 (argument < 0) or 1 (argument > 0); the linear support vector machine determines the optimal separating hyperplane by maximizing the margin between classes; based on labeled training samples, the parameters w and b are obtained using Lagrange multipliers and the sequential minimal optimization algorithm;
to handle nonlinear classification problems, the linear support vector machine is generalized to the nonlinear case by introducing a kernel function, here the radial basis function kernel,
where σ is the standard-deviation parameter of the radial basis function kernel, x is the input feature vector, and x_z is the feature vector of the z-th support vector; in addition, to tolerate the interference of noise, slack variables ξ_z ≥ 0 are introduced into the basic support vector machine, and w, b and ξ_z are optimized and solved jointly.
CN201910148702.7A 2019-02-28 2019-02-28 Texture classification-based method for detecting cloud layer area in downward-looking infrared image Active CN109829511B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910148702.7A CN109829511B (en) 2019-02-28 2019-02-28 Texture classification-based method for detecting cloud layer area in downward-looking infrared image


Publications (2)

Publication Number Publication Date
CN109829511A (en) 2019-05-31
CN109829511B CN109829511B (en) 2022-04-08

Family

ID=66864721

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910148702.7A Active CN109829511B (en) 2019-02-28 2019-02-28 Texture classification-based method for detecting cloud layer area in downward-looking infrared image

Country Status (1)

Country Link
CN (1) CN109829511B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111753689A (en) * 2020-06-15 2020-10-09 珠海格力电器股份有限公司 Image texture feature extraction method and image identification method
CN114724188A (en) * 2022-05-23 2022-07-08 北京圣点云信息技术有限公司 Vein identification method and device based on gray level co-occurrence matrix

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120078099A1 (en) * 2010-04-20 2012-03-29 Suri Jasjit S Imaging Based Symptomatic Classification Using a Combination of Trace Transform, Fuzzy Technique and Multitude of Features
CN103034858A (en) * 2012-11-30 2013-04-10 宁波大学 Secondary clustering segmentation method for satellite cloud picture
CN107886490A (en) * 2018-01-14 2018-04-06 中国人民解放军国防科技大学 Offshore sea area azimuth ambiguity removing method based on double-temporal SAR image

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
XUEYING MA ET AL.: "Study of Enteromorpha Idendification Based on Machine Learning Technology", 2018 OCEANS-MTS/IEEE KOBE TECHNO-OCEANS (OTO) *
ZHANG CHI ET AL.: "Cloud Type Recognition Method Based on Visible-Infrared Image Information Fusion", JOURNAL OF METEOROLOGY AND ENVIRONMENT *
JING JUNFENG ET AL.: "Fabric Weave Structure Classification Based on Fused LBP and GLCM", JOURNAL OF ELECTRONIC MEASUREMENT AND INSTRUMENTATION *

Also Published As

Publication number Publication date
CN109829511B (en) 2022-04-08

Similar Documents

Publication Publication Date Title
Sun et al. Research on the hand gesture recognition based on deep learning
Ahmed et al. Salient segmentation based object detection and recognition using hybrid genetic transform
Mi et al. A port container code recognition algorithm under natural conditions
CN103049763B (en) Context-constraint-based target identification method
CN107633226B (en) Human body motion tracking feature processing method
CN105224937B (en) Fine granularity semanteme color pedestrian recognition methods again based on human part position constraint
CN111340824B (en) Image feature segmentation method based on data mining
CN107038416B (en) Pedestrian detection method based on binary image improved HOG characteristics
CN102722712A (en) Multiple-scale high-resolution image object detection method based on continuity
CN110070090A (en) A kind of logistic label information detecting method and system based on handwriting identification
CN107992818B (en) Method for detecting sea surface ship target by optical remote sensing image
CN110543906B (en) Automatic skin recognition method based on Mask R-CNN model
CN103246894B (en) A kind of ground cloud atlas recognition methods solving illumination-insensitive problem
CN110766016B (en) Code-spraying character recognition method based on probabilistic neural network
CN111539330B (en) Transformer substation digital display instrument identification method based on double-SVM multi-classifier
CN110826408B (en) Face recognition method by regional feature extraction
CN113888461A (en) Method, system and equipment for detecting defects of hardware parts based on deep learning
Chaki et al. Recognition of whole and deformed plant leaves using statistical shape features and neuro-fuzzy classifier
Guo et al. Real-time hand detection based on multi-stage HOG-SVM classifier
CN108596195A (en) A kind of scene recognition method based on sparse coding feature extraction
CN110223310A (en) A kind of line-structured light center line and cabinet edge detection method based on deep learning
CN103456017B (en) Image partition method based on the semi-supervised weight Kernel fuzzy clustering of subset
CN109829511A (en) Texture classification-based method for detecting cloud layer area in downward-looking infrared image
CN110188646B (en) Human ear identification method based on fusion of gradient direction histogram and local binary pattern
CN104200226B (en) Particle filter method for tracking target based on machine learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant