CN111310666A - High-resolution image ground feature identification and segmentation method based on texture features - Google Patents

High-resolution image ground feature identification and segmentation method based on texture features

Info

Publication number
CN111310666A
Authority
CN
China
Prior art keywords
feature
texture
matrix
features
image
Prior art date
Legal status
Granted
Application number
CN202010099370.0A
Other languages
Chinese (zh)
Other versions
CN111310666B (en)
Inventor
吴炜
高明
范菁
夏列钢
杨海平
陈婷婷
Current Assignee
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN202010099370.0A priority Critical patent/CN111310666B/en
Publication of CN111310666A publication Critical patent/CN111310666A/en
Application granted granted Critical
Publication of CN111310666B publication Critical patent/CN111310666B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/10 - Terrestrial scenes
    • G06V20/13 - Satellite images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G06N3/084 - Backpropagation, e.g. using gradient descent
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)

Abstract

A high-resolution image ground object identification and segmentation method based on texture features comprises the following steps: step 1, manufacturing a sample set according to a category system; step 2, constructing a deep learning network model, comprising: step 2.1, constructing a backbone network; step 2.2, constructing a texture feature extraction structure; step 2.3, constructing a feature matrix denoising structure; step 2.4, constructing an upsampling structure; step 3, training the deep learning network model; step 4, image prediction; and step 5, post-processing of the segmentation result. The invention uses a deep learning network framework; by resetting the downsampling factor of the framework and explicitly adding a texture information extraction structure, it reduces the information loss of small targets and improves the capacity to express texture information. A feature matrix denoising module is added to the deep network model, which reduces the extra noise introduced during computation, realizes pixel-by-pixel texture expression, and further improves the accuracy of the network model.

Description

High-resolution image ground feature identification and segmentation method based on texture features
Technical Field
The invention relates to ultrahigh-resolution image information extraction, and in particular to a method for identifying and segmenting ground objects in ultrahigh-resolution images acquired by an unmanned aerial vehicle or a satellite according to the texture characteristics of the ground objects.
Background
The rapid development of light and small unmanned aerial vehicle technology has made unmanned aerial vehicle remote sensing widely adopted. Compared with satellite remote sensing, an unmanned aerial vehicle flies at low altitude and is not affected by interfering factors such as cloud and cloud shadow; compared with traditional aerial remote sensing, the data acquisition cost is greatly reduced. Because its data acquisition is flexible and it can acquire ultrahigh-resolution images, the unmanned aerial vehicle is widely used in small-area applications such as agricultural loss investigation, area statistics and sample collection, and has become an important supplement to satellite and traditional aerial remote sensing.
Compared with medium- and low-resolution images, unmanned aerial vehicle images have fewer spectral bands, are often acquired with non-metric cameras, and lack fine radiometric correction. However, an unmanned aerial vehicle image has ultrahigh spatial resolution and can accurately represent the spatial distribution of pixels inside a ground object, namely the arrangement, combination and contrast of pixels of different colors, which forms unique texture features. Research on ground feature identification and segmentation based on texture features is therefore of great significance for the utilization of unmanned aerial vehicle images, and researchers at home and abroad have developed various methods, which mainly fall into three types: statistical methods, model-based methods and deep learning methods.
(1) Statistics-based texture feature representation methods: these methods represent texture by defining statistical indices over a local area. The gray-level co-occurrence matrix is a representative method; it first constructs a co-occurrence matrix by computing the co-occurrence relationship between the gray values of pixels in an image region and its neighborhood, and on this basis defines a series of derived indices such as entropy and moments to describe the texture features of the region. However, the various metrics of the gray-level co-occurrence matrix are sensitive only to certain specific textures and may lack discriminative power for other types of texture; meanwhile, how the neighborhood and its gray-level co-occurrence relationship are defined also affects the performance of the algorithm.
(2) Model-based texture feature representation methods: these methods first model the distribution of pixels in a certain way, turning texture feature extraction into a model parameter estimation problem; typical methods are random field models and the visual bag-of-words model.
Random field models describe the statistical dependence between a pixel and its neighboring pixels through probabilistic models; for example, the Markov random field model describes texture by assuming that any pixel is related only to its neighboring pixels (Kenduiywo B K, Bargiel D, Soergel U. Higher Order Dynamic Conditional Random Fields Ensemble for Crop Type Classification in Radar Images [J]. IEEE Transactions on Geoscience and Remote Sensing, 2017, 55(8): 4638-). The problems of these methods are that different models perform differently on different texture characteristics, the model parameters are complex, and various optimization algorithms are required, which brings uncertainty to the approach.
The texture feature representation of the visual bag-of-words model comprises three main steps: feature extraction, feature clustering and feature coding. Feature extraction extracts a series of feature points from the image and then computes a high-dimensional feature for each point, commonly using local feature descriptors such as SIFT or SURF; feature clustering performs unsupervised clustering of the feature points and takes the cluster centers as feature codes, thus constructing a visual dictionary; feature coding computes the response of the current feature on each visual word, yielding a feature vector representation of the local region that describes its texture.
Although the statistics-based and model-based methods have obtained good results in applications, they share a problem: the feature extraction schemes are all preset and cannot be adjusted according to the texture content of the image; they are window-based descriptors mainly applied to image classification, and cannot achieve pixel-level segmentation accuracy for ground features.
(3) Deep-learning-based texture representation methods: deep learning methods based on convolutional neural networks have achieved great success in fields such as computer vision. They first use a deep convolutional network to extract features from the image, capturing information such as color, structure and local correlation through stacked convolutional layers, followed by pooling in a cascaded manner. The features obtained through convolution, however, preserve the spatial arrangement among pixels within the convolution window and are effective for characteristics such as spatial structure. Texture, in contrast, reflects the arrangement and combination of local gray levels or colors within a certain area and their repetition in space, rather than the arrangement of local features, so conventional convolutional feature extraction is not well suited to texture expression.
To address the insufficiency of convolutional features for texture expression, the features extracted by a CNN can be quantized with methods such as VLAD to obtain a statistical description of the local feature distribution, thereby representing texture. However, in such methods feature extraction and feature quantization are two separately performed steps and cannot be optimized jointly. For this reason, Deep TEN introduces an encoding layer that learns visual words from samples, obtains the distribution of each feature over the visual words, and replaces concatenation with aggregation, achieving an efficient description of texture features and good results on texture image classification (Zhang H, Xue J, Dana K. Deep TEN: Texture Encoding Network [C] // Proceedings of the 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017. 2017: 2896-). However, this method mainly addresses the classification of texture-rich targets; it cannot obtain a pixel-by-pixel texture expression or segment target objects according to the different texture characteristics of each ground object. To address this, EncNet proposes a new network structure: on top of the features extracted by a conventional CNN, an encoding layer is added to make the ordered features orderless, thereby capturing the local context, enabling pixel-level segmentation, and obtaining the best segmentation results on several segmentation tasks (Zhang H, Dana K, Shi J, et al. Context Encoding for Semantic Segmentation [C] // Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. 2018: 7151-).
The algorithm based on deep learning continuously and iteratively learns the texture information in the image data in an end-to-end mode, adjusts the model parameters and has strong adaptability. The method makes a great breakthrough in the fields of image classification, image segmentation, target tracking and the like. Therefore, the feature identification and segmentation of the ultrahigh-resolution image by using the texture representation method based on the deep learning have important application potential, but the current deep learning network also has the following problems:
(1) In order to grasp the global image information, the network performs several image downsampling operations; after multiple downsamplings, an object with a small area may occupy only one pixel in the feature map. The resulting information loss has little effect on a classification task, but its effect on the segmentation of small targets is often not negligible;
(2) Because many pixels with different characteristics exist within the same ground object in an ultrahigh-resolution image, after downsampling in a deep network the texture information of adjacent targets interferes with the current point during local convolution, bringing extra noise to the computed result.
In order to solve the above problems, the invention provides a deep learning framework that uses the texture features of ground objects to identify and segment them on ultrahigh-resolution images such as those acquired by unmanned aerial vehicles.
Disclosure of Invention
In order to overcome the defects in the prior art, the invention provides an unmanned aerial vehicle image ground object identification and segmentation method based on texture features. The method is based on a deep convolutional network model: it effectively improves the capture of target texture information by improving how pixels in the image perceive global information; it improves the sensitivity of small-target detection by reducing the downsampling factor in the network; and it removes the noise introduced by convolution computed after downsampling by adding a feature matrix denoising structure, obtaining a smooth segmentation result.
After the images of the research area are acquired, the data are divided into image blocks of size H × W according to the capacity of the experimental equipment and the required efficiency. In order to reduce the loss of edge-area information caused by partitioning the image, adjacent blocks overlap by a certain number of pixels; the number O of overlapping pixels is determined according to the required accuracy and efficiency, and the image is then partitioned accordingly.
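As an illustration only, the overlapped block partitioning described above can be sketched as follows. This is a minimal NumPy sketch; the tile size, the overlap, and the assumption that the image is at least one tile large are choices made for the example, not part of the invention.

```python
import numpy as np

def tile_image(img, tile_h=480, tile_w=480, overlap=50):
    """Split an image array (H, W, C) into overlapping H x W tiles.

    Returns ((top, left), tile) pairs; tiles at the right/bottom border are
    shifted inwards so every tile has the full size (the image is assumed
    to be at least tile_h x tile_w).
    """
    h, w = img.shape[:2]
    stride_h, stride_w = tile_h - overlap, tile_w - overlap
    tops = list(range(0, max(h - tile_h, 0) + 1, stride_h))
    lefts = list(range(0, max(w - tile_w, 0) + 1, stride_w))
    if tops[-1] != h - tile_h:
        tops.append(h - tile_h)      # cover the bottom edge
    if lefts[-1] != w - tile_w:
        lefts.append(w - tile_w)     # cover the right edge
    return [((t, l), img[t:t + tile_h, l:l + tile_w]) for t in tops for l in lefts]
```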
The invention discloses a high-resolution image ground feature identification and segmentation method based on texture features, which comprises the following steps of:
step 1, manufacturing a sample set according to a category system;
According to the research objective, a ground object classification system of the research area is determined, assuming the research area contains n different classes CL:
CL = {cl_1, cl_2, ..., cl_n}    (1)
A sample set is made according to the category system; the samples include positive samples of all categories.
Each sample represents the region of a ground object of a given type as a polygon, and uses cl_i ∈ CL to identify its category.
The number of samples needs to meet the training requirement; if the number is insufficient, sample enhancement is performed to increase it.
Step 2, constructing a deep learning network model;
the network model is divided into four parts: the first part is a backbone network and is used for extracting basic features of the image; the second part is a texture feature extraction structure; the third part is a characteristic matrix denoising structure; and the fourth part is an up-sampling structure, and the de-noised characteristic matrix is up-sampled to the size of the original image to obtain the image category and the segmentation result thereof.
Step 2.1, constructing a backbone network;
the partial network extracts the basic characteristics of the image and constructs a backbone network based on ResNet.
ResNet is composed of five convolution modules, a convolution kernel with the step length of 2 is used in the first convolution module, and the size of the output characteristic image is 1/2 of the original image; in the second module, a pooling layer with a step size of 2 is used, and the size of the output feature map is 1/4 of the original image; the convolution kernel with the step size of 2 is used in all of the third to fifth modules, and the final output feature size is 1/32 of the original image.
It can be seen that: the size of the ResNet output feature map is 1/32 of the original image, so that the texture information of the small ground object is seriously lost. In contrast, the method cancels the ResNet last convolution module, uses the dilation convolution in the third and fourth convolution modules, and can reduce the downsampling multiple and maximally reserve the texture characteristics of the small target under the condition of ensuring the reception field of the convolution kernel.
Acquiring a basic feature matrix F of the image by using the improved ResNet backbone network:
F={f1,f2,...,fC} (2)
where C represents the dimension of the feature, and the size of the feature is denoted as H '× W'.
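For reference, the backbone modification of step 2.1 (dropping the last ResNet stage and dilating a later stage so that the output stride is reduced to 8) could be sketched with PyTorch/torchvision as below. The choice of ResNet-101 and the exact dilation placement are assumptions made for illustration; the patent itself specifies dilation in the third and fourth modules.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet101

class DilatedBackbone(nn.Module):
    """ResNet-based feature extractor with reduced downsampling (output stride 8)."""
    def __init__(self):
        super().__init__()
        # replace_stride_with_dilation converts the stride-2 convolutions of the
        # later stages into dilated convolutions, so spatial resolution is preserved
        net = resnet101(weights=None, replace_stride_with_dilation=[False, True, True])
        self.stem = nn.Sequential(net.conv1, net.bn1, net.relu, net.maxpool)
        self.layer1, self.layer2, self.layer3 = net.layer1, net.layer2, net.layer3
        # the last convolution module (layer4) is dropped, as in step 2.1

    def forward(self, x):
        x = self.stem(x)      # 1/4 of the input size
        x = self.layer1(x)    # 1/4
        x = self.layer2(x)    # 1/8
        x = self.layer3(x)    # dilated, stays at 1/8
        return x              # basic feature matrix F: (B, 1024, H/8, W/8)

# feat = DilatedBackbone()(torch.randn(1, 3, 480, 480))  # -> shape (1, 1024, 60, 60)
```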
Step 2.2, constructing a texture feature extraction structure;
the partial network extracts texture features.
In the feature matrix F obtained in step 2.1, a C-dimensional feature value is set at any position, the feature value is obtained by convolving pixels in a local area at the position with different convolution kernels, and can be regarded as a feature vector of the point, so that the feature matrix F is mapped into t C-dimensional feature vectors:
X={x1,x2,...,xt} (3)
the texture is represented by generalizing the concept of the word bag model.
Suppose there is a dictionary D containing K codewords of textural features:
D={d1,d2,...,dK} (4)
wherein the code word dkAnd the feature vector xiHave the same dimensions. The feature dictionary is used for the dictionary from xiAnd learning typical texture center features.
The dictionary in the traditional method is kept unchanged after being built, and cannot be learned and adjusted from data. Different from the traditional method, the dictionary D is embedded into a deep learning model, and different texture features are learned and adjusted in a supervised learning mode, so that the expression capacity of the dictionary on the texture features is optimized.
The dictionary D is initialized with uniformly distributed random values over a fixed distribution interval.
Next, a relation model e_ik between the codeword d_k and the feature vector x_i is constructed, so that the dictionary D can iteratively adjust its expression of texture features through gradient propagation during backpropagation.
Because of possible ambiguity between individual codewords, hard assignment cannot be used to model the relationship between features and codewords. This problem is solved by soft assignment, which sets a weighting coefficient for each codeword d_k. Since multi-class segmentation is involved and the feature data X contains feature information of several classes, following the idea of a Gaussian mixture model a smoothing factor is set for each codeword d_k ∈ D:
S = {s_1, s_2, ..., s_K}    (5)
where s_k is the smoothing factor for the degree to which a feature vector x_i is assigned to codeword d_k. S is initialized with uniformly distributed random values over a fixed distribution interval.
Both the dictionary and the smoothing factors are iteratively refined through backpropagation to obtain optimal parameters.
On this basis, the weighting coefficients α_ik between different features and different codewords are calculated:
α_ik = exp(-s_k * ||r_ik||^2) / Σ_{j=1}^{K} exp(-s_j * ||r_ij||^2)    (6)
where r_ik is the residual between the input feature x_i and the dictionary codeword d_k:
r_ik = x_i - d_k    (7)
The texture feature capturing structure obtains the relation model result e_ik through weighted soft assignment:
e_ik = α_ik * r_ik    (8)
Here e_ik is regarded as the description of feature x_i by codeword d_k. By aggregating e_ik over all input features X, the full description of the feature matrix by codeword d_k is obtained:
e_k = Σ_{i=1}^{t} e_ik    (9)
and for the disordered repeated existence of texture elements in the texture features on the image, the spatial arrangement information of the features is ignored in an aggregation mode, and the capture capability of the texture feature distribution information is improved.
And sequentially calculating the description of each code word on the input features X, and further obtaining all unordered descriptions E of the texture feature dictionary D on the input features X:
E={e1,e2,...,eK} (10)
wherein the dimension of E is K × C.
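The soft-assignment encoding of equations (4) to (10) follows the Deep TEN encoding-layer idea; a minimal PyTorch sketch is given below. The initialization bounds and the tensor layout are illustrative assumptions rather than the patent's exact settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as nnf

class TextureEncoding(nn.Module):
    """Learnable dictionary D of K codewords with smoothing factors S (eqs. (4)-(10))."""
    def __init__(self, channels: int, num_codewords: int):
        super().__init__()
        # uniform random initialization; the bounds here are illustrative assumptions
        self.codewords = nn.Parameter(
            torch.empty(num_codewords, channels).uniform_(-1, 1) / (num_codewords * channels) ** 0.5)
        self.smoothing = nn.Parameter(torch.empty(num_codewords).uniform_(-1, 0))

    def forward(self, feat):                                  # feat: (B, C, H', W')
        b, c, h, w = feat.shape
        x = feat.view(b, c, h * w).permute(0, 2, 1)           # (B, t, C), t = H' * W'
        r = x.unsqueeze(2) - self.codewords.view(1, 1, -1, c)    # residuals r_ik, (B, t, K, C)
        # eq. (6): soft-assignment weights from smoothed squared residual norms
        alpha = nnf.softmax(-self.smoothing * r.pow(2).sum(dim=-1), dim=2)   # (B, t, K)
        # eqs. (8)-(10): aggregate the weighted residuals over all positions
        return (alpha.unsqueeze(-1) * r).sum(dim=1)           # E: (B, K, C)
```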
Next, the description information E obtained from the texture dictionary is injected into the basic feature matrix. Specifically, following the treatment of channel information in SE-Net, the adjustment coefficient of each feature channel is obtained automatically by learning: global pooling over the first dimension of E yields, for each channel feature map f_i, its response to the texture feature dictionary, and this value is taken as the recalibration coefficient Z of the feature matrix:
Z = (z_1, z_2, ..., z_C)    (11)
The feature matrix F_1 recalibrated according to the texture information is then computed:
F_1 = F * Z    (12)
where * denotes channel-wise multiplication.
This step yields a feature matrix F_1 recalibrated according to the extracted texture feature information; the features contain not only the information within the convolution kernels but also texture information, and thus describe the texture.
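The channel recalibration of equations (11) and (12), an SE-Net-style gating driven by the texture description E, could be sketched as follows. The small fully connected gate with a sigmoid is an assumption; the patent only specifies that Z is learned from a global pooling of E.

```python
import torch
import torch.nn as nn

class TextureRecalibration(nn.Module):
    """Per-channel coefficients Z from the texture description E, then F1 = F * Z (eqs. (11)-(12))."""
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(channels, channels), nn.Sigmoid())

    def forward(self, feat, E):                     # feat: (B, C, H', W'), E: (B, K, C)
        z = self.gate(E.mean(dim=1))                # global pooling over the codeword dimension -> (B, C)
        return feat * z.view(z.size(0), -1, 1, 1)   # channel-wise recalibration, F1 = F * Z
```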
Step 2.3, constructing a feature matrix denoising structure;
the partial network is a feature matrix denoising structure.
In the deep convolutional network model, after multiple downsampling, the range of the feature value in the original image is large, which may cause interference of different texture feature information in the region and form noise. At this time, the feature matrix F is obtained1And denoising to obtain a more accurate result of the ground object segmentation.
In the image denoising process containing more repeated texture primitives, the non-local mean denoising method has a good effect. Compared with the denoising method of the smoothing filter, the non-local mean denoising method reconstructs pixel points by calculating the local feature similarity between two points, can better reserve the texture details of an image and avoids the fuzzy texture features brought by local smoothing.
According to the idea, a feature matrix reconstruction type denoising structure in deep learning is designed.
Step 2.2 obtaining a feature matrix F containing texture information1And a texture feature dictionary D. And the dictionary D effectively learns the texture features of the corresponding category by using a supervised learning mode. And (5) reconstructing a feature matrix by using the dictionary D, and highlighting texture feature information required by the algorithm.
First, the similarity between each feature vector in the feature matrix F_1 and each codeword is calculated. The similarity between vectors can be obtained by cosine similarity; in deep learning, the cosine similarity is approximated by the dot-product similarity to keep the computation efficient. To keep the similarity measure accurate, the vectors are normalized before the dot product is computed. Following the rules of matrix computation, F_1 is transposed to obtain F_1^T, which is then matrix-multiplied with D, and the similarity matrix W is obtained through a softmax function:
W = softmax(F_1^T · D^T)    (13)
The softmax function in the above equation maps the similarities into the (0, 1) interval and makes the similarities of the K codewords to any one feature x_i sum to 1, i.e. Σ_{k=1}^{K} w_ik = 1. Then, taking the similarities as weights, matrix multiplication yields the reconstructed feature matrix F_2:
F_2 = D^T W^T    (14)
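Under one reading of equations (13) and (14), the reconstruction can be sketched as below; the normalization step and the transpose conventions are assumptions made to keep the tensor shapes consistent, not the patent's exact formulation.

```python
import torch
import torch.nn.functional as nnf

def denoise_features(F1: torch.Tensor, D: torch.Tensor) -> torch.Tensor:
    """Reconstruct F1 from the texture dictionary D (eqs. (13)-(14)).

    F1: (B, C, H', W') recalibrated feature matrix
    D:  (K, C) texture feature dictionary
    """
    b, c, h, w = F1.shape
    x = F1.view(b, c, h * w).permute(0, 2, 1)            # (B, t, C)
    # normalize so the dot product approximates cosine similarity
    sim = nnf.normalize(x, dim=-1) @ nnf.normalize(D, dim=-1).t()   # (B, t, K)
    W = nnf.softmax(sim, dim=-1)                         # each row sums to 1 over the K codewords
    F2 = (W @ D).permute(0, 2, 1).reshape(b, c, h, w)    # reconstructed feature matrix F2
    return F2
```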
Step 2.4, constructing an upsampling structure;
the fourth part is an upsampling structure.
First, the feature matrices F_1 and F_2 computed in steps 2.2 and 2.3 are combined using weight parameters to obtain the final feature matrix G:
G = w_f1 * F_1 + w_f2 * F_2    (15)
where w_f1 and w_f2 are learnable weights; using the backpropagation-based parameter adjustment of the deep learning network model, the network automatically adjusts how the feature information is combined and obtains accurate estimates of these parameters.
Then G is channel-compressed according to the number of classes n, explicitly setting up a structure in which the C channels are mapped to the n class-prediction channels; finally, bilinear interpolation upsamples the result to the original image size.
This step produces a feature matrix of the same size as the original image that not only contains the convolutional and texture features but has also been denoised, better highlighting the texture feature information and removing the noise interference that convolution after downsampling may introduce.
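A sketch of the fusion and upsampling head of equation (15) and the subsequent channel compression follows; the learnable scalar weights and the 1 x 1 compression convolution are assumptions consistent with the description, not a definitive implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as nnf

class FusionHead(nn.Module):
    """G = w1 * F1 + w2 * F2 (eq. (15)), channel compression to n classes, bilinear upsampling."""
    def __init__(self, channels: int, num_classes: int):
        super().__init__()
        self.w1 = nn.Parameter(torch.tensor(1.0))        # learnable fusion weights
        self.w2 = nn.Parameter(torch.tensor(1.0))
        self.classifier = nn.Conv2d(channels, num_classes, kernel_size=1)

    def forward(self, F1, F2, out_size):
        G = self.w1 * F1 + self.w2 * F2
        logits = self.classifier(G)                      # C channels -> n class channels
        return nnf.interpolate(logits, size=out_size, mode='bilinear', align_corners=False)
```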
Step 3, deep learning network model training;
and (3) using the sample set and the label set which are manufactured in the step (1) as the input of the network model which is constructed in the step (2), setting the hyper-parameter of the network model, training the model through a gradient descent algorithm and obtaining a stable result.
Step 4, image prediction;
The data set is predicted to obtain the probability P_ij that pixel j belongs to type i, and the type with the maximum probability is taken as the type T_j of the pixel:
T_j = argmax_i(P_ij)    (16)
Since pixels in the overlap regions receive several different segmentation results, these are combined by majority voting.
Processing all the images gives the type of every pixel, realizing the identification and segmentation of ground object types; connected ground objects of the same type form the minimum texture primitives.
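Step 4 can be illustrated with the following sketch: per-tile class maps (the arg max of P_ij, equation (16)) are merged into a full-image prediction by majority voting over overlapping tiles. The accumulation scheme shown here is an assumption made for the example.

```python
import numpy as np

def merge_predictions(tile_labels, tile_origins, image_shape, num_classes):
    """Combine per-tile class maps into a full-image prediction by majority voting.

    tile_labels:  list of (h, w) arrays of class indices (arg max of P_ij per tile)
    tile_origins: list of (top, left) positions of each tile in the full image
    """
    votes = np.zeros((num_classes,) + tuple(image_shape), dtype=np.int32)
    for labels, (top, left) in zip(tile_labels, tile_origins):
        h, w = labels.shape
        for cls in range(num_classes):
            votes[cls, top:top + h, left:left + w] += (labels == cls)
    return votes.argmax(axis=0)      # type T_j: the class with the most votes per pixel
```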
Step 5, segmentation result post-processing
Connected regions formed by pixels of the same type are counted, and each connected region in the prediction result is checked against a threshold e_Con. If a connected region is smaller than the threshold, it is expanded by a certain amount; if it touches other regions during the expansion, the network is considered to have predicted the category of this connected region incorrectly, and its category is changed to that of the first other connected region it touches. If the expansion touches no other region, the connected region is regarded as an isolated noise point and removed.
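Step 5 can be sketched with connected-component analysis as below; the use of scipy.ndimage, the choice of the most frequent neighbouring class, and marking removed noise as class 0 are illustrative assumptions rather than the patent's exact procedure.

```python
import numpy as np
from scipy import ndimage

def postprocess(pred, min_size, dilate_iter=3):
    """Relabel or remove connected regions smaller than min_size pixels."""
    out = pred.copy()
    for cls in np.unique(pred):
        labeled, num = ndimage.label(pred == cls)
        for region_id in range(1, num + 1):
            mask = labeled == region_id
            if mask.sum() >= min_size:
                continue
            # expand the small region and look at the classes it touches
            ring = ndimage.binary_dilation(mask, iterations=dilate_iter) & ~mask
            neighbours = out[ring]
            neighbours = neighbours[neighbours != cls]
            if neighbours.size:                   # touched another region: adopt its class
                out[mask] = np.bincount(neighbours).argmax()
            else:                                 # isolated noise: remove (assumed background class 0)
                out[mask] = 0
    return out
```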
The advantages of the scheme adopted by the invention are:
(1) The invention uses a deep learning network framework; by resetting the downsampling factor of the framework and explicitly adding a texture information extraction structure, it reduces the information loss of small targets and improves the capacity to express texture information.
(2) A feature matrix denoising module is added to the deep network model, which reduces the extra noise introduced during computation, realizes pixel-by-pixel texture expression, and further improves the accuracy of the network model.
Drawings
FIG. 1 is a flow chart of the present invention.
Fig. 2 is a distribution diagram of shot points when the unmanned aerial vehicle image is acquired in the embodiment of the invention.
FIG. 3 is a category hierarchy diagram in an embodiment of the present invention.
FIG. 4 is a deep learning network first part: backbone network ResNet schematic.
FIG. 5 is a second part of the deep learning network: the texture feature captures the structural schematic.
FIG. 6 is a third part of the deep learning network: and (5) a characteristic matrix denoising structure schematic diagram.
Fig. 7 shows an image case and a sample label thereof according to the present invention. FIG. 7(a) is an exemplary image of a sample set in accordance with an embodiment of the present invention; FIG. 7(b) is an exemplary diagram of a sample set image tag in an embodiment of the present invention; FIG. 7(c) is a segmentation result of the method herein; FIG. 7(d) is a post-treatment result of the method herein.
Detailed Description
To further clarify the disclosure of the present invention, the detailed description is given below with reference to the flow chart of Fig. 1 and an embodiment of the invention. It should be understood that not all features of an actual implementation are identical to this embodiment, and that the details of implementing the invention may vary with the actual conditions and objectives of an engineering project. In addition, although this example addresses the texture-based ground feature segmentation of unmanned aerial vehicle remote sensing images, it should be understood that the method may also be applied to the understanding and segmentation of features in other images.
The research area of this embodiment is Longyou County, Quzhou City, Zhejiang Province (as shown in Fig. 2). The unmanned aerial vehicle used to acquire the images is a DJI PHANTOM 4 PRO with a 1-inch CMOS sensor; the spatial resolution of the acquired images is about 6.25 cm, and the image size is 5472 × 3682. In total 86 image scenes were acquired, covering a research area of about 1.38 km².
The classification is determined according to the research target. Because the texture features of the same ground object differ considerably between growth periods, this embodiment treats the same ground object in different periods as different types; the classification system of the research area is divided into 6 major classes and 24 minor classes. Typical textures of the classes are shown in Fig. 7; the texture features of different classes differ greatly and can be distinguished accurately.
Given the limited computing power of the equipment, the image needs to be cropped into image blocks of 480 × 480 pixels. In order to reduce the gaps produced when the predictions of the divided images are recombined, the blocks overlap by 50 pixels both horizontally and vertically.
With the above method, 8256 image blocks with a resolution of 480 × 480 pixels were obtained as the experimental data set.
A high-resolution image ground object identification and segmentation method based on texture features comprises the following steps:
step 1, manufacturing a sample set according to a category system;
LabelMe software was used to make label images of the sample set according to the determined study area classification system, as shown in figure 3.
In order to ensure the coverage degree of the sample set on the class textural feature information, firstly, positive samples containing all researched classes and negative samples of other classes are manually selected as the basis of the sample set, and then the remaining data sets are randomly extracted to obtain 2500 sample sets in total.
Step 2, constructing a deep learning network model;
the network model is divided into four parts. The first part is a backbone network and is used for extracting basic features of the image; the second part is a texture feature extraction structure used for extracting the texture features existing in the feature matrix; the third part is a characteristic matrix denoising structure used for denoising the characteristic matrix; and the fourth part is an up-sampling structure, and the feature matrix combined after de-noising is up-sampled to the size of the original image to obtain an image prediction segmentation result.
Step 2.1, constructing a backbone network;
this configuration is shown in fig. 4. And constructing a backbone network based on ResNet, and selecting ResNet101 as the backbone network for extracting the basic characteristics of the image according to the performance and efficiency requirements of the computing equipment.
The structure eliminates the ResNet last convolution module, and uses dilation convolution in the third and fourth convolution modules, so as to reduce down-sampling times under the condition of ensuring convolution kernel receptive field. And setting the expansion convolution parameter of the third convolution module to be 2 and the step size to be 2, and setting the expansion convolution parameter of the fourth convolution module to be 4 and the step size to be 1. The finally obtained basic feature matrix F is 8 times of the original image downsampling size, the number of channels C is 1024, and the shape is 1024 × 60 × 60.
Step 2.2, constructing a texture feature capturing structure;
the second part is a texture feature extraction structure, as shown in fig. 5. Mapping 1 the basic feature matrix F extracted in step 2.1, as shown in fig. 5, part 511, to 360 feature vectors with length 1024.
Given a texture feature dictionary D, as shown in FIG. 5, part 512, the dictionary contains 32 codewords DiAnd a smoothing factor s of 32 codewordsi
Calculate e using equation 9kThe texture feature extraction structure soft-assigns a description E of each codeword in the aggregated dictionary for the input features in the C dimension by weight, as shown in part 513 in fig. 4.
The texture information obtained using the texture feature extraction structure is globally pooled in a first dimension, and the scaling factor Z is obtained, as shown in 514 of fig. 4.
The adjustment of the basic feature matrix according to the texture features shown in equation 12 is used to obtain a new feature matrix F1As shown in fig. 4 at 515.
Step 2.3, constructing a feature matrix denoising structure;
the third part is a feature matrix denoising structure, as shown in fig. 5.
For F generated in step 2.21As in 611 of fig. 6, and the texture dictionary D generated in step 2.2, as in 612 of fig. 6, are calculated using equation 13The similarity between each codeword and each feature vector. Then, the feature matrix F after denoising is obtained by using the formula 142As shown in part 613 of fig. 6.
Step 2.4, constructing an upsampling structure;
the fourth part is an upsampling structure, and firstly, the feature matrices calculated in the step 2.2 and the step 2.3 are connected by using the weight parameters, and a final feature matrix G is obtained by using a formula 15.
The characteristics of back propagation adjustment parameters of the deep learning network model are utilized, so that the network automatically adjusts the combination of the characteristic information to obtain a result with higher precision.
G is then compressed to 24 (total number of classes) channels. The displayed configuration sets the structure of each channel predicted for each category. Then, bilinear interpolation is carried out to up-sample the original image size to 480 × 480.
Step 3, deep learning network model training;
and (3) using the sample set and the label set manufactured in the step (1) as the input of the network model constructed in the step (2), and setting hyper-parameters: the learning rate was 0.001, total batch 100, each batch size 16, momentum 0.9, weight decay 0.0001.
Predicting a result through a set model frame, then calculating loss values of an actual label and the predicted result according to a loss function, then using a reverse gradient propagation algorithm, and adjusting model parameters according to a learning rate. And continuously iteratively training the deep network model until the result obtained by the loss function tends to be stable, and the network is converged at the moment, so that the accuracy of the model in the sample set is calculated. And adjusting the hyper-parameter by using a general method according to different precision requirements to obtain the current optimal model.
Step 4, image prediction;
predicting all manufactured data sets, and reserving the probability p of the type i predicted by each pixel point jij
According to the overlapped cutting, the point is predicted repeatedly at most four times, and the result with the largest occurrence number is used as the final segmentation result.
FIG. 7(a) is an input image of the present embodiment; FIG. 7(b) is a label of an input image in an embodiment of the present invention; fig. 7(c) shows the segmentation result of the present embodiment.
Step 5, post-processing results;
the minimum texel size is 5 x 5 (pixels) calculated from the class hierarchy.
Expanding the area with the connected region smaller than the size by 1.5 times, and determining the point as an isolated noise point to be erased if other connected regions are not encountered in the expansion process, as shown in 710 and 711 in fig. 7 (c); if other connected regions are contacted, the point is changed to the category of the connected region contacted first, as shown in 712 of FIG. 7(c) and 713 of FIG. 7 (d).
The embodiments described in this specification are merely illustrative of implementations of the inventive concept and the scope of the present invention should not be considered limited to the specific forms set forth in the embodiments but rather by the equivalents thereof as may occur to those skilled in the art upon consideration of the present inventive concept.

Claims (1)

1. A high-resolution image ground object identification and segmentation method based on texture features comprises the following steps:
step 1, manufacturing a sample set according to a category system;
according to the research objective, a ground object classification system of the research area is determined, assuming the research area contains n different classes CL:
CL = {cl_1, cl_2, ..., cl_n}    (1)
a sample set is made according to the category system, and the samples include positive samples of all categories;
each sample represents the region of a ground object of a given type as a polygon, and uses cl_i ∈ CL to identify its category;
the number of samples needs to meet the training requirement; if the number is insufficient, sample enhancement is performed to increase it;
step 2, constructing a deep learning network model;
the network model is divided into four parts: the first part is a backbone network and is used for extracting basic features of the image; the second part is a texture feature extraction structure; the third part is a characteristic matrix denoising structure; the fourth part is an up-sampling structure, the denoised feature matrix is up-sampled to the size of the original image, and the image category and the segmentation result thereof are obtained;
step 2.1, constructing a backbone network;
extracting basic features of the image, and constructing a backbone network based on ResNet;
ResNet is composed of five convolution modules; the first module uses a convolution kernel with stride 2, so the output feature map is 1/2 of the original image size; the second module uses a pooling layer with stride 2, giving 1/4 of the original size; the third to fifth modules each use convolution kernels with stride 2, so the final output feature map is 1/32 of the original image size;
since the ResNet output feature map is 1/32 of the original image size, texture information of small ground objects is severely lost; therefore, the method removes the last ResNet convolution module and uses dilated convolutions in the third and fourth modules, which reduces the amount of downsampling and preserves the texture features of small targets as much as possible while keeping the receptive field of the convolution kernels;
the basic feature matrix F of the image is acquired with the improved ResNet backbone network:
F = {f_1, f_2, ..., f_C}    (2)
where C represents the feature dimension, and the spatial size of the feature map is denoted H' × W';
step 2.2, constructing a texture feature extraction structure;
in the feature matrix F obtained in step 2.1, any position holds a C-dimensional feature value, obtained by convolving the pixels in the local area around that position with different convolution kernels; it can be regarded as the feature vector of that point, so the feature matrix F is mapped into t C-dimensional feature vectors:
X = {x_1, x_2, ..., x_t}    (3)
the texture is represented by generalizing the idea of the bag-of-words model;
suppose there is a dictionary D containing K texture feature codewords:
D = {d_1, d_2, ..., d_K}    (4)
where the codeword d_k and the feature vector x_i have the same dimension; the feature dictionary is used to learn typical texture center features from the x_i;
in the traditional method, a dictionary is kept unchanged after being constructed, and cannot be learned and adjusted from data; different from the traditional method, the dictionary D is embedded into a deep learning model, and different texture features are learned and adjusted in a supervised learning mode, so that the expression capability of the dictionary on the texture features is optimized;
the initialization of the dictionary D uses uniformly distributed random initialization, and the distribution interval is
Figure FDA0002386416880000021
Next, a codeword d is constructedkAnd the feature vector xiA relation model e betweenikThe dictionary D can iteratively adjust the expression capacity of the texture features through gradient propagation during backward propagation;
because ambiguity may exist between each code word, a relationship model between the features and the code words cannot be constructed by using a hard allocation mode; by applying a code to each code word d in soft allocationkSetting a weight coefficient to solve the problem; because multi-class segmentation is involved and the feature data X contains feature information of a plurality of classes, each code word d is divided according to the thought of a Gaussian mixture modelkE D sets a smoothing factor:
S={s1,s2,...,sK} (5)
sirepresenting a feature vector xiIs attributed to codeword dkS is initialized by uniformly distributed random initialization, and the distribution interval is
Figure FDA0002386416880000022
both the dictionary and the smoothing factors are iteratively refined through backpropagation to obtain optimal parameters;
on this basis, the weighting coefficients α_ik between different features and different codewords are calculated:
α_ik = exp(-s_k * ||r_ik||^2) / Σ_{j=1}^{K} exp(-s_j * ||r_ij||^2)    (6)
where r_ik is the residual between the input feature x_i and the dictionary codeword d_k:
r_ik = x_i - d_k    (7)
the texture feature capturing structure obtains the relation model result e_ik through weighted soft assignment:
e_ik = α_ik * r_ik    (8)
here e_ik is regarded as the description of feature x_i by codeword d_k; by aggregating e_ik over all input features X, the full description of the feature matrix by codeword d_k is obtained:
e_k = Σ_{i=1}^{t} e_ik    (9)
since texture elements appear repeatedly and without fixed order in the image, aggregation ignores the spatial arrangement of the features, improving the ability to capture the distribution of texture features;
the description of the input features X by each codeword is computed in turn, giving the complete orderless description E of the input features X by the texture feature dictionary D:
E = {e_1, e_2, ..., e_K}    (10)
where the dimension of E is K × C;
next, the description information E of the texture dictionary is injected into the basic feature matrix; specifically, following the treatment of channel information in SE-Net, the adjustment coefficient of each feature channel is obtained automatically by learning: global pooling over the first dimension of E yields, for each channel feature map f_i, its response to the texture feature dictionary, and this value is taken as the recalibration coefficient Z of the feature matrix:
Z = (z_1, z_2, ..., z_C)    (11)
the feature matrix F_1 recalibrated according to the texture information is then computed:
F_1 = F * Z    (12)
where * denotes channel-wise multiplication;
this step yields a feature matrix F_1 recalibrated according to the extracted texture feature information; the features contain not only the information within the convolution kernels but also texture information, thereby describing the texture;
step 2.3, constructing a feature matrix denoising structure;
in the deep convolutional network model, after multiple downsampling each feature value corresponds to a large area of the original image, which may cause interference between different texture feature information within that region and form noise; the feature matrix F_1 is therefore denoised to obtain a more accurate ground object segmentation result;
for denoising images that contain many repeated texture primitives, non-local means denoising works well; compared with smoothing-filter denoising, non-local means reconstructs a pixel by computing the local feature similarity between pairs of points, better preserving the texture details of the image and avoiding the blurring of texture features caused by local smoothing;
according to this idea, a feature-matrix-reconstruction denoising structure is designed in the deep learning model;
step 2.2 produced a feature matrix F_1 containing texture information and a texture feature dictionary D; the dictionary D has effectively learned the texture features of the corresponding categories through supervised learning; the feature matrix is therefore reconstructed using the dictionary D to highlight the texture feature information required by the algorithm;
first, the similarity between each feature vector in the feature matrix F_1 and each codeword is calculated; the similarity between vectors can be obtained by cosine similarity, and in deep learning the cosine similarity is approximated by the dot-product similarity to keep the computation efficient; to keep the similarity measure accurate, the vectors are normalized before the dot product is computed; following the rules of matrix computation, F_1 is transposed to obtain F_1^T, which is then matrix-multiplied with D, and the similarity matrix W is obtained through a softmax function:
W = softmax(F_1^T · D^T)    (13)
the softmax function in the above equation maps the similarities into the (0, 1) interval and makes the similarities of the K codewords to any one feature x_i sum to 1, i.e. Σ_{k=1}^{K} w_ik = 1; then, taking the similarities as weights, matrix multiplication yields the reconstructed feature matrix F_2:
F_2 = D^T W^T    (14)
Step 2.4, constructing an upsampling structure;
first, the feature matrices F_1 and F_2 computed in steps 2.2 and 2.3 are combined using weight parameters to obtain the final feature matrix G:
G = w_f1 * F_1 + w_f2 * F_2    (15)
where w_f1 and w_f2 are learnable weights; using the backpropagation-based parameter adjustment of the deep learning network model, the network automatically adjusts how the feature information is combined and obtains accurate estimates of these parameters;
then G is channel-compressed according to the number of classes n, explicitly setting up a structure in which the C channels are mapped to the n class-prediction channels; finally, bilinear interpolation upsamples the result to the original image size;
this step produces a feature matrix of the same size as the original image that not only contains the convolutional and texture features but has also been denoised, better highlighting the texture feature information and removing the noise interference that convolution after downsampling may introduce;
step 3, deep learning network model training;
the sample set and label set made in step 1 are used as the input of the network model constructed in step 2; the hyper-parameters of the network model are set, and the model is trained with a gradient descent algorithm until a stable result is obtained;
step 4, image prediction;
the data set is predicted to obtain the probability P_ij that pixel j belongs to type i, and the type with the maximum probability is taken as the type T_j of the pixel:
T_j = argmax_i(P_ij)    (16)
since pixels in the overlap regions receive several different segmentation results, these are combined by majority voting;
processing all the images gives the type of every pixel, realizing the identification and segmentation of ground object types; connected ground objects of the same type form the minimum texture primitives;
step 5, segmentation result post-processing
connected regions formed by pixels of the same type are counted, and each connected region in the prediction result is checked against a threshold e_Con; if a connected region is smaller than the threshold, it is expanded by a certain amount; if it touches other regions during the expansion, the network is considered to have predicted the category of this connected region incorrectly, and its category is changed to that of the first other connected region it touches; if the expansion touches no other region, the connected region is regarded as an isolated noise point and removed.
CN202010099370.0A 2020-02-18 2020-02-18 High-resolution image ground feature identification and segmentation method based on texture features Active CN111310666B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010099370.0A CN111310666B (en) 2020-02-18 2020-02-18 High-resolution image ground feature identification and segmentation method based on texture features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010099370.0A CN111310666B (en) 2020-02-18 2020-02-18 High-resolution image ground feature identification and segmentation method based on texture features

Publications (2)

Publication Number Publication Date
CN111310666A true CN111310666A (en) 2020-06-19
CN111310666B CN111310666B (en) 2022-03-18

Family

ID=71148475

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010099370.0A Active CN111310666B (en) 2020-02-18 2020-02-18 High-resolution image ground feature identification and segmentation method based on texture features

Country Status (1)

Country Link
CN (1) CN111310666B (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112990368A (en) * 2021-04-26 2021-06-18 湖南大学 Polygonal structure guided hyperspectral image single sample identification method and system
CN113011425A (en) * 2021-03-05 2021-06-22 上海商汤智能科技有限公司 Image segmentation method and device, electronic equipment and computer readable storage medium
CN113191386A (en) * 2021-03-26 2021-07-30 中国矿业大学 Chromosome classification model based on grid reconstruction learning
CN113222033A (en) * 2021-05-19 2021-08-06 北京数研科技发展有限公司 Monocular image estimation method based on multi-classification regression model and self-attention mechanism
CN113362323A (en) * 2021-07-21 2021-09-07 中国科学院空天信息创新研究院 Image detection method based on sliding window block
CN114173137A (en) * 2020-09-10 2022-03-11 北京金山云网络技术有限公司 Video coding method and device and electronic equipment
CN114406502A (en) * 2022-03-14 2022-04-29 扬州市振东电力器材有限公司 Laser metal cutting method and system
CN115661113A (en) * 2022-11-09 2023-01-31 浙江酷趣智能科技有限公司 Moisture-absorbing and sweat-releasing fabric and preparation process thereof
CN117611471A (en) * 2024-01-22 2024-02-27 中国科学院长春光学精密机械与物理研究所 High-dynamic image synthesis method based on texture decomposition model
CN117649661A (en) * 2024-01-30 2024-03-05 青岛超瑞纳米新材料科技有限公司 Carbon nanotube preparation state image processing method
CN117809140A (en) * 2024-03-01 2024-04-02 榆林拓峰达岸网络科技有限公司 Image preprocessing system and method based on image recognition
CN113011425B (en) * 2021-03-05 2024-06-07 上海商汤智能科技有限公司 Image segmentation method, device, electronic equipment and computer readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101408941A (en) * 2008-10-20 2009-04-15 中国科学院遥感应用研究所 Method for multi-dimension segmentation of remote sensing image and representation of segmentation result hierarchical structure
CN108734661A (en) * 2018-05-25 2018-11-02 南京信息工程大学 High-definition picture prediction technique based on image texture information architecture loss function
CN109118435A (en) * 2018-06-15 2019-01-01 广东工业大学 A kind of depth residual error convolutional neural networks image de-noising method based on PReLU
CN109784283A (en) * 2019-01-21 2019-05-21 陕西师范大学 Based on the Remote Sensing Target extracting method under scene Recognition task

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101408941A (en) * 2008-10-20 2009-04-15 中国科学院遥感应用研究所 Method for multi-dimension segmentation of remote sensing image and representation of segmentation result hierarchical structure
CN108734661A (en) * 2018-05-25 2018-11-02 南京信息工程大学 High-definition picture prediction technique based on image texture information architecture loss function
CN109118435A (en) * 2018-06-15 2019-01-01 广东工业大学 A kind of depth residual error convolutional neural networks image de-noising method based on PReLU
CN109784283A (en) * 2019-01-21 2019-05-21 陕西师范大学 Based on the Remote Sensing Target extracting method under scene Recognition task

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
BRUNO T. KITANO ET AL: "Corn Plant Counting Using Deep Learning and UAV Images", 《IEEE GEOSCIENCE AND REMOTE SENSING LETTERS》 *

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114173137A (en) * 2020-09-10 2022-03-11 北京金山云网络技术有限公司 Video coding method and device and electronic equipment
CN113011425A (en) * 2021-03-05 2021-06-22 上海商汤智能科技有限公司 Image segmentation method and device, electronic equipment and computer readable storage medium
CN113011425B (en) * 2021-03-05 2024-06-07 上海商汤智能科技有限公司 Image segmentation method, device, electronic equipment and computer readable storage medium
CN113191386B (en) * 2021-03-26 2023-11-03 中国矿业大学 Chromosome classification model based on grid reconstruction learning
CN113191386A (en) * 2021-03-26 2021-07-30 中国矿业大学 Chromosome classification model based on grid reconstruction learning
CN112990368A (en) * 2021-04-26 2021-06-18 湖南大学 Polygonal structure guided hyperspectral image single sample identification method and system
CN113222033A (en) * 2021-05-19 2021-08-06 北京数研科技发展有限公司 Monocular image estimation method based on multi-classification regression model and self-attention mechanism
CN113362323A (en) * 2021-07-21 2021-09-07 中国科学院空天信息创新研究院 Image detection method based on sliding window block
CN113362323B (en) * 2021-07-21 2022-09-16 中国科学院空天信息创新研究院 Image detection method based on sliding window partitioning
CN114406502B (en) * 2022-03-14 2022-11-25 扬州市振东电力器材有限公司 Laser metal cutting method and system
CN114406502A (en) * 2022-03-14 2022-04-29 扬州市振东电力器材有限公司 Laser metal cutting method and system
CN115661113A (en) * 2022-11-09 2023-01-31 浙江酷趣智能科技有限公司 Moisture-absorbing and sweat-releasing fabric and preparation process thereof
CN117611471A (en) * 2024-01-22 2024-02-27 中国科学院长春光学精密机械与物理研究所 High-dynamic image synthesis method based on texture decomposition model
CN117611471B (en) * 2024-01-22 2024-04-09 中国科学院长春光学精密机械与物理研究所 High-dynamic image synthesis method based on texture decomposition model
CN117649661A (en) * 2024-01-30 2024-03-05 青岛超瑞纳米新材料科技有限公司 Carbon nanotube preparation state image processing method
CN117649661B (en) * 2024-01-30 2024-04-12 青岛超瑞纳米新材料科技有限公司 Carbon nanotube preparation state image processing method
CN117809140A (en) * 2024-03-01 2024-04-02 榆林拓峰达岸网络科技有限公司 Image preprocessing system and method based on image recognition
CN117809140B (en) * 2024-03-01 2024-05-28 榆林拓峰达岸网络科技有限公司 Image preprocessing system and method based on image recognition

Also Published As

Publication number Publication date
CN111310666B (en) 2022-03-18

Similar Documents

Publication Publication Date Title
CN111310666B (en) High-resolution image ground feature identification and segmentation method based on texture features
CN110135267B (en) Large-scene SAR image fine target detection method
CN111915592B (en) Remote sensing image cloud detection method based on deep learning
CN112668494A (en) Small sample change detection method based on multi-scale feature extraction
CN111914686B (en) SAR remote sensing image water area extraction method, device and system based on surrounding area association and pattern recognition
Venugopal Automatic semantic segmentation with DeepLab dilated learning network for change detection in remote sensing images
CN113705580B (en) Hyperspectral image classification method based on deep migration learning
US10755146B2 (en) Network architecture for generating a labeled overhead image
CN114187450A (en) Remote sensing image semantic segmentation method based on deep learning
CN110084181B (en) Remote sensing image ship target detection method based on sparse MobileNet V2 network
CN114022408A (en) Remote sensing image cloud detection method based on multi-scale convolution neural network
CN113240040A (en) Polarized SAR image classification method based on channel attention depth network
CN113239736A (en) Land cover classification annotation graph obtaining method, storage medium and system based on multi-source remote sensing data
CN117788296B (en) Infrared remote sensing image super-resolution reconstruction method based on heterogeneous combined depth network
CN115497002A (en) Multi-scale feature fusion laser radar remote sensing classification method
Gu et al. A classification method for polsar images using SLIC superpixel segmentation and deep convolution neural network
Liu et al. High-resolution remote sensing image information extraction and target recognition based on multiple information fusion
CN116758419A (en) Multi-scale target detection method, device and equipment for remote sensing image
Zou et al. DiffCR: A fast conditional diffusion framework for cloud removal from optical satellite images
CN112560719B (en) High-resolution image water body extraction method based on multi-scale convolution-multi-core pooling
CN111860668B (en) Point cloud identification method for depth convolution network of original 3D point cloud processing
Jing et al. Time series land cover classification based on semi-supervised convolutional long short-term memory neural networks
CN113780096A (en) Vegetation land feature extraction method based on semi-supervised deep learning
CN113192018A (en) Water-cooled wall surface defect video identification method based on fast segmentation convolutional neural network
Gupta et al. Remote Sensing Image Classification Using Deep Learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant