CN111667493B - Orchard fruit tree region segmentation method and system based on deformable convolutional neural network

Info

Publication number
CN111667493B
CN111667493B (application CN202010464877.1A)
Authority
CN
China
Prior art keywords
convolution
depth
neural network
features
color
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010464877.1A
Other languages
Chinese (zh)
Other versions
CN111667493A (en)
Inventor
姜军
周作禹
胡忠冰
胡若澜
宋丰璐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong University of Science and Technology
Priority to CN202010464877.1A
Publication of CN111667493A
Application granted
Publication of CN111667493B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an orchard fruit tree region segmentation method and system based on a deformable convolutional neural network, belonging to the field of intelligent agriculture. The method comprises: for a depth image and a color image of the same fruit tree region in an orchard, extracting depth features from the depth image and initial color features from the color image with a deformable convolutional neural network; learning the bias parameters of the deformable convolutional neural network jointly from the depth features and the initial color features; and learning the feature amplification coefficients of the deformable convolutional neural network from the depth features alone, thereby obtaining a feature model. A color image of the orchard is then collected, its color features are extracted with the feature model, and the color image is segmented with these features to obtain the fruit tree region. The method models the complex morphology of fruit trees better and therefore segments the fruit tree region more accurately, reducing pesticide waste and the pollution of pesticides to the land, which is of great significance for the implementation of intelligent agriculture in China.

Description

Orchard fruit tree region segmentation method and system based on deformable convolution neural network
Technical Field
The invention belongs to the field of intelligent agriculture, and particularly relates to an orchard fruit tree region segmentation method and system based on a deformable convolution neural network.
Background
Research on orchard fruit tree region segmentation helps apply pesticide to orchards precisely, reducing pesticide waste and the pollution of pesticides to the land, and is of great significance for the implementation of intelligent agriculture in China.
Traditional orchard fruit tree region segmentation algorithms based on image characteristics segment the fruit tree region using hand-designed features such as color and texture, but their segmentation precision is severely disturbed by factors such as uneven illumination and weeds in the background. Orchard fruit tree region segmentation algorithms based on a three-dimensional model segment using a pre-established three-dimensional tree model, but such a model cannot adapt to the complex morphology of fruit trees in natural scenes. Deep-learning orchard fruit tree region segmentation algorithms based on RGB images exploit the automatic feature learning of convolutional neural networks to reduce the error rate, but are still affected by weeds in the background.
The prior art therefore suffers from the technical problems that the model cannot adapt to the complex morphology of fruit trees, the inter-row areas between fruit trees are segmented by mistake, pesticide is wasted, and environmental pollution increases.
Disclosure of Invention
Aiming at the above defects or improvement requirements of the prior art, the invention provides an orchard fruit tree region segmentation method and system based on a deformable convolutional neural network, thereby solving the technical problems in the prior art that the model cannot adapt to the complex morphology of fruit trees, the inter-row areas between fruit trees are segmented by mistake, pesticide is wasted, and environmental pollution increases.
In order to achieve the above object, according to an aspect of the present invention, there is provided an orchard fruit tree region segmentation method based on a deformable convolutional neural network, including:
collecting a color image of an orchard, extracting color features of the color image by using a feature model, and segmenting the color image by using the color features to obtain a fruit tree region;
the feature model is obtained by training a deformable convolution neural network, and the training comprises the following steps:
for a depth image and a color image of the same fruit tree region in an orchard, extracting depth features from the depth image; extracting initial color features from the color image with a deformable convolutional neural network; learning the bias parameters of the deformable convolutional neural network jointly from the depth features and the initial color features; and learning the feature amplification coefficients of the deformable convolutional neural network from the depth features alone, thereby obtaining the feature model.
Further, the depth features are obtained by passing the depth image through a convolutional neural network in which the kernel size of every convolutional layer is 1.
Further, the structure of the deformable convolutional neural network is as follows:
the device comprises a first convolution layer, a maximum pooling layer, a second convolution layer and a deformable convolution layer which are connected in sequence, wherein the convolution kernel size of the first convolution layer is 7, the convolution kernel size of the second convolution layer is 1, 3 and 1 in sequence, and the convolution kernel size of the deformable convolution layer is 1, 3 and 1 in sequence.
Further, the learning of the bias parameters includes:
and performing parallel convolution operation with convolution kernel sizes of 3, 5 and 7 on the depth features, splicing the depth features with the initial color features, and learning the spliced features by using convolution with the convolution kernel size of 1 to obtain bias parameters.
Further, the learning of the feature amplification factor includes:
and learning the depth features by using convolution with the convolution kernel size of 1 to obtain feature amplification coefficients.
Further, the feature model is:
$$y(p_o) = \sum_{p_k \in \mathcal{R}} w(p_k) \cdot x(p_o + p_k + \Delta p_k) \cdot \Delta m_k$$

where $y$ is the output color feature, $p_o$ is the pixel position in the output color feature, $\mathcal{R}$ is the convolution sampling grid, $p_k$ is a convolution sampling position, $w$ is the weight of the convolution kernel, $x$ is the input color image, $\Delta p_k$ is the bias parameter, and $\Delta m_k$ is the feature amplification coefficient.
According to another aspect of the present invention, there is provided an orchard fruit tree region segmentation system based on a deformable convolutional neural network, comprising:
the model training module is used for, given a depth image and a color image of the same fruit tree region in the orchard, extracting the depth features of the depth image, extracting the initial color features of the color image with the deformable convolutional neural network, learning the bias parameters of the deformable convolutional neural network jointly from the depth features and the initial color features, and learning the feature amplification coefficients of the deformable convolutional neural network from the depth features, thereby obtaining a feature model;
and the region segmentation module is used for acquiring a color image of the orchard, extracting color features of the color image by using the feature model, and segmenting the color image by using the color features to obtain a fruit tree region.
Further, the model training module comprises:
the depth feature extraction module is used for extracting the depth features of the depth image through a convolutional neural network with the convolutional kernel size of 1 of all convolutional layers;
an initial color feature extraction module, configured to extract the initial color features of a color image by using a deformable convolutional neural network with the following structure: the network comprises a first convolution layer, a maximum pooling layer, a second convolution layer and a deformable convolution layer which are sequentially connected, wherein the convolution kernel size of the first convolution layer is 7, the convolution kernel sizes of the second convolution layer are sequentially 1, 3 and 1, and the convolution kernel sizes of the deformable convolution layer are sequentially 1, 3 and 1;
the offset parameter learning module is used for splicing the depth features with the initial color features after parallel convolution operations with convolution kernel sizes of 3, 5 and 7 are performed on the depth features, and learning the spliced features by using convolution with the convolution kernel size of 1 to obtain offset parameters;
and the characteristic amplification factor learning module is used for learning the depth characteristic by using convolution with the convolution kernel size of 1 to obtain a characteristic amplification factor.
In general, compared with the prior art, the above technical solution contemplated by the present invention can achieve the following beneficial effects:
(1) The orchard fruit tree region segmentation method based on the deformable convolutional neural network introduces depth features to guide the deformable convolution, so the receptive field size of the features adapts automatically and the complex morphology of fruit trees is modeled better. The fruit tree region is thus segmented more accurately, reducing pesticide waste and the pollution of pesticides to the land, which is of great significance for the implementation of intelligent agriculture in China.
(2) According to the method, the depth image and the color image of the same fruit tree area in the orchard have the same resolution: each pixel in the color image holds an RGB value, and the corresponding pixel in the depth image holds a depth value. When the model is trained, because the depth value of low weeds differs from that of fruit trees, the convolutional neural network can learn this difference, so the influence of low weeds is avoided and the fruit tree area is segmented more accurately to reduce pesticide waste.
(3) During actual segmentation, the deformable convolutional neural network guided by depth features extracts features from the color images, and the fruit tree region segmentation map is predicted from these color features. Because the extracted color features contain information from the depth images, the influence of low bushes is avoided and the fruit tree region is segmented more accurately.
(4) The depth features of the invention are extracted from the depth image by a convolutional neural network in which every convolutional layer has kernel size 1. The kernel size is set to 1 because with a larger kernel (such as 3) the features of adjacent pixels become very similar, which hinders the learning of the feature amplification coefficients.
(5) The bias parameters allow the sampling grid to deform freely. The depth features are essential for learning them because depth provides strong cues about object edges: depth values tend to jump there. The RGB features are also crucial because they carry the geometric information of the object. The present application therefore splices the depth features with the initial color features for bias parameter learning, and the parallel convolutions with kernel sizes 3, 5 and 7 bring the receptive field of the depth features in line with that of the color features.
(6) The learnable feature amplification coefficient represents the relative importance of each sampling location, and RGB features are ill-suited to learning it: the RGB features at all sampling locations of a convolution kernel have roughly the same receptive field, making their relative importance hard to distinguish. For example, in the last layer of a standard ResNet-50 feature extractor, each feature has a receptive field of 483 × 483 pixels while adjacent features are only 32 pixels apart in image space, so the receptive fields of adjacent features overlap almost entirely. Because RGB features thus harm the learning of the feature amplification coefficient, it is learned from the depth features alone.
(7) The kernel sizes of the convolutional neural network, of the deformable convolutional neural network, and of the convolutions used for bias parameter and feature amplification coefficient learning are all chosen as described above, and the standard convolution is upgraded to a deformable convolution with bias parameters and feature amplification coefficients. The parameters are therefore learned better, the complex morphology of fruit trees is modeled better, and the fruit tree region is segmented more accurately.
Drawings
Fig. 1 is a schematic flow chart of an orchard fruit tree region segmentation method based on a deformable convolutional neural network according to an embodiment of the present invention;
FIG. 2 is a network structure diagram of a convolutional neural network for extracting depth features according to an embodiment of the present invention;
FIG. 3 is a network structure diagram of a deformable convolutional neural network provided by an embodiment of the present invention;
FIG. 4 is a schematic diagram of depth feature guided deformable convolution as provided by an embodiment of the present invention;
FIG. 5(a) is a color image provided by an embodiment of the present invention;
fig. 5(b) is a fruit tree region segmentation diagram provided by the embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
As shown in fig. 1, an orchard fruit tree region segmentation method based on a deformable convolutional neural network includes:
collecting a color image of an orchard, extracting color features of the color image by using a feature model, and segmenting the color image by using the color features to obtain a fruit tree region;
the feature model is obtained by training a deformable convolution neural network, and the training comprises the following steps:
for a depth image and a color image of the same fruit tree region in an orchard, extracting depth features from the depth image; extracting initial color features from the color image with a deformable convolutional neural network; learning the bias parameters of the deformable convolutional neural network jointly from the depth features and the initial color features; and learning the feature amplification coefficients of the deformable convolutional neural network from the depth features alone, thereby obtaining the feature model.
Further, the depth features are obtained by passing the depth image through a convolutional neural network in which every convolutional layer has kernel size 1. The specific structure of this network is shown in fig. 2: it comprises 5 convolutional layers, each followed by BN-ReLU, where BN and ReLU denote a batch normalization layer and a ReLU activation function respectively. Convolutional layers are written as "Conv-(kernel size)-(output channels)-(stride)"; Conv-1-64-2, for example, denotes a convolution with kernel size 1, 64 output channels and stride 2. The 5 convolutional layers are, in sequence: Conv-1-64-2, Conv-1-96-1, and Conv-1-96-1.
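By way of illustration, a minimal PyTorch sketch of such a 1×1-kernel depth feature extractor follows. The text spells out only three of the five layer configurations, so the widths and strides of the remaining two blocks (and the single-channel depth input) are assumptions here, not the patent's specification.

```python
import torch.nn as nn

def conv1x1_bn_relu(in_ch, out_ch, stride):
    """One Conv-1-(out_ch)-(stride) block followed by BN-ReLU, as in fig. 2."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=1, stride=stride, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class DepthFeatureExtractor(nn.Module):
    """Five 1x1-kernel convolution blocks over a single-channel depth image.

    Only Conv-1-64-2 and two Conv-1-96-1 blocks are spelled out in the
    text; the last two blocks here are assumed repeats of Conv-1-96-1.
    """
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            conv1x1_bn_relu(1, 64, stride=2),   # Conv-1-64-2
            conv1x1_bn_relu(64, 96, stride=1),  # Conv-1-96-1
            conv1x1_bn_relu(96, 96, stride=1),  # Conv-1-96-1
            conv1x1_bn_relu(96, 96, stride=1),  # assumed
            conv1x1_bn_relu(96, 96, stride=1),  # assumed
        )

    def forward(self, depth):  # depth: (N, 1, H, W)
        return self.layers(depth)
```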
As shown in fig. 3, the structure of the deformable convolutional neural network is:
the network comprises a first convolution layer, a maximum pooling layer, a second convolution layer and a deformable convolution layer which are sequentially connected, wherein the convolution kernel size of the first convolution layer is 7, the convolution kernel sizes of the second convolution layer are sequentially 1, 3 and 1, and the convolution kernel sizes of the deformable convolution layer are sequentially 1, 3 and 1.
The network follows the ResNet-50 model. Convolutional layers are denoted "Conv-(kernel size)-(output channels)" and deformable convolutional layers "DConv-(kernel size)-(output channels)". Conv-7-64 thus denotes a convolution with kernel size 7 and 64 output channels, Max-Pooling denotes the maximum pooling layer, DConv denotes deformable convolution, and ×3 on the right means the bracketed convolution operations are repeated three times.
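A hedged sketch of one such 1-3-1 bottleneck unit is given below. The kernel pattern matches the text, while the channel widths (ResNet-50's 64/64/256 for the first stage), the ReLU placement, and the externally supplied offsets and amplification coefficients are assumptions for illustration.

```python
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformableBottleneck(nn.Module):
    """One 1-3-1 bottleneck in which the 3x3 convolution is deformable.

    Offsets and amplification coefficients for the 3x3 deformable
    convolution are produced externally (from depth + color features,
    per fig. 4) and passed into forward().
    """
    def __init__(self, in_ch=64, mid_ch=64, out_ch=256):
        super().__init__()
        self.reduce = nn.Conv2d(in_ch, mid_ch, kernel_size=1)
        self.deform = DeformConv2d(mid_ch, mid_ch, kernel_size=3, padding=1)
        self.expand = nn.Conv2d(mid_ch, out_ch, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x, offset, mask):
        y = self.relu(self.reduce(x))
        # offset: (N, 18, H, W); mask: (N, 9, H, W) in [0, 1]
        y = self.relu(self.deform(y, offset, mask))
        return self.relu(self.expand(y))
```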
As shown in fig. 4, the learning of the bias parameters and the feature amplification coefficients includes:
In fig. 4, 1 × 1, 3 × 3, 5 × 5 and 7 × 7 denote convolution kernel sizes of 1, 3, 5 and 7 respectively. The depth features are passed through parallel convolutions with kernel sizes 3, 5 and 7 and then spliced with the initial color features, splicing meaning stacking the features along the channel dimension. A convolution with kernel size 1 is then applied to the spliced features to learn the bias parameters, and a convolution with kernel size 1 is applied to the depth features to learn the feature amplification coefficients.
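A hedged PyTorch sketch of this offset-and-amplification scheme follows; the channel widths, the sigmoid that keeps the coefficient in [0, 1], and the assumption that depth and color feature maps share one spatial resolution are ours, not fixed by the patent.

```python
import torch
import torch.nn as nn

class OffsetAndModulation(nn.Module):
    """Learns bias parameters (sampling offsets) from depth + color features
    and feature amplification coefficients from the depth features alone.

    For a 3x3 deformable kernel there are 2*3*3 = 18 offset channels and
    3*3 = 9 amplification channels.
    """
    def __init__(self, depth_ch, color_ch, k=3):
        super().__init__()
        # Parallel 3/5/7 convolutions enlarge the depth features' receptive
        # field to match the color features; padding preserves spatial size.
        self.d3 = nn.Conv2d(depth_ch, depth_ch, 3, padding=1)
        self.d5 = nn.Conv2d(depth_ch, depth_ch, 5, padding=2)
        self.d7 = nn.Conv2d(depth_ch, depth_ch, 7, padding=3)
        # Kernel-size-1 convolutions produce the offsets and coefficients.
        self.offset_conv = nn.Conv2d(3 * depth_ch + color_ch, 2 * k * k, 1)
        self.amp_conv = nn.Conv2d(depth_ch, k * k, 1)
        # Zero init: offsets start at 0 and sigmoid(0) = 0.5 amplification.
        for m in (self.offset_conv, self.amp_conv):
            nn.init.zeros_(m.weight)
            nn.init.zeros_(m.bias)

    def forward(self, depth_feat, color_feat):
        d = torch.cat(
            [self.d3(depth_feat), self.d5(depth_feat), self.d7(depth_feat)],
            dim=1)
        offset = self.offset_conv(torch.cat([d, color_feat], dim=1))
        amp = torch.sigmoid(self.amp_conv(depth_feat))  # range [0, 1]
        return offset, amp
```

The zero initialization reproduces the starting values of 0 for the bias parameters and 0.5 for the amplification coefficients described below; the 0.1× learning rate for these additional convolutions would be handled by a separate optimizer parameter group.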
Since standard convolution operates identically along the channel dimension, it is introduced here in two dimensions for simplicity of notation, without loss of generality. The two-dimensional convolution comprises two steps: first, a regular grid $\mathcal{R}$ is used to sample the input feature map $x$; second, the sampled values are weighted and summed with the weights $w$. The grid $\mathcal{R}$ defines the receptive field size and the dilation rate. For example,

$$\mathcal{R} = \{(-1,-1), (-1,0), \ldots, (0,1), (1,1)\}$$

defines a 3 × 3 convolution kernel with dilation rate 1.
For each position $p_o$ of the output feature map $y$:

$$y(p_o) = \sum_{p_k \in \mathcal{R}} w(p_k) \cdot x(p_o + p_k)$$

where $p_k$ enumerates all positions in the grid $\mathcal{R}$.
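As a quick numerical sanity check (arbitrary values; PyTorch's conv2d computes the cross-correlation form used in CNNs, with the grid re-indexed from {-1, 0, 1} to {0, 1, 2}), the sum above reproduces a library convolution:

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 5, 5)
w = torch.randn(1, 1, 3, 3)

# Library convolution, no padding: 5x5 input -> 3x3 output.
y = F.conv2d(x, w)

# Direct evaluation of y(p_o) = sum_{p_k in R} w(p_k) * x(p_o + p_k).
y_manual = torch.zeros_like(y)
for i in range(3):          # output rows (p_o)
    for j in range(3):      # output cols
        for a in range(3):  # kernel rows (p_k)
            for b in range(3):
                y_manual[0, 0, i, j] += w[0, 0, a, b] * x[0, 0, i + a, j + b]

print(torch.allclose(y, y_manual))  # True
```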
The characteristic model in the embodiment of the invention is as follows:
$$y(p_o) = \sum_{p_k \in \mathcal{R}} w(p_k) \cdot x(p_o + p_k + \Delta p_k) \cdot \Delta m_k$$

where $y$ is the output color feature, $p_o$ is the pixel position in the output color feature, $\mathcal{R}$ is the convolution sampling grid, $p_k$ is a convolution sampling position, $w$ is the weight of the convolution kernel, $x$ is the input color image, $\Delta p_k$ is the bias parameter, and $\Delta m_k$ is the feature amplification coefficient.
Here $\Delta m_k$ is the feature amplification coefficient, whose value range is [0, 1]. Like the bias parameters, it is learned by an additional convolution whose kernel size and dilation rate match the current convolution. The weights of the additional convolution are initialized to 0, so the initial values of $\Delta p_k$ and $\Delta m_k$ are 0 and 0.5 respectively. The learning rate of the additional convolution is set to 0.1 times that of the current convolution.
The standard convolution is improved into a deformable convolution with bias parameters, which let the sampling grid deform freely, and feature amplification coefficients. The depth features are essential for learning the bias parameters because depth provides strong cues about object edges: depth values tend to jump there. The RGB features are also crucial because they carry the geometric information of the object. The present application therefore splices the depth features with the initial color features for bias parameter learning, and the parallel convolutions with kernel sizes 3, 5 and 7 bring the receptive field of the depth features in line with that of the color features. The learnable feature amplification coefficient represents the relative importance of each sampling location, and RGB features are ill-suited to learning it: the RGB features at all sampling locations of a convolution kernel have roughly the same receptive field, making their relative importance hard to distinguish. For example, in the last layer of a standard ResNet-50 feature extractor, each feature has a receptive field of 483 × 483 pixels while adjacent features are only 32 pixels apart in image space, so the receptive fields of adjacent features overlap almost entirely. Because RGB features thus harm the learning of the feature amplification coefficient, it is learned from the depth features alone.
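For reference, a modulated deformable convolution of exactly this form is available in torchvision (version 0.9 or later). The sketch below, with arbitrary toy shapes, wires zero offsets and 0.5 amplification coefficients (the initial values described above) into it:

```python
import torch
from torchvision.ops import deform_conv2d

# Toy shapes: one image, 64-channel color features, 3x3 deformable kernel.
color_feat = torch.randn(1, 64, 56, 56)
weight = torch.randn(128, 64, 3, 3)

# Offsets and amplification coefficients as OffsetAndModulation above
# would emit at initialization: all offsets 0, all coefficients 0.5.
offset = torch.zeros(1, 2 * 3 * 3, 56, 56)  # (N, 2*k*k, H, W)
mask = torch.full((1, 3 * 3, 56, 56), 0.5)  # (N, k*k, H, W), range [0, 1]

# y(p_o) = sum_k w(p_k) * x(p_o + p_k + Δp_k) * Δm_k
y = deform_conv2d(color_feat, offset, weight, padding=1, mask=mask)
print(y.shape)  # torch.Size([1, 128, 56, 56])
```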
A color image of the orchard is collected, as shown in fig. 5(a), and the feature model extracts its color features, which on this basis already contain information from the depth image. The color image is then segmented using these features: a fully connected neural network classifies the color feature of each pixel to obtain the pixel's category. There are two categories, pixels belonging to a fruit tree and pixels not belonging to one, corresponding to 1 and 0 in the segmentation map; all positions marked 1 constitute the fruit tree region, and the final result is shown in fig. 5(b). This shows that the disclosed method segments the fruit tree region accurately, reducing pesticide waste and the pollution of pesticides to the land, which is of great significance for the implementation of intelligent agriculture in China.
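The per-pixel fully connected classifier can be sketched as a 1×1 convolution applied to the color features; the shapes, the two-class argmax, and the final upsampling factor below are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

color_feat = torch.randn(1, 128, 56, 56)   # features from the model above

# A kernel-size-1 convolution is a fully connected classifier applied at
# every pixel; two output channels: {not fruit tree, fruit tree}.
classifier = nn.Conv2d(128, 2, kernel_size=1)
logits = classifier(color_feat)            # (1, 2, 56, 56)

# argmax over the class channel gives the 0/1 segmentation map directly.
seg_map = logits.argmax(dim=1)             # (1, 56, 56), values in {0, 1}

# If the features are spatially downsampled, scale the map back up to the
# input resolution (factor 4 is an assumption).
seg_full = F.interpolate(seg_map.unsqueeze(1).float(),
                         scale_factor=4, mode="nearest")
print(seg_full.shape)                      # torch.Size([1, 1, 224, 224])
```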
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (3)

1. An orchard fruit tree region segmentation method based on a deformable convolutional neural network is characterized by comprising the following steps:
collecting a color image and a depth image of an orchard, extracting depth features of the depth image, extracting color features of the color image by using a feature model under the guidance of the depth features, and segmenting the color image by using the color features to obtain a fruit tree region;
the feature model is obtained by training a deformable convolution neural network, and the training comprises the following steps:
for a depth image and a color image of the same fruit tree region in an orchard, extracting a depth feature of the depth image, extracting an initial color feature of the color image by using a deformable convolution neural network, learning by using the depth feature and the initial color feature together to obtain a bias parameter of the deformable convolution neural network, and learning by using the depth feature to obtain a feature amplification coefficient of the deformable convolution neural network, thereby obtaining a feature model;
the learning of the bias parameters includes:
performing convolution operation with parallel convolution kernel sizes of 3, 5 and 7 on the depth features, splicing the depth features with the initial color features, and learning the spliced features by using convolution with a convolution kernel size of 1 to obtain bias parameters;
the learning of the feature amplification factor includes:
learning the depth features by using convolution with a convolution kernel size of 1 to obtain feature amplification coefficients;
the characteristic model is as follows:
$$y(p_o) = \sum_{p_k \in \mathcal{R}} w(p_k) \cdot x(p_o + p_k + \Delta p_k) \cdot \Delta m_k$$

where $y$ is the output color feature, $p_o$ is the pixel position in the output color feature, $\mathcal{R}$ is the convolution sampling grid, $p_k$ is a convolution sampling position, $w$ is the weight of the convolution kernel, $x$ is the input color image, $\Delta p_k$ is the bias parameter, and $\Delta m_k$ is the feature amplification coefficient;
the structure of the deformable convolution neural network is as follows:
the network comprises a first convolution layer, a maximum pooling layer, a second convolution layer and a deformable convolution layer which are sequentially connected, wherein the convolution kernel size of the first convolution layer is 7, the convolution kernel sizes of the second convolution layer are sequentially 1, 3 and 1, and the convolution kernel sizes of the deformable convolution layer are sequentially 1, 3 and 1.
2. The orchard fruit tree region segmentation method based on the deformable convolutional neural network as claimed in claim 1, wherein the depth features are obtained by extracting a depth image through the convolutional neural network, and the convolutional kernel size of all convolutional layers in the convolutional neural network is 1.
3. An orchard fruit tree region segmentation system based on a deformable convolutional neural network, characterized by comprising:
the model training module is used for extracting the depth characteristics of the depth image from the depth image and the color image of the same fruit tree region in the orchard, extracting the initial color characteristics of the color image by using the deformable convolutional neural network, learning by using the depth characteristics and the initial color characteristics together to obtain the bias parameters of the deformable convolutional neural network, and learning by using the depth characteristics to obtain the characteristic amplification coefficient of the deformable convolutional neural network, thereby obtaining a characteristic model;
the region segmentation module is used for acquiring a color image and a depth image of an orchard, extracting the depth characteristic of the depth image, extracting the color characteristic of the color image by using the characteristic model under the guidance of the depth characteristic, and segmenting the color image by using the color characteristic to obtain a fruit tree region;
the model training module comprises:
the depth feature extraction module is used for extracting the depth features of the depth image through a convolutional neural network with the convolutional kernel size of 1 of all convolutional layers;
an initial color feature extraction module, configured to extract the initial color features of a color image by using a deformable convolutional neural network with the following structure: the network comprises a first convolution layer, a maximum pooling layer, a second convolution layer and a deformable convolution layer which are sequentially connected, wherein the convolution kernel size of the first convolution layer is 7, the convolution kernel sizes of the second convolution layer are sequentially 1, 3 and 1, and the convolution kernel sizes of the deformable convolution layer are sequentially 1, 3 and 1;
the offset parameter learning module is used for splicing the depth features with the initial color features after parallel convolution operations with convolution kernel sizes of 3, 5 and 7 are performed on the depth features, and learning the spliced features by using convolution with the convolution kernel size of 1 to obtain offset parameters;
the feature amplification factor learning module is used for learning the depth features by using convolution with the convolution kernel size of 1 to obtain feature amplification factors;
the characteristic model is as follows:
$$y(p_o) = \sum_{p_k \in \mathcal{R}} w(p_k) \cdot x(p_o + p_k + \Delta p_k) \cdot \Delta m_k$$

where $y$ is the output color feature, $p_o$ is the pixel position in the output color feature, $\mathcal{R}$ is the convolution sampling grid, $p_k$ is a convolution sampling position, $w$ is the weight of the convolution kernel, $x$ is the input color image, $\Delta p_k$ is the bias parameter, and $\Delta m_k$ is the feature amplification coefficient.
CN202010464877.1A 2020-05-27 2020-05-27 Orchard fruit tree region segmentation method and system based on deformable convolutional neural network Active CN111667493B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010464877.1A 2020-05-27 2020-05-27 Orchard fruit tree region segmentation method and system based on deformable convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010464877.1A 2020-05-27 2020-05-27 Orchard fruit tree region segmentation method and system based on deformable convolutional neural network

Publications (2)

Publication Number Publication Date
CN111667493A (en) 2020-09-15
CN111667493B (en) 2022-09-20

Family

ID=72385140

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010464877.1A Active CN111667493B (en) 2020-05-27 2020-05-27 Orchard fruit tree region segmentation method and system based on deformable convolutional neural network

Country Status (1)

Country Link
CN (1) CN111667493B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109447008A (en) * 2018-11-02 2019-03-08 中山大学 Population analysis method based on attention mechanism and deformable convolutional neural networks
CN110399882A (en) * 2019-05-29 2019-11-01 广东工业大学 A kind of character detecting method based on deformable convolutional neural networks

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107563388A (en) * 2017-09-18 2018-01-09 东北大学 A kind of convolutional neural networks object identification method based on depth information pre-segmentation
CN108510467B (en) * 2018-03-28 2022-04-08 西安电子科技大学 SAR image target identification method based on depth deformable convolution neural network
US10699414B2 (en) * 2018-04-03 2020-06-30 International Business Machines Corporation Image segmentation based on a shape-guided deformable model driven by a fully convolutional network prior
US20200042825A1 (en) * 2018-08-02 2020-02-06 Veritone, Inc. Neural network orchestration
CN109409443A (en) * 2018-11-28 2019-03-01 北方工业大学 Multi-scale deformable convolution network target detection method based on deep learning
CN110674866B (en) * 2019-09-23 2021-05-07 兰州理工大学 Method for detecting X-ray breast lesion images by using transfer learning characteristic pyramid network

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109447008A (en) * 2018-11-02 2019-03-08 中山大学 Population analysis method based on attention mechanism and deformable convolutional neural networks
CN110399882A (en) * 2019-05-29 2019-11-01 广东工业大学 A kind of character detecting method based on deformable convolutional neural networks

Also Published As

Publication number Publication date
CN111667493A (en) 2020-09-15

Similar Documents

Publication Publication Date Title
CN108009542B (en) Weed image segmentation method in rape field environment
Wang et al. Image segmentation of overlapping leaves based on Chan–Vese model and Sobel operator
CN108573276B (en) Change detection method based on high-resolution remote sensing image
CN108682017B (en) Node2Vec algorithm-based super-pixel image edge detection method
CN108537239B (en) Method for detecting image saliency target
CN106709517B (en) Mangrove forest identification method and system
CN109064396A (en) A kind of single image super resolution ratio reconstruction method based on depth ingredient learning network
CN111126287B (en) Remote sensing image dense target deep learning detection method
CN109034268B (en) Pheromone trapper-oriented red-fat bark beetle detector optimization method
CN109344699A (en) Winter jujube disease recognition method based on depth of seam division convolutional neural networks
CN111340826A (en) Single tree crown segmentation algorithm for aerial image based on superpixels and topological features
CN111414954B (en) Rock image retrieval method and system
CN106611423B (en) SAR image segmentation method based on ridge ripple filter and deconvolution structural model
CN110853070A (en) Underwater sea cucumber image segmentation method based on significance and Grabcut
CN116091951A (en) Method and system for extracting boundary line between farmland and tractor-ploughing path
CN111462044A (en) Greenhouse strawberry detection and maturity evaluation method based on deep learning model
CN111008642A (en) High-resolution remote sensing image classification method and system based on convolutional neural network
CN115861686A (en) Litchi key growth period identification and detection method and system based on edge deep learning
CN117409339A (en) Unmanned aerial vehicle crop state visual identification method for air-ground coordination
CN112686086A (en) Crop classification method based on optical-SAR (synthetic aperture radar) cooperative response
CN113435254A (en) Sentinel second image-based farmland deep learning extraction method
CN112419333A (en) Remote sensing image self-adaptive feature selection segmentation method and system
CN114120359A (en) Method for measuring body size of group-fed pigs based on stacked hourglass network
CN111667493B (en) Orchard fruit tree region segmentation method and system based on deformable convolutional neural network
CN113591610A (en) Crop leaf aphid detection method based on computer vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant