CN113792744A - Crop growth data transmission system and method in low-power-consumption wide area network - Google Patents

Crop growth data transmission system and method in low-power-consumption wide area network

Info

Publication number
CN113792744A
Authority
CN
China
Prior art keywords: layer, module, feature map, image, output
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111083814.2A
Other languages
Chinese (zh)
Other versions
CN113792744B (en)
Inventor
谢吉龙
刘万村
赵丹
孙慧敏
姜涛
雍丽英
杜丽萍
孙昊楠
张喜海
王海龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northeast Agricultural University
Harbin Vocational and Technical College
Original Assignee
Northeast Agricultural University
Harbin Vocational and Technical College
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northeast Agricultural University and Harbin Vocational and Technical College
Priority to CN202111083814.2A
Publication of CN113792744A
Application granted
Publication of CN113792744B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/30 Services specially adapted for particular environments, situations or purposes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W48/00 Access restriction; Network selection; Access point selection
    • H04W48/18 Selecting a network or a communication service
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W84/00 Network topologies
    • H04W84/18 Self-organising networks, e.g. ad-hoc networks or sensor networks
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 Reducing energy consumption in communication networks
    • Y02D30/70 Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Security & Cryptography (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a crop growth data transmission system and method in a low-power-consumption wide area network. The invention aims to solve the problems that existing methods for judging the growth cycle of crops in remote mountainous areas lack a unified standard and suffer from low timeliness, poor accuracy and low working efficiency. The system comprises: an image data acquisition module for acquiring crop growth image information; a preprocessing module for preprocessing the acquired information; a network building module for building a machine learning model; a training module for obtaining a trained machine learning model; a wide area network communication module for transmitting the crop growth image information to be processed to the machine learning module; and a machine learning module for inputting the crop growth image information to be processed into the trained machine learning model to obtain a recognition result. The method is used in the field of crop growth recognition.

Description

Crop growth data transmission system and method in low-power-consumption wide area network
Technical Field
The invention relates to a crop growth data transmission system and method in a low-power-consumption wide area network, and belongs to the field of wireless communication.
Background
A wireless sensor network is a self-organizing network application system composed of a large number of autonomous nodes densely deployed in a monitoring area and connected by wireless communication. It is widely applied in fields such as military affairs, intelligent transportation, environment monitoring and medical health, and is recognized at home and abroad as a high-technology industry with broad development prospects.
The transmission scheduling method in the wireless sensor network protocol stack is responsible for allocating wireless communication resources to nodes, and is a key technology affecting performance indicators such as channel utilization and delay. In recent years, the low-power-consumption wide area network, characterized by low power consumption, low data rate and long-distance communication, has become one of the development directions of sensor networks and the Internet of Things. Because signals travel far in a low-power-consumption wide area network, a node can transmit data to the gateway in a single hop without listening for or forwarding other nodes' data, and can switch off its radio-frequency module as soon as its transmission to the gateway completes, which reduces node power consumption; this low power consumption has gradually made the low-power-consumption wide area network an important development direction for wireless sensor networks. However, the low communication rate means the same amount of data occupies the channel for a longer time, and the long transmission distance means nodes are deployed more densely and each gateway carries more nodes. Together, the low communication rate and long transmission distance make network channel resources scarce, so a new transmission scheduling algorithm is needed to effectively improve wireless channel utilization and meet data-rate requirements.
Identifying the growth cycle of crops is very important for agricultural production. It is relevant to crops such as corn, wheat, rice and cotton, and accurate identification of the growth cycle can effectively guide agricultural production.
Identifying the growth cycle generally requires qualified agricultural technicians to inspect crops on site. Although expert knowledge can determine the growth cycle this way, observing every plot in remote mountainous areas each day is enormously costly, no single expert is necessarily familiar with every crop, the demands on operators' professional skill are high, and many subjective factors enter the judgment. As a result, this working mode lacks a unified standard and suffers from low timeliness, poor accuracy and low working efficiency, which greatly limits its popularization and application in agricultural production.
Disclosure of Invention
The invention aims to solve the problems that existing methods for judging the growth cycle of crops in remote mountainous areas lack a unified standard and suffer from low timeliness, poor accuracy and low working efficiency, and provides a crop growth data transmission system and method in a low-power-consumption wide area network.
A crop growth data transmission system in a low-power-consumption wide area network comprises an image data acquisition module, a preprocessing module, a network building module, a training module, a wide area network communication module and a machine learning module;
the image data acquisition module is used for acquiring crop growth image information;
the preprocessing module is used for preprocessing the information acquired by the image data acquisition module;
the network building module is used for building a machine learning model;
the training module is used for training the machine learning model according to the preprocessed information acquired by the preprocessing module to acquire the trained machine learning model;
the wide area network communication module is used for transmitting the crop growth image information to be processed, which is acquired by the image data acquisition module, to the machine learning module;
and the machine learning module is used for inputting crop growth image information to be processed into the trained machine learning model to obtain a recognition result.
Preferably, the image data acquisition module is used for acquiring image information of crop growth; the specific process is as follows:
whether boundary value information exists for the images shot from different viewing angles is judged; if boundary value information exists for all images, the images are spliced according to the boundary value information and restored into a continuous image; if boundary value information is missing for one or more images, the images lacking it are processed with an image processing algorithm so that boundary value information exists for all images, after which the spliced continuous image is obtained;
the image processing algorithm comprises the following specific processes:
the features and key points of the boundary images are detected and extracted, the feature vectors are matched and a homography matrix is estimated using the RANSAC algorithm, and the boundary information of the images lacking boundary value information is thereby determined.
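As an illustration of this step, the following sketch uses OpenCV's ORB detector and a brute-force matcher with RANSAC-based homography estimation; the patent does not fix the detector or matcher, so those choices are assumptions:

import cv2
import numpy as np

def estimate_boundary_homography(img_a, img_b, min_matches=10):
    # Estimate the homography mapping img_b onto img_a from matched boundary
    # key points, so missing boundary information can be recovered and the
    # two views spliced into one continuous image.
    orb = cv2.ORB_create(nfeatures=2000)             # feature / key-point detector
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    if len(matches) < min_matches:
        raise ValueError("not enough matches to estimate boundary information")
    src = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC rejects outlier correspondences while fitting the homography
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H

The homography H can then be passed to cv2.warpPerspective to splice the image lacking boundary value information onto its neighbour.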
Preferably, the preprocessing module is configured to preprocess the information acquired by the image data acquiring module; the specific process is as follows:
de-noising the crop growth image information acquired by the image data acquisition module;
g(x, y) = f(x, y) − n(x, y)
in the formula, f(x, y) represents the crop growth image signal acquired by the image data acquisition module, g(x, y) represents the image signal after denoising, and n(x, y) represents the noise.
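A minimal sketch of this additive-noise model: the patent does not fix the denoiser, so a Gaussian filter stands in here as one assumed choice, with the noise term recovered as the residual:

import cv2

def denoise(f):
    # g(x, y) is the denoised signal; n(x, y) = f - g is the implied noise term
    g = cv2.GaussianBlur(f, ksize=(5, 5), sigmaX=1.0)
    n = cv2.subtract(f, g)   # noise estimate f - g (clipped at 0 for uint8 input)
    return g, n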
Preferably, the network building module is used for building a machine learning model;
the machine learning model comprises an input layer, a first convolutional layer, a second convolutional layer, a third convolutional layer, a fourth convolutional layer, a fifth convolutional layer, a first CBAM attention module, a second CBAM attention module, a third CBAM attention module, a fourth CBAM attention module, a fifth CBAM attention module, a first activation layer, a sixth convolutional layer, a seventh convolutional layer, an eighth convolutional layer, a first asymmetric convolutional layer, a second asymmetric convolutional layer, a third asymmetric convolutional layer, a BN layer, a second activation layer, a global average pooling layer and a full connection layer;
the image is input into the input layer of the machine learning model and passed through the input layer into the first convolutional layer for feature map extraction; the output image of the first convolutional layer is upsampled by a factor of 0.5 (i.e., downsampled to half resolution) and the sampling result is input into the second convolutional layer for feature map extraction; the output image of the second convolutional layer is upsampled by a factor of 0.5 and the sampling result is input into the third convolutional layer for feature map extraction; the output image of the third convolutional layer is upsampled by a factor of 0.5 and the sampling result is input into the fourth convolutional layer for feature map extraction; and the output image of the fourth convolutional layer is input into the fifth convolutional layer for feature map extraction;
a 1 × 1 convolution is applied to the feature map extracted by the fifth convolutional layer to change its dimension, giving feature map M5; a 1 × 1 convolution is applied to the feature map extracted by the fourth convolutional layer, giving feature map M4; a 1 × 1 convolution is applied to the feature map extracted by the third convolutional layer, giving feature map M3; a 1 × 1 convolution is applied to the feature map extracted by the second convolutional layer, giving feature map M2;
feature map M5 is upsampled by a factor of 2 and the sampling result is merged into feature map M4; M4 is upsampled by a factor of 2 and the result merged into M3; M3 is upsampled by a factor of 2 and the result merged into M2;
a 3 × 3 convolution is applied to M5 to obtain feature map P5; a 3 × 3 convolution is applied to the merged feature map M4 to obtain feature map P4; a 3 × 3 convolution is applied to the merged feature map M3 to obtain feature map P3; a 3 × 3 convolution is applied to the merged feature map M2 to obtain feature map P2;
feature map P5 is upsampled by a factor of 0.5 (downsampled) to obtain feature map P6;
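A sketch of this pyramid stage in PyTorch follows; the channel widths and the element-wise addition used for the top-down merge are assumptions, since the patent does not specify them:

import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidNeck(nn.Module):
    # Builds M2..M5 with 1x1 lateral convolutions, merges them top-down with
    # 2x upsampling, smooths them with 3x3 convolutions into P2..P5, and
    # derives P6 from P5 by halving the resolution.
    def __init__(self, in_channels=(64, 128, 256, 512), out_channels=256):
        super().__init__()
        self.lateral = nn.ModuleList(
            nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels)
        self.smooth = nn.ModuleList(
            nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)
            for _ in in_channels)

    def forward(self, c2, c3, c4, c5):
        m5 = self.lateral[3](c5)
        m4 = self.lateral[2](c4) + F.interpolate(m5, scale_factor=2)
        m3 = self.lateral[1](c3) + F.interpolate(m4, scale_factor=2)
        m2 = self.lateral[0](c2) + F.interpolate(m3, scale_factor=2)
        p5, p4 = self.smooth[3](m5), self.smooth[2](m4)
        p3, p2 = self.smooth[1](m3), self.smooth[0](m2)
        p6 = F.interpolate(p5, scale_factor=0.5)    # downsample P5 -> P6
        return p2, p3, p4, p5, p6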
feature map P2 is passed through a first activation layer, a first depthwise separable convolutional layer and a first BN layer, and the output of the first BN layer serves as the input of the first CBAM attention module;
feature map P3 is passed through a second activation layer, a second depthwise separable convolutional layer and a second BN layer, and the output of the second BN layer serves as the input of the second CBAM attention module;
feature map P4 is passed through a third activation layer, a third depthwise separable convolutional layer and a third BN layer, and the output of the third BN layer serves as the input of the third CBAM attention module;
feature map P5 is passed through a fourth activation layer, a fourth depthwise separable convolutional layer and a fourth BN layer, and the output of the fourth BN layer serves as the input of the fourth CBAM attention module;
feature map P6 is passed through a fifth activation layer, a fifth depthwise separable convolutional layer and a fifth BN layer, and the output of the fifth BN layer serves as the input of the fifth CBAM attention module;
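One per-level branch can be sketched as below; the activation type is an assumption, and cbam stands for any module implementing CBAM channel-plus-spatial attention:

import torch.nn as nn

class LevelHead(nn.Module):
    # activation -> depthwise separable convolution -> BN -> CBAM attention,
    # as described for each of P2..P6
    def __init__(self, channels, cbam):
        super().__init__()
        self.act = nn.ReLU(inplace=True)                        # assumed activation
        self.depthwise = nn.Conv2d(channels, channels, 3,
                                   padding=1, groups=channels)  # depthwise step
        self.pointwise = nn.Conv2d(channels, channels, 1)       # pointwise step
        self.bn = nn.BatchNorm2d(channels)
        self.cbam = cbam

    def forward(self, x):
        x = self.pointwise(self.depthwise(self.act(x)))
        return self.cbam(self.bn(x))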
fusing the output of the first CBAM attention module, the output of the second CBAM attention module, the output of the third CBAM attention module, the output of the fourth CBAM attention module and the output of the fifth CBAM attention module, and inputting the fused image features into a sixth activation layer;
taking the output of the sixth active layer as the input of the sixth convolutional layer;
taking the output of the sixth active layer as the input of the seventh convolutional layer;
taking the output of the sixth active layer as the input of the eighth convolutional layer;
the output of the sixth convolutional layer is used as the input of the first asymmetric convolutional layer;
the output of the seventh convolutional layer is used as the input of the second asymmetric convolutional layer;
the output of the eighth convolutional layer is used as the input of the third asymmetric convolutional layer;
the Euclidean distance Distance(a, b)₁₂ between the image feature vector output by the first asymmetric convolutional layer and the image feature vector output by the second asymmetric convolutional layer is calculated;
the Euclidean distance Distance(a, b)₂₃ between the image feature vector output by the first asymmetric convolutional layer and the image feature vector output by the third asymmetric convolutional layer is calculated;
the Euclidean distance Distance(a, b)₁₃ between the image feature vector output by the second asymmetric convolutional layer and the image feature vector output by the third asymmetric convolutional layer is calculated;
a first weight expression (given only as a formula image in the original) is taken as the weight of the image feature vector output by the first asymmetric convolutional layer;
a second weight expression (formula image) is taken as the weight of the image feature vector output by the second asymmetric convolutional layer;
a third weight expression (formula image) is taken as the weight of the image feature vector output by the third asymmetric convolutional layer;
(the weighted-fusion formula itself also appears only as an image in the original;)
the output of the first asymmetric convolutional layer, the output of the second asymmetric convolutional layer and the output of the third asymmetric convolutional layer are fused according to these weights; the fused image features are input into a sixth BN layer, the output of the sixth BN layer is input into a seventh activation layer, the output of the seventh activation layer is input into a global average pooling layer, the output of the global average pooling layer is input into a fully connected layer, and the fully connected layer outputs the feature vector.
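Since the weight formulas survive only as images, the sketch below shows one plausible reading of the distance-weighted fusion: each branch is weighted by its summed distance to the other two, normalized to sum to 1. The pair labels follow the natural ordering (the source's subscript labels appear inconsistent in this rendering):

import torch

def fuse_branches(a1, a2, a3):
    # Pairwise Euclidean distances between the three branch feature vectors
    d12 = torch.dist(a1, a2)
    d13 = torch.dist(a1, a3)
    d23 = torch.dist(a2, a3)
    raw = torch.stack([d12 + d13, d12 + d23, d13 + d23])
    w = raw / raw.sum()       # normalized weights -- an assumed scheme
    return w[0] * a1 + w[1] * a2 + w[2] * a3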
Preferably, the expression of euclidean distance is:
Distance(a, b) = √( Σᵢ (aᵢ − bᵢ)² )
in the formula, aᵢ and bᵢ are elements of the image feature vectors, and i indexes the elements.
The outputs of all convolutional layers are L2-normalized, and an L2 regularization coefficient of 0.002 is used.
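The two L2 operations named here can be sketched as follows; using the optimizer's weight_decay as the L2 regularization penalty is a standard stand-in, not necessarily the patent's exact mechanism:

import torch
import torch.nn.functional as F

feat = torch.randn(8, 256)
feat = F.normalize(feat, p=2, dim=1)   # L2-normalize a layer's flattened output

conv = torch.nn.Conv2d(3, 16, kernel_size=3)
opt = torch.optim.SGD(conv.parameters(), lr=0.01,
                      weight_decay=0.002)  # L2 coefficient as stated here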
Preferably, the training module is configured to train the machine learning model according to the preprocessed information obtained by the preprocessing module, so as to obtain a trained machine learning model; the specific process is as follows:
the training set of denoised crop growth image information obtained by the preprocessing module is input into the machine learning model until the model converges, yielding the trained machine learning model.
Preferably, the wide area network communication module is configured to transmit the to-be-processed crop growth image information acquired by the image data acquisition module to the machine learning module; the specific process is as follows:
acquiring communication signal intensity corresponding to each wide area network communication module;
the average of the communication signal strengths of all wide area network communication modules is computed, and the wide area network communication modules whose communication signal strength exceeds this average are taken as candidate modules;
the candidate module with the greatest communication signal strength is taken as the finally selected wide area network communication module, and this module transmits the crop growth image information to be processed, acquired by the image data acquisition module, to the machine learning module.
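A sketch of this two-step selection rule, with signal_strengths assumed to map module identifiers to a measured signal strength (e.g., RSSI):

from statistics import mean

def pick_wan_module(signal_strengths):
    # Keep the modules whose strength exceeds the mean of all modules,
    # then take the strongest of those candidates.
    avg = mean(signal_strengths.values())
    candidates = {m: s for m, s in signal_strengths.items() if s > avg}
    return max(candidates, key=candidates.get)

# Example: pick_wan_module({"gw1": -91, "gw2": -74, "gw3": -83}) returns "gw2"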
Preferably, the image information transmission mode of the wan communication module is a single-hop communication mode or a multi-hop communication mode.
Preferably, when the communication mode is a single-hop communication mode, the image information data is stored in the data storage area;
when the communication mode is a multi-hop communication mode, judging whether a target address frame of image information data is the same as the address of the current communication node, and if not, sending the image information data to the next communication node; and if so, storing the image information data into a data storage area of the current communication node.
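The store-or-forward rule can be sketched as below; the frame fields and Node attributes are illustrative, not taken from the patent:

class Node:
    def __init__(self, addr, next_hop=None):
        self.addr, self.next_hop, self.storage = addr, next_hop, []

    def handle(self, frame):
        # Single-hop frames are stored directly; multi-hop frames are stored
        # only when the target address matches this node, else forwarded.
        if frame["mode"] == "single_hop" or frame["target_addr"] == self.addr:
            self.storage.append(frame["payload"])
        else:
            self.next_hop.handle(frame)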
A method for transmitting crop growth data in a low power consumption wide area network, the method being used to implement a system for transmitting crop growth data in a low power consumption wide area network according to any one of embodiments one to nine.
The invention has the beneficial effects that:
the invention provides a crop growth data transmission system and method in a low-power-consumption wide area network, which detects the growth condition of a shot crop growth image in a remote mountain area through the wide area network: acquiring communication signal intensity corresponding to each wide area network communication module; the average value of the communication signal intensity of all the wide area network communication modules is taken, and the wide area network communication module with the communication signal intensity larger than the average value is taken as the wide area network communication module to be selected; selecting the wide area network communication module with the highest communication signal intensity in the wide area network communication modules to be selected as a finally determined wide area network communication module, and transmitting the to-be-processed remote mountain crop growth image information acquired by the image data acquisition module to the machine learning module by using the finally determined wide area network communication module;
the invention provides a system and a method for transmitting crop growth data in remote mountainous areas in a low-power-consumption wide area network, which are used for establishing a machine learning model, extracting a multi-scale target characteristic and fusing characteristics of different scales to improve the detection accuracy.
The invention provides a crop growth data transmission system in a low-power-consumption wide area network which establishes a machine learning model, extracts features by multi-branch convolution, applies an attention mechanism to perform weighted feature analysis over spatial and channel information, and finally fuses all features; effective features are thereby extracted accurately and information loss is reduced;
aiming at the problem that the parameter quantity of a network model is larger and larger in recent years, the invention constructs a lightweight model with lower parameter quantity, the parameter quantity of the model is reduced by using the depth separable convolution and the asymmetric convolution, and the network model can more fully excavate the key information of the image under lower complexity, obtain the accurate classification of the image and improve the image classification accuracy.
The method extracts deep features by alternating depthwise separable convolution with ordinary convolution; the extracted effective information is sent to the attention modules to obtain new features, which are fused and input into an activation layer; the output of the activation layer serves as the input of three convolutional layers, whose outputs serve as the inputs of three asymmetric convolutional layers; and the outputs of the three asymmetric convolutional layers are fused according to weights. The asymmetric convolutional layers reduce the amount of computation while effective features are extracted accurately, so judgments are accurate and image classification accuracy is improved;
according to the crop growth image information denoising method, denoising processing is carried out on the crop growth image information acquired by the image data acquisition module, a denoised image is acquired, and the image classification accuracy is improved.
In the invention, the image is input into the input layer of the machine learning model and passed into the first convolutional layer for feature map extraction; the output of each of the first, second and third convolutional layers is upsampled by a factor of 0.5 (downsampled) and fed into the next convolutional layer, and the output of the fourth convolutional layer is input into the fifth convolutional layer for feature map extraction. A 1 × 1 convolution is applied to the feature maps extracted by the fifth, fourth, third and second convolutional layers to change their dimensions, giving feature maps M5, M4, M3 and M2; M5 is upsampled by a factor of 2 and merged into M4, M4 is upsampled by a factor of 2 and merged into M3, and M3 is upsampled by a factor of 2 and merged into M2; 3 × 3 convolutions applied to M5 and to the merged M4, M3 and M2 give feature maps P5, P4, P3 and P2; and P5 is upsampled by a factor of 0.5 (downsampled) to give feature map P6. These operations reduce pixel correlation and improve the accuracy of feature extraction;
in order to construct a lightweight network, the invention adopts a mode of combining deep separable convolution and common convolution to relieve the problems of large parameter quantity and low training speed of a model, abandons the traditional mode of directly and linearly stacking a plurality of large convolutions, applies asymmetric convolution, and greatly reduces the parameter quantity compared with the traditional n multiplied by n common convolution. The convergence rate of the model is accelerated by BN layer treatment; in addition, in order to prevent the phenomenon of overfitting during training, an L2 regularization penalty is added to the weight of the convolutional layer, and the penalty coefficient is 0.0005.
The method acquires the communication signal strength corresponding to each wide area network communication module, computes the average of all modules' communication signal strengths, and takes the modules whose strength exceeds the average as candidates. Using the average avoids misjudging a module's communication signal strength, and selecting the candidates above the average avoids missing suitable modules, improving the efficiency and accuracy of image data transmission.
Drawings
FIG. 1 is a diagram of a model framework of the present invention.
Detailed Description
Embodiment one: the crop growth data transmission system in the low-power-consumption wide area network of this embodiment comprises an image data acquisition module, a preprocessing module, a network building module, a training module, a wide area network communication module and a machine learning module;
the image data acquisition module is used for acquiring crop growth image information;
the preprocessing module is used for preprocessing the information acquired by the image data acquisition module;
the network building module is used for building a machine learning model;
the training module is used for training the machine learning model according to the preprocessed information acquired by the preprocessing module to acquire the trained machine learning model;
the wide area network communication module is used for transmitting the crop growth image information to be processed, which is acquired by the image data acquisition module, to the machine learning module;
and the machine learning module is used for inputting crop growth image information to be processed into the trained machine learning model to obtain a recognition result.
Embodiment two: this embodiment differs from embodiment one in that the image data acquisition module is used for acquiring image information of crop growth; the specific process is as follows:
whether boundary value information exists for the images shot from different viewing angles is judged; if boundary value information exists for all images, the images are spliced according to the boundary value information and restored into a continuous image; if boundary value information is missing for one or more images, the images lacking it are processed with an image processing algorithm so that boundary value information exists for all images, after which the spliced continuous image is obtained;
the image processing algorithm comprises the following specific processes:
the features and key points of the boundary images are detected and extracted, the feature vectors are matched and a homography matrix is estimated using the RANSAC algorithm, and the boundary information of the images lacking boundary value information is thereby determined.
Other steps and parameters are the same as those in the first embodiment.
Embodiment three: this embodiment differs from embodiments one and two in that the preprocessing module is used for preprocessing the information acquired by the image data acquisition module; the specific process is as follows:
de-noising the crop growth image information acquired by the image data acquisition module;
g(x, y) = f(x, y) − n(x, y)
in the formula, f(x, y) represents the crop growth image signal acquired by the image data acquisition module, g(x, y) represents the image signal after denoising, and n(x, y) represents the noise.
Denoising removes the noise and yields the denoised image.
Other steps and parameters are the same as those in the first or second embodiment.
Embodiment four: this embodiment differs from embodiments one to three in that the network building module is used for building a machine learning model;
the machine learning model comprises an input layer, a first convolutional layer, a second convolutional layer, a third convolutional layer, a fourth convolutional layer, a fifth convolutional layer, a first CBAM attention module, a second CBAM attention module, a third CBAM attention module, a fourth CBAM attention module, a fifth CBAM attention module, a first activation layer, a sixth convolutional layer, a seventh convolutional layer, an eighth convolutional layer, a first asymmetric convolutional layer, a second asymmetric convolutional layer, a third asymmetric convolutional layer, a BN layer, a second activation layer, a global average pooling layer and a full connection layer;
the image is input into the input layer of the machine learning model and passed through the input layer into the first convolutional layer for feature map extraction; the output image of the first convolutional layer is upsampled by a factor of 0.5 (i.e., downsampled to half resolution) and the sampling result is input into the second convolutional layer for feature map extraction; the output image of the second convolutional layer is upsampled by a factor of 0.5 and the sampling result is input into the third convolutional layer for feature map extraction; the output image of the third convolutional layer is upsampled by a factor of 0.5 and the sampling result is input into the fourth convolutional layer for feature map extraction; and the output image of the fourth convolutional layer is input into the fifth convolutional layer for feature map extraction;
a 1 × 1 convolution is applied to the feature map extracted by the fifth convolutional layer to change its dimension, giving feature map M5; a 1 × 1 convolution is applied to the feature map extracted by the fourth convolutional layer, giving feature map M4; a 1 × 1 convolution is applied to the feature map extracted by the third convolutional layer, giving feature map M3; a 1 × 1 convolution is applied to the feature map extracted by the second convolutional layer, giving feature map M2;
feature map M5 is upsampled by a factor of 2 and the sampling result is merged into feature map M4; M4 is upsampled by a factor of 2 and the result merged into M3; M3 is upsampled by a factor of 2 and the result merged into M2;
a 3 × 3 convolution is applied to M5 to obtain feature map P5; a 3 × 3 convolution is applied to the merged feature map M4 to obtain feature map P4; a 3 × 3 convolution is applied to the merged feature map M3 to obtain feature map P3; a 3 × 3 convolution is applied to the merged feature map M2 to obtain feature map P2;
feature map P5 is upsampled by a factor of 0.5 (downsampled) to obtain feature map P6;
feature map P2 is passed through a first activation layer, a first depthwise separable convolutional layer and a first BN layer, and the output of the first BN layer serves as the input of the first CBAM attention module;
feature map P3 is passed through a second activation layer, a second depthwise separable convolutional layer and a second BN layer, and the output of the second BN layer serves as the input of the second CBAM attention module;
feature map P4 is passed through a third activation layer, a third depthwise separable convolutional layer and a third BN layer, and the output of the third BN layer serves as the input of the third CBAM attention module;
feature map P5 is passed through a fourth activation layer, a fourth depthwise separable convolutional layer and a fourth BN layer, and the output of the fourth BN layer serves as the input of the fourth CBAM attention module;
feature map P6 is passed through a fifth activation layer, a fifth depthwise separable convolutional layer and a fifth BN layer, and the output of the fifth BN layer serves as the input of the fifth CBAM attention module;
fusing the output of the first CBAM attention module, the output of the second CBAM attention module, the output of the third CBAM attention module, the output of the fourth CBAM attention module and the output of the fifth CBAM attention module, and inputting the fused image features into a sixth activation layer;
taking the output of the sixth active layer as the input of the sixth convolutional layer;
taking the output of the sixth active layer as the input of the seventh convolutional layer;
taking the output of the sixth active layer as the input of the eighth convolutional layer;
the output of the sixth convolutional layer is used as the input of the first asymmetric convolutional layer;
the output of the seventh convolutional layer is used as the input of the second asymmetric convolutional layer;
the output of the eighth convolutional layer is used as the input of the third asymmetric convolutional layer;
the Euclidean distance Distance(a, b)₁₂ between the image feature vector output by the first asymmetric convolutional layer and the image feature vector output by the second asymmetric convolutional layer is calculated;
the Euclidean distance Distance(a, b)₂₃ between the image feature vector output by the first asymmetric convolutional layer and the image feature vector output by the third asymmetric convolutional layer is calculated;
the Euclidean distance Distance(a, b)₁₃ between the image feature vector output by the second asymmetric convolutional layer and the image feature vector output by the third asymmetric convolutional layer is calculated;
a first weight expression (given only as a formula image in the original) is taken as the weight of the image feature vector output by the first asymmetric convolutional layer;
a second weight expression (formula image) is taken as the weight of the image feature vector output by the second asymmetric convolutional layer;
a third weight expression (formula image) is taken as the weight of the image feature vector output by the third asymmetric convolutional layer;
(the weighted-fusion formula itself also appears only as an image in the original;)
the output of the first asymmetric convolutional layer, the output of the second asymmetric convolutional layer and the output of the third asymmetric convolutional layer are fused according to these weights; the fused image features are input into a sixth BN layer, the output of the sixth BN layer is input into a seventh activation layer, the output of the seventh activation layer is input into a global average pooling layer, the output of the global average pooling layer is input into a fully connected layer, and the fully connected layer outputs the feature vector.
Other steps and parameters are the same as those in one of the first to third embodiments.
Embodiment five: this embodiment differs from embodiments one to four in that the expression of the Euclidean distance is:
Distance(a, b) = √( Σᵢ (aᵢ − bᵢ)² )
in the formula, aᵢ and bᵢ are elements of the image feature vectors, and i indexes the elements;
the convolutional layers were all output normalized by L2, with a L2 regularization coefficient of 0.002.
Other steps and parameters are the same as in one of the first to fourth embodiments.
Embodiment six: this embodiment differs from embodiments one to five in that the training module is used for training the machine learning model according to the preprocessed information acquired by the preprocessing module to obtain the trained machine learning model; the specific process is as follows:
the training set of denoised crop growth image information obtained by the preprocessing module is input into the machine learning model until the model converges, yielding the trained machine learning model.
Other steps and parameters are the same as those in one of the first to fifth embodiments.
Embodiment seven: this embodiment differs from embodiments one to six in that the wide area network communication module is used for transmitting the crop growth image information to be processed, acquired by the image data acquisition module, to the machine learning module; the specific process is as follows:
acquiring communication signal intensity corresponding to each wide area network communication module;
the average of the communication signal strengths of all wide area network communication modules is computed, and the wide area network communication modules whose communication signal strength exceeds this average are taken as candidate modules;
the candidate module with the greatest communication signal strength is taken as the finally selected wide area network communication module, and this module transmits the crop growth image information to be processed, acquired by the image data acquisition module, to the machine learning module.
Other steps and parameters are the same as those in one of the first to sixth embodiments.
Embodiment eight: this embodiment differs from embodiments one to seven in that the image information transmission mode of the wide area network communication module is a single-hop communication mode or a multi-hop communication mode.
Other steps and parameters are the same as those in one of the first to seventh embodiments.
Embodiment nine: this embodiment differs from embodiments one to eight in that, when the communication mode is a single-hop communication mode, the image information data is stored in the data storage area;
when the communication mode is a multi-hop communication mode, judging whether a target address frame of image information data is the same as the address of the current communication node, and if not, sending the image information data to the next communication node; and if so, storing the image information data into a data storage area of the current communication node.
Other steps and parameters are the same as in embodiments one to eight.
Embodiment ten: this embodiment relates to a crop growth data transmission method in a low-power-consumption wide area network, used to implement the crop growth data transmission system in a low-power-consumption wide area network according to any one of embodiments one to nine.
The present invention is capable of other embodiments and its several details are capable of modifications in various obvious respects, all without departing from the spirit and scope of the present invention.

Claims (10)

1. A crop growth data transmission system in a low-power-consumption wide area network, characterized in that: the system comprises an image data acquisition module, a preprocessing module, a network building module, a training module, a wide area network communication module and a machine learning module;
the image data acquisition module is used for acquiring crop growth image information;
the preprocessing module is used for preprocessing the information acquired by the image data acquisition module;
the network building module is used for building a machine learning model;
the training module is used for training the machine learning model according to the preprocessed information acquired by the preprocessing module to acquire the trained machine learning model;
the wide area network communication module is used for transmitting the crop growth image information to be processed, which is acquired by the image data acquisition module, to the machine learning module;
and the machine learning module is used for inputting crop growth image information to be processed into the trained machine learning model to obtain a recognition result.
2. The system according to claim 1, characterized in that: the image data acquisition module is used for acquiring image information of crop growth; the specific process is as follows:
whether boundary value information exists for the images shot from different viewing angles is judged; if boundary value information exists for all images, the images are spliced according to the boundary value information and restored into a continuous image; if boundary value information is missing for one or more images, the images lacking it are processed with an image processing algorithm so that boundary value information exists for all images, after which the spliced continuous image is obtained;
the image processing algorithm comprises the following specific processes:
the features and key points of the boundary images are detected and extracted, the feature vectors are matched and a homography matrix is estimated using the RANSAC algorithm, and the boundary information of the images lacking boundary value information is thereby determined.
3. The system according to claim 2, characterized in that: the preprocessing module is used for preprocessing the information acquired by the image data acquisition module; the specific process is as follows:
de-noising the crop growth image information acquired by the image data acquisition module;
g(x, y) = f(x, y) − n(x, y)
in the formula, f(x, y) represents the crop growth image signal acquired by the image data acquisition module, g(x, y) represents the image signal after denoising, and n(x, y) represents the noise.
4. The system according to claim 3, characterized in that: the network building module is used for building a machine learning model;
the machine learning model comprises an input layer, a first convolutional layer, a second convolutional layer, a third convolutional layer, a fourth convolutional layer, a fifth convolutional layer, a first CBAM attention module, a second CBAM attention module, a third CBAM attention module, a fourth CBAM attention module, a fifth CBAM attention module, a first activation layer, a sixth convolutional layer, a seventh convolutional layer, an eighth convolutional layer, a first asymmetric convolutional layer, a second asymmetric convolutional layer, a third asymmetric convolutional layer, a BN layer, a second activation layer, a global average pooling layer and a full connection layer;
the image is input into the input layer of the machine learning model and passed through the input layer into the first convolutional layer for feature map extraction; the output image of the first convolutional layer is upsampled by a factor of 0.5 (i.e., downsampled to half resolution) and the sampling result is input into the second convolutional layer for feature map extraction; the output image of the second convolutional layer is upsampled by a factor of 0.5 and the sampling result is input into the third convolutional layer for feature map extraction; the output image of the third convolutional layer is upsampled by a factor of 0.5 and the sampling result is input into the fourth convolutional layer for feature map extraction; and the output image of the fourth convolutional layer is input into the fifth convolutional layer for feature map extraction;
a 1 × 1 convolution is applied to the feature map extracted by the fifth convolutional layer to change its dimension, giving feature map M5; a 1 × 1 convolution is applied to the feature map extracted by the fourth convolutional layer, giving feature map M4; a 1 × 1 convolution is applied to the feature map extracted by the third convolutional layer, giving feature map M3; a 1 × 1 convolution is applied to the feature map extracted by the second convolutional layer, giving feature map M2;
feature map M5 is upsampled by a factor of 2 and the sampling result is merged into feature map M4; M4 is upsampled by a factor of 2 and the result merged into M3; M3 is upsampled by a factor of 2 and the result merged into M2;
a 3 × 3 convolution is applied to M5 to obtain feature map P5; a 3 × 3 convolution is applied to the merged feature map M4 to obtain feature map P4; a 3 × 3 convolution is applied to the merged feature map M3 to obtain feature map P3; a 3 × 3 convolution is applied to the merged feature map M2 to obtain feature map P2;
feature map P5 is upsampled by a factor of 0.5 (downsampled) to obtain feature map P6;
feature map P2 is passed through a first activation layer, a first depthwise separable convolutional layer and a first BN layer, and the output of the first BN layer serves as the input of the first CBAM attention module;
feature map P3 is passed through a second activation layer, a second depthwise separable convolutional layer and a second BN layer, and the output of the second BN layer serves as the input of the second CBAM attention module;
feature map P4 is passed through a third activation layer, a third depthwise separable convolutional layer and a third BN layer, and the output of the third BN layer serves as the input of the third CBAM attention module;
feature map P5 is passed through a fourth activation layer, a fourth depthwise separable convolutional layer and a fourth BN layer, and the output of the fourth BN layer serves as the input of the fourth CBAM attention module;
feature map P6 is passed through a fifth activation layer, a fifth depthwise separable convolutional layer and a fifth BN layer, and the output of the fifth BN layer serves as the input of the fifth CBAM attention module;
fusing the output of the first CBAM attention module, the output of the second CBAM attention module, the output of the third CBAM attention module, the output of the fourth CBAM attention module and the output of the fifth CBAM attention module, and inputting the fused image features into a sixth activation layer;
taking the output of the sixth active layer as the input of the sixth convolutional layer;
taking the output of the sixth active layer as the input of the seventh convolutional layer;
taking the output of the sixth active layer as the input of the eighth convolutional layer;
the output of the sixth convolutional layer is used as the input of the first asymmetric convolutional layer;
the output of the seventh convolutional layer is used as the input of the second asymmetric convolutional layer;
the output of the eighth convolutional layer is used as the input of the third asymmetric convolutional layer;
the Euclidean distance Distance(a, b)₁₂ between the image feature vector output by the first asymmetric convolutional layer and the image feature vector output by the second asymmetric convolutional layer is calculated;
the Euclidean distance Distance(a, b)₂₃ between the image feature vector output by the first asymmetric convolutional layer and the image feature vector output by the third asymmetric convolutional layer is calculated;
the Euclidean distance Distance(a, b)₁₃ between the image feature vector output by the second asymmetric convolutional layer and the image feature vector output by the third asymmetric convolutional layer is calculated;
a first weight expression (given only as a formula image in the original) is taken as the weight of the image feature vector output by the first asymmetric convolutional layer;
a second weight expression (formula image) is taken as the weight of the image feature vector output by the second asymmetric convolutional layer;
a third weight expression (formula image) is taken as the weight of the image feature vector output by the third asymmetric convolutional layer;
(the weighted-fusion formula itself also appears only as an image in the original;)
the output of the first asymmetric convolutional layer, the output of the second asymmetric convolutional layer and the output of the third asymmetric convolutional layer are fused according to these weights; the fused image features are input into a sixth BN layer, the output of the sixth BN layer is input into a seventh activation layer, the output of the seventh activation layer is input into a global average pooling layer, the output of the global average pooling layer is input into a fully connected layer, and the fully connected layer outputs the feature vector.
5. The system according to claim 4, characterized in that: the expression of the Euclidean distance is:
Distance(a, b) = √( Σᵢ (aᵢ − bᵢ)² )
in the formula, aᵢ and bᵢ are elements of the image feature vectors, and i indexes the elements;
the convolutional layers were all output normalized by L2, with a L2 regularization coefficient of 0.002.
6. The system according to claim 5, characterized in that: the training module is used for training the machine learning model according to the preprocessed information acquired by the preprocessing module to obtain the trained machine learning model; the specific process is as follows:
the training set of denoised crop growth image information obtained by the preprocessing module is input into the machine learning model until the model converges, yielding the trained machine learning model.
7. The system according to claim 6, characterized in that: the wide area network communication module is used for transmitting the crop growth image information to be processed, acquired by the image data acquisition module, to the machine learning module; the specific process is as follows:
acquiring communication signal intensity corresponding to each wide area network communication module;
the average of the communication signal strengths of all wide area network communication modules is computed, and the wide area network communication modules whose communication signal strength exceeds this average are taken as candidate modules;
the candidate module with the greatest communication signal strength is taken as the finally selected wide area network communication module, and this module transmits the crop growth image information to be processed, acquired by the image data acquisition module, to the machine learning module.
8. The system according to claim 7, characterized in that: the image information transmission mode of the wide area network communication module is a single-hop communication mode or a multi-hop communication mode.
9. The system according to claim 8, characterized in that: when the communication mode is a single-hop communication mode, the image information data is stored in a data storage area;
when the communication mode is a multi-hop communication mode, judging whether a target address frame of image information data is the same as the address of the current communication node, and if not, sending the image information data to the next communication node; and if so, storing the image information data into a data storage area of the current communication node.
10. A crop growth data transmission method in a low-power-consumption wide area network, characterized in that: the method is used to implement the crop growth data transmission system in a low-power-consumption wide area network according to any one of the first to ninth embodiments.
CN202111083814.2A 2021-09-14 2021-09-14 Crop growth data transmission system and method in low-power-consumption wide area network Active CN113792744B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111083814.2A CN113792744B (en) 2021-09-14 2021-09-14 Crop growth data transmission system and method in low-power-consumption wide area network


Publications (2)

Publication Number Publication Date
CN113792744A true CN113792744A (en) 2021-12-14
CN113792744B CN113792744B (en) 2023-09-05

Family

ID=78878562

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111083814.2A Active CN113792744B (en) 2021-09-14 2021-09-14 Crop growth data transmission system and method in low-power-consumption wide area network

Country Status (1)

Country Link
CN (1) CN113792744B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109299783A (en) * 2018-12-18 2019-02-01 哈尔滨工业大学 Public sentiment role based on isomery domain migration identifies migratory system
CN110839519A (en) * 2019-12-02 2020-02-28 成都信息工程大学 Internet of things intelligent agricultural irrigation device and method based on deep learning
CN111144168A (en) * 2018-11-02 2020-05-12 阿里巴巴集团控股有限公司 Crop growth cycle identification method, equipment and system
CN112288739A (en) * 2020-11-20 2021-01-29 哈尔滨工业大学 Vein segmentation method based on deep learning
CN112330681A (en) * 2020-11-06 2021-02-05 北京工业大学 Attention mechanism-based lightweight network real-time semantic segmentation method
JP6830707B1 (en) * 2020-01-23 2021-02-17 同▲済▼大学 Person re-identification method that combines random batch mask and multi-scale expression learning
CN112733663A (en) * 2020-12-29 2021-04-30 山西大学 Image recognition-based student attention detection method
CN112788686A (en) * 2020-12-30 2021-05-11 浙江华消科技有限公司 Channel selection method and device for LoRa equipment and electronic device
CN112861978A (en) * 2021-02-20 2021-05-28 齐齐哈尔大学 Multi-branch feature fusion remote sensing scene image classification method based on attention mechanism
CN112996136A (en) * 2021-04-26 2021-06-18 深圳市汇顶科技股份有限公司 Data transmission method of NB-IOT (network B-Internet of things) terminal, NB-IOT chip, device and communication system
CN113221839A (en) * 2021-06-02 2021-08-06 哈尔滨市科佳通用机电股份有限公司 Automatic truck image identification method and system
CN113361494A (en) * 2021-07-22 2021-09-07 金泰(深圳)科技文化有限公司 Self-service method and self-service system based on face recognition


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Jia Jun: "Design and Implementation of a Crop Growth Information Acquisition and Transmission System Based on NB-IoT", China Masters' Theses Full-text Database (Agricultural Science and Technology), pages 1-70 *
Bao Lie; Wang Mantao; Liu Jiangchuan; Peng Zhen; Peng Shuaibo: "A Method for Identifying Common Turtle Diseases Based on the SSD Object Detection Framework", Journal of Shenyang Agricultural University, no. 02 *

Also Published As

Publication number Publication date
CN113792744B (en) 2023-09-05

Similar Documents

Publication Publication Date Title
Angin et al. Agrilora: a digital twin framework for smart agriculture.
CN114818996B (en) Method and system for diagnosing mechanical fault based on federal domain generalization
CN111127423B (en) Rice pest and disease identification method based on CNN-BP neural network algorithm
CN112749663B (en) Agricultural fruit maturity detection system based on Internet of things and CCNN model
CN110659684A (en) Convolutional neural network-based STBC signal identification method
CN116257750A (en) Radio frequency fingerprint identification method based on sample enhancement and deep learning
CN114972208A (en) YOLOv 4-based lightweight wheat scab detection method
CN114298086A (en) STBC-OFDM signal blind identification method and device based on deep learning and fourth-order lag moment spectrum
CN113837191A (en) Cross-satellite remote sensing image semantic segmentation method based on bidirectional unsupervised domain adaptive fusion
CN113792744A (en) Crop growth data transmission system and method in low-power-consumption wide area network
CN116566777B (en) Frequency hopping signal modulation identification method based on graph convolution neural network
CN114519402B (en) Citrus disease and insect pest detection method based on neural network
CN114422310B (en) Digital quadrature modulation signal identification method based on joint distribution matrix and multi-input neural network
CN115393608A (en) Target detection system and method based on nonlinear expansion rate convolution module
CN113936019A (en) Method for estimating field crop yield based on convolutional neural network technology
CN112818982B (en) Agricultural pest image detection method based on depth feature autocorrelation activation
CN114915526B (en) Communication signal modulation identification method, device and system
CN113763471A (en) Visual-based bullet hole detection method and system
Yin et al. Few-Shot Domain Adaption-Based Specific Emitter Identification Under Varying Modulation
CN112115970A (en) Lightweight image detection agricultural bird repelling method and system based on hierarchical regression
CN113343796A (en) Knowledge distillation-based radar signal modulation mode identification method
Jiyuan et al. Multi-modulation recognition using convolution gated recurrent unit networks
CN113705654B (en) FFPN model-based micro-seismic first-arrival intelligent pickup method, system, equipment and storage medium
CN116319195B (en) Millimeter wave and terahertz channel estimation method based on pruned convolutional neural network
CN116310391B (en) Identification method for tea diseases

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant