CN117351181A - Intelligent farmland monitoring and crop growth automatic control method and system
- Publication number: CN117351181A (application CN202311331561.5A)
- Authority: CN (China)
- Prior art keywords: sound, image, farmland, index, images
- Legal status: Withdrawn
Classifications
- G06V10/10—Image acquisition
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/60—Type of objects
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G10L25/51—Speech or voice analysis techniques specially adapted for comparison or discrimination
- G06V2201/07—Target detection
- Y02A40/10—Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture
Abstract
The invention provides an intelligent farmland monitoring and crop growth automatic control method and system, belonging to the technical field of farmland monitoring. First, farmland images are collected and divided to obtain small-area images; second, sound data of the region corresponding to each small-area image are acquired and processed to obtain sound category information; the small-area images are then detected to obtain detection results, from which indexes are calculated; next, the crops are analyzed according to the sound category information and the indexes to obtain an analysis result; finally, control operations are performed on the equipment according to the analysis result. The method reflects the condition of the farmland from different angles, detects targets in the farmland such as animals, crops, and pests and diseases, and calculates the related indexes; by analyzing these indexes it performs fine-grained control operations, which improves the yield and quality of crops, improves the efficiency and accuracy of farmland monitoring, reduces labor cost and error, and enables scientific, intelligent farmland management.
Description
Technical Field
The invention belongs to the technical field of farmland monitoring, and particularly relates to an intelligent farmland monitoring and crop growth automatic control method and system.
Background
Agriculture is the basis of human survival and development and an important pillar of the national economy. With population growth and shrinking resources, agricultural production faces more and more challenges, such as climate change, pests and diseases, soil erosion and environmental pollution. To improve the efficiency and quality of agricultural production, reduce its cost and risk, and ensure food security and ecological security, modern science and technology must be used for intelligent monitoring and management of farmland.
Traditional farmland monitoring and management rely mainly on manual observation, recording and operation, and have the following shortcomings: first, efficiency is low and labor cost is high, so large farmland areas cannot be covered; second, accuracy is poor and strongly affected by human factors, so problems cannot be found and handled in time; third, the degree of informatization is low and data support is lacking, so scientific decision-making and optimization are impossible.
To solve these problems, intelligent farmland monitoring and crop growth automatic control methods and systems based on the Internet of Things, cloud computing and artificial intelligence have recently been developed. By deploying sensors, controllers, cameras and other equipment in the field, they monitor environmental parameters, crop growth states, and pests and diseases in real time, transmit the data over a network to a cloud or local server for analysis and processing, and automatically control the farmland according to the analysis results, for example irrigation, fertilization and plant protection. These methods and systems improve the efficiency and accuracy of farmland monitoring and management to some extent, but still have the following shortcomings: first, data acquisition is incomplete, so a full view of the farmland cannot be obtained; second, data analysis is shallow, so the laws and influencing factors of crop growth cannot be revealed; third, the control operations are not intelligent, so fine-grained control according to the individual differences and needs of crops is impossible. A more accurate and intelligent method for automatic crop growth control and farmland monitoring is therefore needed.
Disclosure of Invention
Based on the technical problems, the invention provides an intelligent farmland monitoring and crop growth automatic control method and system, which realize omnibearing farmland monitoring and automatic crop growth management through image and sound data acquisition, processing, detection, analysis and identification.
The invention provides an intelligent farmland monitoring and crop growth automatic control method, which comprises the following steps:
step S1: collecting farmland images, and dividing the farmland images to obtain small-area images;
step S2: acquiring sound data of a region corresponding to the small region image, and processing the sound data to obtain sound category information;
step S3: detecting the small area image to obtain a detection result;
step S4: calculating an index according to the detection result;
step S5: analyzing crops according to the sound category information and the index to obtain an analysis result;
step S6: and performing control operation on the equipment according to the analysis result.
Optionally, the collecting the farmland image and dividing the farmland image to obtain a small area image specifically includes:
preprocessing the collected farmland image to obtain a preprocessed image; the preprocessing operation comprises noise removal, contrast enhancement, edge clipping and size scaling;
starting from the upper left corner of the preprocessed image, moving a cutting window according to a step length, filtering out non-farmland areas using a binary image, intercepting a small-area image, moving the cutting window by the step length again, and adjusting the position of the cutting window if it exceeds the boundary of the preprocessed image;
after all the small area images are intercepted, the small area images are saved.
Optionally, the acquiring the sound data of the area corresponding to the small area image, and processing the sound data to obtain sound category information specifically includes:
acquiring sound data of a region corresponding to the small region image;
denoising, dividing and enhancing the sound data to obtain sound signals;
sequentially transforming, compensating, analyzing and identifying the sound signals, wherein the transformation formula is as follows:
X(t) = x(t) + j·y(t)
where X(t) is the complex representation of the sound signal x(t), j is the imaginary unit, and y(t) is a nonlinear transformation of the sound signal x(t);
compensating the transformed sound signal, wherein the compensation formula is as follows:
x̂(t) = g(t) * Y(X(t))
where x̂(t) is the compensated (inverse-filtered) sound signal, g(t) is the impulse response of the inverse filter, * denotes convolution, and Y(X(t)) is the sound signal after passing through the propagation medium;
analyzing the compensated sound signal, wherein the analysis formulas are as follows:
A(t) = |X(t)|, φ(t) = arctan(y(t)/x(t)), f(t) = (1/2π)·dφ(t)/dt
where A(t) is the instantaneous amplitude of the sound signal, φ(t) is the instantaneous phase of the sound signal, and f(t) is the instantaneous frequency of the sound signal;
inputting the sound feature vector into a Bayesian model for recognition, wherein the recognition formula is as follows:
P(C|F,E) = P(F|C,E)·P(C|E) / P(F|E)
where F is the sound feature vector, comprising the instantaneous amplitude and instantaneous frequency; P(F|C,E) is the likelihood probability of the features given class C and environmental factors E; P(C|E) is the prior probability of class C given the environmental factors E; P(F|E) is the prior probability of the sound features F given the environmental factors E; and P(C|F,E) is the posterior probability that the sound feature vector F belongs to class C;
and selecting the category with the highest posterior probability as sound category information.
Optionally, the detecting the small area image to obtain a detection result specifically includes:
the detection results comprise animal detection results and crop detection results;
preprocessing the small-area image to obtain a standard small-area image; the preprocessing operation comprises the steps of adjusting contrast, removing shadows and reducing image blurring;
performing target detection on the standard small-area image by using a deep learning model, detecting a target and classifying the target to obtain the animal detection result; the animal detection result comprises animal size and animal species;
Performing crop detection on the standard small area image by using an image recognition model to obtain a crop detection result; the crop detection results comprise crop types and plant diseases and insect pests types.
Optionally, the calculating an index according to the detection result specifically includes:
the index includes a pest and disease index and a risk index;
the calculation formula of the pest and disease index is as follows:
BCH = (Σ(P_i·W_i) + Σ(Q_J·S_J·D_J)) × 100
where BCH is the pest and disease index, P_i is the pest and disease extent of the i-th part, W_i is the weight coefficient of the i-th part, Q_J is the number of animals of type J, S_J is the size of animals of type J, and D_J is the damage coefficient of animals of type J;
the calculation formula of the risk index is as follows:
WX = Σ(R_I·H_I)
where WX is the risk index, R_I is the number of pests and diseases of type I, and H_I is the risk coefficient of pests and diseases of type I.
The invention also provides an intelligent farmland monitoring and crop growth automatic control system, which comprises:
the image acquisition module is used for acquiring farmland images and dividing the farmland images to obtain small-area images;
the sound processing module is used for acquiring sound data of the area corresponding to the small area image, and processing the sound data to obtain sound category information;
The image detection module is used for detecting the small-area image to obtain a detection result;
the index calculation module is used for calculating an index according to the detection result;
the farmland analysis module is used for analyzing crops according to the sound type information and the indexes to obtain analysis results;
and the equipment control module is used for controlling and operating the equipment according to the analysis result.
Optionally, the image acquisition module specifically includes:
the preprocessing sub-module is used for preprocessing the collected farmland images to obtain preprocessed images; the preprocessing operation comprises noise removal, contrast enhancement, edge clipping and size scaling;
the area intercepting sub-module is used for moving a cutting window from the upper left corner of the preprocessed image according to the step length, filtering a non-farmland area by using a binary image, intercepting a small area image, moving the position of the cutting window according to the step length, and adjusting the position of the cutting window if the position of the cutting window exceeds the boundary of the preprocessed image;
and the image storage sub-module is used for storing the small area images after intercepting all the small area images.
Optionally, the sound processing module specifically includes:
The sound acquisition sub-module is used for acquiring sound data of the region corresponding to the small region image;
the sound processing sub-module is used for carrying out denoising, segmentation and enhancement on the sound data to obtain sound signals;
the signal operation sub-module is used for sequentially transforming, compensating, analyzing and identifying the sound signals, wherein the transformation formula is as follows:
X(t) = x(t) + j·y(t)
where X(t) is the complex representation of the sound signal x(t), j is the imaginary unit, and y(t) is a nonlinear transformation of the sound signal x(t);
compensating the transformed sound signal, wherein the compensation formula is as follows:
x̂(t) = g(t) * Y(X(t))
where x̂(t) is the compensated (inverse-filtered) sound signal, g(t) is the impulse response of the inverse filter, * denotes convolution, and Y(X(t)) is the sound signal after passing through the propagation medium;
analyzing the compensated sound signal, wherein the analysis formulas are as follows:
A(t) = |X(t)|, φ(t) = arctan(y(t)/x(t)), f(t) = (1/2π)·dφ(t)/dt
where A(t) is the instantaneous amplitude of the sound signal, φ(t) is the instantaneous phase of the sound signal, and f(t) is the instantaneous frequency of the sound signal;
inputting the sound feature vector into a Bayesian model for recognition, wherein the recognition formula is as follows:
P(C|F,E) = P(F|C,E)·P(C|E) / P(F|E)
where F is the sound feature vector, comprising the instantaneous amplitude and instantaneous frequency; P(F|C,E) is the likelihood probability of the features given class C and environmental factors E; P(C|E) is the prior probability of class C given the environmental factors E; P(F|E) is the prior probability of the sound features F given the environmental factors E; and P(C|F,E) is the posterior probability that the sound feature vector F belongs to class C;
And the class selection sub-module is used for selecting the class with the maximum posterior probability as sound class information.
Optionally, the image detection module specifically includes:
the small area processing sub-module is used for preprocessing the small area image to obtain a standard small area image;
the animal detection sub-module is used for detecting targets of the standard small-area images by using a deep learning model, detecting the targets and classifying the targets to obtain animal detection results;
and the crop detection sub-module is used for detecting crops in the standard small area image by using the image recognition model to obtain a crop detection result.
Optionally, the index calculating module specifically includes:
the calculation formula of the pest and disease index is as follows:
BCH = (Σ(P_i·W_i) + Σ(Q_J·S_J·D_J)) × 100
where BCH is the pest and disease index, P_i is the pest and disease extent of the i-th part, W_i is the weight coefficient of the i-th part, Q_J is the number of animals of type J, S_J is the size of animals of type J, and D_J is the damage coefficient of animals of type J;
the calculation formula of the risk index is as follows:
WX = Σ(R_I·H_I)
where WX is the risk index, R_I is the number of pests and diseases of type I, and H_I is the risk coefficient of pests and diseases of type I.
Compared with the prior art, the invention has the following beneficial effects:
By comprehensively using image and sound information, the invention monitors the state of the farmland more completely, finds problems quickly and improves monitoring efficiency; processing and analyzing sound data makes the monitoring more comprehensive and can detect information that image recognition cannot obtain, such as the sounds of pests; detection with deep learning and image recognition models reduces human misjudgment and improves monitoring accuracy; and performing control operations based on the analysis results enables automated management of crop growth, including pest control and growth optimization.
Drawings
FIG. 1 is a flow chart of an intelligent farmland monitoring and crop growth automation control method of the present invention;
FIG. 2 is a block diagram of an intelligent farmland monitoring and crop growth automation control system according to the present invention.
Detailed Description
The invention is further described below in connection with specific embodiments and the accompanying drawings, but the invention is not limited to these embodiments.
Example 1
As shown in fig. 1, the invention discloses an intelligent farmland monitoring and crop growth automatic control method, which comprises the following steps:
step S1: and collecting farmland images, and dividing the farmland images to obtain small-area images.
Step S2: and acquiring sound data of the region corresponding to the small region image, and processing the sound data to obtain sound category information.
Step S3: and detecting the small-area image to obtain a detection result.
Step S4: and calculating an index according to the detection result.
Step S5: and analyzing the crops according to the sound type information and the indexes to obtain an analysis result.
Step S6: and performing control operation on the equipment according to the analysis result.
The steps are discussed in detail below:
step S1: and collecting farmland images, and dividing the farmland images to obtain small-area images.
The step S1 specifically comprises the following steps:
step S11: preprocessing the collected farmland image to obtain a preprocessed image; the preprocessing operations include removing noise, enhancing contrast, cropping edges, and scaling sizes.
The step S11 specifically includes:
Removing noise: a Gaussian filter or median filter is used to smooth the farmland image, eliminating random noise or salt-and-pepper noise, removing grain, spots and other unwanted visual interference, and improving image clarity.
Enhancing contrast: histogram equalization or adaptive histogram equalization is used to perform a gray-level transformation on the farmland image, increasing its dynamic range and brightness differences and improving its visual quality.
Cropping edges: Hough transform or Canny edge detection is used to extract the edges of the farmland image, find the farmland boundary, crop the image to a regular region along that boundary, and remove irrelevant background so that only the farmland portion of interest remains.
Scaling the size: bilinear or bicubic interpolation is used to resize the farmland image to a suitable size, which facilitates the subsequent cutting operation.
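A minimal sketch of such a preprocessing pipeline using OpenCV follows; the median-filter kernel, CLAHE parameters, Canny thresholds and target resolution are illustrative assumptions, not values fixed by the invention.

```python
import cv2
import numpy as np

def preprocess_field_image(img_bgr, target_size=(2048, 2048)):
    """Denoise, enhance contrast, crop to the field boundary, and resize."""
    # Remove noise: a median filter suppresses salt-and-pepper noise
    denoised = cv2.medianBlur(img_bgr, 5)

    # Enhance contrast: adaptive histogram equalization (CLAHE) on the L channel
    lab = cv2.cvtColor(denoised, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)

    # Crop edges: keep the bounding box of the Canny edge map
    gray = cv2.cvtColor(enhanced, cv2.COLOR_BGR2GRAY)
    ys, xs = np.nonzero(cv2.Canny(gray, 50, 150))
    if len(xs) > 0:
        enhanced = enhanced[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

    # Scale the size: bicubic interpolation to a fixed working resolution
    return cv2.resize(enhanced, target_size, interpolation=cv2.INTER_CUBIC)
```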
Step S12: and (3) starting the preprocessing image from the upper left corner, moving a cutting window according to the step length, filtering a non-farmland area by using the binary image, intercepting a small area image, moving the position of the cutting window according to the step length, and adjusting the position of the cutting window if the position of the cutting window exceeds the boundary of the preprocessing image.
The step S12 specifically includes:
Initial position setting: an initial cutting window position is set at the upper left corner of the preprocessed image.
Step movement: the cutting window is moved step by step according to a predefined step length, which can be determined from the size of the small-area image and the overlap ratio; for example, if the small-area image is 100 pixels wide and the overlap ratio is 0.5, the step length is 50 pixels.
Moving the cutting window by the step length: a rectangular window of fixed size is used as the cutting window and a suitable step length is set. Starting from the upper left corner of the preprocessed image, the cutting window is moved by one step in the horizontal or vertical direction. The preprocessed image is binarized using threshold segmentation or adaptive threshold segmentation, dividing its pixels into two classes: farmland areas and non-farmland areas. The pixel value of farmland areas is set to white (255) and that of non-farmland areas to black (0). After each movement, it is checked whether more than 80% of the pixels in the cutting window are white. If so, the small-area image in the window is intercepted; if not, the window is skipped and the cutting window continues to move.
If the position of the cutting window exceeds the boundary of the preprocessed image, the position is adjusted: when the cutting window reaches the right boundary while moving horizontally, it is moved down by one step and its horizontal position is reset to the left boundary. This gives a left-to-right, top-to-bottom cutting order, similar to the order in which text is read: when the window reaches the rightmost side of one row, it continues at the leftmost side of the next row. When the cutting window reaches the lower boundary of the preprocessed image, movement stops and the cutting operation ends; reaching the rightmost side of the last row indicates that all small-area images have been intercepted and cutting can stop.
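A compact sketch of this cutting procedure, assuming the binary farmland mask (255 = farmland, 0 = non-farmland) has already been computed and the image is at least one window in size; the 100-pixel window and 0.5 overlap follow the example above, and the A1/A2 region labels anticipate the numbering of step S13.

```python
def crop_subregions(image, field_mask, win=100, overlap=0.5):
    """Slide a fixed-size cutting window left-to-right, top-to-bottom, keeping
    windows whose pixels are more than 80% farmland according to the mask."""
    step = int(win * (1 - overlap))               # 100 px window, 0.5 overlap -> 50 px step
    h, w = image.shape[:2]
    # Window origins; the last origin is clamped to the boundary instead of crossing it
    tops = list(range(0, h - win, step)) + [h - win]
    lefts = list(range(0, w - win, step)) + [w - win]
    tiles = {}
    for row, top in enumerate(tops):
        for col, left in enumerate(lefts):
            window = field_mask[top:top + win, left:left + win]
            if (window == 255).mean() > 0.8:      # more than 80% white pixels
                label = f"{chr(ord('A') + row)}{col + 1}"   # reading-order label: A1, A2, ...
                tiles[label] = image[top:top + win, left:left + win]
    return tiles
```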
Step S13: after all the small area images are intercepted, the small area images are saved.
The step S13 specifically includes:
each small area image is assigned an area number that increases in a left-to-right, top-to-bottom order and is represented by a combination of letters and numbers. For example, the small area image number of the first column of the first row is A1, the small area image number of the second column of the first row is A2, and so on. Each small area image and its corresponding area number are stored in a list, and the list is stored in a file for subsequent use. At this stage, each small area image taken needs to be saved for subsequent sound data acquisition, detection, analysis and control operations.
The purpose of the overlap ratio during cutting is to avoid loss or incompleteness of information in the edge region. If there is no overlap ratio, the following may occur:
some animals or insects are located just at the juncture of two small pieces, resulting in them being cut in half and not being properly identified and classified.
Some crops are damaged to the extent that they are located just at the juncture of two small pieces, resulting in them being cut in half and not properly judged and evaluated.
To ensure that each small block contains complete information, a certain overlap must be arranged between adjacent blocks, which avoids information loss or incompleteness in edge regions. In this embodiment the overlap ratio can be adjusted to the actual situation; in general, the larger the overlap ratio, the higher the integrity of the information, but the greater the computation and storage cost.
Step S2: and acquiring sound data of the region corresponding to the small region image, and processing the sound data to obtain sound category information.
The step S2 specifically comprises the following steps:
step S21: and obtaining sound data of the region corresponding to the small region image.
Step S22: denoising, segmenting and enhancing the sound data to obtain sound signals, wherein the method specifically comprises the following steps:
Denoising: first, the noise level in the sound data is estimated, either by recording background noise in the absence of sound events or by using a noise estimation algorithm; noise is then reduced using spectral subtraction or wavelet transformation together with the estimated noise model. Spectral subtraction is a frequency-domain method that exploits the difference between the energy spectra of the signal and the noise, subtracting the noise spectrum to obtain an enhanced spectrum. The wavelet method is a time-frequency method that uses the multi-resolution property of the wavelet transform to decompose the noisy signal into subband signals of different scales and then thresholds each subband to remove noise; this approach preserves the detailed characteristics of the signal.
For segmentation of audio signals, it is necessary to detect the start and end points of sound events. This may be achieved by a volume threshold, an energy threshold, and a sound event detection algorithm; once the start and end points of a sound event are detected, the audio signal may be split into different segments, one for each sound event.
Feature extraction, prior to sound enhancement, typically requires extracting features of the audio, such as short-time fourier transform (STFT) coefficients, mel-frequency cepstral coefficients (MFCCs); for analyzing sound events and applying enhancement techniques.
For further noise reduction, a deep learning denoising model can also be used.
Speech enhancement, which may increase the clarity and intelligibility of sound events, includes speech enhancement filters, deep learning enhancement models, and the like.
Signal restoration, for corrupted acoustic signals, an attempt may be made to repair missing or corrupted portions using signal restoration techniques.
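A minimal sketch of the spectral-subtraction variant described above, assuming a noise-only clip is available for estimating the noise spectrum; the sampling rate and frame length are illustrative.

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_subtract(noisy, noise_clip, fs=16000, nperseg=512):
    """Subtract an averaged noise magnitude spectrum from the noisy signal's
    spectrogram and resynthesize using the noisy phase."""
    _, _, X = stft(noisy, fs=fs, nperseg=nperseg)
    _, _, N = stft(noise_clip, fs=fs, nperseg=nperseg)
    noise_mag = np.abs(N).mean(axis=1, keepdims=True)   # averaged noise spectrum
    mag = np.maximum(np.abs(X) - noise_mag, 0.0)        # subtract, floor at zero
    _, clean = istft(mag * np.exp(1j * np.angle(X)), fs=fs, nperseg=nperseg)
    return clean
```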
Step S23: the sound signals are sequentially transformed, compensated, analyzed and identified. The transformation formula is as follows:
X(t) = x(t) + j·y(t)
where X(t) is the complex representation of the sound signal x(t), j is the imaginary unit, and y(t) is a nonlinear transformation of the sound signal x(t).
The transformed sound signal is then compensated, the compensation formula being:
x̂(t) = g(t) * Y(X(t))
where x̂(t) is the compensated (inverse-filtered) sound signal, g(t) is the impulse response of the inverse filter, * denotes convolution, and Y(X(t)) is the sound signal after passing through the propagation medium.
In this embodiment, compensation or inverse filtering is used: an inverse filter is constructed from the effect of the propagation medium on the sound waves, thereby cancelling the medium's effect on the sound signal.
First, a forward filter is constructed from a physical and mathematical model of the propagation medium to simulate the propagation of sound waves in the medium, including attenuation, refraction and reflection. This filter can be represented as a linear time-invariant system whose input is the original sound signal X(t) and whose output is the sound signal Y(X(t)) after passing through the medium. Its impulse response is h(t) and its frequency response is H(f).
An inverse filter is then constructed from the impulse response or frequency response of the forward filter to cancel the effect of the propagation medium on the sound signal. It can likewise be represented as a linear time-invariant system whose input is Y(X(t)) and whose output is the restored original sound signal x̂(t). Its impulse response is g(t) and its frequency response is G(f).
Finally, the forward filter and the inverse filter are connected in cascade to compensate the sound data. The cascade is a linear time-invariant system whose input is the original sound signal X(t) and whose output is the recovered signal x̂(t); its impulse response is h(t)*g(t) (convolution) and its frequency response is H(f)·G(f).
In this embodiment, for this technique to work effectively, the following conditions must be met:
(1) The forward filter and the inverse filter must match to a certain degree, i.e. H(f)·G(f) ≈ 1 or h(t)*g(t) ≈ δ(t).
If H(f) and G(f) are the frequency responses of two linear time-invariant systems, then H(f)·G(f) is the frequency response of the cascaded system, i.e. the ratio of the Fourier transform of the output signal to that of the input signal. H(f)·G(f) ≈ 1 means the output signal is unchanged in the frequency domain, i.e. it is a distortion-free replica of the input signal.
If h(t) and g(t) are the impulse responses of two linear time-invariant systems, then h(t)*g(t) is the impulse response of the cascaded system, i.e. the convolution relating output to input. h(t)*g(t) ≈ δ(t), where δ(t) is the Dirac impulse, means the output signal is unchanged in the time domain, i.e. it is a distortion-free replica of the input signal.
(2) The forward filter and the inverse filter must be stable and causal, i.e. neither H(f) nor G(f) has zeros or poles in the right half-plane, and neither h(t) nor g(t) takes non-zero values at negative times, meaning they depend only on current or past input and not on future input. Causality is an important condition that guarantees the real-time behaviour and realizability of the filters; if h(t) or g(t) had non-zero values at negative times, they would need to predict future input signals to produce their output, which is impossible.
(3) The forward filter and the inverse filter must be realizable and computable, i.e. H(f) and G(f) can both be approximated by finite-order polynomials or rational functions, and h(t) and g(t) can both be approximated by finite-length sequences.
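A sketch of the inverse-filtering step under the assumption that the forward channel's impulse response h(t) is known. An exact G(f) = 1/H(f) is unstable wherever H(f) approaches zero, which is exactly what conditions (1) to (3) guard against, so a small regularization term is added here.

```python
import numpy as np

def inverse_filter(y, h, eps=1e-3):
    """Estimate the original signal from the received signal y, given the
    propagation channel's impulse response h, so that H(f)·G(f) ≈ 1."""
    n = len(y)
    H = np.fft.rfft(h, n)
    G = np.conj(H) / (np.abs(H) ** 2 + eps)   # regularized inverse, G ≈ 1/H
    return np.fft.irfft(np.fft.rfft(y) * G, n)
```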
The compensated sound signal is analyzed, the analysis formulas being as follows:
A(t) = |X(t)|, φ(t) = arctan(y(t)/x(t)), f(t) = (1/2π)·dφ(t)/dt
where A(t) is the instantaneous amplitude of the sound signal, φ(t) is the instantaneous phase of the sound signal, and f(t) is the instantaneous frequency of the sound signal.
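The patent leaves the nonlinear transformation y(t) unspecified; a common concrete choice, assumed here, is the Hilbert transform, which makes X(t) the analytic signal. A sketch of the three instantaneous features with SciPy:

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_features(x, fs):
    """Compute A(t), phi(t) and f(t) from X(t) = x(t) + j*y(t), where y(t) is
    taken to be the Hilbert transform of x(t)."""
    X = hilbert(x)                            # analytic signal X(t)
    A = np.abs(X)                             # instantaneous amplitude A(t)
    phi = np.unwrap(np.angle(X))              # instantaneous phase phi(t)
    f = np.diff(phi) * fs / (2 * np.pi)       # instantaneous frequency f(t)
    return A, phi, f
```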
In this embodiment, the Bayesian network model is a directed acyclic graph in which each node represents a random variable and each directed edge represents a conditional probability distribution; an arrow between nodes represents a causal or dependency relationship, i.e. the parent node influences the child node but not vice versa.
In this embodiment, the Bayesian network has n+2 nodes, where n is the number of environmental factor variables such as temperature, humidity and noise, and the other two nodes are the class variable and the sound feature variable. Node C is the class variable: it represents the biological activity class to which the sound signal belongs, such as insects eating leaves, insects flying or insect pupation, and is a discrete variable whose range is all possible biological activity classes. Node F is the sound feature variable: it represents the feature vector obtained by transforming and analyzing the sound signal, such as instantaneous amplitude and frequency, and is a continuous variable whose range is all possible sound feature vectors. Nodes E_1, E_2, ..., E_n are the environmental factor variables: they represent the values of environmental factors affecting the sound signal, such as temperature, humidity and noise, and are continuous variables whose range is all possible environmental factor values. The edge (C, F) represents the conditional probability distribution of the sound feature variable given the class variable; it expresses that the class influences the sound features, i.e. signals of different classes have different sound features. The edges (E_i, C) represent the conditional probability distribution of the class variable given the environmental factor variables; they express that environmental factors influence the class, i.e. sound signals under different environmental factors have different biological activity classes. The edges (E_i, F) represent the conditional probability distribution of the sound feature variable given the environmental factor variables; they express that environmental factors influence the sound features, i.e. sound signals under different environmental factors have different sound features.
Given a sound feature vector F and an environmental factor vector E = (E_1, E_2, ..., E_n), the sound feature vector F is input into the Bayesian model to calculate the posterior probability that the sound signal belongs to a class C, the calculation formula being:
P(C|F,E) = P(F|C,E)·P(C|E) / P(F|E)
where F is the sound feature vector, comprising the instantaneous amplitude and instantaneous frequency; P(F|C,E) is the likelihood probability of the features given class C and environmental factors E; P(C|E) is the prior probability of class C given the environmental factors E; P(F|E) is the prior probability of the sound features F given the environmental factors E; and P(C|F,E) is the posterior probability that the sound feature vector F belongs to class C.
In this embodiment, sound analysis computes the characteristics of the sound signal, such as instantaneous amplitude, instantaneous phase and instantaneous frequency, from the complex representation obtained by the transformation. These features reflect the intensity, shape and variation of the sound signal, helping to distinguish different sound sources and biological activities.
Sound recognition then classifies the signal: based on the feature vector obtained after transformation and analysis, the biological activity category of the sound signal is judged, such as insects eating leaves, insects flying or insect pupation. In this step, a Bayesian formula is used to calculate the posterior probability that the sound signal belongs to each category, and the category with the highest probability is selected as the recognition result.
The relation between sound analysis and sound recognition is that the input of recognition is the output of analysis, i.e. the sound feature vector. Recognition does not use the raw sound signal or its complex representation directly, but the feature vector produced by analysis, because the feature vector extracts the most discriminative and representative information in the signal, improving the accuracy and efficiency of classification and recognition.
The relation between the recognition formula and the analysis formulas is that F in the recognition formula is the sound feature vector obtained from the analysis formulas, i.e. F = (A(t), f(t)). This vector contains both the instantaneous amplitude and the instantaneous frequency of the sound signal and can represent its distribution and variation over the time-frequency domain; it is used as the observation data in the Bayesian formula to calculate the posterior probability.
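A schematic of the recognition step. Since P(F|E) only normalizes, the class with the highest posterior can be chosen from the unnormalized products P(F|C,E)·P(C|E); the likelihood and prior callables below are assumed placeholders for models fitted on labelled farmland recordings.

```python
import numpy as np

def classify_sound(feature, env, classes, likelihood, prior):
    """Return the class maximizing P(C|F,E) ∝ P(F|C,E)·P(C|E), plus the
    normalized posterior over all classes."""
    scores = np.array([likelihood(feature, c, env) * prior(c, env) for c in classes])
    posterior = scores / scores.sum()         # dividing by P(F|E)
    return classes[int(np.argmax(posterior))], posterior
```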
Step S24: and selecting the category with the highest posterior probability as sound category information.
Step S3: and detecting the small-area image to obtain a detection result.
The step S3 specifically comprises the following steps:
step S31: preprocessing the small-area image to obtain a standard small-area image; the preprocessing operation comprises the steps of adjusting contrast, removing shadows and reducing image blurring, and specifically comprises the following steps:
Adjusting contrast: this enhances brightness differences between regions of the image, making targets easier to identify. It can be achieved by histogram equalization, which expands the brightness range by redistributing the pixel values, or by contrast stretching, which maps the pixel values into a wider range using linear or nonlinear methods.
Removing shadows: shadows make the brightness of parts of the image non-uniform, which affects subsequent target detection and classification. Shadow removal methods include illumination correction, which analyzes the illumination distribution in the image to reduce or eliminate shadow effects, and background subtraction, which separates the background from the target to eliminate shadow interference.
Reducing image blurring: blur causes loss of target detail, affecting subsequent detection and classification. Methods include image denoising, using algorithms such as Gaussian filtering, median filtering or wavelet denoising, and motion blur correction, which attempts to restore the sharpness of an image affected by motion blur.
Step S32: performing target detection on the standard small-area image by using a deep learning model, detecting the target and classifying to obtain an animal detection result, wherein the method specifically comprises the following steps of:
and performing target detection and classification operation on the standard small-area image, and identifying the animal information. The purpose of this operation is to determine whether pests or other pests are present in the field, and their number and location.
In this embodiment, a deep learning based object detection and classification model is used that can detect multiple objects in an image and give their class and bounding box. The structure of the model is as follows:
an input layer, which receives the small-area image and converts it into tensor format; convolution layers, which apply repeated convolutions to the input tensor to extract image features; pooling layers, which downsample the convolved features to reduce dimensionality and computation; a Feature Pyramid Network (FPN), which upsamples and fuses features from different layers to obtain multi-scale feature maps; a Region Proposal Network (RPN), which generates candidate regions on the feature maps and outputs their scores and position offsets; region-of-interest pooling (RoI Pooling), which maps candidate regions to feature vectors of fixed size; and fully connected layers, which classify and regress the feature vectors and output the class and bounding box of each target.
In this embodiment, a pre-trained model is used that is trained on a data set containing a variety of agriculturally relevant animals, and can identify classes of locusts, aphids, snails, mice, birds, etc. And inputting the small-area image into the model to obtain target detection and classification results, including the category, the number, the position and the confidence of the animals.
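The structure described above (convolutional backbone, FPN, RPN, RoI pooling, fully connected heads) matches a Faster R-CNN. A sketch using torchvision's pretrained detector as a stand-in for a model fine-tuned on farmland animal classes such as locusts, aphids, snails, mice and birds:

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

# COCO weights stand in for farmland-specific fine-tuning
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

def detect_animals(tile_rgb, score_thresh=0.5):
    """Return boxes, class labels and confidences for one small-area image
    (an RGB uint8 array)."""
    with torch.no_grad():
        out = model([to_tensor(tile_rgb)])[0]
    keep = out["scores"] > score_thresh
    return out["boxes"][keep], out["labels"][keep], out["scores"][keep]
```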
Step S33: crop detection is carried out on the standard small area image by using the image recognition model to obtain a crop detection result, and the method specifically comprises the following steps:
judging whether the crop is affected by the plant diseases and insect pests or not through the image recognition model. The purpose of this operation is to assess the health of the crop and whether control measures need to be taken. A deep learning based image recognition model is used that classifies crops and gives whether they are affected by pests. The structure of the model is as follows:
an input layer for receiving the small area image as input and converting it into tensor format; the convolution layer carries out convolution operation on the input tensor for a plurality of times, and extracts the characteristics of the image; the pooling layer is used for downsampling the convolved characteristics, so that the dimension and the calculated amount are reduced; and the full-connection layer is used for classifying the feature vectors and outputting the types and the plant diseases and insect conditions of crops.
A pre-trained model is used that is trained on a data set containing a plurality of crop and pest types, and can identify rice, wheat, corn, soybean and other types, as well as leaf spot, rust, aphid, locust and other pest types. And inputting the small-area image into the model to obtain an image recognition result, wherein the image recognition result comprises the type, the pest and disease damage type and the confidence level of crops.
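A minimal sketch of a classifier with the layer structure just described (convolution, pooling, fully connected); the two output heads and the class counts are illustrative assumptions:

```python
import torch.nn as nn

class CropPestClassifier(nn.Module):
    """Convolution -> pooling -> fully connected layers, with one head for the
    crop species and one for the pest/disease condition."""
    def __init__(self, n_crops=4, n_pests=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.crop_head = nn.Linear(64, n_crops)   # e.g. rice, wheat, corn, soybean
        self.pest_head = nn.Linear(64, n_pests)   # e.g. healthy, leaf spot, rust, aphids, locusts

    def forward(self, x):
        z = self.features(x)
        return self.crop_head(z), self.pest_head(z)
```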
In this embodiment, the detection results include animal detection results and crop detection results; animal detection results include animal size, animal species; crop detection results include crop species and pest species.
Step S4: and calculating an index according to the detection result.
The step 4 specifically comprises the following steps:
The calculation formula of the pest and disease index is as follows:
BCH = (Σ(P_i·W_i) + Σ(Q_J·S_J·D_J)) × 100
where BCH is the pest and disease index, P_i is the pest and disease extent of the i-th part, W_i is the weight coefficient of the i-th part, Q_J is the number of animals of type J, S_J is the size of animals of type J, and D_J is the damage coefficient of animals of type J.
In this embodiment, the pest and disease index is an indicator of the overall extent to which a crop or fruit is affected by pests and diseases. It is obtained by adding two factors: the damage extent of each part, and the damage contribution of each kind of insect or animal. The damage extent of each part, multiplied by the part's weight coefficient and summed, represents the influence of internal factors such as germs or parasites on the crop or fruit; the number of each kind of insect or animal, multiplied by its size and damage coefficient and summed, represents the influence of external factors such as locusts or birds. The sum of the two factors, multiplied by 100, gives a value between 0 and 100, the pest and disease index: the higher the index, the more severely the crop or fruit is affected by pests and diseases.
The risk index calculation formula is:

WX = Σ(R_I · H_I)

where WX is the risk index, R_I is the number of pests and diseases of type I, and H_I is the risk coefficient of pests and diseases of type I.
In this embodiment, the risk index represents the overall degree of pest and disease risk faced by the crop or fruit. It is obtained by multiplying the number of each type of pest or disease by its risk coefficient and summing. The number of each type represents the percentage of the total accounted for by that species on the crop or fruit; the risk coefficient represents the degree of danger that the species poses to the crop or fruit. The products are summed to obtain a value without a fixed upper limit, the risk index. The higher the risk index, the greater the pest and disease risk to the crop or fruit.
The extent of pest at each location represents the percentage of the extent to which different portions of a crop or fruit (e.g., leaves, stems, fruits, etc.) are affected by the pest. For example, if 10% of the area of an apple's fruit is cracked or rotted, the extent of the fruit pest is 10%.
The weight coefficient of each part represents the percentage of the influence degree of different parts of crops or fruits on the yield and quality of the crops or fruits. For example, if the importance of the fruit is higher for apples than for both leaves and stalks, the weight coefficient of the fruit is 80%, and the leaves and stalks each account for 10%.
The number of insects or animals refers to the percentage of the total number of different species of insects or animals (e.g., grasshopper, aphid, snail, mice, birds, etc.) present on the crop or fruit. For example, if there are 10 locusts and 20 aphids on a paddy field, the number of locusts is 33.33% and the number of aphids is 66.67%.
Various insect or animal sizes represent the size of the different species present on a crop or fruit relative to the crop or fruit, as a percentage. For example, if the average length of the locusts in a paddy field is 5 cm and the average height of the rice plants is 50 cm, the size of the locust is 10%.
The damage coefficient of each insect or animal represents, as a percentage, the damage caused to the crop or fruit by the different species. For example, a locust is more damaging to rice than an aphid, because it can consume large amounts of rice leaves and stems, reducing rice yield or even causing total crop failure; so the damage coefficient of the locust is 80% and that of the aphid is 20%. Birds may be more damaging to apples than snails, since birds peck the pulp and peel, causing the apples to deform or rot.
The risk coefficient of each pest type represents, as a percentage, the degree of danger that different pest types (such as rice planthoppers, Cnaphalocrocis medinalis, apple aphids, codling moths, etc.) pose to crops or fruits. For example, if rice planthoppers are more dangerous to rice than Cnaphalocrocis medinalis, because they can spread serious diseases such as rice blast, the risk coefficient of rice planthoppers is 90% and that of Cnaphalocrocis medinalis is 10%.
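These coefficient definitions map directly onto the two formulas; the following is a minimal sketch, assuming all inputs are supplied as fractions (0–1) and that the risk index, like the pest index, is reported scaled by 100 — an interpretation of the worked example later in this section:

```python
def pest_index(part_degrees, part_weights, counts, sizes, damage_coeffs):
    """BCH = (sum(P_i * W_i) + sum(Q_J * S_J * D_J)) * 100, inputs as fractions."""
    internal = sum(p * w for p, w in zip(part_degrees, part_weights))           # bacteria/parasite factor
    external = sum(q * s * d for q, s, d in zip(counts, sizes, damage_coeffs))  # insect/animal factor
    return (internal + external) * 100

def risk_index(pest_counts, risk_coeffs):
    """WX = sum(R_I * H_I); scaled by 100 here so it is comparable with BCH."""
    return sum(r * h for r, h in zip(pest_counts, risk_coeffs)) * 100
```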
In this embodiment, the index includes a pest index and a risk index.
Step S5: analyzing the crops according to the sound category information and the indexes to obtain an analysis result.
The step S5 specifically comprises the following steps:
Inferring the possible pest classes corresponding to a sound class is the process of deducing which insects or animals the sound sources may be, based on the characteristics and frequency of the sound and on the type and growth stage of the crop, so as to judge which pest classes may be present. For example, where the crop being analyzed is rice and the sound class is insects eating foliage, possible pest classes are locusts, aphids, borers, etc., as these insects gain nutrition by biting the leaves of rice and produce rustling or biting sounds. If the crop is apple and the sound class is insect flight, possible pest classes are apple aphids, codling moths, etc., as these insects fly to find apple trees as hosts, producing a buzzing or wing-fanning sound. If the crop is corn and the sound class is insect chirping, possible pest classes are corn weevils, corn borers, etc., because these insects attract the opposite sex by calling or vibrating, producing squeaking or trembling sounds.
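This inference can be kept as a simple lookup from (crop, sound class) to candidate pests; the sketch below encodes only the examples above and is illustrative, not exhaustive:

```python
# Candidate pest classes per (crop, sound class), following the examples above.
PEST_CANDIDATES = {
    ("rice", "insects eating foliage"): ["locust", "aphid", "borer"],
    ("apple", "insect flight"): ["apple aphid", "codling moth"],
    ("corn", "insect chirping"): ["corn weevil", "corn borer"],
}

def candidate_pests(crop, sound_class):
    """Returns the possible pest classes for a crop and detected sound class."""
    return PEST_CANDIDATES.get((crop, sound_class), [])
```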
The damage coefficient and risk coefficient of the pests corresponding to a sound class give, as percentages, the damage degree and risk degree of the pests, based on the intensity and duration of the sound and the harm the pests cause to the plant. For example, if the crop analyzed is rice and the sound class is insects eating foliage, the pest damage coefficients and risk coefficients corresponding to the sound class are shown in Table 1.
TABLE 1 Pest damage and risk coefficients

Pest species | Damage coefficient | Risk coefficient |
---|---|---|
Locust | 80% | 70% |
Aphid | 20% | 10% |
Borer | 40% | 30% |
The results show that the locust has the highest damage degree and risk degree for rice, because it can eat large amounts of rice leaves and stems, reducing rice yield or even causing total crop failure; the aphid has the lowest, because it only sucks juice from rice leaves and does not directly affect rice growth; the borer is intermediate, because it bores into the rice stem and lays eggs that hatch inside, causing the rice to wither or deform.
Relating the sound class to the pest index and the risk index is the process of calculating the contribution of the sound class to the overall degree of pest influence on the crop, using the damage and risk coefficients corresponding to the sound class together with the formulas for the pest index and risk index. For example, suppose the crop analyzed is rice and the sound class is insects eating foliage.
Assume the rice has three parts (leaf, stalk, ear); the pest and disease degree and the weight coefficient of each part are shown in Table 2.
TABLE 2 Pest and disease degree and weight coefficient of different parts

Part | Pest and disease degree | Weight coefficient |
---|---|---|
Leaf | 20% | 50% |
Stalk | 10% | 30% |
Ear | 5% | 20% |
Assume three insects or animals (locust, aphid, borer) on the rice; the number, size and damage coefficient of each are shown in Table 3.
TABLE 3 Number, size and damage coefficient of different insects

Insect or animal | Number | Size | Damage coefficient |
---|---|---|---|
Locust | 30% | 10% | 80% |
Aphid | 50% | 1% | 20% |
Borer | 20% | 5% | 40% |
Assume two kinds of pests (rice planthopper and rice leaf roller) on the rice; the number and risk coefficient of each are shown in Table 4.
TABLE 4 Number and risk coefficient of different pests

Pest species | Number | Risk coefficient |
---|---|---|
Rice planthopper | 10% | 90% |
Rice leaf roller | 5% | 10% |
The crop is rice and the sound class is insects eating foliage; the corresponding pest index contribution and risk index are:

Pest index = ((20% × 50% + 10% × 30% + 5% × 20%) + (30% × 10% × 80% + 50% × 1% × 20% + 20% × 5% × 40%)) × 100 = (0.14 + 0.029) × 100 = 16.9.

Risk index = (10% × 90% + 5% × 10%) × 100 = 9.5 (scaled by 100 for comparability with the pest index).
This shows that the sound class contributes 16.9 to the overall extent of crop pest impact and 9.5 to the overall extent of crop pest risk. These values may be compared to other sound categories or other crops to assess the relative importance and urgency of the sound category.
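Using the `pest_index` and `risk_index` sketch from Step S4, the worked example can be reproduced:

```python
# Tables 2-4 for rice with the sound class "insects eating foliage"
bch = pest_index(part_degrees=[0.20, 0.10, 0.05],       # leaf, stalk, ear
                 part_weights=[0.50, 0.30, 0.20],
                 counts=[0.30, 0.50, 0.20],              # locust, aphid, borer
                 sizes=[0.10, 0.01, 0.05],
                 damage_coeffs=[0.80, 0.20, 0.40])
wx = risk_index(pest_counts=[0.10, 0.05],                # planthopper, leaf roller
                risk_coeffs=[0.90, 0.10])
print(round(bch, 1), round(wx, 1))  # 16.9 9.5, matching the values above
```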
Control advice for a sound class is the process of giving an opinion on whether the sound sources need to be dispersed or killed, according to the relation between the sound class, the pest index, the risk index, and the type and growth stage of the crop. For example, where the crop is rice and the sound class is insects eating foliage, the control advice for the sound class may be as follows:
if the rice is in the seedling stage or the jointing stage, measures are required to dispel or kill the sound sources as soon as possible, because the rice in the stages is most susceptible to leaf eating by insects, resulting in growth retardation or yield reduction.
If the rice is in the filling or mature stage, the control standard can be appropriately relaxed, because rice at these stages has developed stronger resistance and established yield and is less susceptible to leaf-eating insects, while overuse of pesticides or mechanical means may itself injure the rice.
Assume that the pest situation and risk level of each small area have been tagged by image recognition or other means; the tag of each area may be, for example, a tuple (BCH, WX), where BCH is the pest index and WX the risk index. The values of BCH and WX may be real numbers between 0 and 1 or integers between 0 and 100, determined according to the practical situation. For example, (16.9, 9.5) means the pest index of the area is 16.9 and the risk index is 9.5, indicating that the area has a slight pest phenomenon and is at risk of spreading.
The farmland-level pest index and risk index are calculated, the concrete formulas being:

BCH_whole = Σ_v (v_v · BCH_v)

WX_whole = Σ_v (v_v · WX_v)

where BCH_whole is the pest index of the whole farmland, WX_whole is the risk index of the whole farmland, n is the number of regions (the sums run over v = 1, …, n), v_v is the weight of the v-th region, and BCH_v and WX_v are the pest index and risk index of the v-th region, respectively.
The weight v_v may depend on many factors, such as the crop type, the crop value and the crop growth stage, so that each area is assigned an integrated weight.
v_v = α·S_v + β·T_v + ε·V_v + φ·G_v

where v_v is the weight of the v-th region, S_v is the area of the v-th region, T_v is the crop species grade of the v-th region, V_v is the crop value grade of the v-th region, G_v is the crop growth stage grade of the v-th region, and α, β, ε and φ are the weight coefficients of the different factors.

In this embodiment, if the influence of the crop species on pests is considered the greatest, β may be given a higher value, for example 0.5; if the influence of crop value is considered minimal, ε may be given a lower value, such as 0.1; if the other factors are considered to have comparable effects, α and φ are assigned the same value, such as 0.2. A possible set of weight coefficients is thus obtained: α = 0.2, β = 0.5, ε = 0.1, φ = 0.2.
Next, numerical values need to be assigned to the crop type, crop value and crop growth stage of each area in order to perform the calculation. This may be done using some criteria or a scoring system, such as:
crop species may be classified according to the sensitivity or resistance of different crops to disease or pest, such as grades 1 to 5, grade 1 indicating the most sensitive or susceptible to disease or pest, grade 5 indicating the most resistant or least susceptible to disease or pest. For example, rice may be classified into 3 grades, wheat may be classified into 4 grades, corn may be classified into 5 grades, etc.
Crop value may be rated according to the price or benefit of different crops on the market, such as grades 1 to 5, grade 1 representing the lowest price or benefit, and grade 5 representing the highest price or benefit. For example, potatoes may be classified into 1 class, peanuts may be classified into 2 classes, soybeans may be classified into 3 classes, etc.
The crop growth stages may be graded according to the susceptibility or resistance of crops to diseases and pests during different growth stages, such as grades 1 to 5, grade 1 indicating the most sensitive or susceptible and grade 5 the most resistant or least susceptible. For example, during the sowing and harvesting periods, crops may be more sensitive or susceptible to pests and may be classified as grade 1 or 2; during germination and flowering, crops may be relatively resistant or less susceptible and may be classified as grade 4 or 5.
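A minimal sketch of the region weighting and farmland-level aggregation follows. It assumes the graded factors (area, crop species, value, growth stage) have already been scaled to comparable ranges, and it normalizes the weights to sum to one; the normalization is an added assumption, not fixed by the text:

```python
def region_weight(area, crop_grade, value_grade, growth_grade,
                  alpha=0.2, beta=0.5, eps=0.1, phi=0.2):
    """v_v = alpha*S_v + beta*T_v + eps*V_v + phi*G_v."""
    return alpha * area + beta * crop_grade + eps * value_grade + phi * growth_grade

def farmland_indexes(regions):
    """BCH_whole = sum(v_v * BCH_v), WX_whole = sum(v_v * WX_v) over all regions."""
    weights = [region_weight(*r["factors"]) for r in regions]
    total = sum(weights)                       # normalize so the weights sum to one
    bch = sum(w * r["bch"] for w, r in zip(weights, regions)) / total
    wx = sum(w * r["wx"] for w, r in zip(weights, regions)) / total
    return bch, wx

# e.g., two regions tagged (BCH, WX) with graded factors (S, T, V, G)
farmland_indexes([{"factors": (0.6, 3, 3, 2), "bch": 16.9, "wx": 9.5},
                  {"factors": (0.4, 4, 2, 4), "bch": 35.0, "wx": 22.0}])
```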
Step S6: performing control operation on the equipment according to the analysis result.
The step S6 specifically comprises the following steps:
According to certain standards or thresholds, what control measures need to be taken in each area is judged. For example, if the pest index of an area is greater than 50 or the risk index is greater than 80, emergency control measures need to be taken; if the pest index of an area is less than 20 and the risk index is less than 20, preventive or biological control measures can be taken; if the pest index and risk index of an area are in the medium range, moderate or integrated control measures can be taken. The specific standards and thresholds are set according to the actual situation, per area or for the whole farmland.
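A minimal sketch of this decision logic, using the example thresholds above:

```python
def control_level(bch, wx):
    """Maps a region's (BCH, WX) tag to a control level per the example thresholds."""
    if bch > 50 or wx > 80:
        return "emergency"        # urgent intervention required
    if bch < 20 and wx < 20:
        return "preventive"       # preventive or biological measures suffice
    return "moderate"             # moderate or integrated control measures

control_level(16.9, 9.5)  # -> 'preventive'
```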
And sending an instruction to intelligent control equipment in the farmland through a network.
The aim of this step is to take corresponding control measures according to the result of sound and image recognition, so as to reduce or eliminate the damage of plant diseases and insect pests to crops. According to the characteristics and habits of different insects and animals, a proper control method is selected, such as using pollution-free trapping and killing equipment, spraying biological pesticides, playing repelling sound and the like.
Nuisance-free trapping and killing equipment exploits insects' attraction to stimuli such as light, color and smell to lure them into the device, where they are killed or captured by means such as a high-voltage grid, adhesive board or mechanical clamp. Such equipment can effectively control insects with strong flight ability, wide range of movement and high degree of harm, such as moths, mosquitoes, flies and locusts. According to the habits of different insects, suitable trapping factors such as wavelength, color and smell are selected, and parameters such as the position, number and height of the equipment are reasonably arranged to improve the trapping effect.
A biological pesticide is a pesticide which utilizes biological agents or bioactive substances to control plant diseases and insect pests. The pesticide has the characteristics of no public hazard, no residue, no drug resistance and the like, and is friendly to human and livestock and environment. According to different types and degrees of diseases and insect pests, proper biological pesticides such as microbial pesticides, plant-derived pesticides, insect-derived pesticides and the like are selected, and are sprayed or applied according to the use instructions and safety regulations.
Playing the driving sound is a method of disturbing or driving the pest using sound waves or ultrasonic waves. The method can utilize the characteristic that the plant diseases and insect pests are sensitive or aversive to sound, and influence the behavior or physiological functions of the plant diseases and insect pests by playing sound with specific frequency or intensity or rhythm, so as to achieve the aim of expelling or preventing and controlling the plant diseases and insect pests. According to the hearing characteristics of different diseases and insect pests, proper sound parameters such as frequency, intensity, rhythm and the like are selected, and the parameters such as the position, the number, the direction and the like of the speakers are reasonably arranged so as to improve the driving effect.
For rice blast, unmanned aerial vehicles are used for carrying biological pesticides, such as antagonistic antibiotics, microbial preparations and the like, and the infected areas are precisely sprayed to kill pathogenic bacteria and strengthen the resistance of crops.
For rice planthoppers, nuisanceless trapping and killing equipment such as yellow sticky plates, photoelectric traps and the like are used for effectively trapping the affected areas so as to reduce the number of the rice planthoppers and monitor the occurrence of the rice planthoppers.
For rice borers, biological pesticides such as cartap, bacillus thuringiensis, and the like are used to regularly spray the damaged areas to kill the larvae and prevent the damage thereof.
For birds that damage rice, speakers are used to play repelling sounds, such as hawk calls and gunshots, to effectively repel them from the threatened areas, reduce their numbers and protect the crop.
For mice, nuisance-free trapping and killing devices such as cages, clamps and the like are used for effectively trapping the affected areas so as to reduce the number of the mice and prevent the damage of the mice.
It is also necessary to install corresponding intelligent control devices, such as unmanned aerial vehicles, sprayers and speakers, in the farmland in order to execute the control measures. Instructions are sent to these devices through a network so that they perform control operations automatically or semi-automatically according to the identification result. A suitable network protocol and data format are selected to ensure the integrity and security of the instructions.
The unmanned aerial vehicle is an unmanned aerial vehicle capable of flying in the air, and can be provided with various sensors, cameras, sprayers and other devices to carry out tasks such as agricultural monitoring, pesticide application, surveying and mapping. According to factors such as the size, the shape and the topography of a farmland, proper unmanned aerial vehicle model, quantity, flying height, speed, route and other parameters are selected, and an instruction is sent to the unmanned aerial vehicle through a network, so that the unmanned aerial vehicle can fly and operate according to a preset program or real-time feedback.
The sprayer is a device capable of spraying liquid pesticide or biological agent onto crops, and can effectively prevent and treat diseases and insect pests of leaves or stems. According to factors such as the type, the height and the density of crops, proper parameters such as the type, the number, the spray heads, the pressure, the flow and the like of the sprayers are selected, and instructions are sent to the sprayers through a network, so that the sprayers can spray and adjust according to a preset program or real-time feedback.
The loudspeaker is a device capable of converting sound signals into sound waves and amplifying the sound waves to be output, and can effectively repel some diseases and insect pests sensitive to or disgusting the sound. According to factors such as the size, shape and topography of a farmland, proper parameters such as the model number, the power and the direction of the speakers are selected, and instructions are sent to the speakers through a network, so that the speakers are played and regulated according to a preset program or real-time feedback.
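A minimal sketch of dispatching such an instruction over a network; the HTTP transport, device endpoint and payload fields are illustrative assumptions, since the text fixes neither the protocol nor the data format:

```python
import json
import urllib.request

def send_instruction(device_url, command):
    """POSTs a JSON control instruction to an intelligent device endpoint
    (hypothetical URL and payload schema)."""
    data = json.dumps(command).encode("utf-8")
    req = urllib.request.Request(device_url, data=data,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return resp.status

# e.g., direct a sprayer to treat region 12 with a biological agent
send_instruction("http://sprayer-07.farm.local/control",
                 {"region": 12, "action": "spray", "agent": "Bt", "dose_ml": 250})
```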
And feeding back the identification result, the prevention measures and related data to a production manager through a network for viewing and adjustment.
The results of the sound and image recognition, the execution status of the control measures, and related data and information are fed back to the production manager through a network, so that the manager can keep track of the farmland conditions, pest occurrence and control status in time. The content is sent to the manager's computer or mobile phone, where it can be viewed and adjusted through a web page or app.
The results of the sound and image recognition are presented to the production manager in text or in a graph or picture, etc., so that they can clearly see which insects and animals are in the farm and how much damage they cause to the crop. The execution of the control measures can also be presented to the production manager in the form of text or charts or videos, etc., so that they can understand which control methods are taken and what their effect is.
The related data and information can be displayed to production managers in the form of texts, charts, pictures and the like, so that the production managers can know weather environment, soil moisture content, crop growth vigor and the like in the farmland. Such data and information may help the production manager make more scientific and rational decisions and guidelines.
Example 2
As shown in fig. 2, the invention discloses an intelligent farmland monitoring and crop growth automation control system, which comprises:
the image acquisition module 10 is used for acquiring farmland images and dividing the farmland images to obtain small-area images.
The sound processing module 20 is configured to obtain sound data of a region corresponding to the small region image, and process the sound data to obtain sound category information.
The image detection module 30 is configured to detect the small-area image, and obtain a detection result.
An index calculation module 40, configured to calculate an index according to the detection result.
The farmland analysis module 50 is used for analyzing crops according to the sound category information and the indexes to obtain analysis results.
And a device control module 60 for performing control operation on the device according to the analysis result.
As an alternative embodiment, the image acquisition module 10 of the present invention specifically includes:
the preprocessing sub-module is used for preprocessing the collected farmland images to obtain preprocessed images; the preprocessing operations include removing noise, enhancing contrast, cropping edges, and scaling sizes.
And the region intercepting sub-module is used for moving a cutting window from the upper left corner of the preprocessed image according to the step length, filtering a non-farmland region by using the binary image, intercepting a small region image, moving the position of the cutting window according to the step length, and adjusting the position of the cutting window if the position of the cutting window exceeds the boundary of the preprocessed image.
And the image storage sub-module is used for storing the small area images after intercepting all the small area images.
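A minimal sketch of the region-intercepting logic, assuming a NumPy image at least one window in size and a binary farmland mask of the same height and width; the window size, step and coverage threshold are illustrative values:

```python
import numpy as np

def crop_regions(image, farmland_mask, win=256, step=256, min_cover=0.5):
    """Slides a cutting window from the top-left by `step`, clamps the window
    position at the image boundary, and keeps windows whose farmland coverage
    (fraction of mask pixels set) reaches `min_cover`."""
    h, w = image.shape[:2]
    regions = []
    for top in range(0, h, step):
        for left in range(0, w, step):
            top_c = min(top, h - win)            # adjust position at the boundary
            left_c = min(left, w - win)
            mask = farmland_mask[top_c:top_c + win, left_c:left_c + win]
            if mask.mean() >= min_cover:         # filter out non-farmland windows
                regions.append(image[top_c:top_c + win, left_c:left_c + win])
    return regions
```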
As an alternative embodiment, the sound processing module 20 of the present invention specifically includes:
And the sound acquisition sub-module is used for acquiring sound data of the region corresponding to the small region image.
And the sound processing sub-module is used for carrying out denoising, segmentation and enhancement on the sound data to obtain sound signals.
The signal operation submodule is used for sequentially transforming, compensating, analyzing and identifying the sound signals, wherein the transformation formula is:

X(t) = x(t) + j·y(t)

where X(t) is the complex representation of the sound signal x(t), j is the imaginary unit, and y(t) is a nonlinear transformation of x(t).
The transformed sound signal is compensated, the compensation formula being:

X̂(t) = g(t) * Y(X(t))

where X̂(t) is the compensated (inverse-filtered) sound signal, g(t) is the impulse response of the inverse filter, * denotes convolution, and Y(X(t)) is the sound signal after passing through the propagation medium.
The compensated sound signal is analyzed, the analysis formulas being:

A(t) = |X(t)| = √(x(t)² + y(t)²), φ(t) = arctan(y(t)/x(t)), f(t) = (1/2π)·dφ(t)/dt

where A(t) is the instantaneous amplitude of the sound signal, φ(t) is the instantaneous phase of the sound signal, and f(t) is the instantaneous frequency of the sound signal.
The sound feature vector is input into a Bayesian model for recognition, the recognition formula being:

P(C|F, E) = P(F|C, E) · P(C|E) / P(F|E)

where F is the sound feature vector, comprising the instantaneous amplitude and instantaneous frequency; P(F|C, E) is the likelihood of the features given class C and environmental factor E; P(C|E) is the prior probability of class C given environmental factor E; P(F|E) is the probability of the sound features F given environmental factor E; and P(C|F, E) is the posterior probability that the sound feature vector F belongs to class C.
And the class selection sub-module is used for selecting the class with the maximum posterior probability as sound class information.
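A minimal sketch of the transform–compensate–analyze steps and the Bayesian selection, assuming SciPy is available; the Hilbert transform is used as one common choice for the nonlinear transformation y(t), and the compensation is modeled as convolution with an assumed inverse-filter impulse response g:

```python
import numpy as np
from scipy.signal import fftconvolve, hilbert

def sound_features(x, fs, g=None):
    """Returns instantaneous amplitude A(t) and frequency f(t) of signal x at fs Hz."""
    if g is not None:
        x = fftconvolve(x, g, mode="same")   # compensation / inverse filtering
    X = hilbert(x)                           # X(t) = x(t) + j*y(t)
    A = np.abs(X)                            # instantaneous amplitude A(t)
    phi = np.unwrap(np.angle(X))             # instantaneous phase phi(t)
    f = np.diff(phi) * fs / (2 * np.pi)      # instantaneous frequency f(t)
    return A, f

def posterior(likelihoods, priors):
    """P(C|F,E) ∝ P(F|C,E) * P(C|E); normalizing by the sum plays the role of P(F|E).
    The class with the maximum posterior is selected as the sound category."""
    joint = np.asarray(likelihoods) * np.asarray(priors)
    return joint / joint.sum()
```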
As an alternative embodiment, the image detection module 30 of the present invention specifically includes:
and the small area processing sub-module is used for preprocessing the small area image to obtain a standard small area image.
And the animal detection sub-module is used for carrying out target detection on the standard small-area image by using the deep learning model, detecting the target and classifying the target to obtain an animal detection result.
And the crop detection sub-module is used for detecting crops in the standard small area image by using the image recognition model to obtain a crop detection result.
As an alternative embodiment, the index calculation module 40 of the present invention specifically includes:
the calculation formula of the pest index is:

BCH = (Σ(P_i · W_i) + Σ(Q_J · S_J · D_J)) × 100

where BCH is the pest index, P_i is the pest and disease degree of the i-th part, W_i is the weight coefficient of the i-th part, Q_J is the number of animals of type J, S_J is the size of animals of type J, and D_J is the damage coefficient of animals of type J.
The risk index calculation formula is:

WX = Σ(R_I · H_I)

where WX is the risk index, R_I is the number of pests and diseases of type I, and H_I is the risk coefficient of pests and diseases of type I.
The above is only a preferred embodiment of the present invention, and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (10)
1. An intelligent farmland monitoring and crop growth automation control method is characterized by comprising the following steps:
step S1: collecting farmland images, and dividing the farmland images to obtain small-area images;
step S2: acquiring sound data of a region corresponding to the small region image, and processing the sound data to obtain sound category information;
step S3: detecting the small area image to obtain a detection result;
step S4: calculating an index according to the detection result;
step S5: analyzing crops according to the sound category information and the index to obtain an analysis result;
step S6: and performing control operation on the equipment according to the analysis result.
2. The method for intelligent farmland monitoring and crop growth automation control according to claim 1, wherein the collecting farmland images, dividing the farmland images to obtain small area images, comprises the following steps:
Preprocessing the collected farmland image to obtain a preprocessed image; the preprocessing operation comprises noise removal, contrast enhancement, edge clipping and size scaling;
starting the preprocessing image from the upper left corner, moving a cutting window according to a step length, filtering a non-farmland area by using a binary image, intercepting a small area image, moving the position of the cutting window according to the step length, and adjusting the position of the cutting window if the position of the cutting window exceeds the boundary of the preprocessing image;
after all the small area images are intercepted, the small area images are saved.
3. The method for automatically controlling intelligent farmland monitoring and crop growth according to claim 1, wherein the steps of obtaining the sound data of the area corresponding to the small area image, and processing the sound data to obtain sound category information comprise:
acquiring sound data of a region corresponding to the small region image;
denoising, dividing and enhancing the sound data to obtain sound signals;
sequentially transforming, compensating, analyzing and identifying the sound signals, wherein a transformation formula is as follows:
X(t) = x(t) + j·y(t)

wherein X(t) is the complex representation of the sound signal x(t), j is the imaginary unit, and y(t) is a nonlinear transformation of x(t);
and compensating the transformed sound signal, wherein the compensation formula is:

X̂(t) = g(t) * Y(X(t))

wherein X̂(t) is the compensated (inverse-filtered) sound signal, g(t) is the impulse response of the inverse filter, * denotes convolution, and Y(X(t)) is the sound signal after passing through the propagation medium;
analyzing the compensated sound signal, wherein the analysis formulas are:

A(t) = |X(t)| = √(x(t)² + y(t)²), φ(t) = arctan(y(t)/x(t)), f(t) = (1/2π)·dφ(t)/dt

wherein A(t) is the instantaneous amplitude of the sound signal, φ(t) is the instantaneous phase of the sound signal, and f(t) is the instantaneous frequency of the sound signal;
inputting the sound feature vector into a Bayesian model for recognition, wherein the recognition formula is:

P(C|F, E) = P(F|C, E) · P(C|E) / P(F|E)

wherein F is the sound feature vector, comprising the instantaneous amplitude and instantaneous frequency, P(F|C, E) is the likelihood probability given class C and environmental factor E, P(C|E) is the prior probability of class C given environmental factor E, P(F|E) is the probability of the sound features F given environmental factor E, and P(C|F, E) is the posterior probability that the sound feature vector F belongs to class C;
and selecting the category with the highest posterior probability as sound category information.
4. The method for intelligent farmland monitoring and crop growth automation control according to claim 1, wherein the detecting the small area image to obtain a detection result specifically comprises:
The detection results comprise animal detection results and crop detection results;
preprocessing the small-area image to obtain a standard small-area image; the preprocessing operation comprises the steps of adjusting contrast, removing shadows and reducing image blurring;
performing target detection on the standard small-area image by using a deep learning model, detecting a target and classifying the target to obtain the animal detection result; the animal detection result comprises animal size and animal species;
performing crop detection on the standard small area image by using an image recognition model to obtain a crop detection result; the crop detection results comprise crop types and plant diseases and insect pests types.
5. The method for intelligent farmland monitoring and crop growth automation control according to claim 1, wherein the calculating an index according to the detection result specifically comprises:
the index includes a pest index and a risk index;
the calculation formula of the pest index is as follows:

BCH = (Σ(P_i · W_i) + Σ(Q_J · S_J · D_J)) × 100

wherein BCH is the pest index, P_i is the pest and disease degree of the i-th part, W_i is the weight coefficient of the i-th part, Q_J is the number of animals of type J, S_J is the size of animals of type J, and D_J is the damage coefficient of animals of type J;
the risk index calculation formula is as follows:

WX = Σ(R_I · H_I)

wherein WX is the risk index, R_I is the number of pests and diseases of type I, and H_I is the risk coefficient of pests and diseases of type I.
6. An intelligent farmland monitoring and crop growth automation control system, the system comprising:
the image acquisition module is used for acquiring farmland images and dividing the farmland images to obtain small-area images;
the sound processing module is used for acquiring sound data of the area corresponding to the small area image, and processing the sound data to obtain sound category information;
the image detection module is used for detecting the small-area image to obtain a detection result;
the index calculation module is used for calculating an index according to the detection result;
the farmland analysis module is used for analyzing crops according to the sound type information and the indexes to obtain analysis results;
and the equipment control module is used for controlling and operating the equipment according to the analysis result.
7. The intelligent farmland monitoring and crop growth automation control system according to claim 6, wherein the image acquisition module specifically comprises:
the preprocessing sub-module is used for preprocessing the collected farmland images to obtain preprocessed images; the preprocessing operation comprises noise removal, contrast enhancement, edge clipping and size scaling;
The area intercepting sub-module is used for moving a cutting window from the upper left corner of the preprocessed image according to the step length, filtering a non-farmland area by using a binary image, intercepting a small area image, moving the position of the cutting window according to the step length, and adjusting the position of the cutting window if the position of the cutting window exceeds the boundary of the preprocessed image;
and the image storage sub-module is used for storing the small area images after intercepting all the small area images.
8. The intelligent farmland monitoring and crop growth automation control system according to claim 6, wherein the sound processing module specifically comprises:
the sound acquisition sub-module is used for acquiring sound data of the region corresponding to the small region image;
the sound processing sub-module is used for carrying out denoising, segmentation and enhancement on the sound data to obtain sound signals;
the signal operation submodule is used for sequentially carrying out transformation, compensation, analysis and identification on the sound signals, wherein a transformation formula is as follows:
X(t) = x(t) + j·y(t)

wherein X(t) is the complex representation of the sound signal x(t), j is the imaginary unit, and y(t) is a nonlinear transformation of x(t);
and compensating the transformed sound signal, wherein the compensation formula is:

X̂(t) = g(t) * Y(X(t))

wherein X̂(t) is the compensated (inverse-filtered) sound signal, g(t) is the impulse response of the inverse filter, * denotes convolution, and Y(X(t)) is the sound signal after passing through the propagation medium;
analyzing the compensated sound signal, wherein the analysis formulas are:

A(t) = |X(t)| = √(x(t)² + y(t)²), φ(t) = arctan(y(t)/x(t)), f(t) = (1/2π)·dφ(t)/dt

wherein A(t) is the instantaneous amplitude of the sound signal, φ(t) is the instantaneous phase of the sound signal, and f(t) is the instantaneous frequency of the sound signal;
inputting the sound feature vector into a Bayesian model for recognition, wherein the recognition formula is:

P(C|F, E) = P(F|C, E) · P(C|E) / P(F|E)

wherein F is the sound feature vector, comprising the instantaneous amplitude and instantaneous frequency, P(F|C, E) is the likelihood probability given class C and environmental factor E, P(C|E) is the prior probability of class C given environmental factor E, P(F|E) is the probability of the sound features F given environmental factor E, and P(C|F, E) is the posterior probability that the sound feature vector F belongs to class C;
and the class selection sub-module is used for selecting the class with the maximum posterior probability as sound class information.
9. The intelligent farmland monitoring and crop growth automation control system of claim 6, wherein the image detection module specifically comprises:
the small area processing sub-module is used for preprocessing the small area image to obtain a standard small area image;
The animal detection sub-module is used for detecting targets of the standard small-area images by using a deep learning model, detecting the targets and classifying the targets to obtain animal detection results;
and the crop detection sub-module is used for detecting crops in the standard small area image by using the image recognition model to obtain a crop detection result.
10. The intelligent farmland monitoring and crop growth automation control system of claim 6, wherein the index calculation module specifically comprises:
the calculation formula of the pest index is:

BCH = (Σ(P_i · W_i) + Σ(Q_J · S_J · D_J)) × 100

wherein BCH is the pest index, P_i is the pest and disease degree of the i-th part, W_i is the weight coefficient of the i-th part, Q_J is the number of animals of type J, S_J is the size of animals of type J, and D_J is the damage coefficient of animals of type J;
the risk index calculation formula is:

WX = Σ(R_I · H_I)

wherein WX is the risk index, R_I is the number of pests and diseases of type I, and H_I is the risk coefficient of pests and diseases of type I.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311331561.5A CN117351181A (en) | 2023-10-16 | 2023-10-16 | Intelligent farmland monitoring and crop growth automatic control method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117351181A true CN117351181A (en) | 2024-01-05 |
Family
ID=89357008
Cited By (2)

Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117557966A * | 2024-01-09 | 2024-02-13 | 南京格瑞物联科技有限公司 | Campus abnormal behavior safety detection method and system based on monitoring image recognition |
CN117557966B * | 2024-01-09 | 2024-04-02 | 南京格瑞物联科技有限公司 | Campus abnormal behavior safety detection method and system based on monitoring image recognition |
Legal Events

Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WW01 | Invention patent application withdrawn after publication | Application publication date: 20240105 |