WO2022094854A1 - Growth monitoring method for crops, and devices and storage medium - Google Patents



Publication number
WO2022094854A1
Authority
WO
WIPO (PCT)
Prior art keywords
map
crops
target image
growth
neural network
Application number
PCT/CN2020/126731
Other languages
French (fr)
Chinese (zh)
Inventor
任创杰
李鑫超
李思晋
Original Assignee
深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Application filed by 深圳市大疆创新科技有限公司

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition

Definitions

  • the present application relates to the technical field of agricultural monitoring, and in particular, to a method for monitoring the growth of crops, an unmanned aerial vehicle, a control terminal, a monitoring device and a storage medium.
  • the method for monitoring the growth of crops, the unmanned aerial vehicle, the control terminal, the monitoring device and the storage medium can quickly determine the growth status of the crops, so that the crops can be processed in time to improve the yield of the crops.
  • the embodiments of the present application provide a method for monitoring the growth of crops, the method comprising:
  • obtain a target image obtained by photographing a target area by the unmanned aerial vehicle, the target area being planted with crops; input the target image into a pre-trained neural network model to obtain a radius estimation map of the crops and confidence maps of multiple growth state categories; perform information fusion according to the radius estimation map and the confidence maps of the multiple growth state categories to obtain a fusion map; and determine the growth status of the crops according to the fusion map.
  • an embodiment of the present application further provides an unmanned aerial vehicle, and the unmanned aerial vehicle includes:
  • a gimbal, wherein the gimbal is installed on the body;
  • a photographing device, wherein the photographing device is installed on the gimbal, and the photographing angle of the photographing device can be adjusted by adjusting the gimbal;
  • the memory is used to store a computer program and a pre-trained neural network model;
  • the processor is used to execute the computer program and implement the following steps when executing the computer program:
  • obtain a target image obtained by photographing a target area by the unmanned aerial vehicle, the target area being planted with crops; input the target image into the pre-trained neural network model to obtain a radius estimation map of the crops and confidence maps of multiple growth state categories; perform information fusion according to the radius estimation map and the confidence maps of the multiple growth state categories to obtain a fusion map; and determine the growth status of the crops according to the fusion map.
  • an embodiment of the present application further provides a control terminal, where the control terminal is connected to the unmanned aerial vehicle in communication and is used to control the flight of the unmanned aerial vehicle, and the unmanned aerial vehicle includes a gimbal and a photographing device mounted on the gimbal; the control terminal includes:
  • the memory is used to store a computer program and a pre-trained neural network model;
  • the processor is used to execute the computer program and implement the following steps when executing the computer program:
  • obtain a target image obtained by photographing a target area by the unmanned aerial vehicle, the target area being planted with crops; input the target image into the pre-trained neural network model to obtain a radius estimation map of the crops and confidence maps of multiple growth state categories; perform information fusion according to the radius estimation map and the confidence maps of the multiple growth state categories to obtain a fusion map; and determine the growth status of the crops according to the fusion map.
  • an embodiment of the present application further provides a monitoring device, the monitoring device comprising:
  • the memory is used to store a computer program and a pre-trained neural network model;
  • the processor is used to execute the computer program and implement the following steps when executing the computer program:
  • obtain a target image obtained by photographing a target area by the unmanned aerial vehicle, the target area being planted with crops; input the target image into the pre-trained neural network model to obtain a radius estimation map of the crops and confidence maps of multiple growth state categories; perform information fusion according to the radius estimation map and the confidence maps of the multiple growth state categories to obtain a fusion map; and determine the growth status of the crops according to the fusion map.
  • an embodiment of the present application further provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the processor implements the steps of the growth monitoring method of any one of the embodiments provided in the present application.
  • the method for monitoring the growth of crops, the drone, the control terminal, the monitoring device and the storage medium disclosed in the embodiments of the present application can accurately monitor the growth status of crops, including different growth status categories (such as normal growth, abnormal growth, or missing seedlings), making it convenient for crop administrators to carry out targeted treatment of the crops in a timely manner, such as fertilizing, replenishing seedlings, or spraying pesticides, thereby increasing crop yield.
  • FIG. 1 is a schematic structural diagram of an unmanned aerial vehicle provided by an embodiment of the present application.
  • FIG. 2 is a schematic block diagram of a control system of an unmanned aerial vehicle provided by an embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of a flight control system provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a scene in which a drone is controlled to photograph, provided by an embodiment of the present application.
  • FIG. 5 is a schematic flowchart of the steps of a model training method provided by an embodiment of the present application.
  • FIG. 6 is a schematic flowchart of the steps of a method for monitoring the growth of crops provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram illustrating a usage process of a neural network model provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram illustrating another usage process of the neural network model provided by an embodiment of the present application.
  • FIG. 9 is a schematic flowchart of the steps of another method for monitoring the growth of crops provided by an embodiment of the present application.
  • FIG. 10 is a schematic flowchart of the steps of a method for determining the actual position of a crop provided by an embodiment of the present application.
  • FIG. 11a and FIG. 11b are schematic diagrams of the principles for determining the actual position of crops provided by embodiments of the present application.
  • FIG. 12 is a schematic block diagram of an unmanned aerial vehicle provided by an embodiment of the present application.
  • FIG. 13 is a schematic block diagram of a control terminal provided by an embodiment of the present application.
  • FIG. 14 is a schematic block diagram of a monitoring apparatus provided by an embodiment of the present application.
  • the analysis of the growth status of crops is generally carried out by manual statistical methods, that is, monitoring personnel are required to conduct statistical analysis on the spot.
  • manual statistical methods are slow in analysis and have no digital records.
  • crop managers cannot carry out targeted remedial treatment in time, such as fertilizing, removing pests from abnormally growing crops, or replanting missing seedlings; because timely remediation cannot be carried out, the final yield of the crops (for example, saplings) decreases.
  • the embodiments of the present application provide a method for monitoring the growth of crops, an unmanned aerial vehicle, a control terminal, a monitoring device, and a storage medium.
  • the method for monitoring the growth of crops specifically uses a drone to process target images obtained by photographing crops grown in a target area, so as to monitor the growth conditions of the crops. Therefore, before introducing the growth monitoring method of crops, the structure and working principle of the UAV are introduced first.
  • FIG. 1 shows the structure of the drone 100 provided by an embodiment of the present application.
  • FIG. 2 shows the structural framework of the control system of the drone 100 provided by the embodiment of the present application.
  • the UAV 100 may include a body 11 , a gimbal 12 , a photographing device 13 , a power system 14 , a control system 15 , and the like.
  • The body 11 may include a fuselage and a foot stand (also called landing gear).
  • the fuselage may include a center frame and one or more arms connected to the center frame, the one or more arms extending radially from the center frame.
  • The landing gear is connected to the fuselage and is used to support the drone 100 when it lands.
  • The gimbal 12 is mounted on the body 11 and is used for mounting the photographing device 13.
  • In some embodiments, the gimbal 12 may include three motors, that is, the gimbal 12 is a three-axis gimbal.
  • By adjusting the gimbal 12, the shooting angle of the photographing device 13 can be adjusted; the shooting angle can be understood as the angle, relative to the horizontal or vertical direction, of the direction in which the lens of the photographing device 13 faces the target to be photographed.
  • The gimbal 12 may further include a controller for controlling the movement of the gimbal 12 by controlling its motors, thereby adjusting the shooting angle of the photographing device 13.
  • the gimbal 12 may be independent of the UAV 100 , or may be a part of the UAV 100 .
  • the motor 122 may be a DC motor or an AC motor; or, the motor 122 may be a brushless motor or a brushed motor.
  • the photographing device 13 can be, for example, a device for capturing images such as a camera or a video camera, and the photographing device 13 can communicate with the control system 15 and perform photographing under the control of the control system 15 .
  • the photographing device 13 is mounted on the body 11 of the drone 100 through the gimbal 12 . It can be understood that, the photographing device 13 can also be directly fixed on the body 11 of the drone 100, so that the gimbal 12 can be omitted.
  • the photographing device 13 may be controlled to photograph the target area from a bird's-eye view to obtain the target image.
  • The target area is an area planted with crops that the user needs to monitor, such as a particular field plot; the crops are, for example, saplings or grass seedlings. The target image can thus be identified, processed and analyzed to determine the growth status of the crops in the target area.
  • In the top-view angle, the optical axis of the lens of the photographing device 13 is perpendicular, or approximately perpendicular, to the target area to be photographed; approximately perpendicular means, for example, 88 degrees or 92 degrees, which is not limited here.
  • Shooting from a bird's-eye view can improve the accuracy and efficiency of image recognition, which is conducive to monitoring the growth of crops.
  • In some embodiments, the photographing device 13 may include a monocular camera or a binocular camera, which serve different functions.
  • The monocular camera is used to photograph a target image of the target area, which may specifically be an RGB image.
  • The binocular camera can measure the distance from a target object to the drone, including the distance from the crops to the drone, that is, it can obtain a depth map of the crops; the depth map can be used in conjunction with the target image to improve the recognition of the state category and radius of the crops.
  • the depth image can be used in combination with the target image to analyze the growth conditions of crops in the target area, so that the growth conditions of crops can be more accurately analyzed.
  • The power system 14 may include one or more electronic speed controllers (ESCs), one or more propellers, and one or more motors corresponding to the one or more propellers, wherein each motor is connected between an electronic speed controller and a propeller, and the motors and propellers are arranged on the arms of the drone 100.
  • The electronic speed controller is used to receive the driving signal generated by the control system 15 and provide a driving current to the motor according to the driving signal, so as to control the speed of the motor and drive the propeller to rotate, thereby providing power for the flight of the UAV 100 and enabling the UAV 100 to move in one or more degrees of freedom. In some embodiments, the drone 100 can rotate about one or more rotation axes.
  • For example, the above-mentioned rotation axes may include a roll axis (Roll), a yaw axis (Yaw), and a pitch axis (Pitch).
  • the motor may be a DC motor or an AC motor.
  • the motor may be a brushless motor or a brushed motor.
  • Control system 15 may include a controller and a sensing system.
  • the controller is used to control the flight of the UAV 100, for example, the flight of the UAV 100 can be controlled according to the attitude information measured by the sensing system. It should be understood that the controller can control the UAV 100 according to pre-programmed instructions, and can also control the UAV 100 by responding to one or more control instructions from the control terminal.
  • the sensing system is used to measure the attitude information of the UAV 100, that is, the position information and state information of the UAV 100 in space, such as three-dimensional position, three-dimensional angle, three-dimensional velocity, three-dimensional acceleration, and three-dimensional angular velocity.
  • the sensing system may include at least one of a gyroscope, an ultrasonic sensor, an electronic compass, an inertial measurement unit (Inertial Measurement Unit, IMU), a visual sensor, a global navigation satellite system, a barometer, and other sensors.
  • the global navigation satellite system may be the Global Positioning System (GPS).
  • the controller may include one or more processors and memory.
  • the processor may be, for example, a micro-controller unit (Micro-controller Unit, MCU), a central processing unit (Central Processing Unit, CPU), or a digital signal processor (Digital Signal Processor, DSP), and the like.
  • The memory may be a Flash chip, a read-only memory (ROM), a magnetic disk, an optical disc, a USB flash drive, or a removable hard disk.
  • the memory of the drone needs to store a pre-trained neural network model.
  • The input of the neural network model is the target image captured by the drone 100, and the output is the radius estimation map of the crops and the confidence maps of multiple growth state categories.
  • In order to save the storage space of the UAV, the pre-trained neural network model can be compressed and the compressed neural network model stored in the memory of the UAV, where the compression process includes at least pruning and the like.
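  • As an illustration only, the pruning mentioned above can be sketched as simple magnitude pruning; the function name, the example weights, and the 50% sparsity below are assumptions for illustration, not the patent's actual compression procedure.

```python
import numpy as np

def prune_weights(weights, sparsity=0.5):
    """Magnitude pruning: zero out the smallest-magnitude fraction of
    weights so the stored model compresses better (one of the
    compression steps the text mentions)."""
    w = np.asarray(weights, dtype=float).copy()
    k = int(w.size * sparsity)
    if k:
        threshold = np.sort(np.abs(w), axis=None)[k - 1]
        w[np.abs(w) <= threshold] = 0.0
    return w

# Half of the four example weights are zeroed out.
pruned = prune_weights([0.1, -0.9, 0.05, 0.7], sparsity=0.5)
```

In practice, pruning is applied per layer of the trained network and is usually followed by fine-tuning to recover accuracy.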
  • the UAV 100 may further include a radar device, and the radar device may be installed on the UAV 100, specifically, may be installed on the body 11 of the UAV 100.
  • In some embodiments, the radar device is installed on the landing gear of the UAV 100; the radar device is connected to the control system 15 in communication, and transmits the collected observation data to the control system 15 for processing.
  • The UAV 100 may include two or more landing gear legs, and the radar device is mounted on one of them.
  • the radar device may also be mounted on other positions of the UAV 100, which is not specifically limited.
  • the radar device mainly includes a radio frequency front-end module and a signal processing module.
  • the radio frequency front-end module may include a transmitting antenna and a receiving antenna.
  • the transmitting antenna is used to send signals to the target, and the receiving antenna is used to receive the signal reflected by the target.
  • The signal processing module is responsible for generating the modulation signal and for processing and analyzing the collected intermediate-frequency signal; the target here is, for example, an obstacle such as a building, an iron tower, or a tree.
  • the radar device can specifically measure the distance from the crop to the drone, and then obtain the depth map of the crop.
  • the methods for monitoring the growth of crops are all realized by processing the acquired target images of crops photographed by the drone 100 .
  • any one of the crop growth monitoring methods provided in this application can be applied to the UAV 100 , and is specifically applied to the controller of the UAV 100 .
  • The controller of the unmanned aerial vehicle 100 is used to: obtain a target image obtained by photographing a target area by the unmanned aerial vehicle, the target area being planted with crops; input the target image into the pre-trained neural network model to obtain a radius estimation map of the crops and confidence maps of multiple growth state categories; perform information fusion according to the radius estimation map and the confidence maps of the multiple growth state categories to obtain a fusion map; and determine the growth status of the crops according to the fusion map.
  • Shooting target images by drones can improve the efficiency of statistical analysis.
  • By using the neural network model to learn from the target images, the radius estimation map of the crops and the confidence maps of multiple growth state categories can be obtained, so that the growth situation of the crops can be analyzed more accurately.
  • the method for monitoring the growth of crops provided by the embodiments of the present application can be applied to other electronic devices, such as a control terminal of a flight control system, in addition to the drone.
  • FIG. 3 shows a structure of a flight control system provided by an embodiment of the present application.
  • the flight control system includes an unmanned aerial vehicle 100 and a control terminal 200 .
  • the control terminal 200 is located at the ground end of the unmanned aerial vehicle system 100 , and can communicate with the unmanned aerial vehicle 100 in a wireless manner, so as to remotely control the unmanned aerial vehicle 100 .
  • control terminal 200 may include a remote control, a smart phone, a tablet computer or a notebook computer, and the like.
  • When the control terminal 200 executes the method for monitoring the growth of crops, the control terminal 200 should also pre-store a trained neural network model; the input of the neural network model is the target image captured by the drone 100, and the output is the radius estimation map of the crops and the confidence maps of multiple growth state categories.
  • The control terminal 200 is used to: obtain a target image obtained by photographing a target area by the unmanned aerial vehicle, the target area being planted with crops; input the target image into the pre-trained neural network model to obtain a radius estimation map of the crops and confidence maps of multiple growth state categories; perform information fusion according to the radius estimation map and the confidence maps of the multiple growth state categories to obtain a fusion map; and determine the growth status of the crops according to the fusion map.
  • Shooting target images by drones can improve the efficiency of statistical analysis.
  • By using the neural network model to learn from the target images, the radius estimation map of the crops and the confidence maps of multiple growth state categories can be obtained, so that the growth situation of the crops can be analyzed more accurately.
  • The user can control the drone 100 to fly to a target area through the control terminal 200, where the target area is planted with crops, and the target area is photographed by the photographing device of the drone 100 to obtain a target image. The drone 100 sends the captured target image to the control terminal 200; the control terminal 200 inputs the target image into the neural network model for learning to obtain the radius estimation map of the crops and the confidence maps of multiple growth state categories, performs information fusion according to the radius estimation map and the confidence maps of the multiple growth state categories to obtain a fusion map, and determines the growth state of the crops according to the fusion map.
  • the growth monitoring method can improve the monitoring efficiency, and at the same time can monitor the growth status of multiple growth status categories of crops, so that the user can perform targeted remedial treatment, thereby improving the yield of crops.
  • The method for monitoring the growth of crops requires the use of a neural network model in addition to an unmanned aerial vehicle. Whether the method is applied to an unmanned aerial vehicle or to a control terminal, the device in question (the unmanned aerial vehicle, the control terminal, etc.) needs to store a pre-trained neural network model.
  • the training method of a neural network model provided by the embodiments of the present application is first introduced.
  • the training can also be performed in a model training device, where the model training device is, for example, a computer device including a GPU, so as to improve the training efficiency of the model.
  • FIG. 5 shows a training method of a neural network model provided by an embodiment of the present application.
  • the neural network model obtained by training can output a radius estimation map of crops and a confidence map of multiple growth state categories.
  • the training method of the model includes steps S101 to S103.
  • sample data includes multiple images of the same crop, and the images are marked with multiple growth state categories of the crop and a radius value of the crop;
  • In some embodiments, the neural network used is a fully convolutional neural network.
  • Other convolutional neural networks can also be used.
  • The network architecture adopts GoogLeNet developed by Google; of course, other network architectures, such as AlexNet or ResNet, can also be used.
  • Cropping refers to cropping all images.
  • Labeling refers to labeling the center coordinates and radius of the crops in the multiple images and labeling the growth state categories of the crops, with the expectation that the trained neural network model outputs the radius estimation map of the crops and the confidence maps of multiple growth state categories.
  • The labeling specifically includes generating a circle with a Gaussian distribution on the corresponding confidence feature map and recording the normalized crop radius on the corresponding radius feature map; Gaussian circles and normalized crop radii are generated for all labeled coordinates, so that the model can output a crop radius estimation map and confidence maps of growth state categories, where the radius estimation map includes the normalized crop radii and the confidence map of a growth state category includes Gaussian circles.
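  • The label generation described above can be sketched as follows (an illustrative Python/NumPy sketch; the helpers `gaussian_circle` and `radius_label`, the choice of sigma = radius / 3, and normalization by a fixed maximum radius are assumptions the text does not fix).

```python
import numpy as np

def gaussian_circle(heatmap, cx, cy, radius):
    """Stamp a 2D Gaussian (peak 1.0) centred on the labeled crop
    coordinate onto the confidence feature map; sigma is tied to the
    labeled radius (an assumed spread)."""
    h, w = heatmap.shape
    sigma = max(radius / 3.0, 1.0)
    ys, xs = np.mgrid[0:h, 0:w]
    g = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))
    np.maximum(heatmap, g, out=heatmap)  # keep the max where circles overlap
    return heatmap

def radius_label(radius_map, cx, cy, radius, max_radius):
    """Record the normalized crop radius on the radius feature map."""
    radius_map[int(cy), int(cx)] = radius / max_radius
    return radius_map

# One labeled crop at (x=20, y=30) with radius 9 on a 64x64 feature map.
conf = gaussian_circle(np.zeros((64, 64)), cx=20, cy=30, radius=9)
rad = radius_label(np.zeros((64, 64)), cx=20, cy=30, radius=9, max_radius=32)
```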
  • The growth state category includes at least one or more of the normal growth category, the abnormal growth category, and the missing-seedling growth category, where the abnormal growth category covers growth that differs from both normal growth and missing seedlings, such as pest damage or stunted growth.
  • The processed images are divided into two groups, one used as training data and the other as test data; specifically, they can be grouped according to a certain ratio, for example 9:1 or 8:2.
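  • The grouping step can be sketched as follows (the helper name and the fixed random seed are illustrative assumptions):

```python
import random

def split_dataset(images, ratio=0.9, seed=0):
    """Shuffle the processed images and split them into a training
    group and a test group at the given ratio (e.g. 9:1 or 8:2)."""
    items = list(images)
    random.Random(seed).shuffle(items)
    cut = int(len(items) * ratio)
    return items[:cut], items[cut:]

# 100 processed images split 9:1 into 90 training and 10 test items.
train_set, test_set = split_dataset(range(100), ratio=0.9)
```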
  • the neural network model can learn target images including crops, and can output a radius estimation map of crops and a confidence map of multiple growth state categories.
  • Depth maps of the crops can be added to the sample data for training the model, so that the obtained neural network model takes the depth-map factors into account and the trained neural network model can output more accurate radius estimation maps and confidence maps of multiple growth state categories.
  • In order to further improve the monitoring accuracy of the growth status of crops, the crops can be classified to obtain crop types, and a corresponding neural network model can be trained for each type, for example for sapling types, grass seedling types, vegetable types, and so on.
  • It is also possible to train a neural network model for each crop and establish a correspondence between the trained neural network model and the crop, so that when the target image is captured, the crop in the target image can be identified to determine the corresponding neural network model.
  • the user can also select the neural network model corresponding to the crop after knowing the crop to be monitored, and then determine the neural network model to be used for the crop according to the user's selection operation.
  • the neural network model can be stored in electronic devices such as drones and control terminals, so that these electronic devices can use the neural network model to perform the crop monitoring method provided by the embodiments of the present application, thereby realizing Growth monitoring of crops.
  • FIG. 6 is a schematic flowchart of steps of a method for monitoring the growth of crops provided by an embodiment of the present application. The following is an example of the method for monitoring growth of crops being applied to a control terminal of a flight control system.
  • the method for monitoring the growth of crops includes steps S201 to S204.
  • the target area can be the plot that the user needs to monitor, and the target area is planted with crops, such as fruit trees and corn seedlings.
  • When the user needs to monitor the target area, the user can control the drone to fly to the target area and adjust the photographing device of the drone to photograph the target area to obtain a target image, which includes the crops to be monitored.
  • Specifically, the photographing device of the drone can be controlled to photograph the target area from a bird's-eye view; a target image captured from a bird's-eye view makes it easy to identify the radius estimation map of the crops and the confidence maps of the multiple growth state categories, thereby improving the monitoring efficiency and accuracy for the crops.
  • In some embodiments, the captured target image is an RGB image; pictures in other formats can be converted into RGB images, and the RGB image is input into the neural network model for learning.
  • RGB images can not only improve the accuracy of model recognition, but also facilitate the combination with the semantic map of the target area to monitor the growth of crops.
  • A depth map corresponding to the target image of the crops in the target area captured by the drone may also be obtained, where the depth map includes depth information of the crops, and the depth information is the distance from the crops to the drone.
  • The pre-trained neural network model is obtained by adopting the above model training method and can learn the target image so as to output the radius estimation map of the crops in the target image and the confidence maps of multiple growth state categories; it is, for example, a fully convolutional neural network model.
  • the growth state category includes at least one or more of a normal growth category, an abnormal growth category, and a seedling-deficient growth category, and may of course also include other categories.
  • For example, the abnormal growth category may be further divided into a stunted-growth category, an insect-pest category, and so on.
  • Each growth state category corresponds to a confidence map, and the confidence map includes confidence values used to represent the current growth state category of the crops, specifically probability values in [0, 1].
  • The obtained target image is input into the fully convolutional neural network model; the target image is learned through the fully convolutional neural network model, and the radius estimation map of the crops in the target image and the confidence maps of multiple growth state categories are output.
  • Category 1 in FIG. 7 can be a normal growth category, category 2 can be an abnormal growth category, and so on; each growth state category corresponds to a confidence map.
  • The confidence map includes a Gaussian circle and a corresponding radius value for at least one crop, where the Gaussian circle can represent the probability value that the crop belongs to a certain growth state category, such as the probability value of the normal growth category; of course, the confidence map can also include the confidence value.
  • The radius estimation map includes the normalized radius values of the crops.
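  • Assuming the network emits one output channel per growth state category plus one radius channel (a layout the text implies but does not fix), the model outputs could be unpacked as follows; the category names and the `split_model_output` helper are hypothetical.

```python
import numpy as np

CATEGORIES = ["normal", "abnormal", "missing"]  # assumed category names

def split_model_output(output):
    """Split a (C + 1, H, W) network output into C per-category
    confidence maps and one normalized radius-estimation map; the
    channel layout (categories first, radius last) is assumed."""
    conf_maps = {name: output[i] for i, name in enumerate(CATEGORIES)}
    radius_map = output[len(CATEGORIES)]
    return conf_maps, radius_map

# A random tensor stands in for a real network output on one image.
dummy_output = np.random.default_rng(0).random((len(CATEGORIES) + 1, 32, 32))
conf_maps, radius_map = split_model_output(dummy_output)
```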
  • In order to better count the growth status of crops, the confidence map includes the target position of a crop in the confidence map and a confidence value corresponding to the target position, where the confidence value is a probability value used to represent the reliability of the growth state category of the crop.
  • For example, the fully convolutional neural network model is used to detect saplings and outputs confidence maps of three categories: confidence map 1 corresponding to the normal growth category, confidence map 2 corresponding to the abnormal growth category, and confidence map 3 corresponding to the missing-seedling growth category.
  • Confidence map 1 includes the target positions of multiple saplings in the confidence map and the corresponding confidence values.
  • Similarly, confidence maps 2 and 3 also include the target positions of multiple saplings in the confidence map and the corresponding confidence values, except that confidence map 2 corresponds to saplings of the abnormal growth category and confidence map 3 corresponds to the positions of missing seedlings.
  • In some embodiments, the target image and the depth map corresponding to the target image may also be input into a pre-trained neural network model, where the training data used in training the neural network model includes depth information of crops, so that the neural network model can learn from the target image and the depth map and output the radius estimation map of the crops and the confidence maps of multiple growth state categories. Using the depth map can further improve the accuracy of the output confidence maps and radius estimation map.
  • For example, the obtained target image A and the depth map A corresponding to target image A are input into the fully convolutional neural network model; the target image is learned through the fully convolutional neural network model, and the radius estimation map of the crops in the target image and the confidence maps of multiple growth state categories are output.
  • In some embodiments, the depth map may also be normalized. Specifically, a reference parameter is first determined, and then the depth information of the crops included in the depth map is normalized according to the reference parameter to obtain a normalized depth map, where the reference parameter serves as the base for normalization; for example, the base is subtracted from the depth information of each crop.
  • Specifically, target crops can be selected from the multiple crops in the target image according to the depth information, and the average value of the depth information of the target crops can be used as the base; for example, crops whose depth information is less than a preset threshold are selected as the target crops.
  • Alternatively, a preset proportion of the crops with the smallest depth information is selected as the target crops; for example, with a preset proportion of 1%, the depth information of the selected 1% of crops is less than the depth information of the remaining 99% of the crops.
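  • The depth normalization described above can be sketched as follows; `normalize_depths` is a hypothetical helper, and working with a flat list of per-crop depth values (rather than a full depth map) is a simplifying assumption.

```python
import numpy as np

def normalize_depths(crop_depths, percent=0.01):
    """Select the `percent` of crops closest to the drone (smallest
    depth) as target crops, take the mean of their depth values as the
    base, and subtract the base from every crop's depth."""
    depths = np.asarray(crop_depths, dtype=float)
    k = max(1, int(round(len(depths) * percent)))
    base = np.sort(depths)[:k].mean()
    return depths - base

# With percent=0.25, the single closest crop (depth 10.0) sets the base.
normed = normalize_depths([10.0, 10.5, 11.0, 12.0], percent=0.25)
```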
  • the radius estimate map and the confidence maps of multiple growth state categories are fused to obtain a fusion map.
  • the fusion map includes information corresponding to confidence maps and radius estimation maps of multiple growth state categories, thereby facilitating statistics on the growth states of crops according to the fusion map.
• When performing information fusion, the radius estimation map may be used as the reference map, and the confidence maps of the multiple growth state categories fused and superimposed onto the radius estimation map to obtain the fusion map.
• When a confidence map is superimposed on the radius estimation map, specifically, the target position and confidence value of each crop in the confidence map can be added to the corresponding crop in the radius estimation map.
• Specifically, the number of confidence values in each of the confidence maps of the multiple growth state categories can be counted to determine a stacking sequence for those confidence maps, and the confidence maps are then fused and superimposed onto the radius estimation map according to the stacking sequence.
  • the number of saplings corresponding to the normal growth category may be relatively large, so the number of corresponding confidence levels is relatively large, while the number of saplings corresponding to the abnormal growth category and the lacking seedling growth category may be relatively small, so the number of corresponding confidence levels is relatively small.
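The stacking-order fusion described above can be sketched as follows. The exact fusion encoding is not specified in the text; this sketch assumes each map is a dense array, stacks the maps with the most detections first so that sparse categories (abnormal growth, missing seedlings) land on top and are not occluded, and keeps the topmost category and confidence at each pixel:

```python
import numpy as np

def fuse_maps(radius_map, confidence_maps):
    """Superimpose per-category confidence maps onto the radius
    estimation map, used as the reference layer.

    confidence_maps: dict mapping a category id to an (H, W) array.
    Categories with more non-zero confidences are applied first, so
    sparser categories overwrite them and remain visible on top.
    Returns the radius map plus fused category and confidence layers.
    """
    order = sorted(confidence_maps,
                   key=lambda c: np.count_nonzero(confidence_maps[c]),
                   reverse=True)  # many detections first, few last (on top)
    fused_cat = np.zeros(radius_map.shape, dtype=int)
    fused_conf = np.zeros(radius_map.shape, dtype=float)
    for cat in order:
        conf = confidence_maps[cat]
        mask = conf > 0
        fused_cat[mask] = cat
        fused_conf[mask] = conf[mask]
    return radius_map, fused_cat, fused_conf
```

Design note: applying the abundant normal-growth map first matches the text's point that the normal category has many confidence values while abnormal and missing-seedling categories have few.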
• Max-value pooling may also be performed on the confidence map, after which invalid crops are deleted according to a preset threshold and valid crops retained, where the pixel values corresponding to the invalid crops are less than or equal to the preset threshold.
  • the preset threshold can be set by the user in order to filter out invalid crops.
• The size of the convolution kernel used is, for example, 5×5; the kernel size can also take other values.
• The proportions of the normal growth category, the abnormal growth category, and the missing-seedling category in the fusion map can be counted; alternatively, the positions of crops in the abnormal growth category and the positions corresponding to missing seedlings can be counted; or the radius values of the crops can be counted, and whether the overall growth of the target area meets the expected requirements determined according to the radius values.
• The growth status of the crops is determined according to the fusion map. Specifically, the number of crops corresponding to the normal growth category and the numbers corresponding to the other categories can be counted in the fusion map, and the overall growth status of the crops determined from these counts. The other categories are, for example, the abnormal growth category and the missing-seedling category.
• For example, if the calculated proportion of the normal growth category is 95% and the user's expected value is 90%, it can be determined that the saplings in the target area are growing well.
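The statistics step above can be sketched as follows (the category ids and function names are hypothetical; the 90% expected value mirrors the text's example):

```python
import numpy as np

NORMAL, ABNORMAL, MISSING = 1, 2, 3  # hypothetical category ids

def growth_statistics(fused_cat):
    """Count the proportion of each growth state category among the
    crops marked in the fused category layer (0 = background, ignored)."""
    crops = fused_cat[fused_cat > 0]
    if crops.size == 0:
        return {}
    return {cat: np.count_nonzero(crops == cat) / crops.size
            for cat in (NORMAL, ABNORMAL, MISSING)}

def overall_ok(stats, expected_normal=0.90):
    """Overall growth is good when the normal-growth proportion
    reaches the user's expected value (90% in the text's example)."""
    return stats.get(NORMAL, 0.0) >= expected_normal
```

For 19 normal saplings and 1 abnormal one, the normal proportion is 95%, which exceeds the 90% expectation, so the area is judged to be growing well.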
• A semantic map of the target area may also be generated from the multiple color-channel information of the acquired target image, and the radius estimation map and the confidence maps of the multiple growth state categories used to mark the crops in the semantic map. The semantic map can thus contain richer information, which is convenient for users.
• A semantic map of the target area may also be obtained, where the semantic map may be generated by photographing the target area in advance and applying semantic segmentation to the captured image; different color areas in the map represent different semantic categories.
• The semantic map can be corrected according to the fusion map to improve the accuracy of the semantic map, or the fusion map can be corrected according to the semantic map to improve the accuracy of the fusion map. As a result, the accuracy of crop monitoring can be improved, thereby increasing crop yield.
• In this way, the target image captured by the drone is acquired and learned by the neural network model, so that the growth status of the crops can be accurately monitored; for example, the different growth state categories of the crops (such as normal growth, abnormal growth, or missing seedlings) can be obtained, making it convenient for crop administrators to treat the crops in a timely and targeted manner, such as fertilizing, replanting seedlings, or spraying pesticides, which can improve crop yields.
  • FIG. 9 is a schematic flow chart of the steps of another method for monitoring the growth of crops provided by the embodiments of the present application.
• The following description takes, as an example, application of the crop growth monitoring method to a control terminal of a flight control system.
  • the growth monitoring method includes steps S301 to S306.
  • image recognition technology can be used to identify the crops in the target image to determine the type of the crops, such as extracting the characteristics of the crops in the target image, and identifying the types of crops according to the characteristics of the crops.
• A crop identification model can also be used to determine the type of crops; the crop identification model can be obtained by training based on a convolutional neural network.
  • the types of crops can be, for example, grains, vegetables, fruits, medicinal materials, and of course, specific names of crops, such as apple trees, peach trees, corn seedlings, cotton seedlings, and so on.
• A neural network model can be trained for each type of crop, that is, different types of crops correspond to different neural network models, and a correspondence between crop types and neural network models can be established.
• After the type of the crops in the target image is determined, the neural network model to be used is determined according to this correspondence.
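The crop-type-to-model correspondence can be sketched as a simple registry lookup (the registry contents and placeholder model names are hypothetical; in practice each entry would be a trained network):

```python
# Hypothetical registry mapping crop types to their trained models.
# The string values are placeholders standing in for model objects.
MODEL_REGISTRY = {
    "apple tree": "apple_tree_model",
    "corn seedling": "corn_seedling_model",
}

def select_model(crop_type):
    """Look up the neural network model corresponding to the
    identified crop type, per the correspondence described above."""
    try:
        return MODEL_REGISTRY[crop_type]
    except KeyError:
        raise ValueError(f"no trained model registered for {crop_type!r}")
```

Usage: once image recognition identifies the crops as apple trees, `select_model("apple tree")` returns the model to run on the target image.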
• The target image is input into the determined neural network model to obtain the radius estimation map of the crops and the confidence maps of the multiple growth state categories, so that information fusion can be performed according to the radius estimation map and the confidence maps of the multiple growth state categories to obtain a fusion map, and the growth state of the crops can then be statistically analyzed.
• Whether the growth of the crops meets the preset growth requirements is determined according to the statistical results. If not, the actual positions of the crops can be output so that the user can treat them accordingly; the actual positions and growth state categories of the crops that do not meet the preset growth requirements can of course also be output, so that the user can carry out targeted treatment, thereby improving crop yield.
• The preset growth requirements include at least one of the following: the proportion of crops in the abnormal growth category is greater than a first preset threshold; the proportion of crops in the missing-seedling category is greater than a second preset threshold; and the proportion of crops in the normal growth category is less than a third preset threshold. The first, second, and third preset thresholds are set according to actual needs, and their sizes are not limited here.
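The threshold check above can be sketched as follows. The text leaves the threshold values to actual needs, so the defaults here are placeholders, and the interpretation of the three conditions as a failure test (triggering output of crop positions) is an assumption:

```python
def fails_growth_requirements(stats, t1=0.05, t2=0.05, t3=0.90):
    """Evaluate the preset growth requirements from the text: the
    abnormal proportion exceeds the first threshold, the
    missing-seedling proportion exceeds the second threshold, or the
    normal proportion falls below the third threshold. Any one
    condition flags the area for targeted treatment.
    """
    return (stats.get("abnormal", 0.0) > t1
            or stats.get("missing", 0.0) > t2
            or stats.get("normal", 0.0) < t3)
```

For example, with 95% normal, 2% abnormal, and 1% missing, no condition fires; raising the abnormal share to 10% trips the first threshold.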
  • FIG. 10 shows a method for determining the actual position of a crop provided by an embodiment of the present application, and the method includes the following contents:
  • S403. Determine the actual position of the crop according to the flying position, the flying height, the angle of view and the relative position.
  • the corresponding flight information of the UAV when capturing the target image can also be acquired.
  • the flight information includes at least the flight position and the flight height.
• The flight position is the position of the drone in the world coordinate system, which can be measured by the GPS on the drone; the flight height is the distance from the drone to the surface of the crops, which can be measured by a radar device or binocular ranging.
• If the actual position of a certain crop needs to be determined, it is also necessary to determine the relative position of the crop with respect to the center point of the target image. The actual position of the crop can then be conveniently determined according to the flight position, flight height, field of view, and relative position.
• For example, the drone shoots target image A from a top-down angle, and the drone's corresponding position in target image A is the center point in Figure 11a. At this time, the measured flight position of the drone is (x₀, y₀), the flight height is h, and the field of view of the photographing device when capturing target image A is θ.
• If sapling a is growing abnormally, its actual position needs to be determined so that the user can quickly locate sapling a and treat it.
• Both the actual position of sapling a and the flight position of the drone are positions in the world coordinate system, which makes it convenient for the user to use GPS for positioning.
• The relative position may include the distance d and the angle α of sapling a with respect to the center point of target image A. The distance d is a distance measured in target image A, and the actual distance in the world coordinate system corresponding to d needs to be determined.
• Specifically, the actual distance corresponding to the half-width L of target image A in the world coordinate system can be determined using the trigonometric relationship between the field of view θ and the flying height h when the photographing device captured target image A; then, from the proportional relationship between d and the half-width L in target image A, the actual distance corresponding to d in the world coordinate system can be determined.
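The geometric reasoning above can be sketched as follows: the image half-width L maps to h·tan(θ/2) on the ground, which fixes a metres-per-pixel scale for converting d into a world offset. The function name and the convention that α is measured from the x-axis are assumptions (the text does not fix the angle convention):

```python
import math

def crop_world_position(flight_pos, height, fov, d_pixels, angle,
                        half_width_pixels):
    """Convert a crop's in-image offset (distance d, angle) into a
    world-coordinate position, following the text's construction.

    flight_pos: (x0, y0) of the drone in world coordinates.
    height:     flying height h above the crop surface.
    fov:        field of view theta of the shot, in radians.
    """
    x0, y0 = flight_pos
    half_width_m = height * math.tan(fov / 2.0)     # ground half-width for L
    metres_per_pixel = half_width_m / half_width_pixels
    d_m = d_pixels * metres_per_pixel               # proportional scaling of d
    return (x0 + d_m * math.cos(angle), y0 + d_m * math.sin(angle))
```

For instance, at h = 100 m with a 90° field of view, the half-width corresponds to 100 m on the ground; if the image half-width is 100 pixels, a sapling 50 pixels from the center along the x-axis lies 50 m east of the drone's GPS position.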
• In this way, the target image captured by the drone is acquired and learned by the neural network model, so that the growth status of the crops can be accurately monitored; for example, the different growth state categories (such as normal growth, abnormal growth, or missing seedlings) can be obtained, and users can quickly and timely carry out targeted treatments on the crops, such as fertilizing, replanting seedlings, or spraying pesticides, thereby increasing crop yield.
  • FIG. 12 is a schematic block diagram of an unmanned aerial vehicle provided by an embodiment of the present application.
• As shown in FIG. 12, the drone 100 includes one or more processors 101, a memory 102, and a photographing device 13.
  • the processor 101 may be, for example, a micro-controller unit (Micro-controller Unit, MCU), a central processing unit (Central Processing Unit, CPU), or a digital signal processor (Digital Signal Processor, DSP), or the like.
• The memory 102 may be a flash chip, a read-only memory (ROM), a magnetic disk, an optical disk, a USB flash drive, a removable hard disk, or the like.
  • the photographing device 13 is used for photographing crops planted in the target area to obtain a target image.
  • the memory 102 is used for storing a computer program; the processor 101 is used for executing the computer program, and when executing the computer program, executes any one of the crop growth monitoring methods provided in the embodiments of this application.
  • the processor is configured to execute the computer program and implement the following steps when executing the computer program:
• FIG. 13 is a schematic block diagram of a control terminal provided by an embodiment of the present application. As shown in FIG. 13, the control terminal 200 includes one or more processors 201 and a memory 202.
  • the processor 201 may be, for example, a micro-controller unit (Micro-controller Unit, MCU), a central processing unit (Central Processing Unit, CPU), or a digital signal processor (Digital Signal Processor, DSP), or the like.
• The memory 202 may be a flash chip, a read-only memory (ROM), a magnetic disk, an optical disk, a USB flash drive, a removable hard disk, or the like.
  • the memory 202 is used for storing a computer program; the processor 201 is used for executing the computer program and when executing the computer program, executes any one of the crop growth monitoring methods provided in the embodiments of this application.
  • the processor is configured to execute the computer program and implement the following steps when executing the computer program:
  • FIG. 14 is a schematic block diagram of a monitoring apparatus provided by an embodiment of the present application.
• As shown in FIG. 14, the monitoring device 300 includes one or more processors 301 and a memory 302.
  • the processor 301 may be, for example, a micro-controller unit (Micro-controller Unit, MCU), a central processing unit (Central Processing Unit, CPU), or a digital signal processor (Digital Signal Processor, DSP), or the like.
• The memory 302 may be a flash chip, a read-only memory (ROM), a magnetic disk, an optical disk, a USB flash drive, a removable hard disk, or the like.
  • the memory 302 is used for storing a computer program; the processor 301 is used for executing the computer program and when executing the computer program, executes any one of the crop growth monitoring methods provided in the embodiments of the present application.
  • the monitoring apparatus 300 may be configured in an electronic device to implement any one of the methods for monitoring the growth of crops provided in the embodiments of the present application, for example, it may be configured in an unmanned aerial vehicle or a control terminal.
• The embodiments of the present application further provide a computer-readable storage medium storing a computer program, where the computer program includes program instructions, and a processor executes the program instructions to implement the steps of any one of the methods for monitoring the growth of crops provided in the above embodiments.
• The computer-readable storage medium may be an internal storage unit of the drone or the control terminal described in any of the foregoing embodiments, such as the hard disk or memory of the drone.
  • the computer-readable storage medium can also be an external storage device of the drone, such as a plug-in hard disk equipped on the drone, a smart memory card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, flash memory card (Flash Card), etc.


Abstract

A growth monitoring method for crops, and an unmanned aerial vehicle, a control terminal, a monitoring apparatus, and a storage medium. The growth monitoring method for crops comprises: acquiring a target image obtained by an unmanned aerial vehicle photographing a target area, wherein crops are grown in the target area; inputting the target image into a pre-trained neural network model to obtain a radius estimation map of the crops and a confidence map of a plurality of growth state categories; performing information fusion according to the radius estimation map and the confidence map of the plurality of growth state categories, so as to obtain a fusion map; and determining the growth status of the crops according to the fusion map.

Description

Crop Growth Monitoring Method, Device, and Storage Medium

Technical Field

The present application relates to the technical field of agricultural monitoring, and in particular to a method for monitoring the growth of crops, an unmanned aerial vehicle, a control terminal, a monitoring device, and a storage medium.
Background

At present, the growth status of crops (such as saplings) is generally analyzed by manual statistics. However, manual statistical analysis is slow, leaves no digital records, and is time-consuming and labor-intensive; sometimes locations where saplings grow abnormally or are missing are not discovered in time and therefore cannot be remedied, reducing crop yields. There is thus an urgent need for a crop growth monitoring method that solves the above problems.
SUMMARY OF THE INVENTION

The method for monitoring the growth of crops, the unmanned aerial vehicle, the control terminal, the monitoring device, and the storage medium provided by the embodiments of the present application can quickly determine the growth status of crops so that they can be treated in time to improve crop yield.

In a first aspect, the embodiments of the present application provide a method for monitoring the growth of crops, the method comprising:

obtaining a target image captured by an unmanned aerial vehicle of a target area in which crops are planted;

inputting the target image into a pre-trained neural network model to obtain a radius estimation map of the crops and confidence maps of multiple growth state categories;

performing information fusion according to the radius estimation map and the confidence maps of the multiple growth state categories to obtain a fusion map; and

determining the growth status of the crops according to the fusion map.
In a second aspect, an embodiment of the present application further provides an unmanned aerial vehicle, the unmanned aerial vehicle comprising:

a body;

a gimbal mounted on the body;

a photographing device mounted on the gimbal, the shooting angle of the photographing device being adjustable by adjusting the gimbal; and

a processor and a memory;

wherein the memory is used to store a computer program and a pre-trained neural network model, and the processor is used to execute the computer program and, when executing the computer program, implement the following steps:

obtaining a target image captured by the unmanned aerial vehicle of a target area in which crops are planted;

inputting the target image into the pre-trained neural network model to obtain a radius estimation map of the crops and confidence maps of multiple growth state categories;

performing information fusion according to the radius estimation map and the confidence maps of the multiple growth state categories to obtain a fusion map; and

determining the growth status of the crops according to the fusion map.
In a third aspect, an embodiment of the present application further provides a control terminal, the control terminal being communicatively connected to the unmanned aerial vehicle and used to control its flight, the unmanned aerial vehicle comprising a gimbal and a photographing device mounted on the gimbal; the control terminal comprising:

a processor and a memory;

wherein the memory is used to store a computer program and a pre-trained neural network model, and the processor is used to execute the computer program and, when executing the computer program, implement the following steps:

obtaining a target image captured by the unmanned aerial vehicle of a target area in which crops are planted;

inputting the target image into the pre-trained neural network model to obtain a radius estimation map of the crops and confidence maps of multiple growth state categories;

performing information fusion according to the radius estimation map and the confidence maps of the multiple growth state categories to obtain a fusion map; and

determining the growth status of the crops according to the fusion map.
In a fourth aspect, an embodiment of the present application further provides a monitoring device, the monitoring device comprising:

a processor and a memory;

wherein the memory is used to store a computer program and a pre-trained neural network model, and the processor is used to execute the computer program and, when executing the computer program, implement the following steps:

obtaining a target image captured by an unmanned aerial vehicle of a target area in which crops are planted;

inputting the target image into the pre-trained neural network model to obtain a radius estimation map of the crops and confidence maps of multiple growth state categories;

performing information fusion according to the radius estimation map and the confidence maps of the multiple growth state categories to obtain a fusion map; and

determining the growth status of the crops according to the fusion map.

In a fifth aspect, an embodiment of the present application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the steps of any one of the growth monitoring methods provided in the embodiments of the present application.

The method for monitoring the growth of crops, the unmanned aerial vehicle, the control terminal, the monitoring device, and the storage medium disclosed in the embodiments of the present application can accurately monitor the growth status of crops, including different growth state categories (such as normal growth, abnormal growth, or missing seedlings), making it convenient for crop administrators to carry out timely and targeted treatment of the crops, such as fertilizing, replanting seedlings, or spraying pesticides, thereby improving crop yield.

It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and do not limit the present application.
Description of the Drawings

In order to explain the technical solutions of the embodiments of the present application more clearly, the accompanying drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings described below illustrate some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from them without creative effort.

FIG. 1 is a schematic structural diagram of an unmanned aerial vehicle provided by an embodiment of the present application;

FIG. 2 is a schematic block diagram of a control system of an unmanned aerial vehicle provided by an embodiment of the present application;

FIG. 3 is a schematic structural diagram of a flight control system provided by an embodiment of the present application;

FIG. 4 is a schematic diagram of a scene in which a drone is controlled to take photographs, provided by an embodiment of the present application;

FIG. 5 is a schematic flowchart of the steps of a model training method provided by an embodiment of the present application;

FIG. 6 is a schematic flowchart of the steps of a method for monitoring the growth of crops provided by an embodiment of the present application;

FIG. 7 is a schematic diagram illustrating one usage process of the neural network model provided by an embodiment of the present application;

FIG. 8 is a schematic diagram illustrating another usage process of the neural network model provided by an embodiment of the present application;

FIG. 9 is a schematic flowchart of the steps of another method for monitoring the growth of crops provided by an embodiment of the present application;

FIG. 10 is a schematic flowchart of the steps of a method for determining the actual position of a crop provided by an embodiment of the present application;

FIG. 11a and FIG. 11b are schematic diagrams of the principle of determining the actual position of crops provided by embodiments of the present application;

FIG. 12 is a schematic block diagram of an unmanned aerial vehicle provided by an embodiment of the present application;

FIG. 13 is a schematic block diagram of a control terminal provided by an embodiment of the present application;

FIG. 14 is a schematic block diagram of a monitoring device provided by an embodiment of the present application.
具体实施方式Detailed ways
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application. Obviously, the described embodiments are part of the embodiments of the present application, not all of the embodiments. Based on the embodiments in the present application, all other embodiments obtained by those of ordinary skill in the art without creative efforts shall fall within the protection scope of the present application.
还应当理解,在此本申请说明书中所使用的术语仅仅是出于描述特定实施例的目的而并不意在限制本申请。如在本申请说明书和所附权利要求书中所使用的那样,除非上下文清楚地指明其它情况,否则单数形式的“一”、“一个”及“该”意在包括复数形式。It should also be understood that the terminology used in the specification of the application herein is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural unless the context clearly dictates otherwise.
还应当进一步理解,在本申请说明书和所附权利要求书中使用的术语“和/或”是指相关联列出的项中的一个或多个的任何组合以及所有可能组合,并且包括这些组合。It should also be further understood that, as used in this specification and the appended claims, the term "and/or" refers to and including any and all possible combinations of one or more of the associated listed items .
附图中所示的流程图仅是示例说明,不是必须包括所有的内容和操作/步骤,也不是必须按所描述的顺序执行。例如,有的操作/步骤还可以分解、组合或部分合并,因此实际执行的顺序有可能根据实际情况改变。The flowcharts shown in the figures are for illustration only, and do not necessarily include all contents and operations/steps, nor do they have to be performed in the order described. For example, some operations/steps can also be decomposed, combined or partially combined, so the actual execution order may be changed according to the actual situation.
目前,对农作物(比如树苗、禾苗、蔬菜苗等)的生长状况进行分析,一般多采用人工统计的方法,即需要监测人员现场统计分析,这类人工统计的方法分析速度慢而且没有数字化记录,同时还耗时耗力,有时还会出现树苗异常生长或缺乏树苗的位置没有被及时发现,因此农作物的管理人员无法进行针对性的补救处理,针对性的补救处理比如对异常生长进行施肥或除害虫等,对缺苗进行补苗,由于无法进行及时补救,使得最终树苗的产量下降。At present, the analysis of the growth status of crops (such as tree seedlings, grass seedlings, vegetable seedlings, etc.) is generally carried out by manual statistical methods, that is, monitoring personnel are required to conduct statistical analysis on the spot. Such manual statistical methods are slow in analysis and have no digital records. At the same time, it is also time-consuming and labor-intensive, and sometimes the abnormal growth of saplings or the location of the lack of saplings is not detected in time, so crop managers cannot carry out targeted remedial treatment, such as fertilizing or removing abnormal growth. Pests, etc., make up for the missing seedlings, because the timely remediation cannot be carried out, the final output of the saplings will decrease.
为此,本申请的实施例提供了一种农作物的生长监测方法、无人机、控制终端、监测装置及存储介质。通过利用无人机和神经网络技术,可以提高农作物的生长状况监测效率和准确率,进而提高农作物的产量。To this end, the embodiments of the present application provide a method for monitoring the growth of crops, an unmanned aerial vehicle, a control terminal, a monitoring device, and a storage medium. By using drones and neural network technology, the monitoring efficiency and accuracy of crop growth conditions can be improved, thereby increasing crop yields.
下面结合附图,对本申请的一些实施方式作详细说明。在不冲突的情况下,下述的实施例及实施例中的特征可以相互组合。Some embodiments of the present application will be described in detail below with reference to the accompanying drawings. The embodiments described below and features in the embodiments may be combined with each other without conflict.
本申请实施例提供的农作物的生长监测方法,具体是利用无人机对目标区域种植的农作物进行拍摄得到的目标图像进行处理,以实现对农作物的生长状况监测。因此在介绍农作物的生长监测方法之前,先对无人机结构以及工作原理进行介绍。The method for monitoring the growth of crops provided by the embodiments of the present application specifically uses a drone to process target images obtained by photographing crops grown in a target area, so as to monitor the growth conditions of the crops. Therefore, before introducing the growth monitoring method of crops, the structure and working principle of the UAV are introduced first.
Please refer to FIG. 1 and FIG. 2. FIG. 1 shows the structure of a UAV 100 provided by an embodiment of the present application, and FIG. 2 shows the structural framework of the control system of the UAV 100. As shown in FIG. 1 and FIG. 2, the UAV 100 may include a body 11, a gimbal 12, a photographing device 13, a power system 14, a control system 15, and the like.
The body 11 may include a fuselage and a foot frame (also called landing gear). The fuselage may include a center frame and one or more arms connected to the center frame, the arms extending radially from the center frame. The foot frame is connected to the fuselage and supports the UAV 100 when it lands.
The gimbal 12 is mounted on the body 11 and carries the photographing device 13. The gimbal 12 may include three motors, that is, the gimbal 12 is a three-axis gimbal. Under the control of the control system 15 of the UAV 100, the shooting angle of the photographing device 13 can be adjusted; the shooting angle can be understood as the angle, relative to the horizontal or vertical direction, at which the lens of the photographing device 13 points toward the target to be photographed.
In some embodiments, the gimbal 12 may further include a controller for controlling the movement of the gimbal 12 by controlling its motors, thereby adjusting the shooting angle of the photographing device 13. It should be understood that the gimbal 12 may be independent of the UAV 100, or may be a part of the UAV 100. It should also be understood that the motor 122 may be a DC motor or an AC motor, and may be a brushless motor or a brushed motor.
The photographing device 13 may be, for example, a camera, a video camera, or another device for capturing images. The photographing device 13 can communicate with the control system 15 and shoot under the control of the control system 15. In the embodiments of the present application, the photographing device 13 is mounted on the body 11 of the UAV 100 via the gimbal 12. It can be understood that the photographing device 13 may also be fixed directly on the body 11 of the UAV 100, in which case the gimbal 12 can be omitted.
In some embodiments, the photographing device 13 may be controlled to photograph the target area from a top-down view to obtain a target image. The target area is planted with crops; it may be, for example, a particular field plot or any area the user needs to monitor, and the crops may be tree seedlings, grain seedlings, and so on. The target image is then recognized and analyzed to determine the growth status of the crops in the target area.
The top-down view means that the optical axis of the lens of the photographing device 13 is perpendicular, or approximately perpendicular (for example at 88 or 92 degrees; other angles are possible and are not limited here), to the target area to be photographed. Shooting from a top-down view improves the accuracy and efficiency of image recognition, which in turn benefits the monitoring of crop growth.
In some embodiments, the photographing device 13 may include a monocular camera or a binocular camera for different shooting functions. For example, the monocular camera captures the target image of the target area, which may specifically be an RGB image, while the binocular camera can measure the distance from a target object to the UAV, including the distance from the crops to the UAV, thereby producing a depth map of the crops. The depth map can be used together with the target image to improve the recognition of the state category and radius of the crops.
In some embodiments, the depth map can be used in combination with the target image to analyze the growth status of the crops in the target area, so that the growth status can be analyzed more accurately.
The power system 14 may include one or more electronic speed controllers (ESCs), one or more propellers, and one or more motors corresponding to the propellers, where each motor is connected between an ESC and a propeller, and the motors and propellers are arranged on the arms of the UAV 100. The ESC receives a driving signal generated by the control system 15 and supplies a driving current to the motor according to that signal, so as to control the motor speed and thereby drive the propeller to rotate, providing power for the flight of the UAV 100. This power enables the UAV 100 to move with one or more degrees of freedom. In some embodiments, the UAV 100 can rotate about one or more rotation axes.
For example, the rotation axes may include a roll axis, a yaw axis, and a pitch axis. It should be understood that the motor may be a DC motor or an AC motor. In addition, the motor may be a brushless motor or a brushed motor.
The control system 15 may include a controller and a sensing system. The controller controls the flight of the UAV 100; for example, it can control the flight according to attitude information measured by the sensing system. It should be understood that the controller may control the UAV 100 according to pre-programmed instructions, or by responding to one or more control instructions from the control terminal. The sensing system measures the attitude information of the UAV 100, that is, the position and state information of the UAV 100 in space, such as its three-dimensional position, angle, velocity, acceleration, and angular velocity.
The sensing system may include, for example, at least one of a gyroscope, an ultrasonic sensor, an electronic compass, an inertial measurement unit (IMU), a visual sensor, a global navigation satellite system, and a barometer. For example, the global navigation satellite system may be the Global Positioning System (GPS).
The controller may include one or more processors and a memory. The processor may be, for example, a micro-controller unit (MCU), a central processing unit (CPU), or a digital signal processor (DSP). The memory may be a flash chip, a read-only memory (ROM), a magnetic disk, an optical disc, a USB flash drive, or a removable hard disk.
In the embodiments of the present application, if a UAV is used to execute the crop growth monitoring method, the memory of the UAV needs to store a pre-trained neural network model. The input of the neural network model is the target image captured by the UAV 100, and its outputs are a radius estimation map of the crops and confidence maps of multiple growth state categories.
In some embodiments, to save storage space on the UAV, the pre-trained neural network model can be compressed and the compressed model stored in the memory of the UAV, where the compression includes at least pruning.
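The embodiments above name pruning only as one possible compression step and give no algorithm. As a hedged illustration of the general idea, and not the method claimed here, magnitude pruning zeroes the smallest-magnitude weights of a layer; the function below is a minimal NumPy sketch, with the `sparsity` parameter being our assumption:

```python
import numpy as np

def prune_weights(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the fraction `sparsity` of weights with the smallest magnitude."""
    flat = np.abs(weights).ravel()
    k = int(len(flat) * sparsity)          # number of weights to drop
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)
```

In practice the pruned model would typically be fine-tuned again before being written to the memory of the UAV.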
In some embodiments, the UAV 100 may further include a radar device. The radar device is installed on the UAV 100, specifically on the body 11 of the UAV 100, and during flight it measures the surrounding environment of the UAV 100, such as obstacles, to ensure flight safety.
The radar device may be installed on the foot frame of the UAV 100 and is communicatively connected to the control system 15; it transmits the collected observation data to the control system 15 for processing.
It should be noted that the UAV 100 may include two or more foot frames, with the radar device mounted on one of them. The radar device may also be mounted at other positions on the UAV 100, which is not specifically limited here.
The radar device mainly includes a radio-frequency front-end module and a signal processing module. The front-end module may include a transmitting antenna, which sends signals toward a target, and a receiving antenna, which receives the signals reflected back by the target. The signal processing module generates the modulated signal and processes and analyzes the collected intermediate-frequency signal. The target is, for example, an obstacle such as a building, an iron tower, or a tree.
In the embodiments of the present application, the radar device can specifically measure the distance from the crops to the UAV, thereby obtaining a depth map of the crops.
The crop growth monitoring methods provided by the embodiments of the present application are all realized by processing target images of crops captured by the UAV 100. Any of the crop growth monitoring methods provided in this application can be applied to the UAV 100, specifically to the controller of the UAV 100.
Exemplarily, the controller of the UAV 100 is configured to: acquire a target image obtained by the UAV photographing a target area, the target area being planted with crops; input the target image into a pre-trained neural network model to obtain a radius estimation map of the crops and confidence maps of multiple growth state categories; perform information fusion on the radius estimation map and the confidence maps of the multiple growth state categories to obtain a fusion map; and determine the growth status of the crops according to the fusion map. Capturing target images by UAV improves the efficiency of statistical analysis, and using a neural network model to learn from the target image and produce the radius estimation map and the confidence maps allows the growth status of the crops to be analyzed more accurately.
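The controller steps above (infer, fuse, determine growth status) can be sketched end to end. The snippet below is a simplified, hypothetical illustration, since the embodiments do not specify the fusion rule at this point: per pixel, the growth-state class with the highest confidence is kept when it exceeds a threshold, and the radius estimate is carried alongside. The names `fuse` and `summarize` and the per-pixel counting are ours, not the claimed implementation.

```python
import numpy as np

def fuse(radius_map, confidence_maps, threshold=0.5):
    """Per-pixel fusion: keep the growth-state class with the highest
    confidence when it exceeds `threshold`, alongside the radius estimate."""
    classes = sorted(confidence_maps)                 # fixed class order
    stack = np.stack([confidence_maps[c] for c in classes])
    best = stack.argmax(axis=0)
    best_conf = stack.max(axis=0)
    return {
        "class": np.where(best_conf >= threshold, best, -1),  # -1: nothing detected
        "radius": radius_map,
        "classes": classes,
    }

def summarize(fused):
    """Pixel-level counts per growth-state class in the fused map."""
    labels = fused["class"]
    return {c: int((labels == i).sum()) for i, c in enumerate(fused["classes"])}
```

A real system would count connected detections rather than pixels, but the shape of the computation is the same.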
It should be noted that the crop growth monitoring method provided by the embodiments of the present application can be applied not only to UAVs but also to other electronic devices, for example to the control terminal of a flight control system.
Please refer to FIG. 3, which shows the structure of a flight control system provided by an embodiment of the present application. The flight control system includes a UAV 100 and a control terminal 200. The control terminal 200 is located at the ground end of the unmanned aerial system, can communicate with the UAV 100 wirelessly, and is used to remotely control the UAV 100.
The control terminal 200 may include a remote controller, a smartphone, a tablet computer, a notebook computer, or the like.
It can be understood that when the control terminal 200 executes the crop growth monitoring method, the control terminal 200 should also store a trained neural network model in advance. The input of the neural network model is the target image captured by the UAV 100, and its outputs are a radius estimation map of the crops and confidence maps of multiple growth state categories.
Exemplarily, the control terminal 200 is configured to: acquire a target image obtained by the UAV photographing a target area, the target area being planted with crops; input the target image into a pre-trained neural network model to obtain a radius estimation map of the crops and confidence maps of multiple growth state categories; perform information fusion on the radius estimation map and the confidence maps of the multiple growth state categories to obtain a fusion map; and determine the growth status of the crops according to the fusion map. Capturing target images by UAV improves the efficiency of statistical analysis, and using a neural network model to learn from the target image and produce the radius estimation map and the confidence maps allows the growth status of the crops to be analyzed more accurately.
In practical applications, as shown in FIG. 4, the user can control the UAV 100 through the control terminal 200 to fly to a target area planted with crops, and photograph the target area with the photographing device of the UAV 100 to obtain a target image. The UAV 100 sends the captured target image to the control terminal 200. The control terminal 200 inputs the acquired target image into the neural network model for learning to obtain the radius estimation map of the crops and the confidence maps of multiple growth state categories, performs information fusion on the radius estimation map and the confidence maps to obtain a fusion map, and determines the growth status of the crops according to the fusion map. Compared with manual statistics, this monitoring method improves monitoring efficiency while monitoring the growth status of the crops across multiple growth state categories, so that the user can take targeted remedial measures and thereby improve crop yields.
The crop growth monitoring method provided by the embodiments of the present application requires a neural network model in addition to a UAV. Whether the method is applied to a UAV or to a control terminal, the device in question (UAV, control terminal, and so on) must store a pre-trained neural network model.
Therefore, for ease of understanding, before the crop growth monitoring method is described in detail, a training method for a neural network model provided by an embodiment of the present application is introduced first. The training may be performed on the control terminal or on a model training device, where the model training device is, for example, a computer equipped with a GPU, so as to improve training efficiency.
Please refer to FIG. 5, which shows a training method for a neural network model provided by an embodiment of the present application. The trained neural network model can output a radius estimation map of crops and confidence maps of multiple growth state categories.
As shown in FIG. 5, the training method includes steps S101 to S103.
S101: Acquire sample data, where the sample data includes multiple images of the same crop, and the images are annotated with multiple growth state categories of the crop and the radius value of the crop.
S102: Divide the sample data into training data and test data.
S103: Based on the selected neural network, perform model training using the training data and model testing using the test data, and obtain a neural network model when the trained model converges, as the pre-trained neural network model.
Before model training, the sample data must be prepared and the neural network to be used, together with its architecture, must be determined. In the embodiments of the present application, the neural network used is a fully convolutional neural network, although other convolutional neural networks may also be used. The network architecture adopts GoogLeNet, developed by Google; other architectures, such as AlexNet or ResNet, may of course also be used.
When preparing the sample data, first determine the crop to be monitored, such as tree seedlings; collect a certain number of images containing that crop; and process those images, where the processing specifically includes cropping and annotation. Cropping means cropping all images to the same pixel dimensions. Annotation means labeling the coordinates and radius of each crop in the images and labeling the growth state category of the crop, so that the trained neural network model will output a radius estimation map of the crops and confidence maps of multiple growth state categories.
In some embodiments, to obtain better output from the trained model, annotation specifically includes generating a Gaussian-distributed circle on the corresponding confidence feature map and recording the normalized crop radius on the corresponding radius feature map, producing the Gaussian circles and normalized radii for all annotated coordinates. The model then outputs a radius estimation map containing the normalized crop radii and confidence maps of the growth state categories containing the Gaussian circles.
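The Gaussian-circle annotation can be sketched as follows. This is an illustration under stated assumptions, since the embodiments give no formula: the confidence target is a 2-D Gaussian centred on the annotated seedling, the normalized radius is written onto the radius map around the centre, and the relation between the Gaussian's sigma and the annotated radius is our guess.

```python
import numpy as np

def make_labels(h, w, cx, cy, radius, max_radius):
    """Confidence target: 2-D Gaussian at (cx, cy).
    Radius target: radius / max_radius recorded around the centre."""
    ys, xs = np.mgrid[0:h, 0:w]
    sigma = radius / 3.0                              # assumed relation
    conf = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))
    radius_map = np.zeros((h, w))
    radius_map[conf > 0.5] = radius / max_radius      # normalized radius
    return conf, radius_map
```

Calling this once per annotated coordinate and taking a per-pixel maximum over the results would build the full training target for one image.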
The growth state categories include at least one or more of a normal growth category, an abnormal growth category, and a missing-seedling category, where the abnormal growth category covers conditions other than normal growth and missing seedlings, such as pest damage or stunted growth.
The processed images are divided into two groups, one used as training data and the other as test data. Specifically, they can be divided according to a certain ratio, for example 9:1 or 8:2.
The training data is fed into the input layer of the selected neural network to train the model, and the trained model is tested with the test data. When the output of the trained model is close to the target result, that is, when the trained model converges, the resulting neural network model is taken as the pre-trained neural network model. This neural network model can learn from target images containing crops and output a radius estimation map of the crops and confidence maps of multiple growth state categories.
In some embodiments, depth maps of the crops can be added to the sample data when training the model, so that the resulting neural network model takes the depth information of the crops into account and can therefore output a more accurate radius estimation map and more accurate confidence maps of the growth state categories.
In some embodiments, to further improve the accuracy of crop growth monitoring, the crops can be classified to obtain crop types, and a neural network model trained for each type, for example one model each for tree seedlings, grain seedlings, and vegetables. It is of course also possible to train a neural network model for each individual crop and establish a correspondence between the trained models and the crops, so that when a target image is captured, the crop in the image is identified and the corresponding model determined. Alternatively, a user who knows which crop is being monitored can select the neural network model corresponding to that crop, and the model to be used is then determined according to the user's selection.
After the neural network model is trained, it can be stored in electronic devices such as UAVs and control terminals, so that these devices can use the model to execute the crop monitoring method provided by the embodiments of the present application and thereby monitor crop growth.
Please refer to FIG. 6, which is a schematic flowchart of the steps of a crop growth monitoring method provided by an embodiment of the present application. The following description takes as an example the case where the growth monitoring method is applied to the control terminal of a flight control system.
As shown in FIG. 6, the crop growth monitoring method includes steps S201 to S204.
S201: Acquire a target image obtained by the UAV photographing a target area.
The target area may be a plot the user needs to monitor, planted with crops such as fruit trees or corn seedlings.
When the user needs to monitor the target area, the UAV can be controlled to fly to the target area, and the photographing device of the UAV adjusted to photograph the area, obtaining a target image that includes the crops to be monitored.
In some embodiments, the photographing device of the UAV can be controlled to photograph the target area from a top-down view. Capturing the target image from a top-down view makes it easier to derive the radius estimation map of the crops and the confidence maps of the multiple growth state categories, thereby improving the efficiency and accuracy of crop monitoring.
In some embodiments, the captured target image is an RGB image, or an image in another format is converted into an RGB image, and the RGB image is input into the neural network model for learning. RGB images not only improve the recognition accuracy of the model but are also convenient to combine with a semantic map of the target area when monitoring the growth status of the crops.
In some embodiments, the depth map corresponding to the target image of the crops in the target area captured by the UAV can also be acquired, where the depth map includes depth information of the crops, and the depth information is the distance from the crops to the UAV.
S202: Input the target image into a pre-trained neural network model to obtain a radius estimation map of the crops and confidence maps of multiple growth state categories.
The pre-trained neural network model is obtained by the model training method described above and can learn from the target image to output a radius estimation map of the crops in the target image and confidence maps of multiple growth state categories; it may specifically be, for example, a fully convolutional neural network model.
The growth state categories include at least one or more of a normal growth category, an abnormal growth category, and a missing-seedling category, and may of course include other categories. Finer divisions are also possible, for example splitting the abnormal growth category into a stunted growth category, a pest damage category, and so on. The crops under each growth state category correspond to a confidence map, and the confidence map includes confidence values representing the current growth state category of the crops, specifically probability values in [0, 1].
Specifically, as shown in FIG. 7, the acquired target image is input into the fully convolutional neural network model, which learns from the target image and outputs a radius estimation map of the crops in the target image and confidence maps of multiple growth state categories. Category 1 in FIG. 7 may be the normal growth category, category 2 the abnormal growth category, and so on, where each growth state category corresponds to a confidence map. The confidence map includes the Gaussian circle of at least one crop and the corresponding radius value, where the Gaussian circle can represent the probability that the crop belongs to a given growth state category, for example the normal growth category; the confidence map may of course also include confidence values. The radius estimation map includes the normalized radius values of the crops.
In some embodiments, to better tally the growth status of the crops, a confidence map includes the target positions of the crops within the confidence map and the confidence value corresponding to each target position, where the confidence value is a probability value representing the reliability of the growth state category assigned to the crop.
Exemplarily, a fully convolutional neural network model used to detect tree seedlings outputs confidence maps of three categories: confidence map 1 corresponding to the normal growth category, confidence map 2 corresponding to the abnormal growth category, and confidence map 3 corresponding to the missing-seedling category.
Confidence map 1 includes the target positions of multiple seedlings in the map and the corresponding confidence values. Confidence maps 2 and 3 likewise include target positions and corresponding confidence values, except that confidence map 2 covers seedlings in the abnormal growth category and confidence map 3 covers the positions where seedlings are missing.
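Reading individual detections back out of such a confidence map amounts to finding local maxima above a threshold. The following is a minimal NumPy sketch; the 4-neighbour peak test and the 0.5 threshold are assumptions, since the embodiments do not specify the post-processing:

```python
import numpy as np

def extract_detections(conf_map, threshold=0.5):
    """Return (row, col, confidence) for local maxima above `threshold`,
    one tuple per seedling detected in a confidence map."""
    padded = np.pad(conf_map, 1, constant_values=-np.inf)
    centre = padded[1:-1, 1:-1]
    neighbours = np.stack([
        padded[0:-2, 1:-1], padded[2:, 1:-1],   # up, down
        padded[1:-1, 0:-2], padded[1:-1, 2:],   # left, right
    ])
    is_peak = (centre >= neighbours.max(axis=0)) & (centre >= threshold)
    rows, cols = np.nonzero(is_peak)
    return [(int(r), int(c), float(conf_map[r, c])) for r, c in zip(rows, cols)]
```

Pairing each returned position with the value at the same position in the radius estimation map would yield a per-seedling (position, category confidence, radius) record.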
In some embodiments, the target image and the depth map corresponding to the target image may also be input together into a pre-trained neural network model, where the training data used to train the neural network model includes depth information of crops. The neural network model can then learn from both the target image and the depth map and output the radius estimation map of the crops and the confidence maps of multiple growth state categories. Using the depth map can further improve the accuracy of the output confidence maps and radius estimation map.
Specifically, as shown in FIG. 8, the acquired target image A and the depth map A corresponding to target image A are input together into the fully convolutional neural network model, which learns from the target image and outputs the radius estimation map of the crops in the target image and the confidence maps of the multiple growth state categories.
In some embodiments, to improve the learning efficiency of the neural network model, the depth map may be normalized before the target image and its corresponding depth map are input into the pre-trained neural network model. Specifically, a reference parameter is first determined, and the depth information of the crops included in the depth map is then normalized according to the reference parameter to obtain a normalized depth map, where the reference parameter serves as the base of the normalization; for example, the base is subtracted from the depth information of each crop.
The reference parameter may be determined as follows: select target crops from the multiple crops in the target image according to their depth information, and use the average of the target crops' depth information as the base; the target crops are, for example, crops whose depth information is less than a preset threshold. Alternatively, a preset proportion of the crops with the smallest depth information is selected as the target crops; with a preset proportion of, say, 1%, the depth information of the selected 1% of crops is smaller than that of the remaining 99% of crops.
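A minimal sketch of this normalization, assuming the depth information is held in a NumPy array and using the "smallest 1% of depths" variant described above (the function name and parameters are illustrative, not from the patent):

```python
import numpy as np

def normalize_depth(depth, proportion=0.01):
    """Normalize crop depth values by subtracting a reference base.

    The base (reference parameter) is the mean depth of the `proportion`
    of crops with the smallest depth values, i.e. those closest to the
    drone; by default the closest 1%.
    """
    flat = np.sort(depth.ravel())
    k = max(1, int(len(flat) * proportion))
    base = flat[:k].mean()   # reference parameter used as the base
    return depth - base      # shift so the nearest crops sit near zero

depths = np.array([10.2, 10.5, 11.0, 10.1, 12.3])
print(normalize_depth(depths, proportion=0.2))
```

With `proportion=0.2` the base here is the single smallest depth, 10.1, so every value is shifted by that amount.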
S203. Perform information fusion based on the radius estimation map and the confidence maps of the multiple growth state categories to obtain a fusion map.
After the radius estimation map of the crops in the target image and the confidence maps of the multiple growth state categories are obtained, they are fused to obtain a fusion map. The fusion map includes the information carried by both the confidence maps of the multiple growth state categories and the radius estimation map, which makes it convenient to compile statistics on the growth status of the crops from the fusion map.
In some embodiments, when performing information fusion, the radius estimation map may be used as the reference map, and the confidence maps of the multiple growth state categories are fused and superimposed onto the radius estimation map to obtain the fusion map. Superimposing a confidence map onto the radius estimation map may specifically mean adding the target positions and confidence values of the crops in the confidence map to the corresponding crops in the radius estimation map.
To obtain the fusion map conveniently and quickly, when the confidence maps of the multiple growth state categories are fused and superimposed onto the radius estimation map, a superposition order for the confidence maps may be determined according to the number of confidence values in each confidence map, and the confidence maps of the multiple growth state categories are then fused and superimposed onto the radius estimation map in that order.
For example, the normal growth category may contain relatively many saplings and therefore relatively many confidence values, while the abnormal growth and missing-seedling categories may contain fewer saplings and therefore relatively few confidence values. The superposition order is determined from largest count to smallest: the map with more confidence values is superimposed first, and the maps with fewer are superimposed later. Determining the superposition order makes it easy to obtain the fusion map quickly, which in turn improves the efficiency of crop analysis.
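The ordering-then-superposition step above can be sketched as follows, under the assumption that each category map is reduced to a dictionary of detections and that later (sparser) maps overwrite earlier ones at the same position; all names and data here are illustrative:

```python
import numpy as np

radius_map = np.random.rand(4, 4)  # stand-in for the radius estimation map

category_maps = {
    "normal":   {(0, 0): 0.9, (1, 2): 0.8, (3, 3): 0.7},
    "abnormal": {(2, 1): 0.6},
    "missing":  {(0, 3): 0.7},
}

# Superposition order: the category with the most confidence values first.
order = sorted(category_maps, key=lambda c: len(category_maps[c]), reverse=True)

fusion = {}  # (row, col) -> (category, confidence, estimated radius)
for cat in order:
    for pos, conf in category_maps[cat].items():
        fusion[pos] = (cat, conf, float(radius_map[pos]))

print(order)  # ['normal', 'abnormal', 'missing']
```

Because the large "normal" map is painted first, the small abnormal and missing-seedling maps are cheap to apply afterwards, matching the efficiency rationale in the text.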
In some embodiments, to obtain a more accurate analysis, before information fusion is performed based on the radius estimation map and the confidence maps of the multiple growth state categories, max pooling may be applied to the confidence maps, and invalid crops may be removed according to a preset threshold while valid crops are retained, where the pixel values corresponding to the invalid crops are less than or equal to the preset threshold. The preset threshold can be set by the user in order to filter out invalid crops.
When performing max pooling, the kernel size used is 5*5; of course, the kernel size can also take other values.
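A plain-NumPy sketch of this filtering step, assuming the usual peak-detection interpretation: a pixel survives only if it equals the maximum of its kernel x kernel neighbourhood (max pooling) and exceeds the preset threshold (the function name, default threshold, and padding scheme are assumptions):

```python
import numpy as np

def max_pool_filter(conf, kernel=5, threshold=0.5):
    """Keep only local maxima of `conf` that exceed `threshold`."""
    pad = kernel // 2
    padded = np.pad(conf, pad, mode="constant", constant_values=-np.inf)
    h, w = conf.shape
    out = np.zeros_like(conf)
    for i in range(h):
        for j in range(w):
            window = padded[i:i + kernel, j:j + kernel]
            # Survive only as the neighbourhood maximum, above threshold.
            if conf[i, j] == window.max() and conf[i, j] > threshold:
                out[i, j] = conf[i, j]
    return out

conf = np.zeros((6, 6))
conf[2, 2] = 0.9  # a valid crop detection
conf[2, 3] = 0.8  # suppressed: not the local maximum in its window
conf[5, 5] = 0.3  # removed: at or below the threshold
print(np.argwhere(max_pool_filter(conf) > 0))  # [[2 2]]
```

In practice this would typically be done with a framework pooling operator rather than an explicit loop, but the loop makes the retain-or-delete rule explicit.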
S204. Determine the growth status of the crops according to the fusion map.
Specifically, the proportions of the normal growth category, the abnormal growth category, and the missing-seedling category in the fusion map can be counted; alternatively, the positions of crops in the abnormal growth category and the positions corresponding to the missing-seedling category can be collected; or the radius values of the crops can be tallied and used to determine whether the overall growth status of the target area meets the expected requirements.
Exemplarily, determining the growth status of the crops according to the fusion map may specifically involve counting, in the fusion map, the number of crops in the normal growth category and the number of crops in the other categories, and determining the overall growth status of the crops from these two counts. The other categories are, for example, the abnormal growth category and the missing-seedling category.
For example, if 950 saplings are counted in the normal growth category and 50 in the abnormal growth and missing-seedling categories combined, the proportion of the normal growth category can be calculated as 95%. If the user's expected value is 90%, it can be concluded that the saplings in the target area are growing well.
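The worked example above reduces to simple arithmetic:

```python
# Counts from the example: 950 normal saplings, 50 in the other categories.
normal = 950
other = 50           # abnormal growth + missing-seedling saplings
expected = 0.90      # the user's expected proportion of normal growth

proportion = normal / (normal + other)
print(proportion)              # 0.95
print(proportion >= expected)  # True -> the saplings are growing well
```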
In some embodiments, a semantic map of the target area may also be generated from multiple color channels of the acquired target image, and the crops in the semantic map may be marked according to the radius estimation map and the confidence maps of the multiple growth state categories. The semantic map can thus carry richer information, which is convenient for users.
In some embodiments, a semantic map of the target area may also be obtained, where the semantic map may be a map generated by applying semantic segmentation to an image of the target area captured in advance, with differently colored regions representing different semantic categories. After the semantic map is obtained, it can be corrected according to the fusion map to improve the accuracy of the semantic map, or the fusion map can be corrected according to the semantic map to improve the accuracy of the fusion map. This improves the accuracy of crop monitoring and, in turn, crop yield.
In the crop growth monitoring method provided by the above embodiments, a target image captured by a drone is acquired and learned by a neural network model, so that the growth status of the crops can be accurately monitored; for example, the status of the crops across different growth state categories (such as normal growth, abnormal growth, or missing seedlings) is obtained. This makes it convenient for crop administrators to treat the crops promptly and in a targeted manner, for example by fertilizing, replanting seedlings, or spraying pesticides, thereby improving crop yield.
Please refer to FIG. 9, which is a schematic flowchart of the steps of another crop growth monitoring method provided by the embodiments of the present application. The following takes as an example the case where this growth monitoring method is applied to a control terminal of a flight control system.
As shown in FIG. 9, the growth monitoring method includes steps S301 to S306.
S301. Acquire a target image obtained by a drone photographing a target area.
S302. Identify the type of the crops in the target image.
S303. Determine, according to a preset correspondence between crop types and neural network models, the neural network model corresponding to the type of the crops in the target image.
S304. Input the target image into the determined neural network model to obtain a radius estimation map of the crops and confidence maps of multiple growth state categories.
S305. Perform information fusion based on the radius estimation map and the confidence maps of the multiple growth state categories to obtain a fusion map; and
S306. Determine the growth status of the crops according to the fusion map.
Specifically, image recognition technology may be used to identify the crops in the target image to determine their type, for example by extracting the features of the crops in the target image and identifying the crop type from those features. Of course, a crop recognition model may also be used to determine the crop type; such a model may be obtained by training based on a convolutional neural network. The crop type may be, for example, grain, vegetable, fruit, or medicinal herb, or it may be the specific name of a crop, such as apple tree, peach tree, corn seedling, or cotton seedling.
To improve the accuracy of the neural network model, one neural network model may be trained for each crop, that is, different crop types correspond to different neural network models, and a correspondence between crop types and neural network models is established, so that once the crop type is determined, the neural network model corresponding to the type of the crops in the target image, i.e., the model to be used, is determined from that correspondence.
After the corresponding neural network model is determined, the target image is input into it to obtain the radius estimation map of the crops and the confidence maps of the multiple growth state categories, so that information fusion can be performed based on the radius estimation map and the confidence maps of the multiple growth state categories to obtain a fusion map, and the growth status of the crops can then be statistically analyzed.
After the statistical analysis results of the crops' growth status are obtained, whether the growth status of the crops meets a preset growth requirement is also determined from the statistical results. When it is determined that the growth status of the crops cannot meet the preset growth requirement, the actual positions of the crops that fail the requirement can be output, so that the user can treat those crops according to their actual positions. Of course, both the actual positions and the growth state categories of the crops that fail the preset growth requirement can be output, so that the user can apply targeted treatment, thereby improving crop yield.
The preset growth requirement includes one of the following: the proportion of crops in the abnormal growth category is greater than a first preset threshold, the proportion of crops in the missing-seedling category is greater than a second preset threshold, and the proportion of crops in the normal growth category is less than a third preset threshold. The first, second, and third preset thresholds are set according to actual needs, and their values are not limited here.
The actual positions of the crops that fail the preset growth requirement can be obtained through a method for determining the actual position of a crop. Specifically, FIG. 10 shows a method for determining the actual position of a crop provided by an embodiment of the present application, and the method includes the following:
S401. Acquire the flight information of the drone when the target image was captured and the field of view of the drone's camera when the target image was captured, the flight information including the flight position and the flight altitude.
S402. Acquire the relative position of the crop in the target image with respect to the center point of the target image.
S403. Determine the actual position of the crop according to the flight position, the flight altitude, the field of view, and the relative position.
When the camera of the drone captures the target image of the target area, the drone's flight information at the time of capture can also be acquired. The flight information includes at least the flight position and the flight altitude: the flight position is the position of the drone in the world coordinate system and can be measured by the GPS on the drone, and the flight altitude is the distance from the drone to the surface of the crops and can be measured by a radar device or by binocular ranging. The field of view of the camera when the target image was captured must also be acquired.
If the actual position of a particular crop needs to be determined, the relative position of that crop in the target image with respect to the center point of the target image must also be determined. The actual position of the crop can then conveniently be determined from the flight position, flight altitude, field of view, and relative position.
Exemplarily, as shown in FIGS. 11a and 11b, the drone captures target image A from a top-down view, and the position corresponding to the drone in target image A is the center point in FIG. 11a. At this time, the measured flight position of the drone is (x0, y0), the flight altitude is h, and the field of view of the camera when capturing target image A is θ. Suppose sapling a is growing abnormally; its actual position therefore needs to be determined so that the user can quickly locate sapling a and treat it. Both the actual position of sapling a and the flight position of the drone are positions in the world coordinate system, which makes it convenient for the user to locate the sapling with GPS.
When calculating the actual position of sapling a, the relative position of sapling a with respect to the center point of target image A must first be determined. As shown in FIG. 11b, this relative position may include the distance d and the angle α of sapling a relative to the center point in target image A.
After the relative position of sapling a is determined, the actual distance in the world coordinate system corresponding to the distance d must also be determined, since d is a distance within target image A. Specifically, the actual distance corresponding to the half-width L of target image A in the world coordinate system can be determined from the field of view θ of the camera when capturing target image A and the flight altitude h, using trigonometric relations; then, using the proportional relationship between the distance d and the half-width L within target image A, the actual distance corresponding to d in the world coordinate system can be determined.
As shown in FIG. 11b, assume that the center point corresponding to the drone is the coordinate origin O, and the distance and angle of sapling a relative to the center point are d and α respectively. Having also determined the actual distance corresponding to d and the coordinates (x0, y0) of the origin, the actual position (x1, y1) of sapling a can be determined.
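The geometry described above can be sketched as follows; the function name, parameter names, and the assumption that θ is the full horizontal field of view are illustrative, and the flat-ground, nadir-view simplification is taken from the figure description:

```python
import math

def crop_world_position(x0, y0, h, fov_deg, d_px, alpha_deg, half_width_px):
    """Estimate a crop's world position from the drone's flight info.

    (x0, y0): drone GPS position; h: flight altitude above the crops;
    fov_deg: camera field of view; d_px, alpha_deg: distance and angle of
    the crop from the image center; half_width_px: image half-width.
    """
    # Half-width L of the imaged ground area, from the FOV triangle:
    # tan(theta / 2) = L / h.
    L = h * math.tan(math.radians(fov_deg) / 2.0)
    # Scale the in-image distance d to ground units using d : half-width.
    d = d_px / half_width_px * L
    a = math.radians(alpha_deg)
    return x0 + d * math.cos(a), y0 + d * math.sin(a)

x1, y1 = crop_world_position(x0=100.0, y0=200.0, h=30.0,
                             fov_deg=90.0, d_px=240, alpha_deg=0.0,
                             half_width_px=480)
print((x1, y1))  # approximately (115.0, 200.0): 15 m east of the drone
```

With a 90° field of view at 30 m altitude, the half-width L is 30 m of ground, so a sapling halfway from the center to the image edge sits about 15 m from the drone's ground position.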
In the crop growth monitoring method provided by the above embodiments, a target image captured by a drone is acquired and learned by a neural network model, so that the growth status of the crops can be accurately monitored, for example obtaining the status of the crops across different growth state categories (such as normal growth, abnormal growth, or missing seedlings), and the actual positions of the crops can also be determined. This makes it convenient for the user to treat the crops quickly, promptly, and in a targeted manner, for example by fertilizing, replanting seedlings, or spraying pesticides, thereby improving crop yield.
Please refer to FIG. 12, which is a schematic block diagram of a drone provided by an embodiment of the present application. As shown in FIG. 12, the drone 100 includes at least one or more processors 101, a memory 102, and a camera 13.
The processor 101 may be, for example, a micro-controller unit (MCU), a central processing unit (CPU), or a digital signal processor (DSP).
The memory 102 may be a flash chip, a read-only memory (ROM) disk, an optical disc, a USB flash drive, a removable hard disk, or the like.
The camera 13 is used to photograph the crops planted in the target area to obtain the target image.
The memory 102 is used to store a computer program; the processor 101 is used to execute the computer program and, when executing the computer program, to perform any one of the crop growth monitoring methods provided in the embodiments of the present application.
Exemplarily, the processor is configured to execute the computer program and, when executing the computer program, to implement the following steps:
acquiring a target image obtained by a drone photographing a target area, the target area being planted with crops; inputting the target image into a pre-trained neural network model to obtain a radius estimation map of the crops and confidence maps of multiple growth state categories; performing information fusion based on the radius estimation map and the confidence maps of the multiple growth state categories to obtain a fusion map; and determining the growth status of the crops according to the fusion map.
Please refer to FIG. 13, which is a schematic block diagram of a control terminal provided by an embodiment of the present application. As shown in FIG. 13, the control terminal 200 includes at least one or more processors 201 and a memory 202.
The processor 201 may be, for example, a micro-controller unit (MCU), a central processing unit (CPU), or a digital signal processor (DSP).
The memory 202 may be a flash chip, a read-only memory (ROM) disk, an optical disc, a USB flash drive, a removable hard disk, or the like.
The memory 202 is used to store a computer program; the processor 201 is used to execute the computer program and, when executing the computer program, to perform any one of the crop growth monitoring methods provided in the embodiments of the present application.
Exemplarily, the processor is configured to execute the computer program and, when executing the computer program, to implement the following steps:
acquiring a target image obtained by a drone photographing a target area, the target area being planted with crops; inputting the target image into a pre-trained neural network model to obtain a radius estimation map of the crops and confidence maps of multiple growth state categories; performing information fusion based on the radius estimation map and the confidence maps of the multiple growth state categories to obtain a fusion map; and determining the growth status of the crops according to the fusion map.
Please refer to FIG. 14, which is a schematic block diagram of a monitoring apparatus provided by an embodiment of the present application. As shown in FIG. 14, the monitoring apparatus 300 includes at least one or more processors 301 and a memory 302.
The processor 301 may be, for example, a micro-controller unit (MCU), a central processing unit (CPU), or a digital signal processor (DSP).
The memory 302 may be a flash chip, a read-only memory (ROM) disk, an optical disc, a USB flash drive, a removable hard disk, or the like.
The memory 302 is used to store a computer program; the processor 301 is used to execute the computer program and, when executing the computer program, to perform any one of the crop growth monitoring methods provided in the embodiments of the present application.
It should be noted that the monitoring apparatus 300 may be configured in an electronic device to implement any one of the crop growth monitoring methods provided in the embodiments of the present application; for example, it may be configured in a drone or a control terminal.
The embodiments of the present application further provide a computer-readable storage medium storing a computer program, the computer program including program instructions, and the processor executes the program instructions to implement the steps of any one of the crop growth monitoring methods provided in the above embodiments.
The computer-readable storage medium may be an internal storage unit of the drone or the control terminal described in any of the foregoing embodiments, such as the memory of the drone. The computer-readable storage medium may also be an external storage device of the drone, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the drone.
The above are only specific embodiments of the present application, but the protection scope of the present application is not limited thereto. Any person skilled in the art can easily conceive of various equivalent modifications or substitutions within the technical scope disclosed in the present application, and these modifications or substitutions shall all be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (59)

  1. A crop growth monitoring method, characterized in that the method comprises:
    acquiring a target image obtained by a drone photographing a target area, the target area being planted with crops;
    inputting the target image into a pre-trained neural network model to obtain a radius estimation map of the crops and confidence maps of multiple growth state categories;
    performing information fusion based on the radius estimation map and the confidence maps of the multiple growth state categories to obtain a fusion map; and
    determining the growth status of the crops according to the fusion map.
  2. The method according to claim 1, characterized in that the growth state categories include at least one or more of a normal growth category, an abnormal growth category, and a missing-seedling growth category.
  3. The method according to claim 1, characterized in that the confidence map includes target positions of the crops in the confidence map and confidence values corresponding to the target positions;
    wherein a confidence value is a probability value used to represent the reliability of the growth state category of the crop.
  4. The method according to claim 1, characterized in that the radius estimation map includes normalized radius values of the crops.
  5. The method according to claim 1, characterized in that the neural network model comprises a fully convolutional neural network model; and/or,
    the target image is an image obtained by the drone controlling a camera to photograph the target area from a top-down view; and/or,
    the target image input into the neural network model is an RGB image.
  6. The method according to claim 1, further comprising:
    obtaining a depth map corresponding to the target image captured by the UAV of the crops in the target area, wherein the depth map comprises depth information of the crops, the depth information being the distance from the crops to the UAV;
    wherein inputting the target image into the pre-trained neural network model comprises:
    inputting the target image together with the depth map corresponding to the target image into the pre-trained neural network model;
    wherein the training data used in training the neural network model comprises depth information of crops.
  7. The method according to claim 6, wherein before the target image and the depth map corresponding to the target image are input together into the pre-trained neural network model, the method further comprises:
    determining a reference parameter, wherein the reference parameter serves as the base for normalization; and
    normalizing, according to the reference parameter, the depth information of the crops included in the depth map to obtain a normalized depth map.
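The depth normalization of claim 7 can be sketched as a division by the reference parameter; the fallback to the map's maximum depth is an assumption, since the claim leaves the choice of base open:

```python
import numpy as np

def normalize_depth(depth_map, reference=None):
    """Normalize crop-to-UAV distances by a reference parameter (claim 7).
    If no reference is given, the maximum depth in the map is used as the
    base -- an assumed default for illustration."""
    depth_map = np.asarray(depth_map, dtype=np.float64)
    base = float(reference) if reference is not None else float(depth_map.max())
    return depth_map / base

# Depths in metres from the UAV to four crops, normalized against 50 m:
normed = normalize_depth([[10.0, 20.0], [40.0, 50.0]], reference=50.0)
```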
  8. The method according to claim 1, wherein performing information fusion according to the confidence maps of the plurality of growth state categories and the radius estimation map comprises:
    taking the radius estimation map as a base map, and fusing and superimposing the confidence maps of the plurality of growth state categories onto the radius estimation map.
  9. The method according to claim 8, wherein fusing and superimposing the confidence maps of the plurality of growth state categories onto the radius estimation map comprises:
    determining a superposition order of the confidence maps of the plurality of growth state categories according to the number of confidence values in each confidence map; and
    fusing and superimposing the confidence maps of the plurality of growth state categories onto the radius estimation map in the superposition order.
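Claims 8 and 9 leave the direction of the superposition order open; one plausible reading (denser categories drawn first, so sparser categories stay visible on top) can be sketched as:

```python
import numpy as np

def fuse(radius_map, conf_maps, min_conf=0.5):
    """Superimpose per-category confidence maps onto the radius map
    (claims 8-9). Output: per pixel, the index of the winning category,
    or -1 for background. Order-by-count direction is an assumption."""
    fused = np.full(radius_map.shape, -1, dtype=np.int64)
    counts = [(int((m >= min_conf).sum()), i) for i, m in enumerate(conf_maps)]
    for _, i in sorted(counts, reverse=True):   # densest category first
        fused[conf_maps[i] >= min_conf] = i     # later (sparser) maps overwrite
    return fused

radius = np.ones((4, 4))
normal = np.zeros((4, 4)); normal[0, 0] = normal[1, 1] = 0.9   # 2 detections
abnormal = np.zeros((4, 4)); abnormal[1, 1] = 0.8              # 1 detection
fused = fuse(radius, [normal, abnormal])
```

Where a pixel is claimed by both categories, the sparser (later-drawn) category wins.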
  10. The method according to claim 1, wherein before performing information fusion according to the radius estimation map and the confidence maps of the plurality of growth state categories, the method further comprises:
    performing maximum pooling on the confidence maps, and removing invalid crops according to a preset threshold;
    wherein the pixel values corresponding to the invalid crops are less than or equal to the preset threshold.
  11. The method according to claim 10, wherein the convolution kernel of the maximum pooling has a size of 5×5.
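The pooling-and-filtering step of claims 10 and 11 is, in effect, non-maximum suppression over each confidence map; a minimal sketch with an explicit 5×5 window (the 0.5 threshold is an assumed value):

```python
import numpy as np

def nms_peaks(conf_map, threshold=0.5, k=5):
    """Keep local maxima of a confidence map via k*k maximum pooling,
    then drop detections at or below the preset threshold (claims 10-11)."""
    h, w = conf_map.shape
    pad = k // 2
    padded = np.pad(conf_map, pad, constant_values=-np.inf)
    pooled = np.empty_like(conf_map)
    for r in range(h):
        for c in range(w):
            pooled[r, c] = padded[r:r + k, c:c + k].max()
    keep = (conf_map == pooled) & (conf_map > threshold)
    return list(zip(*np.nonzero(keep)))

m = np.zeros((9, 9))
m[4, 4] = 0.9   # a true peak
m[4, 5] = 0.7   # suppressed: inside the 5x5 window of the peak
m[0, 0] = 0.3   # removed: at or below the threshold
peaks = nms_peaks(m)
```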
  12. The method according to claim 1, wherein determining the growth status of the crops according to the fusion map comprises:
    counting the number of crops corresponding to the normal growth category and the number of crops corresponding to the other categories in the fusion map; and
    determining the overall growth status of the crops according to the number of crops corresponding to the normal growth category and the number of crops corresponding to the other categories.
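The counting step of claim 12 can be sketched as follows; the `min_normal_ratio` cutoff and the "good"/"poor" labels are illustrative assumptions, since the claim only requires comparing the counts:

```python
from collections import Counter

def overall_status(labels, min_normal_ratio=0.8):
    """Summarize per-crop category labels from the fusion map into an
    overall judgement (claim 12)."""
    counts = Counter(labels)
    total = sum(counts.values())
    normal_ratio = counts.get("normal", 0) / total if total else 0.0
    return ("good" if normal_ratio >= min_normal_ratio else "poor"), counts

status, counts = overall_status(["normal"] * 9 + ["abnormal"])
```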
  13. The method according to claim 1, further comprising:
    obtaining flight information of the UAV when capturing the target image and the field of view of the UAV's photographing device when capturing the target image, wherein the flight information comprises a flight position and a flight altitude;
    obtaining the relative position of a crop in the target image with respect to the center point of the target image; and
    determining the actual position of the crop according to the flight position, the flight altitude, the field of view, and the relative position.
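For a straight-down shot, the conversion in claim 13 from an image offset to a ground offset can be sketched with a simple pinhole model; a nadir view and square pixels are assumptions not stated in the claim:

```python
import math

def crop_ground_offset(altitude_m, fov_deg, image_width_px, dx_px, dy_px):
    """Convert a pixel offset from the image center into metres on the
    ground for a nadir shot: the ground width covered by the image is
    2 * altitude * tan(fov / 2)."""
    ground_width = 2.0 * altitude_m * math.tan(math.radians(fov_deg) / 2.0)
    metres_per_px = ground_width / image_width_px
    return dx_px * metres_per_px, dy_px * metres_per_px

# A crop seen 100 px right of center, from 50 m altitude with a 90 deg FOV:
dx, dy = crop_ground_offset(50.0, 90.0, 1000, 100, 0)
```

Adding the resulting offset to the flight position yields the crop's actual position.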
  14. The method according to claim 13, further comprising:
    outputting the actual positions of the crops when it is determined that the growth status of the crops fails to meet a preset growth requirement;
    wherein the preset growth requirement comprises one of: the proportion of crops in the abnormal growth category being greater than a first preset threshold, the proportion of crops in the missing-seedling growth category being greater than a second preset threshold, and the proportion of crops in the normal growth category being less than a third preset threshold.
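The three-part preset growth requirement of claim 14 can be checked as follows; the threshold values are illustrative assumptions:

```python
def needs_attention(counts, t_abnormal=0.1, t_missing=0.1, t_normal=0.8):
    """Return True when the growth status fails the preset requirement of
    claim 14: any one of the three ratio conditions triggers an alert."""
    total = sum(counts.values()) or 1
    abnormal = counts.get("abnormal", 0) / total
    missing = counts.get("missing", 0) / total
    normal = counts.get("normal", 0) / total
    return abnormal > t_abnormal or missing > t_missing or normal < t_normal

alert = needs_attention({"normal": 70, "abnormal": 20, "missing": 10})
```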
  15. The method according to any one of claims 1 to 14, further comprising:
    obtaining sample data comprising multiple images of the same crop, the images being annotated with the multiple growth state categories of the crop and the radius values of the crop;
    dividing the sample data into training data and test data; and
    based on a determined neural network, performing model training with the training data and model testing with the test data, and obtaining, when the trained model converges, a neural network model to serve as the pre-trained neural network model.
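The sample-data split of claim 15 can be sketched as a seeded shuffle-and-cut; the 80/20 ratio and fixed seed are assumed choices the claim does not fix:

```python
import random

def split_samples(samples, train_ratio=0.8, seed=0):
    """Shuffle annotated samples and split them into training and test
    sets (claim 15) without mutating the input list."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

train, test = split_samples(list(range(10)))
```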
  16. The method according to claim 15, wherein the sample data further comprises depth maps of the crops.
  17. The method according to any one of claims 1 to 14, wherein different types of crops correspond to different neural network models; and
    inputting the target image into the pre-trained neural network model comprises:
    identifying the type of crops in the target image;
    determining, according to a preset correspondence between crop types and neural network models, the neural network model corresponding to the type of crops in the target image; and
    inputting the target image into the determined neural network model.
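The per-crop-type model dispatch of claim 17 reduces to a lookup in a preset mapping; the registry contents below are hypothetical placeholders:

```python
def select_model(crop_type, models):
    """Look up the neural network model preset for an identified crop
    type (claim 17); unknown types raise rather than fall back."""
    if crop_type not in models:
        raise KeyError(f"no model registered for crop type {crop_type!r}")
    return models[crop_type]

# Hypothetical registry: crop type -> per-type model (placeholder strings).
models = {"corn": "corn-model", "cotton": "cotton-model"}
chosen = select_model("cotton", models)
```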
  18. The method according to any one of claims 1 to 14, further comprising:
    generating a semantic map of the target area according to a plurality of color channels of the target image; and
    marking the crops in the semantic map according to the radius estimation map and the confidence maps of the plurality of growth state categories.
  19. The method according to any one of claims 1 to 14, further comprising:
    obtaining a semantic map of the target area; and
    correcting the semantic map according to the fusion map, or correcting the fusion map according to the semantic map.
  20. An unmanned aerial vehicle (UAV), comprising:
    a body;
    a gimbal mounted on the body;
    a photographing device mounted on the gimbal, wherein the photographing angle of the photographing device can be adjusted by adjusting the gimbal; and
    a processor and a memory;
    wherein the memory is configured to store a computer program and a pre-trained neural network model, and the processor is configured to execute the computer program and, when executing the computer program, implement the following steps:
    obtaining a target image captured by the UAV of a target area planted with crops;
    inputting the target image into the pre-trained neural network model to obtain a radius estimation map of the crops and confidence maps of a plurality of growth state categories;
    performing information fusion according to the radius estimation map and the confidence maps of the plurality of growth state categories to obtain a fusion map; and
    determining the growth status of the crops according to the fusion map.
  21. The UAV according to claim 20, wherein the growth state categories comprise one or more of a normal growth category, an abnormal growth category, and a missing-seedling growth category.
  22. The UAV according to claim 20, wherein the confidence map comprises target positions of the crops in the confidence map and confidence values corresponding to the target positions;
    wherein each confidence value is a probability value representing the reliability of the growth state category of the corresponding crop.
  23. The UAV according to claim 20, wherein the radius estimation map comprises normalized radius values of the crops.
  24. The UAV according to claim 20, wherein the neural network model comprises a fully convolutional neural network model; and/or
    the target image is an image obtained by the UAV controlling the photographing device to photograph the target area from a top-down view; and/or
    the target image input to the neural network model is an RGB image.
  25. The UAV according to claim 20, wherein the processor is configured to implement:
    obtaining a depth map corresponding to the target image captured by the UAV of the crops in the target area, wherein the depth map comprises depth information of the crops, the depth information being the distance from the crops to the UAV;
    wherein inputting the target image into the pre-trained neural network model comprises:
    inputting the target image together with the depth map corresponding to the target image into the pre-trained neural network model;
    wherein the training data used in training the neural network model comprises depth information of crops.
  26. The UAV according to claim 25, wherein before the target image and the depth map corresponding to the target image are input together into the pre-trained neural network model, the processor is configured to implement:
    determining a reference parameter, wherein the reference parameter serves as the base for normalization; and
    normalizing, according to the reference parameter, the depth information of the crops included in the depth map to obtain a normalized depth map.
  27. The UAV according to claim 20, wherein performing information fusion according to the confidence maps of the plurality of growth state categories and the radius estimation map comprises:
    taking the radius estimation map as a base map, and fusing and superimposing the confidence maps of the plurality of growth state categories onto the radius estimation map.
  28. The UAV according to claim 27, wherein fusing and superimposing the confidence maps of the plurality of growth state categories onto the radius estimation map comprises:
    determining a superposition order of the confidence maps of the plurality of growth state categories according to the number of confidence values in each confidence map; and
    fusing and superimposing the confidence maps of the plurality of growth state categories onto the radius estimation map in the superposition order.
  29. The UAV according to claim 20, wherein before performing information fusion according to the radius estimation map and the confidence maps of the plurality of growth state categories, the processor is configured to implement:
    performing maximum pooling on the confidence maps, and removing invalid crops according to a preset threshold;
    wherein the pixel values corresponding to the invalid crops are less than or equal to the preset threshold.
  30. The UAV according to claim 29, wherein the convolution kernel of the maximum pooling has a size of 5×5.
  31. The UAV according to claim 20, wherein determining the growth status of the crops according to the fusion map comprises:
    counting the number of crops corresponding to the normal growth category and the number of crops corresponding to the other categories in the fusion map; and
    determining the overall growth status of the crops according to the number of crops corresponding to the normal growth category and the number of crops corresponding to the other categories.
  32. The UAV according to claim 20, wherein the processor is configured to implement:
    obtaining flight information of the UAV when capturing the target image and the field of view of the UAV's photographing device when capturing the target image, wherein the flight information comprises a flight position and a flight altitude;
    obtaining the relative position of a crop in the target image with respect to the center point of the target image; and
    determining the actual position of the crop according to the flight position, the flight altitude, the field of view, and the relative position.
  33. The UAV according to claim 32, wherein the processor is configured to implement:
    outputting the actual positions of the crops when it is determined that the growth status of the crops fails to meet a preset growth requirement;
    wherein the preset growth requirement comprises one of: the proportion of crops in the abnormal growth category being greater than a first preset threshold, the proportion of crops in the missing-seedling growth category being greater than a second preset threshold, and the proportion of crops in the normal growth category being less than a third preset threshold.
  34. The UAV according to any one of claims 20 to 33, wherein the processor is configured to implement:
    obtaining sample data comprising multiple images of the same crop, the images being annotated with the multiple growth state categories of the crop and the radius values of the crop;
    dividing the sample data into training data and test data; and
    based on a determined neural network, performing model training with the training data and model testing with the test data, and obtaining, when the trained model converges, a neural network model to serve as the pre-trained neural network model.
  35. The UAV according to claim 34, wherein the sample data further comprises depth maps of the crops.
  36. The UAV according to any one of claims 20 to 33, wherein different types of crops correspond to different neural network models; and
    inputting the target image into the pre-trained neural network model comprises:
    identifying the type of crops in the target image;
    determining, according to a preset correspondence between crop types and neural network models, the neural network model corresponding to the type of crops in the target image; and
    inputting the target image into the determined neural network model.
  37. The UAV according to any one of claims 20 to 33, wherein the processor is configured to implement:
    generating a semantic map of the target area according to a plurality of color channels of the target image; and
    marking the crops in the semantic map according to the radius estimation map and the confidence maps of the plurality of growth state categories.
  38. The UAV according to any one of claims 20 to 33, wherein the processor is configured to implement:
    obtaining a semantic map of the target area; and
    correcting the semantic map according to the fusion map, or correcting the fusion map according to the semantic map.
  39. A control terminal, communicatively connected to a UAV and configured to control the flight of the UAV, the UAV comprising a gimbal and a photographing device mounted on the gimbal, the control terminal comprising:
    a processor and a memory;
    wherein the memory is configured to store a computer program and a pre-trained neural network model, and the processor is configured to execute the computer program and, when executing the computer program, implement the following steps:
    obtaining a target image captured by the UAV of a target area planted with crops;
    inputting the target image into the pre-trained neural network model to obtain a radius estimation map of the crops and confidence maps of a plurality of growth state categories;
    performing information fusion according to the radius estimation map and the confidence maps of the plurality of growth state categories to obtain a fusion map; and
    determining the growth status of the crops according to the fusion map.
  40. The control terminal according to claim 39, wherein the growth state categories comprise one or more of a normal growth category, an abnormal growth category, and a missing-seedling growth category.
  41. The control terminal according to claim 39, wherein the confidence map comprises target positions of the crops in the confidence map and confidence values corresponding to the target positions;
    wherein each confidence value is a probability value representing the reliability of the growth state category of the corresponding crop.
  42. The control terminal according to claim 39, wherein the radius estimation map comprises normalized radius values of the crops.
  43. The control terminal according to claim 39, wherein the neural network model comprises a fully convolutional neural network model; and/or
    the target image is an image obtained by the UAV controlling the photographing device to photograph the target area from a top-down view; and/or
    the target image input to the neural network model is an RGB image.
  44. The control terminal according to claim 39, wherein the processor is configured to implement:
    obtaining a depth map corresponding to the target image captured by the UAV of the crops in the target area, wherein the depth map comprises depth information of the crops, the depth information being the distance from the crops to the UAV;
    wherein inputting the target image into the pre-trained neural network model comprises:
    inputting the target image together with the depth map corresponding to the target image into the pre-trained neural network model;
    wherein the training data used in training the neural network model comprises depth information of crops.
  45. The control terminal according to claim 44, wherein before the target image and the depth map corresponding to the target image are input together into the pre-trained neural network model, the processor is configured to implement:
    determining a reference parameter, wherein the reference parameter serves as the base for normalization; and
    normalizing, according to the reference parameter, the depth information of the crops included in the depth map to obtain a normalized depth map.
  46. The control terminal according to claim 39, wherein performing information fusion according to the confidence maps of the plurality of growth state categories and the radius estimation map comprises:
    taking the radius estimation map as a base map, and fusing and superimposing the confidence maps of the plurality of growth state categories onto the radius estimation map.
  47. The control terminal according to claim 46, wherein fusing and superimposing the confidence maps of the plurality of growth state categories onto the radius estimation map comprises:
    determining a superposition order of the confidence maps of the plurality of growth state categories according to the number of confidence values in each confidence map; and
    fusing and superimposing the confidence maps of the plurality of growth state categories onto the radius estimation map in the superposition order.
  48. The control terminal according to claim 39, wherein before performing information fusion according to the radius estimation map and the confidence maps of the plurality of growth state categories, the processor is configured to implement:
    performing maximum pooling on the confidence maps, and removing invalid crops according to a preset threshold;
    wherein the pixel values corresponding to the invalid crops are less than or equal to the preset threshold.
  49. The control terminal according to claim 48, wherein the convolution kernel of the maximum pooling has a size of 5×5.
  50. The control terminal according to claim 39, wherein determining the growth status of the crops according to the fusion map comprises:
    counting the number of crops corresponding to the normal growth category and the number of crops corresponding to the other categories in the fusion map; and
    determining the overall growth status of the crops according to the number of crops corresponding to the normal growth category and the number of crops corresponding to the other categories.
  51. The control terminal according to claim 39, wherein the processor is configured to implement:
    obtaining flight information of the UAV when capturing the target image and the field of view of the UAV's photographing device when capturing the target image, wherein the flight information comprises a flight position and a flight altitude;
    obtaining the relative position of a crop in the target image with respect to the center point of the target image; and
    determining the actual position of the crop according to the flight position, the flight altitude, the field of view, and the relative position.
  52. The control terminal according to claim 51, wherein the processor is configured to implement:
    outputting the actual positions of the crops when it is determined that the growth status of the crops fails to meet a preset growth requirement;
    wherein the preset growth requirement comprises one of: the proportion of crops in the abnormal growth category being greater than a first preset threshold, the proportion of crops in the missing-seedling growth category being greater than a second preset threshold, and the proportion of crops in the normal growth category being less than a third preset threshold.
  53. The control terminal according to any one of claims 39 to 52, wherein the processor is configured to implement:
    obtaining sample data comprising multiple images of the same crop, the images being annotated with the multiple growth state categories of the crop and the radius values of the crop;
    dividing the sample data into training data and test data; and
    based on a determined neural network, performing model training with the training data and model testing with the test data, and obtaining, when the trained model converges, a neural network model to serve as the pre-trained neural network model.
  54. The control terminal according to claim 53, wherein the sample data further comprises depth maps of the crops.
  55. The control terminal according to any one of claims 39 to 52, wherein different types of crops correspond to different neural network models; and
    inputting the target image into the pre-trained neural network model comprises:
    identifying the type of crops in the target image;
    determining, according to a preset correspondence between crop types and neural network models, the neural network model corresponding to the type of crops in the target image; and
    inputting the target image into the determined neural network model.
  56. The control terminal according to any one of claims 39 to 52, wherein the processor is configured to implement:
    generating a semantic map of the target area according to a plurality of color channels of the target image; and
    marking the crops in the semantic map according to the radius estimation map and the confidence maps of the plurality of growth state categories.
  57. The control terminal according to any one of claims 39 to 52, wherein the processor is configured to implement:
    obtaining a semantic map of the target area; and
    correcting the semantic map according to the fusion map, or correcting the fusion map according to the semantic map.
  58. A monitoring device, comprising:
    a processor and a memory;
    wherein the memory is configured to store a computer program and a pre-trained neural network model, and the processor is configured to execute the computer program and, when executing the computer program, implement the following steps:
    obtaining a target image captured by a UAV of a target area planted with crops;
    inputting the target image into the pre-trained neural network model to obtain a radius estimation map of the crops and confidence maps of a plurality of growth state categories;
    performing information fusion according to the radius estimation map and the confidence maps of the plurality of growth state categories to obtain a fusion map; and
    determining the growth status of the crops according to the fusion map.
  59. A computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the steps of the growth monitoring method according to any one of claims 1 to 19.
PCT/CN2020/126731 2020-11-05 2020-11-05 Growth monitoring method for crops, and devices and storage medium WO2022094854A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/126731 WO2022094854A1 (en) 2020-11-05 2020-11-05 Growth monitoring method for crops, and devices and storage medium

Publications (1)

Publication Number Publication Date
WO2022094854A1

Family

ID=81458390


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106643529A (en) * 2016-09-30 2017-05-10 中国科学院、水利部成都山地灾害与环境研究所 Rapid measuring method for growing height of agricultural crops in mountainous areas based on unmanned aerial vehicle photographed image
CN109886094A (en) * 2019-01-08 2019-06-14 中国农业大学 A kind of crop growth of cereal crop seedlings seedling gesture capturing analysis method and device
CN109977924A (en) * 2019-04-15 2019-07-05 北京麦飞科技有限公司 For real time image processing and system on the unmanned plane machine of crops
CN110598619A (en) * 2019-09-06 2019-12-20 中国农业科学院农业资源与农业区划研究所 Method and system for identifying and counting fruit trees by using unmanned aerial vehicle images
US20200253127A1 (en) * 2018-09-11 2020-08-13 Pollen Systems Corporation Crop management method and apparatus with autonomous vehicles


Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115314851A (en) * 2022-07-05 2022-11-08 南京邮电大学 Agricultural information management platform based on big data platform
CN115355888A (en) * 2022-07-12 2022-11-18 重庆文理学院 Unmanned aerial vehicle ginger planting inspection system based on background image positioning
CN115355888B (en) * 2022-07-12 2024-03-15 重庆文理学院 Unmanned aerial vehicle ginger planting inspection system based on background image positioning
CN115454164A (en) * 2022-09-15 2022-12-09 马鞍山粤美智造电子科技有限公司 Automatic dish warming and temperature controlling system based on intelligent identification
CN116076211A (en) * 2022-12-07 2023-05-09 北京市农林科学院智能装备技术研究中心 Seedling water and fertilizer management method, system, electronic equipment and storage medium
CN116453003B (en) * 2023-06-14 2023-09-01 之江实验室 Method and system for intelligently identifying rice growth vigor based on unmanned aerial vehicle monitoring
CN116453003A (en) * 2023-06-14 2023-07-18 之江实验室 Method and system for intelligently identifying rice growth vigor based on unmanned aerial vehicle monitoring
CN116543309B (en) * 2023-06-28 2023-10-27 华南农业大学 Crop abnormal information acquisition method, system, electronic equipment and medium
CN116543309A (en) * 2023-06-28 2023-08-04 华南农业大学 Crop abnormal information acquisition method, system, electronic equipment and medium
CN116863403A (en) * 2023-07-11 2023-10-10 仲恺农业工程学院 Crop big data environment monitoring method and device and electronic equipment
CN116863403B (en) * 2023-07-11 2024-01-02 仲恺农业工程学院 Crop big data environment monitoring method and device and electronic equipment
CN116895045A (en) * 2023-07-12 2023-10-17 生态环境部南京环境科学研究所 Monitoring system and method for biological control of foreign invasive plant
CN116883875A (en) * 2023-07-18 2023-10-13 仲恺农业工程学院 Agricultural condition monitoring method and system by cooperation of unmanned aerial vehicle and ground robot
CN117132934A (en) * 2023-08-28 2023-11-28 广东省农业科学院设施农业研究所 System and method for supervising drug delivery safety interval period
CN117132934B (en) * 2023-08-28 2024-03-01 广东省农业科学院设施农业研究所 System and method for supervising drug delivery safety interval period
CN116993302A (en) * 2023-09-26 2023-11-03 济南天楚科技有限公司 Agricultural information intelligent management method and system based on big data
CN117146831A (en) * 2023-10-31 2023-12-01 北京佳格天地科技有限公司 Fruit tree growth state evaluation method and system based on machine learning and unmanned aerial vehicle
CN117146831B (en) * 2023-10-31 2024-01-02 北京佳格天地科技有限公司 Fruit tree growth state evaluation method and system based on machine learning and unmanned aerial vehicle
CN117315492A (en) * 2023-11-29 2023-12-29 中国平安财产保险股份有限公司 Planting risk early warning method, system, equipment and medium based on unmanned aerial vehicle technology
CN117315492B (en) * 2023-11-29 2024-04-02 中国平安财产保险股份有限公司 Planting risk early warning method, system, equipment and medium based on unmanned aerial vehicle technology
CN117765403A (en) * 2024-02-22 2024-03-26 山西余得水农牧有限公司 fertilizing method for improving lodging resistance and grain quality of crops
CN117765403B (en) * 2024-02-22 2024-04-30 山西余得水农牧有限公司 Fertilizing method for improving lodging resistance and grain quality of crops
CN118465868A (en) * 2024-07-11 2024-08-09 浙江大学山东(临沂)现代农业研究院 Agricultural meteorological data intelligent acquisition method and system based on big data
CN118521903A (en) * 2024-07-23 2024-08-20 广州海睿智能科技股份有限公司 High-integration multi-source heterogeneous agriculture condition online monitoring method and device

Similar Documents

Publication Publication Date Title
WO2022094854A1 (en) Growth monitoring method for crops, and devices and storage medium
US10942528B2 (en) Return path configuration for remote controlled aerial vehicle
US10648809B2 (en) Adaptive compass calibration based on local field conditions
CN109596118B (en) Method and equipment for acquiring spatial position information of target object
JP6609833B2 (en) Method and system for controlling the flight of an unmanned aerial vehicle
US20180122246A1 (en) Autonomous mission action alteration
WO2020181719A1 (en) Unmanned aerial vehicle control method, unmanned aerial vehicle, and system
US11543836B2 (en) Unmanned aerial vehicle action plan creation system, method and program
WO2020103108A1 (en) Semantic generation method and device, drone and storage medium
CN106931963A (en) Environmental data shared platform, unmanned vehicle, localization method and alignment system
JPWO2018062336A1 (en) Flight control device, unmanned airplane, flight control method, and computer readable recording medium
WO2020103109A1 (en) Map generation method and device, drone and storage medium
US11935289B2 (en) Agricultural analysis robotic systems and methods thereof
CN206804018U (en) Environmental data server, unmanned vehicle and alignment system
Liu et al. Development of a positioning system using UAV-based computer vision for an airboat navigation in paddy field
US12024284B2 (en) Information processing device, information processing method, and recording medium
CN112040175A (en) Unmanned aerial vehicle inspection method and device, computer equipment and readable storage medium
CN116724281A (en) Unmanned aerial vehicle control method, unmanned aerial vehicle, flight system and storage medium
CN107205111A (en) Camera device, mobile device, camera system, image capture method and recording medium
CN108007437B (en) Method for measuring farmland boundary and internal obstacles based on multi-rotor aircraft
WO2019167210A1 (en) Control device, mobile body, and program
US20240019866A1 (en) Aerial vehicle control method and apparatus, aerial vehicle, and storage medium
WO2022061632A1 (en) Obstacle detection method and apparatus, and unmanned aerial vehicle and storage medium
CN113110597A (en) Indoor unmanned aerial vehicle autonomous flight system based on ROS system
WO2021207977A1 (en) Movable platform operation method, movable platform and electronic device

Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 20960315; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: PCT application non-entry in European phase (Ref document number: 20960315; Country of ref document: EP; Kind code of ref document: A1)