WO2020078396A1 - Method for determining distribution information, and control method and device for an unmanned aerial vehicle


Info

Publication number
WO2020078396A1
WO2020078396A1 · PCT/CN2019/111515 · CN2019111515W
Authority
WO
WIPO (PCT)
Prior art keywords
image information
target object
distribution information
area
Application number
PCT/CN2019/111515
Other languages
English (en)
Chinese (zh)
Inventor
代双亮
Original Assignee
广州极飞科技有限公司
Application filed by 广州极飞科技有限公司
Priority to US17/309,058 (published as US20210357643A1)
Priority to JP2021520573 (published as JP2022502794A)
Priority to KR1020217014072 (published as KR20210071062A)
Priority to AU2019362430 (published as AU2019362430B2)
Priority to CA3115564 (published as CA3115564A1)
Priority to EP19873665.4 (published as EP3859479A4)
Publication of WO2020078396A1

Classifications

    • G PHYSICS
      • G05 CONTROLLING; REGULATING
        • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
          • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
            • G05D1/0094 involving pointing a payload, e.g. camera, weapon, sensor, towards a fixed or moving target
            • G05D1/10 Simultaneous control of position or course in three dimensions
              • G05D1/101 specially adapted for aircraft
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
          • G06N3/00 Computing arrangements based on biological models
            • G06N3/02 Neural networks
              • G06N3/04 Architecture, e.g. interconnection topology
                • G06N3/045 Combinations of networks
        • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
          • G06V20/00 Scenes; Scene-specific elements
            • G06V20/10 Terrestrial scenes
              • G06V20/13 Satellite images
              • G06V20/188 Vegetation
      • G08 SIGNALLING
        • G08G TRAFFIC CONTROL SYSTEMS
          • G08G5/00 Traffic control systems for aircraft, e.g. air-traffic control [ATC]
            • G08G5/003 Flight plan management
    • B PERFORMING OPERATIONS; TRANSPORTING
      • B64 AIRCRAFT; AVIATION; COSMONAUTICS
        • B64C AEROPLANES; HELICOPTERS
          • B64C39/00 Aircraft not otherwise provided for
            • B64C39/02 characterised by special use
              • B64C39/024 of the remote controlled vehicle type, i.e. RPV
        • B64D EQUIPMENT FOR FITTING IN OR TO AIRCRAFT; FLIGHT SUITS; PARACHUTES; ARRANGEMENT OR MOUNTING OF POWER PLANTS OR PROPULSION TRANSMISSIONS IN AIRCRAFT
          • B64D1/00 Dropping, ejecting, releasing, or receiving articles, liquids, or the like, in flight
            • B64D1/16 Dropping or releasing powdered, liquid, or gaseous matter, e.g. for fire-fighting
              • B64D1/18 by spraying, e.g. insecticides
        • B64U UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
          • B64U2101/00 UAVs specially adapted for particular uses or applications
            • B64U2101/30 for imaging, photography or videography
            • B64U2101/40 for agriculture or forestry operations

Definitions

  • The present application relates to the field of plant protection, and in particular to a method for determining distribution information, and a method and device for controlling an unmanned aerial vehicle.
  • In the related art, drones generally adopt a uniform spraying scheme for herbicides or defoliants. Uniform spraying wastes a large amount of pesticide and leaves pesticide residue, while areas with severe weed damage may still be under-treated, causing great economic loss.
  • This embodiment provides a method for determining distribution information, and a method and a device for controlling an unmanned aerial vehicle, so as to at least solve the technical problems of pesticide waste and pesticide residue caused by the difficulty of distinguishing crops from weeds in the related art.
  • According to one aspect, a method for controlling an unmanned aerial vehicle includes: acquiring image information to be processed of a target area; inputting the image information to be processed into a preset model for analysis to obtain distribution information of target objects in the image to be processed, where the preset model is obtained by training on multiple sets of data, and each set of data includes sample image information of the target area and a label identifying the distribution information of the target object in the sample image information; and controlling the unmanned aerial vehicle to spray medicine onto the target object according to the distribution information corresponding to the image to be processed.
  • Optionally, the step of training the preset model includes:
  • acquiring sample image information, marking the position of the target object in the sample image information to obtain a label of the distribution information of the target object corresponding to the sample image information, and inputting the sample image and the corresponding label into the preset model;
  • processing the sample image information with a first convolutional network model to obtain a first convolution image, processing it with a second convolutional network model whose convolution kernel differs from the first to obtain a second convolution image, and merging the two convolution images to obtain a merged image; and
  • performing deconvolution processing on the merged image, and performing back propagation according to the deconvolution result and the label of the sample image to adjust the parameters of each part of the preset model.
  • Optionally, inputting the image information to be processed into the preset model for analysis to obtain the distribution information of the target objects in the image information to be processed includes: generating a density map of the target object, wherein the value of each pixel in the density map is the distribution density value of the target object at the position corresponding to that pixel.
  • the above sample image information includes: a density map of the target object, which is used to reflect the density of the target object in each distribution area in the target area.
  • the density map has an identifier for indicating the density of the target object.
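  • As an illustration of the density-map semantics described above, summing pixel values over a region approximates the number of target objects in that region. The sketch below uses a hypothetical 3x3 map; the values and region bounds are invented for the example.

```python
# Hypothetical 3x3 density map: each pixel stores the local density of weeds.
density_map = [
    [0.0, 0.2, 0.3],
    [0.1, 0.5, 0.4],
    [0.0, 0.1, 0.2],
]

def region_count(dmap, r0, r1, c0, c1):
    """Integrate the density over a pixel region to estimate the weed count there."""
    return sum(dmap[r][c] for r in range(r0, r1) for c in range(c0, c1))

total = region_count(density_map, 0, 3, 0, 3)  # integrate over the whole map
```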
  • Optionally, the above distribution information includes at least one of the following: the density of the target object in each distribution area of the target area, and the area of the distribution area where the target object is located. Controlling the unmanned aerial vehicle to spray the target object with medicine according to the distribution information includes: determining the amount or duration of medicine spraying by the UAV in a distribution area according to the density of the target object in that area; and/or determining the spraying range of the medicine according to the area of the distribution area where the target object is located.
  • the distribution information further includes: a distribution area of the target object in the target area; the method further includes: determining a flight path of the unmanned aerial vehicle according to the location of the distribution area of the target object; controlling the unmanned aerial vehicle to move according to the flight path.
  • Optionally, after controlling the drone to spray medicine according to the distribution information, the method further includes: detecting the remaining distribution area of the unmanned aerial vehicle in the target area, wherein the remaining distribution area is the distribution area in the target area that has not yet been sprayed; determining the density of the target object in the remaining distribution area and the total area of the remaining distribution area; determining the total amount of medicine required for the remaining distribution area according to that density and total area; calculating the difference between the remaining medicine amount of the unmanned aerial vehicle and the required total amount; and comparing the difference with a preset threshold and adjusting the flight path of the unmanned aerial vehicle according to the comparison result.
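  • The remaining-dose check described above can be sketched as follows. The dose model (medicine needed equals density times area times dose per object) and all values are hypothetical simplifications for illustration, not the application's exact formula.

```python
def adjust_route(density, remaining_area_m2, dose_per_object, tank_remaining, threshold=0.0):
    """Decide whether the UAV can finish the unsprayed area with the medicine left
    in its tank. The dose model and units are hypothetical simplifications."""
    required = density * remaining_area_m2 * dose_per_object  # total medicine needed
    difference = tank_remaining - required
    # Compare the difference with a preset threshold, as described above.
    return "continue" if difference >= threshold else "return_to_reload"
```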
  • Optionally, before controlling the drone to spray the target object according to the distribution information, the method further includes: determining the target dosage of the unmanned aerial vehicle according to the size of the distribution area of the target object and the density of the target object in the distribution area.
  • According to another aspect, a control device for an unmanned aerial vehicle includes: an acquisition module for acquiring image information of a target area; an analysis module for inputting the image information into a preset model for analysis to obtain the distribution information of the target object in the target area, wherein the preset model is obtained by training on multiple sets of data, and each set of data includes sample image information of the target area and a label identifying the distribution information of the target object in the sample image information; and a control module for controlling the unmanned aerial vehicle to spray medicine onto the target object according to the distribution information.
  • According to another aspect, an unmanned aerial vehicle includes: an image acquisition device for acquiring image information of a target area; and a processor for inputting the image information into a preset model for analysis to obtain the distribution information of the target object in the target area, wherein the preset model is obtained by training on multiple sets of data, and each set of data includes sample image information of the target area and a label identifying the distribution information of the target object in the sample image information, and for controlling the unmanned aerial vehicle to spray medicine onto the target object according to the distribution information.
  • According to another aspect, an unmanned aerial vehicle includes: a communication module for receiving image information of a target area from a designated device, wherein the designated device includes a network-side server or a mapping drone; and
  • a processor for inputting the image information into a preset model for analysis to obtain the distribution information of the target object in the target area, wherein the preset model is obtained by training on multiple sets of data, and each set of data includes sample image information of the target area and a label identifying the distribution information of the target object in the sample image information, and for controlling the unmanned aerial vehicle to spray medicine onto the target object according to the distribution information.
  • a storage medium including a stored program, wherein, when the program is running, the device where the storage medium is located is controlled to perform the above method for determining distribution information.
  • a processor for running a program, where the above method for determining distribution information is executed when the program is run.
  • According to another aspect, a method for determining distribution information of a target object includes: acquiring image information of a target area; and inputting the image information into a preset model for analysis to obtain the distribution information of the target object in the target area, wherein the preset model is obtained by training on multiple sets of data, and each set of data includes sample image information and a label identifying the distribution information of the target object in the sample image information.
  • Optionally, the step of training the preset model includes:
  • acquiring sample image information, marking the position of the target object in the sample image information to obtain a label of the distribution information of the target object corresponding to the sample image information, and inputting the sample image and the corresponding label into the preset model;
  • processing the sample image information with a first convolutional network model to obtain a first convolution image, processing it with a second convolutional network model whose convolution kernel differs from the first to obtain a second convolution image, and merging the two convolution images to obtain a merged image; and
  • performing deconvolution processing on the merged image, performing back propagation according to the deconvolution result and the label of the sample image, and adjusting the parameters of each part of the preset model.
  • Optionally, inputting the image information to be processed into the preset model for analysis to obtain the distribution information of the target objects includes: generating a density map of the target object, wherein the value of each pixel in the density map is the distribution density value of the target object at the position corresponding to that pixel.
  • the sample image information includes: a density map of the target object, which is used to reflect the density of the target object in each distribution area in the target area.
  • the density map has an indicator for indicating the density of the target object.
  • Optionally, a target sales area of the medicine is determined according to the density maps of the target objects in multiple target areas.
  • the distribution information further includes: a distribution area of the target object in the target area; the above method further includes: determining a flight route of the unmanned aerial vehicle according to the location of the distribution area of the target object.
  • Optionally, the method further includes: determining the type of the target object; determining medication information for each sub-area in the target area according to the type and the distribution information, where the medication information includes the medicine type and the target medicine spray amount for the target object in that sub-area; and adding mark information identifying the medication information to the image information of the target area to obtain a prescription map of the target area.
  • the target area is farmland to be applied, and the target object is weeds.
  • In the embodiments, the image information of the target area is obtained; the image information is input into a preset model for analysis to obtain the distribution information of the target object in the target area, wherein the preset model is obtained by training on multiple sets of data, and each set of data includes image information of the target area and a label identifying the distribution information of the target object in the image information; and the unmanned aerial vehicle is controlled to spray the target object with medicine according to the distribution information.
  • FIG. 1 is a schematic flowchart of a method for controlling an unmanned aerial vehicle according to an embodiment of the present application
  • FIG. 2 is a schematic flowchart of training a preset model according to an embodiment of the present application
  • FIG. 3a and FIG. 3b are schematic diagrams of sample images and their annotations according to embodiments of the present application.
  • FIG. 4 is a schematic flowchart of another training preset model according to an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a density map according to an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of an optional control device for an unmanned aerial vehicle according to an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of an optional unmanned aerial vehicle according to an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of another optional unmanned aerial vehicle according to an embodiment of the present application.
  • FIG. 9 is a schematic flowchart of a method for determining distribution information according to an embodiment of the present application.
  • FIG. 1 is a schematic flowchart of a method for controlling an unmanned aerial vehicle according to this embodiment. As shown in FIG. 1, the method includes the following steps:
  • Step S102 Acquire image information to be processed in the target area.
  • the image information to be processed may be obtained by capturing an image of the target area through an image acquisition device provided on the UAV.
  • the target area may be one or more agricultural fields to be applied.
  • the UAV may be provided with a positioning system, so as to determine area information and latitude and longitude information of the current target area according to the positioning system.
  • Step S104 Input image information to be processed into a preset model for analysis to obtain distribution information of the target object in the target area.
  • the target object may be weeds in the farmland.
  • the preset model is obtained by training multiple sets of data, and each set of data in the multiple sets of data includes: sample image information of the target area, and a label used to identify the distribution information of the target object in the sample image information.
  • a weed recognition model for recognizing the type of weed can be trained.
  • The weed recognition model is obtained by training on multiple sets of data, where each set of data includes: sample image information of the target area, and a label used to identify the type of the target object in the sample image information.
  • the image information is input to a preset weed identification model for analysis to obtain the type of the target object in the target area, where the target object is weed.
  • Step S106 Control the unmanned aerial vehicle to spray medicine on the target object according to the distribution information corresponding to the image to be processed.
  • the distribution information may be: the density of the target object in each distribution area in the target area, and the area of the distribution area where the target object is located.
  • Controlling the unmanned aerial vehicle to spray medicine to the target object according to the distribution information may be achieved in the following ways:
  • the amount of drug sprayed or the duration of spraying of the UAV in the distribution area is determined according to the density of the target object in the distribution area; and / or the range of drug spraying is determined according to the area of the distribution area where the target object is located.
  • Optionally, the greater the density of the target object in a distribution area, the greater the amount of medicine sprayed by the UAV in that distribution area, and the longer the spray duration.
  • the density of the target object in the distribution area and the area of the target object in the distribution area are comprehensively considered to determine the amount of medicine sprayed by the UAV in the corresponding distribution area.
  • the spray amount is determined according to the density of the target object in the distribution area.
  • the spraying range of the medicine may be a vertical range or a horizontal range.
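  • The spray-amount and spray-duration determination described above can be sketched as a simple proportional model; the dose rate and nozzle flow rate below are invented placeholders, not values from the application.

```python
def spray_plan(density, area_m2, dose_per_object=0.002, flow_rate=0.05):
    """Spray amount grows with weed density and distribution-area size; spray
    duration follows from the nozzle flow rate. All rates here are hypothetical."""
    amount = density * area_m2 * dose_per_object  # litres of medicine to spray
    duration = amount / flow_rate                 # seconds at the given flow rate
    return amount, duration
```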
  • Optionally, the distribution information of the target object further includes the distribution area of the target object in the target area. Specifically, the pixel range occupied by the distribution area in the image can be determined according to the acquired image information of the target area, and/or the latitude and longitude range occupied by the target object in the target area can be obtained through the positioning device.
  • the flight path of the unmanned aerial vehicle may be determined according to the location of the distribution area of the target object, and the unmanned aerial vehicle may be controlled to move according to the flight path.
  • the flight route can be determined in an area free of weeds, and the unmanned aerial vehicle can be controlled to move according to the flight route.
  • After controlling the drone to spray medicine onto the target object according to the distribution information, the method may further include:
  • detecting the remaining distribution area of the UAV in the target area, wherein the remaining distribution area is the distribution area of unsprayed medicine in the target area; determining the density of the target object in the remaining distribution area and the total area of the remaining distribution area; determining the total amount of medicine required for the remaining distribution area according to that density and total area; calculating the difference between the remaining medicine amount of the UAV and the required total amount; and comparing the difference with a preset threshold and adjusting the flight path of the UAV according to the comparison result.
  • Optionally, the flight route of the unmanned aerial vehicle can be adjusted to a return route so that the UAV can reload the pesticide.
  • the farmland on the return route can be sprayed.
  • the return route can be planned according to the area of the target object that has not been sprayed with the drug and the amount of remaining medicine, so as to spray a whole area on the way back.
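  • The idea of spraying part of the unsprayed area on the way back, limited by the remaining medicine, can be sketched as follows; the per-square-metre dose and strip area are hypothetical values for illustration.

```python
def sprayable_on_return(tank_remaining, dose_per_m2, unsprayed_strip_m2):
    """Area of the unsprayed strip along the return route that the remaining
    medicine can cover (a hypothetical simplification of the planning step)."""
    coverable = tank_remaining / dose_per_m2  # area the remaining medicine covers
    return min(coverable, unsprayed_strip_m2)
```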
  • Optionally, the image information of the target area may be obtained through an image acquisition device; the image information is input into a preset model to determine the distribution information of the target object in the target area, and the target medication amount of the UAV is determined according to the size of the distribution area of the target object and the density of the target object in that distribution area.
  • That is, a target medication amount is determined for each combination of distribution-area size and target-object density: a larger distribution area with a lower density, a smaller distribution area with a lower density, and a larger distribution area with a higher density each correspond to their own target medication amount.
  • the training method of the preset model may include the following steps.
  • Step S302 Obtain sample image information and mark the position of the target object in the sample image information to obtain a label of distribution information of the target object corresponding to the sample image information;
  • the image corresponding to the sample image information is an RGB image.
  • the distribution information of the target object in the sample image information can be identified by a label.
  • the label includes the latitude and longitude distribution range of the target object in the target area and / or the pixel distribution range in the picture.
  • A cross "x" can be used to indicate a crop area, and a circle "○" can be used to indicate a weed area.
  • Fig. 3b is the identification of the target object on the real electronic map. The dark areas are weeds and the light areas are crops.
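  • A minimal sketch of how point annotations like those above could be turned into a density-map label, assuming one marked pixel per weed; real pipelines typically spread each point with a Gaussian kernel, which is omitted here for brevity.

```python
def points_to_density(height, width, weed_points):
    """Turn point annotations (one marked pixel per weed) into a coarse
    density-map label as nested lists of floats."""
    dmap = [[0.0] * width for _ in range(height)]
    for r, c in weed_points:
        dmap[r][c] += 1.0  # each annotated weed contributes one unit of density
    return dmap

# Hypothetical 4x4 image with three annotated weeds (two at the same pixel).
label = points_to_density(4, 4, [(0, 1), (2, 2), (2, 2)])
```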
  • Step S304 Use the first convolutional network model in the preset model to process the sample image information to obtain a first convolution image of the sample image information;
  • Step S306 The second convolutional network model in the preset model is used to process the sample image information to obtain a second convolution image of the sample image information, wherein the convolution kernels used in the first convolutional network model and the second convolutional network model are different;
  • the size of the convolution kernel of the first convolutional network model may be 3 * 3, and the convolution step size may be set to 2.
  • the sample image information is an RGB image, and has three dimensions of R, G, and B.
  • In the process of using the first convolutional network model to convolve the labeled image to obtain the first convolution image, downsampling may be performed; in addition, the dimensions of the first convolution image can also be set.
  • When the first convolutional network model is used to convolve the labeled image to obtain the first convolution image, multiple convolutions can be performed; each convolution uses a 3 * 3 kernel with a convolution step of 2, and each convolution performs downsampling. After each downsampling, the image is 1/2 the size of the image before downsampling, which greatly reduces the amount of data processing and increases the speed of data calculation.
  • the size of the convolution kernel of the second convolutional network model can be set to 5 * 5, and the convolution step size can be set to 2.
  • the sample image information is an RGB image and has three dimensions of R, G, and B.
  • In the process of using the second convolutional network model to perform convolution processing on the marked image to obtain the second convolution image, downsampling may be performed.
  • the dimensions of the second convolutional image can also be set.
  • When the second convolutional network model is used to convolve the marked image to obtain the second convolution image, multiple convolutions may be performed; each convolution uses a 5 * 5 kernel with a convolution step of 2, and downsampling is performed each time. After each downsampling, the image is 1/2 the size of the image before downsampling, which greatly reduces the amount of data processing and increases the speed of data calculation.
  • the first convolution image and the second convolution image have the same image size.
  • Step S308 Combine the first convolution image and the second convolution image of the sample image information to obtain a merged image
  • Step S310 Perform deconvolution processing on the merged image, and perform back propagation according to the deconvolution processing result and the label of the sample image to adjust parameters of each part of the preset model.
  • Optionally, the merged image needs to be deconvoluted as many times as the number of convolutions performed from the sample image information to the first convolution image, and the dimensions of the deconvoluted image can be set.
  • the deconvolution kernel size can be set to 3 * 3.
  • After deconvolution, the size of the image is the same as the size of the sample image information.
  • the back propagation is performed to adjust the parameters of each layer of the preset model.
  • the preset model can have the ability to identify the distribution position of the target object in the image to be processed.
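  • The conv/deconv bookkeeping above can be checked with simple size arithmetic: each stride-2 convolution halves the spatial size, and an equal number of stride-2 deconvolutions restores the original size. The 256-pixel input below is a hypothetical resolution chosen for the example.

```python
def after_convs(size, n):
    """Spatial size after n stride-2 convolutions (each halves the image)."""
    for _ in range(n):
        size //= 2
    return size

def after_deconvs(size, n):
    """Spatial size after n stride-2 deconvolutions (each doubles the image)."""
    for _ in range(n):
        size *= 2
    return size

side = 256                                # hypothetical input resolution
downsampled = after_convs(side, 3)        # three convolutions per branch
restored = after_deconvs(downsampled, 3)  # three matching deconvolutions
```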
  • FIG. 4 is a schematic flowchart of another method for training the preset model provided by this embodiment; the method includes the following steps:
  • Step S402 Acquire sample image information, label the position of the target object in the sample image information to obtain a label of the distribution information of the target object corresponding to the sample image information, and input the sample image and the corresponding label into the preset model.
  • the image corresponding to the sample image information is an RGB image.
  • the distribution information of the target object in the sample image information can be identified by a label.
  • the label includes the latitude and longitude distribution range of the target object in the target area and / or the pixel distribution range in the picture.
  • the first convolution network model in the preset model is used to process the sample image information to obtain a first convolution image of the sample image information.
  • the size of the convolution kernel of the first convolutional network model is 3 * 3, and the convolution step size can be set to 2.
  • the image corresponding to the sample image information is an RGB image, which has three dimensions of R, G, and B.
  • In the process of convolving the labeled image with the first convolutional network model to obtain the first convolution image, downsampling can be performed.
  • the dimensions of the first convolution image can also be set.
  • In FIG. 4, a total of three convolutions are performed in this branch, namely step S4042, step S4044, and step S4046 in sequence.
  • Each convolution is down-sampled and the convolution step is set to 2.
  • the image is 1/2 the size of the image before downsampling, which can greatly reduce the amount of data processing and increase the speed of data calculation.
  • n1, n2, and n3 are the corresponding dimension corresponding to each set of convolution, and the dimension is used to represent the length of the data vector corresponding to each pixel of the first convolution image.
  • For example, if the dimension corresponding to a pixel of the convolved image is 1, the data corresponding to the pixel can be a gray value; if the dimension is 3 (for example, n1 is set to 3 after the first convolution), the data corresponding to the pixel can be an RGB value.
  • the second convolutional network model in the preset model is used to process the sample image information to obtain a second convolutional image of the sample image information, wherein the first convolutional network model and the first The convolution kernel used in the two-convolution network model is different.
  • the size of the convolution kernel of the second convolutional network model can be set to 5 * 5, and the convolution step size can be set to 2.
  • the image corresponding to the sample image information is an RGB image, and has three dimensions of R, G, and B.
  • In the process of convolving the marked image with the second convolutional network model to obtain the second convolution image, downsampling can be performed.
  • the dimensions of the second convolutional image can also be set.
  • When the second convolutional network model is used to convolve the marked image to obtain the second convolution image, multiple convolutions may be performed; each convolution uses a 5 * 5 kernel.
  • The convolution step size can be set to 2, and downsampling is performed each time. After each downsampling, the image is 1/2 the size of the image before downsampling, which greatly reduces the amount of data processing and increases the speed of data calculation.
  • In FIG. 4, a total of three convolutions are performed in this branch, namely step S4062, step S4064, and step S4066 in sequence.
  • Each convolution is down-sampled, and the convolution step is set to 2.
  • the image is 1/2 the size of the image before downsampling, which can greatly reduce the amount of data processing and increase the speed of data calculation.
  • m1, m2, and m3 are respectively the dimensions set for each convolution, where the dimension represents the length of the data vector corresponding to each pixel of the second convolution image.
  • if the dimension of a pixel of the image after the first convolution is 1, the data corresponding to that pixel can be a gray value; if the dimension is 3, the data corresponding to that pixel can be an RGB value.
  • when the second convolutional network model is used to convolve the labeled image to obtain the second convolution image, multiple convolutions can be performed, with a 5 * 5 convolution kernel each time.
  • the first convolution image and the second convolution image have the same image size.
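Because the two branches produce feature maps of the same spatial size, they can be merged channel-wise. The sketch below assumes 8 x 8 feature maps with n3 = m3 = 16 channels; these values are illustrative, not taken from the patent.

```python
import numpy as np

n3, m3 = 16, 16  # assumed channel counts of the two branches' outputs
first = np.zeros((n3, 8, 8))   # first convolution image (channels, H, W)
second = np.zeros((m3, 8, 8))  # second convolution image, same spatial size
merged = np.concatenate([first, second], axis=0)  # merge along the channel axis
print(merged.shape)  # (32, 8, 8)
```

Matching spatial sizes are what make this concatenation possible, which is why both branches downsample the input the same number of times.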
  • Step S408: Merge the first convolution image and the second convolution image of the sample image information to obtain a merged image.
  • Deconvolution processing is performed on the merged image.
  • the merged image is deconvoluted as many times as the sample image information was convolved to obtain the first convolution image, and the dimension of the deconvoluted image may be set.
  • three deconvolutions are performed on the merged image, namely step S4102, step S4104, and step S4106, to obtain a density map of the sample image information of the target area, that is, step S412.
  • the size of the deconvolution kernel can be set to 3 * 3.
  • the size of the image is the same as the size of the sample image information.
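Restoring the original size with three deconvolutions can be checked with the transposed-convolution size formula. The 3 * 3 kernel is from the text above; the stride, padding, output padding, and the 8-pixel starting size are assumptions chosen so each deconvolution doubles the spatial size.

```python
def deconv_out_size(n, kernel=3, stride=2, padding=1, output_padding=1):
    """Spatial size after one transposed convolution (deconvolution)."""
    return (n - 1) * stride - 2 * padding + kernel + output_padding

size = 8  # assumed spatial size of the merged image
for _ in range(3):  # steps S4102, S4104, S4106
    size = deconv_out_size(size)
print(size)  # 64: restored to the assumed original image size
```

Three doublings exactly undo the three halvings of the convolution stage, so the density map aligns pixel-for-pixel with the input image.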
  • Deconvolution processing is performed on the merged image, and back propagation is performed according to the deconvolution processing result and the label of the sample image, and parameters of each part of the preset model are adjusted.
  • the preset model can have the ability to identify the distribution position of the target object in the image to be processed.
  • the image information to be processed can be input into the trained preset model.
  • the first convolutional network model in the preset model is used to process the image information to be processed to obtain a first convolution image of the image information to be processed.
  • the second convolutional network model in the preset model is used to process the image information to be processed to obtain a second convolution image of the image information to be processed.
  • the value of the pixel in the density map is the distribution density value of the target object at the position corresponding to the pixel.
  • the density map has an indicator for indicating the density of the target object. For example, the lighter the color of the distribution area in the density map, the greater the density of the target object in the distribution area.
  • FIG. 5 is a density map obtained after the processing. In FIG. 5, if the color of area A is lighter than that of area B, the density of the aggregated target object in area A is greater.
  • the value of a pixel in the density map is the distribution density value of the target object at the position corresponding to the pixel.
  • the deconvoluted density map may be a grayscale map.
  • the deconvolution result is a grayscale image; in the image, white is 255 and black is 0.
  • the larger the gray value, the denser the distribution of target objects in the target area. That is, where the color is lighter, the weeds are more densely distributed; where the color is darker, the weeds are more sparsely distributed.
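Reading the density map numerically can be sketched as follows. The 4 x 4 map values and the 0.25 threshold are invented placeholders for illustration, not values from the patent.

```python
import numpy as np

# Toy 4 x 4 density map: each pixel holds the estimated density of the
# target object (e.g. weeds) at the corresponding location.
density = np.array([
    [0.0, 0.1, 0.0, 0.0],
    [0.2, 0.9, 0.4, 0.0],
    [0.1, 0.5, 0.3, 0.0],
    [0.0, 0.0, 0.0, 0.0],
])

total = round(float(density.sum()), 1)  # overall amount of the target object
dense = density > 0.25                  # sub-areas dense enough to treat first
print(total, int(dense.sum()))          # 2.5 4
```

Summing the map gives an overall quantity estimate, while thresholding picks out the sub-areas where spraying should be concentrated.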
  • in this embodiment, a preset model is used to analyze the image information of the target area to obtain the distribution information of the target object in the target area, and the unmanned aerial vehicle is controlled to spray medicine on the target object based on that distribution information. Specifically: acquire image information of the target area; input the image information into a preset model for analysis to obtain the distribution information of the target object in the target area, wherein the preset model is obtained by training on multiple sets of data, and each set of data includes image information of the target area and a label identifying the distribution information of the target object in the image information; and control the unmanned aerial vehicle to spray medicine on the target object according to the distribution information.
  • FIG. 6 is a schematic structural diagram of an optional unmanned aerial vehicle control device according to this embodiment. As shown in FIG. 6, the device includes: an acquisition module 62, an analysis module 64, and a control module 66. Among them:
  • the obtaining module 62 is used to obtain image information of the target area
  • the analysis module 64 is configured to input the image information to be processed into a preset model for analysis to obtain distribution information of the target object in the image information to be processed, wherein the preset model is obtained by training through multiple sets of data Each of the multiple sets of data includes: sample image information of the target area, and a label for identifying distribution information of the target object in the sample image information.
  • the control module 66 is configured to control the unmanned aerial vehicle to spray medicine on the target object according to the distribution information corresponding to the image to be processed.
  • the device includes: an image acquisition device 72 and a processor 74. Among them:
  • the image acquisition device 72 is used to acquire image information to be processed in the target area
  • the processor 74 is configured to input the to-be-processed image information into a preset model for analysis to obtain distribution information of the target object in the to-be-processed image information, wherein the preset model is obtained through training on multiple sets of data, and each of the multiple sets of data includes: sample image information of the target area and a label used to identify the distribution information of the target object in the sample image information; and to control the unmanned aerial vehicle to spray medicine on the target object according to the distribution information corresponding to the image information to be processed.
  • FIG. 8 is a schematic structural diagram of an optional unmanned aerial vehicle control device according to this embodiment. As shown in FIG. 8, the device includes: a communication module 82 and a processor 84. Among them:
  • the communication module 82 is configured to receive image information to be processed from the target area of the designated device.
  • the processor 84 is configured to input the image information to be processed into a preset model for analysis to obtain distribution information of the target object in the image to be processed, wherein the preset model is obtained through training on multiple sets of data, and each set of data in the plurality of sets of data includes: sample image information of the target area and a label for identifying distribution information of the target object in the sample image information; and to control the unmanned aerial vehicle to spray the medicine on the target object according to the distribution information corresponding to the image to be processed.
  • for other details of the unmanned aerial vehicle control device, reference can be made to the related description of the steps shown in FIG. 1; details are not repeated here.
  • FIG. 9 is a schematic flowchart of a method for determining distribution information according to this embodiment. As shown in Figure 9, the method includes:
  • Step S902 Acquire image information to be processed in the target area
  • Step S904 Input the image information to be processed into a preset model for analysis to obtain the distribution information of the target object in the image information to be processed, wherein the preset model is obtained by training multiple sets of data, and each set of data in the multiple sets of data Both include: sample image information and a label used to identify the distribution information of the target object in the sample image information.
  • the step of training the preset model includes: obtaining sample image information, labeling the position of the target object in the sample image information to obtain a label of the distribution information of the target object corresponding to the sample image information, and inputting the sample image and the corresponding label into a preset model; using the first convolutional network model in the preset model to process the sample image information to obtain a first convolution image of the sample image information; using the second convolutional network model in the preset model to process the sample image information to obtain a second convolution image of the sample image information, wherein the convolution kernels used in the first convolutional network model and the second convolutional network model are different; merging the first convolution image and the second convolution image of the sample image information to obtain a merged image; and performing deconvolution processing on the merged image, back-propagating according to the deconvolution processing result and the label of the sample image, and adjusting the parameters of each part of the preset model.
  • the step of processing the to-be-processed image through the preset model includes: inputting the to-be-processed image information into the trained preset model; using the first convolutional network model in the preset model to process the to-be-processed image information to obtain a first convolution image of the image information to be processed; using the second convolutional network model in the preset model to process the to-be-processed image information to obtain a second convolution image of the image information to be processed; merging the first convolution image and the second convolution image of the image information to be processed; and performing deconvolution processing on the merged image to obtain the density map corresponding to the image to be processed as the distribution information of the target object in the image to be processed.
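At shape level, the inference steps above can be sketched as follows. The image size, channel counts, and the crude stand-ins for the learned convolutions and deconvolutions are all assumptions for illustration; a real model would use trained convolution weights rather than slicing and averaging.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((3, 64, 64))  # assumed to-be-processed RGB image (C, H, W)

def branch(x, channels):
    """Stand-in for one convolutional branch: halve the spatial size three
    times and set the channel dimension (a real branch learns its kernels)."""
    for _ in range(3):
        x = x[:, ::2, ::2]  # stride-2 downsampling stand-in
    return np.repeat(x[:1], channels, axis=0)

first = branch(image, 16)   # first convolutional network model output
second = branch(image, 16)  # second model output (different kernel in practice)
merged = np.concatenate([first, second], axis=0)  # channel-wise merge
density = merged.mean(axis=0)                     # deconvolution stand-in
density = np.kron(density, np.ones((8, 8)))       # upsample back to 64 x 64
print(density.shape)  # (64, 64)
```

The point of the sketch is the bookkeeping: two branches, a channel-wise merge, and an upsampling stage that returns a single-channel density map at the input resolution.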
  • the value of the pixel in the density map is the distribution density value of the target object at the position corresponding to the pixel.
  • the density map is used to reflect the density of the target object in each distribution area in the target area.
  • the density map has an indicator for indicating the density of the target object, and the identifier may be different colors or different depths of the same color or digital information, etc.
  • the target sales area of the medicine can be determined according to the density maps of the target objects in multiple target areas. For example, a sales area where the density indicated by the density map is greater requires more medicine, so the target sales area is determined indirectly.
  • the above distribution information may further include: the distribution area of the target object in the target area; at this time, the flight path of the UAV may be determined according to the location of the distribution area of the target object.
  • the prescription map of the target area can also be determined according to the distribution information, and the prescription map is used to display the application information of the target area. Specifically: determine the type of the target object; determine the application information of each sub-area in the target area according to the type and the distribution information, where the application information includes the drug type and the target spray amount for the target object in each sub-area of the target area; and add marking information used to identify the application information to the image information of the target area to obtain the prescription map of the target area.
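Deriving per-sub-area application information from type and density can be sketched as a simple lookup. The weed types, dose rates, and density values below are invented placeholders, not values from the patent.

```python
# Hypothetical dose table: millilitres of drug per unit of density.
DOSE_ML_PER_DENSITY = {"broadleaf_weed": 12.0, "grass_weed": 8.0}

def application_info(weed_type, densities):
    """Map each sub-area's density to (drug type, target spray amount in mL)."""
    dose = DOSE_ML_PER_DENSITY[weed_type]
    return {area: (weed_type, round(d * dose, 1)) for area, d in densities.items()}

plan = application_info("grass_weed", {"A": 0.8, "B": 0.2})
print(plan)  # {'A': ('grass_weed', 6.4), 'B': ('grass_weed', 1.6)}
```

Attaching these (drug type, amount) pairs to the corresponding sub-areas of the target-area image is what yields the prescription map described above.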
  • the type of the target object can be determined through machine learning, for example, the image of the target object is input into the prediction model that has been trained, and the type of the target object is identified using the prediction model.
  • a storage medium including a stored program, wherein, when the program is running, the device where the storage medium is located is controlled to execute the above-mentioned unmanned aerial vehicle control method.
  • a processor for running a program, wherein the above-mentioned unmanned aerial vehicle control method is executed when the program is running.
  • the technical content disclosed in this embodiment may be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • the division of units may be a logical function division; in actual implementation, there may be another division manner. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, units or modules, and may be in electrical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or software function unit.
  • if the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • the technical solution of the present application, in essence, or the part contributing to the existing technology, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application.
  • the aforementioned storage media include: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other media that can store program code.
  • the image information to be processed in the target area is obtained; the image information to be processed is input into a preset model for analysis to obtain the distribution information of the target object in the image information to be processed, wherein the preset model is obtained by training on multiple sets of data, and each set of data includes: sample image information of the target area and a label used to identify the distribution information of the target object in the sample image information; and according to the distribution information, the UAV is controlled to spray medicine on the target object.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Remote Sensing (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Pest Control & Pesticides (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Astronomy & Astrophysics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Image Analysis (AREA)
  • Catching Or Destruction (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention relates to a method for determining distribution information, and a control method and device for an unmanned aerial vehicle. The control method comprises: acquiring image information of a target area (S102); inputting the image information into a preset model for analysis so as to obtain distribution information of a target object in the target area (S104), the preset model being obtained by training with multiple sets of data, each set of data among the multiple sets comprising sample image information of the target area and a label for identifying distribution information of the target object in the sample image information; and controlling, according to the distribution information, the unmanned aerial vehicle to spray a pesticide on the target object (S106). The present invention solves problems such as pesticide waste and pesticide residues caused by the difficulty of distinguishing crops from weeds.
PCT/CN2019/111515 2018-10-18 2019-10-16 Procédé de détermination d'informations de distribution, et procédé de commande et dispositif destinés à un véhicule aérien sans pilote WO2020078396A1 (fr)

Priority Applications (6)

Application Number Priority Date Filing Date Title
US17/309,058 US20210357643A1 (en) 2018-10-18 2019-10-16 Method for determining distribution information, and control method and device for unmanned aerial vehicle
JP2021520573A JP2022502794A (ja) 2018-10-18 2019-10-16 分布情報の確定方法、無人飛行体の制御方法及び装置
KR1020217014072A KR20210071062A (ko) 2018-10-18 2019-10-16 분포 정보의 확정 방법, 무인 항공기의 제어 방법 및 장치
AU2019362430A AU2019362430B2 (en) 2018-10-18 2019-10-16 Method for determining distribution information, and control method and device for unmanned aerial vehicle
CA3115564A CA3115564A1 (fr) 2018-10-18 2019-10-16 Procede de determination d'informations de distribution, et procede de commande et dispositif destines a un vehicule aerien sans pilote
EP19873665.4A EP3859479A4 (fr) 2018-10-18 2019-10-16 Procédé de détermination d'informations de distribution, et procédé de commande et dispositif destinés à un véhicule aérien sans pilote

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811217967.XA CN109445457B (zh) 2018-10-18 2018-10-18 分布信息的确定方法、无人飞行器的控制方法及装置
CN201811217967.X 2018-10-18

Publications (1)

Publication Number Publication Date
WO2020078396A1 true WO2020078396A1 (fr) 2020-04-23

Family

ID=65546651

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/111515 WO2020078396A1 (fr) 2018-10-18 2019-10-16 Procédé de détermination d'informations de distribution, et procédé de commande et dispositif destinés à un véhicule aérien sans pilote

Country Status (8)

Country Link
US (1) US20210357643A1 (fr)
EP (1) EP3859479A4 (fr)
JP (1) JP2022502794A (fr)
KR (1) KR20210071062A (fr)
CN (1) CN109445457B (fr)
AU (1) AU2019362430B2 (fr)
CA (1) CA3115564A1 (fr)
WO (1) WO2020078396A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111783549A (zh) * 2020-06-04 2020-10-16 北京海益同展信息科技有限公司 一种分布图生成方法、系统、巡检机器人及控制终端

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109445457B (zh) * 2018-10-18 2021-05-14 广州极飞科技股份有限公司 分布信息的确定方法、无人飞行器的控制方法及装置
US10822085B2 (en) 2019-03-06 2020-11-03 Rantizo, Inc. Automated cartridge replacement system for unmanned aerial vehicle
CN112948371A (zh) * 2019-12-10 2021-06-11 广州极飞科技股份有限公司 数据处理方法、装置、存储介质、处理器
CN113011221A (zh) * 2019-12-19 2021-06-22 广州极飞科技股份有限公司 作物分布信息的获取方法、装置及测量系统
CN113011220A (zh) * 2019-12-19 2021-06-22 广州极飞科技股份有限公司 穗数识别方法、装置、存储介质及处理器
CN111459183B (zh) * 2020-04-10 2021-07-20 广州极飞科技股份有限公司 作业参数推荐方法、装置、无人设备及存储介质
CN112425328A (zh) * 2020-11-23 2021-03-02 广州极飞科技有限公司 多物料播撒控制方法、装置、终端设备、无人设备及介质
CN113973793B (zh) * 2021-09-09 2023-08-04 常州希米智能科技有限公司 一种病虫害区域无人机喷洒处理方法和系统
CN115337430A (zh) * 2022-08-11 2022-11-15 深圳市隆瑞科技有限公司 一种喷雾小车的控制方法和装置

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170192424A1 (en) * 2015-12-31 2017-07-06 Unmanned Innovation, Inc. Unmanned aerial vehicle rooftop inspection system
CN108154196A (zh) * 2018-01-19 2018-06-12 百度在线网络技术(北京)有限公司 用于输出图像的方法和装置
CN108541683A (zh) * 2018-04-18 2018-09-18 济南浪潮高新科技投资发展有限公司 一种基于卷积神经网络芯片的无人机农药喷洒系统
CN108629289A (zh) * 2018-04-11 2018-10-09 千寻位置网络有限公司 农田的识别方法及系统、应用于农业的无人机
CN109445457A (zh) * 2018-10-18 2019-03-08 广州极飞科技有限公司 分布信息的确定方法、无人飞行器的控制方法及装置

Family Cites Families (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015057633A1 (fr) * 2013-10-14 2015-04-23 Kinze Manufacturing, Inc. Systèmes, méthodes et appareil autonomes pour opérations agricoles
US10104836B2 (en) * 2014-06-11 2018-10-23 John Paul Jamison Systems and methods for forming graphical and/or textual elements on land for remote viewing
AU2015305406B2 (en) * 2014-08-22 2020-09-10 Climate Llc Methods for agronomic and agricultural monitoring using unmanned aerial systems
US10139279B2 (en) * 2015-05-12 2018-11-27 BioSensing Systems, LLC Apparatuses and methods for bio-sensing using unmanned aerial vehicles
CN105159319B (zh) * 2015-09-29 2017-10-31 广州极飞科技有限公司 一种无人机的喷药方法及无人机
US10638744B2 (en) * 2016-06-30 2020-05-05 Optim Corporation Application and method for controlling moving vehicle
US10520943B2 (en) * 2016-08-12 2019-12-31 Skydio, Inc. Unmanned aerial image capture platform
GB2568007A (en) * 2016-09-08 2019-05-01 Walmart Apollo Llc Systems and methods for dispensing an insecticide via unmanned vehicles to defend a crop-containing area against pests
JP6798854B2 (ja) * 2016-10-25 2020-12-09 株式会社パスコ 目的物個数推定装置、目的物個数推定方法及びプログラム
US10721859B2 (en) * 2017-01-08 2020-07-28 Dolly Y. Wu PLLC Monitoring and control implement for crop improvement
JP6906959B2 (ja) * 2017-01-12 2021-07-21 東光鉄工株式会社 ドローンを使用した肥料散布方法
CN108509961A (zh) * 2017-02-27 2018-09-07 北京旷视科技有限公司 图像处理方法和装置
CN106882380A (zh) * 2017-03-03 2017-06-23 杭州杉林科技有限公司 空地一体农林用植保系统装置及使用方法
CN106951836B (zh) * 2017-03-05 2019-12-13 北京工业大学 基于先验阈值优化卷积神经网络的作物覆盖度提取方法
CN106910247B (zh) * 2017-03-20 2020-10-02 厦门黑镜科技有限公司 用于生成三维头像模型的方法和装置
CN107274378B (zh) * 2017-07-25 2020-04-03 江西理工大学 一种融合记忆cnn的图像模糊类型识别及参数整定方法
US10740607B2 (en) * 2017-08-18 2020-08-11 Autel Robotics Co., Ltd. Method for determining target through intelligent following of unmanned aerial vehicle, unmanned aerial vehicle and remote control
CN107728642B (zh) * 2017-10-30 2021-03-09 北京博鹰通航科技有限公司 一种无人机飞行控制系统及其方法
CN107933921B (zh) * 2017-10-30 2020-11-17 广州极飞科技有限公司 飞行器及其喷洒路线生成和执行方法、装置、控制终端
CN107703960A (zh) * 2017-11-17 2018-02-16 江西天祥通用航空股份有限公司 农药喷洒直升机的地空跟踪监测装置
CN108596222B (zh) * 2018-04-11 2021-05-18 西安电子科技大学 基于反卷积神经网络的图像融合方法
CN108594850B (zh) * 2018-04-20 2021-06-11 广州极飞科技股份有限公司 基于无人机的航线规划及控制无人机作业的方法、装置
US10660277B2 (en) * 2018-09-11 2020-05-26 Pollen Systems Corporation Vine growing management method and apparatus with autonomous vehicles

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170192424A1 (en) * 2015-12-31 2017-07-06 Unmanned Innovation, Inc. Unmanned aerial vehicle rooftop inspection system
CN108154196A (zh) * 2018-01-19 2018-06-12 百度在线网络技术(北京)有限公司 用于输出图像的方法和装置
CN108629289A (zh) * 2018-04-11 2018-10-09 千寻位置网络有限公司 农田的识别方法及系统、应用于农业的无人机
CN108541683A (zh) * 2018-04-18 2018-09-18 济南浪潮高新科技投资发展有限公司 一种基于卷积神经网络芯片的无人机农药喷洒系统
CN109445457A (zh) * 2018-10-18 2019-03-08 广州极飞科技有限公司 分布信息的确定方法、无人飞行器的控制方法及装置

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3859479A4 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111783549A (zh) * 2020-06-04 2020-10-16 北京海益同展信息科技有限公司 一种分布图生成方法、系统、巡检机器人及控制终端

Also Published As

Publication number Publication date
US20210357643A1 (en) 2021-11-18
AU2019362430A1 (en) 2021-05-13
EP3859479A1 (fr) 2021-08-04
CN109445457A (zh) 2019-03-08
CA3115564A1 (fr) 2020-04-23
JP2022502794A (ja) 2022-01-11
CN109445457B (zh) 2021-05-14
EP3859479A4 (fr) 2021-11-24
AU2019362430B2 (en) 2022-09-08
KR20210071062A (ko) 2021-06-15

Similar Documents

Publication Publication Date Title
WO2020078396A1 (fr) Procédé de détermination d'informations de distribution, et procédé de commande et dispositif destinés à un véhicule aérien sans pilote
JP7307743B2 (ja) 作業対象領域境界の取得方法および装置、並びに作業経路の計画方法
Huang et al. A fully convolutional network for weed mapping of unmanned aerial vehicle (UAV) imagery
US20180260947A1 (en) Inventory, growth, and risk prediction using image processing
Blok et al. The effect of data augmentation and network simplification on the image‐based detection of broccoli heads with Mask R‐CNN
EP3815529A1 (fr) Système de détection et de contrôle de plantes agricoles
US20220192174A1 (en) Agricultural sprayer with real-time, on-machine target sensor
CN109197278A (zh) 作业策略的确定方法及装置、药物喷洒策略的确定方法
US20220256834A1 (en) Method for generating an application map for treating a field with an agricultural equipment
US11944087B2 (en) Agricultural sprayer with real-time, on-machine target sensor
Passos et al. Automatic detection of Aedes aegypti breeding grounds based on deep networks with spatio-temporal consistency
Buddha et al. Weed detection and classification in high altitude aerial images for robot-based precision agriculture
CN110188661B (zh) 边界识别方法及装置
Quiroz et al. A method for automatic identification of crop lines in drone images from a mango tree plantation using segmentation over YCrCb color space and Hough transform
US11832609B2 (en) Agricultural sprayer with real-time, on-machine target sensor
CN109492541B (zh) 目标对象类型的确定方法及装置、植保方法、植保系统
US20220392214A1 (en) Scouting functionality emergence
Krestenitis et al. Overcome the Fear Of Missing Out: Active sensing UAV scanning for precision agriculture
Shahid et al. Aerial imagery-based tobacco plant counting framework for efficient crop emergence estimation
CN117274674A (zh) 对靶施药方法、电子设备、存储介质及系统
US20240095911A1 (en) Estimating properties of physical objects, by processing image data with neural networks
Charitha et al. Detection of Weed Plants Using Image Processing and Deep Learning Techniques
CN117036886A (zh) 一种伪装目标检测方法、装置、设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19873665

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 3115564

Country of ref document: CA

ENP Entry into the national phase

Ref document number: 2021520573

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2019873665

Country of ref document: EP

Effective date: 20210428

ENP Entry into the national phase

Ref document number: 20217014072

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2019362430

Country of ref document: AU

Date of ref document: 20191016

Kind code of ref document: A