WO2020078396A1 - Method for determining distribution information, and method and device for controlling unmanned aerial vehicle - Google Patents

Method for determining distribution information, and method and device for controlling unmanned aerial vehicle

Info

Publication number
WO2020078396A1
Authority
WO
WIPO (PCT)
Prior art keywords
image information
target object
distribution
information
area
Prior art date
Application number
PCT/CN2019/111515
Other languages
English (en)
French (fr)
Inventor
代双亮
Original Assignee
广州极飞科技有限公司
Priority date
Filing date
Publication date
Application filed by 广州极飞科技有限公司
Priority to KR1020217014072A (published as KR20210071062A)
Priority to JP2021520573A (published as JP2022502794A)
Priority to CA3115564A (published as CA3115564A1)
Priority to US17/309,058 (published as US20210357643A1)
Priority to AU2019362430A (published as AU2019362430B2)
Priority to EP19873665.4A (published as EP3859479A4)
Publication of WO2020078396A1

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/0094Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot involving pointing a payload, e.g. camera, weapon, sensor, towards a fixed or moving target
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/10Simultaneous control of position or course in three dimensions
    • G05D1/101Simultaneous control of position or course in three dimensions specially adapted for aircraft
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B64AIRCRAFT; AVIATION; COSMONAUTICS
    • B64CAEROPLANES; HELICOPTERS
    • B64C39/00Aircraft not otherwise provided for
    • B64C39/02Aircraft not otherwise provided for characterised by special use
    • B64C39/024Aircraft not otherwise provided for characterised by special use of the remote controlled vehicle type, i.e. RPV
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B64AIRCRAFT; AVIATION; COSMONAUTICS
    • B64DEQUIPMENT FOR FITTING IN OR TO AIRCRAFT; FLIGHT SUITS; PARACHUTES; ARRANGEMENTS OR MOUNTING OF POWER PLANTS OR PROPULSION TRANSMISSIONS IN AIRCRAFT
    • B64D1/00Dropping, ejecting, releasing, or receiving articles, liquids, or the like, in flight
    • B64D1/16Dropping or releasing powdered, liquid, or gaseous matter, e.g. for fire-fighting
    • B64D1/18Dropping or releasing powdered, liquid, or gaseous matter, e.g. for fire-fighting by spraying, e.g. insecticides
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/188Vegetation
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G5/00Traffic control systems for aircraft, e.g. air-traffic control [ATC]
    • G08G5/003Flight plan management
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B64AIRCRAFT; AVIATION; COSMONAUTICS
    • B64UUNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
    • B64U2101/00UAVs specially adapted for particular uses or applications
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B64AIRCRAFT; AVIATION; COSMONAUTICS
    • B64UUNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
    • B64U2101/00UAVs specially adapted for particular uses or applications
    • B64U2101/30UAVs specially adapted for particular uses or applications for imaging, photography or videography
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B64AIRCRAFT; AVIATION; COSMONAUTICS
    • B64UUNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
    • B64U2101/00UAVs specially adapted for particular uses or applications
    • B64U2101/40UAVs specially adapted for particular uses or applications for agriculture or forestry operations

Definitions

  • The present application relates to the field of plant protection, and in particular to a method for determining distribution information and a method and device for controlling an unmanned aerial vehicle.
  • At present, drones generally apply herbicides or defoliants by blanket spraying. Blanket spraying wastes a large amount of pesticide and leaves pesticide residues, while areas with severe weed damage may receive an insufficient dose, causing great economic loss.
  • The embodiments provide a method for determining distribution information and a method and device for controlling an unmanned aerial vehicle, so as to at least solve the technical problem in the related art that crops and weeds are difficult to distinguish, which causes pesticide waste and pesticide residue.
  • According to one aspect, a method for controlling an unmanned aerial vehicle includes: acquiring image information to be processed of a target area; inputting the image information to be processed into a preset model for analysis to obtain distribution information of a target object in the image to be processed, where the preset model is obtained by training with multiple sets of data, each set including sample image information of the target area and a label identifying the distribution information of the target object in the sample image information; and controlling the unmanned aerial vehicle to spray a drug onto the target object according to the distribution information corresponding to the image to be processed.
  • Optionally, the step of training the preset model includes:
  • acquiring sample image information, labeling the position of the target object in the sample image information to obtain a label of the distribution information of the target object corresponding to the sample image information, and inputting the sample image and the corresponding label into the preset model;
  • processing the sample image information with a first convolutional network model in the preset model to obtain a first convolution image of the sample image information;
  • processing the sample image information with a second convolutional network model in the preset model to obtain a second convolution image of the sample image information, where the convolution kernels used by the first and second convolutional network models are different;
  • merging the first convolution image and the second convolution image of the sample image information to obtain a merged image; and
  • performing deconvolution processing on the merged image, and performing back propagation according to the deconvolution result and the label of the sample image to adjust the parameters of each part of the preset model.
  • Optionally, inputting the image information to be processed into the preset model for analysis to obtain the distribution information of the target object in the image information to be processed includes: inputting the image information to be processed into the trained preset model; processing it with the first convolutional network model to obtain a first convolution image; processing it with the second convolutional network model to obtain a second convolution image; and merging the two convolution images and performing deconvolution processing on the merged image to obtain a density map corresponding to the image to be processed as the distribution information of the target object in the image to be processed.
  • Optionally, the value of a pixel in the density map is the distribution density value of the target object at the position corresponding to that pixel.
  • Optionally, the sample image information includes a density map of the target object, which reflects the density of the target object in each distribution area of the target area.
  • Optionally, the density map carries an identifier indicating the density of the target object.
  • Optionally, the distribution information includes at least one of the following: the density of the target object in each distribution area of the target area, and the area of the distribution area where the target object is located. Controlling the unmanned aerial vehicle to spray the drug onto the target object according to the distribution information includes: determining the spray amount or spray duration of the UAV in a distribution area according to the density of the target object in that area; and/or determining the spray range according to the area of the distribution area where the target object is located.
  • Optionally, the distribution information further includes the distribution area of the target object in the target area, and the method further includes: determining a flight route of the unmanned aerial vehicle according to the location of the distribution area of the target object, and controlling the unmanned aerial vehicle to move along the flight route.
  • Optionally, after the UAV is controlled to spray the drug according to the distribution information, the method further includes: detecting the remaining distribution area of the UAV in the target area, where the remaining distribution area is the distribution area in the target area that has not been sprayed; determining the density of the target object in the remaining distribution area and the total area of the remaining distribution area; determining the total amount of drug required for the remaining distribution area according to that density and total area; determining the difference between the remaining drug amount of the UAV and the total amount; and comparing the difference with a preset threshold and adjusting the flight route of the UAV according to the comparison result.
  • Optionally, before the drone is controlled to spray the target object according to the distribution information, the method further includes: determining the target drug amount of the UAV according to the size of the distribution area of the target object and the density of the target object in the distribution area.
  • According to another aspect, a control device for an unmanned aerial vehicle includes: an acquisition module, for acquiring image information of a target area; an analysis module, for inputting the image information into a preset model for analysis to obtain distribution information of a target object in the target area, where the preset model is obtained by training with multiple sets of data, each set including sample image information of the target area and a label identifying the distribution information of the target object in the sample image information; and a control module, for controlling the unmanned aerial vehicle to spray a drug onto the target object according to the distribution information.
  • According to another aspect, an unmanned aerial vehicle includes: an image acquisition device, for acquiring image information of a target area; and a processor, for inputting the image information into a preset model for analysis to obtain distribution information of a target object in the target area, where the preset model is obtained by training with multiple sets of data, each set including sample image information of the target area and a label identifying the distribution information of the target object in the sample image information, and for controlling the unmanned aerial vehicle to spray a drug onto the target object according to the distribution information.
  • According to another aspect, an unmanned aerial vehicle includes: a communication module, for receiving image information of a target area from a designated device, where the designated device includes a network-side server or a surveying and mapping drone;
  • and a processor, for inputting the image information into a preset model for analysis to obtain distribution information of the target object in the target area, where the preset model is obtained by training with multiple sets of data, each set including sample image information of the target area and a label identifying the distribution information of the target object in the sample image information, and for controlling the unmanned aerial vehicle to spray a drug onto the target object according to the distribution information.
  • According to another aspect, a storage medium includes a stored program, where when the program runs, the device where the storage medium is located is controlled to perform the above method for determining distribution information.
  • According to another aspect, a processor is provided for running a program, where the above method for determining distribution information is performed when the program runs.
  • According to another aspect, a method for determining distribution information of a target object includes: acquiring image information of a target area; and inputting the image information into a preset model for analysis to obtain distribution information of the target object in the target area, where the preset model is obtained by training with multiple sets of data, each set including sample image information and a label identifying the distribution information of the target object in the sample image information.
  • Optionally, the step of training the preset model includes:
  • acquiring sample image information, labeling the position of the target object in the sample image information to obtain a label of the distribution information of the target object corresponding to the sample image information, and inputting the sample image and the corresponding label into the preset model;
  • processing the sample image information with a first convolutional network model in the preset model to obtain a first convolution image of the sample image information;
  • processing the sample image information with a second convolutional network model in the preset model to obtain a second convolution image of the sample image information, where the convolution kernels used by the first and second convolutional network models are different;
  • merging the first convolution image and the second convolution image of the sample image information to obtain a merged image; and
  • performing deconvolution processing on the merged image, and performing back propagation according to the deconvolution result and the label of the sample image to adjust the parameters of each part of the preset model.
  • Optionally, inputting the image information to be processed into the preset model for analysis to obtain the distribution information of the target object in the image information to be processed includes: inputting the image information to be processed into the trained preset model; processing it with the first convolutional network model to obtain a first convolution image; processing it with the second convolutional network model to obtain a second convolution image; and merging the two convolution images and performing deconvolution processing on the merged image to obtain a density map corresponding to the image to be processed as the distribution information of the target object in the image to be processed.
  • Optionally, the value of a pixel in the density map is the distribution density value of the target object at the position corresponding to that pixel.
  • Optionally, the sample image information includes a density map of the target object, which reflects the density of the target object in each distribution area of the target area.
  • Optionally, the density map carries an identifier indicating the density of the target object.
  • Optionally, when there are multiple target areas located in different sales regions, the target sales region of the drug is determined according to the density maps of the target objects in the multiple target areas.
  • Optionally, the distribution information further includes the distribution area of the target object in the target area, and the method further includes: determining the flight route of the unmanned aerial vehicle according to the location of the distribution area of the target object.
  • Optionally, after the image information is input into the preset model for analysis and the distribution information is obtained, the method further includes: determining the type of the target object; determining application information for each sub-area of the target area according to the type and the distribution information, where the application information includes the drug type and the target spray amount for the target object in the sub-area of the target area; and adding marking information identifying the application information to the image information of the target area to obtain a prescription map of the target area.
  • Optionally, the target area is farmland to which a drug is to be applied, and the target object is weeds.
  • In the embodiments of the present application, the image information of the target area is acquired; the image information is input into a preset model for analysis to obtain the distribution information of the target object in the target area, where the preset model is obtained by training with multiple sets of data, each set including image information of the target area and a label identifying the distribution information of the target object in the image information; and the unmanned aerial vehicle is controlled to spray the drug onto the target object according to the distribution information.
  • FIG. 1 is a schematic flowchart of a method for controlling an unmanned aerial vehicle according to an embodiment of the present application;
  • FIG. 2 is a schematic flowchart of training a preset model according to an embodiment of the present application;
  • FIG. 3a and FIG. 3b are schematic diagrams of a sample image and its annotations according to an embodiment of the present application;
  • FIG. 4 is a schematic flowchart of another method of training a preset model according to an embodiment of the present application;
  • FIG. 5 is a schematic diagram of a density map according to an embodiment of the present application;
  • FIG. 6 is a schematic structural diagram of an optional control device for an unmanned aerial vehicle according to an embodiment of the present application;
  • FIG. 7 is a schematic structural diagram of an optional unmanned aerial vehicle according to an embodiment of the present application;
  • FIG. 8 is a schematic structural diagram of another optional unmanned aerial vehicle according to an embodiment of the present application;
  • FIG. 9 is a schematic flowchart of a method for determining distribution information according to an embodiment of the present application.
  • FIG. 1 is a schematic flowchart of a method for controlling an unmanned aerial vehicle according to this embodiment. As shown in FIG. 1, the method includes the following steps:
  • Step S102: Acquire image information to be processed of the target area.
  • Optionally, the image information to be processed may be obtained by capturing images of the target area with an image acquisition device mounted on the UAV.
  • The target area may be one or more fields of farmland to which a drug is to be applied.
  • The UAV may be provided with a positioning system, so that the area and the latitude-longitude information of the current target area can be determined from the positioning system.
  • Step S104: Input the image information to be processed into a preset model for analysis to obtain the distribution information of the target object in the target area.
  • Optionally, the target object may be weeds in the farmland.
  • The preset model is obtained by training with multiple sets of data, each set including sample image information of the target area and a label identifying the distribution information of the target object in the sample image information.
  • For example, a weed recognition model for recognizing weed types can be trained.
  • The weed recognition model is obtained by training with multiple sets of data.
  • Each set of data includes sample image information of the target area and a label identifying the type of the target object in the sample image information.
  • After the image information of the target area is acquired, it is input into the preset weed recognition model for analysis to obtain the type of the target object in the target area, where the target object is a weed.
  • Step S106: Control the unmanned aerial vehicle to spray the drug onto the target object according to the distribution information corresponding to the image to be processed.
  • The distribution information may be the density of the target object in each distribution area of the target area and the area of the distribution area where the target object is located.
  • Controlling the unmanned aerial vehicle to spray the drug onto the target object according to the distribution information may be implemented in the following ways:
  • determining the spray amount or spray duration of the UAV in a distribution area according to the density of the target object in that area; and/or determining the spray range according to the area of the distribution area where the target object is located.
  • Optionally, the greater the density of the target object in a distribution area, the greater the spray amount of the UAV in the corresponding area and the longer the spray duration; the larger the area of the distribution area where the target object is located, the wider the spray range.
  • The density of the target object in the distribution area and the area of the distribution area can also be considered together to determine the spray amount of the UAV in the corresponding area.
  • For example, the spray amount is determined according to the density of the target object in the distribution area.
  • The spray range may be a vertical range or a horizontal range.
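To make the mapping from distribution information to spray parameters concrete, the following is a minimal Python sketch of the rule described above: spray amount and duration grow with density, and spray range grows with the area of the distribution area. The function name, gain constants, and base values are illustrative assumptions, not values taken from this application.

```python
def spray_params(density, area_m2,
                 base_rate_l_per_min=1.0,  # assumed base pump rate (L/min)
                 rate_gain=0.5,            # assumed extra L/min per unit of density
                 swath_gain=0.02):         # assumed extra metres of swath per m^2
    """Derive spray amount/duration and spray range for one distribution area:
    higher weed density -> larger spray amount and longer duration;
    larger distribution area -> wider spray range."""
    spray_rate_l_per_min = base_rate_l_per_min + rate_gain * density
    duration_s = 60.0 * density                 # denser patches are sprayed longer
    spray_width_m = 2.0 + swath_gain * area_m2  # larger areas get a wider swath
    return spray_rate_l_per_min, duration_s, spray_width_m

# Example: a 150 m^2 patch whose mean density value is 0.8
print(spray_params(density=0.8, area_m2=150.0))
```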
  • The distribution information of the target object further includes the distribution area of the target object in the target area. Specifically, the pixel region occupied by the distribution area in the image can be determined from the acquired image information of the target area, and/or the latitude-longitude range occupied by the target object in the target area can be obtained through a positioning device.
  • Optionally, the flight route of the unmanned aerial vehicle may be determined according to the location of the distribution area of the target object, and the unmanned aerial vehicle is controlled to move along the flight route.
  • Specifically, the flight route can be determined so as to avoid areas free of weeds, and the unmanned aerial vehicle is controlled to move along the flight route.
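As an illustration of planning a route from the distribution areas, the sketch below rasterizes the field into grid cells marked weed/no-weed from the density map and visits only the strips that contain weeds, in serpentine order. The grid representation and the ordering are assumptions made for this example.

```python
def plan_route(weed_grid):
    """weed_grid[r][c] is True where the distribution information marks weeds.
    Returns a serpentine list of (row, col) waypoints that covers every weed
    cell while skipping weed-free strips entirely."""
    route = []
    for r, row in enumerate(weed_grid):
        cols = [c for c, has_weeds in enumerate(row) if has_weeds]
        if not cols:
            continue            # no weeds in this strip: do not fly it
        if r % 2:
            cols.reverse()      # alternate direction row by row
        route.extend((r, c) for c in cols)
    return route

grid = [[False, True,  True],
        [False, False, False],
        [True,  True,  False]]
print(plan_route(grid))  # [(0, 1), (0, 2), (2, 0), (2, 1)]
```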
  • After the drone is controlled to spray the drug onto the target object according to the distribution information, the method may further:
  • detect the remaining distribution area of the UAV in the target area, where the remaining distribution area is the distribution area in the target area that has not been sprayed; determine the density of the target object in the remaining distribution area and the total area of the remaining distribution area; determine the total amount of drug required for the remaining distribution area according to that density and total area; determine the difference between the remaining drug amount of the UAV and the total amount; and compare the difference with a preset threshold, adjusting the flight route of the UAV according to the comparison result.
  • Optionally, when the difference between the remaining drug amount and the total required amount is negative, the flight route of the unmanned aerial vehicle can be adjusted to a return route so that pesticide can be reloaded.
  • On the way back, the farmland along the return route can be sprayed.
  • Before the route is changed, the return route can be planned according to the regions of the target object that have not yet been sprayed and the remaining drug amount, so that a whole region is sprayed on the way back.
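A minimal sketch of the remaining-dose check described above. The per-unit dose, the threshold, and the function name are assumptions for illustration.

```python
def check_remaining_dose(remaining_density, remaining_area_m2,
                         tank_remaining_l,
                         dose_l_per_unit=0.01,  # assumed dose per density unit per m^2
                         threshold_l=0.5):      # assumed preset threshold
    """Estimate the total dose still needed by the unsprayed distribution
    area, compare it with the drug left in the tank, and decide whether to
    continue or to adjust the route to a return route for reloading."""
    required_l = dose_l_per_unit * remaining_density * remaining_area_m2
    diff = tank_remaining_l - required_l
    return "continue" if diff >= -threshold_l else "return_to_reload"

print(check_remaining_dose(remaining_density=0.6,
                           remaining_area_m2=2000.0,
                           tank_remaining_l=8.0))  # return_to_reload
```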
  • Before the UAV is controlled to spray the drug according to the distribution information, the image information of the target area may be acquired by an image acquisition device and input into the preset model to determine the distribution information of the target object in the target area; the target drug amount of the UAV is then determined according to the size of the distribution area of the target object in the distribution information and the density of the target object in the distribution area.
  • One target drug amount is determined when the distribution area of the target object is small and the density of the target object in the distribution area is high; another when the distribution area is large and the density is low; another when both the distribution area and the density are small; and another when both are large. After the target drug amount is determined, the pesticide is loaded.
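The four combinations above can be read as a two-key lookup from (area size, density) to a target drug amount. The dose values and cut-off points below are placeholders, not figures from this application.

```python
# Hypothetical dose table (litres), keyed by (area is large?, density is high?).
DOSE_TABLE = {
    (False, True):  30.0,  # small distribution area, high density
    (True,  False): 25.0,  # large distribution area, low density
    (False, False): 10.0,  # small distribution area, low density
    (True,  True):  60.0,  # large distribution area, high density
}

def target_dose(area_m2, density, area_cut=1000.0, density_cut=0.5):
    """Pick the target drug amount from the distribution-area size and the
    weed density; the pesticide is then loaded accordingly."""
    return DOSE_TABLE[(area_m2 > area_cut, density > density_cut)]

print(target_dose(area_m2=1500.0, density=0.7))  # 60.0
```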
  • Referring to FIG. 2, the training method of the preset model may include the following steps.
  • Step S302: Acquire sample image information and label the position of the target object in the sample image information to obtain a label of the distribution information of the target object corresponding to the sample image information.
  • Optionally, the image corresponding to the sample image information is an RGB image.
  • Optionally, the distribution information of the target object in the sample image information can be identified by a label.
  • The label includes the latitude-longitude distribution range of the target object in the target area and/or the pixel distribution range in the picture.
  • For example, referring to FIG. 3a, a cross "x" can be used to indicate a crop region and a circle "Ο" a weed region.
  • FIG. 3b shows the identification of the target object on a real electronic map, where the dark areas are weeds and the light areas are crops.
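The application labels positions with latitude-longitude and/or pixel ranges; it does not say how a density-map training target is built from them. One common way to do this in density-estimation work, shown here purely as an assumed illustration, is to place one unit of mass at each annotated weed point and smooth it with a Gaussian:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def density_label(shape, weed_points, sigma=8.0):
    """Turn point annotations (pixel coordinates of marked weeds) into a
    density-map label: one unit of mass per weed, spread by a Gaussian, so
    the map roughly sums to the number of annotated weeds."""
    label = np.zeros(shape, dtype=np.float32)
    for y, x in weed_points:
        label[y, x] += 1.0
    return gaussian_filter(label, sigma=sigma)

label = density_label((256, 256), [(40, 50), (41, 52), (200, 100)])
print(round(float(label.sum()), 2))  # ~3.0: mass is preserved by smoothing
```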
  • Step S304: Process the sample image information with the first convolutional network model in the preset model to obtain a first convolution image of the sample image information.
  • Step S306: Process the sample image information with the second convolutional network model in the preset model to obtain a second convolution image of the sample image information, where the convolution kernels used by the first and second convolutional network models are different.
  • Optionally, the convolution kernel of the first convolutional network model may be 3×3 in size, and the convolution stride may be set to 2.
  • Optionally, the sample image information is an RGB image and has the three dimensions R, G, and B.
  • Downsampling may be performed while the first convolutional network model convolves the labeled image into the first convolution image; in addition, the dimension of the first convolution image can also be set.
  • When the first convolutional network model is used to convolve the labeled image into the first convolution image, multiple convolutions can be performed; each convolution uses a 3×3 kernel with a stride of 2 and downsamples the image. After each downsampling, the image is half the size of the image before it, which greatly reduces the amount of data to process and speeds up computation.
  • Optionally, the convolution kernel of the second convolutional network model may be set to 5×5, and the convolution stride may be set to 2.
  • The sample image information is an RGB image and has the three dimensions R, G, and B.
  • Downsampling may be performed when the second convolutional network model is used to convolve the labeled image into the second convolution image.
  • The dimension of the second convolution image can also be set.
  • When the second convolutional network model is used to convolve the labeled image into the second convolution image, multiple convolutions may be performed; each convolution uses a 5×5 kernel with a stride of 2 and downsamples the image. After each downsampling, the image is half the size of the image before it, which greatly reduces the amount of data to process and speeds up computation.
  • The first convolution image and the second convolution image have the same image size.
  • Step S308: Merge the first convolution image and the second convolution image of the sample image information to obtain a merged image.
  • Step S310: Perform deconvolution processing on the merged image, and perform back propagation according to the deconvolution result and the label of the sample image to adjust the parameters of each part of the preset model.
  • Optionally, after the first and second convolution images are merged, the merged image needs to be deconvolved the same number of times as the sample image information was convolved to produce the first convolution image, and the dimension of the deconvolved image can be set.
  • When the merged image is deconvolved, the deconvolution kernel size can be set to 3×3.
  • After deconvolution, the image has the same size as the sample image information.
  • Finally, back propagation is performed according to the deconvolution result and the label of the sample image to adjust the parameters of each layer of the preset model.
  • By training the preset model with many sample images, the preset model acquires the ability to identify the distribution positions of the target object in an image to be processed.
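Read literally, the description amounts to a two-branch convolutional network (3×3 and 5×5 kernels, stride 2, three downsampling convolutions per branch) whose outputs are merged and deconvolved three times back to the input size, trained by back propagation against the density-map label. Below is a minimal PyTorch sketch under that reading; the channel widths (standing in for n1-n3 and m1-m3), the padding, and the MSE loss are assumptions, since the application does not fix them.

```python
import torch
import torch.nn as nn

class DensityNet(nn.Module):
    """Two convolution branches (3x3 and 5x5 kernels, stride 2, three
    convolutions each, halving the image each time), merged and then
    deconvolved three times back to input resolution as a 1-channel
    density map."""
    def __init__(self):
        super().__init__()
        def branch(k, pad, dims):
            layers, c_in = [], 3          # RGB input: R, G, B dimensions
            for c_out in dims:            # dims play the role of n1..n3 / m1..m3
                layers += [nn.Conv2d(c_in, c_out, k, stride=2, padding=pad),
                           nn.ReLU(inplace=True)]
                c_in = c_out
            return nn.Sequential(*layers)
        self.b1 = branch(3, 1, (16, 32, 64))  # first convolutional network, 3x3
        self.b2 = branch(5, 2, (16, 32, 64))  # second convolutional network, 5x5
        self.deconv = nn.Sequential(          # three 3x3 deconvolutions
            nn.ConvTranspose2d(128, 64, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, 3, stride=2, padding=1, output_padding=1))

    def forward(self, x):
        merged = torch.cat([self.b1(x), self.b2(x)], dim=1)  # merge the branches
        return self.deconv(merged)

# One training step: back-propagate the error between the predicted density
# map and the density-map label to adjust the parameters of every part.
model = DensityNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
img = torch.randn(2, 3, 256, 256)    # stand-in for sample image information
label = torch.rand(2, 1, 256, 256)   # stand-in for the density-map label
loss = nn.functional.mse_loss(model(img), label)
opt.zero_grad()
loss.backward()
opt.step()
```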
  • FIG. 4 is a schematic flowchart of another method, provided by this embodiment, for obtaining the sample image information of the target area in each set of data. The method includes the following steps:
  • Step S402: Acquire sample image information, label the position of the target object in the sample image information to obtain a label of the distribution information of the target object corresponding to the sample image information, and input the sample image and the corresponding label into the preset model.
  • Optionally, the image corresponding to the sample image information is an RGB image.
  • Optionally, the distribution information of the target object in the sample image information can be identified by a label.
  • The label includes the latitude-longitude distribution range of the target object in the target area and/or the pixel distribution range in the picture.
  • The sample image information is processed with the first convolutional network model in the preset model to obtain the first convolution image of the sample image information.
  • Optionally, the convolution kernel of the first convolutional network model is 3×3 in size, and the convolution stride may be set to 2.
  • The image corresponding to the sample image information is an RGB image and has the three dimensions R, G, and B.
  • Downsampling can be performed in the process of convolving the labeled image with the first convolutional network model to obtain the first convolution image.
  • The dimension of the first convolution image can also be set.
  • As shown in FIG. 4, three convolutions are performed in total to obtain the first convolution image, in sequence step S4042, step S4044, and step S4046.
  • Each convolution downsamples, with the convolution stride set to 2.
  • After each downsampling, the image is half the size of the image before it, which greatly reduces the amount of data to process and speeds up computation.
  • In FIG. 4, n1, n2, and n3 are the dimensions set for each convolution, where the dimension represents the length of the data vector associated with each pixel of the first convolution image.
  • In one example, when n1 is 1, each pixel of the image after the first convolution has dimension 1, and the pixel's data can be a gray value; when n1 is 3, each pixel has dimension 3, and the pixel's data can be an RGB value.
  • When the first convolutional network model is used to convolve the labeled image into the first convolution image, multiple convolutions can be performed, each with a 3×3 kernel.
  • The sample image information is also processed with the second convolutional network model in the preset model to obtain the second convolution image of the sample image information, where the convolution kernels used by the first and second convolutional network models are different.
  • Optionally, the convolution kernel of the second convolutional network model may be set to 5×5, and the convolution stride may be set to 2.
  • The image corresponding to the sample image information is an RGB image and has the three dimensions R, G, and B.
  • Downsampling can be performed during this convolution.
  • The dimension of the second convolution image can also be set.
  • When the second convolutional network model is used to convolve the labeled image into the second convolution image, multiple convolutions may be performed, each with a 5×5 kernel.
  • The convolution stride can be set to 2, and downsampling is performed each time; after each downsampling, the image is half the size of the image before it, which greatly reduces the amount of data to process and speeds up computation.
  • As shown in FIG. 4, three convolutions are performed in total to obtain the second convolution image, in sequence step S4062, step S4064, and step S4066.
  • Each convolution downsamples, with the convolution stride set to 2.
  • After each downsampling, the image is half the size of the image before it, which greatly reduces the amount of data to process and speeds up computation.
  • In FIG. 4, m1, m2, and m3 are the dimensions set for each convolution, where the dimension represents the length of the data vector associated with each pixel of the second convolution image.
  • In one example, when m1 is 1, each pixel of the image after the first convolution has dimension 1, and the pixel's data can be a gray value; when m1 is 3, each pixel has dimension 3, and the pixel's data can be an RGB value.
  • When the second convolutional network model is used to convolve the labeled image into the second convolution image, multiple convolutions can be performed, each with a 5×5 kernel.
  • The first convolution image and the second convolution image have the same image size.
  • Step S408: Merge the first convolution image and the second convolution image of the sample image information to obtain a merged image.
  • Deconvolution processing is then performed on the merged image.
  • Optionally, the merged image is deconvolved the same number of times as the sample image information was convolved to produce the first convolution image, and the dimension of the deconvolved image can be set.
  • Three deconvolutions are performed on the merged image, in steps S4102, S4104, and S4106, to obtain the density map, i.e., the sample image information of the target area, which is step S412.
  • When the merged image is deconvolved, the deconvolution kernel size can be set to 3×3.
  • After deconvolution, the image has the same size as the sample image information.
  • Deconvolution processing is performed on the merged image, and back propagation is performed according to the deconvolution result and the label of the sample image to adjust the parameters of each part of the preset model.
  • By training the preset model with many sample images, the preset model acquires the ability to identify the distribution positions of the target object in an image to be processed.
  • Correspondingly, when the trained preset model is used for image recognition, the image information to be processed is input into the trained preset model.
  • The first convolutional network model in the preset model processes the image information to be processed to obtain a first convolution image of the image information to be processed.
  • The second convolutional network model in the preset model processes the image information to be processed to obtain a second convolution image of the image information to be processed.
  • The first and second convolution images are merged, and the merged image is deconvolved to obtain the corresponding density map of the image to be processed as the distribution information of the target object, where the value of a pixel in the density map is the distribution density value of the target object at the position corresponding to that pixel.
  • Optionally, the density map carries an identifier indicating the density of the target object. For example, the lighter the color of a distribution area in the density map, the greater the density of the target object in that area.
  • FIG. 5 is a density map obtained after this processing. In FIG. 5, if area A is lighter in color than area B, the density of the target objects gathered in area A is greater.
  • In another example, the value of a pixel in the density map is the distribution density value of the target object at the position corresponding to that pixel.
  • Optionally, the density map obtained by deconvolution may be a grayscale map.
  • When the deconvolution result is a grayscale image, white is 255 and black is 0 in the image.
  • The larger the gray value, the denser the distribution of the target object in the target area; that is, lighter areas indicate more densely distributed weeds, and darker areas indicate more sparsely distributed weeds.
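To show how such a grayscale density map might be reduced to per-region distribution information, here is a small sketch that averages the map over fixed grid cells and thresholds the result into weed/no-weed cells; the cell size and threshold are assumptions.

```python
import numpy as np

def summarize_density(density_map, cell=64, threshold=0.2):
    """Average an HxW density map over cell x cell blocks and flag blocks
    whose mean density exceeds the threshold as weed distribution areas."""
    h, w = density_map.shape
    rows, cols = h // cell, w // cell
    blocks = (density_map[:rows * cell, :cols * cell]
              .reshape(rows, cell, cols, cell).mean(axis=(1, 3)))
    return blocks, blocks > threshold  # per-cell density, weed mask

dm = np.random.rand(256, 256).astype(np.float32)
cell_density, weed_mask = summarize_density(dm)
print(cell_density.shape, int(weed_mask.sum()))
```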
  • In this embodiment, a preset model is used to analyze the image information of the target area to obtain the distribution information of the target object in the target area, and the unmanned aerial vehicle is controlled on that basis to spray the drug onto the target object: the image information of the target area is acquired; the image information is input into the preset model for analysis to obtain the distribution information of the target object in the target area, where the preset model is obtained by training with multiple sets of data, each set including image information of the target area and a label identifying the distribution information of the target object in the image information; and the unmanned aerial vehicle is controlled to spray the drug onto the target object according to the distribution information.
  • FIG. 6 is a schematic structural diagram of an optional control device for an unmanned aerial vehicle according to this embodiment. As shown in FIG. 6, the device includes an acquisition module 62, an analysis module 64, and a control module 66, wherein:
  • the acquisition module 62 is used to acquire the image information of the target area;
  • the analysis module 64 is configured to input the image information to be processed into the preset model for analysis to obtain the distribution information of the target object in the image information to be processed, where the preset model is obtained by training with multiple sets of data, each set including sample image information of the target area and a label identifying the distribution information of the target object in the sample image information; and
  • the control module 66 is configured to control the unmanned aerial vehicle to spray the drug onto the target object according to the distribution information corresponding to the image to be processed.
  • FIG. 7 is a schematic structural diagram of an optional unmanned aerial vehicle according to this embodiment. As shown in FIG. 7, the device includes an image acquisition device 72 and a processor 74, wherein:
  • the image acquisition device 72 is used to acquire the image information to be processed of the target area; and
  • the processor 74 is configured to input the image information to be processed into the preset model for analysis to obtain the distribution information of the target object in the image information to be processed, where the preset model is obtained by training with multiple sets of data, each set including sample image information of the target area and a label identifying the distribution information of the target object in the sample image information, and to control the unmanned aerial vehicle to spray the drug onto the target object according to the distribution information corresponding to the image information to be processed.
  • FIG. 8 is a schematic structural diagram of an optional unmanned aerial vehicle control device according to this embodiment. As shown in FIG. 8, the device includes a communication module 82 and a processor 84, wherein:
  • the communication module 82 is configured to receive the image information to be processed of the target area from a designated device; and
  • the processor 84 is configured to input the image information to be processed into the preset model for analysis to obtain the distribution information of the target object in the image to be processed, where the preset model is obtained by training with multiple sets of data, each set including sample image information of the target area and a label identifying the distribution information of the target object in the sample image information, and to control the unmanned aerial vehicle to spray the drug onto the target object according to the distribution information corresponding to the image to be processed.
  • The functions of this unmanned aerial vehicle control device can be found in the description of the steps shown in FIG. 1 and are not repeated here.
  • FIG. 9 is a schematic flowchart of a method for determining distribution information according to this embodiment. As shown in FIG. 9, the method includes:
  • Step S902: Acquire the image information to be processed of the target area.
  • Step S904: Input the image information to be processed into a preset model for analysis to obtain the distribution information of the target object in the image information to be processed, where the preset model is obtained by training with multiple sets of data, each set including sample image information and a label identifying the distribution information of the target object in the sample image information.
  • Optionally, the step of training the preset model includes: acquiring sample image information and labeling the position of the target object in the sample image information to obtain a label of the distribution information of the target object corresponding to the sample image information, and inputting the sample image and the corresponding label into the preset model; processing the sample image information with the first convolutional network model in the preset model to obtain a first convolution image of the sample image information; processing the sample image information with the second convolutional network model in the preset model to obtain a second convolution image of the sample image information, where the convolution kernels used by the first and second convolutional network models are different; merging the first and second convolution images of the sample image information to obtain a merged image; and performing deconvolution processing on the merged image and back-propagating according to the deconvolution result and the label of the sample image to adjust the parameters of each part of the preset model.
  • Correspondingly, the step of processing the image to be processed with the preset model includes: inputting the image information to be processed into the trained preset model; processing it with the first convolutional network model in the preset model to obtain a first convolution image of the image information to be processed; processing it with the second convolutional network model in the preset model to obtain a second convolution image of the image information to be processed; and merging the first and second convolution images of the image information to be processed and performing deconvolution processing on the merged image to obtain the corresponding density map of the image to be processed as the distribution information of the target object in the image to be processed.
  • Optionally, the value of a pixel in the density map is the distribution density value of the target object at the position corresponding to that pixel.
  • Optionally, the density map reflects the density of the target object in each distribution area of the target area.
  • The density map carries an identifier indicating the density of the target object; the identifier may be different colors, different shades of the same color, numerical information, etc.
  • Optionally, when there are multiple target areas located in different sales regions, the target sales region of the drug is determined according to the density maps of the target objects in the multiple target areas. For example, a sales region where the density maps indicate a greater density requires more of the drug, so the target sales region is determined indirectly.
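A sketch of that indirect reasoning: sum the density maps collected in each sales region and rank the regions by estimated demand. The region names and numbers are illustrative only.

```python
import numpy as np

def rank_sales_regions(region_maps):
    """region_maps: {region name: list of density maps (numpy arrays)}.
    Regions whose density maps carry more total density need more of the
    drug, so they rank higher as candidate target sales regions."""
    demand = {name: float(sum(m.sum() for m in maps))
              for name, maps in region_maps.items()}
    return sorted(demand.items(), key=lambda kv: kv[1], reverse=True)

maps = {"region_a": [np.full((4, 4), 0.9)],
        "region_b": [np.full((4, 4), 0.1), np.full((4, 4), 0.2)]}
print(rank_sales_regions(maps))  # region_a first: denser weeds, more drug needed
```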
  • The distribution information may further include the distribution area of the target object in the target area; in that case, the flight route of the UAV can be determined according to the location of the distribution area of the target object.
  • Optionally, after the image information to be processed is input into the preset model for analysis and the distribution information of the target object is obtained, a prescription map of the target area can also be determined according to the distribution information; the prescription map is used to display the application information of the target area. Specifically: the type of the target object is determined; the application information of each sub-area in the target area is determined according to the type and the distribution information, where the application information includes the drug type and the target spray amount for the target object in the sub-areas of the target area; and marking information identifying the application information is added to the image information of the target area to obtain the prescription map of the target area.
  • The type of the target object can be determined through machine learning; for example, the image of the target object is input into a trained prediction model, and the prediction model identifies the type of the target object.
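As an illustration of assembling the prescription map, the sketch below attaches one application-information record (drug type plus target spray amount) to each sub-area; the weed-to-drug mapping, the dose rule, and the record format are assumptions.

```python
# Hypothetical mapping from identified weed type to drug type.
DRUG_FOR_TYPE = {"broadleaf": "herbicide_A", "grass": "herbicide_B"}

def prescription_records(subareas):
    """subareas: list of dicts with 'bounds' (pixel box in the field image),
    'weed_type', and 'density'. Returns the marking records that would be
    added to the target area's image information to form the prescription
    map."""
    return [{
        "bounds": sa["bounds"],
        "drug": DRUG_FOR_TYPE[sa["weed_type"]],
        "target_spray_l": round(5.0 * sa["density"], 2),  # assumed dose rule
    } for sa in subareas]

print(prescription_records([
    {"bounds": (0, 0, 128, 128), "weed_type": "broadleaf", "density": 0.7},
    {"bounds": (128, 0, 256, 128), "weed_type": "grass", "density": 0.2},
]))
```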
  • According to another aspect, a storage medium is provided, including a stored program, where when the program runs, the device where the storage medium is located is controlled to execute the above method for controlling an unmanned aerial vehicle.
  • According to another aspect, a processor is provided for running a program, where the above method for controlling an unmanned aerial vehicle is executed when the program runs.
  • The technical content disclosed in this embodiment may be implemented in other ways.
  • The device embodiments described above are only schematic.
  • The division into units may be a division by logical function; in actual implementation there may be other ways of dividing, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • The mutual coupling, direct coupling, or communication connection shown or discussed may be indirect coupling or communication connection through interfaces, units, or modules, and may be electrical or take other forms.
  • The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • Each functional unit in each embodiment of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.
  • The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
  • If the integrated unit is implemented as a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the existing technology, or in whole or in part, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions that enable a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application.
  • The aforementioned storage media include media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
  • In the embodiments of the present application, the image information to be processed of the target area is acquired; the image information to be processed is input into a preset model for analysis to obtain the distribution information of the target object in the image information to be processed, where the preset model is obtained by training with multiple sets of data, each set including sample image information of the target area and a label identifying the distribution information of the target object in the sample image information; and the UAV is controlled to spray the drug onto the target object according to the distribution information.

Abstract

A method for determining distribution information, and a method and device for controlling an unmanned aerial vehicle. The control method includes: acquiring image information of a target area (S102); inputting the image information into a preset model for analysis to obtain distribution information of a target object in the target area (S104), where the preset model is obtained by training with multiple sets of data, each set including sample image information of the target area and a label identifying the distribution information of the target object in the sample image information; and controlling the unmanned aerial vehicle to spray a drug onto the target object according to the distribution information (S106). The method solves the technical problem that crops and weeds are difficult to distinguish, which causes pesticide waste and pesticide residue.

Description

Method for determining distribution information, and method and device for controlling unmanned aerial vehicle
Cross-reference to Related Applications
This application claims priority to Chinese patent application No. 201811217967.X, filed with the China Patent Office on October 18, 2018 and entitled "Method for determining distribution information, and method and device for controlling unmanned aerial vehicle", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of plant protection, and in particular to a method for determining distribution information and a method and device for controlling an unmanned aerial vehicle.
Background
At present, drones basically apply herbicides or defoliants by blanket spraying. Blanket spraying wastes a large amount of pesticide and leaves pesticide residues, while places with severe weed damage may receive an insufficient dose, causing great economic loss.
Summary
The embodiments provide a method for determining distribution information and a method and device for controlling an unmanned aerial vehicle, so as to at least solve the technical problem in the related art that crops and weeds are difficult to distinguish, which causes pesticide waste and pesticide residue.
According to one aspect of the embodiments, a method for controlling an unmanned aerial vehicle is provided, including: acquiring image information to be processed of a target area; inputting the image information to be processed into a preset model for analysis to obtain distribution information of a target object in the image to be processed, where the preset model is obtained by training with multiple sets of data, each set including sample image information of the target area and a label identifying the distribution information of the target object in the sample image information; and controlling the unmanned aerial vehicle to spray a drug onto the target object according to the distribution information corresponding to the image to be processed.
Optionally, the step of training the preset model includes:
acquiring sample image information, labeling the position of the target object in the sample image information to obtain a label of the distribution information of the target object corresponding to the sample image information, and inputting the sample image and the corresponding label into the preset model;
processing the sample image information with a first convolutional network model in the preset model to obtain a first convolution image of the sample image information;
processing the sample image information with a second convolutional network model in the preset model to obtain a second convolution image of the sample image information, where the convolution kernels used by the first and second convolutional network models are different;
merging the first convolution image and the second convolution image of the sample image information to obtain a merged image; and
performing deconvolution processing on the merged image, and performing back propagation according to the deconvolution result and the label of the sample image to adjust the parameters of each part of the preset model.
Optionally, inputting the image information to be processed into the preset model for analysis to obtain the distribution information of the target object in the image information to be processed includes:
inputting the image information to be processed into the trained preset model;
processing it with the first convolutional network model in the preset model to obtain a first convolution image of the image information to be processed;
processing it with the second convolutional network model in the preset model to obtain a second convolution image of the image information to be processed; and
merging the first and second convolution images of the image information to be processed and performing deconvolution processing on the merged image to obtain the corresponding density map of the image to be processed as the distribution information of the target object in the image to be processed.
Optionally, the value of a pixel in the density map is the distribution density value of the target object at the position corresponding to that pixel.
Optionally, the sample image information includes a density map of the target object, which reflects the density of the target object in each distribution area of the target area.
Optionally, the density map carries an identifier indicating the density of the target object.
Optionally, the distribution information includes at least one of the following: the density of the target object in each distribution area of the target area, and the area of the distribution area where the target object is located. Controlling the unmanned aerial vehicle to spray the drug onto the target object according to the distribution information includes: determining the spray amount or spray duration of the unmanned aerial vehicle in a distribution area according to the density of the target object in that area; and/or determining the spray range according to the area of the distribution area where the target object is located.
Optionally, the distribution information further includes the distribution area of the target object in the target area, and the method further includes: determining a flight route of the unmanned aerial vehicle according to the location of the distribution area of the target object, and controlling the unmanned aerial vehicle to move along the flight route.
Optionally, after the unmanned aerial vehicle is controlled to spray the drug onto the target object according to the distribution information, the method further includes: detecting the remaining distribution area of the unmanned aerial vehicle in the target area, where the remaining distribution area is the distribution area in the target area that has not been sprayed; determining the density of the target object in the remaining distribution area and the total area of the remaining distribution area; determining the total amount of drug required for the remaining distribution area according to that density and total area; determining the difference between the remaining drug amount of the unmanned aerial vehicle and the total amount; and comparing the difference with a preset threshold and adjusting the flight route of the unmanned aerial vehicle according to the comparison result.
Optionally, before the unmanned aerial vehicle is controlled to spray the drug onto the target object according to the distribution information, the method further includes: determining the target drug amount of the unmanned aerial vehicle according to the size of the distribution area of the target object in the target area and the density of the target object in the distribution area.
According to another aspect of the embodiments, a control device for an unmanned aerial vehicle is provided, including: an acquisition module, configured to acquire image information of a target area; an analysis module, configured to input the image information into a preset model for analysis to obtain distribution information of a target object in the target area, where the preset model is obtained by training with multiple sets of data, each set including sample image information of the target area and a label identifying the distribution information of the target object in the sample image information; and a control module, configured to control the unmanned aerial vehicle to spray a drug onto the target object according to the distribution information.
According to yet another aspect of the embodiments, an unmanned aerial vehicle is provided, including: an image acquisition device, configured to acquire image information of a target area; and a processor, configured to input the image information into a preset model for analysis to obtain distribution information of a target object in the target area, where the preset model is obtained by training with multiple sets of data, each set including sample image information of the target area and a label identifying the distribution information of the target object in the sample image information, and to control the unmanned aerial vehicle to spray a drug onto the target object according to the distribution information.
According to still another aspect of the embodiments, an unmanned aerial vehicle is provided, including: a communication module, configured to receive image information of a target area from a designated device, where the designated device includes a network-side server or a surveying and mapping drone; and a processor, configured to input the image information into a preset model for analysis to obtain distribution information of a target object in the target area, where the preset model is obtained by training with multiple sets of data, each set including sample image information of the target area and a label identifying the distribution information of the target object in the sample image information, and to control the unmanned aerial vehicle to spray a drug onto the target object according to the distribution information.
According to yet another aspect of the embodiments, a storage medium is provided, the storage medium including a stored program, where when the program runs, the device where the storage medium is located is controlled to perform the above method for determining distribution information.
According to yet another aspect of the embodiments, a processor is provided, the processor being configured to run a program, where the above method for determining distribution information is performed when the program runs.
According to yet another aspect of the embodiments, a method for determining distribution information of a target object is provided, including: acquiring image information of a target area; and inputting the image information into a preset model for analysis to obtain distribution information of the target object in the target area, where the preset model is obtained by training with multiple sets of data, each set including sample image information and a label identifying the distribution information of the target object in the sample image information.
Optionally, the step of training the preset model includes:
acquiring sample image information, labeling the position of the target object in the sample image information to obtain a label of the distribution information of the target object corresponding to the sample image information, and inputting the sample image and the corresponding label into the preset model;
processing the sample image information with the first convolutional network model in the preset model to obtain a first convolution image of the sample image information;
processing the sample image information with the second convolutional network model in the preset model to obtain a second convolution image of the sample image information, where the convolution kernels used by the first and second convolutional network models are different;
merging the first convolution image and the second convolution image of the sample image information to obtain a merged image; and
performing deconvolution processing on the merged image, and performing back propagation according to the deconvolution result and the label of the sample image to adjust the parameters of each part of the preset model.
Optionally, inputting the image information to be processed into the preset model for analysis to obtain the distribution information of the target object in the image information to be processed includes:
inputting the image information to be processed into the trained preset model;
processing it with the first convolutional network model in the preset model to obtain a first convolution image of the image information to be processed;
processing it with the second convolutional network model in the preset model to obtain a second convolution image of the image information to be processed; and
merging the first and second convolution images of the image information to be processed and performing deconvolution processing on the merged image to obtain the corresponding density map of the image to be processed as the distribution information of the target object in the image to be processed.
Optionally, the value of a pixel in the density map is the distribution density value of the target object at the position corresponding to that pixel.
Optionally, the sample image information includes a density map of the target object, which reflects the density of the target object in each distribution area of the target area.
Optionally, the density map carries an identifier indicating the density of the target object.
Optionally, when there are multiple target areas located in different sales regions, the target sales region of the drug is determined according to the density maps of the target objects in the multiple target areas.
Optionally, the distribution information further includes the distribution area of the target object in the target area, and the method further includes: determining the flight route of the unmanned aerial vehicle according to the location of the distribution area of the target object.
Optionally, after the image information is input into the preset model for analysis and the distribution information of the target object in the target area is obtained, the method further includes: determining the type of the target object; determining application information for each sub-area of the target area according to the type and the distribution information, where the application information includes the drug type and the target spray amount for the target object in the sub-area of the target area; and adding marking information identifying the application information to the image information of the target area to obtain a prescription map of the target area.
Optionally, the target area is farmland to which a drug is to be applied, and the target object is weeds.
In the embodiments of the present application, the image information of the target area is acquired; the image information is input into a preset model for analysis to obtain the distribution information of the target object in the target area, where the preset model is obtained by training with multiple sets of data, each set including image information of the target area and a label identifying the distribution information of the target object in the image information; and the unmanned aerial vehicle is controlled to spray the drug onto the target object according to the distribution information. This achieves the purpose of controlling the spray amount specifically according to the distribution density of weeds in different regions, so that the spray amount is set in combination with the weed density in each region, pesticide is saved, and spraying efficiency is improved, thereby solving the technical problem in the related art that crops and weeds are difficult to distinguish, which causes pesticide waste and pesticide residue.
Brief Description of the Drawings
The drawings described here are used to provide a further understanding of the present application and form a part of the present application. The schematic embodiments of the present application and their descriptions are used to explain the present application and do not constitute an improper limitation of the present application. In the drawings:
FIG. 1 is a schematic flowchart of a method for controlling an unmanned aerial vehicle according to an embodiment of the present application;
FIG. 2 is a schematic flowchart of training a preset model according to an embodiment of the present application;
FIG. 3a and FIG. 3b are schematic diagrams of a sample image and its annotations according to an embodiment of the present application;
FIG. 4 is a schematic flowchart of another method of training a preset model according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a density map according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of an optional control device for an unmanned aerial vehicle according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of an optional unmanned aerial vehicle according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of another optional unmanned aerial vehicle according to an embodiment of the present application;
FIG. 9 is a schematic flowchart of a method for determining distribution information according to an embodiment of the present application.
Detailed Description
To help those skilled in the art better understand the solution of the present application, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings in the embodiments. Obviously, the described embodiments are only part of the embodiments of the present application rather than all of them. Based on the embodiments in the present application, all other embodiments obtained by those of ordinary skill in the art without creative work shall fall within the scope of protection of the present application.
It should be noted that the terms "first", "second", and the like in the specification, claims, and drawings of the present application are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data used in this way are interchangeable where appropriate, so that the embodiments of the present application described here can be implemented in orders other than those illustrated or described here. In addition, the terms "include" and "have" and any variants of them are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that includes a series of steps or units is not necessarily limited to the steps or units clearly listed, but may include other steps or units that are not clearly listed or that are inherent to the process, method, product, or device.
According to an embodiment of the present application, an embodiment of a method for controlling an unmanned aerial vehicle is provided. It should be noted that the steps shown in the flowcharts of the drawings can be executed in a computer system such as a set of computer-executable instructions, and although a logical order is shown in the flowcharts, in some cases the steps shown or described can be executed in an order different from that here.
FIG. 1 is a schematic flowchart of a method for controlling an unmanned aerial vehicle according to this embodiment. As shown in FIG. 1, the method includes the following steps:
Step S102: Acquire image information to be processed of the target area.
Optionally, the image information to be processed may be obtained by capturing images of the target area with an image acquisition device mounted on the unmanned aerial vehicle. The target area may be one or more fields of farmland to which a drug is to be applied. The unmanned aerial vehicle may be provided with a positioning system, so that the area and the latitude-longitude information of the current target area are determined from the positioning system.
Step S104: Input the image information to be processed into a preset model for analysis to obtain the distribution information of the target object in the target area.
Optionally, the target object may be weeds in the farmland.
The preset model is obtained by training with multiple sets of data, each set including sample image information of the target area and a label identifying the distribution information of the target object in the sample image information.
For example, a weed recognition model for recognizing weed types can be trained. The weed recognition model is obtained by training with multiple sets of data, each set including sample image information of the target area and a label identifying the type of the target object in the sample image information.
Optionally, after the image information of the target area is acquired, the image information is input into the preset weed recognition model for analysis to obtain the type of the target object in the target area, where the target object is a weed.
Step S106: Control the unmanned aerial vehicle to spray the drug onto the target object according to the distribution information corresponding to the image to be processed.
The distribution information may be the density of the target object in each distribution area of the target area and the area of the distribution area where the target object is located.
Controlling the unmanned aerial vehicle to spray the drug onto the target object according to the distribution information may be implemented in the following ways:
determining the spray amount or spray duration of the unmanned aerial vehicle in a distribution area according to the density of the target object in that area; and/or determining the spray range according to the area of the distribution area where the target object is located.
Optionally, the greater the density of the target object in a distribution area, the greater the spray amount of the unmanned aerial vehicle in the corresponding area and the longer the spray duration. The larger the area of the distribution area where the target object is located, the wider the spray range of the unmanned aerial vehicle. The density of the target object in the distribution area and the area of the distribution area can be considered together to determine the spray amount of the unmanned aerial vehicle in the corresponding area. For example, the spray amount is determined according to the density of the target object in the distribution area. The spray range may be a vertical range or a horizontal range.
The distribution information of the target object further includes the distribution area of the target object in the target area. Specifically, the pixel region occupied by the distribution area in the image can be determined from the acquired image information of the target area, and/or the latitude-longitude range occupied by the target object in the target area can be obtained through a positioning device.
Optionally, the flight route of the unmanned aerial vehicle can be determined according to the location of the distribution area of the target object, and the unmanned aerial vehicle is controlled to move along the flight route.
Specifically, the flight route can be determined so as to avoid areas free of weeds, and the unmanned aerial vehicle is controlled to move along the flight route.
After the unmanned aerial vehicle is controlled to spray the drug onto the target object according to the distribution information, the method may further:
detect the remaining distribution area of the unmanned aerial vehicle in the target area, where the remaining distribution area is the distribution area in the target area that has not been sprayed; determine the density of the target object in the remaining distribution area and the total area of the remaining distribution area; determine the total amount of drug required for the remaining distribution area according to that density and total area; determine the difference between the remaining drug amount of the unmanned aerial vehicle and the total amount; and compare the difference with a preset threshold, adjusting the flight route of the unmanned aerial vehicle according to the comparison result.
Optionally, when the difference between the remaining drug amount and the total amount is negative, the flight route of the unmanned aerial vehicle can be adjusted to a return route so that pesticide can be reloaded. On the way back, the farmland along the return route can be sprayed.
Optionally, before the flight route is adjusted to the return route, the return route can be planned according to the regions of the target object that have not been sprayed and the remaining drug amount, so that a whole region is sprayed on the way back.
Optionally, before the unmanned aerial vehicle is controlled to spray the drug onto the target object according to the distribution information, the image information of the target area may be acquired by the image acquisition device and input into the preset model to determine the distribution information of the target object in the target area; the target drug amount of the unmanned aerial vehicle is then determined according to the size of the distribution area of the target object in the target area and the density of the target object in the distribution area.
Optionally, one target drug amount is determined when the distribution area of the target object in the target area is small and the density of the target object in the distribution area is high; another when the distribution area is large and the density is low; another when both the distribution area and the density are small; and another when both are large. After the target drug amount of the unmanned aerial vehicle is determined, the pesticide is loaded.
Referring to FIG. 2, the training method of the preset model may include the following steps.
Step S302: Acquire sample image information and label the position of the target object in the sample image information to obtain a label of the distribution information of the target object corresponding to the sample image information.
Optionally, the image corresponding to the sample image information is an RGB image.
Optionally, the distribution information of the target object in the sample image information can be identified by a label. The label includes the latitude-longitude distribution range of the target object in the target area and/or the pixel distribution range in the picture. For example, referring to FIG. 3a, a cross "x" can be used to indicate a crop region and a circle "Ο" a weed region. Referring to FIG. 3b, FIG. 3b shows the identification of the target object on a real electronic map, where the dark areas are weeds and the light areas are crops.
Step S304: Process the sample image information with the first convolutional network model in the preset model to obtain a first convolution image of the sample image information.
Step S306: Process the sample image information with the second convolutional network model in the preset model to obtain a second convolution image of the sample image information, where the convolution kernels used by the first and second convolutional network models are different.
Optionally, the convolution kernel of the first convolutional network model may be 3×3 in size, and the convolution stride may be set to 2.
Optionally, the sample image information is an RGB image with the three dimensions R, G, and B. Downsampling may be performed while the first convolutional network model convolves the labeled image into the first convolution image, and the dimension of the first convolution image can also be set.
Optionally, multiple convolutions may be performed when the first convolutional network model convolves the labeled image into the first convolution image; each convolution uses a 3×3 kernel with a stride of 2 and downsamples the image. After each downsampling, the image is half the size of the image before it, which greatly reduces the amount of data to process and speeds up computation.
Optionally, the convolution kernel of the second convolutional network model may be set to 5×5, and the convolution stride may be set to 2.
The sample image information is an RGB image with the three dimensions R, G, and B; downsampling may be performed while the second convolutional network model convolves the labeled image into the second convolution image, and the dimension of the second convolution image can also be set.
Optionally, multiple convolutions may be performed when the second convolutional network model convolves the labeled image into the second convolution image; each convolution uses a 5×5 kernel with a stride of 2 and downsamples the image. After each downsampling, the image is half the size of the image before it, which greatly reduces the amount of data to process and speeds up computation.
The first convolution image and the second convolution image have the same image size.
Step S308: Merge the first convolution image and the second convolution image of the sample image information to obtain a merged image.
Step S310: Perform deconvolution processing on the merged image, and perform back propagation according to the deconvolution result and the label of the sample image to adjust the parameters of each part of the preset model.
Optionally, after the first and second convolution images are merged, the merged image is deconvolved the same number of times as the sample image information was convolved to produce the first convolution image, and the dimension of the deconvolved image can be set.
When the merged image is deconvolved, the deconvolution kernel size can be set to 3×3.
After the merged image is deconvolved, the image has the same size as the sample image information.
Finally, back propagation is performed according to the deconvolution result and the label of the sample image to adjust the parameters of each layer of the preset model.
By training the preset model with many sample images, the preset model acquires the ability to identify the distribution positions of the target object in an image to be processed.
图4为本实施例提供的另一种每组数据中的目标区域的样本图像信息获取方法的流程示意图;该方法包括以下步骤:
步骤S402,获取样本图像信息,并对所述样本图像信息中目标对象的位置进行标注,获得该样本图像信息对应的目标对象的分布信息的标签,并将所述样本图像及对应的标签输入预设模型;
可选地,样本图像信息对应的图像为RGB图像。
可选地,可通过对样本图像信息中目标对象的分布信息通过标签进行标识。标签包括目标对象在目标区域的经纬度分布范围和/或在图片中的像素分布范围。
采用所述预设模型中的第一卷积网络模型对所述样本图像信息进行处理,获得所述样本图像信息的第一卷积图像。
可选地,第一卷积网络模型的卷积核的大小为3*3,卷积步长可以设置为2。样本图像信息对应的图像为RGB图像,具有R、G、B三个维度,在采用第一卷积网络模型对所述标记图像进行卷积处理,得到第一卷积图像的过程中,可以进行下采样。另外,还可以对第一卷积图像的维度进行设定。
如图4中,得到第一卷积图像的过程中,共进行三次卷积,依次为步骤S4042,步骤S4044,步骤S4046,每次卷积都进行下采样,设置卷积步长为2。每次下采样过后,图像为下采样之前图像的1/2大小,可大量减少数据处理量,提高数据的运算速度。
图4中,n1,n2,n3分别是对应每次卷积时对应的设置的维度,该维度用于表示第一卷积图像每一个像素对应的数据向量长度。在一个例子中,当n1为1时,第一次卷积后的图像的像素对应的维度为1,该像素对应的数据可以为灰度值,当n1为3时,第一次卷积后的图像的像素对应的维度为3,该像素对应的数据可以为RGB值。当采用第一卷积网络模型对所述标记图像进行卷积处理,得到第一卷积图像的过程中,可进行多次卷积,每次卷积时卷积核都为3*3。
并且,采用所述预设模型中的第二卷积网络模型对所述样本图像信息进行处理,获得所述样本图像信息的第二卷积图像,其中,所述第一卷积网络模型和第二卷积网络模型采用的卷积核是不同的。
可选地,第二卷积网络模型的卷积核的大小可以设置为5*5,卷积步长可以设置为2。样本图像信息对应的图像为RGB图像,具有R、G、B三个维度,在采用第二卷积网络模 型对所述标记图像进行卷积处理,得到第二卷积图像的过程中,可以进行下采样。另外,还可以对第二卷积图像的维度进行设定。
可选地,当采用第二卷积网络模型对所述标记图像进行卷积处理,得到第二卷积图像的过程中,可进行多次卷积,每次卷积时卷积核都为5*5,卷积步长可以设置为2,每次卷积都进行下采样。每次下采样过后,图像为下采样之前图像的1/2大小,可大量减少数据处理量,提高数据的运算速度。
如图4中,得到第二卷积图像的过程中,共进行三次卷积,依次为步骤S4062,步骤S4064,步骤S4066,每次卷积都进行下采样,设置卷积步长为2。每次下采样过后,图像为下采样之前的图像的1/2大小,可大量减少数据处理量,提高数据的运算速度。
图4中,m1,m2,m3分别是对应每次卷积时对应的设置的维度,该维度用于表示第二卷积图像每一个像素对应的数据向量长度。在一个例子中,当m1为1时,第一次卷积后的图像的像素对应的维度为1,该像素对应的数据可以为灰度值,当m1为3时,第一次卷积后的图像的像素对应的维度为3,该像素对应的数据可以为RGB值。当采用第二卷积网络模型对所述标记图像进行卷积处理,得到第二卷积图像的过程中,可进行多次卷积,每次卷积时卷积核都为5*5。
The first convolved image and the second convolved image have the same size.
Step S408: merging the first convolved image and the second convolved image of the sample image information to obtain a merged image.
Deconvolution is performed on the merged image.
Optionally, the merged image is deconvolved the same number of times as the number of convolutions performed from the sample image information to the first convolved image, and the number of channels of the deconvolved image may be configured. Three deconvolutions are performed on the merged image, namely steps S4102, S4104 and S4106, to obtain a density map, i.e. the sample image information of the target area, in step S412.
When the merged image is deconvolved, the deconvolution kernel size may be set to 3×3.
After the merged image is deconvolved, the image has the same size as the sample image information.
Deconvolution is performed on the merged image, and backpropagation is performed according to the deconvolution result and the label of the sample image to adjust the parameters of each part of the preset model.
By training the preset model with multiple sample images, the preset model acquires the ability to recognize the distribution positions of the target object in an image to be processed.
Correspondingly, when the preset model is used for image recognition, the image information to be processed may be input into the trained preset model.
The image information to be processed is processed with the first convolutional network model in the preset model to obtain the first convolved image of the image information to be processed.
The image information to be processed is processed with the second convolutional network model in the preset model to obtain the second convolved image of the image information to be processed.
The first convolved image and the second convolved image of the image information to be processed are merged, and deconvolution is performed on the merged image to obtain the density map corresponding to the image to be processed as the distribution information of the target object in the image to be processed. The value of a pixel in the density map is the distribution density of the target object at the position corresponding to that pixel.
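At inference time the trained model therefore maps an image directly to its density map. Continuing the earlier sketch, and assuming a simple scale-to-[0, 1] preprocessing:

    import torch

    # model: a trained TwoBranchDensityNet from the previous sketches.
    @torch.no_grad()
    def predict_density(model, rgb: torch.Tensor) -> torch.Tensor:
        # rgb: (3, H, W) tensor with values in [0, 255].
        model.eval()
        x = (rgb / 255.0).unsqueeze(0)  # assumed normalization, batch of 1
        return model(x).squeeze(0)      # (1, H, W) density map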
Optionally, the density map carries an indication of the density of the target object. For example, the lighter the color of a distribution area in the density map, the higher the density of the target object in that distribution area. FIG. 5 shows a density map obtained after processing; in FIG. 5, if area A is lighter in color than area B, the density of the target object in area A is higher. In another example, the value of a pixel in the density map is the distribution density of the target object at the position corresponding to that pixel.
Optionally, the density map obtained after deconvolution may be a grayscale image. In a grayscale image, white is 255 and black is 0, and places with a larger grayscale value indicate a denser distribution of the target object in the target area; that is, the lighter the color, the denser the weeds, and the darker the color, the sparser the weeds.
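Rendering a predicted density map as such a grayscale image is straightforward; the min-max normalization below is an illustrative assumption.

    import numpy as np

    def density_to_grayscale(density: np.ndarray) -> np.ndarray:
        # Map densities to [0, 255] so that the densest areas render
        # white (255) and empty areas render black (0).
        span = density.max() - density.min()
        if span == 0:
            return np.zeros_like(density, dtype=np.uint8)
        return ((density - density.min()) / span * 255).astype(np.uint8)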
In this embodiment, the preset model is used to analyze the image information of the target area to obtain the distribution information of the target object in the target area, and the unmanned aerial vehicle is controlled, based on that distribution information, to spray the pesticide on the target object: image information of the target area is acquired; the image information is input into the preset model for analysis to obtain the distribution information of the target object in the target area, where the preset model is trained with multiple groups of data, each group including image information of a target area and a label identifying the distribution information of the target object in that image information; and the unmanned aerial vehicle is controlled, according to the distribution information, to spray the pesticide on the target object. This achieves the purpose of controlling the spraying amount in a targeted manner according to the weed density in different areas, attains the technical effect of setting the spraying amount in combination with the weed density in different areas, saving pesticide and improving spraying efficiency, and can thus solve the problems of pesticide waste and pesticide residue caused by the difficulty of distinguishing crops from weeds.
FIG. 6 is a schematic structural diagram of an optional control apparatus for an unmanned aerial vehicle according to this embodiment. As shown in FIG. 6, the apparatus includes an acquisition module 62, an analysis module 64 and a control module 66, where:
the acquisition module 62 is configured to acquire the to-be-processed image information of the target area;
the analysis module 64 is configured to input the image information to be processed into the preset model for analysis to obtain the distribution information of the target object in the image information to be processed, where the preset model is trained with multiple groups of data, each group including sample image information of a target area and a label identifying the distribution information of the target object in the sample image information; and
the control module 66 is configured to control the unmanned aerial vehicle, according to the distribution information corresponding to the image to be processed, to spray the pesticide on the target object.
It should be noted that, for the specific functions of the modules of the control apparatus, reference may be made to the description of the steps shown in FIG. 1, which is not repeated here.
FIG. 7 is a schematic structural diagram of an optional unmanned aerial vehicle according to this embodiment. As shown in FIG. 7, the unmanned aerial vehicle includes an image acquisition device 72 and a processor 74, where:
the image acquisition device 72 is configured to acquire the to-be-processed image information of the target area; and
the processor 74 is configured to input the image information to be processed into the preset model for analysis to obtain the distribution information of the target object in the image information to be processed, where the preset model is trained with multiple groups of data, each group including sample image information of a target area and a label identifying the distribution information of the target object in the sample image information; and to control the unmanned aerial vehicle, according to the distribution information corresponding to the image information to be processed, to spray the pesticide on the target object.
It should be noted that, for the specific functions of the unmanned aerial vehicle, reference may be made to the description of the steps shown in FIG. 1, which is not repeated here.
FIG. 8 is a schematic structural diagram of an optional control device for an unmanned aerial vehicle according to this embodiment. As shown in FIG. 8, the device includes an image acquisition device 82 and a processor 84, where:
the image acquisition device 82, implemented as a communication module, is configured to receive the to-be-processed image information of the target area from a designated device; and
the processor 84 is configured to input the image information to be processed into the preset model for analysis to obtain the distribution information of the target object in the image to be processed, where the preset model is trained with multiple groups of data, each group including sample image information of a target area and a label identifying the distribution information of the target object in the sample image information; and to control the unmanned aerial vehicle, according to the distribution information corresponding to the image to be processed, to spray the pesticide on the target object.
It should be noted that, for the functions of the control device, reference may be made to the description of the steps shown in FIG. 1, which is not repeated here.
FIG. 9 is a schematic flowchart of a method for determining distribution information according to this embodiment. As shown in FIG. 9, the method includes:
Step S902: acquiring the to-be-processed image information of the target area;
Step S904: inputting the image information to be processed into the preset model for analysis to obtain the distribution information of the target object in the image information to be processed, where the preset model is trained with multiple groups of data, each group including sample image information and a label identifying the distribution information of the target object in the sample image information.
Optionally, training the preset model includes: acquiring sample image information, annotating the positions of the target object in the sample image information to obtain a label of the distribution information of the target object corresponding to the sample image information, and inputting the sample image and the corresponding label into the preset model; processing the sample image information with the first convolutional network model in the preset model to obtain the first convolved image of the sample image information; processing the sample image information with the second convolutional network model in the preset model to obtain the second convolved image of the sample image information, where the convolution kernels used by the first and second convolutional network models are different; merging the first convolved image and the second convolved image of the sample image information to obtain a merged image; and performing deconvolution on the merged image and performing backpropagation according to the deconvolution result and the label of the sample image to adjust the parameters of each part of the preset model.
Correspondingly, processing the image to be processed with the preset model includes: inputting the image information to be processed into the trained preset model; processing the image information to be processed with the first convolutional network model in the preset model to obtain the first convolved image of the image information to be processed; processing the image information to be processed with the second convolutional network model in the preset model to obtain the second convolved image of the image information to be processed; and merging the first convolved image and the second convolved image of the image information to be processed and performing deconvolution on the merged image to obtain the density map corresponding to the image to be processed as the distribution information of the target object in the image to be processed.
Optionally, the value of a pixel in the density map is the distribution density of the target object at the position corresponding to that pixel.
Optionally, the density map reflects the density of the target object in each distribution area of the target area. The density map carries an indication of the density of the target object, and the indication may be different colors, different shades of the same color, numerical information, or the like.
Optionally, when there are multiple target areas located in different sales regions, the target sales region for the pesticide is determined according to the density maps of the target object in the multiple target areas. For example, a sales region whose density map indicates a higher density requires a larger amount of pesticide, so the target sales region can be determined indirectly.
The distribution information may further include the distribution area of the target object in the target area; in this case the flight route of the unmanned aerial vehicle may be determined according to the position of the distribution area of the target object.
Optionally, after the image information to be processed is input into the preset model for analysis and the distribution information of the target object is obtained, a prescription map of the target area may be determined according to the distribution information; the prescription map is used to present the application information of the target area. Specifically: the species of the target object is determined; the application information of each sub-area of the target area is determined according to the species and the distribution information, the application information including the pesticide type and the target spraying amount for the target object in the sub-area; and marker information identifying the application information is added to the image information of the target area to obtain the prescription map of the target area.
The species of the target object may be determined by machine learning, for example by inputting an image of the target object into a trained prediction model and using the prediction model to recognize the species, as sketched below.
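A prescription map of the kind just described could be assembled roughly as follows; the data class, the species classifier and the dose rule are hypothetical placeholders standing in for the trained prediction model and the application-information logic.

    from dataclasses import dataclass

    @dataclass
    class SubAreaPrescription:
        sub_area_id: int
        species: str            # recognized weed species
        pesticide: str          # pesticide type chosen for that species
        amount_l_per_ha: float  # target spraying amount

    def build_prescription(sub_areas, classify_species, dose_rule):
        # sub_areas: iterable of (id, image, mean_density) tuples;
        # classify_species: trained prediction model (assumed);
        # dose_rule: maps (species, density) -> (pesticide, amount).
        prescriptions = []
        for area_id, image, density in sub_areas:
            species = classify_species(image)
            pesticide, amount = dose_rule(species, density)
            prescriptions.append(
                SubAreaPrescription(area_id, species, pesticide, amount))
        return prescriptions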
It should be noted that, for the specific steps of the method for determining distribution information, reference may be made to the description of the steps shown in FIGS. 1-7, which is not repeated here.
According to yet another aspect of this embodiment, a storage medium is further provided. The storage medium includes a stored program, where when the program runs, the device on which the storage medium resides is controlled to execute the above control method for an unmanned aerial vehicle.
According to yet another aspect of this embodiment, a processor is further provided. The processor is configured to run a program, where the program, when running, executes the above control method for an unmanned aerial vehicle.
It should be understood that the technical content disclosed in this embodiment may be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the division of the units may be a division by logical function, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. Furthermore, the couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, units or modules, and may be electrical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to execute all or some of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk or an optical disc.
The above are only preferred implementations of the present application. It should be pointed out that those of ordinary skill in the art may make several improvements and refinements without departing from the principles of the present application, and these improvements and refinements shall also fall within the scope of protection of the present application.
Industrial applicability
In the embodiments of the present application, the to-be-processed image information of the target area is acquired; the image information to be processed is input into the preset model for analysis to obtain the distribution information of the target object in the image information to be processed, where the preset model is trained with multiple groups of data, each group including sample image information of a target area and a label identifying the distribution information of the target object in the sample image information; and the unmanned aerial vehicle is controlled, according to the distribution information, to spray the pesticide on the target object. This achieves the purpose of controlling the spraying amount in a targeted manner according to the weed density in different areas, attains the technical effect of setting the spraying amount in combination with the weed density in different areas, saving pesticide and improving spraying efficiency, and thus solves the technical problems in the related art of pesticide waste and pesticide residue caused by the difficulty of distinguishing crops from weeds.

Claims (20)

  1. A method for determining distribution information, comprising:
    acquiring to-be-processed image information of a target area;
    inputting the image information to be processed into a preset model for analysis to obtain distribution information of a target object in the image information to be processed, wherein the preset model is trained with multiple groups of data, each of the multiple groups of data including sample image information and a label identifying the distribution information of the target object in the sample image information.
  2. The method according to claim 1, wherein the step of training the preset model comprises:
    acquiring sample image information, annotating positions of the target object in the sample image information to obtain a label of the distribution information of the target object corresponding to the sample image information, and inputting the sample image and the corresponding label into the preset model;
    processing the sample image information with a first convolutional network model in the preset model to obtain a first convolved image of the sample image information;
    processing the sample image information with a second convolutional network model in the preset model to obtain a second convolved image of the sample image information, wherein convolution kernels used by the first convolutional network model and the second convolutional network model are different;
    merging the first convolved image and the second convolved image of the sample image information to obtain a merged image;
    performing deconvolution on the merged image, and performing backpropagation according to a result of the deconvolution and the label of the sample image to adjust parameters of each part of the preset model.
  3. The method according to claim 2, wherein inputting the image information to be processed into the preset model for analysis to obtain the distribution information of the target object in the image information to be processed comprises:
    inputting the image information to be processed into the trained preset model;
    processing the image information to be processed with the first convolutional network model in the preset model to obtain a first convolved image of the image information to be processed;
    processing the image information to be processed with the second convolutional network model in the preset model to obtain a second convolved image of the image information to be processed;
    merging the first convolved image and the second convolved image of the image information to be processed, and performing deconvolution on the merged image to obtain a density map corresponding to the image to be processed as the distribution information of the target object in the image to be processed.
  4. The method according to claim 3, wherein the value of a pixel in the density map is the distribution density of the target object at the position corresponding to that pixel.
  5. The method according to claim 2, wherein the sample image information comprises a density map of the target object, the density map reflecting the density of the target object in each distribution area of the target area.
  6. The method according to claim 5, wherein the density map carries an indication of the density of the target object.
  7. The method according to any one of claims 1 to 6, wherein when there are multiple target areas located in different sales regions, a target sales region for the pesticide is determined according to density maps of the target object in the multiple target areas.
  8. The method according to any one of claims 1 to 7, wherein
    the distribution information comprises: a distribution area of the target object in the target area; and
    the method further comprises: determining a flight route of an unmanned aerial vehicle according to the position of the distribution area of the target object.
  9. The method according to any one of claims 1 to 8, wherein after the image information is input into the preset model for analysis and the distribution information of the target object in the target area is obtained, the method further comprises:
    determining a species of the target object;
    determining application information of each sub-area of the target area according to the species and the distribution information, the application information including a pesticide type and a target spraying amount for the target object in the sub-area of the target area;
    adding, to the image information of the target area, marker information identifying the application information to obtain a prescription map of the target area.
  10. The method according to claim 9, wherein the target area is farmland to which pesticide is to be applied, and the target object is a weed.
  11. A control method for an unmanned aerial vehicle, comprising:
    acquiring to-be-processed image information of a target area;
    inputting the image information to be processed into a preset model for analysis to obtain distribution information of a target object in the image information to be processed, wherein the preset model is trained with multiple groups of data, each of the multiple groups of data including sample image information and a label identifying the distribution information of the target object in the sample image information;
    controlling the unmanned aerial vehicle, according to the distribution information corresponding to the image information to be processed, to spray pesticide on the target object.
  12. The method according to claim 11, wherein the distribution information comprises at least one of: the density of the target object in each distribution area of the target area, and the area of the distribution area where the target object is located; and controlling the unmanned aerial vehicle, according to the distribution information, to spray the pesticide on the target object comprises:
    determining, according to the density of the target object in a distribution area, the spraying amount or the spraying duration of the unmanned aerial vehicle in the distribution area; and/or
    determining a spraying swath according to the area of the distribution area where the target object is located.
  13. The method according to claim 11 or 12, wherein the target area is farmland to which pesticide is to be applied, and the target object is a weed;
    the distribution information further comprises: a distribution area of the target object in the target area; and
    the method further comprises: determining a flight route of the unmanned aerial vehicle according to the position of the distribution area of the target object, and controlling the unmanned aerial vehicle to move along the flight route.
  14. The method according to any one of claims 11 to 13, wherein after the unmanned aerial vehicle is controlled, according to the distribution information, to spray the pesticide on the target object, the method further comprises:
    detecting a remaining distribution area for the unmanned aerial vehicle in the target area, wherein the remaining distribution area is a distribution area in the target area that has not been sprayed;
    determining the density of the target object in the remaining distribution area and the total area of the remaining distribution area;
    determining, according to the density of the target object in the remaining distribution area and the total area of the remaining distribution area, a total pesticide amount required for the remaining distribution area;
    determining a difference between a remaining pesticide amount of the unmanned aerial vehicle and the total pesticide amount;
    comparing the difference with a preset threshold, and adjusting the flight route of the unmanned aerial vehicle according to the comparison result.
  15. The method according to any one of claims 11 to 14, wherein before the unmanned aerial vehicle is controlled, according to the distribution information, to spray the pesticide on the target object, the method further comprises:
    determining a target pesticide amount for the unmanned aerial vehicle according to the size of the distribution area of the target object in the target area and the density of the target object in the distribution area, both given by the distribution information.
  16. A control apparatus for an unmanned aerial vehicle, comprising:
    an acquisition module configured to acquire to-be-processed image information of a target area;
    an analysis module configured to input the image information to be processed into a preset model for analysis to obtain distribution information of a target object in the image information to be processed, wherein the preset model is trained with multiple groups of data, each of the multiple groups of data including sample image information of a target area and a label identifying the distribution information of the target object in the sample image information;
    a control module configured to control the unmanned aerial vehicle, according to the distribution information corresponding to the image to be processed, to spray pesticide on the target object.
  17. An unmanned aerial vehicle, comprising:
    an image acquisition device configured to acquire to-be-processed image information of a target area;
    a processor configured to input the image information to be processed into a preset model for analysis to obtain distribution information of a target object in the image information to be processed, wherein the preset model is trained with multiple groups of data, each of the multiple groups of data including sample image information of a target area and a label identifying the distribution information of the target object in the sample image information; and to control the unmanned aerial vehicle, according to the distribution information corresponding to the image information to be processed, to spray pesticide on the target object.
  18. A control device for an unmanned aerial vehicle, comprising:
    an image acquisition device configured to receive to-be-processed image information of a target area;
    a processor configured to input the image information to be processed into a preset model for analysis to obtain distribution information of a target object in the image information to be processed, wherein the preset model is trained with multiple groups of data, each of the multiple groups of data including sample image information of a target area and a label identifying the distribution information of the target object in the sample image information; and to control the unmanned aerial vehicle, according to the distribution information corresponding to the image information to be processed, to spray pesticide on the target object.
  19. A storage medium, comprising a stored program, wherein when the program runs, a device on which the storage medium resides is controlled to execute the method for determining distribution information according to any one of claims 1 to 10.
  20. A processor, configured to run a program, wherein the program, when running, executes the method for determining distribution information according to any one of claims 1 to 10.
PCT/CN2019/111515 2018-10-18 2019-10-16 分布信息的确定方法、无人飞行器的控制方法及装置 WO2020078396A1 (zh)

Priority Applications (6)

Application Number Priority Date Filing Date Title
KR1020217014072A KR20210071062A (ko) 2018-10-18 2019-10-16 분포 정보의 확정 방법, 무인 항공기의 제어 방법 및 장치
JP2021520573A JP2022502794A (ja) 2018-10-18 2019-10-16 分布情報の確定方法、無人飛行体の制御方法及び装置
CA3115564A CA3115564A1 (en) 2018-10-18 2019-10-16 Method for determining distribution information, and control method and device for unmanned aerial vehicle
US17/309,058 US20210357643A1 (en) 2018-10-18 2019-10-16 Method for determining distribution information, and control method and device for unmanned aerial vehicle
AU2019362430A AU2019362430B2 (en) 2018-10-18 2019-10-16 Method for determining distribution information, and control method and device for unmanned aerial vehicle
EP19873665.4A EP3859479A4 (en) 2018-10-18 2019-10-16 PROCEDURES FOR DETERMINING DISTRIBUTION INFORMATION AND CONTROL PROCEDURES AND DEVICE FOR UNMANNED AIRCRAFT

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811217967.X 2018-10-18
CN201811217967.XA CN109445457B (zh) 2018-10-18 2018-10-18 分布信息的确定方法、无人飞行器的控制方法及装置

Publications (1)

Publication Number Publication Date
WO2020078396A1 true WO2020078396A1 (zh) 2020-04-23

Family

ID=65546651

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/111515 WO2020078396A1 (zh) 2018-10-18 2019-10-16 分布信息的确定方法、无人飞行器的控制方法及装置

Country Status (8)

Country Link
US (1) US20210357643A1 (zh)
EP (1) EP3859479A4 (zh)
JP (1) JP2022502794A (zh)
KR (1) KR20210071062A (zh)
CN (1) CN109445457B (zh)
AU (1) AU2019362430B2 (zh)
CA (1) CA3115564A1 (zh)
WO (1) WO2020078396A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111783549A (zh) * 2020-06-04 2020-10-16 北京海益同展信息科技有限公司 一种分布图生成方法、系统、巡检机器人及控制终端

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109445457B (zh) * 2018-10-18 2021-05-14 广州极飞科技股份有限公司 分布信息的确定方法、无人飞行器的控制方法及装置
US10822085B2 (en) 2019-03-06 2020-11-03 Rantizo, Inc. Automated cartridge replacement system for unmanned aerial vehicle
CN112948371A (zh) * 2019-12-10 2021-06-11 广州极飞科技股份有限公司 数据处理方法、装置、存储介质、处理器
CN113011220A (zh) * 2019-12-19 2021-06-22 广州极飞科技股份有限公司 穗数识别方法、装置、存储介质及处理器
CN111459183B (zh) * 2020-04-10 2021-07-20 广州极飞科技股份有限公司 作业参数推荐方法、装置、无人设备及存储介质
CN112425328A (zh) * 2020-11-23 2021-03-02 广州极飞科技有限公司 多物料播撒控制方法、装置、终端设备、无人设备及介质
CN113973793B (zh) * 2021-09-09 2023-08-04 常州希米智能科技有限公司 一种病虫害区域无人机喷洒处理方法和系统
CN115337430A (zh) * 2022-08-11 2022-11-15 深圳市隆瑞科技有限公司 一种喷雾小车的控制方法和装置

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170192424A1 (en) * 2015-12-31 2017-07-06 Unmanned Innovation, Inc. Unmanned aerial vehicle rooftop inspection system
CN108154196A (zh) * 2018-01-19 2018-06-12 百度在线网络技术(北京)有限公司 用于输出图像的方法和装置
CN108541683A (zh) * 2018-04-18 2018-09-18 济南浪潮高新科技投资发展有限公司 一种基于卷积神经网络芯片的无人机农药喷洒系统
CN108629289A (zh) * 2018-04-11 2018-10-09 千寻位置网络有限公司 农田的识别方法及系统、应用于农业的无人机
CN109445457A (zh) * 2018-10-18 2019-03-08 广州极飞科技有限公司 分布信息的确定方法、无人飞行器的控制方法及装置

Family Cites Families (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10104824B2 (en) * 2013-10-14 2018-10-23 Kinze Manufacturing, Inc. Autonomous systems, methods, and apparatus for AG based operations
US10104836B2 (en) * 2014-06-11 2018-10-23 John Paul Jamison Systems and methods for forming graphical and/or textual elements on land for remote viewing
CN107148633B (zh) * 2014-08-22 2020-12-01 克莱米特公司 用于使用无人机系统进行农艺和农业监测的方法
US10139279B2 (en) * 2015-05-12 2018-11-27 BioSensing Systems, LLC Apparatuses and methods for bio-sensing using unmanned aerial vehicles
CN105159319B (zh) * 2015-09-29 2017-10-31 广州极飞科技有限公司 一种无人机的喷药方法及无人机
EP3479691B1 (en) * 2016-06-30 2024-04-24 Optim Corporation Mobile body control application and mobile body control method
US10520943B2 (en) * 2016-08-12 2019-12-31 Skydio, Inc. Unmanned aerial image capture platform
CA3035068A1 (en) * 2016-09-08 2018-03-15 Walmart Apollo, Llc Systems and methods for dispensing an insecticide via unmanned vehicles to defend a crop-containing area against pests
JP6798854B2 (ja) * 2016-10-25 2020-12-09 株式会社パスコ 目的物個数推定装置、目的物個数推定方法及びプログラム
US10721859B2 (en) * 2017-01-08 2020-07-28 Dolly Y. Wu PLLC Monitoring and control implement for crop improvement
JP6906959B2 (ja) * 2017-01-12 2021-07-21 東光鉄工株式会社 ドローンを使用した肥料散布方法
CN108509961A (zh) * 2017-02-27 2018-09-07 北京旷视科技有限公司 图像处理方法和装置
CN106882380A (zh) * 2017-03-03 2017-06-23 杭州杉林科技有限公司 空地一体农林用植保系统装置及使用方法
CN106951836B (zh) * 2017-03-05 2019-12-13 北京工业大学 基于先验阈值优化卷积神经网络的作物覆盖度提取方法
CN106910247B (zh) * 2017-03-20 2020-10-02 厦门黑镜科技有限公司 用于生成三维头像模型的方法和装置
CN107274378B (zh) * 2017-07-25 2020-04-03 江西理工大学 一种融合记忆cnn的图像模糊类型识别及参数整定方法
US10740607B2 (en) * 2017-08-18 2020-08-11 Autel Robotics Co., Ltd. Method for determining target through intelligent following of unmanned aerial vehicle, unmanned aerial vehicle and remote control
CN107728642B (zh) * 2017-10-30 2021-03-09 北京博鹰通航科技有限公司 一种无人机飞行控制系统及其方法
CN107933921B (zh) * 2017-10-30 2020-11-17 广州极飞科技有限公司 飞行器及其喷洒路线生成和执行方法、装置、控制终端
CN107703960A (zh) * 2017-11-17 2018-02-16 江西天祥通用航空股份有限公司 农药喷洒直升机的地空跟踪监测装置
CN108596222B (zh) * 2018-04-11 2021-05-18 西安电子科技大学 基于反卷积神经网络的图像融合方法
CN108594850B (zh) * 2018-04-20 2021-06-11 广州极飞科技股份有限公司 基于无人机的航线规划及控制无人机作业的方法、装置
US10660277B2 (en) * 2018-09-11 2020-05-26 Pollen Systems Corporation Vine growing management method and apparatus with autonomous vehicles

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170192424A1 (en) * 2015-12-31 2017-07-06 Unmanned Innovation, Inc. Unmanned aerial vehicle rooftop inspection system
CN108154196A (zh) * 2018-01-19 2018-06-12 百度在线网络技术(北京)有限公司 用于输出图像的方法和装置
CN108629289A (zh) * 2018-04-11 2018-10-09 千寻位置网络有限公司 农田的识别方法及系统、应用于农业的无人机
CN108541683A (zh) * 2018-04-18 2018-09-18 济南浪潮高新科技投资发展有限公司 一种基于卷积神经网络芯片的无人机农药喷洒系统
CN109445457A (zh) * 2018-10-18 2019-03-08 广州极飞科技有限公司 分布信息的确定方法、无人飞行器的控制方法及装置

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3859479A4 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111783549A (zh) * 2020-06-04 2020-10-16 北京海益同展信息科技有限公司 一种分布图生成方法、系统、巡检机器人及控制终端

Also Published As

Publication number Publication date
CN109445457A (zh) 2019-03-08
CN109445457B (zh) 2021-05-14
AU2019362430B2 (en) 2022-09-08
US20210357643A1 (en) 2021-11-18
EP3859479A1 (en) 2021-08-04
AU2019362430A1 (en) 2021-05-13
JP2022502794A (ja) 2022-01-11
KR20210071062A (ko) 2021-06-15
CA3115564A1 (en) 2020-04-23
EP3859479A4 (en) 2021-11-24

Similar Documents

Publication Publication Date Title
WO2020078396A1 (zh) 分布信息的确定方法、无人飞行器的控制方法及装置
Huang et al. A fully convolutional network for weed mapping of unmanned aerial vehicle (UAV) imagery
CN110297483B (zh) 待作业区域边界获取方法、装置,作业航线规划方法
US11093745B2 (en) Automated plant detection using image data
Guo et al. Aerial imagery analysis–quantifying appearance and number of sorghum heads for applications in breeding and agronomy
US20180260947A1 (en) Inventory, growth, and risk prediction using image processing
Blok et al. The effect of data augmentation and network simplification on the image‐based detection of broccoli heads with Mask R‐CNN
EP3815529A1 (en) Agricultural plant detection and control system
US20220192174A1 (en) Agricultural sprayer with real-time, on-machine target sensor
US20220256834A1 (en) Method for generating an application map for treating a field with an agricultural equipment
EP4014734A1 (en) Agricultural machine and method of controlling such
CN107213635A (zh) 视野显示方法和装置
Buddha et al. Weed detection and classification in high altitude aerial images for robot-based precision agriculture
Passos et al. Automatic detection of Aedes aegypti breeding grounds based on deep networks with spatio-temporal consistency
CN110188661B (zh) 边界识别方法及装置
Sassu et al. Artichoke deep learning detection network for site-specific agrochemicals uas spraying
Olsen Improving the accuracy of weed species detection for robotic weed control in complex real-time environments
CN111009000A (zh) 昆虫取食行为分析方法、装置和存储介质
US11832609B2 (en) Agricultural sprayer with real-time, on-machine target sensor
CN109492541B (zh) 目标对象类型的确定方法及装置、植保方法、植保系统
US20220392214A1 (en) Scouting functionality emergence
Shahid et al. Aerial imagery-based tobacco plant counting framework for efficient crop emergence estimation
CN117274674A (zh) 对靶施药方法、电子设备、存储介质及系统
Charitha et al. Detection of Weed Plants Using Image Processing and Deep Learning Techniques
CN117036886A (zh) 一种伪装目标检测方法、装置、设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19873665

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 3115564

Country of ref document: CA

ENP Entry into the national phase

Ref document number: 2021520573

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2019873665

Country of ref document: EP

Effective date: 20210428

ENP Entry into the national phase

Ref document number: 20217014072

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2019362430

Country of ref document: AU

Date of ref document: 20191016

Kind code of ref document: A