WO2020078396A1 - Method for determining distribution information, and method and device for controlling an unmanned aerial vehicle - Google Patents
Method for determining distribution information, and method and device for controlling an unmanned aerial vehicle
- Publication number
- WO2020078396A1 (PCT/CN2019/111515)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image information
- target object
- distribution
- information
- area
- Prior art date
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
- G05D1/0094—Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot involving pointing a payload, e.g. camera, weapon, sensor, towards a fixed or moving target
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
- G05D1/10—Simultaneous control of position or course in three dimensions
- G05D1/101—Simultaneous control of position or course in three dimensions specially adapted for aircraft
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B64—AIRCRAFT; AVIATION; COSMONAUTICS
- B64C—AEROPLANES; HELICOPTERS
- B64C39/00—Aircraft not otherwise provided for
- B64C39/02—Aircraft not otherwise provided for characterised by special use
- B64C39/024—Aircraft not otherwise provided for characterised by special use of the remote controlled vehicle type, i.e. RPV
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B64—AIRCRAFT; AVIATION; COSMONAUTICS
- B64D—EQUIPMENT FOR FITTING IN OR TO AIRCRAFT; FLIGHT SUITS; PARACHUTES; ARRANGEMENTS OR MOUNTING OF POWER PLANTS OR PROPULSION TRANSMISSIONS IN AIRCRAFT
- B64D1/00—Dropping, ejecting, releasing, or receiving articles, liquids, or the like, in flight
- B64D1/16—Dropping or releasing powdered, liquid, or gaseous matter, e.g. for fire-fighting
- B64D1/18—Dropping or releasing powdered, liquid, or gaseous matter, e.g. for fire-fighting by spraying, e.g. insecticides
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/188—Vegetation
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G5/00—Traffic control systems for aircraft, e.g. air-traffic control [ATC]
- G08G5/003—Flight plan management
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B64—AIRCRAFT; AVIATION; COSMONAUTICS
- B64U—UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
- B64U2101/00—UAVs specially adapted for particular uses or applications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B64—AIRCRAFT; AVIATION; COSMONAUTICS
- B64U—UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
- B64U2101/00—UAVs specially adapted for particular uses or applications
- B64U2101/30—UAVs specially adapted for particular uses or applications for imaging, photography or videography
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B64—AIRCRAFT; AVIATION; COSMONAUTICS
- B64U—UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
- B64U2101/00—UAVs specially adapted for particular uses or applications
- B64U2101/40—UAVs specially adapted for particular uses or applications for agriculture or forestry operations
Definitions
- the present application relates to the field of plant protection, and in particular, to a method for determining distribution information, a method and a device for controlling an unmanned aerial vehicle.
- In the related art, drones generally apply a blanket spraying scheme when spraying herbicides or defoliants. Blanket spraying wastes a large amount of pesticide, leaves pesticide residues, and may still under-treat areas with severe weed damage, causing great economic loss.
- This embodiment provides a method for determining distribution information, a method and a device for controlling an unmanned aerial vehicle, so as to at least solve the technical problems of waste of medicine and pesticide residue caused by the difficulty of distinguishing crops and weeds in the related art.
- a method for controlling an unmanned aerial vehicle includes: acquiring image information of a target area to be processed; inputting the image information to be processed into a preset model for analysis to obtain distribution information of target objects in the image to be processed, where the preset model is obtained by training multiple sets of data, each set including sample image information of the target area and a label used to identify the distribution information of the target object in the sample image information; and controlling the unmanned aerial vehicle to spray medicine on the target object according to the distribution information corresponding to the image to be processed.
- the step of training the preset model includes:
- acquiring sample image information and marking the position of the target object in the sample image information to obtain a label of the distribution information of the target object corresponding to the sample image information, and inputting the sample image and the corresponding label into the preset model;
- Deconvolution processing is performed on the merged image, and back propagation is performed according to the deconvolution processing result and the label of the sample image to adjust parameters of each part of the preset model.
- inputting the image information to be processed into a preset model for analysis to obtain distribution information of target objects in the image information to be processed includes:
- the value of the pixel in the density map is the distribution density value of the target object at the position corresponding to the pixel.
- the above sample image information includes: a density map of the target object, which is used to reflect the density of the target object in each distribution area in the target area.
- the density map has an identifier for indicating the density of the target object.
- the above distribution information includes at least one of the following: the density of the target object in each distribution area of the target area, and the area of the distribution region where the target object is located. Controlling the unmanned aerial vehicle to spray medicine on the target object according to the distribution information includes: determining the amount or duration of drug spraying by the UAV in a distribution area according to the density of the target object in that area; and/or determining the spraying range of the drug according to the area of the distribution region where the target object is located.
- the distribution information further includes: a distribution area of the target object in the target area; the method further includes: determining a flight path of the unmanned aerial vehicle according to the location of the distribution area of the target object; controlling the unmanned aerial vehicle to move according to the flight path.
- the method further includes: detecting the remaining distribution area of the unmanned aerial vehicle in the target area, where the remaining distribution area is the distribution area in the target area not yet sprayed with the drug; determining the density of the target object in the remaining distribution area and the total area of the remaining distribution area; determining the total amount of drug required for the remaining distribution area from that density and total area; computing the difference between the remaining dose carried by the aircraft and the required total amount; and comparing the difference with a preset threshold and adjusting the flight path of the unmanned aerial vehicle according to the comparison result.
- before controlling the drone to spray the target object according to the distribution information, the method further includes: determining the target dosage of the unmanned aerial vehicle according to the size of the distribution area of the target object and the density of the target object in that area.
- a control device for an unmanned aerial vehicle, including: an acquisition module for acquiring image information of a target area; an analysis module for inputting the image information into a preset model for analysis to obtain the distribution information of the target object in the target area, where the preset model is obtained by training multiple sets of data, each set including sample image information of the target area and a label identifying the distribution information of the target object in the sample image information; and a control module for controlling the unmanned aerial vehicle to spray medicine on the target object according to the distribution information.
- an unmanned aerial vehicle, including: an image acquisition device for acquiring image information of a target area; and a processor for inputting the image information into a preset model for analysis to obtain the distribution information of the target object in the target area, where the preset model is obtained by training multiple sets of data, each set including sample image information of the target area and a label identifying the distribution information of the target object in the sample image information, and for controlling the unmanned aerial vehicle to spray medicine on the target object according to the distribution information.
- an unmanned aerial vehicle, including: a communication module for receiving image information of a target area from a designated device, where the designated device includes a network-side server or a surveying and mapping drone;
- a processor for inputting the image information into a preset model for analysis to obtain the distribution information of the target object in the target area, where the preset model is obtained by training multiple sets of data, each set including sample image information of the target area and a label identifying the distribution information of the target object in the sample image information, and for controlling the unmanned aerial vehicle to spray the drug on the target object according to the distribution information.
- a storage medium including a stored program, wherein, when the program is running, the device where the storage medium is located is controlled to perform the above method for determining distribution information.
- a processor for running a program, where the above method for determining distribution information is executed when the program is run.
- a method for determining distribution information of a target object including: acquiring image information of a target area; inputting the image information into a preset model for analysis to obtain distribution information of the target object in the target area Among them, the preset model is obtained by training multiple sets of data, and each set of data in the multiple sets of data includes: sample image information and a label used to identify the distribution information of the target object in the sample image information.
- the step of training the preset model includes:
- acquiring sample image information and marking the position of the target object in the sample image information to obtain a label of the distribution information of the target object corresponding to the sample image information, and inputting the sample image and the corresponding label into the preset model;
- Deconvolution processing is performed on the merged image, and back propagation is performed according to the deconvolution processing result and the label of the sample image, and parameters of each part of the preset model are adjusted.
- inputting the image information to be processed into a preset model for analysis to obtain distribution information of target objects in the image information to be processed includes:
- the value of the pixel in the density map is the distribution density value of the target object at the position corresponding to the pixel.
- the sample image information includes: a density map of the target object, which is used to reflect the density of the target object in each distribution area in the target area.
- the density map has an indicator for indicating the density of the target object.
- the target sales area of the medicine is determined according to the density map of the target objects in the multiple target areas.
- the distribution information further includes: a distribution area of the target object in the target area; the above method further includes: determining a flight route of the unmanned aerial vehicle according to the location of the distribution area of the target object.
- the method further includes: determining the type of the target object; determining the medicine-application information of each sub-area in the target area according to the type and the distribution information, where the application information includes the medicine type and the target spray amount for the target object in the sub-area; and adding mark information identifying the application information to the image information of the target area to obtain a prescription map of the target area.
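The prescription-map step above can be sketched as a small lookup that tags each sub-area with its application info (drug type and target spray amount). This is an illustrative assumption, not the application's implementation; the field names (`weed_type`, `density`, `area`) and the dose coefficient `ml_per_weed` are hypothetical.

```python
def build_prescription(subareas, drug_for_type, ml_per_weed=0.5):
    """Map each sub-area to (drug type, target amount in ml).

    subareas: list of dicts with 'id', 'weed_type', 'density'
    (weeds per m^2), and 'area' (m^2). drug_for_type chooses a
    drug per weed type; ml_per_weed is an assumed dose coefficient.
    """
    prescription = {}
    for sub in subareas:
        drug = drug_for_type[sub["weed_type"]]
        # Target spray amount grows with how many weeds the sub-area holds.
        amount = sub["density"] * sub["area"] * ml_per_weed
        prescription[sub["id"]] = (drug, amount)
    return prescription
```

The resulting dictionary is the "mark information" that would be overlaid on the field image to form the prescription map.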
- the target area is farmland to be applied, and the target object is weeds.
- the image information of the target area is obtained; the image information is input to a preset model for analysis to obtain the distribution information of the target object in the target area, wherein the preset model is obtained by training multiple sets of data, Each set of data in the multiple sets of data includes: image information of the target area, a label for identifying the distribution information of the target object in the image information; and controlling the unmanned aerial vehicle to spray the target object with medicine according to the distribution information.
- FIG. 1 is a schematic flowchart of a method for controlling an unmanned aerial vehicle according to an embodiment of the present application
- FIG. 2 is a schematic flowchart of training a preset model according to an embodiment of the present application
- FIGS. 3a and 3b are schematic diagrams of sample images and their annotations according to embodiments of the present application.
- FIG. 4 is a schematic flowchart of another training preset model according to an embodiment of the present application.
- FIG. 5 is a schematic diagram of a density map according to an embodiment of the present application.
- FIG. 6 is a schematic structural diagram of an optional control device for an unmanned aerial vehicle according to an embodiment of the present application.
- FIG. 7 is a schematic structural diagram of an optional unmanned aerial vehicle according to an embodiment of the present application.
- FIG. 8 is a schematic structural diagram of another optional unmanned aerial vehicle according to an embodiment of the present application.
- FIG. 9 is a schematic flowchart of a method for determining distribution information according to an embodiment of the present application.
- FIG. 1 is a schematic flowchart of a method for controlling an unmanned aerial vehicle according to this embodiment. As shown in FIG. 1, the method includes the following steps:
- Step S102 Acquire image information to be processed in the target area.
- the image information to be processed may be obtained by capturing an image of the target area through an image acquisition device provided on the UAV.
- the target area may be one or more agricultural fields to be applied.
- the UAV may be provided with a positioning system, so as to determine area information and latitude and longitude information of the current target area according to the positioning system.
- Step S104 Input image information to be processed into a preset model for analysis to obtain distribution information of the target object in the target area.
- the target object may be weeds in the farmland.
- the preset model is obtained by training multiple sets of data, and each set of data in the multiple sets of data includes: sample image information of the target area, and a label used to identify the distribution information of the target object in the sample image information.
- a weed recognition model for recognizing the type of weed can be trained.
- the weed recognition model is obtained by training multiple sets of data.
- Each set of data in the multiple sets of data includes: sample image information of the target area, A label used to identify the type of target object in the sample image information.
- the image information is input to a preset weed identification model for analysis to obtain the type of the target object in the target area, where the target object is weed.
- Step S106 Control the unmanned aerial vehicle to spray medicine on the target object according to the distribution information corresponding to the image to be processed.
- the distribution information may be: the density of the target object in each distribution area in the target area, and the area of the distribution area where the target object is located.
- Controlling the unmanned aerial vehicle to spray medicine to the target object according to the distribution information may be achieved in the following ways:
- the amount of drug sprayed or the duration of spraying of the UAV in the distribution area is determined according to the density of the target object in the distribution area; and / or the range of drug spraying is determined according to the area of the distribution area where the target object is located.
- the greater the density of the target object in a distribution area, the greater the amount of medicine sprayed by the UAV in that area, and the longer the spray duration.
- the density of the target object in the distribution area and the area of the target object in the distribution area are comprehensively considered to determine the amount of medicine sprayed by the UAV in the corresponding distribution area.
- the spray amount is determined according to the density of the target object in the distribution area.
- the spraying range of the medicine may be a vertical range or a horizontal range.
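A minimal sketch of the dosing rule described above, under assumed coefficients: the spray amount grows with weed density times area, and the duration follows from an assumed nozzle flow rate. `ML_PER_WEED` and `FLOW_RATE` are illustrative values, not figures from the application.

```python
ML_PER_WEED = 0.5   # assumed dose per weed, in millilitres
FLOW_RATE = 20.0    # assumed nozzle flow, in millilitres per second

def spray_plan(density, area):
    """density: weeds per square metre; area: square metres.

    Returns (amount_ml, duration_s): the amount scales with
    density x area, and the duration is the time needed to
    deliver that amount at FLOW_RATE.
    """
    amount = density * area * ML_PER_WEED
    duration = amount / FLOW_RATE
    return amount, duration
```

Doubling the weed density doubles both the amount and the duration, matching the monotonic relationship the text describes.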
- the distribution information of the target object further includes: the distribution area of the target object in the target area. Specifically, the pixel range occupied by the distribution area in the image can be determined from the acquired image information of the target area, and/or the latitude and longitude range occupied by the target object in the target area can be obtained through the positioning device.
- the flight path of the unmanned aerial vehicle may be determined according to the location of the distribution area of the target object, and the unmanned aerial vehicle may be controlled to move according to the flight path.
- the flight route can be determined in an area free of weeds, and the unmanned aerial vehicle can be controlled to move according to the flight route.
- After controlling the drone to spray medicine on the target object according to the distribution information, the method may further include:
- detecting the remaining distribution area of the UAV in the target area, where the remaining distribution area is the distribution area of unsprayed medicine in the target area; determining the density of the target object in the remaining distribution area and the total area of the remaining distribution area; determining the total amount of medicine required for the remaining distribution area according to that density and total area; computing the difference between the remaining medicine amount of the UAV and the required total amount; and comparing the difference with a preset threshold and adjusting the flight path of the UAV according to the comparison result.
- the flight route of the unmanned aerial vehicle can be adjusted to a return route so as to reload the pesticide.
- the farmland on the return route can be sprayed.
- the return route can be planned according to the area of the target object that has not been sprayed with the drug and the amount of remaining medicine, so as to spray a whole area on the way back.
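The return-route decision described above can be sketched as a comparison between the remaining payload and the dose still required by the unsprayed area. The dose coefficient `ml_per_weed` and the threshold are assumed parameters, not values from the application.

```python
def needs_reload(remaining_ml, density, remaining_area,
                 ml_per_weed=0.5, threshold_ml=0.0):
    """Decide whether the UAV should divert to a return route.

    required: total dose for the unsprayed area, computed from the
    weed density and remaining area as in the text. If the remaining
    payload minus the required dose falls below the threshold, the
    flight path should be adjusted to a return route for reloading.
    """
    required = density * remaining_area * ml_per_weed
    difference = remaining_ml - required
    return difference < threshold_ml
```

With a carried payload well above the required dose the UAV continues; once the shortfall crosses the threshold, route replanning (including spraying along the way back) would kick in.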
- the image information of the target area may be obtained through an image acquisition device; the image information is input into a preset model to determine the distribution information of the target object in the target area; and the target medication amount of the UAV is determined according to the size of the distribution area of the target object and the density of the target object in that area.
- a target medication amount is determined for each combination of distribution-area size and target-object density in the distribution information: for example, a larger distribution area with a lower density, a smaller distribution area with a lower density, and a larger distribution area with a higher density each correspond to a different target medication amount.
- the training method of the preset model may include the following steps.
- Step S302 Obtain sample image information and mark the position of the target object in the sample image information to obtain a label of distribution information of the target object corresponding to the sample image information;
- the image corresponding to the sample image information is an RGB image.
- the distribution information of the target object in the sample image information can be identified by a label.
- the label includes the latitude and longitude distribution range of the target object in the target area and / or the pixel distribution range in the picture.
- a cross "x" can be used to indicate a crop area
- a circle "○" can indicate a weed area.
- Fig. 3b is the identification of the target object on the real electronic map. The dark areas are weeds and the light areas are crops.
- Step S304 Use the first convolutional network model in the preset model to process the sample image information to obtain a first convolution image of the sample image information;
- Step S306 The second convolutional network model in the preset model is used to process the sample image information to obtain a second convolution image of the sample image information, where the convolution kernels used in the first convolutional network model and the second convolutional network model are different;
- the size of the convolution kernel of the first convolutional network model may be 3×3, and the convolution step size may be set to 2.
- the sample image information is an RGB image, and has three dimensions of R, G, and B.
- downsampling may be performed; in addition, the dimensions of the first convolution image can also be set.
- when the first convolutional network model is used to convolve the labeled image to obtain the first convolution image, multiple convolutions can be performed; each convolution uses a 3×3 kernel with a step size of 2, and each convolution is a downsampling step. After each downsampling, the image is 1/2 the size of the image before downsampling, which greatly reduces the amount of data to process and speeds up the calculation.
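The halving claim above can be checked with the standard convolution size formula: with stride 2 and "same"-style padding, each convolution halves the feature-map side. The helper below is an illustration of that arithmetic, not part of the application.

```python
def downsampled_sizes(size, convs=3, kernel=3, stride=2, pad=1):
    """Feature-map side length after each stride-2 convolution.

    Uses the standard output-size formula
    (size + 2*pad - kernel) // stride + 1; with kernel 3, stride 2,
    pad 1 (or kernel 5, pad 2), each step halves the side length.
    """
    sizes = [size]
    for _ in range(convs):
        size = (size + 2 * pad - kernel) // stride + 1
        sizes.append(size)
    return sizes
```

Note that the 5×5 branch with padding 2 produces exactly the same sequence of sizes, which is what lets the two branches be merged later.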
- the size of the convolution kernel of the second convolutional network model can be set to 5×5, and the convolution step size can be set to 2.
- the sample image information is an RGB image and has three dimensions of R, G, and B.
- when the second convolutional network model is used to perform convolution processing on the marked image to obtain the second convolution image, downsampling may be performed.
- the dimensions of the second convolutional image can also be set.
- when the second convolutional network model is used to convolve the marked image to obtain the second convolution image, multiple convolutions may be performed; each convolution uses a 5×5 kernel with a step size of 2, and downsampling is performed each time. After each downsampling, the image is 1/2 the size of the image before downsampling, which greatly reduces the amount of data to process and speeds up the calculation.
- the first convolution image and the second convolution image have the same image size.
- Step S308 Combine the first convolution image and the second convolution image of the sample image information to obtain a merged image
- Step S310 Perform deconvolution processing on the merged image, and perform back propagation according to the deconvolution processing result and the label of the sample image to adjust parameters of each part of the preset model.
- the merged image needs to be deconvolved as many times as the sample image information was convolved to produce the first convolution image, and the dimensions of the deconvolved image can be set.
- the deconvolution kernel size can be set to 3×3.
- the size of the image is the same as the size of the sample image information.
- the back propagation is performed to adjust the parameters of each layer of the preset model.
- the preset model can have the ability to identify the distribution position of the target object in the image to be processed.
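The two-branch structure described above (a 3×3-kernel branch and a 5×5-kernel branch whose outputs have the same size and are then merged) can be sketched with a naive single-channel strided convolution. This is a simplified stand-in for the trained model: the averaging kernels are used purely for illustration, and a real model would use learned multi-channel kernels.

```python
import numpy as np

def conv2d_same(img, kernel, stride=2):
    """Naive single-channel convolution with 'same' padding and a stride.

    Padding by kernel//2 makes the 3x3 and 5x5 branches produce feature
    maps of identical size, as required before merging.
    """
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    x = np.pad(img, ((ph, ph), (pw, pw)))
    oh = (img.shape[0] + 2 * ph - kh) // stride + 1
    ow = (img.shape[1] + 2 * pw - kw) // stride + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(
                x[i * stride:i * stride + kh, j * stride:j * stride + kw]
                * kernel)
    return out

rng = np.random.default_rng(0)
img = rng.random((16, 16))                          # toy labeled image
branch1 = conv2d_same(img, np.ones((3, 3)) / 9.0)   # 3x3-kernel branch
branch2 = conv2d_same(img, np.ones((5, 5)) / 25.0)  # 5x5-kernel branch
merged = np.stack([branch1, branch2])               # merge along channels
```

Both branches halve the 16×16 input to 8×8, so the merge is a simple channel-wise stack; deconvolution layers would then upsample the merged map back to the input size.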
- FIG. 4 is a schematic flowchart of another method for acquiring sample image information of a target area in each set of data provided by this embodiment; the method includes the following steps:
- Step S402 Acquire sample image information and mark the position of the target object in the sample image information to obtain a label of the distribution information of the target object corresponding to the sample image information, and input the sample image and the corresponding label into the preset model.
- the image corresponding to the sample image information is an RGB image.
- the distribution information of the target object in the sample image information can be identified by a label.
- the label includes the latitude and longitude distribution range of the target object in the target area and / or the pixel distribution range in the picture.
- the first convolution network model in the preset model is used to process the sample image information to obtain a first convolution image of the sample image information.
- the size of the convolution kernel of the first convolutional network model is 3×3, and the convolution step size can be set to 2.
- the image corresponding to the sample image information is an RGB image, which has three dimensions of R, G, and B.
- in the process of convolving the labeled image with the first convolutional network model to obtain the first convolution image, downsampling can be performed.
- the dimensions of the first convolution image can also be set.
- A total of three convolutions are performed, namely step S4042, step S4044, and step S4046 in sequence.
- Each convolution is down-sampled and the convolution step is set to 2.
- after each downsampling, the image is 1/2 the size of the image before downsampling, which greatly reduces the amount of data to process and speeds up the calculation.
- n1, n2, and n3 are the dimensions set for each convolution, where the dimension represents the length of the data vector corresponding to each pixel of the first convolution image.
- for example, if the dimension of a pixel after convolution is 1, the data corresponding to that pixel can be a gray value; if n1 is 3 after the first convolution, each pixel has dimension 3 and its data can be an RGB value.
- the second convolutional network model in the preset model is used to process the sample image information to obtain a second convolution image of the sample image information, where the convolution kernels used in the first and second convolutional network models are different.
- the size of the convolution kernel of the second convolutional network model can be set to 5×5, and the convolution step size can be set to 2.
- the image corresponding to the sample image information is an RGB image, and has three dimensions of R, G, and B.
- downsampling can be performed.
- the dimensions of the second convolutional image can also be set.
- when the second convolutional network model is used to convolve the marked image to obtain the second convolution image, multiple convolutions may be performed; each convolution uses a 5×5 kernel with a step size of 2, and downsampling is performed each time. After each downsampling, the image is 1/2 the size of the image before downsampling, which greatly reduces the amount of data to process and speeds up the calculation.
- Starting from step S4062, a total of three convolutions are performed: step S4062, step S4064, and step S4066 in sequence.
- Each convolution is down-sampled, with the convolution stride set to 2.
- After each downsampling, the image is 1/2 the size of the image before downsampling, which greatly reduces the amount of data to be processed and speeds up calculation.
- m1, m2, and m3 are the dimensions set for each convolution; the dimension represents the length of the data vector corresponding to each pixel of the second convolutional image.
- For example, if the dimension of each pixel after the first convolution is 1, the data corresponding to that pixel can be a gray value; if it is 3, the data corresponding to that pixel can be an RGB value.
- As above, when the second convolutional network model convolves the marked image to obtain the second convolutional image, multiple convolutions can be performed, with a 5*5 kernel each time.
- the first convolution image and the second convolution image have the same image size.
- Step S408: Merge the first convolutional image and the second convolutional image of the sample image information to obtain a merged image.
- Deconvolution processing is performed on the merged image.
- The merged image is deconvolved the same number of times as the number of convolutions performed from the sample image information to the first convolutional image, and the dimensions of the deconvolved image can be set.
- Three deconvolutions are performed on the merged image (step S4102, step S4104, and step S4106) to obtain a density map corresponding to the sample image information of the target area, that is, step S412.
- The size of the deconvolution kernel can be set to 3*3.
- The size of the resulting image is the same as the size of the sample image information.
- Deconvolution processing is performed on the merged image, and back-propagation is performed according to the deconvolution result and the label of the sample image to adjust the parameters of each part of the preset model.
- After training, the preset model has the ability to identify the distribution positions of the target object in the image to be processed.
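The two-branch convolve / merge / deconvolve pipeline above can be sketched with simple stand-ins: 2x2 average pooling in place of a stride-2 convolution and nearest-neighbour upsampling in place of a deconvolution. The 64x64 input and the channel dimensions (n1..n3 = m1..m3 = 4, 8, 16) are illustrative assumptions, not values from the patent; the sketch only demonstrates the shape bookkeeping, not learned weights.

```python
import numpy as np

def down(x, out_ch):
    """Stand-in for one stride-2 convolution: 2x2 average pooling halves
    the spatial size, then a fixed 1x1 projection sets the channel dim."""
    h, w, c = x.shape
    pooled = x.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))
    proj = np.ones((c, out_ch)) / c          # placeholder weights
    return pooled @ proj

def up(x):
    """Stand-in for one deconvolution: nearest-neighbour 2x upsampling."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

img = np.ones((64, 64, 3))                   # assumed 64x64 RGB sample
a = down(down(down(img, 4), 8), 16)          # 3x3-kernel branch, dims n1..n3
b = down(down(down(img, 4), 8), 16)          # 5x5-kernel branch, dims m1..m3
merged = np.concatenate([a, b], axis=-1)     # same spatial size, stacked dims
density = up(up(up(merged.mean(axis=-1))))   # three "deconvolutions" -> 64x64
```

Three downsamplings take 64x64 to 8x8; because both branches share the spatial size, their channel vectors can be concatenated, and three upsamplings restore the density map to the size of the input image, as the text describes.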
- The image information to be processed can then be input into the trained preset model.
- The first convolutional network model in the preset model is used to process the image information to be processed to obtain its first convolutional image.
- The second convolutional network model in the preset model is used to process the image information to be processed to obtain its second convolutional image.
- The value of a pixel in the density map is the distribution density value of the target object at the position corresponding to that pixel.
- The density map has an indicator for indicating the density of the target object. For example, the lighter the color of a distribution area in the density map, the greater the density of the target object in that distribution area.
- FIG. 5 is a density map obtained after the processing. In FIG. 5, if the color of area A is lighter than that of area B, the density of the aggregated target object in area A is greater.
- The deconvolved density map may be a grayscale image, in which white is 255 and black is 0.
- The larger the gray value, the denser the distribution of target objects in the target area; that is, where the color is lighter, the weeds are more densely distributed, and where the color is darker, the weeds are more sparsely distributed.
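The grayscale convention can be illustrated with a toy comparison of two regions of a density map; the pixel values below are made up for the example and do not come from the patent.

```python
def mean_gray(region):
    """Average gray value of a region of the density map (0-255 scale)."""
    return sum(sum(row) for row in region) / (len(region) * len(region[0]))

# toy grayscale density map patches: higher (lighter) values mean denser weeds
area_a = [[220, 240], [230, 250]]   # light patch
area_b = [[30, 50], [40, 60]]       # dark patch

denser = "A" if mean_gray(area_a) > mean_gray(area_b) else "B"
print(denser)  # A -- area A is lighter, so its weeds are denser
```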
- In this embodiment, a preset model is used to analyze the image information of the target area to obtain the distribution information of the target object in the target area, and the unmanned aerial vehicle is controlled, according to the distribution information, to spray medicine on the target object. Specifically: the image information of the target area is acquired; the image information is input into a preset model for analysis to obtain the distribution information of the target object in the target area, where the preset model is obtained by training on multiple sets of data, each set of which includes image information of the target area and a label identifying the distribution information of the target object in that image information; and the unmanned aerial vehicle is controlled to spray medicine on the target object according to the distribution information.
- FIG. 6 is a schematic structural diagram of an optional unmanned aerial vehicle control device according to this embodiment. As shown in FIG. 6, the device includes an acquisition module 62, an analysis module 64, and a control module 66. Among them:
- The acquisition module 62 is used to obtain image information of the target area.
- The analysis module 64 is configured to input the image information to be processed into a preset model for analysis to obtain the distribution information of the target object in the image information to be processed, where the preset model is obtained by training on multiple sets of data, each set of which includes sample image information of the target area and a label identifying the distribution information of the target object in the sample image information.
- The control module 66 is configured to control the unmanned aerial vehicle to spray medicine on the target object according to the distribution information corresponding to the image to be processed.
- The device includes an image acquisition device 72 and a processor 74. Among them:
- The image acquisition device 72 is used to acquire the image information to be processed in the target area.
- The processor 74 is configured to input the image information to be processed into a preset model for analysis to obtain the distribution information of the target object in the image information to be processed, where the preset model is obtained by training on multiple sets of data, each set of which includes sample image information of the target area and a label identifying the distribution information of the target object in the sample image information; and to control the unmanned aerial vehicle, according to the distribution information corresponding to the image information to be processed, to spray medicine on the target object.
- FIG. 8 is a schematic structural diagram of an optional unmanned aerial vehicle control apparatus according to this embodiment. As shown in FIG. 8, the apparatus includes a communication module 82 and a processor 84. Among them:
- The communication module 82 is configured to receive the image information to be processed of the target area from a designated device.
- The processor 84 is configured to input the image information to be processed into a preset model for analysis to obtain the distribution information of the target object in the image to be processed, where the preset model is obtained by training on multiple sets of data, each set of which includes sample image information of the target area and a label identifying the distribution information of the target object in the sample image information; and to control the unmanned aerial vehicle to spray medicine on the target object according to the distribution information corresponding to the image to be processed.
- For details of the unmanned aerial vehicle control apparatus, reference may be made to the related description of the steps shown in FIG. 1; details are not repeated here.
- FIG. 9 is a schematic flowchart of a method for determining distribution information according to this embodiment. As shown in FIG. 9, the method includes:
- Step S902: Acquire the image information to be processed in the target area.
- Step S904: Input the image information to be processed into a preset model for analysis to obtain the distribution information of the target object in the image information to be processed, where the preset model is obtained by training on multiple sets of data, each set of which includes sample image information and a label identifying the distribution information of the target object in the sample image information.
- The step of training the preset model includes: obtaining sample image information and labeling the positions of the target object in the sample image information to obtain a label of the distribution information of the target object corresponding to the sample image information, and inputting the sample image and the corresponding label into the preset model; processing the sample image information with the first convolutional network model in the preset model to obtain a first convolutional image of the sample image information; processing the sample image information with the second convolutional network model in the preset model to obtain a second convolutional image of the sample image information, where the convolution kernels used by the first convolutional network model and the second convolutional network model are different; merging the first convolutional image and the second convolutional image of the sample image information to obtain a merged image; and performing deconvolution processing on the merged image and back-propagating according to the deconvolution result and the label of the sample image to adjust the parameters of each part of the preset model.
- The step of processing the image to be processed through the preset model includes: inputting the image information to be processed into the trained preset model; processing it with the first convolutional network model in the preset model to obtain a first convolutional image of the image information to be processed; processing it with the second convolutional network model in the preset model to obtain a second convolutional image of the image information to be processed; and merging the first convolutional image and the second convolutional image of the image information to be processed and performing deconvolution processing on the merged image to obtain the density map corresponding to the image to be processed as the distribution information of the target object in the image to be processed.
- the value of the pixel in the density map is the distribution density value of the target object at the position corresponding to the pixel.
- the density map is used to reflect the density of the target object in each distribution area in the target area.
- The density map has an indicator for indicating the density of the target object; the indicator may be different colors, different shades of the same color, numerical information, and the like.
- When there are multiple target areas located in different sales areas, the target sales area for the medicine is determined according to the density maps of the target objects in the multiple target areas. For example, a sales area in which the density indicated by the density map is greater requires more medicine, so the target sales area is determined indirectly.
- The above distribution information may further include the distribution area of the target object in the target area; in this case, the flight route of the UAV may be determined according to the location of the distribution area of the target object.
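As a minimal sketch of deriving a flight route from the locations of the distribution areas, a greedy nearest-neighbour ordering can be used. Both the heuristic and the coordinates below are illustrative assumptions, not the patent's routing method.

```python
import math

def plan_route(start, areas):
    """Visit the detected weed distribution areas in greedy
    nearest-neighbour order, starting from the UAV's position."""
    route, pos, todo = [], start, list(areas)
    while todo:
        nxt = min(todo, key=lambda p: math.dist(pos, p))
        todo.remove(nxt)
        route.append(nxt)
        pos = nxt
    return route

# hypothetical centroids of detected weed patches (field coordinates, metres)
print(plan_route((0, 0), [(50, 50), (10, 10), (20, 25)]))
# [(10, 10), (20, 25), (50, 50)]
```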
- A prescription map of the target area can also be determined according to the distribution information; the prescription map is used to display the application information of the target area. Specifically: determine the type of the target object; determine the application information of each sub-area in the target area according to the type and the distribution information, where the application information includes the drug type and the target spray amount for the target object in each sub-area of the target area; and add marking information identifying the application information to the image information of the target area to obtain the prescription map of the target area.
- The type of the target object can be determined through machine learning; for example, the image of the target object is input into a trained prediction model, and the prediction model identifies the type of the target object.
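The per-sub-area application information can be sketched as a lookup from weed type and density to drug type and spray amount. The drug names and dose rates below are made-up placeholders, not values from the patent.

```python
def application_info(weed_type, density, area_m2):
    """Application information for one sub-area of the prescription map:
    drug type plus target spray amount scaled by density and area."""
    DRUG = {"broadleaf": "herbicide-A", "grass": "herbicide-B"}
    DOSE_PER_M2 = {"broadleaf": 0.8, "grass": 0.5}  # mL per m^2 at density 1.0
    drug = DRUG.get(weed_type, "general-herbicide")
    spray_ml = DOSE_PER_M2.get(weed_type, 0.6) * density * area_m2
    return {"drug": drug, "spray_ml": round(spray_ml, 2)}

info = application_info("grass", 0.5, 100.0)
print(info)  # {'drug': 'herbicide-B', 'spray_ml': 25.0}
```

Annotating each sub-area of the target-area image with such a record yields the prescription map described above.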
- This embodiment further provides a storage medium including a stored program, where, when the program runs, the device on which the storage medium is located is controlled to execute the above unmanned aerial vehicle control method.
- This embodiment further provides a processor for running a program, where the above unmanned aerial vehicle control method is executed when the program runs.
- the technical content disclosed in this embodiment may be implemented in other ways.
- the device embodiments described above are only schematic.
- The division of units may be a logical function division; in actual implementation there may be another division manner. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
- the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, units or modules, and may be in electrical or other forms.
- the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or may be distributed on multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
- each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
- the above integrated unit can be implemented in the form of hardware or software function unit.
- If the integrated unit is implemented in the form of a software functional unit and is sold or used as an independent product, it may be stored in a computer-readable storage medium.
- Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the existing technology, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application.
- The aforementioned storage media include various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
- In summary, the image information to be processed in the target area is acquired; the image information to be processed is input into a preset model for analysis to obtain the distribution information of the target object in the image information to be processed, where the preset model is obtained by training on multiple sets of data, each set of which includes sample image information of the target area and a label identifying the distribution information of the target object in the sample image information; and, according to the distribution information, the UAV is controlled to spray medicine on the target object.
Claims (20)
- A method for determining distribution information, characterized by comprising: acquiring image information to be processed of a target area; and inputting the image information to be processed into a preset model for analysis to obtain distribution information of a target object in the image information to be processed, wherein the preset model is obtained by training on multiple sets of data, each set of data in the multiple sets of data comprising: sample image information and a label identifying distribution information of the target object in the sample image information.
- The method according to claim 1, characterized in that the step of training the preset model comprises: acquiring sample image information, labeling the position of the target object in the sample image information to obtain a label of the distribution information of the target object corresponding to the sample image information, and inputting the sample image and the corresponding label into the preset model; processing the sample image information with a first convolutional network model in the preset model to obtain a first convolutional image of the sample image information; processing the sample image information with a second convolutional network model in the preset model to obtain a second convolutional image of the sample image information, wherein the convolution kernels used by the first convolutional network model and the second convolutional network model are different; merging the first convolutional image and the second convolutional image of the sample image information to obtain a merged image; and performing deconvolution processing on the merged image, and performing back-propagation according to the deconvolution result and the label of the sample image to adjust parameters of each part of the preset model.
- The method according to claim 2, characterized in that inputting the image information to be processed into the preset model for analysis to obtain the distribution information of the target object in the image information to be processed comprises: inputting the image information to be processed into the trained preset model; processing the sample image information with the first convolutional network model in the preset model to obtain a first convolutional image of the image information to be processed; processing the sample image information with the second convolutional network model in the preset model to obtain a second convolutional image of the image information to be processed; and merging the first convolutional image and the second convolutional image of the image information to be processed, and performing deconvolution processing on the merged image to obtain the density map corresponding to the image to be processed as the distribution information of the target object in the image to be processed.
- The method according to claim 3, characterized in that the value of a pixel in the density map is the distribution density value of the target object at the position corresponding to that pixel.
- The method according to claim 2, characterized in that the sample image information comprises: a density map of the target object, the density map being used to reflect the density of the target object in each distribution area of the target area.
- The method according to claim 5, characterized in that the density map has an indicator for indicating the density of the target object.
- The method according to any one of claims 1 to 6, characterized in that, when there are multiple target areas and the multiple target areas are located in different sales areas, a target sales area for the medicine is determined according to the density maps of the target objects in the multiple target areas.
- The method according to any one of claims 1 to 7, characterized in that the distribution information comprises: the distribution area of the target object in the target area; and the method further comprises: determining the flight route of the unmanned aerial vehicle according to the location of the distribution area of the target object.
- The method according to any one of claims 1 to 8, characterized in that, after inputting the image information into the preset model for analysis to obtain the distribution information of the target object in the target area, the method further comprises: determining the type of the target object; determining application information of each sub-area in the target area according to the type and the distribution information, the application information comprising the drug type and the target spray amount for the target object in the sub-area of the target area; and adding marking information identifying the application information to the image information of the target area to obtain a prescription map of the target area.
- The method according to claim 9, characterized in that the target area is farmland to be treated with medicine, and the target object is weeds.
- A control method for an unmanned aerial vehicle, characterized by comprising: acquiring image information to be processed of a target area; inputting the image information to be processed into a preset model for analysis to obtain distribution information of a target object in the image information to be processed, wherein the preset model is obtained by training on multiple sets of data, each set of data in the multiple sets of data comprising: sample image information and a label identifying distribution information of the target object in the sample image information; and controlling the unmanned aerial vehicle to spray medicine on the target object according to the distribution information corresponding to the image information to be processed.
- The method according to claim 11, characterized in that the distribution information comprises at least one of: the density of the target object in each distribution area of the target area, and the area of the distribution region where the target object is located; and controlling the unmanned aerial vehicle to spray medicine on the target object according to the distribution information comprises: determining the spray amount or spray duration of the unmanned aerial vehicle in the distribution area according to the density of the target object in the distribution area; and/or determining the spray swath according to the area of the distribution region where the target object is located.
- The method according to claim 11 or 12, characterized in that the target area is farmland to be treated with medicine, and the target object is weeds; the distribution information further comprises: the distribution area of the target object in the target area; and the method further comprises: determining the flight route of the unmanned aerial vehicle according to the location of the distribution area of the target object, and controlling the unmanned aerial vehicle to move along the flight route.
- The method according to any one of claims 11 to 13, characterized in that, after controlling the unmanned aerial vehicle to spray medicine on the target object according to the distribution information, the method further comprises: detecting the remaining distribution area of the unmanned aerial vehicle in the target area, wherein the remaining distribution area is the distribution area in the target area that has not been sprayed with medicine; determining the density of the target object in the remaining distribution area and the total area of the remaining distribution area; determining the total amount of medicine required for the remaining distribution area according to the density of the target object in the remaining distribution area and the total area of the remaining distribution area; determining the difference between the remaining medicine amount of the unmanned aerial vehicle and the total medicine amount; and comparing the difference with a preset threshold, and adjusting the flight route of the unmanned aerial vehicle according to the comparison result.
- The method according to any one of claims 11 to 14, characterized in that, before controlling the unmanned aerial vehicle to spray medicine on the target object according to the distribution information, the method further comprises: determining the target medicine amount of the unmanned aerial vehicle according to the size of the distribution area of the target object in the target area and the density of the target object in the distribution area, both indicated by the distribution information.
- A control device for an unmanned aerial vehicle, characterized by comprising: an acquisition module, configured to acquire image information to be processed of a target area; an analysis module, configured to input the image information to be processed into a preset model for analysis to obtain distribution information of a target object in the image information to be processed, wherein the preset model is obtained by training on multiple sets of data, each set of data in the multiple sets of data comprising: sample image information of the target area and a label identifying distribution information of the target object in the sample image information; and a control module, configured to control the unmanned aerial vehicle to spray medicine on the target object according to the distribution information corresponding to the image to be processed.
- An unmanned aerial vehicle, characterized by comprising: an image acquisition device, configured to acquire image information to be processed of a target area; and a processor, configured to input the image information to be processed into a preset model for analysis to obtain distribution information of a target object in the image information to be processed, wherein the preset model is obtained by training on multiple sets of data, each set of data in the multiple sets of data comprising: sample image information of the target area and a label identifying distribution information of the target object in the sample image information, and to control the unmanned aerial vehicle to spray medicine on the target object according to the distribution information corresponding to the image information to be processed.
- An unmanned aerial vehicle control apparatus, characterized by comprising: an image acquisition device, configured to receive image information to be processed of a target area; and a processor, configured to input the image information to be processed into a preset model for analysis to obtain distribution information of a target object in the image information to be processed, wherein the preset model is obtained by training on multiple sets of data, each set of data in the multiple sets of data comprising: sample image information of the target area and a label identifying distribution information of the target object in the sample image information, and to control the unmanned aerial vehicle to spray medicine on the target object according to the distribution information corresponding to the image information to be processed.
- A storage medium, characterized in that the storage medium comprises a stored program, wherein, when the program runs, the device on which the storage medium is located is controlled to execute the method for determining distribution information according to any one of claims 1 to 10.
- A processor, characterized in that the processor is configured to run a program, wherein, when the program runs, the method for determining distribution information according to any one of claims 1 to 10 is executed.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020217014072A KR20210071062A (ko) | 2018-10-18 | 2019-10-16 | 분포 정보의 확정 방법, 무인 항공기의 제어 방법 및 장치 |
JP2021520573A JP2022502794A (ja) | 2018-10-18 | 2019-10-16 | 分布情報の確定方法、無人飛行体の制御方法及び装置 |
CA3115564A CA3115564A1 (en) | 2018-10-18 | 2019-10-16 | Method for determining distribution information, and control method and device for unmanned aerial vehicle |
US17/309,058 US20210357643A1 (en) | 2018-10-18 | 2019-10-16 | Method for determining distribution information, and control method and device for unmanned aerial vehicle |
AU2019362430A AU2019362430B2 (en) | 2018-10-18 | 2019-10-16 | Method for determining distribution information, and control method and device for unmanned aerial vehicle |
EP19873665.4A EP3859479A4 (en) | 2018-10-18 | 2019-10-16 | PROCEDURES FOR DETERMINING DISTRIBUTION INFORMATION AND CONTROL PROCEDURES AND DEVICE FOR UNMANNED AIRCRAFT |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811217967.X | 2018-10-18 | ||
CN201811217967.XA CN109445457B (zh) | 2018-10-18 | 2018-10-18 | 分布信息的确定方法、无人飞行器的控制方法及装置 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020078396A1 true WO2020078396A1 (zh) | 2020-04-23 |
Family
ID=65546651
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/111515 WO2020078396A1 (zh) | 2018-10-18 | 2019-10-16 | 分布信息的确定方法、无人飞行器的控制方法及装置 |
Country Status (8)
Country | Link |
---|---|
US (1) | US20210357643A1 (zh) |
EP (1) | EP3859479A4 (zh) |
JP (1) | JP2022502794A (zh) |
KR (1) | KR20210071062A (zh) |
CN (1) | CN109445457B (zh) |
AU (1) | AU2019362430B2 (zh) |
CA (1) | CA3115564A1 (zh) |
WO (1) | WO2020078396A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111783549A (zh) * | 2020-06-04 | 2020-10-16 | 北京海益同展信息科技有限公司 | 一种分布图生成方法、系统、巡检机器人及控制终端 |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109445457B (zh) * | 2018-10-18 | 2021-05-14 | 广州极飞科技股份有限公司 | 分布信息的确定方法、无人飞行器的控制方法及装置 |
US10822085B2 (en) | 2019-03-06 | 2020-11-03 | Rantizo, Inc. | Automated cartridge replacement system for unmanned aerial vehicle |
CN112948371A (zh) * | 2019-12-10 | 2021-06-11 | 广州极飞科技股份有限公司 | 数据处理方法、装置、存储介质、处理器 |
CN113011220A (zh) * | 2019-12-19 | 2021-06-22 | 广州极飞科技股份有限公司 | 穗数识别方法、装置、存储介质及处理器 |
CN111459183B (zh) * | 2020-04-10 | 2021-07-20 | 广州极飞科技股份有限公司 | 作业参数推荐方法、装置、无人设备及存储介质 |
CN112425328A (zh) * | 2020-11-23 | 2021-03-02 | 广州极飞科技有限公司 | 多物料播撒控制方法、装置、终端设备、无人设备及介质 |
CN113973793B (zh) * | 2021-09-09 | 2023-08-04 | 常州希米智能科技有限公司 | 一种病虫害区域无人机喷洒处理方法和系统 |
CN115337430A (zh) * | 2022-08-11 | 2022-11-15 | 深圳市隆瑞科技有限公司 | 一种喷雾小车的控制方法和装置 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170192424A1 (en) * | 2015-12-31 | 2017-07-06 | Unmanned Innovation, Inc. | Unmanned aerial vehicle rooftop inspection system |
CN108154196A (zh) * | 2018-01-19 | 2018-06-12 | 百度在线网络技术(北京)有限公司 | 用于输出图像的方法和装置 |
CN108541683A (zh) * | 2018-04-18 | 2018-09-18 | 济南浪潮高新科技投资发展有限公司 | 一种基于卷积神经网络芯片的无人机农药喷洒系统 |
CN108629289A (zh) * | 2018-04-11 | 2018-10-09 | 千寻位置网络有限公司 | 农田的识别方法及系统、应用于农业的无人机 |
CN109445457A (zh) * | 2018-10-18 | 2019-03-08 | 广州极飞科技有限公司 | 分布信息的确定方法、无人飞行器的控制方法及装置 |
Family Cites Families (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10104824B2 (en) * | 2013-10-14 | 2018-10-23 | Kinze Manufacturing, Inc. | Autonomous systems, methods, and apparatus for AG based operations |
US10104836B2 (en) * | 2014-06-11 | 2018-10-23 | John Paul Jamison | Systems and methods for forming graphical and/or textual elements on land for remote viewing |
CN107148633B (zh) * | 2014-08-22 | 2020-12-01 | 克莱米特公司 | 用于使用无人机系统进行农艺和农业监测的方法 |
US10139279B2 (en) * | 2015-05-12 | 2018-11-27 | BioSensing Systems, LLC | Apparatuses and methods for bio-sensing using unmanned aerial vehicles |
CN105159319B (zh) * | 2015-09-29 | 2017-10-31 | 广州极飞科技有限公司 | 一种无人机的喷药方法及无人机 |
EP3479691B1 (en) * | 2016-06-30 | 2024-04-24 | Optim Corporation | Mobile body control application and mobile body control method |
US10520943B2 (en) * | 2016-08-12 | 2019-12-31 | Skydio, Inc. | Unmanned aerial image capture platform |
CA3035068A1 (en) * | 2016-09-08 | 2018-03-15 | Walmart Apollo, Llc | Systems and methods for dispensing an insecticide via unmanned vehicles to defend a crop-containing area against pests |
JP6798854B2 (ja) * | 2016-10-25 | 2020-12-09 | 株式会社パスコ | 目的物個数推定装置、目的物個数推定方法及びプログラム |
US10721859B2 (en) * | 2017-01-08 | 2020-07-28 | Dolly Y. Wu PLLC | Monitoring and control implement for crop improvement |
JP6906959B2 (ja) * | 2017-01-12 | 2021-07-21 | 東光鉄工株式会社 | ドローンを使用した肥料散布方法 |
CN108509961A (zh) * | 2017-02-27 | 2018-09-07 | 北京旷视科技有限公司 | 图像处理方法和装置 |
CN106882380A (zh) * | 2017-03-03 | 2017-06-23 | 杭州杉林科技有限公司 | 空地一体农林用植保系统装置及使用方法 |
CN106951836B (zh) * | 2017-03-05 | 2019-12-13 | 北京工业大学 | 基于先验阈值优化卷积神经网络的作物覆盖度提取方法 |
CN106910247B (zh) * | 2017-03-20 | 2020-10-02 | 厦门黑镜科技有限公司 | 用于生成三维头像模型的方法和装置 |
CN107274378B (zh) * | 2017-07-25 | 2020-04-03 | 江西理工大学 | 一种融合记忆cnn的图像模糊类型识别及参数整定方法 |
US10740607B2 (en) * | 2017-08-18 | 2020-08-11 | Autel Robotics Co., Ltd. | Method for determining target through intelligent following of unmanned aerial vehicle, unmanned aerial vehicle and remote control |
CN107728642B (zh) * | 2017-10-30 | 2021-03-09 | 北京博鹰通航科技有限公司 | 一种无人机飞行控制系统及其方法 |
CN107933921B (zh) * | 2017-10-30 | 2020-11-17 | 广州极飞科技有限公司 | 飞行器及其喷洒路线生成和执行方法、装置、控制终端 |
CN107703960A (zh) * | 2017-11-17 | 2018-02-16 | 江西天祥通用航空股份有限公司 | 农药喷洒直升机的地空跟踪监测装置 |
CN108596222B (zh) * | 2018-04-11 | 2021-05-18 | 西安电子科技大学 | 基于反卷积神经网络的图像融合方法 |
CN108594850B (zh) * | 2018-04-20 | 2021-06-11 | 广州极飞科技股份有限公司 | 基于无人机的航线规划及控制无人机作业的方法、装置 |
US10660277B2 (en) * | 2018-09-11 | 2020-05-26 | Pollen Systems Corporation | Vine growing management method and apparatus with autonomous vehicles |
-
2018
- 2018-10-18 CN CN201811217967.XA patent/CN109445457B/zh active Active
-
2019
- 2019-10-16 US US17/309,058 patent/US20210357643A1/en not_active Abandoned
- 2019-10-16 KR KR1020217014072A patent/KR20210071062A/ko not_active Application Discontinuation
- 2019-10-16 CA CA3115564A patent/CA3115564A1/en active Pending
- 2019-10-16 WO PCT/CN2019/111515 patent/WO2020078396A1/zh unknown
- 2019-10-16 JP JP2021520573A patent/JP2022502794A/ja active Pending
- 2019-10-16 AU AU2019362430A patent/AU2019362430B2/en not_active Expired - Fee Related
- 2019-10-16 EP EP19873665.4A patent/EP3859479A4/en not_active Withdrawn
Non-Patent Citations (1)
Title |
---|
See also references of EP3859479A4 * |
Also Published As
Publication number | Publication date |
---|---|
CN109445457A (zh) | 2019-03-08 |
CN109445457B (zh) | 2021-05-14 |
AU2019362430B2 (en) | 2022-09-08 |
US20210357643A1 (en) | 2021-11-18 |
EP3859479A1 (en) | 2021-08-04 |
AU2019362430A1 (en) | 2021-05-13 |
JP2022502794A (ja) | 2022-01-11 |
KR20210071062A (ko) | 2021-06-15 |
CA3115564A1 (en) | 2020-04-23 |
EP3859479A4 (en) | 2021-11-24 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19873665 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 3115564 Country of ref document: CA |
|
ENP | Entry into the national phase |
Ref document number: 2021520573 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2019873665 Country of ref document: EP Effective date: 20210428 |
|
ENP | Entry into the national phase |
Ref document number: 20217014072 Country of ref document: KR Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2019362430 Country of ref document: AU Date of ref document: 20191016 Kind code of ref document: A |