WO2021051278A1 - Method and device for identifying ground surface features, unmanned aerial vehicle, and computer-readable storage medium

Info

Publication number
WO2021051278A1
Authority
WO
WIPO (PCT)
Prior art keywords: information, image, disaster, feature, land
Application number
PCT/CN2019/106228
Other languages: English (en), Chinese (zh)
Inventor: 董双, 李鑫超, 王涛, 李思晋, 梁家斌, 田艺
Original Assignee: 深圳市大疆创新科技有限公司
Priority date: 2019-09-17
Filing date: 2019-09-17
Application filed by 深圳市大疆创新科技有限公司
Priority to CN201980033702.0A
Priority to PCT/CN2019/106228
Publication of WO2021051278A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/13 Satellite images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30204 Marker

Definitions

  • This application relates to the field of artificial intelligence, and in particular to a method, equipment, drone, and computer-readable storage medium for recognizing ground features.
  • UAVs are growing rapidly in the fields of agriculture, aerial surveys, power line inspections, natural gas (oil) pipeline inspections, forest fire prevention, emergency rescue and disaster relief, and smart cities.
  • the automatic spraying of pesticides on crops can be achieved through drones.
  • the present application provides a method, equipment, unmanned aerial vehicle, and computer-readable storage medium for recognizing ground features, aiming to improve the accuracy and convenience of recognition results of ground features.
  • This application provides a ground surface feature recognition method, including: acquiring ground surface image information, where the ground surface image information includes image information of multiple color channels and image depth information; processing the multiple color channel information and the image depth information to obtain a feature map containing ground surface semantic information; and determining the recognition result of the ground surface feature according to the ground surface semantic information in the feature map.
  • The present application also provides an unmanned aerial vehicle including a spraying device and a processor, where the processor is configured to: acquire a flying spraying task determined according to the recognition result of the ground surface feature; and execute the flying spraying task, controlling the spraying device to perform the corresponding spraying actions according to the spraying parameters in the task.
  • The present application also provides a ground surface feature recognition device, including a memory and a processor; the memory is used to store a computer program, and the processor is configured to execute the computer program and, when executing it, implement the following steps: acquire ground surface image information, where the ground surface image information includes image information of multiple color channels and image depth information; process the multiple color channel information and the image depth information to obtain a feature map containing ground surface semantic information; and determine the recognition result of the ground surface feature according to the ground surface semantic information in the feature map.
  • The present application also provides a computer-readable storage medium that stores a computer program; when the computer program is executed by a processor, the processor implements the ground surface feature recognition method described above.
  • The embodiments of the present application provide a method for identifying surface features, a device, a drone, and a computer-readable storage medium. By processing the multiple color channel information and image depth information in the ground surface image information, a feature map containing surface semantic information can be obtained; through the surface semantic information in the feature map, the recognition result of the surface feature can be accurately determined. The entire recognition process requires no manual participation, which improves the accuracy and convenience of surface feature recognition.
  • FIG. 1 is a schematic flow chart of the steps of a method for recognizing ground features according to an embodiment of the present application
  • Fig. 2 is a schematic flowchart of sub-steps of the land surface feature recognition method in Fig. 1;
  • Fig. 3 is a schematic diagram of splicing ground surface images in an embodiment of the present application.
  • FIG. 4 is a schematic flow chart of the steps of another land surface feature recognition method provided by an embodiment of the present application.
  • FIG. 5 is a schematic flowchart of the steps of another method for recognizing ground features according to an embodiment of the present application.
  • Figure 6 is a schematic structural diagram of a drone provided by an embodiment of the present application.
  • FIG. 7 is a schematic flow chart of the steps of a drone performing a spraying task provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a flying spray route in an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a flying spraying route in an embodiment of the present application.
  • Figure 10 is a schematic diagram of a disaster spread boundary in an embodiment of the present application.
  • FIG. 11 is a schematic diagram of overlapping spraying operation areas in an embodiment of the present application.
  • FIG. 12 is another schematic diagram of overlapping spraying operation areas in an embodiment of the present application.
  • FIG. 13 is a schematic block diagram of a surface feature recognition device provided by an embodiment of the present application.
  • the surface feature recognition method provided in this application can be applied to ground control platforms, servers, and/or drones to recognize surface features.
  • the ground control platform includes laptop computers and PC computers.
  • the server can be a single server or a server cluster composed of multiple servers.
  • UAVs include rotary-wing UAVs, such as quadrotor, hexarotor, and octorotor UAVs; the UAV can also be a fixed-wing UAV, or a combination of rotary-wing and fixed-wing types, which is not limited here.
  • FIG. 1 is a schematic flowchart of steps of a method for identifying ground features according to an embodiment of the present application.
  • the method for identifying land features includes step S101 to step S103.
  • S101. Acquire ground surface image information, where the ground surface image information includes image information of multiple color channels and image depth information.
  • In one embodiment, the surface image information is obtained by fusing the image information of multiple color channels with the image depth information, and the image information of multiple color channels includes at least R, G, and B three-channel information.
  • By fusing depth information with the color information in this way, the surface image information is enriched, which indirectly improves the accuracy of surface feature recognition, as sketched below.
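  • As an illustration of this fusion step, the following sketch stacks the three color channels and a depth channel into a single four-channel array. The file names and the use of OpenCV and NumPy are illustrative assumptions, not part of the application.

```python
import cv2
import numpy as np

# Minimal sketch, assuming the RGB surface image and its depth map are
# already aligned pixel-for-pixel (file names are hypothetical).
rgb = cv2.imread("surface_rgb.png")                            # H x W x 3
depth = cv2.imread("surface_depth.png", cv2.IMREAD_GRAYSCALE)  # H x W

# Fuse the color channels and the depth channel into one 4-channel array,
# giving the recognizer R, G, B information plus per-pixel depth.
fused = np.dstack([rgb, depth])                                # H x W x 4
print(fused.shape)
```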
  • the ground surface image information also includes a top view and the image depth information is height information in the top view.
  • In one embodiment, the surface image is obtained through aerial photography. During aerial photography, the surface image obtained is not a true top-view image, due to tilt of the movable platform and other causes. Converting the surface image to a top-view image guarantees the accuracy of the surface image information and thereby improves the accuracy of surface feature recognition.
  • the surface image information also includes geographic location information corresponding to the surface image.
  • the geographic location information includes positioning information obtained through a global satellite navigation and positioning system; and/or positioning information obtained through a real-time differential positioning system.
  • the mobile platform can obtain the geographic location information of the surface image through the global satellite navigation and positioning system or the real-time differential positioning system, which can further enrich the surface image information and facilitate subsequent query of the area to which the identified surface feature belongs.
  • the method for determining the image depth information may be based on a binocular ranging algorithm and image information of multiple color channels, or may be based on a monocular ranging algorithm and image information of multiple color channels.
  • The associated frames are overlapping image frames in the image information of the multiple color channels.
  • The two overlapping image frames are treated as a stereo pair: the disparity between them is calculated, and the image depth information is then determined from the disparity.
  • the binocular ranging algorithm can be set based on actual conditions, which is not specifically limited in this application.
  • the binocular ranging algorithm can be selected as a semi-global matching algorithm (Semi-Global Matching, SGM).
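  • A minimal sketch of semi-global matching is shown below, using OpenCV's StereoSGBM implementation and treating two overlapping aerial frames as a stereo pair, as described above. The file names, focal length, and baseline are illustrative placeholders, not values from the application.

```python
import cv2
import numpy as np

# Two overlapping aerial frames treated as a stereo pair (hypothetical files).
left = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

sgm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=64,   # search range; must be divisible by 16
    blockSize=5,
    P1=8 * 5 * 5,        # smoothness penalties used by semi-global matching
    P2=32 * 5 * 5,
)
disparity = sgm.compute(left, right).astype(np.float32) / 16.0  # fixed point -> pixels

focal_px, baseline_m = 1200.0, 0.3   # placeholder camera geometry
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = focal_px * baseline_m / disparity[valid]  # depth from disparity
```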
  • step S101 includes sub-step S1011 to sub-step S1012.
  • S1011. Acquire a ground surface image set and generate a corresponding depth map, where the ground surface image set includes a plurality of surface images and each surface image is a top-view image; the corresponding depth map is generated according to each surface image in the set.
  • the surface image set can be obtained by aerial photography of the ground surface by a drone.
  • Specifically, the drone acquires a surface aerial photography task, where the task includes an aerial photography flight route and aerial photography parameters; the drone then executes the task to photograph the ground surface and obtain a ground surface image set, where each surface image in the set includes geographic location information.
  • the drone can store the surface image collection obtained by aerial photography locally, or upload the surface image collection obtained by aerial photography to the cloud.
  • The method for obtaining the surface aerial photography task is specifically as follows: the drone obtains a surface aerial photography task file, where the task file includes waypoint information and the aerial photography parameters of each waypoint; an aerial photography flight route marked with multiple waypoints is generated according to the waypoint information; and the aerial photography parameters of each waypoint on the route are set according to the task file, thereby generating the surface aerial photography task.
  • the aerial photography parameters include aerial photography altitude and aerial photography frequency. The aerial photography frequency is used to control the camera for continuous photography, and the waypoint information includes the position and sequence of each waypoint.
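  • As a sketch of how such a task file might be turned into an aerial photography task, the code below sorts the waypoints by sequence number into a flight route and attaches each waypoint's altitude and shooting frequency. The field names and values are assumptions for illustration; the application does not specify a file format.

```python
# Hypothetical task-file contents (field names are illustrative assumptions).
task_file = {
    "waypoints": [
        {"seq": 2, "pos": (22.54, 113.95), "alt_m": 60, "shots_per_s": 2},
        {"seq": 1, "pos": (22.53, 113.94), "alt_m": 60, "shots_per_s": 2},
        {"seq": 3, "pos": (22.55, 113.96), "alt_m": 80, "shots_per_s": 1},
    ]
}

def build_aerial_task(task_file):
    # Order waypoints by their stated sequence to form the flight route,
    # then keep each waypoint's aerial photography parameters with it.
    route = sorted(task_file["waypoints"], key=lambda wp: wp["seq"])
    return {
        "route": [wp["pos"] for wp in route],
        "params": [(wp["alt_m"], wp["shots_per_s"]) for wp in route],
    }

print(build_aerial_task(task_file))
```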
  • The method of generating the depth map is as follows: based on the monocular ranging algorithm, generate the depth map corresponding to each surface image according to the associated frames of each surface image in the set, stitch the depth maps corresponding to the surface images, and use the stitched depth map as the depth map corresponding to the ground surface image set. Alternatively, based on the binocular ranging algorithm, generate a depth map from every two associated surface images in the set, stitch the depth maps corresponding to each pair, and use the stitched depth map as the depth map corresponding to the ground surface image set.
  • S1012. Process each ground surface image and the depth map in the ground surface image set to obtain ground surface image information.
  • Each surface image and the depth map in the surface image set are processed to obtain the surface image information. Specifically, each surface image in the set is stitched to obtain a stitched surface image; the depth map and the stitched surface image are then merged to obtain the surface image information, which includes the image information of multiple color channels and the image depth information.
  • The stitching method of the surface images is specifically: determine the stitching parameters corresponding to each surface image, where the stitching parameters include the stitching order and the stitching relationship; then stitch the surface images according to their corresponding stitching parameters to obtain the stitched surface image.
  • FIG. 3 is a schematic diagram of stitching ground surface images in an embodiment of the application. As shown in FIG. 3, the right side of surface image A is stitched to the left side of surface image B; the right side of surface image B is stitched to the left side of surface image C; the upper side of surface image D is stitched to the lower side of surface image A; the right side of surface image E is stitched to the left side of surface image D; and the left side of surface image F is stitched to the right side of surface image E.
  • The method of determining the stitching parameters is: acquire the aerial photography time point and aerial photography position of each surface image; determine the stitching order of each surface image according to its aerial photography time point; and determine the stitching relationship of each surface image according to its aerial photography position. The aerial photography positions determine the positional relationships between the surface images, and these positional relationships are taken as the stitching relationships of the surface images, as sketched below.
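  • A minimal sketch of deriving these stitching parameters is shown below: capture timestamps fix the stitching order, and relative aerial positions fix which side of an image its neighbor attaches to. The metadata fields are hypothetical; real images would carry this information in EXIF/GPS data.

```python
# Derive stitching order from aerial time points and a stitching relationship
# from aerial positions (grid coordinates here are hypothetical stand-ins).
images = [
    {"name": "B", "t": 11.0, "east": 1, "north": 0},
    {"name": "A", "t": 10.0, "east": 0, "north": 0},
    {"name": "C", "t": 12.0, "east": 2, "north": 0},
]

# Stitching order: earlier aerial time point comes first.
order = sorted(images, key=lambda im: im["t"])

def relation(a, b):
    # Stitching relationship: a neighbor one step east attaches on the right.
    if b["east"] == a["east"] + 1 and b["north"] == a["north"]:
        return f"{b['name']} stitches to the right side of {a['name']}"
    if b["north"] == a["north"] - 1 and b["east"] == a["east"]:
        return f"{b['name']} stitches below {a['name']}"
    return "not directly adjacent"

for prev, nxt in zip(order, order[1:]):
    print(relation(prev, nxt))
```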
  • S102 Process the multiple color channel information and image depth information to obtain a feature map containing semantic information of the ground surface.
  • After the surface image information is obtained, the multiple color channel information and image depth information in the surface image information are processed to obtain a feature map containing the surface semantic information.
  • the surface semantic information includes each surface feature and the recognition probability value of each surface feature.
  • In one embodiment, the multiple color channel information and image depth information are fused to obtain a fused image block; the fused image block is matched against the image blocks in a preset image block set to obtain the matching degree between the fused image block and each image block; and the feature map containing the surface semantic information is determined according to these matching degrees.
  • the preset image block set includes a plurality of image blocks marked with surface features, and the image blocks in the preset image block set can be set based on actual conditions, which is not specifically limited in this application.
  • The matching method between the fused image block and an image block is specifically: split the fused image block into a preset number of fused image sub-blocks and split the image block into the same number of image sub-blocks, so that fused image sub-blocks and image sub-blocks correspond one to one; calculate the similarity between each fused image sub-block and its corresponding image sub-block; and accumulate these similarities to obtain the matching degree between the fused image block and the image block.
  • The method for determining the feature map is specifically: take the image blocks whose matching degree is greater than or equal to a preset matching degree threshold as target image blocks, and obtain the similarity between each image sub-block in each target image block and its corresponding fused image sub-block, along with the surface feature corresponding to each image sub-block; take the surface features corresponding to image sub-blocks whose similarity is greater than or equal to a preset similarity threshold, using the sub-block similarity as the recognition probability value; and mark each surface feature and its recognition probability value in the corresponding fused image sub-block of the fused image block, thereby obtaining the feature map containing the surface semantic information.
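  • The sub-block matching described above can be sketched as follows: both blocks are split into an n × n grid of sub-blocks, a per-sub-block similarity is computed, and the similarities are accumulated into the block-level matching degree. The normalized-correlation similarity used here is an assumption; the application does not fix a particular measure.

```python
import numpy as np

def split(block, n):
    # Split an image block into an n x n grid of sub-blocks.
    h, w = block.shape[:2]
    return [block[i*h//n:(i+1)*h//n, j*w//n:(j+1)*w//n]
            for i in range(n) for j in range(n)]

def similarity(a, b):
    # One possible similarity: normalized correlation mapped into [0, 1].
    a = a.astype(np.float64).ravel() - a.mean()
    b = b.astype(np.float64).ravel() - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return 1.0 if denom == 0 else 0.5 * (1.0 + float(np.dot(a, b)) / denom)

def matching_degree(fused_block, labeled_block, n=4):
    # Accumulate sub-block similarities into the block-level matching degree.
    return sum(similarity(f, l)
               for f, l in zip(split(fused_block, n), split(labeled_block, n)))
```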
  • a plurality of color channel information and image depth information are fused to obtain a fused image block; the fused image block is processed through a pre-trained neural network to obtain a feature map containing semantic information on the surface.
  • the pre-trained neural network can extract the surface semantic information from the fused image block, thereby obtaining a feature map containing the surface semantic information.
  • The neural network may be a convolutional neural network or a recurrent convolutional neural network, which is not specifically limited in this application.
  • The neural network training method is specifically: acquire a large number of surface images marked with surface semantic information, and perform normalization and data enhancement processing on them to obtain sample data; input the sample data to the neural network and train it until it converges, so that the converged neural network can process surface image information and output a feature map containing the surface semantic information.
  • In this way, the processing effect of the trained neural network on surface images is guaranteed, an accurate feature map containing the surface semantic information can be obtained, and the accuracy of surface feature recognition is improved.
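  • As a sketch of this training recipe, the code below normalizes annotated images, applies a simple flip as data enhancement, and trains a small fully convolutional network to emit a per-pixel feature map of class scores. The architecture, class list, and hyperparameters are illustrative assumptions, not the network claimed in the application.

```python
import torch
import torch.nn as nn

# Tiny stand-in network: 4-channel (R, G, B, depth) input -> per-pixel class
# scores. N_CLASSES and the layer sizes are hypothetical.
N_CLASSES = 5
net = nn.Sequential(
    nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, N_CLASSES, 1),          # feature map of class scores
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def augment(x, y):
    # Simple data enhancement: random horizontal flip of image and labels.
    if torch.rand(()) < 0.5:
        x, y = torch.flip(x, dims=[-1]), torch.flip(y, dims=[-1])
    return x, y

# One dummy annotated sample standing in for "surface images marked with
# surface semantic information"; real training would iterate a dataset.
x = torch.rand(1, 4, 64, 64)              # inputs normalized to [0, 1]
y = torch.randint(0, N_CLASSES, (1, 64, 64))

for step in range(100):                   # in practice: train to convergence
    xi, yi = augment(x, y)
    opt.zero_grad()
    loss = loss_fn(net(xi), yi)
    loss.backward()
    opt.step()
```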
  • S103 Determine the recognition result of the surface feature according to the semantic information of the surface in the feature map.
  • The recognition result of the surface feature is determined according to the surface semantic information in the feature map. Specifically, the confidence level of each surface feature is obtained from the surface semantic information and compared with a preset confidence threshold; the surface features whose confidence is greater than or equal to the threshold are obtained, and the recognition result of the surface features is determined from them. It should be noted that the confidence threshold may be set based on actual conditions, which is not specifically limited in this application.
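  • A minimal sketch of this confidence-thresholding step, assuming the feature map has already been reduced to one confidence score per candidate surface feature (feature names, scores, and the threshold are hypothetical):

```python
# Confidence per candidate surface feature (hypothetical values).
semantic_info = {"lodging": 0.91, "pests": 0.47, "flood": 0.78}
CONF_THRESHOLD = 0.6   # set "based on actual conditions" in the application

# Keep only the features whose confidence reaches the threshold.
recognition_result = {feat: conf for feat, conf in semantic_info.items()
                      if conf >= CONF_THRESHOLD}
print(recognition_result)   # {'lodging': 0.91, 'flood': 0.78}
```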
  • The recognition results of surface features include the surface disaster type, surface disaster region information, surface disaster degree information, and surface disaster area information. The surface disaster type describes the type of disaster occurring on the surface; the surface disaster region information describes the location of each affected region; the surface disaster degree information describes the degree of damage in each affected region; and the surface disaster area information describes the size of each affected region.
  • types of disasters include but are not limited to lodging, plant diseases and insect pests, floods and droughts, which are not specifically limited in this application.
  • In the surface feature recognition method provided by the foregoing embodiment, a feature map containing surface semantic information is obtained by processing the multiple color channel information and image depth information in the surface image information, and the surface semantic information in the feature map is used to accurately determine the recognition result of surface features. The entire recognition process requires no human involvement, which improves the accuracy and convenience of surface feature recognition.
  • FIG. 4 is a schematic flowchart of the steps of another method for identifying ground features according to an embodiment of the present application.
  • the method for identifying features of the land surface includes steps S201 to S205.
  • S201. Acquire ground surface image information, where the ground surface image information includes image information of multiple color channels and image depth information.
  • In one embodiment, the surface image information is obtained by fusing the image information of multiple color channels with the image depth information, and the image information of multiple color channels includes at least R, G, and B three-channel information.
  • In this way, the surface image information is enriched, which indirectly improves the accuracy of surface feature recognition.
  • S202 Process the multiple color channel information and image depth information to obtain a feature map containing semantic information of the ground surface.
  • After the surface image information is obtained, the multiple color channel information and image depth information in the surface image information are processed to obtain a feature map containing the surface semantic information.
  • the surface semantic information includes each surface feature and the recognition probability value of each surface feature.
  • S203 Determine the recognition result of the surface feature according to the semantic information of the surface in the feature map.
  • The recognition result of the surface feature is determined according to the surface semantic information in the feature map. Specifically, the confidence level of each surface feature is obtained from the surface semantic information and compared with a preset confidence threshold; the surface features whose confidence is greater than or equal to the threshold are obtained, and the recognition result of the surface features is determined from them. It should be noted that the confidence threshold may be set based on actual conditions, which is not specifically limited in this application.
  • After the recognition result of the surface feature is determined, at least one historical recognition result of the surface feature is obtained.
  • the historical recognition result is the recognition result of the surface features determined before the current moment.
  • the historical recognition result is stored in the local disk, or the historical recognition result is stored in the cloud.
  • The recognition results of surface features can be stored by region according to their geographic location information, and within each region the storage can be further subdivided by surface area, to facilitate subsequent queries.
  • the surface change trend includes, but is not limited to, the change trend of plant diseases and insect pests, the change trend of lodging, the change trend of floods, and the change trend of drought.
  • The pest and disease change trend includes the continued spread of pests and diseases and the weakening of their intensity; the flood change trend includes the continued spread of the flood and the weakening of its intensity; and the drought change trend includes the continued spread of the drought and the weakening of its intensity.
  • Specifically, the first determination time point of the recognition result of the surface feature and the second determination time point of each historical recognition result are acquired; according to the first determination time point and each second determination time point, the recognition result and the historical recognition results are sorted to obtain a recognition result queue; multiple candidate surface change trends are determined from every two adjacent recognition results in the queue; and the multiple candidate surface change trends are processed to obtain the surface change trend.
  • The method for determining a candidate surface change trend is specifically: obtain every two adjacent recognition results in the recognition result queue and compare them with each other to obtain a candidate surface change trend. It should be noted that the earlier the determination time point of a recognition result, the closer to the front of the queue it is placed; the later the determination time point, the further back it is placed.
  • The method of processing the candidate surface change trends is specifically: obtain the time order corresponding to each candidate surface change trend, and connect the candidate surface change trends in turn according to that time order, thereby obtaining the surface change trend, as sketched below.
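  • The queue-and-compare procedure can be sketched as follows; the damage levels and the spread/weaken comparison rule are illustrative assumptions:

```python
# Sort recognition results by determination time (earlier first), compare
# each adjacent pair into a candidate trend, then chain the candidates.
results = [
    {"t": "2019-09-03", "damage_level": 2},
    {"t": "2019-09-01", "damage_level": 1},
    {"t": "2019-09-05", "damage_level": 2},
]

queue = sorted(results, key=lambda r: r["t"])   # earlier -> front of queue

def candidate_trend(prev, curr):
    if curr["damage_level"] > prev["damage_level"]:
        return "spreading"
    if curr["damage_level"] < prev["damage_level"]:
        return "weakening"
    return "stable"

candidates = [candidate_trend(a, b) for a, b in zip(queue, queue[1:])]
surface_trend = " -> ".join(candidates)          # connect in time order
print(surface_trend)                             # spreading -> stable
```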
  • In the land surface feature recognition method provided by the above embodiment, after the recognition result of the surface feature is accurately determined, previously determined recognition results are obtained; by comparing the current recognition result with the previously determined ones, the surface change trend can be accurately determined, which facilitates user decision-making and greatly improves the user experience.
  • FIG. 5 is a schematic flowchart of the steps of yet another method for identifying ground features according to an embodiment of the present application.
  • the land surface feature recognition method includes step S301 to step S305.
  • S301. Acquire ground surface image information, where the ground surface image information includes image information of multiple color channels and image depth information.
  • In one embodiment, the surface image information is obtained by fusing the image information of multiple color channels with the image depth information, and the image information of multiple color channels includes at least R, G, and B three-channel information.
  • In this way, the surface image information is enriched, which indirectly improves the accuracy of surface feature recognition.
  • S302. Process the multiple color channel information and image depth information to obtain a feature map containing semantic information of the ground surface.
  • After the surface image information is obtained, the multiple color channel information and image depth information in the surface image information are processed to obtain a feature map containing the surface semantic information.
  • the surface semantic information includes each surface feature and the recognition probability value of each surface feature.
  • S303. The recognition result of the surface feature is determined according to the surface semantic information in the feature map. Specifically, the confidence level of each surface feature is obtained from the surface semantic information and compared with a preset confidence threshold; the surface features whose confidence is greater than or equal to the threshold are obtained, and the recognition result of the surface features is determined from them. It should be noted that the confidence threshold may be set based on actual conditions, which is not specifically limited in this application.
  • The recognition results of surface features include the surface disaster type, surface disaster region information, surface disaster degree information, and surface disaster area information. The surface disaster type describes the type of disaster occurring on the surface; the surface disaster region information describes the location of each affected region; the surface disaster degree information describes the degree of damage in each affected region; and the surface disaster area information describes the size of each affected region.
  • the types of disasters include but are not limited to lodging, plant diseases and insect pests, floods and droughts, which are not specifically limited in this application.
  • the three-dimensional surface map is generated based on a three-dimensional construction algorithm, and the three-dimensional construction algorithm can be set based on actual conditions, which is not specifically limited in this application.
  • S305. Mark the three-dimensional surface map according to the surface disaster region information, the surface disaster degree information, and the surface disaster area information, to obtain a target three-dimensional map marked with the affected regions, the degrees of damage, and the affected areas.
  • the target three-dimensional map can be stored; and/or the target three-dimensional map is sent to the terminal device for the terminal device to display the target three-dimensional map; and/or the target three-dimensional map is sent to the cloud for the cloud Store a three-dimensional map of the target.
  • Specifically, each disaster region is marked in the three-dimensional surface map according to the surface disaster region information; that is, the geographic location information of each disaster region is obtained from the surface disaster region information, and each disaster region is marked on the three-dimensional surface map accordingly. The degree of damage corresponding to each disaster region is marked according to the surface disaster degree information, and the affected area corresponding to each disaster region is obtained from the surface disaster area information and marked within that disaster region on the three-dimensional surface map.
  • the marking method of the affected area, the extent of the disaster, and the affected area can be set based on the actual situation, which is not specifically limited in this application.
  • the marking method of the degree of damage is color marking.
  • Specifically, the color corresponding to each affected region is determined as follows: the degree of damage corresponding to each region is obtained from the surface disaster degree information, the pre-stored mapping relationship table between degree of damage and color is obtained and queried to determine the damage degree color of each disaster region, and the degree of damage of each disaster region is then marked according to its damage degree color.
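  • A minimal sketch of this color-marking step, with a hypothetical pre-stored mapping table between degree of damage and color:

```python
# Hypothetical mapping table between degree of damage and marker color (RGB);
# the application only requires that such a pre-stored table exists.
DEGREE_TO_COLOR = {"light": (255, 255, 0),
                   "medium": (255, 128, 0),
                   "severe": (255, 0, 0)}

disaster_regions = [{"id": "A", "degree": "severe"},
                    {"id": "B", "degree": "light"}]

for region in disaster_regions:
    # Query the mapping table for each region's damage degree color; a real
    # implementation would paint the region on the 3-D surface map.
    region["color"] = DEGREE_TO_COLOR[region["degree"]]
    print(region["id"], region["degree"], region["color"])
```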
  • In the surface feature recognition method provided by the above embodiment, after the recognition result of the surface feature is determined, the three-dimensional surface map is marked according to the surface disaster region information, surface disaster degree information, and surface disaster area information in the recognition result, yielding a target three-dimensional map marked with the affected regions, degrees of damage, and affected areas. The user can thus quickly and simply learn the affected regions, degrees of damage, and affected areas from the target three-dimensional map, which is convenient for reference and improves the user experience.
  • FIG. 6 is a schematic structural diagram of a drone provided by an embodiment of the present application.
  • The UAV can be a rotary-wing UAV, such as a quadrotor, hexarotor, or octorotor UAV, a fixed-wing UAV, or a combination of rotary-wing and fixed-wing types, which is not limited here.
  • the drone 400 includes a spraying device 401 and a processor 402.
  • The drone 400 is used to spray liquids such as pesticides and water on crops, forests, and the like.
  • the unmanned aerial vehicle 400 can realize movement, rotation, turning, etc., and can drive the spraying device 401 to move to different positions or different angles to perform spraying operations in a preset area.
  • the processor 402 is installed inside the drone 400 and is not visible in FIG. 6.
  • The spraying device 401 includes a pump assembly 4011, a liquid supply tank 4012, a spray head assembly 4013, and a liquid guide tube 4014.
  • the liquid supply tank 4012 communicates with the pump assembly 4011.
  • the spray head assembly 4013 is used to implement spraying operations.
  • the liquid guide tube 4014 is connected with the pump assembly 4011 and the spray head assembly 4013 and is used to transport the liquid pumped from the pump assembly 4011 to the spray head assembly 4013.
  • the number of the nozzle assembly 4013 is at least one, which can be one, two, three, four or more, which is not limited in this application.
  • the processor 402 may be a micro-controller unit (MCU), a central processing unit (CPU), a digital signal processor (Digital Signal Processor, DSP), or the like.
  • FIG. 7 is a schematic flow chart of the steps of a drone performing a spraying task according to an embodiment of the application.
  • the processor 402 is configured to implement step S401 to step S402.
  • S401. The drone 400 acquires a flying spraying task, where the flying spraying task includes a flying spraying route and the spraying parameters of each waypoint, and the spraying parameters include spraying time, spraying angle, spraying flow rate, and spray box label.
  • In one embodiment, the recognition result of the surface feature is obtained from a local disk, a ground terminal, or a server, where the recognition result includes the surface disaster region information and the surface disaster degree information; the corresponding flying spraying task is generated according to the surface disaster region information and the surface disaster degree information.
  • The method of generating the flying spraying task is specifically: determine the waypoint information of the flying spraying route to be planned according to the surface disaster region information, and generate the corresponding flying spraying route according to the waypoint information; then set the spraying parameters of each waypoint on the route according to the surface disaster degree information, generating the corresponding flying spraying task.
  • The waypoint information is determined specifically as follows: determine the shape and area of the disaster region according to the surface disaster region information; determine the type of the flying spraying route to be planned according to the shape of the disaster region; determine the number of waypoints of the route to be planned according to the area of the disaster region; and determine the waypoint information of the route to be planned according to the route type, the surface disaster region information, and the number of waypoints.
  • The method for determining the shape and area of the disaster region is specifically as follows: obtain the contour information of the surface disaster region and the geographic location of each contour point from the surface disaster region information; determine the area of the disaster region according to the geographic locations of the contour points; determine the contour shape of the disaster region according to the contour information; calculate the similarity between the contour shape and each preset shape; and use the preset shape with the highest similarity as the shape of the disaster region.
  • The method for determining the route type is specifically: obtain the pre-stored mapping relationship table between shape and route type, query the table to obtain the route type corresponding to the shape of the disaster region, and use the obtained route type as the route type of the spraying route to be planned.
  • Route types include strip routes and loop routes. It should be noted that the above-mentioned mapping relationship table between the shape and the route type can be set based on the actual situation, which is not specifically limited in this application.
  • The method for determining the number of waypoints is specifically: obtain the pre-stored mapping relationship table between area and number of waypoints, query the table to obtain the number of waypoints corresponding to the area of the disaster region, and use the obtained number as the number of waypoints of the spraying route to be planned. It should be noted that the mapping relationship table between area and number of waypoints can be set based on actual conditions, which is not specifically limited in this application.
  • The waypoint information is further determined by: obtaining a pre-stored map and marking the corresponding surface disaster region in it according to the surface disaster region information; calculating the area of the marked disaster region and determining the distance between waypoints according to that area and the number of waypoints; marking each waypoint within the surface disaster region in turn according to the distance and route type, obtaining the marking order of each waypoint and the geographic location of each waypoint within the disaster region; and using the marking order and geographic locations as the waypoint sequence and waypoint positions, thereby obtaining the waypoint information of the flying spraying route to be planned.
  • The method of generating the flying spraying route is specifically as follows: obtain the navigation order and position of each waypoint from the waypoint information, then connect the waypoint positions in turn according to the navigation order, generating the corresponding flying spraying route.
  • the flight spraying route includes a circumnavigation route and/or a strip route.
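  • The route-generation step can be sketched as follows: waypoints are sorted by navigation order and connected in turn, closing the loop for a circling route but not for a strip route. The waypoint names and coordinates are placeholders.

```python
# Connect waypoints in navigation order into a flying spraying route; a
# circling route closes back to the first waypoint, a strip route does not.
waypoints = [
    {"seq": 1, "name": "A", "pos": (0.0, 0.0)},
    {"seq": 2, "name": "B", "pos": (0.0, 1.0)},
    {"seq": 3, "name": "C", "pos": (1.0, 1.0)},
    {"seq": 4, "name": "D", "pos": (1.0, 0.0)},
]

def build_route(waypoints, route_type):
    ordered = sorted(waypoints, key=lambda wp: wp["seq"])
    names = [wp["name"] for wp in ordered]
    if route_type == "circling":
        names.append(names[0])             # ... -> D -> A closes the loop
    return " -> ".join(names)

print(build_route(waypoints, "circling"))  # A -> B -> C -> D -> A
print(build_route(waypoints, "strip"))     # A -> B -> C -> D
```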
  • FIG. 8 is a schematic diagram of the flying spraying route in the embodiment of the application.
  • The flying spraying route is a circling route that includes four waypoints: waypoint A, waypoint B, waypoint C, and waypoint D, with the navigation sequence waypoint A → waypoint B → waypoint C → waypoint D. In this way, the circling route waypoint A → waypoint B → waypoint C → waypoint D → waypoint A, enclosed by waypoints A, B, C, and D, is generated.
  • Fig. 9 is a schematic diagram of the flying spraying route in the embodiment of the application.
  • The flying spraying route is a strip route that includes four waypoints.
  • The method of generating the flying spraying task is specifically: obtain the pre-stored mapping relationship table between surface damage degree and spraying parameters; determine the spraying parameters of each waypoint on the flying spraying route according to the surface disaster degree information and the mapping relationship table; and set the spraying parameters of each waypoint on the route accordingly to generate the corresponding flying spraying task.
  • the above-mentioned mapping relationship table between the degree of surface damage and spraying parameters can be set based on actual conditions, which is not specifically limited in this application.
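  • A minimal sketch of this lookup, with a hypothetical pre-stored mapping table from surface damage degree to spraying parameters:

```python
# Hypothetical mapping table between surface damage degree and spraying
# parameters; the concrete values are illustrative only.
DEGREE_TO_PARAMS = {
    "light":  {"spray_time_s": 2, "flow_l_per_min": 0.8},
    "medium": {"spray_time_s": 4, "flow_l_per_min": 1.2},
    "severe": {"spray_time_s": 6, "flow_l_per_min": 2.0},
}

# Damage degree at each waypoint, taken from the recognition result
# (waypoint names and degrees are hypothetical).
waypoint_damage = {"wp1": "light", "wp2": "severe", "wp3": "medium"}

# Set each waypoint's spraying parameters by querying the mapping table.
spray_task = {wp: DEGREE_TO_PARAMS[deg] for wp, deg in waypoint_damage.items()}
print(spray_task)
```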
  • The method of generating the flying spraying task can also be: determine the disaster spread boundary of the surface disaster region according to the obtained surface disaster degree information, and determine the positional relationship between each waypoint and the spread boundary; according to these positional relationships, determine the spraying parameters of each waypoint on the flying spraying route such that the spraying time, spraying concentration, and/or spraying flow rate of waypoints on the spread side of the boundary are greater than those of waypoints on the to-be-spread side; and set the spraying parameters of each waypoint accordingly to generate the corresponding flying spraying task. The surface disaster region includes a spread side and a to-be-spread side located on either side of the disaster spread boundary, and the degree of damage on the spread side is greater than that on the to-be-spread side. The spraying parameters are determined in this way so that the UAV sprays the surface disaster region accordingly, suppressing or delaying the continued spread of the disaster.
  • FIG. 10 is a schematic diagram of the disaster spread boundary in an embodiment of the present application.
  • As shown in FIG. 10, the spread side of the disaster spread boundary is the side where surface disaster region A is located, the to-be-spread side is the side where surface region B is located, and the disaster spreads from surface disaster region A toward surface region B.
  • the crops of the surface disaster area A and the surface area B can be the same or different.
  • the spraying parameters also include spray box labels, which are used to identify spray boxes.
  • The drone includes at least two spray boxes, each containing a different pesticide type and/or pesticide concentration, corresponding to different degrees of surface damage. The higher the degree of surface damage, the higher the corresponding pesticide concentration; the lower the degree of surface damage, the lower the corresponding pesticide concentration.
  • The spray box label of each waypoint corresponds to the pesticide type and/or pesticide concentration to be used at that waypoint.
  • At least two drones can be used to coordinate the flight spraying task.
  • Each drone is responsible for one spraying area within the surface disaster region, and the spraying operation areas of at least two drones overlap.
  • the overlapped area is the area corresponding to the severe damage on the surface.
  • The drones fly at different heights or spray at different time points, or avoidance can be achieved through sensors; this prevents the drones from colliding when spraying in the overlapping area. Since at least two drones spray the overlapping area (the severely damaged area), the treatment effect on severely damaged areas is effectively improved, and the spraying of the surface disaster region can be completed quickly to suppress or delay the continued spread of the disaster.
  • For example, when two drones cooperate, the first drone is responsible for one spraying area in the surface disaster region and the second drone is responsible for another. The two spraying areas overlap, and the overlapping area is the area with severe surface damage; within it, the first and second drones are located at different heights, or their spraying times differ. The first and second UAVs can also avoid obstacles through sensors, which prevents them from colliding when spraying in the overlapping area.
  • Figure 11 is a schematic diagram of the overlap of the spraying operation area in an embodiment of the present application.
  • As shown in FIG. 11, the surface disaster region includes spraying operation area A and spraying operation area B, and the overlapping area of operation areas A and B is area C. The first drone is responsible for spraying operation area A and the second drone is responsible for spraying operation area B, and both drones perform spraying operations in the overlapping area C.
  • FIG. 12 is another schematic diagram of the overlap of the spraying operation area in an embodiment of the present application.
  • As shown in FIG. 12, the determined disaster spreading direction of the surface disaster region is from surface disaster region A toward surface region B. Four drones are each assigned a flight spraying area and plan their flight spraying routes within their corresponding areas: the flight spraying area of UAV 1 is a, that of UAV 2 is b, that of UAV 3 is c, and that of UAV 4 is d, and overlaps exist among areas a, b, c, and d. UAV 1, UAV 2, and UAV 3 are mainly responsible for spraying surface disaster region A on the spread side, while UAV 4 is mainly responsible for spraying surface region B on the to-be-spread side; the overlapping area contains part of disaster region A and part of surface region B.
  • The above embodiment is only an exemplary description of spraying operations performed by multiple drones in coordination; the number of drones can be set flexibly according to actual needs, for example two, three, four, or five drones, which this application does not limit.
  • S402 Execute the flying spraying task, and control the spraying device to execute a corresponding spraying action according to the spraying parameters in the flying spraying task.
  • Specifically, the drone 400 obtains the flying spraying task, executes it, and controls the spraying device according to the spraying parameters in the task; that is, it obtains the flying spraying route and the spraying parameters of each waypoint from the task, flies along the route, and during the flight controls the spraying device 401 to perform the corresponding spraying action at each waypoint according to that waypoint's spraying parameters, thereby completing the flying spraying task.
  • UAVs can thus perform flying spraying tasks determined from the recognition results of ground surface features, automatically spraying pesticide on, or watering, crops and fruit trees, and preventing and controlling lodging, pests and diseases, or water shortage.
  • the application also provides a ground feature recognition device.
  • FIG. 13 is a schematic block diagram of a surface feature recognition device provided by an embodiment of the present application.
  • the surface feature recognition device 500 includes a processor 501 and a memory 502, and the processor 501 and the memory 502 are connected by a bus 503, which is, for example, an I2C (Inter-integrated Circuit) bus.
  • the surface feature recognition device 500 can be a ground control platform, a server or a drone.
  • the ground control platform includes a laptop computer and a PC computer.
  • the server can be a single server or a server cluster composed of multiple servers.
  • UAVs include rotary-wing UAVs, such as quadrotor, hexarotor, and octorotor UAVs; the UAV can also be a fixed-wing UAV, or a combination of rotary-wing and fixed-wing types, which is not limited here.
  • the processor 501 may be a micro-controller unit (MCU), a central processing unit (CPU), a digital signal processor (Digital Signal Processor, DSP), or the like.
  • The memory 502 may be a Flash chip, a read-only memory (ROM) disk, an optical disk, a U disk, or a mobile hard disk.
  • the processor 501 is configured to run a computer program stored in the memory 502, and implement the following steps when the computer program is executed:
  • acquire ground surface image information, where the ground surface image information includes image information of multiple color channels and image depth information; process the multiple color channel information and the image depth information to obtain a feature map containing ground surface semantic information; and determine the recognition result of the ground surface feature according to the ground surface semantic information in the feature map.
  • the surface image information includes a top view and a front view.
  • the image depth information is height information in the top and front view.
  • the surface image information includes geographic location information corresponding to the surface image.
  • the geographic location information includes positioning information obtained through a global satellite navigation and positioning system
  • the image information of the multiple color channels includes at least R, G, and B three-channel information.
  • the image depth information is determined based on a binocular ranging algorithm and image information of the multiple color channels.
  • the image depth information is determined based on a monocular ranging algorithm and an associated frame of the image information of the multiple color channels.
  • When the processor implements the processing of the multiple color channel information and the image depth information to obtain a feature map containing ground surface semantic information, it is configured to: fuse the multiple color channel information and the image depth information to obtain a fused image block; match the fused image block against the image blocks in a preset image block set to obtain the matching degree between the fused image block and each image block; and determine the feature map containing ground surface semantic information according to the matching degrees.
  • Alternatively, when the processor implements the processing of the multiple color channel information and the image depth information to obtain a feature map containing ground surface semantic information, it is configured to: fuse the multiple color channel information and the image depth information to obtain a fused image block; and process the fused image block through a pre-trained neural network to obtain the feature map containing ground surface semantic information.
  • When the processor implements the acquisition of ground surface image information, it is configured to: acquire a ground surface image set and generate a corresponding depth map; and process each surface image in the set together with the depth map to obtain the ground surface image information.
  • When the processor implements the processing of each surface image and the depth map in the surface image set to obtain ground surface image information, it is configured to: stitch each surface image in the set to obtain a stitched surface image; and fuse the depth map with the stitched surface image to obtain the ground surface image information.
  • When the processor implements the stitching of each surface image in the surface image set to obtain a stitched surface image, it is configured to: determine the stitching parameters corresponding to each surface image; and stitch the surface images according to their corresponding stitching parameters to obtain the stitched surface image.
  • When the processor implements the determination of the stitching parameters corresponding to each surface image, it is configured to: acquire the aerial photography time point and position of each surface image; determine the stitching order of each surface image according to its aerial photography time point; and determine the stitching relationship of each surface image according to its aerial photography position.
  • After the processor determines the recognition result of the ground surface feature according to the ground surface semantic information in the feature map, it is further configured to: acquire at least one historical recognition result of the ground surface feature; and determine the surface change trend according to the recognition result and the at least one historical recognition result.
  • When the processor implements the determination of the surface change trend according to the recognition result and the at least one historical recognition result, it is configured to: acquire the first determination time point of the recognition result and the second determination time point of each historical recognition result; sort the recognition result and the historical recognition results according to these time points to obtain a recognition result queue; determine multiple candidate surface change trends from every two adjacent recognition results in the queue; and process the multiple candidate surface change trends to obtain the surface change trend.
  • After the processor determines the recognition result of the ground surface feature according to the ground surface semantic information in the feature map, it is further configured to: generate a three-dimensional surface map; and mark the three-dimensional surface map according to the surface disaster region information, the surface disaster degree information, and the surface disaster area information in the recognition result, obtaining a target three-dimensional map marked with the affected regions, degrees of damage, and affected areas.
  • When the processor implements the marking of the three-dimensional surface map according to the surface disaster region information, the surface disaster degree information, and the surface disaster area information, it is configured to: mark each disaster region in the map according to the surface disaster region information; mark the degree of damage corresponding to each disaster region according to the surface disaster degree information; and mark the affected area corresponding to each disaster region according to the surface disaster area information.
  • When the processor implements marking the degree of damage corresponding to each disaster region according to the surface disaster degree information, it is configured to: determine the damage degree color corresponding to each disaster region; and mark the degree of damage of each disaster region according to its damage degree color.
  • After the processor obtains the target three-dimensional map marked with the affected regions, degrees of damage, and affected areas, it is further configured to: store the target three-dimensional map; and/or send the target three-dimensional map to a terminal device so that the terminal device displays it; and/or send the target three-dimensional map to the cloud so that the cloud stores it.
  • The embodiments of the present application also provide a computer-readable storage medium. The computer-readable storage medium stores a computer program, the computer program includes program instructions, and a processor executes the program instructions to implement the steps of the surface feature recognition method provided in the foregoing embodiments.
  • The computer-readable storage medium may be an internal storage unit of the surface feature recognition device described in any of the foregoing embodiments, such as a hard disk or a memory of the surface feature recognition device.
  • The computer-readable storage medium may also be an external storage device of the surface feature recognition device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card equipped on the surface feature recognition device.
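
The fusion and feature-extraction steps above can be illustrated with a minimal sketch (Python/PyTorch assumed; the publication names neither a framework nor an architecture, so the four-channel stacking, the toy SemanticEncoder, and all identifiers below are illustrative assumptions, not the patented implementation):

import numpy as np
import torch
import torch.nn as nn

def fuse_rgb_depth(rgb: np.ndarray, depth: np.ndarray) -> torch.Tensor:
    """Stack an HxWx3 color image and an HxW depth map into one 4xHxW block."""
    depth = depth.astype(np.float32)[..., None]                  # HxWx1
    fused = np.concatenate([rgb.astype(np.float32), depth], -1)  # HxWx4
    return torch.from_numpy(fused).permute(2, 0, 1)              # 4xHxW

class SemanticEncoder(nn.Module):
    """Toy stand-in for the pre-trained network that emits a semantic feature map."""
    def __init__(self, num_classes: int = 5):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(64, num_classes, 1)  # per-pixel class scores

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(x))

rgb = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
depth = np.random.rand(256, 256)
feature_map = SemanticEncoder()(fuse_rgb_depth(rgb, depth).unsqueeze(0))
print(feature_map.shape)  # torch.Size([1, 5, 256, 256])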
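
The stitching step can be sketched similarly. The text above speaks only of "stitching parameters" and a "stitching relationship"; treating the parameters as a pairwise homography estimated from ORB feature matches with OpenCV is one common realization, assumed here purely for illustration:

import cv2
import numpy as np

def stitching_parameters(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """Estimate a homography mapping img_b into img_a's frame from ORB matches."""
    orb = cv2.ORB_create()
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)[:50]
    src = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    homography, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return homography

def stitch_pair(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """Warp img_b with its stitching parameters, then overlay img_a."""
    h, w = img_a.shape[:2]
    canvas = cv2.warpPerspective(img_b, stitching_parameters(img_a, img_b), (2 * w, h))
    canvas[:h, :w] = img_a  # assumes mostly horizontal overlap, for brevity
    return canvas

A full mosaic would fold stitch_pair over the whole image set and blend the seams; that bookkeeping is omitted here.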
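
For the change-trend step, one plausible reading, assumed here, is that comparing the current recognition result with each historical result yields one candidate trend, and the candidates are then reduced by majority vote; the text above says only that the multiple candidate trends are "processed":

from collections import Counter

def candidate_trend(current: float, historical: float, eps: float = 0.02) -> str:
    """Classify the change in a class's pixel share between two flights."""
    delta = current - historical
    if delta > eps:
        return "expanding"
    if delta < -eps:
        return "shrinking"
    return "stable"

def land_surface_trend(current: float, historical: list[float]) -> str:
    """Majority vote over one candidate trend per historical recognition result."""
    candidates = [candidate_trend(current, h) for h in historical]
    return Counter(candidates).most_common(1)[0][0]

# e.g. share of pixels recognized as water now vs. three earlier flights
print(land_surface_trend(0.31, [0.22, 0.25, 0.27]))  # -> "expanding"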
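
Finally, the color-marking step reduces to a lookup from disaster degree to color, painted over each affected region of the map. The palette, the degree labels, and the mask-based region format are assumptions for illustration; the text above specifies only that each disaster-affected area is marked with the color corresponding to its disaster degree:

import numpy as np

DEGREE_COLORS = {              # hypothetical palette, (B, G, R)
    "light":  (0, 255, 255),   # yellow
    "medium": (0, 165, 255),   # orange
    "severe": (0, 0, 255),     # red
}

def mark_disaster_areas(base_map: np.ndarray,
                        regions: list[tuple[np.ndarray, str]]) -> np.ndarray:
    """regions: (mask, degree) pairs, each mask a boolean HxW array over base_map."""
    marked = base_map.copy()
    for mask, degree in regions:
        marked[mask] = DEGREE_COLORS[degree]  # paint the degree color over the area
    return marked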

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to a land surface feature identification method and device, an unmanned aerial vehicle, and a computer-readable storage medium. The method comprises the steps of: obtaining land surface image information (S101); processing a plurality of pieces of color channel information and image depth information to obtain a feature map containing land surface semantic information (S102); and determining a land surface feature identification result according to the land surface semantic information in the feature map (S103). The method improves the accuracy and convenience of land surface feature identification.
PCT/CN2019/106228 2019-09-17 2019-09-17 Land surface feature identification method and device, unmanned aerial vehicle, and computer-readable storage medium WO2021051278A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980033702.0A 2019-09-17 2019-09-17 Land surface feature identification method and device, unmanned aerial vehicle, and computer-readable storage medium
PCT/CN2019/106228 2019-09-17 2019-09-17 Land surface feature identification method and device, unmanned aerial vehicle, and computer-readable storage medium WO2021051278A1 (fr)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/106228 2019-09-17 2019-09-17 Land surface feature identification method and device, unmanned aerial vehicle, and computer-readable storage medium WO2021051278A1 (fr)

Publications (1)

Publication Number Publication Date
WO2021051278A1 true WO2021051278A1 (fr) 2021-03-25

Family

ID=73891556

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/106228 Land surface feature identification method and device, unmanned aerial vehicle, and computer-readable storage medium 2019-09-17 2019-09-17

Country Status (2)

Country Link
CN (1) CN112154447A (fr)
WO (1) WO2021051278A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11904871B2 (en) * 2019-10-30 2024-02-20 Deere & Company Predictive machine control
CN115903855B (zh) * 2023-01-10 2023-05-09 北京航科星云科技有限公司 Forest farm pesticide-spraying path planning method, apparatus and device based on satellite remote sensing

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017156205A1 (fr) * 2016-03-11 2017-09-14 Siemens Aktiengesellschaft Automated identification of parts of an assembly
CN109978947B (zh) * 2019-03-21 2021-08-17 广州极飞科技股份有限公司 Method, apparatus, device and storage medium for monitoring an unmanned aerial vehicle
CN109977924A (zh) * 2019-04-15 2019-07-05 北京麦飞科技有限公司 Real-time on-board image processing method and system for crops based on an unmanned aerial vehicle
CN110232418B (zh) * 2019-06-19 2021-12-17 达闼机器人有限公司 Semantic recognition method, terminal and computer-readable storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105654103A (zh) * 2014-11-12 2016-06-08 联想(北京)有限公司 Image recognition method and electronic device
CN105173085A (zh) * 2015-09-18 2015-12-23 山东农业大学 Automatic control system and method for variable-rate pesticide application by unmanned aerial vehicle
CN106778888A (zh) * 2016-12-27 2017-05-31 浙江大学 Orchard pest and disease survey system and method based on unmanned aerial vehicle remote sensing
US20190180119A1 (en) * 2017-03-30 2019-06-13 Hrl Laboratories, Llc System for real-time object detection and recognition using both image and size features
CN106956778A (zh) * 2017-05-23 2017-07-18 广东容祺智能科技有限公司 Unmanned aerial vehicle pesticide spraying method and system
CN109446959A (zh) * 2018-10-18 2019-03-08 广州极飞科技有限公司 Method and device for dividing a target area, and spraying control method for chemicals

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113312991A (zh) * 2021-05-14 2021-08-27 华能阜新风力发电有限责任公司 Front-end intelligent recognition system based on unmanned aerial vehicle
CN113296537A (zh) * 2021-05-25 2021-08-24 湖南博瑞通航航空技术有限公司 Electric power unmanned aerial vehicle inspection method and system based on power tower model matching
CN113296537B (zh) * 2021-05-25 2024-03-12 湖南博瑞通航航空技术有限公司 Electric power unmanned aerial vehicle inspection method and system based on power tower model matching
CN113537309A (zh) * 2021-06-30 2021-10-22 北京百度网讯科技有限公司 Object recognition method and apparatus, and electronic device
CN113537309B (zh) * 2021-06-30 2023-07-28 北京百度网讯科技有限公司 Object recognition method and apparatus, and electronic device
CN114067245A (zh) * 2021-11-16 2022-02-18 中国铁路兰州局集团有限公司 Method and system for identifying hidden hazards in the railway external environment
CN114299699A (zh) * 2021-12-03 2022-04-08 浙江朱道模块集成有限公司 Intelligent voice scene identification system for garden plants based on the Internet of Things
CN114299699B (zh) * 2021-12-03 2023-10-10 浙江朱道模块集成有限公司 Intelligent voice scene identification system for garden plants based on the Internet of Things
CN114675695A (zh) * 2022-03-26 2022-06-28 太仓武港码头有限公司 Control method, system, device and storage medium for dust suppression in a storage yard
CN114675695B (zh) * 2022-03-26 2023-04-18 太仓武港码头有限公司 Control method, system, device and storage medium for dust suppression in a storage yard
CN116630828A (zh) * 2023-05-30 2023-08-22 中国公路工程咨询集团有限公司 Unmanned aerial vehicle remote sensing information acquisition system and method based on terrain environment adaptation
CN116630828B (zh) * 2023-05-30 2023-11-24 中国公路工程咨询集团有限公司 Unmanned aerial vehicle remote sensing information acquisition system and method based on terrain environment adaptation

Also Published As

Publication number Publication date
CN112154447A (zh) 2020-12-29

Similar Documents

Publication Publication Date Title
WO2021051278A1 (fr) Land surface feature identification method and device, unmanned aerial vehicle and computer-readable storage medium
US20210150184A1 (en) Target region operation planning method and apparatus, storage medium, and processor
AU2019238711B2 (en) Method and apparatus for acquiring boundary of area to be operated, and operation route planning method
CN104615146B (zh) Automatic navigation method for unmanned aerial vehicle pesticide-spraying operation without external navigation signals
EP3119178B1 (fr) Method and system for navigating an agricultural vehicle on a land surface
CN106873630B (zh) Flight control method and device, and execution device
JP2022523836A (ja) Pesticide spraying control method, device and storage medium
CN109341702B (zh) Route planning method, apparatus, device and storage medium within an operation area
CN110254722B (zh) Aircraft system and method thereof, and computer-readable storage medium
CN105159319A (zh) Pesticide spraying method of an unmanned aerial vehicle, and unmanned aerial vehicle
CN109283937A (zh) Method and system for plant protection spraying operation based on an unmanned aerial vehicle
WO2021237448A1 (fr) Path planning method, apparatus and system
CN110832494A (zh) Semantic generation method, device, aircraft and storage medium
Hartley et al. Using roads for autonomous air vehicle guidance
CN117516513A (zh) Intelligent lawn mower path planning method, apparatus, device and storage medium
Shah et al. Detecting, localizing, and recognizing trees with a monocular MAV: Towards preventing deforestation
WO2021081896A1 (fr) Operation planning method, system, and device for a spraying unmanned aerial vehicle
US20220214700A1 (en) Control method and device, and storage medium
CN114283067A (zh) Prescription map acquisition method and apparatus, storage medium and terminal device
Basso A framework for autonomous mission and guidance control of unmanned aerial vehicles based on computer vision techniques
Sarkar Intelligent Energy-Efficient Drones: Path Planning, Real-Time Monitoring and Decision-Making
Hroob et al. Learned long-term stability scan filtering for robust robot localisation in continuously changing environments
Li et al. Low-altitude remote sensing-based global 3D path planning for precision navigation of agriculture vehicles-beyond crop row detection
Parlange et al. Leveraging single-shot detection and random sample consensus for wind turbine blade inspection
Wei et al. Review of Simultaneous Localization and Mapping Technology in the Agricultural Environment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19945940

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19945940

Country of ref document: EP

Kind code of ref document: A1