WO2021051278A1 - Earth surface feature identification method and device, unmanned aerial vehicle, and computer readable storage medium - Google Patents

Earth surface feature identification method and device, unmanned aerial vehicle, and computer readable storage medium Download PDF

Info

Publication number
WO2021051278A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
image
disaster
feature
land
Application number
PCT/CN2019/106228
Other languages
French (fr)
Chinese (zh)
Inventor
董双
李鑫超
王涛
李思晋
梁家斌
田艺
Original Assignee
深圳市大疆创新科技有限公司
Application filed by 深圳市大疆创新科技有限公司
Priority to CN201980033702.0A (published as CN112154447A)
Priority to PCT/CN2019/106228
Publication of WO2021051278A1

Classifications

    • G - PHYSICS
      • G06 - COMPUTING; CALCULATING OR COUNTING
        • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
          • G06V 20/00 - Scenes; Scene-specific elements
            • G06V 20/10 - Terrestrial scenes
              • G06V 20/13 - Satellite images
        • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
          • G06N 3/00 - Computing arrangements based on biological models
            • G06N 3/02 - Neural networks
              • G06N 3/04 - Architecture, e.g. interconnection topology
                • G06N 3/045 - Combinations of networks
              • G06N 3/08 - Learning methods
        • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 3/00 - Geometric image transformation in the plane of the image
            • G06T 3/40 - Scaling the whole image or part thereof
              • G06T 3/4038 - Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
          • G06T 5/00 - Image enhancement or restoration
            • G06T 5/50 - Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
          • G06T 7/00 - Image analysis
            • G06T 7/50 - Depth or shape recovery
          • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
            • G06T 2207/20 - Special algorithmic details
              • G06T 2207/20081 - Training; Learning
              • G06T 2207/20084 - Artificial neural networks [ANN]
              • G06T 2207/20212 - Image combination
                • G06T 2207/20221 - Image fusion; Image merging
            • G06T 2207/30 - Subject of image; Context of image processing
              • G06T 2207/30204 - Marker

Definitions

  • This application relates to the field of artificial intelligence, and in particular to a method, equipment, drone, and computer-readable storage medium for recognizing ground features.
  • UAVs are growing rapidly in the fields of agriculture, aerial surveys, power line inspections, natural gas (oil) pipeline inspections, forest fire prevention, emergency rescue and disaster relief, and smart cities.
  • In the agricultural field, for example, pesticides can be sprayed on crops automatically by drones.
  • The present application provides a ground surface feature recognition method, equipment, unmanned aerial vehicle, and computer-readable storage medium, aiming to improve the accuracy and convenience of ground surface feature recognition results.
  • In a first aspect, this application provides a land surface feature recognition method, including: obtaining ground surface image information, where the ground surface image information includes image information of multiple color channels and image depth information; processing the multiple color channel information and the image depth information to obtain a feature map containing ground surface semantic information; and determining the recognition result of the ground surface feature according to the ground surface semantic information in the feature map.
  • In another aspect, the present application also provides an unmanned aerial vehicle including a spraying device and a processor, the processor being configured to acquire a flying spraying task determined according to the recognition result of a ground surface feature, execute the flying spraying task, and control the spraying device to perform the corresponding spraying actions according to the spraying parameters in the flying spraying task.
  • In another aspect, the present application also provides a ground surface feature recognition device, the device including a memory and a processor; the memory is used to store a computer program, and the processor is configured to execute the computer program and, when executing the computer program, implement the following steps: obtain ground surface image information, where the ground surface image information includes image information of multiple color channels and image depth information; process the multiple color channel information and the image depth information to obtain a feature map containing ground surface semantic information; and determine the recognition result of the ground surface feature according to the ground surface semantic information in the feature map.
  • In another aspect, the present application also provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the processor implements the ground surface feature recognition method described above.
  • The embodiments of the present application thus provide a ground surface feature recognition method, device, drone, and computer-readable storage medium.
  • By processing the multiple color channel information and the image depth information in the ground surface image information, a feature map containing ground surface semantic information can be obtained; through the ground surface semantic information in that feature map, the recognition result of the ground surface feature can be determined accurately, and the entire recognition process requires no manual participation, which improves the accuracy and convenience of ground surface feature recognition.
  • FIG. 1 is a schematic flow chart of the steps of a method for recognizing ground features according to an embodiment of the present application
  • Fig. 2 is a schematic flowchart of sub-steps of the land surface feature recognition method in Fig. 1;
  • Fig. 3 is a schematic diagram of splicing ground surface images in an embodiment of the present application.
  • FIG. 4 is a schematic flow chart of the steps of another land surface feature recognition method provided by an embodiment of the present application.
  • FIG. 5 is a schematic flowchart of the steps of another method for recognizing ground features according to an embodiment of the present application.
  • Figure 6 is a schematic structural diagram of a drone provided by an embodiment of the present application.
  • FIG. 7 is a schematic flow chart of the steps of a drone performing a spraying task provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a flying spray route in an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a flying spraying route in an embodiment of the present application.
  • Figure 10 is a schematic diagram of a disaster spread boundary in an embodiment of the present application.
  • FIG. 11 is a schematic diagram of overlapping spraying operation areas in an embodiment of the present application.
  • FIG. 12 is another schematic diagram of overlapping spraying operation areas in an embodiment of the present application.
  • FIG. 13 is a schematic block diagram of a surface feature recognition device provided by an embodiment of the present application.
  • The ground surface feature recognition method provided in this application can be applied to a ground control platform, a server, and/or a drone to recognize ground surface features.
  • The ground control platform includes laptop computers and desktop (PC) computers.
  • The server can be a single server or a server cluster composed of multiple servers.
  • The UAV may be a rotary-wing UAV, such as a quadrotor, hexarotor, or octorotor UAV; it may also be a fixed-wing UAV, or a combination of rotary-wing and fixed-wing types, which is not limited here.
  • FIG. 1 is a schematic flowchart of steps of a method for identifying ground features according to an embodiment of the present application.
  • the method for identifying land features includes step S101 to step S103.
  • S101. Obtain ground surface image information, where the ground surface image information includes image information of multiple color channels and image depth information.
  • The ground surface image information is obtained by fusing the image information of multiple color channels with the image depth information, and the image information of multiple color channels includes at least the three channels R, G, and B.
  • In this way, the ground surface image information is enriched, which indirectly improves the accuracy of ground surface feature recognition.
  • the ground surface image information also includes a top view and the image depth information is height information in the top view.
  • The ground surface image is obtained through aerial photography. During aerial photography, the captured image is often not a true top view, for example because of tilt of the movable platform. Converting the ground surface image into a top-view image guarantees the accuracy of the ground surface image information and therefore improves the accuracy of ground surface feature recognition.
  • the surface image information also includes geographic location information corresponding to the surface image.
  • the geographic location information includes positioning information obtained through a global satellite navigation and positioning system; and/or positioning information obtained through a real-time differential positioning system.
  • the mobile platform can obtain the geographic location information of the surface image through the global satellite navigation and positioning system or the real-time differential positioning system, which can further enrich the surface image information and facilitate subsequent query of the area to which the identified surface feature belongs.
  • the method for determining the image depth information may be based on a binocular ranging algorithm and image information of multiple color channels, or may be based on a monocular ranging algorithm and image information of multiple color channels.
  • The associated frames are overlapping image frames in the image information of multiple color channels.
  • The overlapping image frames are treated as two views of the same scene: the disparity between the two frames is calculated, and the image depth information can then be determined from that disparity.
  • The binocular ranging algorithm can be set based on actual conditions, which is not specifically limited in this application.
  • For example, the binocular ranging algorithm may be a semi-global matching algorithm (Semi-Global Matching, SGM).
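  • As an illustrative sketch (not part of the original disclosure), the following Python/OpenCV snippet shows how a semi-global matching algorithm could be applied to two overlapping grayscale aerial frames to estimate a depth map; the parameters `focal_length_px` and `baseline_m` are hypothetical values that would come from camera calibration and flight geometry.

```python
import cv2
import numpy as np

def depth_from_overlapping_frames(frame_a, frame_b, focal_length_px=1200.0, baseline_m=5.0):
    """Estimate a depth map from two overlapping grayscale aerial frames using SGM."""
    # Semi-global matching (the SGBM variant provided by OpenCV).
    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=128,   # must be divisible by 16
        blockSize=5,
        P1=8 * 5 * 5,
        P2=32 * 5 * 5,
        uniquenessRatio=10,
    )
    disparity = matcher.compute(frame_a, frame_b).astype(np.float32) / 16.0

    # Convert disparity to depth: depth = f * B / disparity (valid where disparity > 0).
    depth = np.zeros_like(disparity)
    valid = disparity > 0
    depth[valid] = focal_length_px * baseline_m / disparity[valid]
    return depth
```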
  • step S101 includes sub-step S1011 to sub-step S1012.
  • S1011. Acquire a ground surface image set, where the ground surface image set includes a plurality of ground surface images and each ground surface image is a top-view image of the ground surface; and generate a corresponding depth map according to each ground surface image in the ground surface image set.
  • the surface image set can be obtained by aerial photography of the ground surface by a drone.
  • Specifically, the drone acquires a surface aerial photography task, where the surface aerial photography task includes an aerial photography flight route and aerial photography parameters; the drone then performs the surface aerial photography task, photographing the ground surface from the air to obtain a ground surface image set, where each ground surface image in the set includes geographic location information.
  • the drone can store the surface image collection obtained by aerial photography locally, or upload the surface image collection obtained by aerial photography to the cloud.
  • The surface aerial photography task is obtained as follows: the drone obtains a surface aerial photography task file, where the task file includes waypoint information and the aerial photography parameters of each waypoint; an aerial photography flight route marked with multiple waypoints is generated according to the waypoint information; and the aerial photography parameters of each waypoint on the route are set according to the aerial photography parameters in the task file, thereby generating the surface aerial photography task.
  • the aerial photography parameters include aerial photography altitude and aerial photography frequency. The aerial photography frequency is used to control the camera for continuous photography, and the waypoint information includes the position and sequence of each waypoint.
  • The depth map is generated as follows: based on the monocular ranging algorithm, a depth map is generated for each surface image in the surface image set according to its associated frame, the depth maps corresponding to the surface images are stitched to obtain a stitched depth map, and the stitched depth map is used as the depth map corresponding to the surface image set. Alternatively, based on the binocular ranging algorithm, a depth map is generated for every two associated surface images in the set, the resulting depth maps are stitched to obtain a stitched depth map, and the stitched depth map is used as the depth map corresponding to the surface image set.
  • S1012. Process each ground surface image and the depth map in the ground surface image set to obtain ground surface image information.
  • Each ground surface image and the depth map in the ground surface image set are processed to obtain ground surface image information. Specifically, the surface images in the set are stitched to obtain a stitched surface image; the depth map and the stitched surface image are then fused to obtain the ground surface image information, which includes the image information of multiple color channels and the image depth information.
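  • A minimal sketch of this fusion step, assuming the stitched surface image and the stitched depth map have already been aligned to the same resolution; the four-channel array (R, G, B plus depth) is one possible representation of the ground surface image information, not a format prescribed by the disclosure.

```python
import numpy as np

def fuse_rgb_and_depth(stitched_rgb, stitched_depth):
    """Stack a stitched H x W x 3 RGB image with an H x W depth map into an RGB-D array."""
    assert stitched_rgb.shape[:2] == stitched_depth.shape[:2], "images must be aligned"
    depth = stitched_depth.astype(np.float32)
    # Normalize depth to [0, 1] so it is on a scale comparable to the color channels.
    depth = (depth - depth.min()) / (depth.ptp() + 1e-6)
    rgb = stitched_rgb.astype(np.float32) / 255.0
    return np.dstack([rgb, depth])   # shape: H x W x 4
```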
  • The surface images are stitched as follows: the stitching parameters corresponding to each surface image are determined, where the stitching parameters include the stitching order and the stitching relationship; each surface image is then stitched according to its corresponding stitching parameters to obtain the stitched surface image.
  • FIG. 3 is a schematic view of stitching ground surface images in an embodiment of the application. As shown in FIG. 3, the right side of surface image A is stitched to the left side of surface image B, the right side of surface image B is stitched to the left side of surface image C, the upper side of surface image D is stitched to the lower side of surface image A, the right side of surface image E is stitched to the left side of surface image D, and the left side of surface image F is stitched to the right side of surface image E.
  • The stitching parameters are determined as follows: the aerial photography time point and the aerial photography position of each surface image are acquired; the stitching order of each surface image is determined according to its aerial photography time point; and the stitching relationship of each surface image is determined according to its aerial photography position.
  • The aerial photography positions of the surface images determine the positional relationship between the surface images, and the positional relationship between the surface images of each region is taken as the stitching relationship corresponding to each surface image.
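  • The sketch below illustrates one way the stitching order and stitching relationship could be derived from capture timestamps and aerial positions; the field names (`id`, `timestamp`, `lat`, `lon`) are hypothetical and not specified in the original text.

```python
from math import hypot

def stitching_parameters(images):
    """images: list of dicts with hypothetical keys 'id', 'timestamp', 'lat', 'lon'."""
    # Stitching order: sort by aerial photography time point.
    ordered = sorted(images, key=lambda img: img["timestamp"])

    # Stitching relationship: associate each image with its nearest neighbours in space.
    relations = {}
    for img in ordered:
        others = [o for o in ordered if o["id"] != img["id"]]
        others.sort(key=lambda o: hypot(o["lat"] - img["lat"], o["lon"] - img["lon"]))
        relations[img["id"]] = [o["id"] for o in others[:4]]   # up to 4 adjacent images
    return [img["id"] for img in ordered], relations
```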
  • S102 Process the multiple color channel information and image depth information to obtain a feature map containing semantic information of the ground surface.
  • After the surface image information is obtained, the multiple color channel information and the image depth information in the surface image information are processed to obtain a feature map containing the semantic information of the ground surface.
  • the surface semantic information includes each surface feature and the recognition probability value of each surface feature.
  • In one implementation, the multiple color channel information and the image depth information are fused to obtain a fused image block; the fused image block is matched with the image blocks in a preset image block set to obtain the matching degree between the fused image block and each image block; and the feature map containing the ground surface semantic information is determined according to the matching degree between the fused image block and each image block.
  • the preset image block set includes a plurality of image blocks marked with surface features, and the image blocks in the preset image block set can be set based on actual conditions, which is not specifically limited in this application.
  • The fused image block is matched with an image block as follows: the fused image block is split into a preset number of fused image sub-blocks, and the image block is split into the same preset number of image sub-blocks, so that the fused image sub-blocks and the image sub-blocks correspond one to one; the similarity between each fused image sub-block and its corresponding image sub-block is calculated, and the similarities are accumulated to obtain the matching degree between the fused image block and the image block.
  • The feature map is then determined as follows: the image blocks whose matching degree is greater than or equal to a preset matching degree threshold are taken as target image blocks, and the similarity between each image sub-block of each target image block and the corresponding fused image sub-block, as well as the ground surface feature corresponding to each image sub-block, is obtained; for each image sub-block whose similarity is greater than or equal to a preset similarity threshold, the corresponding ground surface feature is obtained and the similarity of that image sub-block is used as the recognition probability value; the ground surface feature and the recognition probability value are marked in the corresponding fused image sub-block of the fused image block, thereby obtaining a feature map containing the ground surface semantic information.
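  • A simplified sketch of the sub-block matching described above, assuming the fused image block and the reference image block have the same size; the similarity measure (a normalized inverse mean absolute difference) and the 4 x 4 split are illustrative assumptions, not values taken from the disclosure.

```python
import numpy as np

def split_into_subblocks(block, grid=4):
    h, w = block.shape[:2]
    return [block[i * h // grid:(i + 1) * h // grid, j * w // grid:(j + 1) * w // grid]
            for i in range(grid) for j in range(grid)]

def subblock_similarity(a, b):
    # Similarity in [0, 1]; 1 means identical sub-blocks.
    return 1.0 - np.mean(np.abs(a.astype(np.float32) - b.astype(np.float32))) / 255.0

def matching_degree(fused_block, reference_block, grid=4):
    """Accumulate sub-block similarities into the matching degree of two image blocks."""
    fused_subs = split_into_subblocks(fused_block, grid)
    ref_subs = split_into_subblocks(reference_block, grid)
    return sum(subblock_similarity(f, r) for f, r in zip(fused_subs, ref_subs))
```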
  • a plurality of color channel information and image depth information are fused to obtain a fused image block; the fused image block is processed through a pre-trained neural network to obtain a feature map containing semantic information on the surface.
  • the pre-trained neural network can extract the surface semantic information from the fused image block, thereby obtaining a feature map containing the surface semantic information.
  • The neural network may be a convolutional neural network or a recurrent convolutional neural network, which is not specifically limited in this application.
  • The neural network is trained as follows: a large number of surface images marked with surface semantic information are acquired, and normalization and data enhancement are applied to these marked images to obtain sample data; the sample data are input to the neural network, and the neural network is trained until it converges, so that processing the ground surface image information through the converged neural network yields a feature map containing the ground surface semantic information.
  • In this way, the processing effect of the trained neural network on the surface image can be guaranteed, an accurate feature map containing the surface semantic information can be obtained, and the accuracy of surface feature recognition can be improved.
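  • As a hedged illustration of the neural-network variant (the disclosure does not specify an architecture), the sketch below defines a small fully convolutional network in PyTorch that takes a 4-channel RGB-D input and produces a per-pixel probability map over a hypothetical set of surface classes.

```python
import torch
import torch.nn as nn

class SurfaceSemanticNet(nn.Module):
    """Tiny fully convolutional network: 4-channel RGB-D in, per-class probabilities out."""

    def __init__(self, num_classes=5):   # e.g. normal, lodging, pest, flood, drought (assumed)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Conv2d(64, num_classes, kernel_size=1)

    def forward(self, x):                     # x: (N, 4, H, W)
        logits = self.classifier(self.features(x))
        return torch.softmax(logits, dim=1)   # per-pixel recognition probability values

# Example usage with a dummy RGB-D tile:
# model = SurfaceSemanticNet()
# probs = model(torch.rand(1, 4, 256, 256))   # probs: (1, 5, 256, 256)
```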
  • S103 Determine the recognition result of the surface feature according to the semantic information of the surface in the feature map.
  • The recognition result of the surface feature is determined according to the surface semantic information in the feature map. Specifically, the confidence of each surface feature is obtained from the surface semantic information and compared with a preset confidence threshold; the surface features whose confidence is greater than or equal to the confidence threshold are retained, and the recognition result of the surface features is determined based on them. It should be noted that the confidence threshold may be set based on actual conditions, which is not specifically limited in this application.
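  • A minimal sketch of this thresholding step, assuming the surface semantic information has been reduced to one confidence value per candidate surface feature; the threshold of 0.6 is an illustrative value only, since the disclosure leaves the threshold to be set according to actual conditions.

```python
def recognized_features(feature_confidences, confidence_threshold=0.6):
    """feature_confidences: dict mapping surface feature name -> confidence in [0, 1]."""
    return {feature: conf
            for feature, conf in feature_confidences.items()
            if conf >= confidence_threshold}

# Example: {'lodging': 0.82, 'pest': 0.35} -> {'lodging': 0.82}
```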
  • The recognition result of the surface features includes the surface disaster type, surface disaster region information, surface disaster degree information, and surface disaster acreage information.
  • The surface disaster type describes the type of disaster that occurs on the surface; the surface disaster region information describes where on the surface the disaster occurs; the surface disaster degree information describes the degree of damage in each affected region; and the surface disaster acreage information describes the area (size) of each affected region.
  • The types of disasters include, but are not limited to, lodging, plant diseases and insect pests, floods, and droughts, which are not specifically limited in this application.
  • By processing the multiple color channel information and the image depth information in the surface image information, the surface feature recognition method provided by the foregoing embodiment obtains a feature map containing surface semantic information, and the surface semantic information in the feature map allows the recognition result of the surface features to be determined accurately; the entire recognition process requires no human involvement, which improves the accuracy and convenience of surface feature recognition.
  • FIG. 4 is a schematic flowchart of the steps of another method for identifying ground features according to an embodiment of the present application.
  • the method for identifying features of the land surface includes steps S201 to S205.
  • S201. Obtain ground surface image information, where the ground surface image information includes image information of multiple color channels and image depth information.
  • The ground surface image information is obtained by fusing the image information of multiple color channels with the image depth information, and the image information of multiple color channels includes at least the three channels R, G, and B.
  • In this way, the ground surface image information is enriched, which indirectly improves the accuracy of ground surface feature recognition.
  • S202 Process the multiple color channel information and image depth information to obtain a feature map containing semantic information of the ground surface.
  • After the surface image information is obtained, the multiple color channel information and the image depth information in the surface image information are processed to obtain a feature map containing the semantic information of the ground surface.
  • the surface semantic information includes each surface feature and the recognition probability value of each surface feature.
  • S203 Determine the recognition result of the surface feature according to the semantic information of the surface in the feature map.
  • The recognition result of the surface feature is determined according to the surface semantic information in the feature map. Specifically, the confidence of each surface feature is obtained from the surface semantic information and compared with a preset confidence threshold; the surface features whose confidence is greater than or equal to the confidence threshold are retained, and the recognition result of the surface features is determined based on them. It should be noted that the confidence threshold may be set based on actual conditions, which is not specifically limited in this application.
  • After the recognition result of the surface feature is determined, at least one historical recognition result of the surface feature is obtained.
  • the historical recognition result is the recognition result of the surface features determined before the current moment.
  • the historical recognition result is stored in the local disk, or the historical recognition result is stored in the cloud.
  • the recognition result of the surface feature can be stored in regions according to the geographic location information of the recognition result of the surface feature, and the storage can also be further divided into surface regions under the dimension of the region to facilitate subsequent queries.
  • the surface change trend includes, but is not limited to, the change trend of plant diseases and insect pests, the change trend of lodging, the change trend of floods, and the change trend of drought.
  • the change trend of plant diseases and insect pests includes the continuous spread of plant diseases and insect pests and the weakening of the intensity of plant diseases and insect pests.
  • The change trend of floods includes the continued spread of the flood and the weakening of its intensity, and the change trend of droughts includes the continued spread of the drought and the weakening of its intensity.
  • Specifically, the first determination time point of the recognition result of the surface feature and the second determination time point of each historical recognition result are acquired; the recognition result and the historical recognition results are sorted according to the first determination time point and each second determination time point to obtain a recognition result queue; multiple candidate surface change trends are determined from every two adjacent recognition results in the queue; and the multiple candidate surface change trends are processed to obtain the surface change trend.
  • A candidate surface change trend is determined as follows: every two adjacent recognition results in the recognition result queue are obtained and compared with each other to obtain a candidate surface change trend. It should be noted that the earlier the determination time point of a recognition result, the closer it is to the front of the recognition result queue, and the later the determination time point, the further back it is in the queue.
  • The candidate surface change trends are processed as follows: the time order corresponding to each candidate surface change trend is obtained, and the candidate surface change trends are connected in turn according to that time order, thereby obtaining the surface change trend.
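  • The sketch below illustrates the queueing and pairwise comparison described above, assuming each recognition result carries a determination time point and a numeric disaster acreage; the comparison rule (a larger acreage means the disaster is spreading) is a simplifying assumption used only for illustration.

```python
def surface_change_trend(results):
    """results: list of dicts with hypothetical keys 'time', 'disaster_type', 'area_m2'."""
    # Earlier determination time points go to the front of the recognition result queue.
    queue = sorted(results, key=lambda r: r["time"])

    candidate_trends = []
    for prev, curr in zip(queue, queue[1:]):
        if curr["area_m2"] > prev["area_m2"]:
            candidate_trends.append((curr["time"], f"{curr['disaster_type']} spreading"))
        else:
            candidate_trends.append((curr["time"], f"{curr['disaster_type']} weakening"))

    # Connect the candidate trends in time order to obtain the overall change trend.
    return " -> ".join(trend for _, trend in sorted(candidate_trends))
```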
  • The land surface feature recognition method provided by the above embodiment obtains previously determined recognition results after the current recognition result of the land surface feature has been accurately determined; from the currently determined recognition result and the previously determined recognition results, the surface change trend can be determined accurately, which facilitates user decision-making and greatly improves the user experience.
  • FIG. 5 is a schematic flowchart of the steps of yet another method for identifying ground features according to an embodiment of the present application.
  • the land surface feature recognition method includes step S301 to step S305.
  • S301. Obtain ground surface image information, where the ground surface image information includes image information of multiple color channels and image depth information.
  • The ground surface image information is obtained by fusing the image information of multiple color channels with the image depth information, and the image information of multiple color channels includes at least the three channels R, G, and B.
  • In this way, the ground surface image information is enriched, which indirectly improves the accuracy of ground surface feature recognition.
  • S302. Process the multiple color channel information and image depth information to obtain a feature map containing semantic information of the ground surface.
  • After the surface image information is obtained, the multiple color channel information and the image depth information in the surface image information are processed to obtain a feature map containing the semantic information of the ground surface.
  • the surface semantic information includes each surface feature and the recognition probability value of each surface feature.
  • The recognition result of the surface feature is determined according to the surface semantic information in the feature map. Specifically, the confidence of each surface feature is obtained from the surface semantic information and compared with a preset confidence threshold; the surface features whose confidence is greater than or equal to the confidence threshold are retained, and the recognition result of the surface features is determined based on them. It should be noted that the confidence threshold may be set based on actual conditions, which is not specifically limited in this application.
  • The recognition result of the surface features includes the surface disaster type, surface disaster region information, surface disaster degree information, and surface disaster acreage information.
  • The surface disaster type describes the type of disaster that occurs on the surface; the surface disaster region information describes where on the surface the disaster occurs; the surface disaster degree information describes the degree of damage in each affected region; and the surface disaster acreage information describes the area (size) of each affected region.
  • the types of disasters include but are not limited to lodging, plant diseases and insect pests, floods and droughts, which are not specifically limited in this application.
  • the three-dimensional surface map is generated based on a three-dimensional construction algorithm, and the three-dimensional construction algorithm can be set based on actual conditions, which is not specifically limited in this application.
  • S305. Mark the three-dimensional surface map according to the surface disaster region information, the surface disaster degree information, and the surface disaster acreage information, to obtain a target three-dimensional map marked with the disaster regions, the disaster degrees, and the disaster acreages.
  • the target three-dimensional map can be stored; and/or the target three-dimensional map is sent to the terminal device for the terminal device to display the target three-dimensional map; and/or the target three-dimensional map is sent to the cloud for the cloud Store a three-dimensional map of the target.
  • Specifically, each disaster region is marked in the three-dimensional surface map according to the surface disaster region information; that is, the geographic location information of each disaster region is obtained from the surface disaster region information, and each disaster region is marked on the three-dimensional surface map accordingly. The disaster degree corresponding to each disaster region is marked according to the surface disaster degree information. The disaster acreage corresponding to each disaster region is obtained from the surface disaster acreage information and is marked within that disaster region on the three-dimensional surface map.
  • the marking method of the affected area, the extent of the disaster, and the affected area can be set based on the actual situation, which is not specifically limited in this application.
  • the marking method of the degree of damage is color marking.
  • The color corresponding to each disaster region is determined as follows: the disaster degree corresponding to each region is obtained from the surface disaster degree information, the pre-stored mapping relationship table between disaster degree and color is obtained, and the table is queried to determine the disaster degree color corresponding to each disaster region; the disaster degree of each disaster region is then marked with its corresponding disaster degree color.
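  • A minimal sketch of this color-marking step; the mapping table between disaster degree and color is an assumed example, since the disclosure only states that such a pre-stored mapping is queried.

```python
# Assumed pre-stored mapping between disaster degree and marking color (RGB).
DEGREE_TO_COLOR = {
    "mild": (255, 255, 0),      # yellow
    "moderate": (255, 128, 0),  # orange
    "severe": (255, 0, 0),      # red
}

def color_marks_for_regions(region_degrees):
    """region_degrees: dict mapping disaster region id -> disaster degree string."""
    return {region: DEGREE_TO_COLOR.get(degree, (128, 128, 128))  # grey if degree unknown
            for region, degree in region_degrees.items()}

# Example: {'region_1': 'severe'} -> {'region_1': (255, 0, 0)}
```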
  • After the recognition result of the surface feature is determined, the surface feature recognition method provided by the above embodiment marks the three-dimensional surface map according to the surface disaster region information, the surface disaster degree information, and the surface disaster acreage information in the recognition result, obtaining a target three-dimensional map marked with the disaster regions, disaster degrees, and disaster acreages. The user can thus learn the disaster regions, disaster degrees, and disaster acreages quickly and simply from the target three-dimensional map, which is convenient for reference and improves the user experience.
  • FIG. 6 is a schematic structural diagram of a drone provided by an embodiment of the present application.
  • The UAV can be a rotary-wing UAV, such as a quadrotor, hexarotor, or octorotor UAV, a fixed-wing UAV, or a combination of a rotary-wing UAV and a fixed-wing UAV, which is not limited here.
  • the drone 400 includes a spraying device 401 and a processor 402.
  • The drone 400 is used for spraying liquids such as pesticides and water on crops, forests, and the like.
  • the unmanned aerial vehicle 400 can realize movement, rotation, turning, etc., and can drive the spraying device 401 to move to different positions or different angles to perform spraying operations in a preset area.
  • the processor 402 is installed inside the drone 400 and is not visible in FIG. 6.
  • The spraying device 401 includes a pump assembly 4011, a liquid supply tank 4012, a spray head assembly 4013, and a liquid guide tube 4014.
  • the liquid supply tank 4012 communicates with the pump assembly 4011.
  • the spray head assembly 4013 is used to implement spraying operations.
  • the liquid guide tube 4014 is connected with the pump assembly 4011 and the spray head assembly 4013 and is used to transport the liquid pumped from the pump assembly 4011 to the spray head assembly 4013.
  • the number of the nozzle assembly 4013 is at least one, which can be one, two, three, four or more, which is not limited in this application.
  • the processor 402 may be a micro-controller unit (MCU), a central processing unit (CPU), a digital signal processor (Digital Signal Processor, DSP), or the like.
  • FIG. 7 is a schematic flow chart of the steps of a drone performing a spraying task according to an embodiment of the application.
  • the processor 402 is configured to implement step S401 to step S402.
  • Specifically, the drone 400 acquires a flying spraying task, where the flying spraying task includes a flying spraying route and the spraying parameters of each waypoint, and the spraying parameters include spraying time, spraying angle, spraying flow rate, and spray box label.
  • The recognition result of the surface feature is obtained from a local disk, a ground terminal, or a server, where the recognition result includes the surface disaster region information and the surface disaster degree information; the corresponding flying spraying task is then generated according to the surface disaster region information and the surface disaster degree information.
  • The flying spraying task is generated as follows: the waypoint information of the flight spraying route to be planned is determined according to the surface disaster region information, and the corresponding flight spraying route is generated according to the waypoint information; the spraying parameters of each waypoint on the flight spraying route are then set according to the surface disaster degree information, thereby generating the corresponding flying spraying task.
  • The waypoint information is determined as follows: the shape and area of the disaster region are determined according to the surface disaster region information; the route type of the flight spraying route to be planned is determined according to the shape of the disaster region; the number of waypoints of the route to be planned is determined according to the area of the disaster region; and the waypoint information of the route to be planned is determined according to the route type, the surface disaster region information, and the number of waypoints.
  • The shape and area of the disaster region are determined as follows: the contour information of the surface disaster region and the geographic location of each contour point are obtained from the surface disaster region information, and the area of the disaster region is determined according to the geographic locations of the contour points; the contour shape of the surface disaster region is determined according to the contour information, the similarity between the contour shape and each preset shape is calculated, and the preset shape with the highest similarity is taken as the shape of the disaster region.
  • the method for determining the route type is specifically: obtaining the mapping relationship table between the pre-stored shape and the route type, and querying the mapping relationship table, obtaining the route type corresponding to the shape of the disaster-affected area, and using the obtained route type as The route type of the spraying route to be planned.
  • Route types include strip routes and loop routes. It should be noted that the above-mentioned mapping relationship table between the shape and the route type can be set based on the actual situation, which is not specifically limited in this application.
  • The number of waypoints is determined as follows: a pre-stored mapping relationship table between area and number of waypoints is obtained and queried to find the number of waypoints corresponding to the area of the disaster region, and that number is used as the number of waypoints of the spraying route to be planned. It should be noted that the mapping relationship table between area and number of waypoints can be set based on actual conditions, which is not specifically limited in this application.
  • The waypoint information is further determined as follows: a pre-stored map is obtained, and the corresponding surface disaster region is marked in the pre-stored map according to the surface disaster region information; the area of the marked surface disaster region is calculated, and the distance between waypoints is determined according to that area and the number of waypoints; each waypoint is then marked in the surface disaster region in turn according to the distance and the route type, yielding the marking order of the waypoints and the geographic location of each waypoint within the disaster region; the marking order and the geographic locations are used as the waypoint sequence and positions, thereby obtaining the waypoint information of the flight spraying route to be planned.
  • The flight spraying route is generated as follows: the navigation order and position of each waypoint are obtained from the waypoint information, and the waypoint positions are connected in turn according to the navigation order to generate the corresponding flight spraying route.
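  • An illustrative sketch of turning waypoint information into a flight spraying route by connecting waypoint positions in navigation order; the waypoint structure is a hypothetical simplification of what a mission file might contain.

```python
def build_spray_route(waypoints):
    """waypoints: list of dicts with hypothetical keys 'order', 'lat', 'lon'."""
    # Connect waypoint positions in navigation order to form the flight spraying route.
    ordered = sorted(waypoints, key=lambda wp: wp["order"])
    return [(wp["lat"], wp["lon"]) for wp in ordered]

# For a circumnavigation (loop) route, the route can be closed by appending the first point:
# route = build_spray_route(waypoints)
# loop_route = route + [route[0]]
```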
  • the flight spraying route includes a circumnavigation route and/or a strip route.
  • FIG. 8 is a schematic diagram of the flying spraying route in an embodiment of the application. As shown in FIG. 8, the flying spraying route is a circumnavigation route that includes four waypoints, namely waypoint A, waypoint B, waypoint C, and waypoint D, with the navigation order waypoint A → waypoint B → waypoint C → waypoint D. This generates the loop route waypoint A → waypoint B → waypoint C → waypoint D → waypoint A enclosed by the four waypoints.
  • FIG. 9 is a schematic diagram of the flying spraying route in an embodiment of the application. As shown in FIG. 9, the flying spraying route is a strip route that likewise includes four waypoints.
  • The flying spraying task can be generated as follows: a pre-stored mapping relationship table between surface disaster degree and spraying parameters is obtained; the spraying parameters of each waypoint on the flight spraying route are determined according to the surface disaster degree information and the mapping relationship table; and the spraying parameters of each waypoint are set accordingly to generate the corresponding flying spraying task. The mapping relationship table between surface disaster degree and spraying parameters can be set based on actual conditions, which is not specifically limited in this application.
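  • A minimal sketch of setting per-waypoint spraying parameters from a pre-stored mapping between surface damage degree and spraying parameters; the table values and key names are assumptions for illustration only.

```python
# Assumed mapping table between surface damage degree and spraying parameters.
DEGREE_TO_SPRAY_PARAMS = {
    "mild":     {"spray_time_s": 5,  "flow_rate_l_min": 1.0, "spray_box": 1},
    "moderate": {"spray_time_s": 8,  "flow_rate_l_min": 1.5, "spray_box": 2},
    "severe":   {"spray_time_s": 12, "flow_rate_l_min": 2.0, "spray_box": 3},
}

def spraying_task(route_waypoints, waypoint_damage_degrees):
    """Attach spraying parameters to each waypoint according to its local damage degree."""
    task = []
    for wp in route_waypoints:                       # wp: dict with hypothetical key 'id'
        degree = waypoint_damage_degrees.get(wp["id"], "mild")
        task.append({**wp, "spray_params": DEGREE_TO_SPRAY_PARAMS[degree]})
    return task
```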
  • The flying spraying task can also be generated as follows: the disaster spread boundary of the surface disaster region is determined according to the obtained surface disaster degree information, and the positional relationship between each waypoint and the disaster spread boundary is determined; the spraying parameters of each waypoint on the flight spraying route are determined according to that positional relationship, so that the spraying time, spraying concentration, and/or spraying flow rate of the waypoints on the spread side of the disaster spread boundary are greater than those of the waypoints on the to-be-spread side; the spraying parameters of each waypoint on the flight spraying route are then set according to the determined values to generate the corresponding flying spraying task. The surface disaster region includes a spread side and a to-be-spread side located on either side of the disaster spread boundary, and the degree of disaster on the spread side is greater than that on the to-be-spread side. With the spraying parameters determined in this way, the UAV can spray the surface disaster region so as to suppress or delay the continued spread of the disaster.
  • FIG. 10 is a schematic diagram of the disaster spread boundary in an embodiment of the present application.
  • The spread side of the disaster spread boundary is the side where surface disaster region A is located, the to-be-spread side is the side where surface region B is located, and the disaster spreads from surface disaster region A toward surface region B.
  • the crops of the surface disaster area A and the surface area B can be the same or different.
  • the spraying parameters also include spray box labels, which are used to identify spray boxes.
  • The drone includes at least two spray boxes, and the spray boxes hold different pesticide types and/or pesticide concentrations, corresponding to different degrees of surface damage. The higher the degree of surface damage, the higher the corresponding pesticide concentration; the lower the degree of surface damage, the lower the corresponding pesticide concentration.
  • The spray box label of each waypoint therefore corresponds to the pesticide type and/or pesticide concentration to be used at that waypoint.
  • At least two drones can be used to coordinate the flight spraying task.
  • Each drone is responsible for one spraying area within the surface disaster region, and the spraying operation areas of the at least two drones overlap.
  • The overlapping area is the area corresponding to severe surface damage.
  • In the overlapping area, the drones fly at different heights or spray at different time points, or avoidance can be achieved through sensors, which prevents the drones from colliding while spraying there. Since at least two drones spray the overlapping area (the severely damaged area), the treatment effect on the severely damaged area is improved, and the spraying of the surface disaster region can be completed quickly so as to suppress or delay the continued spread of the disaster.
  • Taking two drones as an example, the first drone is responsible for one spraying area in the surface disaster region, and the second drone is responsible for another spraying area in the surface disaster region.
  • The spraying area of the first drone and the spraying area of the second drone overlap, and the overlapping area is the area corresponding to severe surface damage; in the overlapping area, the first drone and the second drone are located at different heights, or their spraying time points are different.
  • the first UAV and the second UAV can avoid obstacles through sensors, which can prevent the first UAV and the second UAV from colliding when spraying in the overlapping area.
  • Figure 11 is a schematic diagram of the overlap of the spraying operation area in an embodiment of the present application.
  • The surface disaster region includes spraying operation area A and spraying operation area B, and the overlapping area of spraying operation area A and spraying operation area B is area C.
  • The first drone is responsible for spraying operation area A, the second drone is responsible for spraying operation area B, and both drones perform spraying operations in the overlapping area C.
  • FIG. 12 is another schematic diagram of the overlap of the spraying operation area in an embodiment of the present application.
  • The determined disaster spreading direction of the surface disaster region is from surface disaster area A toward surface area B.
  • Four drones are each assigned a flight spraying area, and each plans a flight spraying route within its own area.
  • The flight spraying area of UAV 1 is a, that of UAV 2 is b, that of UAV 3 is c, and that of UAV 4 is d, and overlap exists among flight spraying areas a, b, c, and d.
  • UAV 1, UAV 2 and UAV 3 are mainly responsible for spraying on the surface affected area A on the spread side
  • UAV 4 is mainly responsible for spraying on the surface area B on the side to be spread
  • The overlapping area contains part of disaster-affected area A and part of surface area B.
  • The above embodiment is only an exemplary description of a spraying operation performed by multiple drones in coordination; the number of drones can be set flexibly according to actual needs, for example two, three, four, or five drones, which is not limited in this application.
  • S402 Execute the flying spraying task, and control the spraying device to execute a corresponding spraying action according to the spraying parameters in the flying spraying task.
  • The drone 400 obtains the flying spraying task, executes it, and controls the spraying device to perform the corresponding spraying actions according to the spraying parameters in the task; that is, it obtains the flying spraying route and the spraying parameters of each waypoint from the flying spraying task, flies according to the flying spraying route, and, during the flight, controls the spraying device 401 to perform the corresponding spraying actions according to the spraying parameters of each waypoint, thereby completing the flying spraying task.
  • In this way, UAVs can perform flying spraying tasks determined from the recognition results of ground surface features, automatically spraying pesticides on, or watering, crops or fruit trees, and thereby preventing and controlling lodging, diseases and insect pests, or water shortage.
  • the application also provides a ground feature recognition device.
  • FIG. 13 is a schematic block diagram of a surface feature recognition device provided by an embodiment of the present application.
  • the surface feature recognition device 500 includes a processor 501 and a memory 502, and the processor 501 and the memory 502 are connected by a bus 503, which is, for example, an I2C (Inter-integrated Circuit) bus.
  • the surface feature recognition device 500 can be a ground control platform, a server or a drone.
  • The ground control platform includes laptop computers and desktop (PC) computers.
  • the server can be a single server or a server cluster composed of multiple servers.
  • The UAV may be a rotary-wing UAV, such as a quadrotor, hexarotor, or octorotor UAV; it may also be a fixed-wing UAV, or a combination of a rotary-wing UAV and a fixed-wing UAV, which is not limited here.
  • the processor 501 may be a micro-controller unit (MCU), a central processing unit (CPU), a digital signal processor (Digital Signal Processor, DSP), or the like.
  • the memory 502 may be a Flash chip, a read-only memory (ROM, Read-Only Memory) disk, an optical disk, a U disk, or a mobile hard disk.
  • the processor 501 is configured to run a computer program stored in the memory 502, and implement the following steps when the computer program is executed:
  • obtain ground surface image information, where the ground surface image information includes image information of multiple color channels and image depth information; process the multiple color channel information and the image depth information to obtain a feature map containing ground surface semantic information; and determine the recognition result of the ground surface feature according to the ground surface semantic information in the feature map.
  • the surface image information includes a top view and a front view.
  • the image depth information is height information in the top and front view.
  • the surface image information includes geographic location information corresponding to the surface image.
  • the geographic location information includes positioning information obtained through a global satellite navigation and positioning system
  • the image information of the multiple color channels includes at least R, G, and B three-channel information.
  • the image depth information is determined based on a binocular ranging algorithm and image information of the multiple color channels.
  • the image depth information is determined based on a monocular ranging algorithm and an associated frame of the image information of the multiple color channels.
  • When the processor implements the processing of the multiple color channel information and the image depth information to obtain a feature map containing the ground surface semantic information, it is configured to: fuse the multiple color channel information and the image depth information to obtain a fused image block; match the fused image block with the image blocks in a preset image block set to obtain the matching degree between the fused image block and each image block; and determine the feature map containing the ground surface semantic information according to the matching degrees.
  • Alternatively, when the processor implements the processing of the multiple color channel information and the image depth information to obtain a feature map containing the ground surface semantic information, it is configured to: fuse the multiple color channel information and the image depth information to obtain a fused image block; and process the fused image block through a pre-trained neural network to obtain the feature map containing the ground surface semantic information.
  • When the processor implements the acquisition of ground surface image information, it is configured to: acquire a ground surface image set and generate a corresponding depth map according to each ground surface image in the set; and process each ground surface image in the ground surface image set together with the depth map to obtain the ground surface image information.
  • When the processor implements the processing of each ground surface image and the depth map in the ground surface image set to obtain ground surface image information, it is configured to: stitch the ground surface images in the set to obtain a stitched ground surface image; and fuse the depth map and the stitched ground surface image to obtain the ground surface image information.
  • When the processor implements the stitching of each ground surface image in the ground surface image set to obtain a stitched ground surface image, it is configured to: determine the stitching parameters corresponding to each ground surface image; and stitch each ground surface image according to its corresponding stitching parameters to obtain the stitched ground surface image.
  • When the processor implements the determination of the stitching parameters corresponding to each ground surface image, it is configured to: acquire the aerial photography time point and aerial photography position of each ground surface image; determine the stitching order of each ground surface image according to the aerial photography time points; and determine the stitching relationship corresponding to each ground surface image according to the aerial photography positions.
  • After the processor implements the determination of the recognition result of the ground surface feature according to the ground surface semantic information in the feature map, it is further configured to: acquire at least one historical recognition result of the ground surface feature, and determine a ground surface change trend according to the recognition result of the ground surface feature and the at least one historical recognition result.
  • When the processor implements the determination of the ground surface change trend according to the recognition result of the ground surface feature and the at least one historical recognition result, it is configured to: acquire the first determination time point of the recognition result and the second determination time point of each historical recognition result; sort the recognition result and the historical recognition results according to these time points to obtain a recognition result queue; determine multiple candidate ground surface change trends according to every two adjacent recognition results in the queue; and process the multiple candidate ground surface change trends to obtain the ground surface change trend.
  • After the processor implements the determination of the recognition result of the ground surface feature according to the ground surface semantic information in the feature map, where the recognition result includes the surface disaster region information, the surface disaster degree information, and the surface disaster acreage information, it is further configured to: generate a three-dimensional surface map, and mark the three-dimensional surface map according to the surface disaster region information, the surface disaster degree information, and the surface disaster acreage information, to obtain a target three-dimensional map marked with the disaster regions, disaster degrees, and disaster acreages.
  • When the processor implements marking the three-dimensional surface map according to the surface disaster region information, the surface disaster degree information, and the surface disaster acreage information, it is configured to: mark each disaster region in the three-dimensional surface map according to the surface disaster region information; mark the disaster degree corresponding to each disaster region according to the surface disaster degree information; and mark the disaster acreage corresponding to each disaster region according to the surface disaster acreage information.
  • When the processor implements marking the disaster degree corresponding to each disaster region according to the surface disaster degree information, it is configured to: determine the disaster degree color corresponding to each disaster region, and mark the disaster degree of each disaster region according to its corresponding disaster degree color.
  • After the processor implements marking the three-dimensional surface map according to the surface disaster region information, the surface disaster degree information, and the surface disaster acreage information to obtain the target three-dimensional map marked with the disaster regions, disaster degrees, and disaster acreages, it is further configured to: store the target three-dimensional map; and/or send the target three-dimensional map to a terminal device for the terminal device to display; and/or send the target three-dimensional map to the cloud so that the cloud stores the target three-dimensional map.
• The embodiments of the present application also provide a computer-readable storage medium. The computer-readable storage medium stores a computer program, the computer program includes program instructions, and a processor executes the program instructions to implement the steps of the surface feature recognition method provided in the foregoing embodiments.
• The computer-readable storage medium may be an internal storage unit of the surface feature recognition device described in any of the foregoing embodiments, such as a hard disk or a memory of the surface feature recognition device.
• The computer-readable storage medium may also be an external storage device of the surface feature recognition device, such as a plug-in hard disk equipped on the surface feature recognition device, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash Card, or the like.

Abstract

An earth surface feature identification method and device, an unmanned aerial vehicle, and a computer-readable storage medium. The method comprises: obtaining earth surface image information (S101); processing a plurality of pieces of color channel information and image depth information to obtain a feature map containing earth surface semantic information (S102); and determining an earth surface feature identification result according to the earth surface semantic information in the feature map (S103). The method improves the accuracy and convenience of earth surface feature identification.

Description

Surface feature recognition method, device, unmanned aerial vehicle, and computer-readable storage medium
Technical Field
This application relates to the field of artificial intelligence, and in particular to a surface feature recognition method, device, unmanned aerial vehicle, and computer-readable storage medium.
Background Art
With the rapid development of China's UAV manufacturing industry, UAVs are being adopted rapidly in fields such as agriculture, aerial surveying, power line inspection, natural gas (oil) pipeline inspection, forest fire prevention, emergency rescue and disaster relief, and smart cities. In the agricultural field, UAVs can be used to spray pesticides on crops automatically.
At present, various natural disasters as well as plant diseases and insect pests have a considerable impact on the land surface. By periodically observing changes on the surface, such as the growth of plants, people can identify the corresponding surface features and then determine whether the land has been affected by natural disasters, diseases, or pests. However, this requires considerable time and labor to identify the desired surface features, and the accuracy of the identification results cannot be guaranteed. Therefore, how to improve the accuracy and convenience of surface feature recognition is a problem that urgently needs to be solved.
Summary of the Invention
In view of this, the present application provides a surface feature recognition method, device, unmanned aerial vehicle, and computer-readable storage medium, aiming to improve the accuracy and convenience of the recognition results of surface features.
In a first aspect, the present application provides a surface feature recognition method, including:
acquiring surface image information, where the surface image information includes image information of multiple color channels and image depth information;
processing the multiple pieces of color channel information and the image depth information to obtain a feature map containing surface semantic information; and
determining a recognition result of surface features according to the surface semantic information in the feature map.
In a second aspect, the present application also provides an unmanned aerial vehicle. The unmanned aerial vehicle includes a spraying device and a processor, and the processor is configured to implement the following steps:
acquiring a flight spraying task, where the flight spraying task is determined according to the recognition result of surface features; and
executing the flight spraying task, and controlling the spraying device to perform the corresponding spraying action according to the spraying parameters in the flight spraying task.
In a third aspect, the present application also provides a surface feature recognition device. The surface feature recognition device includes a memory and a processor;
the memory is configured to store a computer program; and
the processor is configured to execute the computer program and, when executing the computer program, implement the following steps:
acquiring surface image information, where the surface image information includes image information of multiple color channels and image depth information;
processing the multiple pieces of color channel information and the image depth information to obtain a feature map containing surface semantic information; and
determining a recognition result of surface features according to the surface semantic information in the feature map.
In a fourth aspect, the present application also provides a computer-readable storage medium. The computer-readable storage medium stores a computer program that, when executed by a processor, causes the processor to implement the surface feature recognition method described above.
The embodiments of the present application provide a surface feature recognition method, an unmanned aerial vehicle, and a computer-readable storage medium. By processing the multiple pieces of color channel information and the image depth information in the surface image information, a feature map containing surface semantic information can be obtained, and from the surface semantic information in the feature map the recognition result of surface features can be accurately determined. The entire recognition process requires no manual participation, which improves the accuracy and convenience of surface feature recognition.
It should be understood that the above general description and the following detailed description are only exemplary and explanatory, and do not limit the present application.
Description of the Drawings
In order to explain the technical solutions of the embodiments of the present application more clearly, the drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from these drawings without creative work.
FIG. 1 is a schematic flowchart of the steps of a surface feature recognition method provided by an embodiment of the present application;
FIG. 2 is a schematic flowchart of sub-steps of the surface feature recognition method in FIG. 1;
FIG. 3 is a schematic diagram of stitching surface images in an embodiment of the present application;
FIG. 4 is a schematic flowchart of the steps of another surface feature recognition method provided by an embodiment of the present application;
FIG. 5 is a schematic flowchart of the steps of yet another surface feature recognition method provided by an embodiment of the present application;
FIG. 6 is a schematic structural diagram of an unmanned aerial vehicle provided by an embodiment of the present application;
FIG. 7 is a schematic flowchart of the steps of an unmanned aerial vehicle performing a spraying task provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of a flight spraying route in an embodiment of the present application;
FIG. 9 is another schematic diagram of a flight spraying route in an embodiment of the present application;
FIG. 10 is a schematic diagram of a disaster spread boundary in an embodiment of the present application;
FIG. 11 is a schematic diagram of overlapping spraying operation areas in an embodiment of the present application;
FIG. 12 is another schematic diagram of overlapping spraying operation areas in an embodiment of the present application; and
FIG. 13 is a schematic block diagram of a surface feature recognition device provided by an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below in conjunction with the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present application, rather than all of them. Based on the embodiments in the present application, all other embodiments obtained by those of ordinary skill in the art without creative work fall within the protection scope of the present application.
The flowcharts shown in the drawings are only examples; they do not necessarily include all contents and operations/steps, nor do they have to be executed in the order described. For example, some operations/steps can be decomposed, combined, or partially merged, so the actual execution order may change according to actual conditions.
The surface feature recognition method provided in this application can be applied to a ground control platform, a server, and/or an unmanned aerial vehicle, and is used to recognize surface features. The ground control platform includes a laptop computer, a PC, and the like; the server may be a single server or a server cluster composed of multiple servers; and the unmanned aerial vehicle may be a rotary-wing UAV, such as a quad-rotor, hexa-rotor, or octa-rotor UAV, a fixed-wing UAV, or a combination of a rotary-wing and a fixed-wing UAV, which is not limited here.
Some embodiments of the present application are described in detail below with reference to the accompanying drawings. In the case of no conflict, the following embodiments and the features in the embodiments can be combined with each other.
Please refer to FIG. 1. FIG. 1 is a schematic flowchart of the steps of a surface feature recognition method provided by an embodiment of the present application.
Specifically, as shown in FIG. 1, the surface feature recognition method includes steps S101 to S103.
S101. Acquire surface image information, where the surface image information includes image information of multiple color channels and image depth information.
When surface features need to be recognized, the required surface image information is acquired. The surface image information is obtained by fusing the image information of multiple color channels with the image depth information, and the image information of the multiple color channels includes at least the three R, G, and B channels. Fusing the image information of multiple color channels with the image depth information enriches the surface image information and indirectly improves the accuracy of surface feature recognition.
The surface image information also includes a top-down orthographic view, and the image depth information is the height information in the top-down orthographic view. The surface images are obtained by aerial photography. During aerial photography, because of the tilt of the movable platform and other reasons, the captured surface image may not be a normal top-down image. Converting the surface image into a top-down orthographic surface image ensures the accuracy of the surface image information and improves the accuracy of surface feature recognition.
The surface image information also includes geographic location information corresponding to the surface image. The geographic location information includes positioning information obtained through a global navigation satellite system and/or positioning information obtained through a real-time differential positioning system. During aerial photography, the movable platform can obtain the geographic location information of the surface image through the global navigation satellite system or the real-time differential positioning system, which further enriches the surface image information and also facilitates subsequent queries of the region to which the recognized surface features belong.
The image depth information may be determined based on a binocular ranging algorithm and the image information of the multiple color channels, or based on a monocular ranging algorithm and associated frames of the image information of the multiple color channels. An associated frame is an image frame that overlaps another frame in the image information of the multiple color channels; the two overlapping frames are treated as two views of the same scene, the disparity between them is calculated, and the image depth information is then determined from the disparity. It should be noted that the binocular ranging algorithm can be set based on actual conditions, which is not specifically limited in this application; for example, the binocular ranging algorithm may be a semi-global matching (SGM) algorithm.
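For illustration only, the following Python sketch shows how such a disparity-based depth estimate could be computed with OpenCV's semi-global block matching implementation; the file names, focal length, and baseline are assumed values and are not part of the original disclosure.

```python
import cv2
import numpy as np

# Two overlapping, rectified aerial frames (hypothetical file names).
left = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("frame_0002.png", cv2.IMREAD_GRAYSCALE)

# Semi-global matching as implemented in OpenCV (StereoSGBM).
matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,   # must be divisible by 16
    blockSize=5,
    P1=8 * 5 * 5,         # smoothness penalties
    P2=32 * 5 * 5,
    uniquenessRatio=10,
)
# OpenCV returns fixed-point disparities scaled by 16.
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# Depth from disparity: depth = f * B / d, with focal length f in pixels and
# baseline B in metres between the two camera positions (assumed values).
focal_px, baseline_m = 1200.0, 2.5
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = focal_px * baseline_m / disparity[valid]
```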
In an embodiment, as shown in FIG. 2, step S101 includes sub-step S1011 and sub-step S1012.
S1011. Acquire a surface image set, and generate a corresponding depth map according to each surface image in the surface image set.
The surface image set is acquired locally or from the cloud. The surface image set includes multiple surface images, each of which is a top-down orthographic surface image, and a corresponding depth map is generated according to each surface image in the surface image set. The surface image set can be obtained by aerial photography of the surface with a UAV. Specifically, the UAV acquires a surface aerial photography task, where the surface aerial photography task includes an aerial photography flight route and aerial photography parameters; the UAV executes the surface aerial photography task to photograph the surface and obtain the surface image set, where the surface images in the surface image set include geographic location information. The UAV can store the surface image set obtained by aerial photography locally, or upload it to the cloud.
The surface aerial photography task is obtained as follows: the UAV acquires a surface aerial photography task file, where the surface aerial photography task file includes waypoint information and the aerial photography parameters of each waypoint; an aerial photography flight route is generated according to the waypoint information, where multiple waypoints are marked on the aerial photography flight route; and the aerial photography parameters of each waypoint on the aerial photography flight route are set according to the aerial photography parameters of each waypoint, so as to generate the surface aerial photography task. The aerial photography parameters include the aerial photography altitude and the aerial photography frequency, where the aerial photography frequency is used to control the camera to shoot continuously, and the waypoint information includes the position and order of each waypoint.
The depth map is generated as follows: based on the monocular ranging algorithm, a depth map corresponding to each surface image is generated according to the associated frames of each surface image in the surface image set, the depth maps corresponding to the surface images are stitched to obtain a stitched depth map, and the stitched depth map is used as the depth map corresponding to the surface image set. Alternatively, based on the binocular ranging algorithm, a depth map is generated for every two associated surface images in the surface image set, the depth maps corresponding to every two associated surface images are stitched to obtain a stitched depth map, and the stitched depth map is used as the depth map corresponding to the surface image set.
S1012. Process each surface image in the surface image set and the depth map to obtain the surface image information.
After the depth map corresponding to the surface image set is generated, each surface image in the surface image set and the depth map are processed to obtain the surface image information. Specifically, the surface images in the surface image set are stitched to obtain a stitched surface image, and the depth map and the stitched surface image are fused to obtain the surface image information, which includes the image information of multiple color channels and the image depth information.
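As a minimal sketch of this fusion step, assuming the stitched mosaic and the depth map have already been aligned to the same pixel grid, the two can simply be stacked into a four-channel array; the normalization choice is an assumption, not something specified by the application.

```python
import numpy as np

def fuse_rgb_and_depth(stitched_rgb: np.ndarray, depth: np.ndarray) -> np.ndarray:
    """Stack the stitched RGB mosaic (H, W, 3) and the depth map (H, W) into a
    single four-channel array (H, W, 4) serving as the surface image information."""
    assert stitched_rgb.shape[:2] == depth.shape, "mosaic and depth map must be aligned"
    # Scale depth to [0, 1] so all channels share a comparable numeric range.
    d = depth.astype(np.float32)
    d = (d - d.min()) / (d.max() - d.min() + 1e-6)
    rgb = stitched_rgb.astype(np.float32) / 255.0
    return np.dstack([rgb, d])
```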
The surface images are stitched as follows: the stitching parameters corresponding to each surface image are determined, where the stitching parameters include the stitching order and the stitching relationship; and the surface images are stitched according to their respective stitching parameters to obtain the stitched surface image. Please refer to FIG. 3, which is a schematic diagram of stitching surface images in an embodiment of the present application. As shown in FIG. 3, the right side of surface image A is stitched to the left side of surface image B, the right side of surface image B is stitched to the left side of surface image C, the upper side of surface image D is stitched to the lower side of surface image A, the right side of surface image E is stitched to the left side of surface image D, and the left side of surface image F is stitched to the right side of surface image E.
The stitching parameters are determined as follows: the aerial photography time and aerial photography position corresponding to each surface image are acquired; the stitching order corresponding to each surface image is determined according to its aerial photography time; and the stitching relationship corresponding to each surface image is determined according to its aerial photography position. It should be noted that the earlier a surface image was captured, the earlier it is placed in the stitching order, and the later it was captured, the later it is placed. From the aerial photography positions of the surface images, the positional relationship between the surface images can be determined, and this positional relationship is used as the stitching relationship corresponding to each surface image.
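The following hypothetical sketch illustrates one way to derive both stitching parameters: the capture times give the stitching order, and images whose capture positions lie within an assumed adjacency radius are linked as stitching neighbours. The radius value and the data layout are illustrative only.

```python
from dataclasses import dataclass, field

@dataclass
class SurfaceImage:
    name: str
    capture_time: float          # seconds since the start of the flight (assumed unit)
    position: tuple              # (easting, northing) in metres (assumed local frame)
    neighbors: list = field(default_factory=list)

def stitching_parameters(images, adjacency_radius=30.0):
    """Sort images by capture time (stitching order) and link images whose
    capture positions lie within `adjacency_radius` metres (stitching relationship)."""
    ordered = sorted(images, key=lambda im: im.capture_time)
    for i, a in enumerate(ordered):
        for b in ordered[i + 1:]:
            dx = a.position[0] - b.position[0]
            dy = a.position[1] - b.position[1]
            if (dx * dx + dy * dy) ** 0.5 <= adjacency_radius:
                a.neighbors.append(b.name)
                b.neighbors.append(a.name)
    return ordered   # stitching order; each item carries its stitching relationships
```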
S102. Process the multiple pieces of color channel information and the image depth information to obtain a feature map containing surface semantic information.
After the surface image information is acquired, the multiple pieces of color channel information and the image depth information in the surface image information are processed to obtain a feature map containing surface semantic information, where the surface semantic information includes each surface feature and the recognition probability value of each surface feature.
In an embodiment, the multiple pieces of color channel information and the image depth information are fused to obtain a fused image block; the fused image block is matched against the image blocks in a preset image block set to obtain the degree of matching between the fused image block and each image block; and the feature map containing the surface semantic information is determined according to the degree of matching between the fused image block and each image block. It should be noted that the preset image block set includes multiple image blocks annotated with surface features, and the image blocks in the preset image block set can be set based on actual conditions, which is not specifically limited in this application.
The fused image block is matched against an image block as follows: the fused image block is split into a preset number of fused image sub-blocks, and the image block is split into the same preset number of image sub-blocks, so that there is a one-to-one correspondence between the fused image sub-blocks and the image sub-blocks; the similarity between each fused image sub-block and its corresponding image sub-block is calculated, and the similarities of all sub-block pairs are accumulated to obtain the degree of matching between the fused image block and the image block.
The feature map is determined as follows: the image blocks whose degree of matching is greater than or equal to a preset matching threshold are taken as target image blocks, and the similarity between each image sub-block in each target image block and its corresponding fused sub-block, as well as the surface feature corresponding to each image sub-block, are obtained; the surface features corresponding to the image sub-blocks whose similarity is greater than or equal to a preset similarity threshold are obtained, and the similarity of the image sub-block corresponding to each such surface feature is used as its recognition probability value; and the surface feature and the recognition probability value are marked in the corresponding fused image sub-block of the fused image block, thereby obtaining the feature map containing the surface semantic information.
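Purely as an interpretation of this block-matching variant, the sketch below splits both blocks into a grid of sub-blocks, scores each pair with an assumed similarity metric (1 minus the mean absolute difference), sums the scores into a matching degree, and labels the sub-blocks that clear the thresholds; the grid size, metric, and threshold values are all assumptions.

```python
import numpy as np

def match_score(fused_block: np.ndarray, ref_block: np.ndarray, grid: int = 4):
    """Split both blocks into a grid x grid set of sub-blocks, score each pair with
    1 minus the mean absolute difference of values in [0, 1], and accumulate the
    scores into a single matching degree."""
    h, w = fused_block.shape[:2]
    sh, sw = h // grid, w // grid
    sims = np.zeros((grid, grid))
    for i in range(grid):
        for j in range(grid):
            a = fused_block[i * sh:(i + 1) * sh, j * sw:(j + 1) * sw]
            b = ref_block[i * sh:(i + 1) * sh, j * sw:(j + 1) * sw]
            sims[i, j] = 1.0 - np.abs(a - b).mean()
    return sims.sum(), sims  # matching degree, per-sub-block similarities

def annotate(fused_block, reference_blocks, match_thr=12.0, sim_thr=0.8):
    """Label each sub-block of the fused block with the surface feature of the
    best-matching reference sub-block; the similarity becomes the recognition
    probability value. `reference_blocks` is a list of (pixels, labels) pairs,
    where `labels` is a grid x grid array of surface feature names."""
    result = {}
    for pixels, labels in reference_blocks:
        score, sims = match_score(fused_block, pixels)
        if score < match_thr:
            continue  # not a target image block
        for (i, j), sim in np.ndenumerate(sims):
            if sim >= sim_thr and sim > result.get((i, j), ("", 0.0))[1]:
                result[(i, j)] = (labels[i][j], sim)
    return result  # {(row, col): (surface feature, recognition probability value)}
```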
In an embodiment, the multiple pieces of color channel information and the image depth information are fused to obtain a fused image block, and the fused image block is processed by a pre-trained neural network to obtain the feature map containing the surface semantic information. The pre-trained neural network can extract surface semantic information from the fused image block, thereby obtaining the feature map containing the surface semantic information. It should be noted that the neural network may be a convolutional neural network or a recurrent convolutional neural network, which is not specifically limited in this application.
The neural network is trained as follows: a large number of surface images annotated with surface semantic information are acquired, and normalization and data augmentation are applied to the annotated surface images to obtain sample data; the sample data is input into the neural network, and the neural network is trained until it converges, so that processing the surface image information with the converged neural network yields a feature map containing surface semantic information. Normalizing and augmenting the surface images annotated with surface semantic information ensures the processing quality of the trained neural network on surface images, yields accurate feature maps containing surface semantic information, and improves the accuracy of surface feature recognition.
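As an illustrative sketch only (the application does not fix a network architecture), a small fully convolutional network in PyTorch that takes the four fused channels (R, G, B, depth) and produces per-pixel class scores could look as follows; the class list, layer sizes, and training hyper-parameters are assumptions.

```python
import torch
import torch.nn as nn

class SurfaceSegNet(nn.Module):
    """Minimal fully convolutional network: 4 input channels (R, G, B, depth),
    one output channel per assumed surface class (e.g. lodging, pest damage,
    flood, drought, normal)."""
    def __init__(self, num_classes: int = 5):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.head = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(64, num_classes, 1),
        )

    def forward(self, x):                  # x: (N, 4, H, W)
        return self.head(self.encoder(x))  # per-pixel class scores (N, C, H, W)

# Training loop sketch: `loader` yields normalized, augmented (image, mask) pairs.
model, criterion = SurfaceSegNet(), nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_epoch(loader):
    for image, mask in loader:             # image: (N, 4, H, W), mask: (N, H, W)
        optimizer.zero_grad()
        loss = criterion(model(image), mask)
        loss.backward()
        optimizer.step()
```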
S103. Determine the recognition result of the surface features according to the surface semantic information in the feature map.
After the feature map containing the surface semantic information is obtained, the recognition result of the surface features is determined according to the surface semantic information in the feature map. Specifically, the confidence of each surface feature is obtained from the surface semantic information, the confidence of each surface feature is compared with a preset confidence threshold, the surface features whose confidence is greater than or equal to the confidence threshold are selected, and the recognition result of the surface features is determined according to those surface features. It should be noted that the above confidence threshold can be set based on actual conditions, which is not specifically limited in this application.
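For illustration, with per-pixel class scores such as those produced by the network sketched above, the confidence comparison could be carried out as follows; the 0.6 threshold is an arbitrary illustrative value.

```python
import torch

def recognition_result(logits: torch.Tensor, confidence_threshold: float = 0.6):
    """Turn per-pixel class scores (N, C, H, W) into a recognition result:
    pixels whose best-class probability falls below the threshold are marked -1
    and excluded from the result."""
    probs = torch.softmax(logits, dim=1)
    confidence, labels = probs.max(dim=1)           # each of shape (N, H, W)
    labels[confidence < confidence_threshold] = -1
    return labels, confidence
```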
The recognition result of the surface features includes the surface disaster type, affected-region information, disaster-severity information, and affected-area information. The surface disaster type describes the kind of disaster occurring on the surface, the affected-region information describes the affected regions on the surface, the disaster-severity information describes how severely each affected region is damaged, and the affected-area information describes the area of each affected region. It should be noted that the types of disasters include, but are not limited to, lodging, plant diseases and insect pests, floods, and droughts, which is not specifically limited in this application.
In the surface feature recognition method provided by the above embodiment, by processing the multiple pieces of color channel information and the image depth information in the surface image information, a feature map containing surface semantic information can be obtained, and from the surface semantic information in the feature map the recognition result of the surface features can be accurately determined. The entire recognition process requires no manual participation, which improves the accuracy and convenience of surface feature recognition.
Please refer to FIG. 4. FIG. 4 is a schematic flowchart of the steps of another surface feature recognition method provided by an embodiment of the present application.
Specifically, as shown in FIG. 4, the surface feature recognition method includes steps S201 to S205.
S201. Acquire surface image information, where the surface image information includes image information of multiple color channels and image depth information.
When surface features need to be recognized, the required surface image information is acquired. The surface image information is obtained by fusing the image information of multiple color channels with the image depth information, and the image information of the multiple color channels includes at least the three R, G, and B channels. Fusing the image information of multiple color channels with the image depth information enriches the surface image information and indirectly improves the accuracy of surface feature recognition.
S202. Process the multiple pieces of color channel information and the image depth information to obtain a feature map containing surface semantic information.
After the surface image information is acquired, the multiple pieces of color channel information and the image depth information in the surface image information are processed to obtain a feature map containing surface semantic information, where the surface semantic information includes each surface feature and the recognition probability value of each surface feature.
S203. Determine the recognition result of the surface features according to the surface semantic information in the feature map.
After the feature map containing the surface semantic information is obtained, the recognition result of the surface features is determined according to the surface semantic information in the feature map. Specifically, the confidence of each surface feature is obtained from the surface semantic information, the confidence of each surface feature is compared with a preset confidence threshold, the surface features whose confidence is greater than or equal to the confidence threshold are selected, and the recognition result of the surface features is determined according to those surface features. It should be noted that the above confidence threshold can be set based on actual conditions, which is not specifically limited in this application.
S204. Acquire at least one historical recognition result of the surface features, where a historical recognition result is a recognition result of the surface features determined before the current moment.
After the recognition result of the surface features is determined, at least one historical recognition result of the surface features is acquired, where a historical recognition result is a recognition result of the surface features determined before the current moment. The historical recognition results are stored on a local disk or in the cloud.
In an embodiment, the recognition results of the surface features can be stored by region according to the geographic location information of the recognition results, and can further be stored by surface sub-region within each region, to facilitate subsequent queries.
S205. Determine the surface change trend according to the recognition result of the surface features and the at least one historical recognition result of the surface features.
After the at least one historical recognition result of the surface features is acquired, the surface change trend is determined according to the recognition result of the surface features and the at least one historical recognition result, that is, the current recognition result is compared with the historical recognition results to obtain the surface change trend. The surface change trend includes, but is not limited to, a pest-and-disease change trend, a lodging change trend, a flood change trend, and a drought change trend; the pest-and-disease change trend includes the pests and diseases continuing to spread or their intensity weakening; the lodging change trend includes the lodging continuing to spread or its intensity weakening; the flood change trend includes the flood continuing to spread or its intensity weakening; and the drought change trend includes the drought continuing to spread or its intensity weakening.
In an embodiment, the first determination time of the recognition result of the surface features and the second determination time of each historical recognition result are acquired; the recognition result and the historical recognition results are sorted according to the first determination time and the second determination times to obtain a recognition result queue; multiple candidate surface change trends are determined according to every two adjacent recognition results in the recognition result queue; and the multiple candidate surface change trends are processed to obtain the surface change trend.
A candidate surface change trend is determined as follows: every two adjacent recognition results in the recognition result queue are obtained and compared with each other to obtain a candidate surface change trend. It should be noted that the earlier a recognition result was determined, the closer to the front of the recognition result queue it is placed, and the later it was determined, the closer to the back of the queue it is placed. The candidate surface change trends are processed as follows: the time order corresponding to each candidate surface change trend is obtained, and the candidate surface change trends are connected in sequence according to their respective time order, thereby obtaining the surface change trend.
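A minimal sketch of this queue-and-compare step, assuming each recognition result carries a determination timestamp and a single affected-area figure used for the pairwise comparison (the attribute names and trend labels are illustrative):

```python
from dataclasses import dataclass

@dataclass
class RecognitionResult:
    determined_at: float     # determination time (assumed unit: Unix seconds)
    affected_area: float     # affected area in square metres (one illustrative metric)

def surface_change_trend(current, history):
    """Sort the current and historical results into a queue by determination time,
    compare every two adjacent results, and chain the pairwise comparisons into an
    overall trend description."""
    queue = sorted(history + [current], key=lambda r: r.determined_at)
    candidates = []
    for earlier, later in zip(queue, queue[1:]):
        if later.affected_area > earlier.affected_area:
            candidates.append("spreading")
        elif later.affected_area < earlier.affected_area:
            candidates.append("weakening")
        else:
            candidates.append("stable")
    return " -> ".join(candidates)   # candidate trends connected in time order
```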
In the surface feature recognition method provided by the above embodiment, after the recognition result of the surface features is accurately determined, the previously determined recognition results of the surface features are acquired, and by comparing the currently determined recognition result with the previously determined recognition results, the surface change trend can be accurately determined, which facilitates user decision-making and greatly improves the user experience.
Please refer to FIG. 5. FIG. 5 is a schematic flowchart of the steps of yet another surface feature recognition method provided by an embodiment of the present application.
Specifically, as shown in FIG. 5, the surface feature recognition method includes steps S301 to S305.
S301. Acquire surface image information, where the surface image information includes image information of multiple color channels and image depth information.
When surface features need to be recognized, the required surface image information is acquired. The surface image information is obtained by fusing the image information of multiple color channels with the image depth information, and the image information of the multiple color channels includes at least the three R, G, and B channels. Fusing the image information of multiple color channels with the image depth information enriches the surface image information and indirectly improves the accuracy of surface feature recognition.
S302. Process the multiple pieces of color channel information and the image depth information to obtain a feature map containing surface semantic information.
After the surface image information is acquired, the multiple pieces of color channel information and the image depth information in the surface image information are processed to obtain a feature map containing surface semantic information, where the surface semantic information includes each surface feature and the recognition probability value of each surface feature.
S303. Determine the recognition result of the surface features according to the surface semantic information in the feature map.
After the feature map containing the surface semantic information is obtained, the recognition result of the surface features is determined according to the surface semantic information in the feature map. Specifically, the confidence of each surface feature is obtained from the surface semantic information, the confidence of each surface feature is compared with a preset confidence threshold, the surface features whose confidence is greater than or equal to the confidence threshold are selected, and the recognition result of the surface features is determined according to those surface features. It should be noted that the above confidence threshold can be set based on actual conditions, which is not specifically limited in this application.
S304. Acquire a three-dimensional surface map, and obtain the affected-region information, the disaster-severity information, and the affected-area information from the recognition result.
After the recognition result of the surface features is determined, a three-dimensional surface map is acquired, and the affected-region information, the disaster-severity information, and the affected-area information are obtained from the recognition result. The recognition result of the surface features includes the surface disaster type, affected-region information, disaster-severity information, and affected-area information. The surface disaster type describes the kind of disaster occurring on the surface, the affected-region information describes the affected regions on the surface, the disaster-severity information describes how severely each affected region is damaged, and the affected-area information describes the area of each affected region. It should be noted that the types of disasters include, but are not limited to, lodging, plant diseases and insect pests, floods, and droughts, which is not specifically limited in this application.
It should be noted that the three-dimensional surface map is generated based on a three-dimensional reconstruction algorithm, and the three-dimensional reconstruction algorithm can be set based on actual conditions, which is not specifically limited in this application.
S305. Mark the three-dimensional surface map according to the affected-region information, the disaster-severity information, and the affected-area information to obtain a target three-dimensional map marked with the affected regions, disaster severities, and affected areas.
The three-dimensional surface map is marked according to the affected-region information, the disaster-severity information, and the affected-area information to obtain a target three-dimensional map marked with the affected regions, disaster severities, and affected areas. Through the target three-dimensional map, the user can quickly and easily learn the affected regions, the disaster severities, and the affected areas, which provides a convenient reference and improves the user experience. After the target three-dimensional map is obtained, it can be stored; and/or sent to a terminal device so that the terminal device displays the target three-dimensional map; and/or sent to the cloud so that the cloud stores the target three-dimensional map.
Specifically, each affected region is marked in the three-dimensional surface map according to the affected-region information, that is, the geographic location information of each affected region is obtained from the affected-region information and each affected region is marked in the three-dimensional surface map according to its geographic location information; the disaster severity corresponding to each affected region is marked according to the disaster-severity information; and the affected area corresponding to each affected region is marked according to the affected-area information, that is, the affected area corresponding to each affected region is obtained from the affected-area information and the affected area is marked inside the corresponding affected region in the three-dimensional surface map. It should be noted that the way in which the affected regions, disaster severities, and affected areas are marked can be set based on actual conditions, which is not specifically limited in this application. Optionally, the disaster severity is marked by color.
According to the disaster-severity information, the disaster severity color corresponding to each affected region is determined, that is, the disaster severity corresponding to each affected region is obtained from the disaster-severity information, a pre-stored mapping table between disaster severities and colors is obtained, and the mapping table is queried to determine the disaster severity color corresponding to each affected region; each affected region is then marked with its corresponding disaster severity color.
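For illustration, such a severity-to-color mapping and the marking loop could be sketched as follows; the severity levels, RGB values, and the `add_marker` interface of the map object are assumptions rather than details from the application.

```python
# Pre-stored mapping between disaster severity levels and marker colors
# (the levels and RGB values are illustrative, not specified by the application).
SEVERITY_COLORS = {
    "light":    (255, 255, 0),    # yellow
    "moderate": (255, 165, 0),    # orange
    "severe":   (255, 0, 0),      # red
}

def mark_regions(surface_map, affected_regions):
    """`surface_map` is any 3D map object exposing an `add_marker` call (assumed
    interface); `affected_regions` is a list of dicts with keys 'polygon',
    'severity', and 'area_m2' taken from the recognition result."""
    for region in affected_regions:
        color = SEVERITY_COLORS.get(region["severity"], (128, 128, 128))
        surface_map.add_marker(
            polygon=region["polygon"],
            color=color,
            label=f"{region['severity']} / {region['area_m2']:.0f} m^2",
        )
```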
In the surface feature recognition method provided by the above embodiment, after the recognition result of the surface features is determined, the three-dimensional surface map is marked according to the affected-region information, the disaster-severity information, and the affected-area information in the recognition result to obtain a target three-dimensional map marked with the affected regions, disaster severities, and affected areas, so that the user can quickly and easily learn the affected regions, disaster severities, and affected areas from the target three-dimensional map, which provides a convenient reference and improves the user experience.
Please refer to FIG. 6. FIG. 6 is a schematic structural diagram of an unmanned aerial vehicle provided by an embodiment of the present application. The unmanned aerial vehicle may be a rotary-wing UAV, such as a quad-rotor, hexa-rotor, or octa-rotor UAV, a fixed-wing UAV, or a combination of a rotary-wing and a fixed-wing UAV, which is not limited here.
As shown in FIG. 6, the unmanned aerial vehicle 400 includes a spraying device 401 and a processor 402. The unmanned aerial vehicle 400 is used in agriculture to spray liquids such as pesticides and water on crops, forest trees, and the like. The unmanned aerial vehicle 400 can move, rotate, flip, and so on, and can drive the spraying device 401 to different positions or different angles to perform spraying operations in a preset area. The processor 402 is installed inside the unmanned aerial vehicle 400 and is not visible in FIG. 6.
Referring to FIG. 6, in some embodiments, the spraying device 401 includes a pump assembly 4011, a liquid supply tank 4012, a nozzle assembly 4013, and a liquid guide tube 4014. The liquid supply tank 4012 communicates with the pump assembly 4011. The nozzle assembly 4013 is used to perform the spraying operation. The liquid guide tube 4014 is connected to the pump assembly 4011 and the nozzle assembly 4013, and is used to deliver the liquid pumped from the pump assembly 4011 to the nozzle assembly 4013. The number of nozzle assemblies 4013 is at least one, and can be one, two, three, four, or more, which is not limited in this application.
Specifically, the processor 402 may be a micro-controller unit (MCU), a central processing unit (CPU), a digital signal processor (DSP), or the like.
Please refer to FIG. 7. FIG. 7 is a schematic flowchart of the steps of the unmanned aerial vehicle performing a spraying task provided by an embodiment of the present application. As shown in FIG. 7, the processor 402 is configured to implement steps S401 and S402.
S401. Acquire a flight spraying task, where the flight spraying task is determined according to the recognition result of the surface features.
Specifically, after the recognition result of the surface features is obtained, if it is found that crops or fruit trees are suffering from lodging, pests and diseases, or water shortage, the crops or fruit trees need to be sprayed with pesticide or watered, and the flight spraying task can be determined according to the recognition result of the surface features. The unmanned aerial vehicle 400 acquires the flight spraying task, where the flight spraying task includes a flight spraying route and the spraying parameters of each waypoint, and the spraying parameters include the spraying time, spraying angle, spraying flow rate, and spraying tank.
In an embodiment, the recognition result of the surface features is obtained from a local disk, a ground terminal, or a server, where the recognition result of the surface features includes the affected-region information and the disaster-severity information; and the corresponding flight spraying task is generated according to the affected-region information and the disaster-severity information.
Further, the flight spraying task is generated as follows: the waypoint information of the flight spraying route to be planned is determined according to the affected-region information, and the corresponding flight spraying route is generated according to the waypoint information; and the spraying parameters of each waypoint on the flight spraying route are set according to the disaster-severity information, so as to generate the corresponding flight spraying task.
Further, the waypoint information is determined as follows: the shape and area of the affected region are determined according to the affected-region information; the route type of the flight spraying route to be planned is determined according to the shape of the affected region; the number of waypoints of the flight spraying route to be planned is determined according to the area of the affected region; and the waypoint information of the flight spraying route to be planned is determined according to the route type, the affected-region information, and the number of waypoints.
Further, the shape and area of the affected region are determined as follows: the contour information of the affected region and the geographic location of each contour point are obtained from the affected-region information, and the area of the affected region is determined according to the geographic locations of the contour points; the contour shape of the affected region is determined according to the contour information, the similarity between the contour shape and each preset shape is calculated, and the preset shape with the highest similarity is taken as the shape of the affected region.
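The area computation from contour points can be sketched with the shoelace formula, assuming the contour coordinates have already been projected into a local metric frame (raw latitude/longitude would need a projection first):

```python
def polygon_area(contour):
    """Shoelace formula: area of the affected region from its contour points,
    given as (x, y) pairs in metres."""
    n = len(contour)
    acc = 0.0
    for i in range(n):
        x1, y1 = contour[i]
        x2, y2 = contour[(i + 1) % n]
        acc += x1 * y2 - x2 * y1
    return abs(acc) / 2.0

# Example: a 100 m x 50 m rectangular affected region -> 5000.0 square metres.
print(polygon_area([(0, 0), (100, 0), (100, 50), (0, 50)]))
```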
进一步地,航线类型的确定方式具体为:获取预存的形状与航线类型之间的映射关系表,并查询该映射关系表,获取受灾区域的形状对应的航线类型,且将获取到的航线类型作为待规划的飞行喷洒航线的航线类型。航线类型包括带状航线和环状航线。需要说明的是,上述形状与航线类型之间的映射关系表可基于实际情况进行设置,本申请对此不作具体限定。Further, the method for determining the route type is specifically: obtaining the mapping relationship table between the pre-stored shape and the route type, and querying the mapping relationship table, obtaining the route type corresponding to the shape of the disaster-affected area, and using the obtained route type as The route type of the spraying route to be planned. Route types include strip routes and loop routes. It should be noted that the above-mentioned mapping relationship table between the shape and the route type can be set based on the actual situation, which is not specifically limited in this application.
进一步地,航点数量的确定方式具体为:获取预存的面积与航点数量之间的映射关系表,并查询该映射关系表,获取受灾区域的面积对应的航点数量,并将获取到的航点数量作为待规划的飞行喷洒航线的航点数量。需要说明的是,上述面积与航点数量之间的映射关系表可基于实际情况进行设置,本申请对此不作具体限定。Further, the method for determining the number of waypoints is specifically: obtaining a mapping relationship table between the pre-stored area and the number of waypoints, and querying the mapping relationship table to obtain the number of waypoints corresponding to the area of the disaster-affected area, and to obtain the obtained The number of waypoints is used as the number of waypoints of the spraying route to be planned. It should be noted that the above-mentioned mapping relationship table between the area and the number of waypoints can be set based on actual conditions, which is not specifically limited in this application.
Further, the waypoint information may also be determined as follows: obtain a pre-stored map, and mark the corresponding surface disaster region on the pre-stored map according to the surface disaster region information; calculate the area of the marked surface disaster region, and determine the spacing between waypoints according to that area and the number of waypoints; mark each waypoint in the surface disaster region in turn according to the spacing and the route type, and obtain the marking order of each waypoint and the geographic location of each waypoint within the surface disaster region; and take the marking order of each waypoint and its geographic location within the surface disaster region as the waypoint order and waypoint position of that waypoint, thereby obtaining the waypoint information of the flight spraying route to be planned.
Further, the flight spraying route is generated as follows: obtain the navigation order and waypoint position of each waypoint from the waypoint information, and connect the waypoint positions one by one according to the navigation order of the waypoints to generate the corresponding flight spraying route. The flight spraying route includes a loop route and/or a strip route.
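As a rough illustration, connecting the waypoints in navigation order could be written as below; the record field names and the loop-closing flag are assumptions made only for this sketch.

```python
def build_route(waypoint_info, loop=False):
    """Return the list of positions visited in navigation order.

    waypoint_info: list of dicts like {"order": 1, "position": (lat, lon)}.
    loop: if True, close the route back to the first waypoint (loop route).
    """
    ordered = sorted(waypoint_info, key=lambda wp: wp["order"])
    route = [wp["position"] for wp in ordered]
    if loop and route:
        route.append(route[0])  # e.g. A -> B -> C -> D -> A
    return route
```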
FIG. 8 is a schematic diagram of a flight spraying route in an embodiment of this application. As shown in FIG. 8, the flight spraying route is a loop route and includes four waypoints, namely waypoint A, waypoint B, waypoint C and waypoint D, with the navigation order waypoint A → waypoint B → waypoint C → waypoint D. In this way, a loop route waypoint A → waypoint B → waypoint C → waypoint D → waypoint A, enclosed by waypoints A, B, C and D, is generated.
FIG. 9 is a schematic diagram of a flight spraying route in an embodiment of this application. As shown in FIG. 9, the flight spraying route is a strip route and includes four waypoints, namely waypoint E, waypoint F, waypoint G and waypoint H, where the starting point is waypoint E and the end point is waypoint G. Waypoint E, waypoint F, waypoint G and waypoint H are connected in turn to form a closed spraying area, and a route is automatically planned within this spraying area according to the preset starting waypoint E, the end waypoint G and a preset route spacing, for example the boustrophedon (S-shaped) route shown in FIG. 9.
Further, the flight spraying task is generated as follows: obtain a pre-stored mapping table between surface disaster degrees and spraying parameters; determine the spraying parameters of each waypoint on the flight spraying route according to the surface disaster degree information and the mapping table; and set the spraying parameters of each waypoint on the flight spraying route according to the determined parameters, so as to generate the corresponding flight spraying task. It should be noted that this mapping table between surface disaster degrees and spraying parameters can be set according to the actual situation, which is not specifically limited in this application.
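A minimal sketch of such a mapping table and its use is shown below; the degree labels and numeric parameter values are invented placeholders, not values from this application.

```python
# Assumed example mapping from disaster degree to spraying parameters.
DEGREE_TO_SPRAY_PARAMS = {
    "light":  {"spray_time_s": 2.0, "concentration": 0.5, "flow_l_per_min": 1.0},
    "medium": {"spray_time_s": 4.0, "concentration": 1.0, "flow_l_per_min": 2.0},
    "severe": {"spray_time_s": 6.0, "concentration": 1.5, "flow_l_per_min": 3.0},
}

def spray_params_for_waypoint(waypoint, degree_map):
    """degree_map: callable mapping a waypoint position to its disaster degree."""
    degree = degree_map(waypoint["position"])
    return DEGREE_TO_SPRAY_PARAMS[degree]
```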
Further, the flight spraying task may also be generated as follows: determine the disaster spread boundary of the surface disaster region according to the obtained surface disaster degree information, and determine the positional relationship between each waypoint and the disaster spread boundary; determine the spraying parameters of each waypoint on the flight spraying route according to that positional relationship, such that spraying parameters such as the spraying time, spraying concentration and/or spraying flow rate of waypoints on the already-spread side of the disaster spread boundary are greater than those of waypoints on the to-be-spread side; and set the spraying parameters of each waypoint on the flight spraying route according to the determined parameters, so as to generate the corresponding flight spraying task. The positional relationship includes being on the already-spread side or on the to-be-spread side of the disaster spread boundary, and the disaster degree on the already-spread side is greater than that on the to-be-spread side. By determining the spraying parameters from the positional relationship between the waypoints and the disaster spread boundary, the UAV sprays the surface disaster region according to those parameters so as to suppress or delay the further spread of the disaster.
Please refer to FIG. 10, which is a schematic diagram of a disaster spread boundary in an embodiment of this application. As shown in FIG. 10, the already-spread side of the disaster spread boundary is the side where surface disaster region A is located, the to-be-spread side is the side where surface region B is located, and the disaster spreads from surface disaster region A towards surface region B. The crops in surface disaster region A and surface region B may be the same or different.
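Purely as an assumed sketch, the side-of-boundary test and the corresponding parameter boost could be written as follows; approximating the spread boundary by a single line segment and the 1.5x boost factor are illustrative choices, not requirements of this application.

```python
def side_of_boundary(point, boundary_start, boundary_end):
    """Return 'spread' or 'to_spread' depending on which side of the boundary
    segment the point lies on (2D cross-product sign test)."""
    bx, by = boundary_end[0] - boundary_start[0], boundary_end[1] - boundary_start[1]
    px, py = point[0] - boundary_start[0], point[1] - boundary_start[1]
    cross = bx * py - by * px
    return "spread" if cross > 0 else "to_spread"

def boundary_aware_params(waypoint, boundary, base_params):
    """Boost spraying parameters for waypoints on the already-spread side."""
    factor = 1.5 if side_of_boundary(waypoint["position"], *boundary) == "spread" else 1.0
    return {key: value * factor for key, value in base_params.items()}
```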
Further, the spraying parameters also include a spray tank label used to identify a spray tank. The UAV includes at least two spray tanks, each tank holding a different pesticide type and/or pesticide concentration, and different surface disaster degrees correspond to different pesticide types and/or concentrations: the higher the surface disaster degree, the higher the corresponding pesticide concentration, and the lower the surface disaster degree, the lower the corresponding concentration. For a given surface disaster region, the spray tank label corresponding to the pesticide type and/or concentration of each waypoint is set according to the surface disaster degree of the region where that waypoint is located. By spraying combinations of different pesticide types and/or concentrations, the disaster can be treated effectively, and its further spread can be suppressed or delayed.
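One possible, simplified way to encode the tank selection is sketched below; the tank labels, pesticide names and degree thresholds are placeholder assumptions.

```python
# Assumed tank inventory: each label identifies a tank with its own mixture.
SPRAY_TANKS = {
    "tank_1": {"pesticide": "agent_x", "concentration": 0.5},
    "tank_2": {"pesticide": "agent_x", "concentration": 1.5},
}

def tank_label_for_degree(degree):
    # Higher disaster degree -> tank holding the higher concentration.
    return "tank_2" if degree == "severe" else "tank_1"
```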
Further, for a given surface disaster region, the flight spraying task may be completed cooperatively by at least two UAVs. Each UAV is responsible for one spraying operation area within the surface disaster region, and the spraying operation areas of the at least two UAVs overlap, the overlapping area being the region with a severe surface disaster degree. Within the overlapping area, the UAVs fly at different heights, or spray at different moments, or avoid obstacles by means of sensors, so that collisions between UAVs spraying in the overlapping area are avoided. Since at least two UAVs jointly spray the overlapping area (the severely affected region), the treatment of severely affected regions can be improved effectively, the spraying of the surface disaster region can be completed quickly, and the further spread of the disaster can be suppressed or delayed.
Take two UAVs cooperatively completing the flight spraying task for a surface disaster region as an example. The first of the two UAVs is responsible for one spraying operation area within the surface disaster region, and the second is responsible for another spraying operation area within the surface disaster region. The spraying operation area of the first UAV overlaps with that of the second UAV, and the overlapping area is the region with a severe surface disaster degree. Within the overlapping area, the first UAV and the second UAV fly at different heights, or spray at different moments, or avoid obstacles by means of sensors, so that a collision between the first UAV and the second UAV while spraying in the overlapping area is avoided. Please refer to FIG. 11, which is a schematic diagram of overlapping spraying operation areas in an embodiment of this application. As shown in FIG. 11, the surface disaster region includes spraying operation area A and spraying operation area B, and the region where they overlap is region C; the first UAV is responsible for spraying operation area A, the second UAV is responsible for spraying operation area B, and both UAVs spray the overlapping region C.
Please refer to FIG. 12, which is another schematic diagram of overlapping spraying operation areas in an embodiment of this application. As shown in FIG. 12, according to the obtained surface disaster degree information, the determined disaster spread direction is from surface disaster region A towards surface region B. Flight spraying regions are assigned to four UAVs, and a flight spraying route is planned within each corresponding region: the flight spraying region of UAV 1 is a, that of UAV 2 is b, that of UAV 3 is c, and that of UAV 4 is d, and regions a, b and c each overlap with region d. UAV 1, UAV 2 and UAV 3 are mainly responsible for spraying surface disaster region A on the already-spread side, UAV 4 is mainly responsible for spraying surface region B on the to-be-spread side, and the overlapping area contains part of surface disaster region A and part of surface region B. After the surface disaster degree information is perceived, flight spraying regions can be assigned to multiple UAVs and flight spraying routes planned, so that the UAVs complete the spraying operation cooperatively, spraying paths can be planned reasonably within the endurance of each UAV, and the overlapping areas are sprayed jointly, which suppresses or delays the further spread of the disaster. As a comparative example, if overlapping spraying by multiple cooperating UAVs is not used and a single UAV sprays a severely affected region or the spreading region of the disaster, it has to stay over a fixed region for a long time to increase the amount of pesticide applied there; since a UAV's endurance is limited by its energy supply, it is then difficult to spray the entire region that needs to be sprayed.
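A toy sketch of how the overlap region could be deconflicted by altitude is shown below; the grid-cell representation of the spraying regions and the altitude step are assumptions made only for illustration.

```python
def assign_overlap_altitudes(uav_regions, overlap_region, base_alt_m=3.0, step_m=2.0):
    """Give each UAV whose region touches the overlap a distinct altitude there.

    uav_regions: dict uav_id -> set of grid cells it sprays.
    overlap_region: set of grid cells with a severe disaster degree.
    Returns dict uav_id -> altitude to use inside the overlap region.
    """
    altitudes = {}
    level = 0
    for uav_id, cells in uav_regions.items():
        if cells & overlap_region:          # this UAV also sprays the overlap
            altitudes[uav_id] = base_alt_m + level * step_m
            level += 1                      # stack cooperating UAVs at different heights
    return altitudes
```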
It can be understood that the above embodiments are only exemplary descriptions of multiple UAVs cooperatively completing a spraying operation; the number of UAVs can be set flexibly according to actual needs, for example 2, 3, 4 or 5 UAVs, which is not limited in this application.
S402: Execute the flight spraying task, and control the spraying device to perform the corresponding spraying action according to the spraying parameters in the flight spraying task.
After obtaining the flight spraying task, the UAV 400 executes it and controls the spraying device to perform the corresponding spraying action according to the spraying parameters in the flight spraying task; that is, it obtains the flight spraying route and the spraying parameters of each waypoint from the flight spraying task, flies along the flight spraying route, and during the flight controls the spraying device 401 to perform the corresponding spraying action according to the spraying parameters of each waypoint, so as to complete the flight spraying task.
The UAV can execute a flight spraying task determined from the recognition result of surface features, and can automatically spray pesticide on or water crops or fruit trees, preventing and controlling lodging, diseases, pests or water shortage of the crops or fruit trees.
This application also provides a surface feature recognition device.
Please refer to FIG. 13, which is a schematic block diagram of a surface feature recognition device provided by an embodiment of this application. As shown in FIG. 13, the surface feature recognition device 500 includes a processor 501 and a memory 502, the processor 501 and the memory 502 being connected by a bus 503, for example an I2C (Inter-Integrated Circuit) bus. The surface feature recognition device 500 may be a ground control platform, a server or a UAV. The ground control platform includes laptop computers, PCs and the like; the server may be a single server or a server cluster composed of multiple servers; the UAV may be a rotary-wing UAV such as a quadrotor, hexarotor or octorotor UAV, a fixed-wing UAV, or a combination of rotary-wing and fixed-wing UAVs, which is not limited here.
Specifically, the processor 501 may be a micro-controller unit (MCU), a central processing unit (CPU), a digital signal processor (DSP), or the like.
Specifically, the memory 502 may be a Flash chip, a read-only memory (ROM) disk, an optical disk, a USB flash drive, a removable hard disk, or the like.
The processor 501 is configured to run a computer program stored in the memory 502 and, when executing the computer program, to implement the following steps:
acquiring surface image information, where the surface image information includes image information of multiple color channels and image depth information;
processing the information of the multiple color channels and the image depth information to obtain a feature map containing surface semantic information;
determining a recognition result of surface features according to the surface semantic information in the feature map.
Further, the surface image information includes a top-down orthographic view.
Further, the image depth information is height information in the top-down orthographic view.
Further, the surface image information includes geographic location information corresponding to the surface image.
Further, the geographic location information includes positioning information obtained through a global navigation satellite system;
and/or positioning information obtained through a real-time differential positioning system.
Further, the image information of the multiple color channels includes at least R, G and B channel information.
Further, the image depth information is determined based on a binocular ranging algorithm and the image information of the multiple color channels.
Further, the image depth information is determined based on a monocular ranging algorithm and associated frames of the image information of the multiple color channels.
Further, when processing the information of the multiple color channels and the image depth information to obtain a feature map containing surface semantic information, the processor is configured to implement:
performing fusion processing on the information of the multiple color channels and the image depth information to obtain a fused image block;
matching the fused image block with image blocks in a preset image block set to obtain the degree of matching between the fused image block and each of the image blocks;
determining a feature map containing surface semantic information according to the degree of matching between the fused image block and each of the image blocks.
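A hedged sketch of this fusion-and-matching step is given below, assuming the colour image and the depth map share the same resolution, that all blocks have the same shape, and that normalised correlation stands in for whatever matching measure an implementation actually uses.

```python
import numpy as np

def fuse_rgb_depth(rgb, depth):
    """Stack an H x W x 3 colour image with an H x W depth map into H x W x 4."""
    span = float(depth.max() - depth.min())
    depth = (depth - depth.min()) / max(span, 1e-6)          # normalise to [0, 1]
    return np.concatenate([rgb.astype(np.float32) / 255.0,
                           depth[..., None].astype(np.float32)], axis=-1)

def match_scores(fused_block, preset_blocks):
    """Return a matching score for each preset block (higher = more similar).

    preset_blocks: dict label -> array with the same shape as fused_block.
    """
    scores = {}
    f = fused_block.ravel()
    f = (f - f.mean()) / max(float(f.std()), 1e-6)
    for label, block in preset_blocks.items():
        b = block.ravel()
        b = (b - b.mean()) / max(float(b.std()), 1e-6)
        scores[label] = float(np.dot(f, b) / f.size)          # normalised correlation
    return scores
```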
Further, when processing the information of the multiple color channels and the image depth information to obtain a feature map containing surface semantic information, the processor is configured to implement:
performing fusion processing on the information of the multiple color channels and the image depth information to obtain a fused image block;
processing the fused image block through a pre-trained neural network to obtain a feature map containing surface semantic information.
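As an assumed illustration only, the fused four-channel block could be passed through a small segmentation-style network such as the one below; the architecture, layer sizes and class count are placeholders, and a real system would load pre-trained weights rather than use random ones.

```python
import torch
import torch.nn as nn

class SurfaceSemanticsNet(nn.Module):
    def __init__(self, num_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(64, num_classes, kernel_size=1)  # per-pixel semantics

    def forward(self, fused_block):
        # fused_block: N x 4 x H x W tensor (RGB channels plus depth)
        return self.head(self.features(fused_block))

# Usage sketch: feature_map = SurfaceSemanticsNet()(torch.randn(1, 4, 64, 64))
```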
Further, when acquiring the surface image information, the processor is configured to implement:
acquiring a surface image set, and generating a corresponding depth map from each surface image in the surface image set;
processing each surface image in the surface image set and the depth map to obtain the surface image information.
Further, when processing each surface image in the surface image set and the depth map to obtain the surface image information, the processor is configured to implement:
stitching the surface images in the surface image set to obtain a stitched surface image;
fusing the depth map and the stitched surface image to obtain the surface image information.
Further, when stitching the surface images in the surface image set to obtain a stitched surface image, the processor is configured to implement:
determining the stitching parameters corresponding to each surface image, where the stitching parameters include a stitching order and a stitching relationship;
stitching each surface image according to its corresponding stitching parameters to obtain a stitched surface image.
Further, when determining the stitching parameters corresponding to each surface image, the processor is configured to implement:
acquiring the aerial photographing time and aerial photographing position corresponding to each surface image;
determining the stitching order corresponding to each surface image according to its corresponding aerial photographing time;
determining the stitching relationship corresponding to each surface image according to its corresponding aerial photographing position.
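A minimal sketch of deriving the stitching order from capture times and the stitching relationship from capture positions might look like this; the record fields and the fixed neighbour distance are assumptions for illustration.

```python
def stitching_parameters(images, neighbour_dist_m=30.0):
    """images: list of dicts like {"id": ..., "time": t, "pos": (x, y)}.

    Returns (order, relations): order is the id list sorted by capture time,
    relations maps each id to the ids captured close enough to overlap with it.
    """
    order = [img["id"] for img in sorted(images, key=lambda img: img["time"])]
    relations = {}
    for a in images:
        close = []
        for b in images:
            if a is b:
                continue
            dx = a["pos"][0] - b["pos"][0]
            dy = a["pos"][1] - b["pos"][1]
            if (dx * dx + dy * dy) ** 0.5 <= neighbour_dist_m:
                close.append(b["id"])
        relations[a["id"]] = close
    return order, relations
```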
Further, after determining the recognition result of surface features according to the surface semantic information in the feature map, the processor is further configured to implement:
acquiring at least one historical recognition result of surface features, where a historical recognition result is a recognition result of surface features determined before the current moment;
determining a surface change trend according to the recognition result of the surface features and the at least one historical recognition result of the surface features.
Further, when determining the surface change trend according to the recognition result of the surface features and the at least one historical recognition result of the surface features, the processor is configured to implement:
acquiring the first determined time of the recognition result of the surface features and the second determined time of each historical recognition result;
sorting the recognition result and each historical recognition result according to the first determined time and each second determined time to obtain a recognition result queue;
determining multiple candidate surface change trends according to every two adjacent recognition results in the recognition result queue;
processing the multiple candidate surface change trends to obtain the surface change trend.
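For illustration, if each recognition result is reduced to a single affected-area figure, the trend computation could be sketched as follows; this scalar summary and the averaging of adjacent differences are simplifying assumptions rather than the patent's prescribed processing.

```python
def surface_change_trend(current_result, historical_results):
    """Each result is a dict like {"time": t, "affected_area_m2": a}."""
    queue = sorted(historical_results + [current_result], key=lambda r: r["time"])
    candidates = [
        later["affected_area_m2"] - earlier["affected_area_m2"]
        for earlier, later in zip(queue, queue[1:])       # adjacent pairs in the queue
    ]
    if not candidates:
        return "unknown"
    mean_change = sum(candidates) / len(candidates)
    return "expanding" if mean_change > 0 else "stable_or_shrinking"
```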
Further, after determining the recognition result of surface features according to the surface semantic information in the feature map, the processor is further configured to implement:
acquiring a three-dimensional surface map, and obtaining surface disaster region information, surface disaster degree information and surface disaster area information from the recognition result;
marking the three-dimensional surface map according to the surface disaster region information, the surface disaster degree information and the surface disaster area information to obtain a target three-dimensional map marked with the disaster regions, disaster degrees and disaster areas.
Further, when marking the three-dimensional surface map according to the surface disaster region information, the surface disaster degree information and the surface disaster area information, the processor is configured to implement:
marking each disaster region on the three-dimensional surface map according to the surface disaster region information;
marking the disaster degree corresponding to each disaster region according to the surface disaster degree information;
marking the disaster area corresponding to each disaster region according to the surface disaster area information.
Further, when marking the disaster degree corresponding to each disaster region according to the surface disaster degree information, the processor is configured to implement:
determining the disaster degree color corresponding to each disaster region according to the surface disaster degree information;
marking the disaster degree corresponding to each disaster region according to the disaster degree color corresponding to that region.
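A hedged sketch of the colour-based marking is shown below; the colour values, degree labels, and the map3d.paint / map3d.annotate calls are hypothetical stand-ins for whatever map interface is actually used.

```python
# Assumed colour scheme for marking disaster degree on the map.
DEGREE_TO_COLOR = {
    "light": (255, 255, 0),    # yellow
    "medium": (255, 128, 0),   # orange
    "severe": (255, 0, 0),     # red
}

def mark_regions(map3d, regions):
    """regions: list of dicts like {"cells": [...], "degree": "severe", "area_m2": ...}."""
    for region in regions:
        color = DEGREE_TO_COLOR[region["degree"]]
        for cell in region["cells"]:
            map3d.paint(cell, color)                       # hypothetical map API
        map3d.annotate(region["cells"][0], f'{region["area_m2"]:.0f} m^2')
```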
Further, after marking the three-dimensional surface map according to the surface disaster region information, the surface disaster degree information and the surface disaster area information to obtain a target three-dimensional map marked with the disaster regions, disaster degrees and disaster areas, the processor is configured to implement:
storing the target three-dimensional map; and/or
sending the target three-dimensional map to a terminal device, so that the terminal device displays the target three-dimensional map; and/or
sending the target three-dimensional map to the cloud, so that the cloud stores the target three-dimensional map.
It should be noted that, as those skilled in the art can clearly understand, for convenience and brevity of description, the specific working process of the surface feature recognition device described above may refer to the corresponding process in the foregoing embodiments of the flight task generation method, and is not repeated here.
Embodiments of this application also provide a computer-readable storage medium. The computer-readable storage medium stores a computer program, the computer program includes program instructions, and a processor executes the program instructions to implement the steps of the surface feature recognition method provided by the above embodiments.
The computer-readable storage medium may be an internal storage unit of the surface feature recognition device described in any of the foregoing embodiments, for example a hard disk or memory of the surface feature recognition device. The computer-readable storage medium may also be an external storage device of the surface feature recognition device, for example a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card provided on the surface feature recognition device.
It should be understood that the terms used in this specification are only for the purpose of describing particular embodiments and are not intended to limit this application. As used in this specification and the appended claims, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
The above are only specific implementations of this application, but the protection scope of this application is not limited thereto. Any person skilled in the art can easily conceive of various equivalent modifications or replacements within the technical scope disclosed in this application, and these modifications or replacements shall all fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (47)

  1. A surface feature recognition method, characterized by comprising:
    acquiring surface image information, wherein the surface image information includes image information of multiple color channels and image depth information;
    processing the information of the multiple color channels and the image depth information to obtain a feature map containing surface semantic information;
    determining a recognition result of surface features according to the surface semantic information in the feature map.
  2. The surface feature recognition method according to claim 1, wherein the surface image information includes a top-down orthographic view.
  3. The surface feature recognition method according to claim 2, wherein the image depth information is height information in the top-down orthographic view.
  4. The surface feature recognition method according to claim 1, wherein the surface image information includes geographic location information corresponding to the surface image.
  5. The surface feature recognition method according to claim 4, wherein the geographic location information includes positioning information obtained through a global navigation satellite system;
    and/or positioning information obtained through a real-time differential positioning system.
  6. The surface feature recognition method according to claim 1, wherein the image information of the multiple color channels includes at least R, G and B channel information.
  7. The surface feature recognition method according to claim 1, wherein the image depth information is determined based on a binocular ranging algorithm and the image information of the multiple color channels.
  8. The surface feature recognition method according to claim 1, wherein the image depth information is determined based on a monocular ranging algorithm and associated frames of the image information of the multiple color channels.
  9. The surface feature recognition method according to claim 1, wherein the processing the information of the multiple color channels and the image depth information to obtain a feature map containing surface semantic information comprises:
    performing fusion processing on the information of the multiple color channels and the image depth information to obtain a fused image block;
    matching the fused image block with image blocks in a preset image block set to obtain the degree of matching between the fused image block and each of the image blocks;
    determining a feature map containing surface semantic information according to the degree of matching between the fused image block and each of the image blocks.
  10. The surface feature recognition method according to claim 1, wherein the processing the information of the multiple color channels and the image depth information to obtain a feature map containing surface semantic information comprises:
    performing fusion processing on the information of the multiple color channels and the image depth information to obtain a fused image block;
    processing the fused image block through a pre-trained neural network to obtain a feature map containing surface semantic information.
  11. The surface feature recognition method according to any one of claims 1 to 10, wherein the acquiring surface image information comprises:
    acquiring a surface image set, and generating a corresponding depth map from each surface image in the surface image set;
    processing each surface image in the surface image set and the depth map to obtain the surface image information.
  12. The surface feature recognition method according to claim 11, wherein the processing each surface image in the surface image set and the depth map to obtain the surface image information comprises:
    stitching the surface images in the surface image set to obtain a stitched surface image;
    fusing the depth map and the stitched surface image to obtain the surface image information.
  13. The surface feature recognition method according to claim 12, wherein the stitching the surface images in the surface image set to obtain a stitched surface image comprises:
    determining the stitching parameters corresponding to each surface image, wherein the stitching parameters include a stitching order and a stitching relationship;
    stitching each surface image according to its corresponding stitching parameters to obtain a stitched surface image.
  14. The surface feature recognition method according to claim 13, wherein the determining the stitching parameters corresponding to each surface image comprises:
    acquiring the aerial photographing time and aerial photographing position corresponding to each surface image;
    determining the stitching order corresponding to each surface image according to its corresponding aerial photographing time;
    determining the stitching relationship corresponding to each surface image according to its corresponding aerial photographing position.
  15. The surface feature recognition method according to any one of claims 1 to 10, wherein, after the determining a recognition result of surface features according to the surface semantic information in the feature map, the method further comprises:
    acquiring at least one historical recognition result of surface features, wherein a historical recognition result is a recognition result of surface features determined before the current moment;
    determining a surface change trend according to the recognition result of the surface features and the at least one historical recognition result of the surface features.
  16. The surface feature recognition method according to claim 15, wherein the determining a surface change trend according to the recognition result of the surface features and the at least one historical recognition result of the surface features comprises:
    acquiring the first determined time of the recognition result of the surface features and the second determined time of each historical recognition result;
    sorting the recognition result and each historical recognition result according to the first determined time and each second determined time to obtain a recognition result queue;
    determining multiple candidate surface change trends according to every two adjacent recognition results in the recognition result queue;
    processing the multiple candidate surface change trends to obtain the surface change trend.
  17. The surface feature recognition method according to any one of claims 1 to 10, wherein, after the determining a recognition result of surface features according to the surface semantic information in the feature map, the method further comprises:
    acquiring a three-dimensional surface map, and obtaining surface disaster region information, surface disaster degree information and surface disaster area information from the recognition result;
    marking the three-dimensional surface map according to the surface disaster region information, the surface disaster degree information and the surface disaster area information to obtain a target three-dimensional map marked with the disaster regions, disaster degrees and disaster areas.
  18. The surface feature recognition method according to claim 17, wherein the marking the three-dimensional surface map according to the surface disaster region information, the surface disaster degree information and the surface disaster area information comprises:
    marking each disaster region on the three-dimensional surface map according to the surface disaster region information;
    marking the disaster degree corresponding to each disaster region according to the surface disaster degree information;
    marking the disaster area corresponding to each disaster region according to the surface disaster area information.
  19. The surface feature recognition method according to claim 18, wherein the marking the disaster degree corresponding to each disaster region according to the surface disaster degree information comprises:
    determining the disaster degree color corresponding to each disaster region according to the surface disaster degree information;
    marking the disaster degree corresponding to each disaster region according to the disaster degree color corresponding to that region.
  20. The surface feature recognition method according to claim 17, wherein, after the marking the three-dimensional surface map according to the surface disaster region information, the surface disaster degree information and the surface disaster area information to obtain a target three-dimensional map marked with the disaster regions, disaster degrees and disaster areas, the method further comprises:
    storing the target three-dimensional map; and/or
    sending the target three-dimensional map to a terminal device, so that the terminal device displays the target three-dimensional map; and/or
    sending the target three-dimensional map to the cloud, so that the cloud stores the target three-dimensional map.
  21. An unmanned aerial vehicle, characterized in that the unmanned aerial vehicle includes a spraying device and a processor, the processor being configured to implement the following steps:
    acquiring a flight spraying task, wherein the flight spraying task is determined according to a recognition result of surface features;
    executing the flight spraying task, and controlling the spraying device to perform the corresponding spraying action according to the spraying parameters in the flight spraying task.
  22. The unmanned aerial vehicle according to claim 21, wherein, when acquiring the flight spraying task, the processor is configured to implement:
    acquiring the recognition result of surface features, wherein the recognition result of surface features includes surface disaster region information and surface disaster degree information;
    generating the corresponding flight spraying task according to the surface disaster region information and the surface disaster degree information.
  23. The unmanned aerial vehicle according to claim 22, wherein, when generating the corresponding flight spraying task according to the surface disaster region information and the surface disaster degree information, the processor is configured to implement:
    determining the waypoint information of the flight spraying route to be planned according to the surface disaster region information, and generating the corresponding flight spraying route according to the waypoint information;
    setting the spraying parameters of each waypoint on the flight spraying route according to the surface disaster degree information, so as to generate the corresponding flight spraying task.
  24. The unmanned aerial vehicle according to claim 23, wherein the determining the waypoint information of the flight spraying route to be planned according to the surface disaster region information comprises:
    determining the shape and area of the disaster region according to the surface disaster region information;
    determining the route type of the flight spraying route to be planned according to the shape of the disaster region;
    determining the number of waypoints of the flight spraying route to be planned according to the area of the disaster region;
    determining the waypoint information of the flight spraying route to be planned according to the route type, the surface disaster region information and the number of waypoints.
  25. The unmanned aerial vehicle according to claim 23, wherein the generating the corresponding flight spraying route according to the waypoint information comprises:
    obtaining the navigation order and waypoint position of each waypoint from the waypoint information;
    connecting the waypoint positions one by one according to the navigation order of the waypoints to generate the corresponding flight spraying route.
  26. The unmanned aerial vehicle according to claim 23, wherein the setting the spraying parameters of each waypoint on the flight spraying route according to the surface disaster degree information to generate the corresponding flight spraying task comprises:
    obtaining a pre-stored mapping table between surface disaster degrees and spraying parameters;
    determining the spraying parameters of each waypoint on the flight spraying route according to the surface disaster degree information and the mapping table;
    setting the spraying parameters of each waypoint on the flight spraying route according to the determined spraying parameters of each waypoint, so as to generate the corresponding flight spraying task.
  27. A surface feature recognition device, characterized in that the surface feature recognition device includes a memory and a processor;
    the memory is configured to store a computer program;
    the processor is configured to execute the computer program and, when executing the computer program, to implement the following steps:
    acquiring surface image information, wherein the surface image information includes image information of multiple color channels and image depth information;
    processing the information of the multiple color channels and the image depth information to obtain a feature map containing surface semantic information;
    determining a recognition result of surface features according to the surface semantic information in the feature map.
  28. The surface feature recognition device according to claim 27, wherein the surface image information includes a top-down orthographic view.
  29. The surface feature recognition device according to claim 28, wherein the image depth information is height information in the top-down orthographic view.
  30. The surface feature recognition device according to claim 27, wherein the surface image information includes geographic location information corresponding to the surface image.
  31. The surface feature recognition device according to claim 30, wherein the geographic location information includes positioning information obtained through a global navigation satellite system;
    and/or positioning information obtained through a real-time differential positioning system.
  32. The surface feature recognition device according to claim 27, wherein the image information of the multiple color channels includes at least R, G and B channel information.
  33. The surface feature recognition device according to claim 27, wherein the image depth information is determined based on a binocular ranging algorithm and the image information of the multiple color channels.
  34. The surface feature recognition device according to claim 27, wherein the image depth information is determined based on a monocular ranging algorithm and associated frames of the image information of the multiple color channels.
  35. The surface feature recognition device according to claim 27, wherein, when processing the information of the multiple color channels and the image depth information to obtain a feature map containing surface semantic information, the processor is configured to implement:
    performing fusion processing on the information of the multiple color channels and the image depth information to obtain a fused image block;
    matching the fused image block with image blocks in a preset image block set to obtain the degree of matching between the fused image block and each of the image blocks;
    determining a feature map containing surface semantic information according to the degree of matching between the fused image block and each of the image blocks.
  36. The surface feature recognition device according to claim 27, wherein, when processing the information of the multiple color channels and the image depth information to obtain a feature map containing surface semantic information, the processor is configured to implement:
    performing fusion processing on the information of the multiple color channels and the image depth information to obtain a fused image block;
    processing the fused image block through a pre-trained neural network to obtain a feature map containing surface semantic information.
  37. The surface feature recognition device according to any one of claims 27 to 36, wherein, when acquiring the surface image information, the processor is configured to implement:
    acquiring a surface image set, and generating a corresponding depth map from each surface image in the surface image set;
    processing each surface image in the surface image set and the depth map to obtain the surface image information.
  38. The surface feature recognition device according to claim 37, wherein, when processing each surface image in the surface image set and the depth map to obtain the surface image information, the processor is configured to implement:
    stitching the surface images in the surface image set to obtain a stitched surface image;
    fusing the depth map and the stitched surface image to obtain the surface image information.
  39. The surface feature recognition device according to claim 38, wherein, when stitching the surface images in the surface image set to obtain a stitched surface image, the processor is configured to implement:
    determining the stitching parameters corresponding to each surface image, wherein the stitching parameters include a stitching order and a stitching relationship;
    stitching each surface image according to its corresponding stitching parameters to obtain a stitched surface image.
  40. The surface feature recognition device according to claim 39, wherein, when determining the stitching parameters corresponding to each surface image, the processor is configured to implement:
    acquiring the aerial photographing time and aerial photographing position corresponding to each surface image;
    determining the stitching order corresponding to each surface image according to its corresponding aerial photographing time;
    determining the stitching relationship corresponding to each surface image according to its corresponding aerial photographing position.
  41. 根据权利要求27至36中任一项所述的地表特征识别设备,其特征在于,所述处理器在实现根据所述特征图中的地表语义信息,确定地表特征的识别结果之后,还用于实现:The land surface feature recognition device according to any one of claims 27 to 36, wherein the processor is further configured to determine the recognition result of the land surface feature based on the surface semantic information in the feature map. achieve:
    获取地表特征的至少一个历史识别结果,其中,所述历史识别结果为在当前时刻之前确定的地表特征的识别结果;Acquiring at least one historical recognition result of a surface feature, where the historical recognition result is a recognition result of the surface feature determined before the current moment;
    根据所述地表特征的识别结果和所述地表特征的至少一个历史识别结果,确定地表变化趋势。According to the recognition result of the land surface feature and the at least one historical recognition result of the land surface feature, a change trend of the land surface is determined.
  42. The surface feature recognition device according to claim 41, wherein, when determining the surface change trend according to the recognition result of the surface feature and the at least one historical recognition result of the surface feature, the processor is configured to:
    acquire a first determination time point of the recognition result of the surface feature and a second determination time point of each historical recognition result;
    sort the recognition result and each historical recognition result according to the first determination time point and each second determination time point to obtain a recognition result queue;
    determine a plurality of candidate surface change trends according to every two adjacent recognition results in the recognition result queue;
    process the plurality of candidate surface change trends to obtain the surface change trend.
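A minimal sketch of the change-trend logic described in claim 42, assuming each recognition result can be reduced to a single scalar (for example, vegetated area in square metres); the scalar reduction and the averaging of candidate trends are illustrative assumptions.

```python
from typing import List, Tuple

def surface_change_trend(results: List[Tuple[float, float]]) -> float:
    """Each result is (determination time point, scalar measurement).

    Sort the results by determination time, compute one candidate trend
    (rate of change) per adjacent pair, then reduce the candidates to a
    single overall trend by averaging."""
    queue = sorted(results, key=lambda r: r[0])           # recognition result queue
    candidates = []
    for (t0, v0), (t1, v1) in zip(queue, queue[1:]):      # every two adjacent results
        if t1 > t0:
            candidates.append((v1 - v0) / (t1 - t0))      # candidate change trend
    if not candidates:
        return 0.0
    return sum(candidates) / len(candidates)              # processed overall trend
```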
  43. The surface feature recognition device according to any one of claims 27 to 36, wherein, after determining the recognition result of the surface feature according to the surface semantic information in the feature map, the processor is further configured to:
    acquire a three-dimensional surface map, and acquire surface disaster region information, surface disaster severity information, and surface disaster area information from the recognition result;
    mark the three-dimensional surface map according to the surface disaster region information, the surface disaster severity information, and the surface disaster area information, to obtain a target three-dimensional map marked with each disaster region, its disaster severity, and its disaster area.
  44. The surface feature recognition device according to claim 43, wherein, when marking the three-dimensional surface map according to the surface disaster region information, the surface disaster severity information, and the surface disaster area information, the processor is configured to:
    mark each disaster region in the three-dimensional surface map according to the surface disaster region information;
    mark the disaster severity corresponding to each disaster region according to the surface disaster severity information;
    mark the disaster area corresponding to each disaster region according to the surface disaster area information.
  45. The surface feature recognition device according to claim 44, wherein, when marking the disaster severity corresponding to each disaster region according to the surface disaster severity information, the processor is configured to:
    determine a disaster severity color corresponding to each disaster region according to the surface disaster severity information;
    mark the disaster severity of each disaster region using its corresponding disaster severity color.
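A minimal sketch of the marking step described in claims 44 and 45, operating on a rendered 2D view of the surface map; the severity-to-colour palette, the 50% blend, and the text annotation format are illustrative assumptions, not the patented marking scheme.

```python
from typing import Dict, List, Tuple
import numpy as np

# Illustrative severity-to-colour palette (RGB); the actual mapping is a design choice.
SEVERITY_COLORS: Dict[str, Tuple[int, int, int]] = {
    "light": (255, 255, 0),     # yellow
    "moderate": (255, 128, 0),  # orange
    "severe": (255, 0, 0),      # red
}

def mark_disaster_regions(
    map_rgb: np.ndarray,             # rendered view of the 3D surface map, shape (H, W, 3)
    region_masks: List[np.ndarray],  # one boolean (H, W) mask per disaster region
    severities: List[str],           # severity label per region
    areas_m2: List[float],           # affected area per region, in square metres
) -> Tuple[np.ndarray, List[str]]:
    """Overlay each disaster region in its severity colour and collect a text
    annotation recording its affected area."""
    marked = map_rgb.copy()
    labels = []
    for mask, severity, area in zip(region_masks, severities, areas_m2):
        color = np.array(SEVERITY_COLORS[severity], dtype=np.uint8)
        marked[mask] = (0.5 * marked[mask] + 0.5 * color).astype(np.uint8)  # 50% colour blend
        labels.append(f"{severity}: {area:.0f} m^2")
    return marked, labels
```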
  46. The surface feature recognition device according to claim 43, wherein, after marking the three-dimensional surface map according to the surface disaster region information, the surface disaster severity information, and the surface disaster area information to obtain the target three-dimensional map marked with each disaster region, its disaster severity, and its disaster area, the processor is further configured to:
    store the target three-dimensional map; and/or
    send the target three-dimensional map to a terminal device, so that the terminal device displays the target three-dimensional map; and/or
    send the target three-dimensional map to a cloud, so that the cloud stores the target three-dimensional map.
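A minimal sketch of the store/send options enumerated in claim 46, where any combination of the three outputs may be enabled; the function name and the callable parameters standing in for a terminal link and a cloud uploader are hypothetical.

```python
from pathlib import Path
from typing import Callable, Optional

def publish_target_map(
    map_bytes: bytes,
    local_path: Optional[Path] = None,
    terminal_send: Optional[Callable[[bytes], None]] = None,  # hypothetical push to a display terminal
    cloud_upload: Optional[Callable[[bytes], None]] = None,   # hypothetical upload to cloud storage
) -> None:
    """Store and/or forward the marked target 3D map."""
    if local_path is not None:
        local_path.write_bytes(map_bytes)   # store locally
    if terminal_send is not None:
        terminal_send(map_bytes)            # send to terminal device for display
    if cloud_upload is not None:
        cloud_upload(map_bytes)             # send to cloud for storage
```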
  47. A computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the surface feature recognition method according to any one of claims 1 to 20.
PCT/CN2019/106228 2019-09-17 2019-09-17 Earth surface feature identification method and device, unmanned aerial vehicle, and computer readable storage medium WO2021051278A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980033702.0A CN112154447A (en) 2019-09-17 2019-09-17 Surface feature recognition method and device, unmanned aerial vehicle and computer-readable storage medium
PCT/CN2019/106228 WO2021051278A1 (en) 2019-09-17 2019-09-17 Earth surface feature identification method and device, unmanned aerial vehicle, and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/106228 WO2021051278A1 (en) 2019-09-17 2019-09-17 Earth surface feature identification method and device, unmanned aerial vehicle, and computer readable storage medium

Publications (1)

Publication Number Publication Date
WO2021051278A1 (en) 2021-03-25

Family

ID=73891556

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/106228 WO2021051278A1 (en) 2019-09-17 2019-09-17 Earth surface feature identification method and device, unmanned aerial vehicle, and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN112154447A (en)
WO (1) WO2021051278A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11904871B2 (en) * 2019-10-30 2024-02-20 Deere & Company Predictive machine control
CN115903855B (en) * 2023-01-10 2023-05-09 北京航科星云科技有限公司 Forest farm pesticide spraying path planning method, device and equipment based on satellite remote sensing

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190102909A1 (en) * 2016-03-11 2019-04-04 Siemens Aktiengesellschaft Automated identification of parts of an assembly
CN109978947B (en) * 2019-03-21 2021-08-17 广州极飞科技股份有限公司 Method, device, equipment and storage medium for monitoring unmanned aerial vehicle
CN109977924A (en) * 2019-04-15 2019-07-05 北京麦飞科技有限公司 For real time image processing and system on the unmanned plane machine of crops
CN110232418B (en) * 2019-06-19 2021-12-17 达闼机器人有限公司 Semantic recognition method, terminal and computer readable storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105654103A (en) * 2014-11-12 2016-06-08 联想(北京)有限公司 Image identification method and electronic equipment
CN105173085A (en) * 2015-09-18 2015-12-23 山东农业大学 Automatic control system and method for variable pesticide spraying of unmanned aerial vehicle
CN106778888A (en) * 2016-12-27 2017-05-31 浙江大学 A kind of orchard pest and disease damage survey system and method based on unmanned aerial vehicle remote sensing
US20190180119A1 (en) * 2017-03-30 2019-06-13 Hrl Laboratories, Llc System for real-time object detection and recognition using both image and size features
CN106956778A (en) * 2017-05-23 2017-07-18 广东容祺智能科技有限公司 A kind of unmanned plane pesticide spraying method and system
CN109446959A (en) * 2018-10-18 2019-03-08 广州极飞科技有限公司 Partitioning method and device, the sprinkling control method of drug of target area

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113312991A (en) * 2021-05-14 2021-08-27 华能阜新风力发电有限责任公司 Front-end intelligent recognition system based on unmanned aerial vehicle
CN113296537A (en) * 2021-05-25 2021-08-24 湖南博瑞通航航空技术有限公司 Electric power unmanned aerial vehicle inspection method and system based on electric power tower model matching
CN113296537B (en) * 2021-05-25 2024-03-12 湖南博瑞通航航空技术有限公司 Electric power unmanned aerial vehicle inspection method and system based on electric power pole tower model matching
CN113537309A (en) * 2021-06-30 2021-10-22 北京百度网讯科技有限公司 Object identification method and device and electronic equipment
CN113537309B (en) * 2021-06-30 2023-07-28 北京百度网讯科技有限公司 Object identification method and device and electronic equipment
CN114067245A (en) * 2021-11-16 2022-02-18 中国铁路兰州局集团有限公司 Method and system for identifying hidden danger of external environment of railway
CN114299699A (en) * 2021-12-03 2022-04-08 浙江朱道模块集成有限公司 Landscape plant intelligence pronunciation sight identification system based on thing networking
CN114299699B (en) * 2021-12-03 2023-10-10 浙江朱道模块集成有限公司 Landscape plant intelligent voice scene identification system based on Internet of things
CN114675695A (en) * 2022-03-26 2022-06-28 太仓武港码头有限公司 Control method, system, equipment and storage medium for dust suppression of storage yard
CN114675695B (en) * 2022-03-26 2023-04-18 太仓武港码头有限公司 Control method, system, equipment and storage medium for dust suppression of storage yard
CN116630828A (en) * 2023-05-30 2023-08-22 中国公路工程咨询集团有限公司 Unmanned aerial vehicle remote sensing information acquisition system and method based on terrain environment adaptation
CN116630828B (en) * 2023-05-30 2023-11-24 中国公路工程咨询集团有限公司 Unmanned aerial vehicle remote sensing information acquisition system and method based on terrain environment adaptation

Also Published As

Publication number Publication date
CN112154447A (en) 2020-12-29

Similar Documents

Publication Publication Date Title
WO2021051278A1 (en) Earth surface feature identification method and device, unmanned aerial vehicle, and computer readable storage medium
AU2019276115B2 (en) Target Region Operation Planning Method and Apparatus, Storage Medium, and Processor
AU2019238711B2 (en) Method and apparatus for acquiring boundary of area to be operated, and operation route planning method
CN109845715B (en) Pesticide spraying control method, device, equipment and storage medium
CN104615146B (en) Unmanned aerial vehicle spraying operation automatic navigation method without need of external navigation signal
EP3119178B1 (en) Method and system for navigating an agricultural vehicle on a land area
CN106873630B (en) Flight control method and device and execution equipment
Duggal et al. Plantation monitoring and yield estimation using autonomous quadcopter for precision agriculture
CN109341702B (en) Route planning method, device and equipment in operation area and storage medium
CN110254722B (en) Aircraft system, aircraft system method and computer-readable storage medium
CN109283937A (en) A kind of plant protection based on unmanned plane sprays the method and system of operation
CN110832494A (en) Semantic generation method, equipment, aircraft and storage medium
CN113994171A (en) Path planning method, device and system
Kamat et al. A survey on autonomous navigation techniques
WO2021081896A1 (en) Operation planning method, system, and device for spraying unmanned aerial vehicle
CN111982096B (en) Operation path generation method and device and unmanned aerial vehicle
US20220214700A1 (en) Control method and device, and storage medium
CN114283067A (en) Prescription chart acquisition method and device, storage medium and terminal equipment
Basso A framework for autonomous mission and guidance control of unmanned aerial vehicles based on computer vision techniques
Hroob et al. Learned Long-Term Stability Scan Filtering for Robust Robot Localisation in Continuously Changing Environments
Parlange et al. Leveraging single-shot detection and random sample consensus for wind turbine blade inspection
US20230023069A1 (en) Vision-based landing system
Sarkar Intelligent Energy-Efficient Drones: Path Planning, Real-Time Monitoring and Decision-Making
Wendel Scalable visual navigation for micro aerial vehicles using geometric prior knowledge
Cielniak et al. Learned Long-Term Stability Scan Filtering for Robust Robot Localisation in Continuously Changing Environments

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19945940

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19945940

Country of ref document: EP

Kind code of ref document: A1