WO2021146969A1 - Distance measurement method, movable platform, device, and storage medium - Google Patents

Distance measurement method, movable platform, device, and storage medium

Info

Publication number
WO2021146969A1
WO2021146969A1 (PCT/CN2020/073656)
Authority
WO
WIPO (PCT)
Prior art keywords
image
drone
target area
pixel
distance
Prior art date
Application number
PCT/CN2020/073656
Other languages
French (fr)
Chinese (zh)
Inventor
刘宝恩 (Liu Bao'en)
李鑫超 (Li Xinchao)
王涛 (Wang Tao)
Original Assignee
深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SZ DJI Technology Co., Ltd. (深圳市大疆创新科技有限公司)
Priority to CN202080004149.0A priority Critical patent/CN112639881A/en
Priority to PCT/CN2020/073656 priority patent/WO2021146969A1/en
Publication of WO2021146969A1 publication Critical patent/WO2021146969A1/en

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/10Simultaneous control of position or course in three dimensions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods

Definitions

  • the present invention relates to the technical field of image recognition, in particular to a distance measurement method, a movable platform, equipment and storage medium.
  • UAV is an unmanned aerial vehicle operated by radio remote control equipment and self-provided program control device. Compared with manned aircraft, it has the characteristics of small size and low cost. It has been widely used in many fields, such as street scene shooting, power inspection, traffic monitoring, post-disaster rescue and so on.
  • the distance measurement is usually realized by using a distance measuring sensor, such as a lidar, which is configured on the UAV.
  • However, when the UAV is not equipped with a distance measuring sensor, distance measurement cannot be achieved, so the UAV is at greater risk of damage during the return flight.
  • the invention provides a distance measurement method, a movable platform, equipment and a storage medium, which are used to enable an unmanned aerial vehicle not equipped with a distance measuring sensor to realize distance measurement and ensure measurement accuracy.
  • the first aspect of the present invention is to provide a distance measurement method, which includes:
  • acquiring a first image of the airspace above the drone, and determining, in the first image, a target area that affects the flight of the drone according to the semantic category of the pixels in the first image;
  • controlling the drone to move a preset distance in a designated direction and acquiring a second image of the airspace above the drone; matching the pixel points of the target area in the first image and the second image to obtain matched pixels; and determining the distance between the target area and the drone according to the matched pixels.
  • the second aspect of the present invention is to provide a movable platform, the movable platform includes: a body, a power system and a control device;
  • the power system is arranged on the body and used to provide power for the movable platform
  • the control device includes a memory and a processor
  • the memory is used to store a computer program
  • the processor is configured to run a computer program stored in the memory to realize:
  • acquiring a first image of the airspace above the drone, and determining, in the first image, a target area that affects the flight of the drone according to the semantic category of the pixels in the first image;
  • controlling the drone to move a preset distance in a designated direction and acquiring a second image of the airspace above the drone; matching the pixel points of the target area in the first image and the second image to obtain matched pixels; and determining the distance between the target area and the drone according to the matched pixels.
  • the third aspect of the present invention is to provide a distance measuring device, which includes:
  • the memory is used to store a computer program;
  • the processor is configured to run a computer program stored in the memory to realize:
  • acquiring a first image of the airspace above the drone, and determining, in the first image, a target area that affects the flight of the drone according to the semantic category of the pixels in the first image;
  • controlling the drone to move a preset distance in a designated direction and acquiring a second image of the airspace above the drone; matching the pixel points of the target area in the first image and the second image to obtain matched pixels; and determining the distance between the target area and the drone according to the matched pixels.
  • The fourth aspect of the present invention is to provide a computer-readable storage medium that stores program instructions, and the program instructions are used to implement the distance measurement method of the first aspect.
  • The distance measurement method, movable platform, equipment, and storage medium provided by the present invention first obtain a first image of the airspace above the drone, and then perform semantic recognition on the first image so as to determine, in the first image, the target area that affects the flight of the drone.
  • the drone is controlled to move a preset distance in a designated direction to obtain a second image of the airspace above the drone, that is, the first image and the second image are shot at different locations.
  • the pixel points of the target area in the first image and the second image are matched to obtain matched pixels, and the distance between the target area and the drone is determined according to the matched pixels.
  • the present invention provides a method for realizing distance measurement through image recognition. Since the camera is an indispensable device to ensure the normal execution of the task of the UAV, the measurement method provided by the present invention will not affect the size and cost of the UAV while ensuring the accuracy of the measurement. In addition, using the measurement method provided by the present invention, distance measurement can also be achieved for drones that are not equipped with a depth sensor.
  • FIG. 1 is a schematic flowchart of a distance measurement method provided by an embodiment of the present invention
  • FIG. 2 is a schematic structural diagram of the UAV according to an embodiment of the present invention when its gimbal is in different states;
  • FIG. 3 is a schematic flowchart of a second image acquisition manner provided by an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of an annular field of view corresponding to an image provided by an embodiment of the present invention.
  • FIG. 5 is a schematic flowchart of a first image acquisition manner according to an embodiment of the present invention.
  • FIG. 6 is a schematic flowchart of a method for measuring the distance between a drone and a target area provided by an embodiment of the present invention
  • FIG. 7 shows the positional relationship of various parameters in the measurement method of FIG. 6 provided by an embodiment of the present invention;
  • FIG. 8 shows the positional relationship of various parameters in another measurement method provided by the embodiment of the present invention as shown in FIG. 6;
  • FIG. 9 is a schematic structural diagram of a distance measuring device provided by an embodiment of the present invention.
  • FIG. 10 is a schematic structural diagram of a movable platform provided by an embodiment of the present invention.
  • FIG. 11 is a schematic structural diagram of a distance measuring device provided by an embodiment of the present invention.
  • The distance measurement method provided by the present invention measures the distance between the drone and the target area above the drone that affects its flight; this distance measurement is particularly important during the automatic return process of the drone.
  • When the drone completes its flight mission, encounters a harsh natural environment such as protruding mountains during flight, or loses its communication connection with the ground base station, the drone often needs to start an automatic return in order to ensure its safety and avoid a damage accident. Since the automatic return of the drone includes an ascending flight phase, determining the distance between the drone and the target area becomes an important condition for judging whether the drone can return automatically. At this time, the obstacle distance measurement method provided by each embodiment of the present invention can be used to realize distance measurement.
  • FIG. 1 is a schematic flowchart of a distance measurement method provided by an embodiment of the present invention.
  • The execution subject of the distance measurement method is the measurement equipment.
  • the measurement device can be implemented as software or a combination of software and hardware.
  • The measurement equipment implements the distance measurement method to measure the distance between the UAV and the target area that affects the flight of the UAV.
  • the measurement equipment in this embodiment and the following embodiments may specifically be movable platforms, such as unmanned aerial vehicles, unmanned vehicles, unmanned ships, and so on. The following embodiments will be described by taking a drone as an execution subject as an example.
  • the method may include:
  • the camera configured on the drone may be a monocular camera.
  • This camera can be mounted on a gimbal that can be raised upward.
  • the first image of the airspace above the drone can be taken.
  • The gimbal in its non-raised and raised states can be as shown in Figure 2.
  • the angle of view corresponding to the first image may be the same as the angle of view of the monocular camera.
  • the drone can identify the semantic category to which each pixel in the first image belongs, that is, perform pixel-level semantic recognition on the first image.
  • this kind of semantic recognition can be accomplished with the aid of a neural network model, and the specific recognition process can be referred to the following related descriptions.
  • The above neural network model can realize binary classification, that is, distinguish the sky and obstacles in the first image; it can also realize multi-classification, that is, distinguish the sky, trees, buildings, etc. in the first image.
  • the target area affecting the flight of the drone is determined in the first image according to the semantic category of each pixel. If the target area contains obstacles, it will affect the normal ascent of the drone.
  • an alternative way is to determine the pixels used to describe the sky as the target pixels, and the target pixels constitute the target area.
  • Another alternative way is to determine the pixels in the first image that describe the sky and the obstacles adjacent to the sky as the target pixels, and the target pixels constitute the target area.
  • the selection of the target area can also take into account the flying environment of the drone and/or the size of the drone.
  • First, a candidate area is determined according to the category to which each pixel in the image belongs, where the candidate area may be composed of pixels used to describe the sky and the obstacles closest to the sky.
  • Then, the range of the candidate area is adjusted. For example, when the size of the drone is small or the obstacles in the flying environment are sparsely distributed, the candidate area can be reduced by a preset multiple to obtain the target area; otherwise, the candidate area can be expanded by a preset multiple to obtain the target area.
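  • As an illustration of such an adjustment (a sketch, not the patent's implementation), a binary candidate-area mask can be shrunk or grown with plain NumPy; the mask size and the one-pixel step are assumptions:

```python
import numpy as np

def adjust_candidate_area(mask: np.ndarray, expand: bool) -> np.ndarray:
    """Grow or shrink a binary candidate-area mask by one pixel.

    A one-pixel dilation (expand=True) or erosion (expand=False),
    implemented with shifted copies of the mask.
    """
    padded = np.pad(mask, 1, constant_values=not expand)
    shifts = [padded[1:-1, 1:-1], padded[:-2, 1:-1], padded[2:, 1:-1],
              padded[1:-1, :-2], padded[1:-1, 2:]]
    stacked = np.stack(shifts)
    # Dilation: a pixel is set if it or any 4-neighbour is set.
    # Erosion: a pixel survives only if it and all 4-neighbours are set.
    return stacked.any(axis=0) if expand else stacked.all(axis=0)

candidate = np.zeros((7, 7), dtype=bool)
candidate[2:5, 2:5] = True                                # 3x3 candidate block
shrunk = adjust_candidate_area(candidate, expand=False)   # only the centre survives
grown = adjust_candidate_area(candidate, expand=True)     # grows outward by one pixel
```

A small drone over sparse obstacles would use the shrunk mask; a large drone in a cluttered environment would use the grown one.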
  • the drone in response to the flight control instruction, can fly a preset distance from the current position in the designated direction to another position, and at this other position, a second image of the airspace above the drone can be taken.
  • The current position of the drone can be called the first position, and the first image is taken at the first position; the other position is called the second position, and the second image is taken at the second position.
  • The above-mentioned designated direction may be upward; that is, after the first image is taken at the first position, the drone may respond to the received ascending flight control instruction and fly upward from the first position.
  • the monocular camera can take a second image at the second position.
  • the difference between the first position and the second position is usually small, for example, a few centimeters.
  • the ascent flight control command can be generated autonomously by the drone, or sent to the drone by the pilot through the control device.
  • the drone can match the pixel points in the target area of the first image with the pixel points in the second image to obtain matching pixels, that is, to obtain at least one matching pixel pair.
  • the objects contained in the first image and the second image are usually the same, but there will be a slight difference in position.
  • The two pixels contained in any matched pixel pair describe the same object.
  • whether the pixels are matched or not can be reflected by the similarity between the pixels.
  • S105 Determine the distance between the target area and the drone according to the matching pixels.
  • A preset ranging model can be used to realize the ranging, or the triangulation ranging principle can be used to realize the ranging.
  • For the specific ranging process, refer to the embodiments shown in Figs. 6 to 8 below.
  • the first image of the airspace above the drone is acquired, and after semantic recognition is performed on it, the target area that affects the flight of the drone can be obtained. Then control the drone to move a preset distance in a designated direction to obtain a second image of the airspace above the drone. Then, the pixel points of the target area in the first image and the second image are matched to obtain matching pixels. Finally, the distance between the target area and the drone is determined according to the matching pixels, and further, the flight status of the drone can be controlled according to this distance. It can be seen that the present invention provides a method for realizing obstacle distance measurement through image recognition. Since the image to be recognized is taken by the necessary camera of the drone itself, it will not affect the size and cost of the drone. At the same time, this method can also be used to measure distances for drones that are not equipped with depth sensors to ensure the accuracy of the measurement.
  • Step 102 of the embodiment shown in FIG. 1 mentioned using the neural network model to perform pixel-level semantic recognition of the first image; the semantic recognition process is described in detail below:
  • the neural network model may specifically be a convolutional neural network (Convolutional Neural Networks, CNN) model.
  • the neural network model can include multiple computing nodes. Each computing node can include a convolution (Conv) layer, batch normalization (BN), and an activation function ReLU.
  • The computing nodes can be connected in a skip connection manner.
  • Input data of K×H×W can be input into the neural network model, and after processing by the neural network model, output data of C×H×W can be obtained.
  • K can represent the number of input channels, and K can be equal to 4, corresponding to the four channels of red (R), green (G), blue (B), and depth (D), respectively;
  • H can represent the height of the input image (that is, the first image),
  • W can represent the width of the input image, and C can represent the number of categories.
  • When the input image is too large, the input image can be cut into N sub-images.
  • In this case, the input data can be N×K×H′×W′ and the output data can be N×C×H′×W′, where H′ can represent the height of the sub-image and W′ can represent the width of the sub-image.
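  • The tiling described above can be sketched as follows; the concrete sizes (K = 4 channels, a 128×128 image cut into N = 4 non-overlapping 64×64 sub-images) are assumptions for illustration:

```python
import numpy as np

# Hypothetical sizes: K=4 input channels (R, G, B, D), a 128x128 input
# image cut into four 64x64 sub-images for the network.
K, H, W = 4, 128, 128
Hs, Ws = 64, 64
image = np.zeros((K, H, W), dtype=np.float32)

# Cut (K, H, W) into (N, K, H', W') tiles, N = (H/H') * (W/W').
tiles = (image
         .reshape(K, H // Hs, Hs, W // Ws, Ws)
         .transpose(1, 3, 0, 2, 4)
         .reshape(-1, K, Hs, Ws))
assert tiles.shape == (4, 4, 64, 64)   # N x K x H' x W'
```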
  • the feature map may also be obtained in other ways, which is not limited in this application.
  • Using the above-mentioned pre-trained neural network model to process the environment image to obtain a feature map may specifically include the following steps:
  • Step 1 Input the environment image into the neural network model to obtain the model output result of the neural network model.
  • the model output result of the neural network model may include the confidence feature maps output by multiple output channels, and the multiple output channels can correspond to multiple object categories one-to-one, and the pixel values of the confidence feature maps of a single object category are used To characterize the probability that a pixel is an object category.
  • Step 2 According to the model output result of the neural network model, a feature map containing semantic information is obtained.
  • Among the multiple confidence feature maps corresponding one-to-one to the multiple output channels, the object category of the confidence feature map with the largest pixel value at a given pixel location may be used as the object category of that pixel location, so as to obtain the feature map.
  • For example, the number of output channels of the neural network model is 4, and the output result of each channel is a confidence feature map, that is, confidence feature map 1 to confidence feature map 4. Confidence feature map 1 corresponds to the sky, confidence feature map 2 to buildings, confidence feature map 3 to trees, and confidence feature map 4 to "other". Among these categories, all except the sky can be regarded as obstacles.
  • If the pixel value at pixel location (100, 100) is 70 in confidence feature map 1, 50 in confidence feature map 2, 20 in confidence feature map 3, and 20 in confidence feature map 4, it can be determined that pixel location (100, 100) is the sky.
  • If the pixel value at pixel location (100, 80) is 20 in confidence feature map 1, 30 in confidence feature map 2, 20 in confidence feature map 3, and 70 in confidence feature map 4, it can be determined that pixel location (100, 80) is "other", that is, an obstacle.
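  • The per-pixel category selection described above amounts to an argmax across the confidence feature maps. A minimal sketch using the example values above (the image size and the channel indexing 0..3 are assumptions):

```python
import numpy as np

# Confidence maps for C=4 categories, indexed 0..3 in the order
# sky, building, tree, other (matching feature maps 1-4 above).
C, H, W = 4, 128, 128
conf = np.zeros((C, H, W), dtype=np.float32)
conf[:, 100, 100] = [70, 50, 20, 20]   # values from the example above
conf[:, 100, 80] = [20, 30, 20, 70]

labels = conf.argmax(axis=0)           # per-pixel category, shape (H, W)
assert labels[100, 100] == 0           # sky
assert labels[100, 80] == 3            # "other", i.e. an obstacle
sky_mask = labels == 0                 # candidate area for upward flight
```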
  • Step 104 of the embodiment shown in FIG. 1 also provided a pixel matching method: first calculate the similarity between each pixel in the target area and each pixel in the second image, and then obtain at least one pair of matching pixels according to the similarity.
  • Since the target area and the second image each contain a large number of pixels, the matching process involves a large amount of calculation, which occupies more of the drone's computing resources and also makes the matching efficiency low.
  • the drone can use a feature point detection algorithm pre-configured in itself to extract the target area and feature pixel points in the second image respectively.
  • the detected characteristic pixel points are usually corner points in the image.
  • The aforementioned feature point detection algorithm may be the Scale-Invariant Feature Transform (SIFT) algorithm, the Speeded-Up Robust Features (SURF) algorithm, the Binary Robust Independent Elementary Features (BRIEF) algorithm, and so on.
  • the drone can perform matching processing on the target area and the characteristic pixel points in the second image to obtain at least one matching pixel pair.
  • For any matched pixel pair, the two characteristic pixels in it also describe the same object.
  • Optionally, the similarity between pixels may be the similarity between the descriptors of the characteristic pixels. If the similarity is greater than or equal to a preset threshold, it is determined that the two characteristic pixel points match, and the two form a matched pixel pair.
  • the feature pixel points obtained by using different feature point detection algorithms may use different similarity calculation methods.
  • For descriptors obtained by the BRIEF algorithm, the similarity can be obtained by calculating the Hamming distance between the descriptors; for descriptors obtained by the SIFT or SURF algorithm, the Euclidean distance between the descriptors corresponding to the feature pixels can be calculated to obtain the similarity.
  • the number of pixels for matching can be greatly reduced, which also greatly reduces the amount of calculation in the matching process to ensure matching efficiency.
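  • A minimal sketch of the descriptor matching described above, for BRIEF-style binary descriptors; the 32-byte descriptor length, the distance threshold, and the greedy nearest-neighbour strategy are assumptions (the patent phrases the test as a similarity threshold; a Hamming distance threshold is the equivalent form):

```python
def hamming_distance(a: bytes, b: bytes) -> int:
    """Number of differing bits between two binary descriptors."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def match_descriptors(desc1, desc2, max_dist=40):
    """Greedy nearest-neighbour matching between two descriptor lists.

    Returns (i, j) index pairs whose Hamming distance is within max_dist.
    """
    pairs = []
    for i, d1 in enumerate(desc1):
        dists = [hamming_distance(d1, d2) for d2 in desc2]
        j = min(range(len(desc2)), key=dists.__getitem__)
        if dists[j] <= max_dist:       # accept only sufficiently close matches
            pairs.append((i, j))
    return pairs

d = bytes(32)                          # all-zero 256-bit descriptor
e = bytes([0xFF] * 32)                 # all-ones descriptor
assert hamming_distance(d, e) == 256
assert match_descriptors([d], [e, d]) == [(0, 1)]
```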
  • The field of view corresponding to the first image is the same as the field of view of the monocular camera, and both are relatively small. It is easy to understand that the larger the field of view corresponding to the first image, the more comprehensively it describes the airspace above the drone.
  • With a first image having such a large field of view, the distance between the target area and the drone can be calculated more accurately, and the automatic return of the drone can be controlled more accurately.
  • an optional implementation manner may be:
  • S1011 Respond to the first flight control instruction to make the UAV rotate and fly one circle in situ at the first position.
  • S1012 Obtain the first image taken by the monocular camera of the drone during the rotating flight of the drone.
  • When the drone is hovering at the first position, it responds to the first flight control command, so that the drone rotates and flies one full circle at the first position. During this rotating flight, the monocular camera on the drone can take a first image corresponding to an annular field of view. This annular field of view can be as shown in Figure 4.
  • an optional implementation manner may be:
  • S1031 Respond to the second flight control instruction to cause the drone to descend and fly a preset distance from the first position to the second position.
  • the monocular camera on the drone can take an image of the airspace above the drone, which is the first image.
  • the drone will descend and fly from the current first position to the second position.
  • the monocular camera configured on the drone will again take an image of the airspace above the drone, which is the second image.
  • the drone has obtained the first image and the second image.
  • the field of view corresponding to the second image may be the same as the field of view of the monocular camera.
  • the field of view of the monocular camera is usually small. Similar to the description in the embodiment shown in Figure 3, the larger the field of view corresponding to the second image, the more comprehensive the description of the airspace above the drone by the second image.
  • the second image with this large field of view can also more accurately calculate the distance between the target area and the drone, and can more accurately control the drone to return home.
  • Optionally, after the drone responds to the second flight control command and descends to the second position, it can also respond to a third flight control command, so that the drone rotates and flies one full circle at the second position, and the second image taken by the monocular camera during the drone's rotating flight is obtained. After the rotating flight, the second image obtained corresponds to an annular field of view of the airspace above the drone. This annular field of view can also be as shown in Figure 4.
  • In this way, the drone can obtain the first image and the second image with an annular field of view, so that the distance between the target area and the drone can be calculated more accurately based on the large-field-of-view images, and the drone's return can be controlled more accurately.
  • the drone can take a first image at the first position, and after a descending flight, it can obtain a second image taken at the second position.
  • For this method of obtaining the first image and the second image through descending flight, after the UAV carries out pixel matching, as shown in Figure 6, an option is to determine the distance between the UAV and the target area based on the matched pixel pairs; that is, an optional implementation of step 105 can be:
  • S1051 Determine a first distance between the first pixel point and the image center of the first image.
  • S1052 Determine a second distance between the second pixel point and the image center of the second image.
  • S1053 Determine the distance between the target area and the drone according to the preset distance, the camera parameters of the monocular camera, the first distance, and the second distance.
  • the drone can already obtain at least one matched pixel pair, where any matched pixel pair may consist of a first pixel in the first image and a second pixel in the second image.
  • the drone can use any matching pixel pair to determine the distance between the drone and the target area.
  • Any matching pixel pair A includes a first pixel A1 and a second pixel A2; the image center of the first image containing the first pixel A1 is O1, and the image center of the second image containing the second pixel A2 is O2.
  • the distance between the target area and the drone can be calculated according to the following formula:
  • A0 is the target area;
  • x is the horizontal distance between the drone and the target area, that is, the line segment O0A0;
  • z is the vertical distance between the drone and the target area, that is, the line segment O0P1;
  • O0 is the optical center of the monocular camera configured on the UAV;
  • d0 is the distance between the first position P1 where the first image is captured and the second position P2 where the second image is captured;
  • d1 is the distance between the first pixel A1 and the image center O1 of the first image;
  • d2 is the distance between the second pixel A2 and the image center O2 of the second image;
  • f is the focal length of the monocular camera, that is, the line segment O1P1, which equals the line segment O2P2.
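  • The formula itself does not survive in the text above, but from the quantities defined (d0, d1, d2, f) it can be plausibly reconstructed with similar triangles for the descending-flight case: d1/f = x/z at the first position, and d2/f = x/(z + d0) after descending by d0. A sketch of that reconstruction (an assumption, not the patent's verbatim formula):

```python
def triangulate(d0: float, d1: float, d2: float, f: float):
    """Recover horizontal distance x and vertical distance z to the target
    from two projections of the same point taken d0 apart.

    Similar triangles for the descending-flight case:
        d1 / f = x / z          (first image, taken at P1)
        d2 / f = x / (z + d0)   (second image, taken d0 below P1)
    Solving the two equations gives z and then x.
    """
    z = d0 * d2 / (d1 - d2)        # vertical distance from P1 to the target
    x = d1 * z / f                 # horizontal distance to the target
    return x, z

# Round-trip check with made-up geometry: target 10 m up and 2 m across,
# 10 mm focal length, camera descends 0.5 m between the two shots.
x_true, z_true, f, d0 = 2.0, 10.0, 0.01, 0.5
d1 = f * x_true / z_true
d2 = f * x_true / (z_true + d0)
x, z = triangulate(d0, d1, d2, f)
assert abs(x - x_true) < 1e-6 and abs(z - z_true) < 1e-6
```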
  • Specifically, the drone can first obtain the pixel coordinates, in the first image, of the first pixel A1 and of the image center O1 of the first image. Then, the first distance d1 between the first pixel A1 and the image center O1 is determined according to the pixel coordinates of the two.
  • the calculation method of the second distance d 2 is also similar, and will not be repeated here.
  • the distance between the target area and the drone can be determined by using any matching pixel pair.
  • To obtain a more accurate result, the above calculation can be performed on multiple matched pixel pairs to obtain multiple distances, and the distance between the drone and the target area can be determined based on the multiple distances.
  • For example, the average or median of the multiple distances is determined as the distance between the drone and the target area.
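  • A minimal illustration of the median option, with made-up distance values; the median is robust to the occasional bad match:

```python
from statistics import median

# Distances (in metres, hypothetical values) computed from several
# matched pixel pairs; 25.0 is an outlier from a bad match.
distances = [10.2, 9.8, 10.1, 25.0, 10.0]
assert median(distances) == 10.1   # outlier does not skew the result
```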
  • the drone can take a first image at the first position, and then after an ascending flight, obtain a second image taken at the second position.
  • the above-mentioned method can also be used to realize distance measurement.
  • The positional relationship among the above-mentioned first position P1, the image center O1 of the first image, the first pixel A1, the second position P2, the image center O2 of the second image, and the second pixel A2 becomes as shown in Figure 8.
  • A0 is the target area;
  • x is the horizontal distance between the drone and the target area, that is, the line segment O0A0;
  • z is the vertical distance between the drone and the target area, that is, the line segment O0P1;
  • O0 is the optical center of the monocular camera configured on the UAV;
  • d0 is the distance between the first position P1 where the first image is captured and the second position P2 where the second image is captured;
  • d1 is the distance between the first characteristic pixel A1 and the image center O1 of the first image;
  • d2 is the distance between the second characteristic pixel A2 and the image center O2 of the second image;
  • f is the focal length of the monocular camera, that is, the line segment O1P1, which equals the line segment O2P2.
  • Optionally, pixel point matching can also be performed on the characteristic pixel points in the first image and the second image. Any matching pixel pair obtained in this way contains a first characteristic pixel in the first image and a second characteristic pixel in the second image. The first distance between the first characteristic pixel and the image center of the first image, and the second distance between the second characteristic pixel and the image center of the second image, can then be determined, and the distance between the target area and the drone can be calculated using the method shown in Figure 7 or Figure 8.
  • After the distance is obtained, the movement of the drone can be controlled according to the distance. Specifically, if the distance meets the preset condition, it indicates that the target area above the drone is far away and the ascending flight phase during the return process will not be affected. In this case, the drone can respond to the return control instruction and return home automatically. Otherwise, the drone is controlled to continue hovering.
  • When it is determined that the distance between the drone and the target area meets the preset condition, the drone can be controlled to return. It is easy to understand that any flight process of a drone requires battery power. Therefore, before controlling the drone to return, the power required during the return process can also be determined. If the current remaining power is greater than the required power, the drone is controlled to return home.
  • An alternative way is to first estimate the wind speed information from the current position to the return destination based on historical wind speed information, then determine the ground speed information for flying from the current position to the return destination, so as to determine the power required for the drone's return process based on the wind speed information and the ground speed information.
  • The point cloud data corresponding to the target area can be further obtained. This point cloud data describes the flight environment in which the drone is located, and can be used to plan a return path for the drone, ensuring that the drone can return home automatically and safely along this path.
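The patent does not say how the point cloud is constructed; one common approach, assumed here, is to back-project each matched pixel through a pinhole camera model using the measured distance (the focal lengths and principal point below are illustrative values, not the drone camera's real intrinsics):

```python
def backproject(u: float, v: float, depth: float,
                fx: float = 600.0, fy: float = 600.0,
                cx: float = 320.0, cy: float = 240.0) -> tuple:
    """Pinhole back-projection of pixel (u, v) at `depth` metres along the
    optical axis into camera coordinates (X, Y, Z)."""
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    return (x, y, depth)

# A point cloud is then simply the back-projection of every pixel whose
# distance was measured; a pixel at the principal point maps straight up
# the optical axis:
print(backproject(320.0, 240.0, 4.0))  # (0.0, 0.0, 4.0)
```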
  • FIG. 9 is a schematic structural diagram of a distance measuring device provided by an embodiment of the present invention. Referring to FIG. 9, this embodiment provides a distance measuring device that can execute the distance measurement method described above. Specifically, the distance measuring device includes:
  • the acquiring module 11 is configured to acquire a first image of the airspace above the drone.
  • the area determining module 12 is configured to determine, in the first image, a target area that affects the flight of the drone according to the semantic category of pixels in the first image.
  • the response module 13 is configured to respond to flight instructions, so that the UAV moves a predetermined distance in a designated direction, and obtains a second image of the airspace above the UAV.
  • the matching module 14 is configured to match the pixel points of the target area in the first image and the second image to obtain matching pixels;
  • the distance determining module 15 is configured to determine the distance between the target area and the UAV according to the matching pixel points.
  • the device shown in FIG. 9 can also execute the methods of the embodiments shown in FIG. 1 to FIG. 8.
  • FIG. 10 is a schematic structural diagram of a movable platform provided by an embodiment of the present invention.
  • an embodiment of the present invention provides a movable platform, and the movable platform is at least one of the following: an aircraft, an unmanned ship, or an unmanned vehicle; specifically, the movable platform includes: a body 21, a power system 22, and a control device 23.
  • the power system 22 is arranged on the body 21 and is used to provide power for the movable platform.
  • the control device 23 includes a memory 231 and a processor 232.
  • the memory is used to store a computer program
  • the processor is configured to run a computer program stored in the memory to realize:
  • acquire a first image of the airspace above the drone; in the first image, determine the target area that affects the flight of the drone according to the semantic categories of the pixels in the first image; respond to a flight instruction so that the drone moves a preset distance in a designated direction, and obtain a second image of the airspace above the drone; match the pixel points of the target area in the first image and the second image to obtain matching pixels;
  • the distance between the target area and the drone is determined according to the matching pixel points.
  • the movable platform also includes a monocular camera 24, which is arranged on the body 21;
  • the processor 232 is further configured to: respond to a first flight control instruction, so that the drone rotates in situ for one circle at the first position;
  • the first image captured by the monocular camera configured on the drone is acquired.
  • the processor 232 is further configured to: respond to a second flight control instruction to cause the drone to descend and fly a preset distance from the first position to a second position;
  • the second image captured by the monocular camera is acquired.
  • the monocular camera 24 is mounted on a gimbal that can be tilted upward, so that the monocular camera 24 can capture the first image and the second image;
  • the processor 232 is further configured to: respond to a third flight control instruction, so that the UAV rotates and flies in situ at the second position for one circle;
  • processor 232 is further configured to: identify the semantic category of pixels in the first image;
  • the target area in the first image is determined.
  • the semantic category of the pixel includes sky and obstacles
  • the processor 232 is further configured to: according to the semantic categories of the pixels, determine target pixels in the first image that describe the sky and the obstacles closest to the sky; the target area is formed by these target pixels.
  • processor 232 is further configured to: determine a candidate area in the first image according to the semantic category of the pixel;
  • the candidate area is adjusted to obtain the target area.
  • processor 232 is further configured to: calculate the similarity between the pixels in the target area and the pixels in the second image;
  • a pixel point that matches the pixel point in the target area is determined in the second image.
  • the processor 232 is further configured to: if the distance between the drone and the target area meets a preset condition, respond to a return home control instruction to make the drone return home automatically.
  • the processor 232 is further configured to determine the point cloud data corresponding to the target area according to the distance between the target area and the drone.
  • the movable platform shown in FIG. 10 can execute the methods of the embodiments shown in FIGS. 1 to 8.
  • For parts that are not described in detail in this embodiment, please refer to the related descriptions of the embodiments shown in FIG. 1 to FIG. 8. For the implementation process and technical effects of this technical solution, please refer to the descriptions in the embodiments shown in FIG. 1 to FIG. 8, which will not be repeated here.
  • the structure of the distance measuring device shown in FIG. 11 can be implemented as an electronic device, which can be a drone.
  • the electronic device may include: one or more processors 31 and one or more memories 32.
  • the memory 32 is used to store a program that supports the electronic device to execute the distance measurement method provided in the embodiments shown in FIGS. 1 to 8 above.
  • the processor 31 is configured to execute a program stored in the memory 32.
  • the program includes one or more computer instructions, and when the one or more computer instructions are executed by the processor 31, the following steps can be implemented:
  • acquire a first image of the airspace above the drone; in the first image, determine the target area that affects the flight of the drone according to the semantic categories of the pixels in the first image; respond to a flight instruction so that the drone moves a preset distance in a designated direction, and obtain a second image of the airspace above the drone; match the pixel points of the target area in the first image and the second image to obtain matching pixels;
  • the distance between the target area and the drone is determined according to the matching pixel points.
  • the structure of the distance measuring device may further include a communication interface 33 for the electronic device to communicate with other devices or a communication network.
  • the device also includes a monocular camera
  • the processor 31 is further configured to: respond to the first flight control instruction to make the UAV rotate and fly one circle in situ at the first position;
  • the first image captured by the monocular camera configured on the drone is acquired.
  • the processor 31 is further configured to: respond to a second flight control instruction to cause the drone to descend and fly a preset distance from the first position to a second position;
  • the second image captured by the monocular camera is acquired.
  • the monocular camera is mounted on a gimbal that can be tilted upward, so that the monocular camera can capture the first image and the second image;
  • the processor 31 is further configured to: respond to a third flight control instruction, so that the UAV rotates and flies in situ at the second position for one circle;
  • processor 31 is further configured to: identify the semantic category of pixels in the first image;
  • the target area in the first image is determined.
  • the semantic category of the pixel includes sky and obstacle
  • the processor 31 is further configured to: according to the semantic categories of the pixels, determine target pixels in the first image that describe the sky and the obstacles closest to the sky; the target area is formed by these target pixels.
  • the matching pixel pair is determined according to the similarity between the characteristic pixel points.
  • the processor 31 is further configured to: determine a candidate area in the first image according to the semantic category of the pixel;
  • the candidate area is adjusted to obtain the target area.
  • processor 31 is further configured to: calculate the similarity between the pixels in the target area and the pixels in the second image;
  • a pixel point that matches the pixel point in the target area is determined in the second image.
  • the processor 31 is further configured to: if the distance between the drone and the target area meets a preset condition, respond to a return home control instruction to make the drone return home automatically.
  • the processor 31 is further configured to: determine the point cloud data corresponding to the target area according to the distance between the target area and the drone.
  • the device shown in FIG. 11 can execute the methods of the embodiments shown in FIG. 1 to FIG. 8. For parts that are not described in detail in this embodiment, reference may be made to the related description of the embodiments shown in FIG. 1 to FIG. 8. For the implementation process and technical effects of this technical solution, please refer to the description in the embodiment shown in FIG. 1 to FIG. 8, which will not be repeated here.
  • In addition, an embodiment of the present invention provides a computer-readable storage medium. The storage medium is a computer-readable storage medium that stores program instructions, and the program instructions are used to implement the distance measurement method described above.
  • the related detection device may be, for example, an IMU (inertial measurement unit).
  • the embodiments of the remote control device described above are merely illustrative.
  • the division of the modules or units is only a logical function division; in actual implementation there may be other division methods, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, remote control devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer readable storage medium.
  • the technical solution of the present invention, in essence, or the part that contributes to the existing technology, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions to make a processor execute all or part of the steps of the methods described in the embodiments of the present invention.
  • the aforementioned storage media include: USB flash drives, removable hard disks, Read-Only Memory (ROM), Random Access Memory (RAM), magnetic disks, optical disks, and other media that can store program code.

Abstract

Disclosed are a distance measurement method, a movable platform, a device, and a storage medium. The method comprises: obtaining a first image of the airspace above an unmanned aerial vehicle; performing semantic recognition on the first image to obtain a target region affecting the flight of the unmanned aerial vehicle; controlling the unmanned aerial vehicle to move a preset distance in a designated direction so as to obtain a second image of the airspace above the unmanned aerial vehicle; matching the pixels of the target region in the first image with those in the second image to obtain matched pixels; and finally, determining a distance between the target region and the unmanned aerial vehicle according to the matched pixels, and further controlling a flight state of the unmanned aerial vehicle according to the distance. Because the images used in this image-recognition-based distance measurement are captured by a camera that is necessarily provided on the unmanned aerial vehicle, distance measurement can also be achieved by an unmanned aerial vehicle on which no depth sensor is configured.

Description

Distance measurement method, movable platform, device, and storage medium

Technical Field

The present invention relates to the technical field of image recognition, and in particular to a distance measurement method, a movable platform, a device, and a storage medium.

Background Art

A UAV (unmanned aerial vehicle) is an unmanned aircraft operated by radio remote control equipment and a self-contained program control device. Compared with manned aircraft, it is small and inexpensive, and it is now widely used in many fields, such as street-view photography, power-line inspection, traffic monitoring, and post-disaster rescue.

Obstacles need to be avoided at every stage of UAV flight, especially during the return-home phase, in which the UAV performs an ascending flight. It is therefore necessary to detect the distance between the UAV and the area above it that affects its flight, in order to determine whether this area, which usually contains obstacles, will affect the UAV's ascent.

In the prior art, distance measurement is usually realized by a ranging sensor configured on the UAV, such as a lidar. When the UAV is not equipped with a ranging sensor, distance measurement cannot be performed, and the UAV runs a greater risk of damage during the return-home process.

Summary of the Invention

The present invention provides a distance measurement method, a movable platform, a device, and a storage medium, which enable a UAV not equipped with a ranging sensor to perform distance measurement while ensuring measurement accuracy.
A first aspect of the present invention provides a distance measurement method, the method comprising:

acquiring a first image of the airspace above the UAV;

in the first image, determining a target area that affects the flight of the UAV according to the semantic categories of the pixels in the first image;

responding to a flight instruction so that the UAV moves a preset distance in a designated direction, and obtaining a second image of the airspace above the UAV;

matching the pixels of the target area in the first image and the second image to obtain matching pixels;

determining the distance between the target area and the UAV according to the matching pixels.
A second aspect of the present invention provides a movable platform, the movable platform comprising: a body, a power system, and a control device;

the power system is arranged on the body and is used to provide power for the movable platform;

the control device comprises a memory and a processor;

the memory is used to store a computer program;

the processor is configured to run the computer program stored in the memory to realize:

acquiring a first image of the airspace above the UAV;

in the first image, determining a target area that affects the flight of the UAV according to the semantic categories of the pixels in the first image;

responding to a flight instruction so that the UAV moves a preset distance in a designated direction, and obtaining a second image of the airspace above the UAV;

matching the pixels of the target area in the first image and the second image to obtain matching pixels;

determining the distance between the target area and the UAV according to the matching pixels.
A third aspect of the present invention provides a distance measuring device, the device comprising:

a memory, used to store a computer program;

a processor, configured to run the computer program stored in the memory to realize:

acquiring a first image of the airspace above the UAV;

in the first image, determining a target area that affects the flight of the UAV according to the semantic categories of the pixels in the first image;

responding to a flight instruction so that the UAV moves a preset distance in a designated direction, and obtaining a second image of the airspace above the UAV;

matching the pixels of the target area in the first image and the second image to obtain matching pixels;

determining the distance between the target area and the UAV according to the matching pixels.

A fourth aspect of the present invention provides a computer-readable storage medium. The storage medium is a computer-readable storage medium that stores program instructions, and the program instructions are used to implement the distance measurement method of the first aspect.
According to the distance measurement method, movable platform, device, and storage medium provided by the present invention, a first image of the airspace above the UAV is first acquired, and semantic recognition is performed on the first image to determine, in the first image, the target area that affects the flight of the UAV. The UAV is then controlled to move a preset distance in a designated direction to obtain a second image of the airspace above the UAV; that is, the first image and the second image are captured at different positions. The pixels of the target area in the first image and the second image are then matched to obtain matching pixels, and the distance between the target area and the UAV is determined from these matching pixels. Further, the flight state of the UAV, that is, returning home or hovering in place, can be controlled according to this distance.

As can be seen from the above description, compared with the prior-art approach of ranging with a depth sensor, the present invention provides a method that realizes distance measurement through image recognition. Since a camera is an indispensable device for a UAV to perform its normal tasks, the measurement method provided by the present invention ensures measurement accuracy without affecting the size or cost of the UAV. Moreover, with the measurement method provided by the present invention, distance measurement can be achieved even by a UAV that is not equipped with a depth sensor.
Description of the Drawings

The drawings described here are provided for a further understanding of the application and constitute a part of the application. The exemplary embodiments of the application and their descriptions are used to explain the application and do not constitute an improper limitation of the application. In the drawings:

FIG. 1 is a schematic flowchart of a distance measurement method provided by an embodiment of the present invention;

FIG. 2 is a schematic structural diagram of a UAV provided by an embodiment of the present invention with its gimbal in different states;

FIG. 3 is a schematic flowchart of a second image acquisition method provided by an embodiment of the present invention;

FIG. 4 is a schematic diagram of an annular field of view corresponding to an image, provided by an embodiment of the present invention;

FIG. 5 is a schematic flowchart of a first image acquisition method provided by an embodiment of the present invention;

FIG. 6 is a schematic flowchart of a method for measuring the distance between a UAV and a target area, provided by an embodiment of the present invention;

FIG. 7 shows the positional relationship of the parameters in one measurement method provided by the embodiment of the present invention shown in FIG. 6;

FIG. 8 shows the positional relationship of the parameters in another measurement method provided by the embodiment of the present invention shown in FIG. 6;

FIG. 9 is a schematic structural diagram of a distance measuring apparatus provided by an embodiment of the present invention;

FIG. 10 is a schematic structural diagram of a movable platform provided by an embodiment of the present invention;

FIG. 11 is a schematic structural diagram of a distance measuring device provided by an embodiment of the present invention.
Detailed Description

In order to make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative work shall fall within the protection scope of the present invention.

Unless otherwise defined, all technical and scientific terms used herein have the same meanings as commonly understood by those skilled in the technical field of the present invention. The terms used in the specification of the present invention are only for the purpose of describing specific embodiments and are not intended to limit the present invention.
The distance measurement method provided by the present invention measures the distance between the UAV and the target area above it that affects its flight; this measurement is particularly important during the UAV's automatic return-home process.

Specifically, when the UAV completes its flight mission, encounters a harsh natural environment such as a protruding mountain peak during flight, or loses its communication connection with the ground base station, the UAV often needs to start an automatic return home in order to ensure its safety and avoid damage. Since the automatic return-home process includes an ascending flight phase, determining the distance between the UAV and the target area is an important condition for judging whether the UAV can return home automatically. In this situation, the obstacle distance measurement method provided by the embodiments of the present invention can be used to perform the measurement.

Some embodiments of the present invention are described in detail below with reference to the accompanying drawings. As long as there is no conflict between them, the following embodiments and the features in the embodiments can be combined with each other.
FIG. 1 is a schematic flowchart of a distance measurement method provided by an embodiment of the present invention. The method is executed by a measuring device. It can be understood that the measuring device can be implemented as software, or as a combination of software and hardware. By executing the distance measurement method, the measuring device can measure the distance between the UAV and the target area that affects its flight. The measuring device in this embodiment and the following embodiments may specifically be a movable platform, such as a UAV, an unmanned vehicle, or an unmanned ship. The following embodiments are described by taking a UAV as the execution subject.

Specifically, the method may include:

S101: Acquire a first image of the airspace above the UAV.

During the flight of the UAV, its own camera can capture a first image at the UAV's current position. Optionally, the camera configured on the UAV may be a monocular camera. The camera may be mounted on a gimbal that can be tilted upward; when the gimbal is in the raised state, the first image of the airspace above the UAV can be captured. The non-raised and raised states of the gimbal are shown in FIG. 2. In this case, the field of view corresponding to the first image may be the same as that of the monocular camera.
S102: In the first image, determine the target area that affects the flight of the UAV according to the semantic categories of the pixels in the first image.

After obtaining the first image, the UAV can identify the semantic category to which each pixel in the first image belongs, that is, perform pixel-level semantic recognition on the first image. Optionally, this semantic recognition can be accomplished with the aid of a neural network model; the specific recognition process is described below. Optionally, the neural network model may perform binary classification, distinguishing sky from obstacles in the first image, or multi-class classification, distinguishing sky, trees, buildings, and other categories. The target area that affects the flight of the UAV is then determined in the first image according to the semantic category of each pixel. If this target area contains obstacles, the UAV's normal ascending flight will be affected.

When the neural network model performs binary classification, one option is to determine the pixels that describe the sky as target pixels, with these target pixels constituting the target area.

It is easy to understand that the positions of obstacles adjacent to the sky also affect the flight of the UAV. In another option, the pixels in the first image that describe the sky and the obstacles adjacent to the sky are determined as target pixels, and these target pixels constitute the target area.

In addition, in practical applications, the selection of the target area may also take into account the UAV's flight environment and/or its size. In yet another option, a candidate area is first determined according to the categories of the pixels in the first image, where the candidate area may consist of the pixels that describe the sky and the obstacles closest to the sky. The range of the candidate area is then adjusted according to the size of the UAV and/or the distribution of obstacles in its current flight environment. For example, when the UAV is small or the obstacles in the flight environment are sparsely distributed, the candidate area can be reduced by a preset factor to obtain the target area; conversely, the candidate area can be enlarged by a preset factor to obtain the target area.
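The sky-plus-adjacent-obstacles option above can be sketched as follows, assuming the segmentation output is a plain 2-D grid of labels (0 = sky, 1 = obstacle), a representation the patent does not specify:

```python
def target_area(labels):
    """labels: 2-D list with 0 = sky, 1 = obstacle.
    Returns a same-shaped boolean mask marking the target area: every sky
    pixel, plus every obstacle pixel 4-adjacent to at least one sky pixel."""
    h, w = len(labels), len(labels[0])
    mask = [[labels[r][c] == 0 for c in range(w)] for r in range(h)]
    for r in range(h):
        for c in range(w):
            if labels[r][c] == 1:  # obstacle: keep it only if it borders sky
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < h and 0 <= cc < w and labels[rr][cc] == 0:
                        mask[r][c] = True
                        break
    return mask

grid = [[0, 0, 0],
        [0, 1, 1],
        [1, 1, 1]]
mask = target_area(grid)
# Obstacles at (1,1), (1,2), and (2,0) border sky and are kept;
# (2,1) and (2,2) do not and are left out.
print(mask)
```

Shrinking or enlarging the candidate area by a preset factor, as described above, would then amount to eroding or dilating this mask.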
S103: Respond to a flight instruction so that the UAV moves a preset distance in a designated direction, and obtain a second image of the airspace above the UAV.

Next, in response to a flight control instruction, the UAV can fly a preset distance from its current position in a designated direction to another position, and capture a second image of the airspace above the UAV at this other position. For clarity in the following description, the UAV's current position is called the first position, where the first image is captured, and the other position is called the second position, where the second image is captured.

Optionally, the designated direction may be upward; that is, after the first image is captured at the first position, the UAV can respond to a received ascending flight control instruction by ascending from the first position to the second position, where the monocular camera captures the second image. In practice, the difference between the first position and the second position is usually small, for example a few centimeters. The ascending flight control instruction may be generated autonomously by the UAV, or sent to the UAV by the pilot through a control device.
S104: Match the pixels of the target area in the first image and the second image to obtain matching pixels.
After obtaining the second image, the drone can match the pixels in the target area of the first image against the pixels in the second image to obtain matching pixels, i.e., at least one matched pixel pair. As described in step 103, since the distance between the first position and the second position is usually small, the first image and the second image usually contain the same objects, with only slight differences in position. Thus, the two pixels in any matched pixel pair describe the same object. Optionally, whether two pixels match can be determined by the similarity between them.
S105: Determine the distance between the target area and the drone according to the matching pixels.
After the matched pixels are obtained, the distance can optionally be determined with a pre-established ranging model, or by the principle of triangulation. For specific implementations of the ranging, refer to the embodiments shown in FIGS. 6 to 8 below.
In the distance measurement method provided by this embodiment, a first image of the airspace above the drone is acquired, and semantic recognition is performed on it to obtain the target area that affects the drone's flight. The drone is then controlled to move a preset distance in a specified direction to acquire a second image of the airspace above the drone. Next, the pixels of the target area in the first image and the second image are matched to obtain matching pixels. Finally, the distance between the target area and the drone is determined from the matching pixels; further, the flight state of the drone can be controlled according to this distance. It can be seen that the present invention provides a method for measuring the distance to obstacles through image recognition. Since the images used for recognition are captured by a camera the drone is already equipped with, the method has no impact on the size or cost of the drone. At the same time, drones not equipped with a depth sensor can also use this method to measure distance while ensuring measurement accuracy.
Regarding step 102 of the embodiment shown in FIG. 1, the use of a neural network model to perform pixel-level semantic recognition on the first image was mentioned above; the semantic recognition process is described in detail below.
The neural network model may specifically be a convolutional neural network (CNN) model. The neural network model may include multiple computing nodes, each of which may include a convolution (Conv) layer, batch normalization (BN), and the ReLU activation function; the computing nodes may be connected by skip connections.
Input data of size K×H×W can be fed into the neural network model, and after processing by the model, output data of size C×H×W is obtained. Here, K denotes the number of input channels; K may be equal to 4, corresponding to the four channels red (R), green (G), blue (B), and depth (D). H denotes the height of the input image (i.e., the first image), W denotes the width of the input image, and C denotes the number of categories.
It should be noted that when the input image is too large, it can be cut into N sub-images; correspondingly, the input data is N×K×H′×W′ and the output data is N×C×H′×W′, where H′ denotes the height of a sub-image and W′ its width. Of course, in other embodiments, the feature map may also be obtained in other ways, which is not limited in this application.
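The cutting of one K×H×W input into an N×K×H′×W′ batch can be sketched as follows (the image and tile sizes are illustrative assumptions):

```python
import numpy as np

K, H, W = 4, 480, 640          # RGB-D input, illustrative size
Hp, Wp = 240, 320              # sub-image height H' and width W'

image = np.zeros((K, H, W))    # one K x H x W input

# Cut the image into N = (H/H') * (W/W') non-overlapping tiles,
# giving an N x K x H' x W' batch for the network.
tiles = (image.reshape(K, H // Hp, Hp, W // Wp, Wp)
              .transpose(1, 3, 0, 2, 4)
              .reshape(-1, K, Hp, Wp))
```

Here N = 4 tiles result; after inference, the per-tile outputs of shape C×H′×W′ would be stitched back in the reverse order.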
Using the pre-trained neural network model described above to process the environment image to obtain a feature map may specifically include the following steps:
Step 1: Input the environment image into the neural network model to obtain the model output.
The model output of the neural network model may include confidence feature maps output by multiple output channels. The output channels may correspond one-to-one to multiple object categories, and the pixel values of the confidence feature map of a single object category represent the probability that each pixel belongs to that category.
Step 2: Obtain a feature map containing semantic information from the model output of the neural network model.
Among the confidence feature maps corresponding one-to-one to the multiple output channels, the object category of the confidence feature map with the largest pixel value at a given pixel location may be taken as the object category of that pixel location, thereby obtaining the feature map.
Suppose the neural network model has 4 output channels and the output of each channel is a confidence feature map, i.e., confidence feature maps 1 to 4, where confidence feature map 1 corresponds to sky, confidence feature map 2 to buildings, confidence feature map 3 to trees, and confidence feature map 4 to "other". Among these categories, everything except the sky can be regarded as an obstacle.
For example, when the pixel value at pixel location (100, 100) is 70 in confidence feature map 1, 50 in confidence feature map 2, 20 in confidence feature map 3, and 20 in confidence feature map 4, it can be determined that pixel location (100, 100) is sky.
For another example, when the pixel value at pixel location (100, 80) is 20 in confidence feature map 1, 30 in confidence feature map 2, 20 in confidence feature map 3, and 70 in confidence feature map 4, it can be determined that pixel location (100, 80) is "other", i.e., none of sky, buildings, or trees.
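A sketch of the per-pixel category decision used in the two examples above, implemented as an argmax over the channel axis of the stacked confidence feature maps (the category order follows the example; this is an illustration, not the patented implementation itself):

```python
import numpy as np

CATEGORIES = ["sky", "building", "tree", "other"]

def classify(confidence_maps):
    """confidence_maps: array of shape (C, H, W). Returns an (H, W)
    array of category indices, picking the confidence feature map
    with the largest value at each pixel location."""
    return np.argmax(confidence_maps, axis=0)

# One pixel with the values from the first example: 70/50/20/20 -> sky.
maps = np.zeros((4, 1, 1))
maps[:, 0, 0] = [70, 50, 20, 20]
labels = classify(maps)
```

With the second example's values (20/30/20/70) the same call would yield index 3, i.e., "other".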
Meanwhile, for step 104 of the embodiment shown in FIG. 1, a pixel matching method has also been provided above: first calculate the similarity between each pixel in the target area and each pixel in the second image, and then obtain at least one matched pixel pair from the similarities. However, since the target area and the second image each contain a large number of pixels, the matching process involves a large amount of computation, which occupies considerable computing resources on the drone and also lowers the matching efficiency.
Therefore, to avoid these problems, the drone can optionally use a feature point detection algorithm pre-configured in it to extract feature pixels from the target area and from the second image respectively. The detected feature pixels are usually corner points in the image. Optionally, the feature point detection algorithm may specifically be the Scale-Invariant Feature Transform (SIFT) algorithm, the Speeded-Up Robust Features (SURF) algorithm, the Binary Robust Independent Elementary Features (BRIEF) algorithm, and so on. In practical applications, when a feature pixel is obtained with one of these algorithms, a descriptor characterizing the attributes of that feature pixel is obtained at the same time; this descriptor can be expressed in vector form.
Then, the drone can match the feature pixels of the target area against those of the second image to obtain at least one matched pixel pair. For any matched pixel pair, the two feature pixels in it likewise describe the same object.
Optionally, whether two feature pixels match can likewise be determined by the similarity between them; specifically, this can be the similarity between the descriptors of the feature pixels. If the similarity is greater than or equal to a preset threshold, the two feature pixels are determined to match, and together they form a matched pixel pair.
For the similarity between descriptors, optionally, feature pixels obtained with different feature point detection algorithms can use different similarity measures. For example, for feature pixels obtained with the BRIEF algorithm, the similarity can be obtained by calculating the Hamming distance between their descriptors; for descriptors obtained with the SIFT or SURF algorithm, the similarity can be obtained by calculating the Euclidean distance between the descriptors of the feature pixels.
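The two descriptor distance measures mentioned above can be sketched as follows (descriptor lengths are illustrative assumptions; real BRIEF descriptors are 128–512 bits and SIFT descriptors 128 floats):

```python
import numpy as np

def hamming_distance(a, b):
    """For binary descriptors such as BRIEF: count differing bits."""
    return int(np.count_nonzero(np.asarray(a) != np.asarray(b)))

def euclidean_distance(a, b):
    """For float descriptors such as SIFT/SURF."""
    return float(np.linalg.norm(np.asarray(a, float) - np.asarray(b, float)))

# Binary (BRIEF-style) descriptors, 8 bits for illustration.
d1 = np.array([1, 0, 1, 1, 0, 0, 1, 0])
d2 = np.array([1, 0, 0, 1, 0, 1, 1, 0])
ham = hamming_distance(d1, d2)          # bits differ at two positions

# Float (SIFT-style) descriptors, 4 dimensions for illustration.
s1 = [0.1, 0.2, 0.3, 0.4]
s2 = [0.1, 0.2, 0.3, 0.0]
euc = euclidean_distance(s1, s2)
```

A smaller distance means higher similarity; a pair is accepted as matching when the similarity clears the preset threshold.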
In summary, after feature pixels are extracted from the target area and the second image, the number of pixels involved in matching is greatly reduced, which greatly reduces the amount of computation in the matching process and ensures matching efficiency.
In addition, regarding how the first image is acquired, one way has already been provided in step 101 of the embodiment shown in FIG. 1; in that way, the field of view of the first image is the same as that of the monocular camera, and both are relatively small. It is easy to see that the larger the field of view of the first image, the more comprehensively it describes the airspace above the drone; with such a large-field-of-view first image, the distance between the target area and the drone can be calculated more accurately, and the drone's automatic return can be controlled more accurately.
Based on this, as shown in FIG. 3, another optional way of acquiring the first image, i.e., an optional implementation of step 101, can be:
S1011: Respond to a first flight control instruction so that the drone rotates in place through one full circle at the first position.
S1012: During the rotating flight of the drone, acquire the first image captured by the monocular camera configured on the drone.
While hovering at the first position, the drone responds to the first flight control instruction by rotating in place through one full circle. During this rotating flight, the monocular camera on the drone can capture a first image corresponding to an annular field of view, which can be as shown in FIG. 4.
Regarding how the second image is acquired, one way has already been provided in step 103 of the embodiment shown in FIG. 1. In that way, however, the drone performs an ascending flight, during which it can easily enter the target area and collide with obstacles there, possibly even damaging the drone. Therefore, to avoid this, as shown in FIG. 5, another optional way of acquiring the second image, i.e., an optional implementation of step 103, can be:
S1031: Respond to a second flight control instruction so that the drone descends the preset distance from the first position to a second position.
S1032: When the drone is at the second position, acquire the second image captured by the monocular camera.
While the drone hovers at the first position, the monocular camera on the drone can capture an image of the airspace above the drone, i.e., the first image. In response to the received second flight control instruction, the drone then descends from the current first position to the second position, where the monocular camera configured on the drone again captures an image of the airspace above the drone, i.e., the second image. In this way the drone obtains both the first image and the second image. By descending, the drone not only obtains images captured at different heights but also avoids entering the target area above itself. In this case, the field of view of the second image can be the same as that of the monocular camera.
The field of view of a monocular camera is usually small. Similar to the description of the embodiment shown in FIG. 3, the larger the field of view of the second image, the more comprehensively it describes the airspace above the drone; with such a large-field-of-view second image, the distance between the target area and the drone can likewise be calculated more accurately, and the drone's automatic return can be controlled more accurately.
Therefore, after the drone descends to the second position in response to the second flight control instruction, it can also respond to a third flight control instruction by rotating in place through one full circle at the second position, and acquire the second image captured by the monocular camera during this rotating flight. After the rotating flight, the resulting second image corresponds to an annular field of view of the airspace above the drone, which can likewise be as shown in FIG. 4.
Combining the embodiments shown in FIGS. 3 to 5, the drone can acquire a first image and a second image with annular fields of view, so that the distance between the target area and the drone can be calculated more accurately from these large-field-of-view images, and the drone's automatic return can be controlled more accurately.
From the embodiments shown in FIGS. 3 to 5 above, the drone can capture the first image at the first position and, after a descending flight, obtain the second image captured at the second position. Based on this way of obtaining the first and second images through descending flight, after the drone performs pixel matching, as shown in FIG. 6, an optional way of determining the distance between the drone and the target area from a matched pixel pair, i.e., an optional implementation of step 105, can be:
S1051: Determine a first distance between the first pixel and the image center of the first image.
S1052: Determine a second distance between the second pixel and the image center of the second image.
S1053: Determine the distance between the target area and the drone according to the preset distance, the camera parameters of the monocular camera, the first distance, and the second distance.
After step 104, the drone has already obtained at least one matched pixel pair, where any matched pixel pair may consist of a first pixel in the first image and a second pixel in the second image. The drone can use any matched pixel pair to determine the distance between the drone and the target area.
Suppose a matched pixel pair A includes a first pixel A1 and a second pixel A2, the image center of the first image containing the first pixel A1 is O1, and the image center of the second image containing the second pixel A2 is O2. The distance between the target area and the drone can then be calculated according to the following formulas:
From the similar triangles in FIG. 7:

    d1 / f = x / z
    d2 / f = x / (z + d0)

After rearranging:

    x = d0 · d1 · d2 / (f · (d1 − d2))
    z = d0 · d2 / (d1 − d2)
The specific positional relationships of the parameters in these formulas can be as shown in FIG. 7. Specifically, A0 is the target area; x is the horizontal distance between the drone and the target area, i.e., segment O0A0; z is the vertical distance between the drone and the target area, i.e., segment O0P1; and O0 is the optical center of the monocular camera configured on the drone. d0 is the distance between the first position P1, where the first image was captured, and the second position P2, where the second image was captured. d1 is the distance between the first pixel A1 and the image center O1 of the first image. d2 is the distance between the second pixel A2 and the image center O2 of the second image. f is the focal length of the monocular camera, i.e., segment O1P1 in the figure, which equals segment O2P2.
The drone can first obtain the pixel coordinates of the first pixel A1 and of the image center O1 in the first image, and then determine the first distance d1 between them from those pixel coordinates. The second distance d2 is calculated similarly and is not described again here.
From the above description, the distance between the target area and the drone can be determined using any single matched pixel pair. However, to ensure the accuracy of the distance measurement, the above calculation can optionally be performed separately for multiple matched pixel pairs to obtain multiple distances, and the distance between the drone and the target area can then be determined from these distances, for example as their mean or median.
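A sketch consistent with the similar-triangle geometry of FIG. 7, computing the vertical distance z for several matched pixel pairs and taking the median as suggested above (all numeric values, including the descent distance d0, are illustrative assumptions):

```python
from statistics import median

def vertical_distance_descending(d0, d1, d2):
    """Vertical distance z between drone and target area when the
    second image is taken after descending by d0 (target farther in
    the second shot, so d1 > d2): z = d0 * d2 / (d1 - d2).
    d1, d2 are pixel distances from the respective image centers."""
    return d0 * d2 / (d1 - d2)

d0 = 0.05  # metres descended between the two shots (illustrative)
# Each matched pixel pair gives (d1, d2) and hence one estimate of z.
pairs = [(120.0, 100.0), (90.0, 75.0), (60.0, 50.0)]
estimates = [vertical_distance_descending(d0, d1, d2) for d1, d2 in pairs]
z = median(estimates)
```

The horizontal distance x follows the same pattern, with the focal length f entering the denominator.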
In addition, as in the way provided by the embodiment shown in FIG. 1, the drone can capture the first image at the first position and then, after an ascending flight, obtain the second image captured at the second position. For this way of obtaining the first and second images through ascending flight, the above approach can likewise be used for distance measurement; only the positional relationships among the first position P1, the image center O1 of the first image, and the first pixel A1, and among the second position P2, the image center O2 of the second image, and the second pixel A2, become those shown in FIG. 8.
In this case, the following formulas can be used to calculate the distance between the target area and the drone:
From the similar triangles in FIG. 8:

    d1 / f = x / z
    d2 / f = x / (z − d0)

After rearranging:

    x = d0 · d1 · d2 / (f · (d2 − d1))
    z = d0 · d2 / (d2 − d1)
Here, A0 is the target area; x is the horizontal distance between the drone and the target area, i.e., segment O0A0; z is the vertical distance between the drone and the target area, i.e., segment O0P1; and O0 is the optical center of the monocular camera configured on the drone. d0 is the distance between the first position P1, where the first image was captured, and the second position P2, where the second image was captured. d1 is the distance between the first feature pixel A1 and the image center O1 of the first image. d2 is the distance between the second feature pixel A2 and the image center O2 of the second image. f is the focal length of the monocular camera, i.e., segment O1P1 in the figure, which equals segment O2P2.
For content not described in detail here, refer to the related description of the embodiment shown in FIG. 7; it is not repeated here.
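For the ascending-flight geometry of FIG. 8, a corresponding sketch under the same variable definitions (values are illustrative assumptions; here the target is nearer at the second position, so d2 > d1):

```python
def vertical_distance_ascending(d0, d1, d2):
    """Vertical distance z between drone and target area when the
    second image is taken after ascending by d0 (target nearer in
    the second shot, so d2 > d1): z = d0 * d2 / (d2 - d1)."""
    return d0 * d2 / (d2 - d1)

# Illustrative values: ascend 0.05 m; pixel offset grows 100 -> 125.
z = vertical_distance_ascending(0.05, 100.0, 125.0)
```

Only the sign of the denominator differs from the descending case, reflecting that ascending shortens the vertical distance to the target.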
It should be noted that, as described above, pixel matching can also be performed among the feature pixels of the first image and the second image. In that case, any matched pixel pair consists of a first feature pixel in the first image and a second feature pixel in the second image. The first distance between the first feature pixel and the image center of the first image, and the second distance between the second feature pixel and the image center of the second image, can then be determined, and the distance between the target area and the drone calculated in the manner shown in FIG. 7 or FIG. 8.
Since the number of feature pixels in an image is far smaller than the total number of pixels, using at least one matched pixel pair composed of feature pixels for ranging involves less computation and higher computational efficiency.
After the distance between the drone and the target area is obtained with the ranging methods provided by the above embodiments, the motion of the drone can be controlled according to this distance. Specifically, if the distance satisfies a preset condition, the target area above the drone is far away and the ascent phase of the return flight will not be affected; in this case, the drone can respond to a return-to-home control instruction so as to return automatically. Otherwise, the drone is controlled to continue hovering.
In addition, on the basis of the above embodiments, once it is determined that the distance between the drone and the target area satisfies the preset condition, the drone can be controlled to return home. It is easy to understand that any flight of the drone requires battery power. Therefore, before controlling the drone to return, the power required for the return flight can first be determined, and the drone is controlled to return only when the current remaining power exceeds the power required for the return.
As for determining the power required for the return flight, in one optional way, the wind speed information for the descent from the current position to the return destination can first be estimated from historical wind speed information. The ground speed information for the descent from the current position to the return destination is then determined, so that the power required for the drone's return flight can be determined from the wind speed information and the ground speed information.
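The gating logic described above, combining the overhead-clearance condition with the battery check, can be sketched as follows (the function name, thresholds, and watt-hour units are illustrative assumptions):

```python
def can_return_home(clearance_z, min_clearance, remaining_wh, required_wh):
    """Allow automatic return only if the target area overhead is far
    enough for the ascent phase and the remaining battery energy
    exceeds what the return flight is estimated to need."""
    return clearance_z >= min_clearance and remaining_wh > required_wh

# Far overhead obstacle and ample battery: return is allowed.
ok = can_return_home(clearance_z=12.0, min_clearance=5.0,
                     remaining_wh=40.0, required_wh=25.0)
# Obstacle too close overhead: keep hovering instead.
hover = can_return_home(clearance_z=2.0, min_clearance=5.0,
                        remaining_wh=40.0, required_wh=25.0)
```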
Meanwhile, after the distance between the drone and the target area is obtained, point cloud data corresponding to the target area can further be obtained. This point cloud data describes the flight environment of the drone and can be used to plan a return path for the drone, ensuring that the drone returns automatically and safely along this path.
FIG. 9 is a schematic structural diagram of a distance measurement device provided by an embodiment of the present invention. Referring to FIG. 9, this embodiment provides a distance measurement device that can execute the distance measurement method described above. Specifically, the distance measurement device includes:
an acquisition module 11, configured to acquire a first image of the airspace above the drone;
an area determination module 12, configured to determine, in the first image, a target area that affects the flight of the drone according to the semantic categories of the pixels in the first image;
a response module 13, configured to respond to a flight instruction so that the drone moves a preset distance in a specified direction, and obtain a second image of the airspace above the drone;
a matching module 14, configured to match the pixels of the target area in the first image and the second image to obtain matching pixels; and
a distance determination module 15, configured to determine the distance between the target area and the drone according to the matching pixels.
The device shown in FIG. 9 can also execute the methods of the embodiments shown in FIGS. 1 to 8. For parts not described in detail in this embodiment, refer to the related descriptions of the embodiments shown in FIGS. 1 to 8; the execution process and technical effects of this solution are described there and are not repeated here.
图10为本发明实施例提供的一种可移动平台的结构示意图;参考附图10所示,本发明实施例的提供了一种可移动平台,该可移动平台为以下至少之 一:无人飞行器、无人船、无人车;具体的,该可移动平台包括:机体21、动力系统22以及控制装置23。FIG. 10 is a schematic structural diagram of a movable platform provided by an embodiment of the present invention; referring to FIG. 10, an embodiment of the present invention provides a movable platform, and the movable platform is at least one of the following: Aircraft, unmanned ships, unmanned vehicles; specifically, the movable platform includes: a body 21, a power system 22, and a control device 23.
所述动力系统22,设置于所述机体21上,用于为所述可移动平台提供动力。The power system 22 is arranged on the body 21 and is used to provide power for the movable platform.
所述控制装置23包括存储器231和处理器232。The control device 23 includes a memory 231 and a processor 232.
所述存储器,用于存储计算机程序;The memory is used to store a computer program;
所述处理器,用于运行所述存储器中存储的计算机程序以实现:The processor is configured to run a computer program stored in the memory to realize:
获取所述无人机上方空域的第一图像;Acquiring a first image of the airspace above the drone;
在所述第一图像中,根据所述第一图像中像素点的语义类别确定影响所述无人机飞行的目标区域;In the first image, determine the target area that affects the flight of the drone according to the semantic category of the pixels in the first image;
对飞行指令进行响应,以使所述无人机向指定方向移动预设距离,获得所述无人机上方空域的第二图像;Respond to the flight instruction, so that the UAV moves a preset distance in a designated direction to obtain a second image of the airspace above the UAV;
匹配所述目标区域在所述第一图像和所述第二图像中的像素点,以得到匹配像素点;Matching the pixel points of the target area in the first image and the second image to obtain matching pixels;
根据所述匹配像素点确定所述目标区域与所述无人机之间的距离。The distance between the target area and the drone is determined according to the matching pixel points.
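As a non-limiting sketch of how the distance in the last step might be computed from a matched pixel pair: assume an ideal pinhole camera pointing straight up, the drone descending the preset distance along the camera's optical axis, and radial distances r1 and r2 of the matched point from the principal point in the first (higher) and second (lower) images. These assumptions are illustrative and not mandated by the embodiment; under them, similar triangles give Z = d * r2 / (r1 - r2) for the distance Z from the first position to the target.

```python
def distance_from_vertical_baseline(r1: float, r2: float, descent: float) -> float:
    """Distance (in the units of `descent`) from the first camera position to
    the matched point, for an upward-looking pinhole camera that moved
    `descent` units straight down along its optical axis between the shots.

    r1, r2: radial pixel distances of the matched point from the principal
    point in the first and second images; r1 > r2 when the camera moved away.
    """
    if r1 <= r2:
        raise ValueError("expected the point to shrink toward the principal "
                         "point after the camera moved away from it")
    return descent * r2 / (r1 - r2)

# A point 100 px from the principal point that shrinks to 80 px after a 2 m
# descent lies 2 * 80 / (100 - 80) = 8 m above the first position.
print(distance_from_vertical_baseline(100.0, 80.0, 2.0))  # → 8.0
```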
进一步的,可移动平台还包括单目摄像头24,其设置于机体21上;Further, the movable platform also includes a monocular camera 24, which is arranged on the body 21;
处理器232还用于：对第一飞行控制指令进行响应，使所述无人机在第一位置原地旋转飞行一周；The processor 232 is further configured to: respond to the first flight control instruction, so that the UAV rotates and flies one circle in place at the first position;
所述无人机在旋转飞行过程中,获取所述无人机配置的单目摄像头拍得的所述第一图像。During the rotating flight of the drone, the first image captured by the monocular camera configured on the drone is acquired.
进一步的,处理器232还用于:对第二飞行控制指令进行响应,使所述无人机由所述第一位置下降飞行预设距离至第二位置;Further, the processor 232 is further configured to: respond to a second flight control instruction to cause the drone to descend and fly a preset distance from the first position to a second position;
所述无人机位于所述第二位置时,获取所述单目摄像头拍得的所述第二图像。When the drone is located at the second position, the second image captured by the monocular camera is acquired.
进一步的,单目摄像头24放置于能够向上摆动的云台上,以使单目摄像头24能够拍得所述第一图像和所述第二图像;Further, the monocular camera 24 is placed on a pan-tilt that can swing upward, so that the monocular camera 24 can capture the first image and the second image;
处理器232还用于:对第三飞行控制指令进行响应,使所述无人机在所述第二位置原地旋转飞行一周;The processor 232 is further configured to: respond to a third flight control instruction, so that the UAV rotates and flies in situ at the second position for one circle;
获取所述无人机在旋转飞行过程中，所述单目摄像头拍得的所述第二图像，其中，所述第一图像和所述第二图像对应于所述无人机上方空域的环形视场。Acquire the second image captured by the monocular camera while the drone rotates in flight, wherein the first image and the second image correspond to an annular field of view of the airspace above the drone.
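The annular field of view above is obtained by rotating in place and capturing frames. As an assumption-laden sketch (the field-of-view and overlap values below are hypothetical, not taken from the embodiments), the number of evenly spaced capture headings needed to cover a full circle can be estimated as:

```python
import math

def yaw_steps_for_full_circle(horizontal_fov_deg: float, overlap_deg: float = 0.0) -> int:
    """Number of evenly spaced capture headings so that one in-place rotation
    covers a 360-degree annular field of view, given the camera's horizontal
    field of view and a desired overlap between adjacent frames."""
    effective = horizontal_fov_deg - overlap_deg
    if effective <= 0:
        raise ValueError("overlap must be smaller than the field of view")
    return math.ceil(360.0 / effective)

# A hypothetical 84-degree lens with 12 degrees of inter-frame overlap:
print(yaw_steps_for_full_circle(84.0, 12.0))  # → 5
```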
进一步的,处理器232还用于:识别所述第一图像中像素点的语义类别;Further, the processor 232 is further configured to: identify the semantic category of pixels in the first image;
根据所述像素点的语义类别,确定所述第一图像中的所述目标区域。According to the semantic category of the pixel point, the target area in the first image is determined.
进一步的，像素点的语义类别包括天空和障碍物，则处理器232还用于：根据像素点的语义类别，确定所述第一图像中用于描述天空以及与天空最相邻的障碍物的目标像素点，以由所述目标像素点构成所述目标区域。Further, the semantic categories of pixels include sky and obstacle, and the processor 232 is further configured to: determine, according to the semantic categories of the pixels, target pixels in the first image that describe the sky and the obstacles closest to the sky, so that the target area is formed by the target pixels.
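A minimal sketch of selecting such target pixels from a per-pixel semantic map. The label values (0 for sky, 1 for obstacle) and the 4-neighbourhood adjacency test are assumptions made for illustration; the embodiment only requires that the obstacle pixels most adjacent to the sky be identified.

```python
import numpy as np

SKY, OBSTACLE = 0, 1  # hypothetical label values for the semantic map

def sky_obstacle_boundary(mask: np.ndarray) -> np.ndarray:
    """Boolean map marking every obstacle pixel that touches a sky pixel
    in its 4-neighbourhood, i.e. the obstacles most adjacent to the sky."""
    sky = mask == SKY
    obstacle = mask == OBSTACLE
    neighbour_sky = np.zeros_like(sky)
    neighbour_sky[1:, :] |= sky[:-1, :]   # sky above
    neighbour_sky[:-1, :] |= sky[1:, :]   # sky below
    neighbour_sky[:, 1:] |= sky[:, :-1]   # sky to the left
    neighbour_sky[:, :-1] |= sky[:, 1:]   # sky to the right
    return obstacle & neighbour_sky

mask = np.array([[0, 0, 0],
                 [0, 1, 0],
                 [1, 1, 1]])
# Obstacle pixels touching sky: (1, 1), (2, 0) and (2, 2); (2, 1) is interior.
print(sky_obstacle_boundary(mask))
```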
进一步的,处理器232还用于:根据所述像素点的语义类别,在所述第一图像中确定备选区域;Further, the processor 232 is further configured to: determine a candidate area in the first image according to the semantic category of the pixel;
根据所述无人机的体积和/或所述无人机所处飞行环境的障碍物分布情况,调整所述备选区域,以得到所述目标区域。According to the volume of the unmanned aerial vehicle and/or the distribution of obstacles in the flying environment of the unmanned aerial vehicle, the candidate area is adjusted to obtain the target area.
进一步的,处理器232还用于:计算所述目标区域中像素点与所述第二图像中像素点之间的相似度;Further, the processor 232 is further configured to: calculate the similarity between the pixels in the target area and the pixels in the second image;
根据像素点之间的相似度,在所述第二图像中确定与所述目标区域中像素点匹配的像素点。According to the similarity between the pixel points, a pixel point that matches the pixel point in the target area is determined in the second image.
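By way of illustration only, the similarity between pixels could be a zero-mean normalized cross-correlation between small patches around them; the brute-force search below is a sketch under that assumption, whereas a practical matcher would restrict the search to a neighbourhood predicted from the drone's motion.

```python
import numpy as np

def best_match(patch: np.ndarray, image: np.ndarray) -> tuple[int, int]:
    """Top-left (row, col) of the window in `image` with the highest
    zero-mean normalized cross-correlation against `patch`."""
    ph, pw = patch.shape
    p = patch.astype(float) - patch.mean()
    pn = np.linalg.norm(p) or 1.0  # guard against constant patches
    best_score, best_pos = -2.0, (0, 0)
    for r in range(image.shape[0] - ph + 1):
        for c in range(image.shape[1] - pw + 1):
            w = image[r:r + ph, c:c + pw].astype(float)
            w -= w.mean()
            wn = np.linalg.norm(w) or 1.0
            score = float((p * w).sum() / (pn * wn))
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos

img = np.zeros((10, 10))
patch = np.array([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]])
img[3:6, 4:7] = patch  # embed the patch so the search can recover it
print(best_match(patch, img))  # → (3, 4)
```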
进一步的,处理器232还用于:若所述无人机与所述目标区域之间的距离满足预设条件,则对返航控制指令进行响应,使所述无人机自动返航。Further, the processor 232 is further configured to: if the distance between the drone and the target area meets a preset condition, respond to a return home control instruction to make the drone return home automatically.
进一步的,处理器232还用于:根据所述目标区域与所述无人机之间的距离,确定所述目标区域对应的点云数据。Further, the processor 232 is further configured to determine the point cloud data corresponding to the target area according to the distance between the target area and the drone.
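Once a distance is available for a matched pixel, the corresponding point-cloud entry can be obtained by back-projecting that pixel through pinhole camera intrinsics. The focal lengths and principal point below are placeholder values standing in for whatever calibration the camera actually has; this is an illustrative sketch, not the patented computation.

```python
import numpy as np

def pixel_to_point(u: float, v: float, depth: float,
                   fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Back-project pixel (u, v) at `depth` along the optical axis into
    camera-frame coordinates (X, Y, Z) using pinhole intrinsics."""
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    return np.array([x, y, depth])

# Hypothetical intrinsics: fx = fy = 400 px, principal point at (320, 240).
print(pixel_to_point(420.0, 240.0, 8.0, 400.0, 400.0, 320.0, 240.0))  # → [2. 0. 8.]
```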
图10所示的可移动平台可以执行图1~图8所示实施例的方法,本实施例未详细描述的部分,可参考对图1~图8所示实施例的相关说明。该技术方案的执行过程和技术效果参见图1~图8所示实施例中的描述,在此不再赘述。The movable platform shown in FIG. 10 can execute the methods of the embodiments shown in FIGS. 1 to 8. For parts that are not described in detail in this embodiment, please refer to the related descriptions of the embodiments shown in FIGS. 1 to 8. For the implementation process and technical effects of this technical solution, please refer to the description in the embodiment shown in FIG. 1 to FIG. 8, which will not be repeated here.
在一个可能的设计中,图11所示距离测量设备的结构可实现为一电子设备,该电子设备可以是无人机。如图11所示,该电子设备可以包括:一个或多个处理器31和一个或多个存储器32。其中,存储器32用于存储支持电子设备执行上述图1~图8所示实施例中提供的距离测量方法的程序。处理器31被配置为用于执行存储器32中存储的程序。In a possible design, the structure of the distance measuring device shown in FIG. 11 can be implemented as an electronic device, which can be a drone. As shown in FIG. 11, the electronic device may include: one or more processors 31 and one or more memories 32. Wherein, the memory 32 is used to store a program that supports the electronic device to execute the distance measurement method provided in the embodiments shown in FIGS. 1 to 8 above. The processor 31 is configured to execute a program stored in the memory 32.
具体的,程序包括一条或多条计算机指令,其中,一条或多条计算机指令被处理器31执行时能够实现如下步骤:Specifically, the program includes one or more computer instructions, and the following steps can be implemented when one or more computer instructions are executed by the processor 31:
获取所述无人机上方空域的第一图像;Acquiring a first image of the airspace above the drone;
在所述第一图像中,根据所述第一图像中像素点的语义类别确定影响所述无人机飞行的目标区域;In the first image, determine the target area that affects the flight of the drone according to the semantic category of the pixels in the first image;
对飞行指令进行响应,以使所述无人机向指定方向移动预设距离,获得所述无人机上方空域的第二图像;Respond to the flight instruction, so that the UAV moves a preset distance in a designated direction to obtain a second image of the airspace above the UAV;
匹配所述目标区域在所述第一图像和所述第二图像中的像素点,以得到匹配像素点;Matching the pixel points of the target area in the first image and the second image to obtain matching pixels;
根据所述匹配像素点确定所述目标区域与所述无人机之间的距离。The distance between the target area and the drone is determined according to the matching pixel points.
其中,该距离测量设备的结构中还可以包括通信接口33,用于电子设备与其他设备或通信网络通信。Wherein, the structure of the distance measuring device may further include a communication interface 33 for the electronic device to communicate with other devices or a communication network.
进一步的,设备上还包括单目摄像头;Further, the device also includes a monocular camera;
处理器31还用于:对第一飞行控制指令进行响应,使所述无人机在第一位置原地旋转飞行一周;The processor 31 is further configured to: respond to the first flight control instruction to make the UAV rotate and fly one circle in situ at the first position;
所述无人机在旋转飞行过程中,获取所述无人机配置的单目摄像头拍得的所述第一图像。During the rotating flight of the drone, the first image captured by the monocular camera configured on the drone is acquired.
进一步的,处理器31还用于:对第二飞行控制指令进行响应,使所述无人机由所述第一位置下降飞行预设距离至第二位置;Further, the processor 31 is further configured to: respond to a second flight control instruction to cause the drone to descend and fly a preset distance from the first position to a second position;
所述无人机位于所述第二位置时,获取所述单目摄像头拍得的所述第二图像。When the drone is located at the second position, the second image captured by the monocular camera is acquired.
进一步的,单目摄像头放置于能够向上摆动的云台上,以使单目摄像头24能够拍得所述第一图像和所述第二图像;Further, the monocular camera is placed on a pan-tilt that can swing upward, so that the monocular camera 24 can capture the first image and the second image;
处理器31还用于:对第三飞行控制指令进行响应,使所述无人机在所述第二位置原地旋转飞行一周;The processor 31 is further configured to: respond to a third flight control instruction, so that the UAV rotates and flies in situ at the second position for one circle;
获取所述无人机在旋转飞行过程中，所述单目摄像头拍得的所述第二图像，其中，所述第一图像和所述第二图像对应于所述无人机上方空域的环形视场。Acquire the second image captured by the monocular camera while the drone rotates in flight, wherein the first image and the second image correspond to an annular field of view of the airspace above the drone.
进一步的,处理器31还用于:识别所述第一图像中像素点的语义类别;Further, the processor 31 is further configured to: identify the semantic category of pixels in the first image;
根据所述像素点的语义类别,确定所述第一图像中的所述目标区域。According to the semantic category of the pixel point, the target area in the first image is determined.
进一步的，像素点的语义类别包括天空和障碍物，则处理器31还用于：根据像素点的语义类别，确定所述第一图像中用于描述天空以及与天空最相邻的障碍物的目标像素点，以由所述目标像素点构成所述目标区域。Further, the semantic categories of pixels include sky and obstacle, and the processor 31 is further configured to: determine, according to the semantic categories of the pixels, target pixels in the first image that describe the sky and the obstacles closest to the sky, so that the target area is formed by the target pixels.
进一步的,处理器31还用于:根据所述像素点的语义类别,在所述第一图像中确定备选区域;Further, the processor 31 is further configured to: determine a candidate area in the first image according to the semantic category of the pixel;
根据所述无人机的体积和/或所述无人机所处飞行环境的障碍物分布情况,调整所述备选区域,以得到所述目标区域。According to the volume of the unmanned aerial vehicle and/or the distribution of obstacles in the flying environment of the unmanned aerial vehicle, the candidate area is adjusted to obtain the target area.
进一步的,处理器31还用于:计算所述目标区域中像素点与所述第二图像中像素点之间的相似度;Further, the processor 31 is further configured to: calculate the similarity between the pixels in the target area and the pixels in the second image;
根据像素点之间的相似度,在所述第二图像中确定与所述目标区域中像素点匹配的像素点。According to the similarity between the pixel points, a pixel point that matches the pixel point in the target area is determined in the second image.
进一步的,处理器31还用于:若所述无人机与所述目标区域之间的距离满足预设条件,则对返航控制指令进行响应,使所述无人机自动返航。Further, the processor 31 is further configured to: if the distance between the drone and the target area meets a preset condition, respond to a return home control instruction to make the drone return home automatically.
进一步的,处理器31还用于:根据所述目标区域与所述无人机之间的距离,确定所述目标区域对应的点云数据。Further, the processor 31 is further configured to: determine the point cloud data corresponding to the target area according to the distance between the target area and the drone.
图11所示的设备可以执行图1~图8所示实施例的方法,本实施例未详细描述的部分,可参考对图1~图8所示实施例的相关说明。该技术方案的执行过程和技术效果参见图1~图8所示实施例中的描述,在此不再赘述。The device shown in FIG. 11 can execute the methods of the embodiments shown in FIG. 1 to FIG. 8. For parts that are not described in detail in this embodiment, reference may be made to the related description of the embodiments shown in FIG. 1 to FIG. 8. For the implementation process and technical effects of this technical solution, please refer to the description in the embodiment shown in FIG. 1 to FIG. 8, which will not be repeated here.
另外，本发明实施例提供了一种计算机可读存储介质，该计算机可读存储介质中存储有程序指令，程序指令用于实现上述图1～图8所示的距离测量方法。In addition, an embodiment of the present invention provides a computer-readable storage medium. The computer-readable storage medium stores program instructions, and the program instructions are used to implement the distance measurement method of the embodiments shown in FIG. 1 to FIG. 8 above.
以上各个实施例中的技术方案、技术特征在不相冲突的情况下均可以单独使用，或者进行组合，只要未超出本领域技术人员的认知范围，均属于本申请保护范围内的等同实施例。The technical solutions and technical features in the above embodiments may, provided they do not conflict, be used alone or in combination; as long as they do not go beyond the cognitive scope of those skilled in the art, they all belong to equivalent embodiments within the protection scope of this application.
在本发明所提供的几个实施例中，应该理解到，所揭露的相关检测装置（例如：IMU）和方法，可以通过其它的方式实现。例如，以上所描述的遥控装置实施例仅仅是示意性的，例如，所述模块或单元的划分，仅仅为一种逻辑功能划分，实际实现时可以有另外的划分方式，例如多个单元或组件可以结合或者可以集成到另一个系统，或一些特征可以忽略，或不执行。另一点，所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口，遥控装置或单元的间接耦合或通信连接，可以是电性，机械或其它的形式。In the several embodiments provided by the present invention, it should be understood that the disclosed related detection device (for example, an IMU) and method may be implemented in other ways. For example, the remote control device embodiments described above are merely illustrative; the division into modules or units is only a division by logical function, and other divisions are possible in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, remote control devices or units, and may be electrical, mechanical or in other forms.
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
另外,在本发明各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。In addition, the functional units in the various embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时，可以存储在一个计算机可读取存储介质中。基于这样的理解，本发明的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来，该计算机软件产品存储在一个存储介质中，包括若干指令用以使得计算机处理器（processor）执行本发明各个实施例所述方法的全部或部分步骤。而前述的存储介质包括：U盘、移动硬盘、只读存储器（ROM，Read-Only Memory）、随机存取存储器（RAM，Random Access Memory）、磁盘或者光盘等各种可以存储程序代码的介质。If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer processor to execute all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or any other medium that can store program code.
以上所述仅为本发明的实施例，并非因此限制本发明的专利范围，凡是利用本发明说明书及附图内容所作的等效结构或等效流程变换，或直接或间接运用在其他相关的技术领域，均同理包括在本发明的专利保护范围内。The above are merely embodiments of the present invention and are not intended to limit the patent scope of the present invention. Any equivalent structure or equivalent process transformation made using the contents of the description and drawings of the present invention, or any direct or indirect application in other related technical fields, is likewise included within the patent protection scope of the present invention.
最后应说明的是：以上各实施例仅用以说明本发明的技术方案，而非对其限制；尽管参照前述各实施例对本发明进行了详细的说明，本领域的普通技术人员应当理解：其依然可以对前述各实施例所记载的技术方案进行修改，或者对其中部分或者全部技术特征进行等同替换；而这些修改或者替换，并不使相应技术方案的本质脱离本发明各实施例技术方案的范围。Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (32)

  1. 一种距离测量方法,用于无人机,其特征在于,所述方法包括:A distance measurement method for drones, characterized in that the method includes:
    获取所述无人机上方空域的第一图像;Acquiring a first image of the airspace above the drone;
    在所述第一图像中,根据所述第一图像中像素点的语义类别确定影响所述无人机飞行的目标区域;In the first image, determine the target area that affects the flight of the drone according to the semantic category of the pixels in the first image;
    对飞行指令进行响应,以使所述无人机向指定方向移动预设距离,获得所述无人机上方空域的第二图像;Respond to the flight instruction, so that the UAV moves a preset distance in a designated direction to obtain a second image of the airspace above the UAV;
    匹配所述目标区域在所述第一图像和所述第二图像中的像素点,以得到匹配像素点;Matching the pixel points of the target area in the first image and the second image to obtain matching pixels;
    根据所述匹配像素点确定所述目标区域与所述无人机之间的距离。The distance between the target area and the drone is determined according to the matching pixel points.
  2. 根据权利要求1所述的方法,其特征在于,所述获取所述无人机上方空域的第一图像,包括:The method according to claim 1, wherein the acquiring a first image of the airspace above the drone comprises:
    对第一飞行控制指令进行响应,使所述无人机在第一位置原地旋转飞行一周;Respond to the first flight control instruction to make the UAV rotate and fly one circle in the first position;
    所述无人机在旋转飞行过程中,获取所述无人机配置的单目摄像头拍得的所述第一图像。During the rotating flight of the drone, the first image captured by the monocular camera configured on the drone is acquired.
  3. 根据权利要求2所述的方法,其特征在于,所述对飞行指令进行响应,以使所述无人机向指定方向移动预设距离,获得所述无人机上方空域的第二图像,包括:The method according to claim 2, wherein the responding to a flight instruction so that the drone moves a predetermined distance in a specified direction to obtain a second image of the airspace above the drone, comprising :
    对第二飞行控制指令进行响应,使所述无人机由所述第一位置下降飞行预设距离至第二位置;Respond to the second flight control instruction to cause the drone to descend and fly a preset distance from the first position to a second position;
    所述无人机位于所述第二位置时,获取所述单目摄像头拍得的所述第二图像。When the drone is located at the second position, the second image captured by the monocular camera is acquired.
  4. 根据权利要求2所述的方法,其特征在于,所述获取所述无人机位于所述第二位置时,所述单目摄像头拍得的所述第二图像,包括:The method according to claim 2, wherein the acquiring the second image taken by the monocular camera when the drone is at the second position comprises:
    对第三飞行控制指令进行响应,使所述无人机在所述第二位置原地旋转飞行一周;Respond to the third flight control instruction to make the UAV rotate and fly one circle in situ at the second position;
    获取所述无人机在旋转飞行过程中，所述单目摄像头拍得的所述第二图像。Acquiring the second image captured by the monocular camera while the drone rotates in flight.
  5. 根据权利要求2或4所述的方法，其特征在于，所述第一图像和所述第二图像对应于所述无人机上方空域的环形视场。The method according to claim 2 or 4, wherein the first image and the second image correspond to an annular field of view of the airspace above the drone.
  6. 根据权利要求2或4所述的方法，其特征在于，所述单目摄像头放置于能够向上摆动的云台上，以使所述单目摄像头能够拍得所述第一图像和所述第二图像。The method according to claim 2 or 4, wherein the monocular camera is placed on a pan-tilt that can swing upward, so that the monocular camera can capture the first image and the second image.
  7. 根据权利要求1所述的方法，其特征在于，所述在所述第一图像中，根据所述第一图像中像素点的语义类别确定影响所述无人机飞行的目标区域，包括：The method according to claim 1, wherein the determining, in the first image, the target area that affects the flight of the drone according to the semantic category of the pixels in the first image comprises:
    识别所述第一图像中像素点的语义类别;Identifying the semantic category of pixels in the first image;
    根据所述像素点的语义类别,确定所述第一图像中的所述目标区域。According to the semantic category of the pixel point, the target area in the first image is determined.
  8. 根据权利要求7所述的方法，其特征在于，像素点的语义类别包括天空和障碍物；The method according to claim 7, wherein the semantic categories of pixels include sky and obstacles;
    所述根据所述像素点的语义类别,确定所述第一图像中的所述目标区域,包括:The determining the target area in the first image according to the semantic category of the pixel includes:
    根据像素点的语义类别,确定所述第一图像中用于描述天空以及与天空最相邻的障碍物的目标像素点,以由所述目标像素点构成所述目标区域。According to the semantic category of the pixel, the target pixel used to describe the sky and the obstacle closest to the sky in the first image is determined, so that the target pixel constitutes the target area.
  9. 根据权利要求7所述的方法，其特征在于，所述根据所述像素点的语义类别，确定所述第一图像中的所述目标区域，包括：The method according to claim 7, wherein the determining the target area in the first image according to the semantic category of the pixel points comprises:
    根据所述像素点的语义类别,在所述第一图像中确定备选区域;Determining a candidate area in the first image according to the semantic category of the pixel;
    根据所述无人机的体积和/或所述无人机所处飞行环境的障碍物分布情况,调整所述备选区域,以得到所述目标区域。According to the volume of the unmanned aerial vehicle and/or the distribution of obstacles in the flying environment of the unmanned aerial vehicle, the candidate area is adjusted to obtain the target area.
  10. 根据权利要求7至9中任一项所述的方法，其特征在于，所述匹配所述目标区域在所述第一图像和所述第二图像中的像素点，以得到匹配像素点，包括：The method according to any one of claims 7 to 9, wherein the matching the pixel points of the target area in the first image and the second image to obtain matching pixel points comprises:
    计算所述目标区域中像素点与所述第二图像中像素点之间的相似度;Calculating the similarity between the pixels in the target area and the pixels in the second image;
    根据像素点之间的相似度,在所述第二图像中确定与所述目标区域中像素点匹配的像素点。According to the similarity between the pixel points, a pixel point that matches the pixel point in the target area is determined in the second image.
  11. 根据权利要求1所述的方法，其特征在于，所述方法还包括：The method according to claim 1, wherein the method further comprises:
    若所述无人机与所述目标区域之间的距离满足预设条件,则对返航控制指令进行响应,使所述无人机自动返航。If the distance between the drone and the target area satisfies a preset condition, respond to the return home control instruction to make the drone return home automatically.
  12. 根据权利要求1所述的方法，其特征在于，所述方法还包括：The method according to claim 1, wherein the method further comprises:
    根据所述目标区域与所述无人机之间的距离,确定所述目标区域对应的点云数据。According to the distance between the target area and the drone, the point cloud data corresponding to the target area is determined.
  13. 一种可移动平台，其特征在于，至少包括：机体、动力系统以及控制装置；A movable platform, characterized in that it at least comprises: a body, a power system, and a control device;
    所述动力系统,设置于所述机体上,用于为所述可移动平台提供动力;The power system is arranged on the body and used to provide power for the movable platform;
    所述控制装置包括存储器和处理器;The control device includes a memory and a processor;
    所述存储器,用于存储计算机程序;The memory is used to store a computer program;
    所述处理器,用于运行所述存储器中存储的计算机程序以实现:The processor is configured to run a computer program stored in the memory to realize:
    获取所述无人机上方空域的第一图像;Acquiring a first image of the airspace above the drone;
    在所述第一图像中,根据所述第一图像中像素点的语义类别确定影响所述无人机飞行的目标区域;In the first image, determine the target area that affects the flight of the drone according to the semantic category of the pixels in the first image;
    对飞行指令进行响应,以使所述无人机向指定方向移动预设距离,获得所述无人机上方空域的第二图像;Respond to the flight instruction, so that the UAV moves a preset distance in a designated direction to obtain a second image of the airspace above the UAV;
    匹配所述目标区域在所述第一图像和所述第二图像中的像素点,以得到匹配像素点;Matching the pixel points of the target area in the first image and the second image to obtain matching pixels;
    根据所述匹配像素点确定所述目标区域与所述无人机之间的距离。The distance between the target area and the drone is determined according to the matching pixel points.
  14. 根据权利要求13所述的平台，其特征在于，所述平台还包括设置于所述机体上的单目摄像头；The platform according to claim 13, wherein the platform further comprises a monocular camera arranged on the body;
    所述处理器还用于:The processor is also used for:
    对第一飞行控制指令进行响应,使所述无人机在第一位置原地旋转飞行一周;Respond to the first flight control instruction to make the UAV rotate and fly one circle in the first position;
    所述无人机在旋转飞行过程中,获取所述无人机配置的单目摄像头拍得的所述第一图像。During the rotating flight of the drone, the first image captured by the monocular camera configured on the drone is acquired.
  15. 根据权利要求14所述的平台，其特征在于，所述处理器还用于：The platform according to claim 14, wherein the processor is further configured to:
    对第二飞行控制指令进行响应,使所述无人机由所述第一位置下降飞行预设距离至第二位置;Respond to the second flight control instruction to cause the drone to descend and fly a preset distance from the first position to a second position;
    所述无人机位于所述第二位置时,获取所述单目摄像头拍得的所述第二图像。When the drone is located at the second position, the second image captured by the monocular camera is acquired.
  16. 根据权利要求14所述的平台，其特征在于，所述单目摄像头放置于能够向上摆动的云台上，以使所述单目摄像头能够拍得所述第一图像和所述第二图像；The platform according to claim 14, wherein the monocular camera is placed on a pan-tilt that can swing upward, so that the monocular camera can capture the first image and the second image;
    所述处理器还用于:The processor is also used for:
    对第三飞行控制指令进行响应,使所述无人机在所述第二位置原地旋转飞行一周;Respond to the third flight control instruction to make the UAV rotate and fly one circle in situ at the second position;
    获取所述无人机在旋转飞行过程中，所述单目摄像头拍得的所述第二图像，其中，所述第一图像和所述第二图像对应于所述无人机上方空域的环形视场。Acquire the second image captured by the monocular camera while the drone rotates in flight, wherein the first image and the second image correspond to an annular field of view of the airspace above the drone.
  17. 根据权利要求13所述的平台，其特征在于，所述处理器还用于：The platform according to claim 13, wherein the processor is further configured to:
    识别所述第一图像中像素点的语义类别;Identifying the semantic category of pixels in the first image;
    根据所述像素点的语义类别,确定所述第一图像中的所述目标区域。According to the semantic category of the pixel point, the target area in the first image is determined.
  18. 根据权利要求17所述的平台，其特征在于，像素点的语义类别包括天空和障碍物；The platform according to claim 17, wherein the semantic categories of pixels include sky and obstacles;
    所述处理器还用于:The processor is also used for:
    所述根据所述像素点的语义类别,确定所述第一图像中的所述目标区域,包括:The determining the target area in the first image according to the semantic category of the pixel includes:
    根据像素点的语义类别,确定所述第一图像中用于描述天空以及与天空最相邻的障碍物的目标像素点,以由所述目标像素点构成所述目标区域。According to the semantic category of the pixel, the target pixel used to describe the sky and the obstacle closest to the sky in the first image is determined, so that the target pixel constitutes the target area.
  19. 根据权利要求17所述的平台，其特征在于，所述处理器还用于：The platform according to claim 17, wherein the processor is further configured to:
    根据所述像素点的语义类别,在所述第一图像中确定备选区域;Determining a candidate area in the first image according to the semantic category of the pixel;
    根据所述无人机的体积和/或所述无人机所处飞行环境的障碍物分布情况,调整所述备选区域,以得到所述目标区域。According to the volume of the unmanned aerial vehicle and/or the distribution of obstacles in the flying environment of the unmanned aerial vehicle, the candidate area is adjusted to obtain the target area.
  20. 根据权利要求17至19中任一项所述的平台，其特征在于，所述处理器还用于：The platform according to any one of claims 17 to 19, wherein the processor is further configured to:
    计算所述目标区域中像素点与所述第二图像中像素点之间的相似度;Calculating the similarity between the pixels in the target area and the pixels in the second image;
    根据像素点之间的相似度,在所述第二图像中确定与所述目标区域中像素点匹配的像素点。According to the similarity between the pixel points, a pixel point that matches the pixel point in the target area is determined in the second image.
  21. 根据权利要求13所述的平台，其特征在于，所述处理器还用于：The platform according to claim 13, wherein the processor is further configured to:
    若所述无人机与所述目标区域之间的距离满足预设条件,则对返航控制指令进行响应,使所述无人机自动返航。If the distance between the drone and the target area satisfies a preset condition, respond to the return home control instruction to make the drone return home automatically.
  22. 根据权利要求13所述的平台，其特征在于，所述处理器还用于：The platform according to claim 13, wherein the processor is further configured to:
    根据所述目标区域与所述无人机之间的距离,确定所述目标区域对应的点云数据。According to the distance between the target area and the drone, the point cloud data corresponding to the target area is determined.
  23. 一种距离测量设备，其特征在于，所述测量设备包括：A distance measuring device, characterized in that the measuring device comprises:
    存储器,用于存储计算机程序;Memory, used to store computer programs;
    处理器,用于运行所述存储器中存储的计算机程序以实现:The processor is configured to run a computer program stored in the memory to realize:
    获取所述无人机上方空域的第一图像;Acquiring a first image of the airspace above the drone;
    在所述第一图像中,根据所述第一图像中像素点的语义类别确定影响所述无人机飞行的目标区域;In the first image, determine the target area that affects the flight of the drone according to the semantic category of the pixels in the first image;
    对飞行指令进行响应,以使所述无人机向指定方向移动预设距离,获得所述无人机上方空域的第二图像;Respond to the flight instruction, so that the UAV moves a preset distance in a designated direction to obtain a second image of the airspace above the UAV;
    匹配所述目标区域在所述第一图像和所述第二图像中的像素点,以得到匹配像素点;Matching the pixel points of the target area in the first image and the second image to obtain matching pixels;
    根据所述匹配像素点确定所述目标区域与所述无人机之间的距离。The distance between the target area and the drone is determined according to the matching pixel points.
  24. 根据权利要求23所述的设备，其特征在于，所述设备还包括：单目摄像头；The device according to claim 23, wherein the device further comprises: a monocular camera;
    所述处理器还用于:The processor is also used for:
    对第一飞行控制指令进行响应,使所述无人机在第一位置原地旋转飞行一周;Respond to the first flight control instruction to make the UAV rotate and fly one circle in the first position;
    所述无人机在旋转飞行过程中,获取所述无人机配置的单目摄像头拍得的所述第一图像。During the rotating flight of the drone, the first image captured by the monocular camera configured on the drone is acquired.
  25. 根据权利要求24所述的设备，其特征在于，所述处理器还用于：The device according to claim 24, wherein the processor is further configured to:
    对第二飞行控制指令进行响应,使所述无人机由所述第一位置下降飞行预设距离至第二位置;Respond to the second flight control instruction to cause the drone to descend and fly a preset distance from the first position to a second position;
    所述无人机位于所述第二位置时,获取所述单目摄像头拍得的所述第二图像。When the drone is located at the second position, the second image captured by the monocular camera is acquired.
  26. 根据权利要求24所述的设备，其特征在于，所述设备还包括云台，所述单目摄像头放置于能够向上摆动的所述云台上，以使所述单目摄像头能够拍得所述第一图像和所述第二图像；The device according to claim 24, wherein the device further comprises a pan-tilt, and the monocular camera is placed on the pan-tilt capable of swinging upward, so that the monocular camera can capture the first image and the second image;
    所述处理器还用于:The processor is also used for:
    对第三飞行控制指令进行响应,使所述无人机在所述第二位置原地旋转飞行一周;Respond to the third flight control instruction to make the UAV rotate and fly one circle in situ at the second position;
    获取所述无人机在旋转飞行过程中，所述单目摄像头拍得的所述第二图像，其中，所述第一图像和所述第二图像对应于所述无人机上方空域的环形视场。Acquire the second image captured by the monocular camera while the drone rotates in flight, wherein the first image and the second image correspond to an annular field of view of the airspace above the drone.
  27. 根据权利要求23所述的设备，其特征在于，所述处理器还用于：The device according to claim 23, wherein the processor is further configured to:
    识别所述第一图像中像素点的语义类别;Identifying the semantic category of pixels in the first image;
    根据所述像素点的语义类别,确定所述第一图像中的所述目标区域。According to the semantic category of the pixel point, the target area in the first image is determined.
  28. 根据权利要求27所述的设备，其特征在于，像素点的语义类别包括天空和障碍物；The device according to claim 27, wherein the semantic categories of pixels include sky and obstacles;
    所述处理器还用于：The processor is further configured to:
    根据像素点的语义类别,确定所述第一图像中用于描述天空以及与天空最相邻的障碍物的目标像素点,以由所述目标像素点构成所述目标区域。According to the semantic category of the pixel, the target pixel used to describe the sky and the obstacle closest to the sky in the first image is determined, so that the target pixel constitutes the target area.
  29. 根据权利要求27所述的设备，其特征在于，所述处理器还用于：The device according to claim 27, wherein the processor is further configured to:
    根据所述像素点的语义类别,在所述第一图像中确定备选区域;Determining a candidate area in the first image according to the semantic category of the pixel;
    根据所述无人机的体积和/或所述无人机所处飞行环境的障碍物分布情况,调整所述备选区域,以得到所述目标区域。According to the volume of the unmanned aerial vehicle and/or the distribution of obstacles in the flying environment of the unmanned aerial vehicle, the candidate area is adjusted to obtain the target area.
  30. 根据权利要求27至29中任一项所述的设备，其特征在于，所述处理器还用于：The device according to any one of claims 27 to 29, wherein the processor is further configured to:
    计算所述目标区域中像素点与所述第二图像中像素点之间的相似度;Calculating the similarity between the pixels in the target area and the pixels in the second image;
    根据像素点之间的相似度,在所述第二图像中确定与所述目标区域中像素点匹配的像素点。According to the similarity between the pixel points, a pixel point that matches the pixel point in the target area is determined in the second image.
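Claim 29 leaves the similarity measure unspecified; zero-mean normalized cross-correlation over small patches is one common choice for this kind of matching. The sketch below is an assumption rather than the patented method: it matches a patch from the target area along one row of the second image:

```python
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Zero-mean normalized cross-correlation between two equal-size patches."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def match_along_row(img2: np.ndarray, patch: np.ndarray, row: int) -> int:
    """Return the column in img2 (at the given row) whose window is most
    similar to `patch`; a stand-in for the claimed pixel matching."""
    h, w = patch.shape
    best_col, best_score = 0, -2.0
    for col in range(img2.shape[1] - w + 1):
        score = ncc(img2[row:row + h, col:col + w], patch)
        if score > best_score:
            best_col, best_score = col, score
    return best_col
```

In practice the matched pixel pairs, together with the two camera positions, feed the triangulation that yields the distance of claim 23.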
  30. The device according to claim 23, wherein the processor is further configured to:
    respond, if the distance between the UAV and the target area satisfies a preset condition, to a return-home control instruction by causing the UAV to return home automatically.
  31. The device according to claim 23, wherein the processor is further configured to:
    determine point cloud data corresponding to the target area according to the distance between the target area and the UAV.
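Turning measured distances into point cloud data, as claim 31 recites, typically means back-projecting each pixel of the target area through a camera model into the camera frame. The patent does not give the projection; the sketch below assumes a pinhole model with hypothetical intrinsics fx, fy, cx, cy:

```python
import numpy as np

def backproject(u: float, v: float, depth: float,
                fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Map image pixel (u, v) with measured depth to a 3-D point in the
    camera frame under a pinhole model; doing this for every pixel of
    the target area yields its point cloud."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])
```

A pixel at the principal point maps straight ahead of the camera; off-center pixels fan out in proportion to depth and inversely to the focal lengths.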
  32. A computer-readable storage medium storing program instructions which, when executed, implement the distance measurement method according to any one of claims 1 to 12.
PCT/CN2020/073656 2020-01-21 2020-01-21 Distance measurement method, movable platform, device, and storage medium WO2021146969A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202080004149.0A CN112639881A (en) 2020-01-21 2020-01-21 Distance measuring method, movable platform, device and storage medium
PCT/CN2020/073656 WO2021146969A1 (en) 2020-01-21 2020-01-21 Distance measurement method, movable platform, device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/073656 WO2021146969A1 (en) 2020-01-21 2020-01-21 Distance measurement method, movable platform, device, and storage medium

Publications (1)

Publication Number Publication Date
WO2021146969A1 (en) 2021-07-29

Family

ID=75291161

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/073656 WO2021146969A1 (en) 2020-01-21 2020-01-21 Distance measurement method, movable platform, device, and storage medium

Country Status (2)

Country Link
CN (1) CN112639881A (en)
WO (1) WO2021146969A1 (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107367262A (en) * 2017-06-17 2017-11-21 周超 Positioning mapping in real time shows interconnection type control method to a kind of unmanned plane at a distance
CN108140245A (en) * 2017-12-25 2018-06-08 深圳市道通智能航空技术有限公司 Distance measuring method, device and unmanned plane
CN109154657A (en) * 2017-11-29 2019-01-04 深圳市大疆创新科技有限公司 Detecting devices and moveable platform
US20190120950A1 (en) * 2017-10-24 2019-04-25 Canon Kabushiki Kaisha Distance detecting apparatus, image capturing apparatus, distance detecting method, and storage medium
US20190178633A1 (en) * 2017-12-07 2019-06-13 Fujitsu Limited Distance measuring device, distance measuring method, and non-transitory computer-readable storage medium for storing program
CN109891351A (en) * 2016-11-15 2019-06-14 深圳市大疆创新科技有限公司 The method and system of object detection and corresponding mobile adjustment manipulation based on image
CN110068826A (en) * 2019-03-27 2019-07-30 东软睿驰汽车技术(沈阳)有限公司 A kind of method and device of ranging
CN110132226A (en) * 2019-05-14 2019-08-16 广东电网有限责任公司 The distance and azimuth angle measurement system and method for a kind of unmanned plane line walking

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104677329B (en) * 2015-03-19 2017-06-30 广东欧珀移动通信有限公司 Target ranging method and device based on camera
CN106558038B (en) * 2015-09-18 2019-07-02 中国人民解放军国防科学技术大学 A kind of detection of sea-level and device
CN109074476A (en) * 2016-08-01 2018-12-21 深圳市大疆创新科技有限公司 The system and method evaded for barrier
CN107687841A (en) * 2017-09-27 2018-02-13 中科创达软件股份有限公司 A kind of distance-finding method and device
CN108427438A (en) * 2018-04-11 2018-08-21 北京木业邦科技有限公司 Flight environment of vehicle detection method, device, electronic equipment and storage medium
CN109948616B (en) * 2019-03-26 2021-05-25 北京迈格威科技有限公司 Image detection method and device, electronic equipment and computer readable storage medium


Also Published As

Publication number Publication date
CN112639881A (en) 2021-04-09

Similar Documents

Publication Publication Date Title
US11797028B2 (en) Unmanned aerial vehicle control method and device and obstacle notification method and device
US11726498B2 (en) Aerial vehicle touchdown detection
US20220234733A1 (en) Aerial Vehicle Smart Landing
WO2020187095A1 (en) Target tracking method and apparatus, and unmanned aerial vehicle
WO2020107372A1 (en) Control method and apparatus for photographing device, and device and storage medium
Patruno et al. A vision-based approach for unmanned aerial vehicle landing
JP2024053085A (en) Aircraft control device, aircraft control method, and program
US11713977B2 (en) Information processing apparatus, information processing method, and medium
WO2021047502A1 (en) Target state estimation method and apparatus, and unmanned aerial vehicle
WO2019082301A1 (en) Unmanned aircraft control system, unmanned aircraft control method, and program
US11449076B2 (en) Method for controlling palm landing of unmanned aerial vehicle, control device, and unmanned aerial vehicle
WO2022016534A1 (en) Flight control method of unmanned aerial vehicle and unmanned aerial vehicle
WO2021174539A1 (en) Object detection method, mobile platform, device and storage medium
WO2021146973A1 (en) Unmanned aerial vehicle return-to-home control method, device, movable platform and storage medium
WO2021056139A1 (en) Method and device for acquiring landing position, unmanned aerial vehicle, system, and storage medium
TWI711560B (en) Apparatus and method for landing unmanned aerial vehicle
Bikmullina et al. Stand for development of tasks of detection and recognition of objects on image
KR102289752B1 (en) A drone for performring route flight in gps blocked area and methed therefor
US11964775B2 (en) Mobile object, information processing apparatus, information processing method, and program
WO2021146969A1 (en) Distance measurement method, movable platform, device, and storage medium
KR20190097350A (en) Precise Landing Method of Drone, Recording Medium for Performing the Method, and Drone Employing the Method
WO2021146970A1 (en) Semantic segmentation-based distance measurement method and apparatus, device and system
JP6775748B2 (en) Computer system, location estimation method and program
JP7391114B2 (en) information processing equipment
WO2021146972A1 (en) Airspace detection method, movable platform, device, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20915774

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20915774

Country of ref document: EP

Kind code of ref document: A1