WO2021146969A1 - Distance measurement method, movable platform, device and storage medium - Google Patents

Distance measurement method, movable platform, device and storage medium

Info

Publication number
WO2021146969A1
WO2021146969A1 (PCT/CN2020/073656)
Authority
WO
WIPO (PCT)
Prior art keywords
image
drone
target area
pixel
distance
Prior art date
Application number
PCT/CN2020/073656
Other languages
English (en)
Chinese (zh)
Inventor
刘宝恩
李鑫超
王涛
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to PCT/CN2020/073656 priority Critical patent/WO2021146969A1/fr
Priority to CN202080004149.0A priority patent/CN112639881A/zh
Publication of WO2021146969A1 publication Critical patent/WO2021146969A1/fr

Links

Images

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/10 Simultaneous control of position or course in three dimensions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods

Definitions

  • the present invention relates to the technical field of image recognition, and in particular to a distance measurement method, a movable platform, a device, and a storage medium.
  • a UAV (unmanned aerial vehicle) is an aircraft operated by radio remote-control equipment and an onboard program control device. Compared with manned aircraft, it is small and low-cost, and it has been widely used in many fields, such as street-scene photography, power-line inspection, traffic monitoring, post-disaster rescue, and so on.
  • in the prior art, distance measurement is usually realized with a ranging sensor, such as a lidar, configured on the UAV.
  • for UAVs not equipped with such a ranging sensor, however, distance measurement cannot be achieved, so the UAV runs a greater risk of damage during the return flight.
  • the invention provides a distance measurement method, a movable platform, a device, and a storage medium, which enable a UAV not equipped with a ranging sensor to realize distance measurement while ensuring measurement accuracy.
  • the first aspect of the present invention is to provide a distance measurement method, which includes:
  • acquiring a first image of the airspace above a drone;
  • determining, in the first image, a target area that affects the flight of the drone according to the semantic category of the pixels in the first image;
  • responding to a flight control instruction, so that the drone moves a preset distance in a designated direction, and acquiring a second image of the airspace above the drone;
  • matching the pixels of the target area in the first image with the pixels in the second image to obtain matched pixels; and
  • determining the distance between the target area and the drone according to the matched pixels.
  • the second aspect of the present invention is to provide a movable platform, the movable platform includes: a body, a power system and a control device;
  • the power system is arranged on the body and used to provide power for the movable platform
  • the control device includes a memory and a processor
  • the memory is used to store a computer program
  • the processor is configured to run a computer program stored in the memory to realize:
  • acquiring a first image of the airspace above the drone;
  • determining, in the first image, a target area that affects the flight of the drone according to the semantic category of the pixels in the first image;
  • responding to a flight control instruction, so that the drone moves a preset distance in a designated direction, and acquiring a second image of the airspace above the drone;
  • matching the pixels of the target area in the first image with the pixels in the second image to obtain matched pixels; and
  • determining the distance between the target area and the drone according to the matched pixels.
  • the third aspect of the present invention is to provide a distance measuring device, which includes:
  • Memory used to store computer programs
  • the processor is configured to run a computer program stored in the memory to realize:
  • acquiring a first image of the airspace above the drone;
  • determining, in the first image, a target area that affects the flight of the drone according to the semantic category of the pixels in the first image;
  • responding to a flight control instruction, so that the drone moves a preset distance in a designated direction, and acquiring a second image of the airspace above the drone;
  • matching the pixels of the target area in the first image with the pixels in the second image to obtain matched pixels; and
  • determining the distance between the target area and the drone according to the matched pixels.
  • the fourth aspect of the present invention is to provide a computer-readable storage medium, where the computer-readable storage medium stores program instructions, and the program instructions are used to implement the distance measurement method of the first aspect.
  • the distance measurement method, movable platform, device, and storage medium provided by the present invention first acquire a first image of the airspace above the drone and then perform semantic recognition on it, so as to determine, in the first image, the target area that affects the flight of the drone.
  • afterwards, the drone is controlled to move a preset distance in a designated direction, and a second image of the airspace above the drone is obtained; that is, the first image and the second image are shot at different locations.
  • the pixel points of the target area in the first image and the second image are matched to obtain matched pixels, and the distance between the target area and the drone is determined according to the matched pixels.
  • the present invention thus provides a method for realizing distance measurement through image recognition. Since a camera is an indispensable device for the UAV's normal mission execution, the measurement method provided by the present invention ensures measurement accuracy without affecting the size and cost of the UAV. In addition, with this measurement method, distance measurement can also be achieved by drones that are not equipped with a depth sensor.
  • FIG. 1 is a schematic flowchart of a distance measurement method provided by an embodiment of the present invention
  • FIG. 2 is a schematic structural diagram of a UAV according to an embodiment of the present invention with its gimbal in different states;
  • FIG. 3 is a schematic flowchart of a second image acquisition manner provided by an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of an annular field of view corresponding to an image provided by an embodiment of the present invention.
  • FIG. 5 is a schematic flowchart of a first image acquisition manner according to an embodiment of the present invention.
  • FIG. 6 is a schematic flowchart of a method for measuring the distance between a drone and a target area provided by an embodiment of the present invention
  • FIG. 7 shows the positional relationship of various parameters in one measurement method of the embodiment shown in FIG. 6;
  • FIG. 8 shows the positional relationship of various parameters in another measurement method of the embodiment shown in FIG. 6;
  • FIG. 9 is a schematic structural diagram of a distance measuring device provided by an embodiment of the present invention.
  • FIG. 10 is a schematic structural diagram of a movable platform provided by an embodiment of the present invention.
  • FIG. 11 is a schematic structural diagram of a distance measuring device provided by an embodiment of the present invention.
  • the distance measurement method provided by the present invention measures the distance between the drone and a target area above it that affects its flight; this measurement is particularly important during the drone's automatic return.
  • when the drone completes its flight mission, encounters a harsh natural environment such as protruding mountains during flight, or loses its communication connection with the ground base station, it often needs to start an automatic return in order to ensure its safety and avoid damage. And since the automatic return includes an ascending flight phase, determining the distance between the drone and the target area becomes an important condition for judging whether the drone can return automatically. At this time, the obstacle distance measurement method provided by the embodiments of the present invention can be used to realize the distance measurement.
  • FIG. 1 is a schematic flowchart of a distance measurement method provided by an embodiment of the present invention.
  • the execution subject of the distance measurement method is a measurement device.
  • the measurement device can be implemented as software, or as a combination of software and hardware.
  • the measurement device implements the distance measurement method to measure the distance between the UAV and the target area that affects the UAV's flight.
  • the measurement device in this embodiment and the following embodiments may specifically be a movable platform, such as an unmanned aerial vehicle, an unmanned vehicle, an unmanned ship, and so on. The following embodiments take a drone as the execution subject by way of example.
  • the method may include:
  • the camera configured on the drone may be a monocular camera.
  • This camera can be mounted on a gimbal that can be tilted upward.
  • with the gimbal raised, the first image of the airspace above the drone can be taken.
  • the non-raised and raised states of the gimbal are shown in FIG. 2.
  • the angle of view corresponding to the first image may be the same as the angle of view of the monocular camera.
  • the drone can identify the semantic category to which each pixel in the first image belongs, that is, perform pixel-level semantic recognition on the first image.
  • this kind of semantic recognition can be accomplished with the aid of a neural network model, and the specific recognition process can be referred to the following related descriptions.
  • the above neural network model can perform binary classification, that is, distinguish the sky from obstacles in the first image; it can also perform multi-class classification, that is, distinguish the sky, trees, buildings, and so on in the first image.
  • the target area affecting the flight of the drone is determined in the first image according to the semantic category of each pixel. If the target area contains obstacles, it will affect the normal ascent of the drone.
  • an alternative way is to determine the pixels used to describe the sky as the target pixels, and the target pixels constitute the target area.
  • Another alternative is to determine the pixels in the first image that describe the sky and the obstacles adjacent to the sky as the target pixels, and the target pixels constitute the target area.
  • the selection of the target area can also take into account the flying environment of the drone and/or the size of the drone.
  • specifically, a candidate area is first determined according to the category to which each pixel in the first image belongs, where the candidate area may be composed of pixels used to describe the sky and the obstacles closest to the sky.
  • then, the range of the candidate area is adjusted. For example, when the size of the drone is small or the obstacles in the flying environment are sparsely distributed, the candidate area can be reduced by a preset multiple to obtain the target area; otherwise, the candidate area can be expanded by a preset multiple to obtain the target area. A sketch of this selection logic is given below.
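  • the following is a minimal sketch of the target-area selection described above, assuming a per-pixel semantic label map is already available; the label value, the one-pixel adjacency band, and the single erosion/dilation step standing in for the "preset multiple" are all illustrative assumptions, not taken from the patent.

```python
# Hedged sketch: take sky pixels plus obstacle pixels adjacent to the sky as
# the candidate area, then shrink or grow it depending on drone size /
# obstacle density. Label values and the adjustment heuristic are assumed.
import numpy as np
from scipy import ndimage

SKY = 0  # assumed label value for "sky" in the semantic label map

def target_area(labels: np.ndarray, small_drone: bool) -> np.ndarray:
    sky = labels == SKY
    # obstacle pixels touching the sky: the non-sky pixels inside a
    # one-pixel dilation of the sky mask
    band = ndimage.binary_dilation(sky) & ~sky
    candidate = sky | band
    # adjust the candidate area (here one erosion or dilation step stands
    # in for the "preset multiple" of the text)
    if small_drone:
        return ndimage.binary_erosion(candidate)
    return ndimage.binary_dilation(candidate)
```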
  • the drone in response to the flight control instruction, can fly a preset distance from the current position in the designated direction to another position, and at this other position, a second image of the airspace above the drone can be taken.
  • the current position of the drone can be called the first position, where the first image is taken; the other position is called the second position, where the second image is obtained.
  • the above-mentioned designated direction may be upward; that is, after the first image is taken at the first position, the drone may respond to the received ascending flight control instruction and fly upward from the first position to the second position.
  • the monocular camera can take a second image at the second position.
  • the difference between the first position and the second position is usually small, for example, a few centimeters.
  • the ascent flight control command can be generated autonomously by the drone, or sent to the drone by the pilot through the control device.
  • the drone can match the pixel points in the target area of the first image with the pixel points in the second image to obtain matching pixels, that is, to obtain at least one matching pixel pair.
  • the objects contained in the first image and the second image are usually the same, but there will be a slight difference in position.
  • the two pixels contained in any matched pixel pair describe the same object.
  • whether the pixels are matched or not can be reflected by the similarity between the pixels.
  • S105 Determine the distance between the target area and the drone according to the matching pixels.
  • a pre-established ranging model can be used to realize the ranging, or the triangulation principle can be used.
  • for details of the ranging, refer to the embodiments shown in FIGS. 6 to 8 below.
  • the first image of the airspace above the drone is acquired, and after semantic recognition is performed on it, the target area that affects the flight of the drone can be obtained. Then control the drone to move a preset distance in a designated direction to obtain a second image of the airspace above the drone. Then, the pixel points of the target area in the first image and the second image are matched to obtain matching pixels. Finally, the distance between the target area and the drone is determined according to the matching pixels, and further, the flight status of the drone can be controlled according to this distance. It can be seen that the present invention provides a method for realizing obstacle distance measurement through image recognition. Since the image to be recognized is taken by the necessary camera of the drone itself, it will not affect the size and cost of the drone. At the same time, this method can also be used to measure distances for drones that are not equipped with depth sensors to ensure the accuracy of the measurement.
  • in step S102 of the embodiment shown in FIG. 1, the above-mentioned neural network model is used to perform pixel-level semantic recognition on the first image. The semantic recognition process is described in detail below:
  • the neural network model may specifically be a convolutional neural network (Convolutional Neural Networks, CNN) model.
  • the neural network model can include multiple computing nodes. Each computing node can include a convolution (Conv) layer, batch normalization (BN), and an activation function ReLU.
  • the computing nodes can be connected by skip connections.
  • input data of K × H × W can be fed into the neural network model, and after processing by the model, output data of C × H × W is obtained.
  • K represents the number of input channels; K can be equal to 4, corresponding to the four channels red (R), green (G), blue (B), and depth (D).
  • H represents the height of the input image (that is, the first image), W represents the width of the input image, and C represents the number of categories.
  • when the input image is too large, it can be cut into N sub-images.
  • in this case, the input data is N × K × H′ × W′ and the output data is N × C × H′ × W′, where H′ represents the height of a sub-image and W′ represents its width.
  • the feature map may also be obtained in other ways, which is not limited in this application.
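  • as a concrete illustration of such a model — computing nodes of convolution, batch normalization, and ReLU joined by skip connections, mapping a K × H × W input to C × H × W output — the following PyTorch sketch may help; the layer widths and node count are assumptions, not the patent's architecture.

```python
# Hedged sketch of the described CNN segmentation model: each computing node
# is Conv -> BatchNorm -> ReLU, nodes are joined by a skip connection, and
# the head emits one confidence map per category. Widths are illustrative.
import torch
import torch.nn as nn

class ComputeNode(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.bn(self.conv(x)))

class SegmentationNet(nn.Module):
    """Maps K x H x W input (e.g. K=4 for R, G, B, D) to C x H x W scores."""
    def __init__(self, k: int = 4, c: int = 4, width: int = 32):
        super().__init__()
        self.node1 = ComputeNode(k, width)
        self.node2 = ComputeNode(width, width)
        self.head = nn.Conv2d(width, c, kernel_size=1)

    def forward(self, x):
        h1 = self.node1(x)
        h2 = self.node2(h1) + h1  # skip connection between computing nodes
        return self.head(h2)      # one confidence feature map per channel

scores = SegmentationNet()(torch.randn(1, 4, 480, 640))  # (1, 4, 480, 640)
```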
  • Using the above-mentioned pre-trained neural network model to process the environment image to obtain a feature map may specifically include the following steps:
  • Step 1 Input the environment image into the neural network model to obtain the model output result of the neural network model.
  • the model output result of the neural network model may include confidence feature maps output by multiple output channels, where the multiple output channels correspond one-to-one to multiple object categories, and the pixel values of the confidence feature map of a single object category characterize the probability that each pixel belongs to that object category.
  • Step 2 According to the model output result of the neural network model, a feature map containing semantic information is obtained.
  • for each pixel location, the object category corresponding to the confidence feature map that has the largest pixel value at that location, among the multiple confidence feature maps corresponding one-to-one to the multiple output channels, may be taken as the object category of that pixel location, so as to obtain the feature map.
  • for example, suppose the number of output channels of the neural network model is 4 and the output of each channel is a confidence feature map, that is, the 4 confidence feature maps are confidence feature map 1 to confidence feature map 4, where confidence feature map 1 corresponds to the sky, confidence feature map 2 corresponds to buildings, confidence feature map 3 corresponds to trees, and confidence feature map 4 corresponds to "other". Among these categories, all except the sky can be regarded as obstacles.
  • if the pixel value at pixel location (100, 100) is 70 in confidence feature map 1, 50 in confidence feature map 2, 20 in confidence feature map 3, and 20 in confidence feature map 4, it can be determined that pixel location (100, 100) is sky.
  • similarly, if the pixel value at pixel location (100, 80) is 20 in confidence feature map 1, 30 in confidence feature map 2, 20 in confidence feature map 3, and 70 in confidence feature map 4, it can be determined that pixel location (100, 80) belongs to "other", that is, an obstacle.
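  • the decision rule above amounts to a per-pixel argmax across the confidence feature maps; a minimal sketch with the example values follows (array shapes and category names are assumed for illustration).

```python
# Per-pixel category = argmax over the C confidence feature maps, as
# described above. Shapes and category names are illustrative assumptions.
import numpy as np

CATEGORIES = ["sky", "building", "tree", "other"]

def label_map(confidence_maps: np.ndarray) -> np.ndarray:
    """confidence_maps: (C, H, W), one confidence feature map per category.
    Returns an (H, W) array of category indices."""
    return np.argmax(confidence_maps, axis=0)

maps = np.zeros((4, 200, 200))
maps[0, 100, 100], maps[1, 100, 100] = 70, 50   # "sky" wins at (100, 100)
maps[3, 100, 80] = 70                           # "other" wins at (100, 80)
labels = label_map(maps)
print(CATEGORIES[labels[100, 100]], CATEGORIES[labels[100, 80]])  # sky other
```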
  • in step S104 of the embodiment shown in FIG. 1, a pixel matching method is provided: first calculate the similarity between each pixel in the target area and each pixel in the second image, and then obtain at least one matched pixel pair according to the similarity.
  • however, since the target area and the second image each contain a large number of pixels, this results in a large amount of calculation in the matching process, which occupies more of the drone's computing resources and also makes the matching inefficient.
  • the drone can use a feature point detection algorithm pre-configured in itself to extract the target area and feature pixel points in the second image respectively.
  • the detected characteristic pixel points are usually corner points in the image.
  • the aforementioned feature point detection algorithm may be a Scale-Invariant Feature Transform (SIFT) algorithm, a Speeded-Up Robust Features (SURF) algorithm, a Binary Robust Independent Elementary Features (BRIEF) algorithm, and so on.
  • the drone can perform matching processing on the target area and the characteristic pixel points in the second image to obtain at least one matching pixel pair.
  • the two characteristic pixels in a matched pair likewise describe the same object.
  • the similarity between pixels may be the similarity between the descriptors of the characteristic pixels. If the similarity is greater than or equal to a preset threshold, it is determined that the two characteristic pixels match, and the two form a matched pixel pair.
  • the characteristic pixels obtained by different feature point detection algorithms may use different similarity calculation methods.
  • for binary descriptors such as those produced by the BRIEF algorithm, the similarity can be obtained by calculating the Hamming distance between the descriptors; for descriptors obtained by the SIFT algorithm or the SURF algorithm, the similarity can be obtained by calculating the Euclidean distance between the descriptors corresponding to the characteristic pixels.
  • the number of pixels for matching can be greatly reduced, which also greatly reduces the amount of calculation in the matching process to ensure matching efficiency.
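  • a hedged sketch of this feature-based matching using OpenCV follows; ORB is used as a freely available binary-descriptor stand-in for the BRIEF-style descriptors named above, and the image paths are hypothetical.

```python
# Detect feature pixels in both images, describe them, and match by
# descriptor distance. ORB yields binary descriptors, so Hamming distance
# applies; for SIFT/SURF one would use cv2.NORM_L2 (Euclidean) instead.
import cv2

img1 = cv2.imread("first_image.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("second_image.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img1, None)  # feature pixels + descriptors
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# each match is a pair of pixel coordinates describing the same object
pairs = [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in matches[:50]]
```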
  • in the embodiment shown in FIG. 1, the field of view corresponding to the first image is the same as the field of view of the monocular camera, and both are relatively small. It is easy to understand that the larger the field of view corresponding to the first image, the more comprehensively the first image describes the airspace above the drone.
  • with a first image having such a large field of view, the distance between the target area and the drone can be calculated more accurately, and the automatic return of the drone can be controlled more accurately.
  • an optional implementation manner may be:
  • S1011 Respond to the first flight control instruction to make the UAV rotate and fly one circle in situ at the first position.
  • S1012 Obtain the first image taken by the monocular camera of the drone during the rotating flight of the drone.
  • while hovering at the first position, the drone responds to the first flight control instruction and rotates in place for one full circle. During this rotating flight, the monocular camera on the drone can take the first image, which corresponds to an annular field of view, as shown in FIG. 4.
  • an optional implementation manner may be:
  • S1031 Respond to the second flight control instruction to cause the drone to descend and fly a preset distance from the first position to the second position.
  • at the first position, the monocular camera on the drone can take an image of the airspace above the drone, which is the first image.
  • the drone will descend and fly from the current first position to the second position.
  • the monocular camera configured on the drone will again take an image of the airspace above the drone, which is the second image.
  • the drone has obtained the first image and the second image.
  • the field of view corresponding to the second image may be the same as the field of view of the monocular camera.
  • the field of view of the monocular camera is usually small. Similar to the description in the embodiment shown in Figure 3, the larger the field of view corresponding to the second image, the more comprehensive the description of the airspace above the drone by the second image.
  • the second image with this large field of view can also more accurately calculate the distance between the target area and the drone, and can more accurately control the drone to return home.
  • after the drone responds to the second flight control instruction and flies down to the second position, it can also respond to a third flight control instruction, so that the drone rotates in place at the second position for one full circle, and the second image taken by the monocular camera during this rotating flight is obtained. The second image obtained after the rotating flight corresponds to an annular field of view of the airspace above the drone, which can also be shown in FIG. 4.
  • in this way, the drone obtains a first image and a second image each with an annular field of view, so that the distance between the target area and the drone can be calculated more accurately from these large-field-of-view images, and the return of the drone can be controlled more accurately.
  • the drone can take a first image at the first position, and after a descending flight, it can obtain a second image taken at the second position.
  • for this method of obtaining the first image and the second image through a descending flight, after the UAV performs pixel matching, one option, as shown in FIG. 6, is to determine the distance between the UAV and the target area based on the matched pixel pairs; that is, an optional implementation of step S105 can be:
  • S1051 Determine a first distance between the first pixel point and the image center of the first image.
  • S1052 Determine a second distance between the second pixel point and the image center of the second image.
  • S1053 Determine the distance between the target area and the drone according to the preset distance, the camera parameters of the monocular camera, the first distance, and the second distance.
  • the drone can already obtain at least one matched pixel pair, where any matched pixel pair may consist of a first pixel in the first image and a second pixel in the second image.
  • the drone can use any matching pixel pair to determine the distance between the drone and the target area.
  • as shown in FIG. 7, suppose a matched pixel pair A includes a first pixel A1 and a second pixel A2;
  • the image center of the first image, which contains the first pixel A1, is O1;
  • the image center of the second image, which contains the second pixel A2, is O2.
  • the distance between the target area and the drone can be calculated according to a formula whose parameters are defined as follows (a reconstruction of the formula is sketched after the definitions):
  • A0 is the target area;
  • x is the horizontal distance between the drone and the target area, that is, the line segment O0A0;
  • z is the vertical distance between the drone and the target area, that is, the line segment O0P1;
  • O0 is the optical center of the monocular camera configured on the UAV;
  • d0 is the distance between the first position P1, where the first image is captured, and the second position P2, where the second image is captured;
  • d1 is the distance between the first pixel A1 and the image center O1 of the first image;
  • d2 is the distance between the second pixel A2 and the image center O2 of the second image;
  • f is the focal length of the monocular camera, that is, the line segment O1P1 and likewise the line segment O2P2.
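  • the formula itself is not reproduced in this text. A plausible reconstruction from the similar-triangle geometry implied by the parameters above — an assumption, not a quotation of the patent — for the descending case of FIG. 7 (the camera points upward and moves away from the target by d0 between the two shots) is:

```latex
% similar triangles at the two camera positions:
%   d1 / f = x / z         (first image, taken at P1)
%   d2 / f = x / (z + d0)  (second image, taken at P2 after descending)
\[
  z = \frac{d_0 \, d_2}{d_1 - d_2}, \qquad x = \frac{d_1 \, z}{f}.
\]
```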
  • specifically, the drone can first obtain, in the first image, the pixel coordinates of the first pixel A1 and of the image center O1; the first distance d1 between A1 and O1 is then determined from these pixel coordinates.
  • the second distance d2 is calculated similarly, which will not be repeated here.
  • the distance between the target area and the drone can be determined by using any matching pixel pair.
  • the above calculation can be performed on multiple matched pixel pairs to obtain multiple distances, and the distance between the drone and the target area can be determined based on the multiple distances. .
  • the average or median of multiple distances is determined as the distance between the drone and the target area.
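  • a minimal Python sketch of this per-pair computation and the median aggregation, using the similar-triangle reconstruction above; the helper names, the baseline value, and the sample coordinates are hypothetical.

```python
# Estimate the vertical distance z from matched pixel pairs using the
# reconstructed descending-case relation z = d0 * d2 / (d1 - d2), then take
# the median over pairs as suggested in the text. Inputs are illustrative.
import statistics

def pair_distance(p1, p2, center1, center2, d0, eps=1e-6):
    """p1, p2: matched pixel coords in images 1 and 2; centers: image
    centers; d0: baseline between the two camera positions (metres)."""
    d1 = ((p1[0] - center1[0]) ** 2 + (p1[1] - center1[1]) ** 2) ** 0.5
    d2 = ((p2[0] - center2[0]) ** 2 + (p2[1] - center2[1]) ** 2) ** 0.5
    if d1 - d2 < eps:  # degenerate pair: parallax too small to use
        return None
    return d0 * d2 / (d1 - d2)

def target_distance(pairs, center1, center2, d0):
    zs = [z for p1, p2 in pairs
          if (z := pair_distance(p1, p2, center1, center2, d0)) is not None]
    return statistics.median(zs) if zs else None

z = target_distance([((420, 180), (415, 185)), ((500, 90), (492, 97))],
                    center1=(320, 240), center2=(320, 240), d0=0.05)
```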
  • the drone can take a first image at the first position, and then after an ascending flight, obtain a second image taken at the second position.
  • the above-mentioned method can also be used to realize distance measurement.
  • the positional relationship among the above-mentioned first position P1, the image center O1 of the first image, the first pixel A1, the second position P2, the image center O2 of the second image, and the second pixel A2 then becomes as shown in FIG. 8.
  • A0 is the target area;
  • x is the horizontal distance between the drone and the target area, that is, the line segment O0A0;
  • z is the vertical distance between the drone and the target area, that is, the line segment O0P1;
  • O0 is the optical center of the monocular camera configured on the UAV;
  • d0 is the distance between the first position P1, where the first image is captured, and the second position P2, where the second image is captured;
  • d1 is the distance between the first characteristic pixel A1 and the image center O1 of the first image;
  • d2 is the distance between the second characteristic pixel A2 and the image center O2 of the second image;
  • f is the focal length of the monocular camera, that is, the line segment O1P1 and likewise the line segment O2P2.
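  • under the same similar-triangle reconstruction (again an assumption rather than a quotation), the ascending case of FIG. 8 only flips the sign of the baseline, since the second position is closer to the target:

```latex
%   d1 / f = x / z         (first image, taken at P1)
%   d2 / f = x / (z - d0)  (second image, taken at P2 after ascending)
\[
  z = \frac{d_0 \, d_2}{d_2 - d_1}, \qquad x = \frac{d_1 \, z}{f}.
\]
```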
  • pixel matching can also be performed on the characteristic pixels in the first image and the second image; any matched pixel pair obtained in this way contains a first characteristic pixel in the first image and a second characteristic pixel in the second image.
  • the first distance between the first characteristic pixel and the image center of the first image, and the second distance between the second characteristic pixel and the image center of the second image, can then be determined, and the method shown in FIG. 7 or FIG. 8 can be used to calculate the distance between the target area and the drone.
  • the movement of the drone can be controlled according to the distance. Specifically, if the distance meets the preset condition, it indicates that the target area above the drone is far away and the ascending flight phase of the return process will not be affected; at this time, the drone can respond to a return-home control instruction and return automatically. Otherwise, the drone is controlled to continue hovering.
  • when it is determined that the distance between the drone and the target area meets the preset condition, the drone's return can be controlled. It is easy to understand that any flight of a drone requires battery power; therefore, before controlling the drone to return, the power required for the return process can also be determined. If the current remaining power exceeds the power required for the return, the drone is controlled to return home.
  • an alternative way is to first estimate the wind speed information along the route from the current position to the return destination based on historical wind speed information, then determine the ground speed information for landing from the current position to the return destination, and finally determine the power required for the drone's return process based on the wind speed information and the ground speed information.
  • the point cloud data corresponding to the target area can be further obtained.
  • This point cloud data can describe the flight environment in which the drone is located.
  • This point cloud data can be used to plan the return path for the drone to ensure that the drone can return to home automatically and safely according to this path.
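  • the patent does not spell out how this point cloud is constructed; one plausible construction — purely an assumption for illustration — back-projects each matched target-area pixel with its recovered distance using the pinhole model:

```python
# Hypothetical sketch: back-project target-area pixels into 3D camera
# coordinates via the pinhole model, X = (u - cx) * z / f, Y = (v - cy) * z / f.
# Not taken from the patent; f is the focal length in pixels.
def to_point_cloud(pixels, distances, f, cx, cy):
    """pixels: (u, v) coords in the first image; distances: recovered z per
    pixel; (cx, cy): image center. Returns a list of (X, Y, Z) points."""
    return [((u - cx) * z / f, (v - cy) * z / f, z)
            for (u, v), z in zip(pixels, distances)]

cloud = to_point_cloud([(420, 180), (500, 90)], [0.81, 0.85],
                       f=600.0, cx=320.0, cy=240.0)
```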
  • FIG. 9 is a schematic structural diagram of a distance measuring device provided by an embodiment of the present invention. Referring to FIG. 9, this embodiment provides a distance measuring device that can execute the above-mentioned distance measurement method; specifically, the distance measuring device includes:
  • the acquiring module 11 is configured to acquire a first image of the airspace above the drone.
  • the area determining module 12 is configured to determine, in the first image, a target area that affects the flight of the drone according to the semantic category of pixels in the first image.
  • the response module 13 is configured to respond to flight instructions, so that the UAV moves a predetermined distance in a designated direction, and obtains a second image of the airspace above the UAV.
  • the matching module 14 is configured to match the pixel points of the target area in the first image and the second image to obtain matching pixels;
  • the distance determining module 15 is configured to determine the distance between the target area and the UAV according to the matching pixel points.
  • the device shown in FIG. 9 can also execute the methods of the embodiments shown in FIG. 1 to FIG. 8.
  • FIG. 10 is a schematic structural diagram of a movable platform provided by an embodiment of the present invention.
  • an embodiment of the present invention provides a movable platform, where the movable platform is at least one of the following: an unmanned aerial vehicle, an unmanned ship, an unmanned vehicle. Specifically, the movable platform includes: a body 21, a power system 22, and a control device 23.
  • the power system 22 is arranged on the body 21 and is used to provide power for the movable platform.
  • the control device 23 includes a memory 231 and a processor 232.
  • the memory is used to store a computer program
  • the processor is configured to run a computer program stored in the memory to realize:
  • acquiring a first image of the airspace above the drone;
  • determining, in the first image, a target area that affects the flight of the drone according to the semantic category of the pixels in the first image;
  • responding to a flight control instruction, so that the drone moves a preset distance in a designated direction, and acquiring a second image of the airspace above the drone;
  • matching the pixels of the target area in the first image with the pixels in the second image to obtain matched pixels; and
  • determining the distance between the target area and the drone according to the matched pixels.
  • the movable platform also includes a monocular camera 24, which is arranged on the body 21;
  • the processor 232 is further configured to: respond to the first flight control instruction, so that the UAV rotates and flies once at the first position;
  • the first image captured by the monocular camera configured on the drone is acquired.
  • the processor 232 is further configured to: respond to a second flight control instruction to cause the drone to descend and fly a preset distance from the first position to a second position;
  • the second image captured by the monocular camera is acquired.
  • the monocular camera 24 is mounted on a gimbal that can be tilted upward, so that the monocular camera 24 can capture the first image and the second image;
  • the processor 232 is further configured to: respond to a third flight control instruction, so that the UAV rotates and flies in situ at the second position for one circle;
  • processor 232 is further configured to: identify the semantic category of pixels in the first image;
  • the target area in the first image is determined.
  • the semantic category of the pixel includes sky and obstacles
  • the processor 232 is further configured to: according to the semantic category of the pixels, determine the pixels in the first image that describe the sky and the obstacles closest to the sky as target pixels;
  • the target area is formed by the target pixels.
  • processor 232 is further configured to: determine a candidate area in the first image according to the semantic category of the pixel;
  • the candidate area is adjusted to obtain the target area.
  • processor 232 is further configured to: calculate the similarity between the pixels in the target area and the pixels in the second image;
  • a pixel point that matches the pixel point in the target area is determined in the second image.
  • the processor 232 is further configured to: if the distance between the drone and the target area meets a preset condition, respond to a return home control instruction to make the drone return home automatically.
  • the processor 232 is further configured to determine the point cloud data corresponding to the target area according to the distance between the target area and the drone.
  • the movable platform shown in FIG. 10 can execute the methods of the embodiments shown in FIGS. 1 to 8.
  • FIGS. 1 to 8 For parts that are not described in detail in this embodiment, please refer to the related descriptions of the embodiments shown in FIGS. 1 to 8.
  • FIG. 1 to FIG. 8 For the implementation process and technical effects of this technical solution, please refer to the description in the embodiment shown in FIG. 1 to FIG. 8, which will not be repeated here.
  • the structure of the distance measuring device shown in FIG. 11 can be implemented as an electronic device, which can be a drone.
  • the electronic device may include: one or more processors 31 and one or more memories 32.
  • the memory 32 is used to store a program that supports the electronic device to execute the distance measurement method provided in the embodiments shown in FIGS. 1 to 8 above.
  • the processor 31 is configured to execute a program stored in the memory 32.
  • the program includes one or more computer instructions, and the following steps can be implemented when one or more computer instructions are executed by the processor 31:
  • acquiring a first image of the airspace above the drone;
  • determining, in the first image, a target area that affects the flight of the drone according to the semantic category of the pixels in the first image;
  • responding to a flight control instruction, so that the drone moves a preset distance in a designated direction, and acquiring a second image of the airspace above the drone;
  • matching the pixels of the target area in the first image with the pixels in the second image to obtain matched pixels; and
  • determining the distance between the target area and the drone according to the matched pixels.
  • the structure of the distance measuring device may further include a communication interface 33 for the electronic device to communicate with other devices or a communication network.
  • the device also includes a monocular camera
  • the processor 31 is further configured to: respond to the first flight control instruction to make the UAV rotate and fly one circle in situ at the first position;
  • the first image captured by the monocular camera configured on the drone is acquired.
  • the processor 31 is further configured to: respond to a second flight control instruction to cause the drone to descend and fly a preset distance from the first position to a second position;
  • the second image captured by the monocular camera is acquired.
  • the monocular camera is mounted on a gimbal that can be tilted upward, so that the monocular camera can capture the first image and the second image;
  • the processor 31 is further configured to: respond to a third flight control instruction, so that the UAV rotates and flies in situ at the second position for one circle;
  • processor 31 is further configured to: identify the semantic category of pixels in the first image;
  • the target area in the first image is determined.
  • the semantic category of the pixel includes sky and obstacle
  • the processor 31 is further configured to: according to the semantic category of the pixels, determine the pixels in the first image that describe the sky and the obstacles closest to the sky as target pixels;
  • the target area is formed by the target pixels.
  • the matching pixel pair is determined according to the similarity between the characteristic pixel points.
  • the processor 31 is further configured to: determine a candidate area in the first image according to the semantic category of the pixel;
  • the candidate area is adjusted to obtain the target area.
  • processor 31 is further configured to: calculate the similarity between the pixels in the target area and the pixels in the second image;
  • a pixel point that matches the pixel point in the target area is determined in the second image.
  • the processor 31 is further configured to: if the distance between the drone and the target area meets a preset condition, respond to a return home control instruction to make the drone return home automatically.
  • the processor 31 is further configured to: determine the point cloud data corresponding to the target area according to the distance between the target area and the drone.
  • the device shown in FIG. 11 can execute the methods of the embodiments shown in FIG. 1 to FIG. 8. For parts that are not described in detail in this embodiment, reference may be made to the related description of the embodiments shown in FIG. 1 to FIG. 8. For the implementation process and technical effects of this technical solution, please refer to the description in the embodiment shown in FIG. 1 to FIG. 8, which will not be repeated here.
  • an embodiment of the present invention provides a computer-readable storage medium.
  • the computer-readable storage medium stores program instructions, and the program instructions are used to implement the distance measurement method described above.
  • the device embodiments described above are merely illustrative.
  • the division into modules or units is only a division by logical function; in actual implementation there may be other division methods, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the mutual coupling, direct coupling, or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer readable storage medium.
  • the technical solution of the present invention, in essence, or the part that contributes to the existing technology, or all or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer processor to execute all or part of the steps of the methods described in the embodiments of the present invention.
  • the aforementioned storage media include: USB flash drives, removable hard disks, read-only memory (ROM), random access memory (RAM), magnetic disks, optical disks, and other media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The present invention relates to a distance measurement method, a movable platform, a device, and a storage medium. The method comprises: obtaining a first image of the airspace above an unmanned aerial vehicle; after performing semantic recognition on the first image, obtaining a target area affecting the flight of the unmanned aerial vehicle; controlling the unmanned aerial vehicle to move a preset distance in a designated direction so as to obtain a second image of the airspace above the unmanned aerial vehicle; matching the pixels of the target area in the first image with those in the second image to obtain matched pixels; and finally, determining a distance between the target area and the unmanned aerial vehicle according to the matched pixels, and further controlling a flight state of the unmanned aerial vehicle according to the distance. In this distance measurement by image recognition, the image used is captured by a camera necessarily provided on the unmanned aerial vehicle; consequently, distance measurement can also be achieved by an unmanned aerial vehicle on which no depth sensor is configured.
PCT/CN2020/073656 2020-01-21 2020-01-21 Distance measurement method, movable platform, device and storage medium WO2021146969A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2020/073656 WO2021146969A1 (fr) Distance measurement method, movable platform, device and storage medium
CN202080004149.0A CN112639881A (zh) Distance measurement method, movable platform, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/073656 WO2021146969A1 (fr) Distance measurement method, movable platform, device and storage medium

Publications (1)

Publication Number Publication Date
WO2021146969A1 (fr) 2021-07-29

Family

ID=75291161

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/073656 WO2021146969A1 (fr) Distance measurement method, movable platform, device and storage medium

Country Status (2)

Country Link
CN (1) CN112639881A (fr)
WO (1) WO2021146969A1 (fr)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107367262A (zh) * 2017-06-17 2017-11-21 周超 Interconnected control method for long-range real-time positioning, surveying, mapping and display of an unmanned aerial vehicle
CN108140245A (zh) * 2017-12-25 2018-06-08 深圳市道通智能航空技术有限公司 Ranging method and apparatus, and unmanned aerial vehicle
CN109154657A (zh) * 2017-11-29 2019-01-04 深圳市大疆创新科技有限公司 Detection device and movable platform
US20190120950A1 (en) * 2017-10-24 2019-04-25 Canon Kabushiki Kaisha Distance detecting apparatus, image capturing apparatus, distance detecting method, and storage medium
US20190178633A1 (en) * 2017-12-07 2019-06-13 Fujitsu Limited Distance measuring device, distance measuring method, and non-transitory computer-readable storage medium for storing program
CN109891351A (zh) * 2016-11-15 2019-06-14 深圳市大疆创新科技有限公司 Method and system for image-based object detection and corresponding movement adjustment manipulation
CN110068826A (zh) * 2019-03-27 2019-07-30 东软睿驰汽车技术(沈阳)有限公司 Distance measurement method and apparatus
CN110132226A (zh) * 2019-05-14 2019-08-16 广东电网有限责任公司 System and method for measuring distance and azimuth in unmanned aerial vehicle power-line inspection

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104677329B (zh) * 2015-03-19 2017-06-30 广东欧珀移动通信有限公司 Camera-based target ranging method and apparatus
CN106558038B (zh) * 2015-09-18 2019-07-02 中国人民解放军国防科学技术大学 Water-sky-line detection method and apparatus
CN109074476A (zh) * 2016-08-01 2018-12-21 深圳市大疆创新科技有限公司 System and method for obstacle avoidance
CN107687841A (zh) * 2017-09-27 2018-02-13 中科创达软件股份有限公司 Distance measurement method and apparatus
CN108427438A (zh) * 2018-04-11 2018-08-21 北京木业邦科技有限公司 Flight environment detection method and apparatus, electronic device, and storage medium
CN109948616B (zh) * 2019-03-26 2021-05-25 北京迈格威科技有限公司 Image detection method and apparatus, electronic device, and computer-readable storage medium


Also Published As

Publication number Publication date
CN112639881A (zh) 2021-04-09

Similar Documents

Publication Publication Date Title
US11797028B2 (en) Unmanned aerial vehicle control method and device and obstacle notification method and device
US11726498B2 (en) Aerial vehicle touchdown detection
US20220234733A1 (en) Aerial Vehicle Smart Landing
WO2020187095A1 Target tracking method and apparatus, and unmanned aerial vehicle
Patruno et al. A vision-based approach for unmanned aerial vehicle landing
WO2020107372A1 Control method and apparatus for a photographing device, and device and storage medium
Steder et al. Place recognition in 3D scans using a combination of bag of words and point feature based relative pose estimation
JP2024053085A Aircraft control device, aircraft control method, and program
US11713977B2 (en) Information processing apparatus, information processing method, and medium
US20200379487A1 (en) Unmanned aerial vehicle control system, unmanned aerial vehicle control method, and program
WO2021047502A1 Target state estimation method and apparatus, and unmanned aerial vehicle
US11449076B2 (en) Method for controlling palm landing of unmanned aerial vehicle, control device, and unmanned aerial vehicle
WO2022016534A1 Flight control method for unmanned aerial vehicle, and unmanned aerial vehicle
WO2021146973A1 Method for controlling return-to-home of an unmanned aerial vehicle, device, movable platform, and storage medium
WO2021056139A1 Landing position acquisition method and device, unmanned aerial vehicle, system, and storage medium
TWI711560B Unmanned aerial vehicle landing apparatus and method
KR102289752B1 Drone performing route flight in GPS shadow area and method therefor
US11964775B2 (en) Mobile object, information processing apparatus, information processing method, and program
WO2021146969A1 Distance measurement method, movable platform, device and storage medium
KR20190097350A Method for precision landing of a drone, recording medium for performing the method, and drone applying the method
WO2021146970A1 Semantic segmentation-based distance measurement method and apparatus, device, and system
JP6775748B2 Computer system, position estimation method, and program
JP7391114B2 Information processing device
WO2021146972A1 Airspace detection method, movable platform, device, and storage medium
CN110262567B Path relay point space generation method and apparatus, and unmanned aerial vehicle

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20915774

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20915774

Country of ref document: EP

Kind code of ref document: A1