WO2021217403A1 - Method and apparatus for controlling a movable platform, and device and storage medium - Google Patents

Method and apparatus for controlling a movable platform, and device and storage medium

Info

Publication number
WO2021217403A1
Authority
WO
WIPO (PCT)
Prior art keywords
target area
image
current image
target
feature point
Prior art date
Application number
PCT/CN2020/087423
Other languages
English (en)
Chinese (zh)
Inventor
刘洁
周游
陈希
Original Assignee
深圳市大疆创新科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to CN202080030068.8A (published as CN113853559A)
Priority to PCT/CN2020/087423 (published as WO2021217403A1)
Publication of WO2021217403A1

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/08Control of attitude, i.e. control of roll, pitch, or yaw

Definitions

  • The embodiments of the present application relate to the field of control, and in particular, to a control method, apparatus, device, and storage medium for a movable platform.
  • Surround shooting is a common shooting scheme. During the shooting process, the movable platform moves around a target, and the shooting device on the movable platform can be used to shoot the target during this process. To achieve surround shooting, the movable platform must not only be controlled to orbit the target, but the orientation of the shooting device must also be adjusted. When an operator completes this kind of shooting manually, high operating skill is required.
  • POI (full English name: Point of Interest).
  • The vision-based POI solution in the prior art is generally aimed at small targets whose whole picture is easy to see, but for larger targets the effect is relatively poor. This is because the movable platform often sees only part of a larger target and can hardly observe it in full, so the solution is strongly limited.
  • a POI solution for a huge target is based on a pre-established three-dimensional model of the target, but this requires the operator to establish a three-dimensional model of the target before the surround shooting, which is cumbersome and has an unfriendly user experience.
  • The embodiments of the present application provide a control method, apparatus, device, and storage medium for a movable platform.
  • the shooting device of the movable platform can always shoot the target object, which is conducive to surrounding shooting of a relatively large target.
  • The first aspect of the embodiments of the present application is to provide a method for controlling a movable platform, which is used to control the movable platform to surround a target object, and the movable platform includes a photographing device; the method includes: acquiring a current image captured by the photographing device; acquiring a first feature point in a target area of a reference image, the reference image being the previous frame image of the current image and the target area being the image area corresponding to the target object; determining a second feature point corresponding to the first feature point in the current image; when the number of second feature points meets a preset condition, moving the target area of the reference image to obtain a target area of the current image; and controlling the photographing device to face the three-dimensional space point corresponding to the target area of the current image.
  • A second aspect of the embodiments of the present application is to provide a control device for a movable platform, the control device being used to control the movable platform to surround a target object, and the movable platform including a photographing device; the control device includes a memory and a processor;
  • the memory is used to store program code
  • the processor calls the program code, and when the program code is executed, the processor is used to perform the operations of the method described in the first aspect.
  • The third aspect of the embodiments of the present application is to provide a movable platform, including: a fuselage; a power system installed on the fuselage to provide power; a photographing device; and a control device as described in the second aspect.
  • the fourth aspect of the embodiments of the present application is to provide a computer-readable storage medium on which a computer program is stored, and the computer program is executed by a processor to implement the method as described in the first aspect.
  • The control method, apparatus, device, and storage medium of the movable platform provided in this embodiment determine, based on the first feature point in the target area of the previous frame of image, the position where the first feature point is mapped to the next frame of image, and then obtain the second feature point of the next frame of image; when the number of second feature points meets the preset condition, for example, when the number of second feature points is less than the number of first feature points, the target area of the previous frame of image is moved to obtain the target area of the next frame of image, thereby updating the target area of the image, and the camera is controlled to face the three-dimensional space point corresponding to the updated target area; in turn, the camera of the movable platform can always capture the target, which is beneficial to surround shooting of larger targets.
  • the above process does not require the establishment of a three-dimensional model in advance, the processing method is fast and simple, and the user experience is better.
  • FIG. 1 is a flowchart of a method for controlling a movable platform provided by an embodiment of the application
  • FIG. 2 is a schematic diagram of an application scenario provided by an embodiment of the application.
  • FIG. 3 is a first schematic diagram of feature points provided by an embodiment of this application.
  • FIG. 4 is a second schematic diagram of feature points provided by an embodiment of this application.
  • FIG. 5 is a flowchart of a method for controlling a movable platform according to another embodiment of the application.
  • FIG. 6 is a first schematic diagram of a grid area provided by an embodiment of this application.
  • FIG. 7 is a second schematic diagram of a grid area provided by an embodiment of this application.
  • FIG. 8 is a third schematic diagram of a grid area provided by an embodiment of this application.
  • FIG. 9 is a fourth schematic diagram of a grid area provided by an embodiment of this application.
  • FIG. 10 is a third schematic diagram of feature points provided by an embodiment of this application.
  • FIG. 11 is a schematic diagram of movement of a movable platform provided by an embodiment of the application.
  • FIG. 12 is a flowchart of a method for controlling a movable platform provided by another embodiment of this application.
  • FIG. 13 is a first schematic diagram of an initial image provided by an embodiment of this application.
  • FIG. 14 is a second schematic diagram of an initial image provided by an embodiment of this application.
  • FIG. 15 is a third schematic diagram of an initial image provided by an embodiment of this application.
  • FIG. 16 is a structural diagram of a control device for a movable platform provided by an embodiment of the application.
  • FIG. 17 is a structural diagram of a movable platform provided by an embodiment of the application.
  • 160: control device
  • When a component is referred to as being "fixed to" another component, it can be directly on the other component or an intervening component may also be present. When a component is considered to be "connected" to another component, it can be directly connected to the other component or an intervening component may be present at the same time.
  • a POI solution for large targets is to perform three-dimensional modeling of the large target before the movable platform surrounds the large target for shooting; then, based on the pre-established three-dimensional model of the target, the surround shooting is performed.
  • this method requires the operator to establish a three-dimensional model of the target before the surround shooting, which is cumbersome and the algorithm is complicated; the user experience is not friendly.
  • the mobile platform control method, device, equipment and storage medium provided in the embodiments of the present application can solve the above-mentioned problems.
  • FIG. 1 is a flowchart of a method for controlling a movable platform provided by an embodiment of the application.
  • the method for controlling a movable platform provided in this embodiment is used to control the movable platform to surround a target object, and the movable platform includes a camera.
  • the method provided in this embodiment may include:
  • Step S101 Acquire a current image captured by the photographing device.
  • the movable platform of this embodiment may specifically be an unmanned aerial vehicle, an unmanned ground robot, an unmanned ship, a mobile robot, and the like.
  • In this embodiment, a drone is taken as an example of the movable platform for schematic illustration. It is understandable that the drone in this application can equally be replaced with another movable platform.
  • FIG. 2 is a schematic diagram of an application scenario provided by an embodiment of the application; as shown in FIG. 2, the drone 20 is equipped with a photographing device 21, and the photographing device 21 may specifically be a camera, a video camera, or the like.
  • the camera 21 can be mounted on the drone 20 via a pan/tilt 22, or the camera 21 can be fixed on the drone 20 via other fixing devices.
  • the camera 21 can take real-time shooting to obtain video data or image data, and send the video data or image data to the control device 24 through the wireless communication interface 23 of the drone 20.
  • the control device 24 may specifically be a remote control corresponding to the drone 20, or a user terminal; among them, the user terminal may be a smart phone, a tablet computer, and the like.
  • the drone 20 may also include a control device, and the control device may include a general-purpose or special-purpose processor. It should be noted that this is only a schematic description, and does not limit the specific structure of the UAV.
  • the movable platform can obtain the current image captured by the camera in real time.
  • Step S102 Acquire the first feature point in the target area of the reference image, the reference image is the previous frame of the current image, and the target area is the image area corresponding to the target object.
  • the photographing device outputs the current image; the movable platform determines that the previous frame of the image is a reference image.
  • The movable platform determines the target area of the reference image, which is the image area corresponding to the target object; then it extracts the first feature point in the image area (to facilitate the distinction, the feature point of the reference image is called the first feature point).
  • the movable platform obtains the first frame of image output by the camera, determines the target area on the first frame of image, and extracts the first feature point on the target area of the first frame of image.
  • the movable platform obtains the second frame image output by the shooting device, and it can be determined that the first frame image is the reference image of the second frame image.
  • the target object may be a large-scale target, for example, the target object is a building, or the target object is a group of buildings.
  • the image captured by the photographing device 21 includes the target object 31 as shown in FIG. 2.
  • A corner detection algorithm may be used to detect the first feature point in the target area of the reference image.
  • Corner detection algorithms include, for example, the FAST (Features from Accelerated Segment Test) algorithm, the SUSAN (Smallest Univalue Segment Assimilating Nucleus) algorithm, and the Harris operator.
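  • As an illustration only (not part of the patent text), the following is a minimal Python/OpenCV sketch of detecting such corner features inside the target area of the reference image using the FAST detector mentioned above; the function name, ROI format, and parameter values are assumptions.

```python
import cv2
import numpy as np

# Hypothetical helper: detect corner features inside the target area (ROI)
# of the reference image with the FAST detector.
def detect_first_feature_points(reference_image, target_area, max_corners=200):
    x, y, w, h = target_area                                  # ROI as (x, y, width, height)
    gray = cv2.cvtColor(reference_image, cv2.COLOR_BGR2GRAY)
    roi = gray[y:y + h, x:x + w]

    fast = cv2.FastFeatureDetector_create(threshold=20, nonmaxSuppression=True)
    keypoints = fast.detect(roi, None)
    # Keep the strongest responses and shift coordinates back to the full image.
    keypoints = sorted(keypoints, key=lambda kp: kp.response, reverse=True)[:max_corners]
    return np.float32([[kp.pt[0] + x, kp.pt[1] + y] for kp in keypoints]).reshape(-1, 1, 2)
```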
  • Step S103 Determine a second feature point corresponding to the first feature point in the current image.
  • the feature points on the current image are referred to as second feature points.
  • The position on the current image of the three-dimensional space point corresponding to the first feature point can be determined, and the second feature point in the current image can then be obtained. For example, the three-dimensional space point corresponding to the first feature point is mapped to the current image, and the second feature point corresponding to the first feature point is obtained from that mapping.
  • Alternatively, a tracking algorithm is used to track the first feature point in the target area, so as to determine the position in the current image of the first feature point from the target area of the reference image.
  • The tracking algorithm is, for example, the KLT (Kanade-Lucas-Tomasi) feature tracking algorithm.
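  • For illustration, a minimal sketch of this tracking step with OpenCV's pyramidal Lucas-Kanade (KLT) tracker is shown below; it assumes the first feature points come from a detector such as the sketch above.

```python
import cv2

# Sketch of step S103: track the first feature points from the reference image
# into the current image and keep only the successfully tracked pairs.
def track_feature_points(reference_image, current_image, first_points):
    prev_gray = cv2.cvtColor(reference_image, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(current_image, cv2.COLOR_BGR2GRAY)
    next_points, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, first_points, None, winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1
    return first_points[ok], next_points[ok]      # matched (first, second) point pairs
```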
  • FIG. 3 is the first schematic diagram of the feature points provided by the embodiment of the application.
  • the reference image 40 has the target area 42 and the target area 42 has the target object 31;
  • The target area 42 includes first feature points; the first feature points include, for example, feature point A, feature point B, and feature point C.
  • the camera outputs the current image 41.
  • The positions of the first feature points (for example, feature point A, feature point B, and feature point C) in the current image 41 can then be determined.
  • the reference image 40 and the current image 41 may be adjacent images or non-adjacent images.
  • FIG. 3 is only a schematic illustration, and does not limit the type of target object or the number of feature points.
  • FIG. 4 is the second schematic diagram of the feature points provided by the embodiment of this application.
  • the movable platform moves from left to right (the direction shown by the arrow shown in FIG. 4).
  • 31 represents the target object
  • 51 and 52 represent the images output by the camera during the process of moving the camera around the target object 31 in the direction shown by the arrow (from left to right).
  • The three-dimensional space points on the target object 31 can be mapped to the images 51 and 52.
  • the mapping point of the three-dimensional space point in the image 51 may specifically be a feature point on the target area of the image 51
  • The mapping point of the three-dimensional space point in the image 52 may specifically be a feature point on the target area of the image 52.
  • Point A and point B are three-dimensional space points on the target object 31.
  • the point a1 and the point b1 represent feature points in the image 51, the point a1 corresponds to the point A, and the point b1 corresponds to the point B.
  • the point a2 and the point b2 represent feature points in the image 52, the point a2 corresponds to the point A, and the point b2 corresponds to the point B.
  • Step S104 When the number of second feature points meets the preset condition, move the target area of the reference image to obtain the target area of the current image.
  • The preset condition is that the number of second feature points is less than the number of first feature points, that is, the number of feature points is decreasing.
  • Step S105 Control the shooting device to face the three-dimensional space point corresponding to the target area of the current image.
  • the target area of the current image has been obtained.
  • the target area of the current image is an image area.
  • The image area may correspond to a three-dimensional space point on the target object, so the camera can be controlled to face the three-dimensional space point corresponding to the target area of the current image; in this way, the shooting device can always shoot the target object.
  • step S101 to step S105 may be a process of repeated processing.
  • In this embodiment, the current image captured by the photographing device is obtained; the first feature point in the target area of the reference image is obtained, the reference image being the previous frame image of the current image and the target area being the image area corresponding to the target object; the second feature point corresponding to the first feature point in the current image is determined; when the number of second feature points meets the preset condition, the target area of the reference image is moved to obtain the target area of the current image; and the camera is controlled to face the three-dimensional space point corresponding to the target area of the current image.
  • the position where the first feature point is mapped to the next frame of image can be determined, and then the second feature point of the next frame of image can be obtained;
  • the target area of the reference image is moved to obtain the target area of the next frame of image.
  • the target area of the image is updated.
  • the shooting device is controlled to face the three-dimensional space point corresponding to the target area of the current image; furthermore, the shooting device of the movable platform can always shoot the target object.
  • the above process is an image-based processing process, no complicated three-dimensional model needs to be established in advance, the processing method is fast and simple, and the user experience is better.
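  • Only as an illustration of the control flow of steps S101-S105 (not the patent's actual implementation), a minimal sketch is shown below; camera.read(), update_target_area(), and point_camera_at() are hypothetical placeholders, and the detection and tracking helpers are the sketches given earlier.

```python
# Minimal sketch of the loop described above, under the stated assumptions.
def surround_control_loop(camera, target_area):
    reference_image = camera.read()                                   # hypothetical camera API
    first_points = detect_first_feature_points(reference_image, target_area)
    while True:
        current_image = camera.read()                                 # step S101
        n_first = len(first_points)
        _, second_points = track_feature_points(                      # steps S102-S103
            reference_image, current_image, first_points)
        if len(second_points) < n_first:                              # step S104: preset condition met
            target_area = update_target_area(target_area, second_points)  # hypothetical helper
        point_camera_at(target_area, current_image)                   # step S105, hypothetical helper
        reference_image, first_points = current_image, second_points
```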
  • FIG. 5 is a flowchart of a method for controlling a movable platform provided by another embodiment of the application.
  • the method provided in this embodiment is used to control the movable platform to surround the target object, and the movable platform includes a camera.
  • the method provided in this embodiment may include:
  • Step S201 Acquire a current image captured by the photographing device.
  • the movable platform of this embodiment may specifically be an unmanned aerial vehicle, an unmanned ground robot, an unmanned ship, a mobile robot, and the like.
  • In this embodiment, a drone is taken as an example of the movable platform for schematic illustration. It is understandable that the drone in this application can equally be replaced with another movable platform.
  • This step is similar to step S101 in FIG. 1 and will not be repeated here.
  • Step S202 Divide the target area of the reference image into multiple grid areas.
  • step S202 specifically includes: dividing the target area of the reference image into multiple grid areas according to the direction in which the movable platform surrounds the target object.
  • The target area of the reference image may be divided into a plurality of grid areas; then, in step S203, no more than a preset number of feature points are extracted from each grid area.
  • the grid area may be divided according to the direction in which the movable platform surrounds the target object, so as to divide the target area of the reference image into multiple grid areas.
  • FIG. 6 is a first schematic diagram of a grid area provided by an embodiment of the application.
  • As shown in FIG. 6, the movable platform surrounds the target object in a horizontal plane, and the target area can be divided into grids by horizontal lines to obtain multiple grid areas.
  • Alternatively, the target area can be divided into grids in the vertical direction to obtain multiple grid areas.
  • FIG. 7 is the second schematic diagram of the grid area provided by the embodiment of the application.
  • As shown in FIG. 7, the movable platform surrounds the target object on an inclined plane; the corresponding slanted lines are determined according to the inclination direction of the inclined plane, and the target area is divided into grids along these slanted lines to obtain multiple grid areas.
  • the tilt direction is not limited to the direction shown in FIG. 7.
  • Step S203 Acquire a preset number of feature points in each grid area as the first feature point in the target area of the reference image.
  • the reference image is the previous frame of the current image
  • the target area is the image area corresponding to the target object.
  • the corner detection algorithm can be used to detect the feature point in each grid area, and limit the number of feature points in the detection process. Then, the preset number of feature points in each grid area are extracted, and the preset number of feature points in each grid area are used as the first feature point in the target area of the reference image.
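  • A minimal sketch of steps S202-S203 is shown below for illustration; the grid layout (rows and columns) and the per-cell feature cap are assumptions, and cv2.goodFeaturesToTrack is used here simply as one possible corner detector.

```python
import cv2
import numpy as np

# Split the target area into grid cells and keep at most `per_cell` corners per cell.
def detect_gridded_features(reference_image, target_area, cols=4, rows=3, per_cell=10):
    x, y, w, h = target_area
    gray = cv2.cvtColor(reference_image, cv2.COLOR_BGR2GRAY)
    cell_w, cell_h = w // cols, h // rows
    points = []
    for r in range(rows):
        for c in range(cols):
            cx, cy = x + c * cell_w, y + r * cell_h
            cell = gray[cy:cy + cell_h, cx:cx + cell_w]
            corners = cv2.goodFeaturesToTrack(
                cell, maxCorners=per_cell, qualityLevel=0.01, minDistance=5)
            if corners is not None:
                corners[:, 0, 0] += cx                 # shift back to full-image coordinates
                corners[:, 0, 1] += cy
                points.append(corners)
    return np.concatenate(points) if points else np.empty((0, 1, 2), np.float32)
```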
  • Step S204 Determine a second feature point corresponding to the first feature point in the current image.
  • step S204 specifically includes:
  • the tracking algorithm is used to obtain the feature point obtained by tracking in the current image; the feature point obtained by tracking is filtered based on the epipolar constraint condition to obtain the second feature point.
  • Similar to step S103 shown in FIG. 1, the tracking algorithm is used to track the first feature points in the target area, and the feature points obtained by tracking in the current image are then obtained.
  • the feature points obtained by the tracking may be inaccurate, so the feature points obtained by the tracking need to be filtered to obtain the second feature point.
  • The feature points obtained by tracking can be filtered based on the epipolar constraint condition to obtain the second feature points.
  • Alternatively, the feature points obtained by tracking may be filtered based on the motion relationship of the feature points of the two frames before and after. It should be noted that the method of filtering the tracked feature points based on the epipolar constraint condition can be combined with the method of filtering them based on the motion relationship of the feature points of the two frames before and after, so as to obtain more accurate second feature points.
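  • The epipolar-constraint filter can be sketched as follows (illustration only); it estimates the fundamental matrix between the two frames with RANSAC and discards tracked points that violate it, with the RANSAC parameters chosen as assumptions.

```python
import cv2

# Keep only tracked point pairs that are consistent with the epipolar geometry.
def filter_by_epipolar_constraint(prev_points, curr_points):
    if len(prev_points) < 8:                       # the fundamental matrix needs >= 8 pairs
        return prev_points, curr_points
    F, inlier_mask = cv2.findFundamentalMat(prev_points, curr_points,
                                            cv2.FM_RANSAC, 1.0, 0.99)
    if F is None or inlier_mask is None:
        return prev_points, curr_points
    ok = inlier_mask.ravel() == 1
    return prev_points[ok], curr_points[ok]        # the remaining second feature points
```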
  • Step S205 When the number of second feature points meets the preset condition, move the target area of the reference image to obtain the target area of the current image.
  • step S205 specifically includes: moving the target area of the reference image in a direction away from the grid area of the boundary.
  • the preset condition is that the number of second feature points is less than the number of first feature points.
  • the position of the movable platform relative to the target object is constantly changing.
  • Among the multiple grid areas of the reference image, the first feature points in the grid area located at the boundary may be lost, that is, the corresponding second feature points cannot be found in the current image. Therefore, the target area of the reference image needs to be moved to obtain the target area of the current image.
  • FIG. 8 is the third schematic diagram of the grid area provided by the embodiment of the application.
  • the diagram a in FIG. 8 is each grid area of the target area of the reference image, and the diagram a includes a plurality of first feature points.
  • Figure b in Figure 8 is the target area of the current image.
  • the target area of the reference image is divided into multiple grid areas; the first feature point is extracted from each grid area.
  • The first feature points (the solid points in diagram a) on the left boundary of diagram a in FIG. 8 may be lost; thus, when it is determined that the feature points in the grid area located on the boundary are lost, the target area of the reference image is moved to obtain the target area of the current image, as shown in diagram b in FIG. 8.
  • The moving distance can be the width of the grid area occupied by the missing feature points; for example, in diagram a in FIG. 8 the missing feature points occupy one column of grid cells, so the target area is moved, relative to the target area of the reference image, by the distance represented by that column of grid cells.
  • For example, when the feature points in the grid area on the left boundary are lost, the target area of the reference image is moved toward the right edge of the image.
  • When the feature points in the grid area on the right boundary are lost, the target area of the reference image is moved toward the left edge of the image.
  • When the feature points in the grid area on the upper boundary are lost, the target area of the reference image is moved toward the lower edge of the image.
  • When the feature points in the grid areas on both the upper and the left boundaries are lost, the target area of the reference image is moved toward the lower right of the image.
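  • For illustration, the boundary-dependent move of step S205 can be sketched as below; the (x, y, w, h) rectangle format, cell sizes, and boundary labels are assumptions.

```python
# Shift the target area one grid cell away from the boundary whose features were lost.
def shift_target_area(target_area, lost_boundary, cell_w, cell_h, image_size):
    x, y, w, h = target_area
    img_w, img_h = image_size
    if lost_boundary == "left":        # left-edge features lost -> move toward the right edge
        x += cell_w
    elif lost_boundary == "right":     # right-edge features lost -> move toward the left edge
        x -= cell_w
    elif lost_boundary == "top":       # top-edge features lost -> move toward the lower edge
        y += cell_h
    elif lost_boundary == "bottom":    # bottom-edge features lost -> move toward the upper edge
        y -= cell_h
    # Clamp so the shifted target area stays inside the image.
    x = max(0, min(x, img_w - w))
    y = max(0, min(y, img_h - h))
    return (x, y, w, h)
```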
  • Step S206 Acquire a third feature point in the target area of the current image, where the number of third feature points is greater than or equal to the number of second feature points.
  • the target area of the current image includes at least one second feature point
  • the second feature point is a feature point corresponding to the first feature point of the reference image.
  • candidate feature points of the target area of the current image can be extracted. If no candidate feature points are extracted, the above-mentioned second feature points are determined to be all feature points (that is, third feature points) in the target area of the current image. At this time, the number of third feature points is equal to the number of second feature points. If the candidate feature points are extracted, the above-mentioned second feature point and the newly extracted candidate feature points are used as all the feature points (ie, the third feature points) in the target area of the current image. At this time, the number of third feature points is greater than the number of second feature points.
  • FIG. 9 is the fourth schematic diagram of the grid area provided by an embodiment of the application.
  • the graph a in FIG. 9 is each grid area of the target area of the reference image, and the graph a includes a plurality of first feature points.
  • Figure b in Figure 9 is the target area of the current image.
  • The first feature points (the solid points in diagram a) on the left boundary of diagram a in FIG. 9 are lost in the current image; the target area is moved to the right by one column of grid cells; then, for the target area of the current image, new candidate feature points (the solid points in diagram b) are extracted.
  • step 206 specifically includes: obtaining candidate feature points in the target area of the current image; filtering the candidate feature points according to the depth value of the candidate feature points and/or the semantic information of the candidate feature points to determine The third feature point.
  • Among the candidate feature points there may be feature points that do not belong to the target object, that is, the three-dimensional space points corresponding to those candidate feature points are not on the target object but on other targets.
  • Therefore, the candidate feature points need to be filtered. If all candidate feature points are filtered out, the number of third feature points is equal to the number of second feature points; if only part of the candidate feature points are filtered out, the number of third feature points is greater than the number of second feature points.
  • The depth value of the candidate feature point can be calculated, where the depth value represents the distance between the three-dimensional space point corresponding to the feature point and a reference point (for example, the optical center) of the imaging device. If the depth value of the candidate feature point belongs to the preset depth value range, the candidate feature point is determined to be a feature point in the target area of the current image and does not need to be filtered; if the depth value of the candidate feature point does not belong to the preset depth value range, the feature point is filtered out.
  • the "preset depth value range” can be an empirical value; the “preset depth value range” represents the value range of the depth value of the target area, or the “preset depth value range” represents the three-dimensional value on the target object.
  • the semantic information of the candidate feature points can be obtained; wherein the semantic information represents the category of the three-dimensional space point corresponding to the feature point, for example, buildings, sky, and grass. Then, when it is determined that the semantic information of the candidate feature point corresponds to the target object, it is determined that the candidate feature point is a feature point in the target area of the current image, and the feature point does not need to be filtered. When it is determined that the semantic information of the candidate feature point does not correspond to the target object, the feature point is filtered. For example, if the target object is a building, but the semantic information of a candidate feature point indicates that the candidate feature point corresponds to the grass or the sky behind the building, the feature point needs to be filtered out.
  • The depth value and semantic information of the candidate feature points can also be combined to determine whether to filter the candidate feature points. For example, if the depth value of a candidate feature point belongs to the preset depth value range and the semantic information of the candidate feature point corresponds to the target object, the candidate feature point is determined to be a feature point in the target area of the current image and does not need to be filtered; otherwise, the feature point is filtered out.
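  • A minimal sketch of this candidate filtering is given below for illustration; depth_map and semantic_map are hypothetical per-pixel inputs (for example from a stereo depth estimate and a segmentation network), and the depth range and label set are assumptions.

```python
import numpy as np

# Keep a candidate only if its depth is in the expected range and its semantic
# label matches the target object (e.g. "building" rather than "sky" or "grass").
def filter_candidates(candidates, depth_map, semantic_map,
                      depth_range=(5.0, 80.0), target_labels=("building",)):
    kept = []
    for (u, v) in candidates:
        depth = depth_map[int(v), int(u)]          # distance to the 3-D point, in metres
        label = semantic_map[int(v), int(u)]
        if depth_range[0] <= depth <= depth_range[1] and label in target_labels:
            kept.append((u, v))
    return np.array(kept, dtype=np.float32)
```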
  • FIG. 10 is the third schematic diagram of the feature points provided by the embodiment of the application.
  • the movable platform moves from left to right (the direction shown by the arrow shown in FIG. 10).
  • 31 represents the target object
  • 51, 52, 53 represent the images output by the camera during the process of moving around the target object 31 in the direction indicated by the arrow (from left to right).
  • the mapping points of the three-dimensional space points in the image 51 may specifically be feature points on the target area of the image 51
  • The mapping points of the three-dimensional space points in the image 52 may specifically be feature points on the target area of the image 52.
  • the mapping point of the three-dimensional space point in the image 53 may specifically be a feature point on the target area of the image 53.
  • Point A, point B, and point C are three-dimensional space points on the target object 31.
  • the point a1 and the point b1 represent feature points in the image 51, the point a1 corresponds to the point A, and the point b1 corresponds to the point B.
  • Point a2, point b2, and point c2 represent feature points in image 52, point a2 corresponds to point A, point b2 corresponds to point B, and point c2 corresponds to point C.
  • Point a3, point b3, and point c3 represent feature points in the image 53, point a3 corresponds to point A, point b3 corresponds to point B, and point c3 corresponds to point C.
  • Point A is mapped to the position of feature point a2 in the image 52, point B is mapped to the position of feature point b2 in the image 52, and point C is mapped to the position of feature point c2 in the image 52; it can then be determined that feature point a2 is mapped to feature point a3 in the image 53, feature point b2 is mapped to feature point b3 in the image 53, and feature point c2 is mapped to feature point c3 in the image 53.
  • the camera can always capture the surface of the target object 31, ensuring that the target object will not be lost during the surrounding process.
  • Step S207 Determine a three-dimensional space point corresponding to the target area of the current image according to the third feature point in the target area of the current image.
  • Step S207 specifically includes: performing a weighted average of the depth values of the third feature points in the target area of the current image; and determining the three-dimensional space point corresponding to the target area of the current image according to the weighted average and the shooting direction in which the shooting device captured the current image.
  • the weight corresponding to the third feature point close to the center of the target area of the current image is greater than the weight corresponding to the third feature point far away from the center of the target area of the current image.
  • the orientation of the camera on the movable platform needs to be adjusted.
  • In step S206, the target area of the current image is obtained; if the number of third feature points in the target area of the current image is not zero, the camera can be controlled, according to the target area of the current image, to face the three-dimensional space point corresponding to that target area.
  • Specifically, the depth value of each third feature point can be calculated, where the depth value represents the distance between the three-dimensional space point corresponding to the feature point and a reference point (such as the optical center) of the imaging device; the depth values of the third feature points in the target area of the current image can then be weighted and averaged to obtain the weighted average value.
  • the movable platform can learn the shooting direction when the camera is shooting the current image. Furthermore, the movable platform determines the three-dimensional space point corresponding to the target area of the current image based on the above-mentioned weighted average value and the shooting direction.
  • The weight corresponding to a third feature point close to the center of the target area of the current image is greater than the weight corresponding to a third feature point far away from the center of the target area of the current image.
  • a third feature point close to the center of the target area of the current image can be selected. Furthermore, the movable platform determines the three-dimensional space point corresponding to the target area of the current image based on the depth value and the shooting direction of the third feature point close to the center of the target area of the current image.
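  • As an illustration of step S207, the sketch below weights each third feature point's depth by its closeness to the center of the target area and places the aimed-at three-dimensional point at that averaged depth along the shooting direction; the camera pose inputs and the inverse-distance weighting are assumptions.

```python
import numpy as np

# Estimate the 3-D space point corresponding to the target area of the current image.
def target_area_space_point(points, depths, target_area, camera_position, shooting_dir):
    x, y, w, h = target_area
    center = np.array([x + w / 2.0, y + h / 2.0])
    dists = np.linalg.norm(points - center, axis=1)
    weights = 1.0 / (1.0 + dists)                  # closer to the center -> larger weight
    mean_depth = float(np.sum(weights * depths) / np.sum(weights))
    direction = shooting_dir / np.linalg.norm(shooting_dir)
    return camera_position + mean_depth * direction
```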
  • Step S208 Control the shooting device to face the three-dimensional space point corresponding to the target area of the current image.
  • the camera of the movable platform can observe the surface of the target object corresponding to the target area.
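  • For illustration, pointing the camera at that three-dimensional space point could look like the sketch below; the world-frame convention and the gimbal command interface are hypothetical.

```python
import numpy as np

# Aim the gimbal-mounted camera at a 3-D point (NED-like frame: z points down).
def aim_camera_at(space_point, camera_position, gimbal):
    d = space_point - camera_position
    yaw = np.degrees(np.arctan2(d[1], d[0]))                      # heading toward the point
    pitch = np.degrees(np.arctan2(-d[2], np.linalg.norm(d[:2])))  # up/down angle
    gimbal.set_attitude(yaw=yaw, pitch=pitch)                     # hypothetical gimbal API
```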
  • FIG. 11 is a schematic diagram of the movement of the movable platform provided by an embodiment of the application.
  • The movable platform is, for example, a drone; when the drone is at the starting point A, the target area of the collected image corresponds to area a of the target object in three-dimensional space; during the flight to point B, the target area is continuously updated, so that when the drone is at point B, the target area of the collected image corresponds to area b of the target object in three-dimensional space.
  • step S208 specifically includes: when the number of third feature points in the target area of the current image is greater than a preset threshold, controlling the camera to face the three-dimensional space point corresponding to the target area of the current image.
  • It has been determined in step S207 that the number of third feature points in the target area of the current image is multiple.
  • The number of third feature points can be further checked; when the number of third feature points is greater than the preset threshold, it is determined that the number of feature points in the target area of the current image is large, and the target area can continue to be updated.
  • the shooting device of the movable platform can face the three-dimensional space point corresponding to the target area of the current image determined in step 207.
  • Step S209 When the number of third feature points in the target area of the current image is less than or equal to the preset threshold, the camera is controlled to face the three-dimensional space point corresponding to the target area of the reference image.
  • If the number of third feature points obtained in step S206 is less than or equal to the preset threshold, it is determined that the number of feature points in the target area of the current image is small.
  • In this case, the feature points of the target area of the reference image (i.e., the previous frame of the current image) are used to determine the three-dimensional space point corresponding to the target area of the reference image, and the shooting device is then controlled to face that three-dimensional space point.
  • Alternatively, when the number of feature points in the target area of the current image is small, the movable platform can be controlled to return home; or the shooting mode of the shooting device of the movable platform can be switched, for example, the shooting mode for large target objects of steps S201-S208 is no longer used and another shooting mode is adopted; or the shooting device is controlled to face the three-dimensional space point corresponding to the target area of the reference image.
  • In this embodiment, no more than a preset number of feature points are extracted from each grid area of the reference image, instead of computing every pixel or feature point in the target area of the reference image, thereby reducing the complexity of the algorithm and the amount of data calculation.
  • the second feature point corresponding to the first feature point in the current image is obtained, and the target area of the reference image is moved to obtain the target area of the current image; thereby the target area is updated.
  • In this way, the camera of the movable platform can always observe the target object and shoot the surface corresponding to the target area; at the same time, exit logic is provided: when there are too few feature points in the target area of the current image, the camera is controlled to face the three-dimensional space point corresponding to the target area of the reference image, or the movable platform is controlled to return home to end the current shooting, or the camera of the movable platform continues to perform shooting tasks in other shooting modes.
  • FIG. 12 is a flowchart of a method for controlling a movable platform provided by another embodiment of the application.
  • the method provided in this embodiment is used to control the movable platform to surround the target object, and the movable platform includes a photographing device.
  • the method provided in this embodiment may include:
  • Step S301 Send the initial image captured by the photographing device to the control device of the movable platform, so that the control device displays the initial image.
  • the movable platform of this embodiment may specifically be an unmanned aerial vehicle, an unmanned ground robot, an unmanned ship, a mobile robot, and the like.
  • In this embodiment, a drone is taken as an example of the movable platform for schematic illustration. It is understandable that the drone in this application can equally be replaced with another movable platform.
  • the initial image captured by the photographing device is sent to the control device of the movable platform, so that the control device displays the initial image.
  • the control device may specifically be a remote controller corresponding to the movable platform, or a user terminal; where the user terminal is, for example, a smart phone, a tablet computer, and the like.
  • Step S302 Obtain the user's instruction information for the target object.
  • the instruction information is generated according to the user's click operation or frame selection operation on the initial image displayed by the control device.
  • the user can input instruction information into the control device by means of touch, or gesture control, or voice information; the instruction information is used to indicate the target object in the initial image.
  • the user clicks a location point of the initial image through an operating medium (for example, a finger or a stylus); and then the control device receives a click operation input by the user.
  • the user uses an operating medium (for example, a finger or a stylus) to frame select an area of the initial image; and then the control device receives the frame selection operation input by the user.
  • FIG. 13 is the first schematic diagram of an initial image provided by an embodiment of the application. As shown in FIG. 13, the user frame-selects an area on the initial image with a finger.
  • Step S303 Determine the target area of the initial image according to the instruction information.
  • step S303 specifically includes the following steps:
  • the first step is to perform image segmentation on the initial image to obtain multiple segmented regions.
  • the second step is to determine the target area of the initial image according to the segmented area and the instruction information.
  • The second step specifically includes: when the proportion of the image area in the initial image indicated by the indication information that lies within the target segmented area is greater than the preset ratio, determining the target area of the initial image according to the target segmented area, where the target segmented area is at least one of the plurality of segmented areas.
  • image segmentation is performed on the initial image, for example, image segmentation processing based on clustering is performed to obtain a segmentation result; wherein the segmentation result includes multiple segmented regions, and pixels in each segmented region have similar features.
  • the ratio value of the image area indicated by the instruction information (ie, the image area in the initial image) on the target segmented area is calculated, where the target segmented area is at least one of the plurality of segmented areas. If it is determined that the ratio value is greater than the preset ratio, the target area of the initial image can be determined according to the target segmentation area to ensure that the target area contains the complete target object.
  • the image area indicated by the indication information is A1; after image segmentation is performed on the initial image, segmentation areas B1, B2, B3, and B4 are obtained. If it is determined that the proportion of the image area A1 in the segmented area B1 is greater than the preset ratio, the target area of the initial image can be determined according to the segmented area B1.
  • FIG. 14 is the second schematic diagram of the initial image provided by an embodiment of the application.
  • As shown in diagram a in FIG. 14, the target object frame-selected by the user is a small house; image segmentation is performed on the entire image, dividing it into a number of segmented areas, and the small house in diagram a in FIG. 14 falls into one segmented area. Then, according to the user's frame selection and the result of image segmentation, all the image information of the small house can be placed in the target area, as shown in FIG. 15.
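  • For illustration only, growing the user's frame selection into a complete target area can be sketched as below; the label image may come from any segmentation method, and the overlap ratio used here is an assumed reading of the "preset ratio".

```python
import numpy as np

# Merge every segmented region that the user's selection overlaps by more than
# `preset_ratio`, then return the bounding box of the merged mask as the target area.
def target_area_from_selection(segment_labels, selection_mask, preset_ratio=0.5):
    target_mask = np.zeros_like(selection_mask, dtype=bool)
    selected_pixels = max(int(selection_mask.sum()), 1)
    for label in np.unique(segment_labels):
        region = segment_labels == label
        overlap = np.logical_and(region, selection_mask).sum()
        if overlap / selected_pixels > preset_ratio:
            target_mask |= region                   # this segmented region joins the target area
    if not target_mask.any():
        target_mask = selection_mask.astype(bool)   # fall back to the raw selection
    ys, xs = np.nonzero(target_mask)
    return (int(xs.min()), int(ys.min()),
            int(xs.max() - xs.min() + 1), int(ys.max() - ys.min() + 1))
```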
  • Step S304 Acquire the current image captured by the camera.
  • this step may refer to the embodiment shown in FIG. 1 and FIG. 5, and details are not described herein again.
  • Step S305 Acquire the first feature point in the target area of the reference image, the reference image is the previous frame of the current image, and the target area is the image area corresponding to the target object.
  • this step may refer to the embodiment shown in FIG. 1 and FIG. 5, and details are not described herein again.
  • Step S306 Determine a second feature point corresponding to the first feature point in the current image.
  • this step may refer to the embodiment shown in FIG. 1 and FIG. 5, and details are not described herein again.
  • Step S307 Identify the target type of the target object; if the target type is a preset type, execute the step of moving the target area of the reference image to obtain the target area of the current image.
  • It is necessary to identify whether the target object in the target area is of a preset target type, for example, whether the target object in the target area is a large building.
  • If so, step S308 is executed; if not, step S308 does not need to be performed, and the camera can be controlled to face the three-dimensional space point corresponding to the target area of the reference image.
  • Step S308 When the number of second feature points meets the preset condition, move the target area of the reference image to obtain the target area of the current image.
  • this step may refer to the embodiment shown in FIG. 1 and FIG. 5, and details are not described herein again.
  • Step S309 Control the shooting device to face the three-dimensional space point corresponding to the target area of the current image.
  • this step may refer to the embodiment shown in FIG. 1 and FIG. 5, and details are not described herein again.
  • Step S310 Control the movement of the movable platform so that the distance between the shooting device and the three-dimensional space point corresponding to the target area of the current image is the surrounding radius.
  • In step S309, the orientation of the camera is controlled. Since the camera is carried on the movable platform, it is also necessary to control the movement of the movable platform.
  • For the drone, if the distance value calculated based on the third feature points in the target area is less than the surrounding radius, the drone is controlled to fly backward; "backward" refers to a first direction along the line between the target object and the drone, the first direction being the direction in which the target object points to the drone. If the distance value is greater than the surrounding radius, the drone is controlled to fly forward; "forward" refers to a second direction along the line between the target object and the drone, the second direction being the direction in which the drone points to the target object.
  • the distance between the drone and the surface of the target object can always be equal to the surrounding radius through the control of the drone.
  • the surrounding radius may be input by the user through the control device, or the distance between the movable platform and the three-dimensional space point corresponding to the target area of the initial image may be used as the surrounding radius.
  • the distance between the shooting device and the three-dimensional space point corresponding to the target area of the current image is always the above-mentioned surrounding radius. That is to say, when the movable platform surrounds the building group to shoot, not only the surface of the building group can always be photographed, but also a certain distance can be kept from the surface of the building group.
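  • As a minimal illustration of step S310, the sketch below nudges the drone forward or backward along the line to the aimed-at three-dimensional point so that its distance converges to the surround radius; the velocity-command interface and the gain are hypothetical.

```python
import numpy as np

# Keep the drone at `surround_radius` from the 3-D space point of the target area.
def hold_surround_radius(drone_position, space_point, surround_radius, drone, gain=0.5):
    to_target = space_point - drone_position
    distance = np.linalg.norm(to_target)
    if distance < 1e-6:
        return
    direction = to_target / distance               # unit vector from the drone to the target
    error = distance - surround_radius             # > 0: too far (fly forward); < 0: too close
    drone.send_velocity(gain * error * direction)  # hypothetical flight-command API
```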
  • the three-dimensional space point corresponding to the target area of the initial image may be used as the center point of the movable platform surrounding the target object.
  • The target area of the initial image corresponds to a three-dimensional space point (for example, the three-dimensional space point corresponding to the weighted average of the feature points in the target area of the initial image, or the three-dimensional space point corresponding to the center position of the target area of the initial image); this three-dimensional space point can be used as the center point of the movable platform's orbit.
  • the center point of the surround is unchanged. That is to say, when the movable platform is shooting around the building group, the camera always shoots the surface of the building group.
  • The center of the orbiting track of the movable platform is always the three-dimensional space point corresponding to the target area of the initial image.
  • FIG. 16 is a structural diagram of a control device 160 for a movable platform provided by an embodiment of the application.
  • the control device is used to control the movable platform to surround the target object.
  • The movable platform includes a photographing device.
  • the control device 160 includes: a memory 161, the processor 162;
  • the memory 161 is used to store program code;
  • the processor 162 calls the program code.
  • When the program code is executed, it is used to perform the following operations: obtain the current image captured by the camera; obtain the first feature point in the target area of the reference image, the reference image being the previous frame of the current image and the target area being the image area corresponding to the target object; determine the second feature point corresponding to the first feature point in the current image; when the number of second feature points meets the preset condition, move the target area of the reference image to obtain the target area of the current image; and control the shooting device to face the three-dimensional space point corresponding to the target area of the current image.
  • Before acquiring the first feature point in the target area of the reference image, the processor is further configured to: divide the target area of the reference image into multiple grid areas.
  • When the processor divides the target area of the reference image into multiple grid areas, it is specifically configured to: divide the target area of the reference image into multiple grid areas according to the direction in which the movable platform surrounds the target object.
  • When the processor obtains the first feature points in the target area of the reference image, it is specifically configured to: obtain a preset number of feature points in each grid area as the first feature points.
  • The preset condition includes: the number of second feature points in the current image corresponding to the first feature points in the grid area located on the boundary among the multiple grid areas of the reference image is zero. When the processor moves the target area of the reference image, it is specifically configured to: move the target area of the reference image in a direction away from the boundary grid area.
  • the number of second feature points meets the preset condition, including: the number of second feature points is less than the number of first feature points.
  • When the processor determines the second feature point corresponding to the first feature point in the current image, it is specifically configured to: based on the first feature point, use a tracking algorithm to obtain the feature points tracked in the current image; and filter the tracked feature points based on the epipolar constraint condition to obtain the second feature point.
  • Before the processor controls the camera to face the three-dimensional space point corresponding to the target area of the current image, it is further configured to: obtain the third feature points in the target area of the current image, where the number of third feature points is greater than or equal to the number of second feature points.
  • When the processor controls the camera to face the three-dimensional space point corresponding to the target area of the current image, it is specifically configured to: determine the three-dimensional space point corresponding to the target area of the current image according to the third feature points in the target area of the current image; and control the camera to face that three-dimensional space point.
  • When the processor determines the three-dimensional space point corresponding to the target area of the current image according to the third feature points in the target area of the current image, it is specifically configured to: perform a weighted average of the depth values of the third feature points in the target area of the current image; and determine the three-dimensional space point corresponding to the target area of the current image according to the weighted average and the shooting direction of the shooting device when shooting the current image.
  • the weight corresponding to the third feature point close to the center of the target area of the current image is greater than the weight corresponding to the third feature point far from the center of the target area of the current image.
  • When the processor obtains the third feature points in the target area of the current image, it is specifically configured to: obtain candidate feature points in the target area of the current image; and filter the candidate feature points according to the depth values of the candidate feature points and/or the semantic information of the candidate feature points to determine the third feature points.
  • When the processor controls the camera to face the three-dimensional space point corresponding to the target area of the current image, it is specifically configured to: when the number of third feature points in the target area of the current image is greater than the preset threshold, control the camera to face the three-dimensional space point corresponding to the target area of the current image.
  • the processor is further configured to: when the number of third feature points in the target area of the current image is less than or equal to the preset threshold, control the camera to face the three-dimensional space point corresponding to the target area of the reference image.
  • the processor is further configured to: control the movement of the movable platform so that the distance between the shooting device and the three-dimensional space point corresponding to the target area of the current image is the surrounding radius.
  • Before the processor moves the target area of the reference image to obtain the target area of the current image, it is further configured to: identify the target type of the target object; if the target type is a preset type, execute the step of moving the target area of the reference image to obtain the target area of the current image.
  • control device 160 further includes a communication interface 163, which is connected to the processor.
  • The processor is further configured to: send the initial image taken by the camera to the control device of the movable platform, so that the control device displays the initial image; obtain, through the communication interface 163, the user's instruction information for the target object, the instruction information being generated according to the user's click or frame selection operation on the initial image displayed by the control device; and determine the target area of the initial image according to the instruction information.
  • When the processor determines the target area of the initial image according to the instruction information, it is specifically configured to: perform image segmentation on the initial image to obtain multiple segmented areas; and determine the target area of the initial image according to the segmented areas and the instruction information.
  • When the processor determines the target area of the initial image according to the segmented areas and the instruction information, it is specifically configured to: when the proportion of the image area in the initial image indicated by the instruction information that lies within the target segmented area is greater than the preset ratio, determine the target area of the initial image according to the target segmented area, where the target segmented area is at least one of the plurality of segmented areas.
  • the three-dimensional space point corresponding to the target area of the initial image is taken as the center point of the movable platform surrounding the target object, and the distance between the movable platform and the three-dimensional space point corresponding to the target area of the initial image is taken as the surrounding radius.
  • The specific principles and implementations of the control device for the movable platform provided in the embodiments of the present application are similar to those of the foregoing embodiments and will not be repeated here.
  • the position where the first feature point is mapped to the next frame of image is determined, and then the second feature point of the next frame of image is obtained;
  • the target area of the reference image is moved to obtain the target area of the next frame of image.
  • the target area of the image is updated.
  • the shooting device is controlled to face the three-dimensional space point corresponding to the target area of the image; furthermore, the shooting device of the movable platform can always shoot the target object.
  • the above process is an image-based processing process, no complicated three-dimensional model needs to be established in advance, the processing method is fast and simple, and the user experience is better.
  • FIG. 17 is a structural diagram of a movable platform provided by an embodiment of the application.
  • The movable platform 170 includes a fuselage, a power system, a camera 174, and a control device 178.
  • The power system includes at least one of the following: a motor 171, a propeller 172, and an electronic speed controller 173; the power system is mounted on the fuselage to provide power. The specific principles and implementation of the control device 178 are similar to those of the foregoing embodiments and will not be repeated here.
  • The movable platform 170 further includes: a sensing system 175, a communication system 176, and a supporting device 177.
  • The supporting device 177 may specifically be a gimbal (pan/tilt), and the camera 174 is mounted on the movable platform through the supporting device 177.
  • The control device 178 may specifically be a flight controller of the movable platform 170.
  • The embodiments of the present application also provide a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the control method of the movable platform as described above is implemented.
  • The disclosed device and method may be implemented in other ways.
  • The device embodiments described above are merely illustrative.
  • For example, the division of the units is only a division by logical function, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • The mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
  • The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place, or they may be distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • The functional units in the various embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.
  • The above-mentioned integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
  • The above-mentioned integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium.
  • The above-mentioned software functional unit is stored in a storage medium and includes several instructions that enable a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to execute part of the steps of the methods described in the embodiments of the present application.
  • The aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media that can store program code.
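As a purely illustrative aid to the target-area selection described in the list above, the following Python sketch keeps the segmented areas whose overlap with the user-indicated image area exceeds a preset ratio and takes their bounding box as the target area of the initial image. The rectangle form of the instruction information, the function name select_target_area, and the default ratio are assumptions of this sketch, and how the segmented areas are produced is left open.

```python
import numpy as np

def select_target_area(user_box, region_masks, preset_ratio=0.5):
    """Determine the target area of the initial image from segmented areas.

    user_box     -- (x, y, w, h) rectangle indicated by the user (hypothetical form
                    of the instruction information).
    region_masks -- list of boolean HxW masks, one per segmented area of the image.
    preset_ratio -- threshold on the fraction of the user-indicated area that must
                    fall inside a segmented area for that area to be kept.
    Returns an (x, y, w, h) bounding box of the kept segmented areas, or the user
    box itself if no segmented area passes the threshold.
    """
    x, y, w, h = user_box
    user_mask = np.zeros_like(region_masks[0], dtype=bool)
    user_mask[y:y + h, x:x + w] = True
    user_area = float(user_mask.sum())
    if user_area == 0:
        return user_box

    kept = np.zeros_like(user_mask)
    for mask in region_masks:
        # Proportion of the user-indicated image area lying inside this segmented area.
        overlap = np.logical_and(user_mask, mask).sum() / user_area
        if overlap > preset_ratio:
            kept |= mask  # the target segmented area may consist of several areas

    if not kept.any():
        return user_box
    ys, xs = np.nonzero(kept)
    return int(xs.min()), int(ys.min()), int(xs.max() - xs.min() + 1), int(ys.max() - ys.min() + 1)
```

Falling back to the user-indicated rectangle when no segmented area passes the threshold is only one possible handling of that case.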
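The feature-point tracking, target-area shifting, and camera-orientation steps summarized in the list above can be sketched in the same spirit. Here pyramidal Lucas-Kanade optical flow stands in for mapping the first feature points into the next frame, the target area is moved by the median displacement of the surviving points, and the gimbal yaw and pitch are computed towards the three-dimensional space point; the choice of optical flow, the median-shift rule, the minimum point count, and the frame convention (x forward, y left, z up) are assumptions of this sketch rather than requirements of the embodiments.

```python
import cv2
import numpy as np

def track_target_area(prev_gray, cur_gray, prev_pts, prev_box, min_pts=8):
    """Track feature points from the reference image into the current image and
    move the target area accordingly.

    prev_pts -- Nx1x2 float32 feature points inside prev_box (e.g. from
                cv2.goodFeaturesToTrack on the reference image).
    prev_box -- (x, y, w, h) target area in the reference image.
    Returns (cur_box, cur_pts), or (None, None) when too few points survive, in
    which case the caller may keep using the target area of the reference image.
    """
    # Map the first feature points into the current image (pyramidal Lucas-Kanade).
    cur_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, prev_pts, None)
    ok = status.ravel() == 1
    good_prev, good_cur = prev_pts[ok], cur_pts[ok]
    if len(good_cur) < min_pts:  # number of second feature points is too small
        return None, None

    # Move the target area by the median displacement of the matched points.
    dx, dy = np.median((good_cur - good_prev).reshape(-1, 2), axis=0)
    x, y, w, h = prev_box
    return (x + float(dx), y + float(dy), w, h), good_cur.reshape(-1, 1, 2)

def gimbal_angles_towards(point_world, camera_pos_world):
    """Yaw and pitch (radians) that point the camera's optical axis at a 3-D point,
    assuming a world frame with x forward, y left, z up."""
    d = np.asarray(point_world, float) - np.asarray(camera_pos_world, float)
    yaw = np.arctan2(d[1], d[0])
    pitch = np.arctan2(d[2], np.linalg.norm(d[:2]))
    return yaw, pitch
```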
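Keeping the distance between the shooting device and the three-dimensional space point equal to the surrounding radius while travelling around the center point, as described in the list above, can be sketched as a simple velocity command: a radial term corrects the distance error and a tangential term carries the platform around the target object. The gains, the horizontal-plane simplification, and the function name orbit_velocity are assumptions of this sketch.

```python
import numpy as np

def orbit_velocity(platform_pos, center_point, radius, tangential_speed=2.0, k_radial=0.5):
    """Horizontal velocity command that holds the platform at the surrounding radius
    while it moves around the center point (speeds and gains are illustrative).

    platform_pos, center_point -- 3-D positions expressed in the same world frame.
    radius                     -- desired surrounding radius.
    Returns a 2-D velocity command in the horizontal plane.
    """
    offset = np.asarray(platform_pos[:2], float) - np.asarray(center_point[:2], float)
    dist = np.linalg.norm(offset)
    if dist < 1e-6:  # directly above the center point; no well-defined tangent
        return np.zeros(2)

    radial_dir = offset / dist                                # away from the center
    tangent_dir = np.array([-radial_dir[1], radial_dir[0]])   # 90 deg counter-clockwise

    v_radial = -k_radial * (dist - radius) * radial_dir       # correct the radius error
    v_tangent = tangential_speed * tangent_dir                # travel around the target
    return v_radial + v_tangent
```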

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

The invention relates to a method and apparatus for controlling a movable platform, a device, and a storage medium. The method is used to control a movable platform to surround a target object, the movable platform comprising a photographing device, and the method comprises: acquiring first feature points in a target area of a reference image captured by the photographing device, the reference image being a frame of image preceding a current image and the target area being an image area corresponding to the target object; determining, within the current image, second feature points corresponding to the first feature points; when the number of second feature points satisfies a preset condition, moving the target area of the reference image so as to obtain a target area of the current image; and controlling the photographing device to face the three-dimensional space points corresponding to the target area of the current image. In this way, the photographing device of the movable platform can continuously photograph the target object. Moreover, the described procedure is an image-based processing procedure that does not require a complex three-dimensional model to be established in advance; the processing means is fast and simple, and the user experience is good.
PCT/CN2020/087423 2020-04-28 2020-04-28 Procédé et appareil de commande de plateforme mobile et dispositif et support de stockage WO2021217403A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202080030068.8A CN113853559A (zh) 2020-04-28 2020-04-28 可移动平台的控制方法、装置、设备及存储介质
PCT/CN2020/087423 WO2021217403A1 (fr) 2020-04-28 2020-04-28 Procédé et appareil de commande de plateforme mobile et dispositif et support de stockage

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/087423 WO2021217403A1 (fr) 2020-04-28 2020-04-28 Procédé et appareil de commande de plateforme mobile et dispositif et support de stockage

Publications (1)

Publication Number Publication Date
WO2021217403A1 (fr)

Family

ID=78331565

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/087423 WO2021217403A1 (fr) 2020-04-28 2020-04-28 Procédé et appareil de commande de plateforme mobile et dispositif et support de stockage

Country Status (2)

Country Link
CN (1) CN113853559A (fr)
WO (1) WO2021217403A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114281096A (zh) * 2021-11-09 2022-04-05 中时讯通信建设有限公司 基于目标检测算法的无人机追踪控制方法、设备及介质

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115898039B (zh) * 2023-03-10 2023-06-02 北京建工四建工程建设有限公司 钢筋对孔可视化调整方法、装置、设备、系统和存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101502275B1 (ko) * 2014-04-11 2015-03-13 중앙대학교 산학협력단 무인 헬기 자동 제어 장치 및 제어 방법
CN105043392A (zh) * 2015-08-17 2015-11-11 中国人民解放军63920部队 一种飞行器位姿确定方法及装置
CN107194339A (zh) * 2017-05-15 2017-09-22 武汉星巡智能科技有限公司 障碍物识别方法、设备及无人飞行器
WO2020014987A1 (fr) * 2018-07-20 2020-01-23 深圳市大疆创新科技有限公司 Procédé et appareil de commande de robot mobile, dispositif et support d'informations

Also Published As

Publication number Publication date
CN113853559A (zh) 2021-12-28

Similar Documents

Publication Publication Date Title
EP3579192B1 (fr) Procédé, appareil et dispositif pour déterminer des informations de posture de caméra, et support de stockage
CN108702444B (zh) 一种图像处理方法、无人机及系统
CN109102537B (zh) 一种二维激光雷达和球幕相机结合的三维建模方法和系统
WO2020014909A1 (fr) Procédé et dispositif de photographie, et véhicule aérien sans pilote
WO2022000992A1 (fr) Procédé et appareil de prise de vues, dispositif électronique et support d'informations
CN111436208B (zh) 一种测绘采样点的规划方法、装置、控制终端及存储介质
WO2018098824A1 (fr) Procédé et appareil de commande de prise de vues, et dispositif de commande
US20170305546A1 (en) Autonomous navigation method and system, and map modeling method and system
CN105678748A (zh) 三维监控系统中基于三维重构的交互式标定方法和装置
WO2020014987A1 (fr) Procédé et appareil de commande de robot mobile, dispositif et support d'informations
CN110276768B (zh) 图像分割方法、图像分割装置、图像分割设备及介质
KR102398478B1 (ko) 전자 디바이스 상에서의 환경 맵핑을 위한 피쳐 데이터 관리
WO2023093217A1 (fr) Procédé et appareil de marquage de données, et dispositif informatique, support de stockage et programme
WO2021217403A1 (fr) Procédé et appareil de commande de plateforme mobile et dispositif et support de stockage
CN108961423B (zh) 虚拟信息处理方法、装置、设备及存储介质
EP3629570A2 (fr) Appareil de capture d'images et procédé d'enregistrement d'images
JP2015114954A (ja) 撮影画像解析方法
CN113379901A (zh) 利用大众自拍全景数据建立房屋实景三维的方法及系统
WO2022033306A1 (fr) Procédé et appareil de suivi de cible
JP2020021368A (ja) 画像解析システム、画像解析方法及び画像解析プログラム
US11736795B2 (en) Shooting method, apparatus, and electronic device
WO2022040988A1 (fr) Procédé et appareil de traitement d'images, et plateforme mobile
CN112837375B (zh) 用于真实空间内部的相机定位的方法和系统
CN115499596B (zh) 一种处理图像的方法和装置
KR102644608B1 (ko) 디지털 트윈 기반의 카메라 위치 초기화 방법

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20933610

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20933610

Country of ref document: EP

Kind code of ref document: A1