WO2021217403A1 - Method and apparatus for controlling movable platform, and device and storage medium - Google Patents

Method and apparatus for controlling movable platform, and device and storage medium Download PDF

Info

Publication number
WO2021217403A1
Authority
WO
WIPO (PCT)
Prior art keywords
target area
image
current image
target
feature point
Prior art date
Application number
PCT/CN2020/087423
Other languages
French (fr)
Chinese (zh)
Inventor
Liu Jie (刘洁)
Zhou You (周游)
Chen Xi (陈希)
Original Assignee
SZ DJI Technology Co., Ltd. (深圳市大疆创新科技有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SZ DJI Technology Co., Ltd. (深圳市大疆创新科技有限公司)
Priority to CN202080030068.8A priority Critical patent/CN113853559A/en
Priority to PCT/CN2020/087423 priority patent/WO2021217403A1/en
Publication of WO2021217403A1 publication Critical patent/WO2021217403A1/en

Links

Images

Classifications

    • G — PHYSICS
    • G05 — CONTROLLING; REGULATING
    • G05D — SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 — Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/08 — Control of attitude, i.e. control of roll, pitch, or yaw

Definitions

  • the embodiments of the present application relate to the field of control, and in particular, to a method and apparatus for controlling a movable platform, and a device and storage medium.
  • Surround shooting is a common shooting scheme. During shooting, the movable platform moves around the target, and the photographing device on the movable platform shoots the target during this process. To achieve surround shooting, the movable platform must not only be controlled to surround the target, but the orientation of the photographing device must also be adjusted. Completing this kind of shooting manually requires a high level of operating skill from the operator.
  • POI (full English name: Point of Interest).
  • the vision-based POI solution in the prior art is generally aimed at small targets whose full extent is easy to observe; for larger targets, the effect is relatively poor. This is because the movable platform often sees only part of a large target and can hardly observe its full extent, so the solution is strongly limited.
  • one POI solution for a huge target is based on a pre-established three-dimensional model of the target, but this requires the operator to establish a three-dimensional model of the target before the surround shooting, which is cumbersome and makes for an unfriendly user experience.
  • the embodiments of the present application provide a control method and apparatus for a movable platform, and a device and storage medium.
  • the photographing device of the movable platform can always shoot the target object, which is conducive to surround shooting of relatively large targets.
  • the first aspect of the embodiments of the present application is to provide a method for controlling a movable platform, which is used to control the movable platform to surround a target object, where the movable platform includes a photographing device; the method includes:
  • a second aspect of the embodiments of the present application is to provide a control device for a movable platform, the control device being used to control the movable platform to surround a target object, where the movable platform includes a photographing device, and the control device includes: a memory and a processor;
  • the memory is used to store program code
  • the processor calls the program code, and when the program code is executed, is used to perform the following operations:
  • the third aspect of the embodiments of the present application is to provide a movable platform, including:
  • the power system is installed on the fuselage to provide power
  • the fourth aspect of the embodiments of the present application is to provide a computer-readable storage medium on which a computer program is stored, and the computer program is executed by a processor to implement the method as described in the first aspect.
  • according to the control method, apparatus, device, and storage medium for a movable platform provided in this embodiment, the positions at which the first feature points in the target area of the previous frame of image map into the next frame of image are determined, and the second feature points of the next frame of image are then obtained; when the number of second feature points meets a preset condition, for example, when the number of second feature points is less than the number of first feature points, the target area of the previous frame of image is moved to obtain the target area of the next frame of image, thereby updating the target area of the image, and the photographing device is controlled to face the three-dimensional space point corresponding to the updated target area; in this way, the photographing device of the movable platform can always shoot the target object, which is conducive to surround shooting of relatively large targets.
  • the above process does not require the establishment of a three-dimensional model in advance, the processing method is fast and simple, and the user experience is better.
  • FIG. 1 is a flowchart of a method for controlling a movable platform provided by an embodiment of the application
  • Figure 2 is a schematic diagram of an application scenario provided by an embodiment of the application
  • FIG. 3 is a first schematic diagram of feature points provided by an embodiment of this application.
  • FIG. 4 is a second schematic diagram of feature points provided by an embodiment of this application.
  • FIG. 5 is a flowchart of a method for controlling a movable platform according to another embodiment of the application.
  • FIG. 6 is a first schematic diagram of a grid area provided by an embodiment of this application.
  • FIG. 7 is a second schematic diagram of a grid area provided by an embodiment of this application.
  • FIG. 8 is a third schematic diagram of a grid area provided by an embodiment of this application.
  • FIG. 9 is a fourth schematic diagram of a grid area provided by an embodiment of this application.
  • FIG. 10 is a third schematic diagram of feature points provided by an embodiment of this application.
  • FIG. 11 is a schematic diagram of movement of a movable platform provided by an embodiment of the application.
  • FIG. 12 is a flowchart of a method for controlling a movable platform provided by another embodiment of this application.
  • FIG. 13 is a first schematic diagram of an initial image provided by an embodiment of this application.
  • FIG. 14 is a second schematic diagram of an initial image provided by an embodiment of this application.
  • FIG. 15 is a third schematic diagram of an initial image provided by an embodiment of this application.
  • FIG. 16 is a structural diagram of a control device for a movable platform provided by an embodiment of the application.
  • FIG. 17 is a structural diagram of a movable platform provided by an embodiment of the application.
  • 160: control device
  • when a component is referred to as being "fixed to" another component, it can be directly on the other component, or an intervening component may also be present. When a component is considered to be "connected to" another component, it can be directly connected to the other component, or an intervening component may be present at the same time.
  • a POI solution for large targets is to perform three-dimensional modeling of the large target before the movable platform surrounds the large target for shooting; then, based on the pre-established three-dimensional model of the target, the surround shooting is performed.
  • this method requires the operator to establish a three-dimensional model of the target before the surround shooting, which is cumbersome and involves a complicated algorithm; the user experience is unfriendly.
  • the movable platform control method, apparatus, device, and storage medium provided in the embodiments of the present application can solve the above-mentioned problems.
  • FIG. 1 is a flowchart of a method for controlling a movable platform provided by an embodiment of the application.
  • the method for controlling a movable platform provided in this embodiment is used to control the movable platform to surround a target object, and the movable platform includes a camera.
  • the method provided in this embodiment may include:
  • Step S101 Acquire a current image captured by the photographing device.
  • the movable platform of this embodiment may specifically be an unmanned aerial vehicle, an unmanned ground robot, an unmanned ship, a mobile robot, and the like.
  • here, the movable platform is described schematically as a drone. It is understandable that the drone in this application can be equally replaced with another type of movable platform.
  • FIG. 2 is a schematic diagram of an application scenario provided by an embodiment of the application; as shown in FIG. 2, the drone 20 is equipped with a photographing device 21, and the photographing device 21 may specifically be a camera, a video camera, or the like.
  • the photographing device 21 can be mounted on the drone 20 via a gimbal 22, or the photographing device 21 can be fixed on the drone 20 via another fixing device.
  • the camera 21 can take real-time shooting to obtain video data or image data, and send the video data or image data to the control device 24 through the wireless communication interface 23 of the drone 20.
  • the control device 24 may specifically be a remote control corresponding to the drone 20, or a user terminal; among them, the user terminal may be a smart phone, a tablet computer, and the like.
  • the drone 20 may also include a control device, and the control device may include a general-purpose or special-purpose processor. It should be noted that this is only a schematic description, and does not limit the specific structure of the UAV.
  • the movable platform can obtain the current image captured by the camera in real time.
  • Step S102 Acquire the first feature point in the target area of the reference image, the reference image is the previous frame of the current image, and the target area is the image area corresponding to the target object.
  • the photographing device outputs the current image, and the movable platform determines the previous frame of the current image as the reference image.
  • the movable platform determines the target area of the reference image, which is the image area corresponding to the target object, and then extracts the first feature points in the image area (for ease of distinction, the feature points of the reference image are called the first feature points).
  • the movable platform obtains the first frame of image output by the camera, determines the target area on the first frame of image, and extracts the first feature point on the target area of the first frame of image.
  • the movable platform obtains the second frame image output by the shooting device, and it can be determined that the first frame image is the reference image of the second frame image.
  • the target object may be a large-scale target, for example, the target object is a building, or the target object is a group of buildings.
  • the image captured by the photographing device 21 includes the target object 31 as shown in FIG. 2.
  • a corner detection algorithm may be used to detect the first feature points in the target area of the reference image.
  • corner detection algorithms are, for example, the FAST (Features from Accelerated Segment Test) algorithm, the SUSAN (Smallest Univalue Segment Assimilating Nucleus) algorithm, and the Harris operator.
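As a concrete illustration of the corner detectors named above, the following is a minimal sketch of a Harris-style corner response in pure Python on a toy image. The image contents, window size, and the constant k = 0.04 are illustrative assumptions, not values from this application; a practical detector would also apply smoothing and non-maximum suppression.

```python
def gradients(img):
    """Central-difference gradients Ix, Iy (zero on the image border)."""
    h, w = len(img), len(img[0])
    ix = [[0.0] * w for _ in range(h)]
    iy = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            ix[y][x] = (img[y][x + 1] - img[y][x - 1]) / 2.0
            iy[y][x] = (img[y + 1][x] - img[y - 1][x]) / 2.0
    return ix, iy

def harris_response(img, x, y, k=0.04):
    """Harris response R = det(M) - k * trace(M)^2 over a 3x3 window."""
    ix, iy = gradients(img)
    sxx = syy = sxy = 0.0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            gx, gy = ix[y + dy][x + dx], iy[y + dy][x + dx]
            sxx += gx * gx
            syy += gy * gy
            sxy += gx * gy
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace * trace

# Toy image: a bright square whose top-left corner is at (5, 5).
img = [[1.0 if x >= 5 and y >= 5 else 0.0 for x in range(10)]
       for y in range(10)]
```

At the square's corner both gradient directions are present in the window, so the response is large; on an edge or in a flat region it is near zero or negative, which is exactly how such detectors pick out trackable first feature points.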
  • Step S103 Determine a second feature point corresponding to the first feature point in the current image.
  • the feature points on the current image are referred to as second feature points.
  • the position of a three-dimensional space point in the current image can be determined according to that three-dimensional space point, so as to obtain the second feature point in the current image. For example, the three-dimensional space point corresponding to the first feature point is mapped into the current image to obtain the second feature point corresponding to the first feature point.
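The mapping of a three-dimensional space point into the current image can be sketched with a standard pinhole camera model, x = K(RX + t). The intrinsic matrix K and the pose R, t below are illustrative assumptions; the application does not specify a camera model.

```python
def project(point3d, K, R, t):
    """Project a world point X into pixel coordinates (u, v)."""
    # Camera coordinates: Xc = R * X + t
    xc = [sum(R[i][j] * point3d[j] for j in range(3)) + t[i]
          for i in range(3)]
    # Perspective division, then apply the intrinsics
    u = K[0][0] * xc[0] / xc[2] + K[0][2]
    v = K[1][1] * xc[1] / xc[2] + K[1][2]
    return u, v

# Assumed intrinsics (focal length 500 px, principal point (320, 240))
K = [[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]]
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]  # identity pose
t = [0.0, 0.0, 0.0]

# A point 10 m in front of the camera lands near the image center.
u, v = project([1.0, 0.5, 10.0], K, R, t)
```

Given the platform's pose for the current frame, projecting the first feature point's space point this way yields the expected position of the corresponding second feature point.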
  • alternatively, a tracking algorithm is used to track the first feature points in the target area to determine the positions, in the current image, of the first feature points in the target area of the reference image.
  • the tracking algorithm is, for example, the KLT (Kanade-Lucas-Tomasi Feature Tracker) feature tracking algorithm.
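The KLT tracker solves a gradient-based least-squares problem for each point; as a compact, dependency-free stand-in that shows the same tracking step, the sketch below matches a small patch around a feature point between two frames by minimizing the sum of squared differences (SSD) over integer displacements. The frame contents and the search radius are illustrative assumptions.

```python
def patch(img, cx, cy, r):
    """Flatten the (2r+1) x (2r+1) patch centered at (cx, cy)."""
    return [img[cy + dy][cx + dx] for dy in range(-r, r + 1)
                                  for dx in range(-r, r + 1)]

def ssd(p, q):
    """Sum of squared differences between two flattened patches."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def track_point(prev_img, cur_img, x, y, radius=3, patch_r=1):
    """Find where the feature at (x, y) in prev_img moved to in cur_img."""
    template = patch(prev_img, x, y, patch_r)
    best, best_cost = (x, y), float("inf")
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            cost = ssd(template, patch(cur_img, x + dx, y + dy, patch_r))
            if cost < best_cost:
                best, best_cost = (x + dx, y + dy), cost
    return best

def frame(bx, by):
    """A 16x16 frame with a single bright blob at (bx, by)."""
    img = [[0.0] * 16 for _ in range(16)]
    img[by][bx] = 9.0
    return img

# The blob sits at (5, 5) in the reference frame and at (7, 6) in the
# current frame; tracking recovers the displacement.
prev_img, cur_img = frame(5, 5), frame(7, 6)
```

A real KLT implementation refines the displacement to sub-pixel accuracy using image gradients and an image pyramid rather than exhaustive search.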
  • FIG. 3 is the first schematic diagram of the feature points provided by the embodiment of the application.
  • the reference image 40 has a target area 42, and the target area 42 contains the target object 31; the first feature points in the target area 42 include, for example, feature point A, feature point B, and feature point C.
  • the photographing device outputs the current image 41, in which the positions of the first feature points (for example, feature point A, feature point B, and feature point C) are determined.
  • the reference image 40 and the current image 41 may be adjacent images or non-adjacent images.
  • FIG. 3 is only a schematic illustration, and does not limit the type of target object or the number of feature points.
  • FIG. 4 is the second schematic diagram of the feature points provided by the embodiment of this application.
  • the movable platform moves from left to right (the direction of the arrow shown in FIG. 4).
  • 31 represents the target object
  • 51 and 52 represent the images output by the photographing device during the process of moving around the target object 31 in the direction of the arrow (from left to right).
  • the three-dimensional space points on the target object 31 can be mapped to the images 51 and 52;
  • the mapping point of a three-dimensional space point in the image 51 may specifically be a feature point on the target area of the image 51;
  • the mapping point of a three-dimensional space point in the image 52 may specifically be a feature point on the target area of the image 52.
  • Point A and point B are three-dimensional space points on the target object 31.
  • the point a1 and the point b1 represent feature points in the image 51, the point a1 corresponds to the point A, and the point b1 corresponds to the point B.
  • the point a2 and the point b2 represent feature points in the image 52, the point a2 corresponds to the point A, and the point b2 corresponds to the point B.
  • Step S104 When the number of second feature points meets the preset condition, move the target area of the reference image to obtain the target area of the current image.
  • for example, the preset condition is that the number of second feature points is less than the number of first feature points, that is, the number of feature points is decreasing.
  • Step S105 Control the shooting device to face the three-dimensional space point corresponding to the target area of the current image.
  • the target area of the current image has been obtained.
  • the target area of the current image is an image area.
  • the image area may correspond to a three-dimensional space point on the target object, so the photographing device can be controlled to face the three-dimensional space point corresponding to the target area of the current image; in this way, the photographing device can always shoot the target object.
  • step S101 to step S105 may be a process of repeated processing.
  • in this embodiment, the current image captured by the photographing device is acquired; the first feature points in the target area of the reference image are acquired, where the reference image is the previous frame of the current image and the target area is the image area corresponding to the target object; the second feature points corresponding to the first feature points are determined in the current image; when the number of second feature points meets the preset condition, the target area of the reference image is moved to obtain the target area of the current image; and the photographing device is controlled to face the three-dimensional space point corresponding to the target area of the current image.
  • the position where the first feature point is mapped to the next frame of image can be determined, and then the second feature point of the next frame of image can be obtained;
  • the target area of the reference image is moved to obtain the target area of the next frame of image.
  • the target area of the image is updated.
  • the shooting device is controlled to face the three-dimensional space point corresponding to the target area of the current image; furthermore, the shooting device of the movable platform can always shoot the target object.
  • the above process is an image-based processing process, no complicated three-dimensional model needs to be established in advance, the processing method is fast and simple, and the user experience is better.
  • FIG. 5 is a flowchart of a method for controlling a movable platform provided by another embodiment of the application.
  • the method provided in this embodiment is used to control the movable platform to surround the target object, and the movable platform includes a camera.
  • the method provided in this embodiment may include:
  • Step S201 Acquire a current image captured by the photographing device.
  • the movable platform of this embodiment may specifically be an unmanned aerial vehicle, an unmanned ground robot, an unmanned ship, a mobile robot, and the like.
  • here, the movable platform is described schematically as a drone. It is understandable that the drone in this application can be equally replaced with another type of movable platform.
  • for step S201, refer to step S101 in FIG. 1; details are not repeated here.
  • Step S202 Divide the target area of the reference image into multiple grid areas.
  • step S202 specifically includes: dividing the target area of the reference image into multiple grid areas according to the direction in which the movable platform surrounds the target object.
  • first, the target area of the reference image may be divided into a plurality of grid areas; then, in step S203, no more than a preset number of feature points are extracted from each grid area.
  • the grid area may be divided according to the direction in which the movable platform surrounds the target object, so as to divide the target area of the reference image into multiple grid areas.
  • FIG. 6 is a first schematic diagram of a grid area provided by an embodiment of the application.
  • for example, when the movable platform surrounds the target object on a horizontal plane, the target area can be divided by horizontal lines to obtain multiple grid areas.
  • alternatively, the target area can be divided vertically into grids to obtain multiple grid areas.
  • FIG. 7 is the second schematic diagram of the grid area provided by the embodiment of the application.
  • when the movable platform surrounds the target object on an inclined plane, a corresponding slanted line is determined according to the inclination direction of the inclined plane, and the target area is divided into grids along the slanted line to obtain multiple grid areas.
  • the tilt direction is not limited to the direction shown in FIG. 7.
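The grid division described above can be sketched as splitting the rectangular target area into cells. The (x, y, w, h) box convention and the 3x4 grid size are illustrative assumptions; the application chooses the division direction from the direction in which the platform surrounds the target.

```python
def split_into_grid(box, rows, cols):
    """Return a list of (x, y, w, h) grid cells covering `box`,
    row by row from the top-left."""
    x, y, w, h = box
    cw, ch = w / cols, h / rows
    return [(x + c * cw, y + r * ch, cw, ch)
            for r in range(rows) for c in range(cols)]

# Hypothetical target area: top-left (100, 50), 240 px wide, 120 px tall.
cells = split_into_grid((100, 50, 240, 120), rows=3, cols=4)
```

With `rows=1` the split reduces to the horizontal strips of FIG. 6, and a sheared variant of the cell coordinates would give the slanted grid of FIG. 7.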
  • Step S203 Acquire a preset number of feature points in each grid area as the first feature point in the target area of the reference image.
  • the reference image is the previous frame of the current image
  • the target area is the image area corresponding to the target object.
  • the corner detection algorithm can be used to detect the feature points in each grid area, with the number of feature points limited during detection. Then a preset number of feature points are extracted from each grid area, and the preset number of feature points in each grid area are used as the first feature points in the target area of the reference image.
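The per-grid limit in step S203 can be sketched as keeping only the top-scoring points inside each cell, so the first feature points spread over the whole target area instead of clustering. The corner scores and the cap of two points per cell are illustrative assumptions.

```python
def cap_per_cell(points, cells, per_cell):
    """points: list of (x, y, score). Keep the top-scoring `per_cell`
    points inside each (x, y, w, h) grid cell."""
    kept = []
    for cx, cy, cw, ch in cells:
        inside = [p for p in points
                  if cx <= p[0] < cx + cw and cy <= p[1] < cy + ch]
        inside.sort(key=lambda p: p[2], reverse=True)  # strongest first
        kept.extend(inside[:per_cell])
    return kept

# Two hypothetical grid cells; the first cell holds three detections,
# of which only the two strongest survive the cap.
cells = [(0, 0, 10, 10), (10, 0, 10, 10)]
points = [(1, 1, 0.9), (2, 2, 0.5), (3, 3, 0.8), (12, 4, 0.7)]
first_feature_points = cap_per_cell(points, cells, per_cell=2)
```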
  • Step S204 Determine a second feature point corresponding to the first feature point in the current image.
  • step S204 specifically includes:
  • the tracking algorithm is used to obtain the tracked feature points in the current image; the tracked feature points are filtered based on the epipolar constraint condition to obtain the second feature points.
  • similar to step S103 shown in FIG. 1, the tracking algorithm is used to track the first feature points in the target area to obtain the tracked feature points in the current image.
  • the tracked feature points may be inaccurate, so they need to be filtered to obtain the second feature points.
  • for example, the tracked feature points can be filtered based on the epipolar constraint condition to obtain the second feature points.
  • alternatively, the tracked feature points may be filtered based on the motion relationship between the feature points of the two consecutive frames of images. It should be noted that the epipolar-constraint filtering can be combined with the motion-relationship filtering, so as to obtain more accurate second feature points.
  • Step S205 When the number of second feature points meets the preset condition, move the target area of the reference image to obtain the target area of the current image.
  • optionally, step S205 specifically includes: moving the target area of the reference image in a direction away from the boundary grid area in which feature points are lost.
  • the preset condition is that the number of second feature points is less than the number of first feature points.
  • the position of the movable platform relative to the target object is constantly changing.
  • among the multiple grid areas of the reference image, the first feature points in the grid areas at the boundary may be lost, that is, no corresponding second feature points can be found in the current image; therefore, the target area of the reference image needs to be moved to obtain the target area of the current image.
  • FIG. 8 is the third schematic diagram of the grid area provided by the embodiment of the application.
  • diagram a in FIG. 8 shows the grid areas of the target area of the reference image, and diagram a includes a plurality of first feature points.
  • diagram b in FIG. 8 shows the target area of the current image.
  • the target area of the reference image is divided into multiple grid areas; the first feature point is extracted from each grid area.
  • the first feature points (the solid points in diagram a) on the left boundary of diagram a in FIG. 8 may be lost; thus, it is determined that feature points in the grid areas located on the boundary are lost, and the target area of the reference image is moved to obtain the target area of the current image shown in diagram b of FIG. 8.
  • the moving distance can be the width of the grid areas occupied by the missing feature points; for example, in diagram a of FIG. 8, the missing feature points occupy one column of grid cells, so the target area is moved, relative to the target area of the reference image, by the distance represented by that column of grid cells.
  • for example, when feature points on the left boundary are lost, the target area of the reference image is moved toward the right edge of the image.
  • when feature points on the right boundary are lost, the target area of the reference image is moved toward the left edge of the image.
  • when feature points on the upper boundary are lost, the target area of the reference image is moved toward the lower edge of the image.
  • when feature points on the upper-left boundary are lost, the target area of the reference image is moved toward the lower-right edge of the image.
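The boundary-driven shift of step S205 can be sketched as moving the target-area box one grid cell away from the boundary where points were lost. The (x, y, w, h) box convention and cell size are illustrative assumptions; a diagonal loss (for example, the upper-left boundary) can be handled by composing two shifts.

```python
# Shift direction for each lost boundary, in grid-cell units:
# losing the left column pushes the area toward the right edge, etc.
SHIFT = {"left": (1, 0), "right": (-1, 0), "top": (0, 1), "bottom": (0, -1)}

def move_target_area(box, lost_boundary, cell_w, cell_h):
    """Return `box` shifted one grid cell away from `lost_boundary`."""
    x, y, w, h = box
    dx, dy = SHIFT[lost_boundary]
    return (x + dx * cell_w, y + dy * cell_h, w, h)

# Points on the left boundary were lost, so the area moves right by
# one 60-px-wide grid column; its size is unchanged.
new_box = move_target_area((100, 50, 240, 120), "left", 60, 40)
```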
  • Step S206 Acquire a third feature point in the target area of the current image, where the number of third feature points is greater than or equal to the number of second feature points.
  • the target area of the current image includes at least one second feature point
  • the second feature point is a feature point corresponding to the first feature point of the reference image.
  • candidate feature points of the target area of the current image can be extracted. If no candidate feature points are extracted, the above-mentioned second feature points are determined to be all feature points (that is, third feature points) in the target area of the current image. At this time, the number of third feature points is equal to the number of second feature points. If the candidate feature points are extracted, the above-mentioned second feature point and the newly extracted candidate feature points are used as all the feature points (ie, the third feature points) in the target area of the current image. At this time, the number of third feature points is greater than the number of second feature points.
  • FIG. 9 is the fourth schematic diagram of the grid area provided by an embodiment of the application.
  • diagram a in FIG. 9 shows the grid areas of the target area of the reference image, and diagram a includes a plurality of first feature points.
  • diagram b in FIG. 9 shows the target area of the current image.
  • the first feature points (the solid points in diagram a) on the left boundary of diagram a in FIG. 9 are lost in the current image; the target area is moved to the right by one column of grid cells; then, for the target area of the current image, new candidate feature points (the solid points in diagram b) are extracted.
  • optionally, step S206 specifically includes: obtaining candidate feature points in the target area of the current image; and filtering the candidate feature points according to the depth values of the candidate feature points and/or the semantic information of the candidate feature points to determine the third feature points.
  • the candidate feature points may include feature points that do not belong to the target object, that is, the three-dimensional space points corresponding to those candidate feature points are not three-dimensional space points on the target object but on other objects.
  • therefore, the candidate feature points need to be filtered. After filtering, all candidate feature points may be filtered out; in this case, the number of third feature points is equal to the number of second feature points. Alternatively, only part of the candidate feature points may be filtered out; in this case, the number of third feature points is greater than the number of second feature points.
  • the depth value of a candidate feature point can be calculated, where the depth value represents the distance between the three-dimensional space point corresponding to the feature point and a reference point (for example, the optical center) of the photographing device. If the depth value of the candidate feature point belongs to a preset depth value range, the candidate feature point is determined to be a feature point in the target area of the current image and does not need to be filtered out; if the depth value of the candidate feature point does not belong to the preset depth value range, the feature point is filtered out.
  • the "preset depth value range" can be an empirical value; it represents the value range of the depth values of the target area, or the value range of the distances between the three-dimensional space points on the target object and the reference point of the photographing device.
  • the semantic information of the candidate feature points can be obtained; wherein the semantic information represents the category of the three-dimensional space point corresponding to the feature point, for example, buildings, sky, and grass. Then, when it is determined that the semantic information of the candidate feature point corresponds to the target object, it is determined that the candidate feature point is a feature point in the target area of the current image, and the feature point does not need to be filtered. When it is determined that the semantic information of the candidate feature point does not correspond to the target object, the feature point is filtered. For example, if the target object is a building, but the semantic information of a candidate feature point indicates that the candidate feature point corresponds to the grass or the sky behind the building, the feature point needs to be filtered out.
  • in addition, the depth values and semantic information of the candidate feature points can be combined to determine whether to filter the candidate feature points. For example, if the depth value of a candidate feature point belongs to the preset depth value range and the semantic information of the candidate feature point corresponds to the target object, the candidate feature point is determined to be a feature point in the target area of the current image and does not need to be filtered out; otherwise, the feature point is filtered out.
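The combined depth-and-semantics filter can be sketched as follows: a candidate is kept as a third feature point only if its depth lies in the preset depth range and its semantic label matches the target object. The depth range, labels, and sample values are illustrative assumptions.

```python
def keep_candidate(depth, label, depth_range, target_label):
    """True if the candidate passes both the depth and semantic checks."""
    lo, hi = depth_range
    return lo <= depth <= hi and label == target_label

# Hypothetical candidates with precomputed depths and semantic labels.
candidates = [
    {"pt": (320, 180), "depth": 52.0,  "label": "building"},  # on the target
    {"pt": (600, 40),  "depth": 900.0, "label": "sky"},       # far behind it
    {"pt": (100, 400), "depth": 55.0,  "label": "grass"},     # right depth,
]                                                             # wrong class

third_candidates = [c for c in candidates
                    if keep_candidate(c["depth"], c["label"],
                                      depth_range=(40.0, 70.0),
                                      target_label="building")]
```

Note the third candidate: its depth is plausible, so only the semantic check rejects it, which is why the two criteria are stronger in combination than either alone.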
  • FIG. 10 is the third schematic diagram of the feature points provided by the embodiment of the application.
  • the movable platform moves from left to right (the direction of the arrow shown in FIG. 10).
  • 31 represents the target object
  • 51, 52, 53 represent the images output by the camera during the process of moving around the target object 31 in the direction indicated by the arrow (from left to right).
  • the mapping point of a three-dimensional space point in the image 51 may specifically be a feature point on the target area of the image 51;
  • the mapping point of a three-dimensional space point in the image 52 may specifically be a feature point on the target area of the image 52;
  • the mapping point of a three-dimensional space point in the image 53 may specifically be a feature point on the target area of the image 53.
  • Point A, point B, and point C are three-dimensional space points on the target object 31.
  • the point a1 and the point b1 represent feature points in the image 51, the point a1 corresponds to the point A, and the point b1 corresponds to the point B.
  • Point a2, point b2, and point c2 represent feature points in image 52, point a2 corresponds to point A, point b2 corresponds to point B, and point c2 corresponds to point C.
  • Point a3, point b3, and point c3 represent feature points in the image 53, point a3 corresponds to point A, point b3 corresponds to point B, and point c3 corresponds to point C.
  • it is known that point A is mapped to the position of feature point a2 in the image 52, point B is mapped to the position of feature point b2 in the image 52, and point C is mapped to the position of feature point c2 in the image 52; it is then known that feature point a2 is mapped to feature point a3 in the image 53, feature point b2 is mapped to feature point b3 in the image 53, and feature point c2 is mapped to feature point c3 in the image 53.
  • the camera can always capture the surface of the target object 31, ensuring that the target object will not be lost during the surrounding process.
  • Step S207 Determine a three-dimensional space point corresponding to the target area of the current image according to the third feature point in the target area of the current image.
  • optionally, step S207 specifically includes: performing a weighted average of the depth values of the third feature points in the target area of the current image; and determining, according to the weighted average value and the shooting direction in which the photographing device captures the current image, the three-dimensional space point corresponding to the target area of the current image.
  • the weight corresponding to the third feature point close to the center of the target area of the current image is greater than the weight corresponding to the third feature point far away from the center of the target area of the current image.
  • the orientation of the camera on the movable platform needs to be adjusted.
  • in step S206, the target area of the current image is obtained; if the number of third feature points in the target area of the current image is not zero, the camera can be controlled, according to the target area of the current image, to face the three-dimensional space point corresponding to the target area of the current image.
  • the depth value of each third feature point can be calculated, where the depth value represents the distance between the three-dimensional space point corresponding to the feature point and a reference point of the imaging device (such as the optical center); then the depth values of the third feature points in the target area of the current image can be weighted and averaged to obtain the weighted average.
  • the movable platform can learn the shooting direction when the camera is shooting the current image. Furthermore, the movable platform determines the three-dimensional space point corresponding to the target area of the current image based on the above-mentioned weighted average value and the shooting direction.
  • the weight corresponding to a third feature point close to the center of the target area of the current image is greater than the weight corresponding to a third feature point far from that center.
  • a third feature point close to the center of the target area of the current image can be selected. Furthermore, the movable platform determines the three-dimensional space point corresponding to the target area of the current image based on the depth value and the shooting direction of the third feature point close to the center of the target area of the current image.
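The weighted-average step above can be sketched as follows. The inverse-distance weighting and the placement of the 3D point along the shooting direction at the averaged depth are assumptions chosen for illustration; the text only requires that feature points nearer the center of the target area carry more weight.

```python
import math

def target_region_point(features, center, cam_pos, view_dir):
    """Estimate the 3D space point for the target area of the current image.

    features: list of ((u, v), depth) third feature points in the target area,
              where depth is the distance to the camera's optical center.
    center:   (u, v) center of the target area.
    cam_pos:  camera position; view_dir: unit shooting-direction vector.
    """
    w_sum = d_sum = 0.0
    for (u, v), depth in features:
        # Assumed weighting: inverse distance to the area center, so feature
        # points near the center dominate the weighted average.
        w = 1.0 / (1.0 + math.hypot(u - center[0], v - center[1]))
        w_sum += w
        d_sum += w * depth
    mean_depth = d_sum / w_sum
    # Place the point along the shooting direction at the averaged depth.
    return tuple(c + mean_depth * d for c, d in zip(cam_pos, view_dir))

feats = [((320, 240), 10.0), ((330, 250), 12.0), ((600, 100), 40.0)]
point = target_region_point(feats, center=(320, 240),
                            cam_pos=(0.0, 0.0, 0.0), view_dir=(0.0, 0.0, 1.0))
```

With these numbers, the far-off corner point with depth 40 barely moves the result: the weighted depth stays close to the central point's depth of 10, which is the behaviour the weighting is meant to produce.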
  • Step S208 Control the shooting device to face the three-dimensional space point corresponding to the target area of the current image.
  • the camera of the movable platform can observe the surface of the target object corresponding to the target area.
  • FIG. 11 is a schematic diagram of the movement of the movable platform provided by an embodiment of the application.
  • the movable platform is, for example, a drone; when the drone is at the starting point A, the target area of the collected image corresponds to the area a of the target object in three-dimensional space; during the flight to point B, the target area is continuously updated, and when the drone reaches point B, the target area of the collected image corresponds to the area b of the target object in three-dimensional space.
  • step S208 specifically includes: when the number of third feature points in the target area of the current image is greater than a preset threshold, controlling the camera to face the three-dimensional space point corresponding to the target area of the current image.
  • it has been determined in step S207 that the number of third feature points in the target area of the current image is multiple.
  • the number of third feature points can be further determined; when the number of third feature points is greater than the preset threshold, it is determined that the number of feature points in the target area of the current image is large, and the target area can continue to be updated.
  • the shooting device of the movable platform can face the three-dimensional space point corresponding to the target area of the current image determined in step S207.
  • Step S209 When the number of third feature points in the target area of the current image is less than or equal to the preset threshold, the camera is controlled to face the three-dimensional space point corresponding to the target area of the reference image.
  • after step S206, if the number of third feature points is less than or equal to the preset threshold, it is determined that the number of feature points in the target area of the current image is small.
  • in this case, according to the feature points of the target area of the reference image (i.e., the previous frame of the current image), the three-dimensional space point corresponding to the target area of the reference image is determined, and the shooting device is then controlled to face the three-dimensional space point corresponding to the target area of the reference image.
  • when the number of feature points in the target area of the current image is small, the movable platform can also be controlled to return home; or, the shooting mode of the shooting device of the movable platform can be switched, for example, the shooting mode for large target objects of steps S201-S208 can no longer be used and needs to be switched to another shooting mode.
  • the shooting device is controlled to face the three-dimensional space point corresponding to the target area of the reference image.
  • no more than the preset number of feature points are extracted in each grid area of the reference image, rather than performing calculation on every pixel or feature point in the target area of the reference image, thereby reducing the complexity of the algorithm and the amount of data calculation.
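The per-grid cap on extracted feature points can be sketched as below. The horizontal column split (e.g. along the orbit direction), the candidate scores, and the specific cell counts are all illustrative assumptions, not the patent's implementation.

```python
def select_grid_features(features, region, cols, max_per_cell):
    """Cap the number of feature points kept per grid cell of the target area.

    features:     list of (u, v, score) candidate feature points in the region.
    region:       (x0, y0, width, height) of the target area.
    cols:         number of grid columns the area is split into.
    max_per_cell: keep at most this many highest-score points per column.
    """
    x0, y0, w, h = region
    cell_w = w / cols
    buckets = {c: [] for c in range(cols)}
    for u, v, score in features:
        c = min(int((u - x0) / cell_w), cols - 1)  # clamp right-edge points
        buckets[c].append((u, v, score))
    kept = []
    for c in range(cols):
        ranked = sorted(buckets[c], key=lambda f: f[2], reverse=True)
        kept.extend(ranked[:max_per_cell])         # cap, instead of all pixels
    return kept

cands = [(10, 5, 0.9), (12, 8, 0.5), (15, 9, 0.7), (40, 5, 0.8), (45, 7, 0.6)]
kept = select_grid_features(cands, region=(0, 0, 60, 20), cols=3, max_per_cell=2)
```

Here the first column holds three candidates but only the two strongest survive, so the total work per frame is bounded by `cols * max_per_cell` rather than by the number of pixels in the target area.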
  • the second feature point corresponding to the first feature point in the current image is obtained, and the target area of the reference image is moved to obtain the target area of the current image; thereby the target area is updated.
  • the camera of the movable platform can always observe the target object and shoot the surface corresponding to the target area; at the same time, exit logic is provided: when the feature points in the target area of the current image are few, the camera is controlled to face the three-dimensional space point corresponding to the target area of the reference image, or the movable platform is controlled to return home to end the current shooting, or the camera of the movable platform can continue to perform shooting tasks in other shooting modes.
  • FIG. 12 is a flowchart of a method for controlling a movable platform provided by another embodiment of the application.
  • the method provided in this embodiment is used to control the movable platform to surround the target object, and the movable platform includes a photographing device.
  • the method provided in this embodiment may include:
  • Step S301 Send the initial image captured by the photographing device to the control device of the movable platform, so that the control device displays the initial image.
  • the movable platform of this embodiment may specifically be an unmanned aerial vehicle, an unmanned ground robot, an unmanned ship, a mobile robot, and the like.
  • the movable platform is used as a drone for schematic illustration. It is understandable that the drone in this application can be equally replaced with a movable platform.
  • the initial image captured by the photographing device is sent to the control device of the movable platform, so that the control device displays the initial image.
  • the control device may specifically be a remote controller corresponding to the movable platform, or a user terminal; where the user terminal is, for example, a smart phone, a tablet computer, and the like.
  • Step S302 Obtain the user's instruction information for the target object.
  • the instruction information is generated according to the user's click operation or frame selection operation on the initial image displayed by the control device.
  • the user can input instruction information into the control device by means of touch, or gesture control, or voice information; the instruction information is used to indicate the target object in the initial image.
  • the user clicks a location point of the initial image through an operating medium (for example, a finger or a stylus); and then the control device receives a click operation input by the user.
  • the user uses an operating medium (for example, a finger or a stylus) to frame select an area of the initial image; and then the control device receives the frame selection operation input by the user.
  • FIG. 13 is a first schematic diagram of a reference image provided by an embodiment of the application. As shown in FIG. 13, the user frame-selects an area on the initial image with a finger.
  • Step S303 Determine the target area of the initial image according to the instruction information.
  • step S303 specifically includes the following steps:
  • the first step is to perform image segmentation on the initial image to obtain multiple segmented regions.
  • the second step is to determine the target area of the initial image according to the segmented area and the instruction information.
  • the second step specifically includes: when the proportion of the image area in the initial image indicated by the indication information that falls in the target segmentation area is greater than the preset ratio, determining the target area of the initial image according to the target segmentation area, the target segmentation area being at least one of the multiple segmented regions.
  • image segmentation is performed on the initial image, for example, image segmentation processing based on clustering is performed to obtain a segmentation result; wherein the segmentation result includes multiple segmented regions, and pixels in each segmented region have similar features.
  • the ratio of the image area indicated by the instruction information (i.e., the image area in the initial image) falling on the target segmented area is calculated, where the target segmented area is at least one of the multiple segmented areas. If it is determined that the ratio is greater than the preset ratio, the target area of the initial image can be determined according to the target segmentation area, ensuring that the target area contains the complete target object.
  • the image area indicated by the indication information is A1; after image segmentation is performed on the initial image, segmentation areas B1, B2, B3, and B4 are obtained. If it is determined that the proportion of the image area A1 in the segmented area B1 is greater than the preset ratio, the target area of the initial image can be determined according to the segmented area B1.
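The ratio check above can be sketched with pixel-coordinate sets. The set representation, the exact ratio definition (overlap measured against the user-indicated area), and the merge-whole-segment behaviour are assumptions for illustration; the text only requires comparing the overlap proportion against a preset ratio.

```python
def build_target_area(indicated, segments, preset_ratio=0.5):
    """Grow the target area from the user's frame selection and a segmentation.

    indicated: set of (x, y) pixels the user clicked or frame-selected.
    segments:  list of sets of (x, y) pixels, one per segmented region.
    A segment joins the target area when the share of the indicated area that
    falls inside it exceeds preset_ratio, so the whole object (e.g. the whole
    house of FIG. 14) ends up in the target area.
    """
    target = set(indicated)
    for seg in segments:
        overlap = len(indicated & seg) / len(indicated)
        if overlap > preset_ratio:
            target |= seg  # include the complete segmented region
    return target

indicated = {(x, y) for x in range(4) for y in range(4)}  # user's small frame
house = {(x, y) for x in range(8) for y in range(8)}      # segment containing it
road = {(x, 20) for x in range(30)}                       # unrelated segment
target = build_target_area(indicated, [house, road], preset_ratio=0.5)
```

The 4×4 user frame lies entirely inside the house segment (overlap ratio 1.0), so the full 8×8 house region is absorbed into the target area, while the disjoint road segment is left out.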
  • FIG. 14 is the second schematic diagram of the reference image provided by the embodiment of the application.
  • the target object frame-selected by the user is a small house; image segmentation is performed on the entire image, and the image is divided into a number of segmented areas; further, the small house in panel (a) of FIG. 14 falls into one segmented area. Then, according to the user's frame selection and the result of image segmentation, all the image information of the small house can be placed in the target area, as shown in FIG. 15.
  • Step S304 Acquire the current image captured by the camera.
  • this step may refer to the embodiment shown in FIG. 1 and FIG. 5, and details are not described herein again.
  • Step S305 Acquire the first feature point in the target area of the reference image, the reference image is the previous frame of the current image, and the target area is the image area corresponding to the target object.
  • this step may refer to the embodiment shown in FIG. 1 and FIG. 5, and details are not described herein again.
  • Step S306 Determine a second feature point corresponding to the first feature point in the current image.
  • this step may refer to the embodiment shown in FIG. 1 and FIG. 5, and details are not described herein again.
  • Step S307 Identify the target type of the target object; if the target type is a preset type, execute the step of moving the target area of the reference image to obtain the target area of the current image.
  • it is necessary to identify whether the target object in the target area is of a preset target type, for example, whether the target object in the target area is a large building.
  • if so, step S308 is executed; if not, there is no need to perform step S308, and the camera can be controlled to face the three-dimensional space point corresponding to the target area of the reference image.
  • Step S308 When the number of second feature points meets the preset condition, move the target area of the reference image to obtain the target area of the current image.
  • this step may refer to the embodiment shown in FIG. 1 and FIG. 5, and details are not described herein again.
  • Step S309 Control the shooting device to face the three-dimensional space point corresponding to the target area of the current image.
  • this step may refer to the embodiment shown in FIG. 1 and FIG. 5, and details are not described herein again.
  • Step S310 Control the movement of the movable platform so that the distance between the shooting device and the three-dimensional space point corresponding to the target area of the current image is the surrounding radius.
  • step S309 the orientation of the camera is controlled. Since the camera is carried on the movable platform, it is also necessary to control the movement of the movable platform.
  • if the distance value calculated based on the third feature points in the target area is less than the surrounding radius, the drone is controlled to fly backward, where "backward" refers to the first direction of the line between the target object and the drone, the first direction being the direction in which the target object points to the drone. If the distance value is greater than the surrounding radius, the drone is controlled to fly forward, where "forward" refers to the second direction of the line between the target object and the drone, the second direction being the direction in which the drone points to the target object.
  • the distance between the drone and the surface of the target object can always be equal to the surrounding radius through the control of the drone.
  • the surrounding radius may be input by the user through the control device, or the distance between the movable platform and the three-dimensional space point corresponding to the target area of the initial image may be used as the surrounding radius.
  • the distance between the shooting device and the three-dimensional space point corresponding to the target area of the current image is always the above-mentioned surrounding radius. That is to say, when the movable platform surrounds the building group to shoot, not only the surface of the building group can always be photographed, but also a certain distance can be kept from the surface of the building group.
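The keep-at-radius behaviour can be sketched as a proportional controller along the target-to-drone line. The gain and the velocity-command form are assumptions for illustration; the text only specifies flying backward when inside the surrounding radius and forward when outside it.

```python
import math

def radial_velocity_command(drone_pos, target_point, orbit_radius, gain=0.5):
    """Velocity command that holds the drone at the surrounding radius.

    Returns a velocity along the target->drone line: positive along that line
    moves the drone backward (away from the target) when it is inside the
    radius; negative moves it forward when it is outside.
    """
    offset = tuple(d - t for d, t in zip(drone_pos, target_point))
    dist = math.sqrt(sum(o * o for o in offset))
    # Proportional term: (radius - distance) scaled onto the unit radial vector.
    scale = gain * (orbit_radius - dist) / dist
    return tuple(scale * o for o in offset)

# Drone 30 m from the target, desired radius 50 m -> command points backward.
v = radial_velocity_command((30.0, 0.0, 10.0), (0.0, 0.0, 10.0), orbit_radius=50.0)
```

Here the drone is 20 m inside the radius, so the command pushes it away from the target; with the drone outside the radius the same expression flips sign and pulls it forward.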
  • the three-dimensional space point corresponding to the target area of the initial image may be used as the center point of the movable platform surrounding the target object.
  • the target area of the initial image corresponds to a three-dimensional space point (for example, the three-dimensional space point corresponding to the weighted average of the feature points in the target area of the initial image; or, the three-dimensional space point corresponding to the center position of the target area of the initial image), and this three-dimensional space point can be used as the center point around which the movable platform orbits.
  • the center point of the surround is unchanged. That is to say, when the movable platform is shooting around the building group, the camera always shoots the surface of the building group.
  • the center of the orbiting track of the movable platform is always the three-dimensional space point corresponding to the target area of the initial image.
  • FIG. 16 is a structural diagram of a control device 160 for a movable platform provided by an embodiment of the application.
  • the control device is used to control the movable platform to surround the target object.
  • the movable platform includes a photographing device.
  • the control device 160 includes: a memory 161 and a processor 162;
  • the memory 161 is used to store program code;
  • the processor 162 calls the program code.
  • when the program code is executed, it is used to perform the following operations: obtain the current image captured by the camera; obtain the first feature point in the target area of the reference image, the reference image being the previous frame of the current image and the target area being the image area corresponding to the target object; determine the second feature point corresponding to the first feature point in the current image; when the number of second feature points meets the preset condition, move the target area of the reference image to obtain the target area of the current image; and control the shooting device to face the three-dimensional space point corresponding to the target area of the current image.
  • the processor, before acquiring the first feature point in the target area of the reference image, is further configured to: divide the target area of the reference image into multiple grid areas.
  • when the processor divides the target area of the reference image into multiple grid areas, it is specifically configured to: divide the target area of the reference image into multiple grid areas according to the direction in which the movable platform surrounds the target object.
  • when the processor obtains the first feature points in the target area of the reference image, it is specifically configured to: obtain a preset number of feature points in each grid area as the first feature points.
  • the preset condition includes: the number of second feature points in the current image corresponding to the first feature points in a boundary grid area among the multiple grid areas of the reference image is zero; when the processor moves the target area of the reference image, it is specifically configured to: move the target area of the reference image in a direction away from the boundary grid area.
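This boundary-grid rule can be sketched as follows. The horizontal grid split, the two-sided `left`/`right` bookkeeping, and the fixed pixel step are illustrative assumptions; the text only specifies shifting the target area away from a boundary grid whose tracked points have vanished.

```python
def move_target_region(region, boundary_counts, step):
    """Shift the target area away from a boundary grid column whose tracked
    (second) feature points have dropped to zero.

    region:          (x0, y0, w, h) of the target area in the reference image.
    boundary_counts: {"left": n, "right": n} second-feature-point counts for
                     the leftmost/rightmost grid columns.
    step:            shift distance in pixels (assumed fixed here).
    """
    x0, y0, w, h = region
    if boundary_counts.get("left", 1) == 0:
        x0 += step   # left column emptied: move right, away from it
    elif boundary_counts.get("right", 1) == 0:
        x0 -= step   # right column emptied: move left, away from it
    return (x0, y0, w, h)

# The left strip lost all its tracked points, so the area shifts right.
moved = move_target_region((100, 50, 200, 120), {"left": 0, "right": 6}, step=20)
```

The effect is that, as the platform orbits and one edge of the tracked region leaves view, the target area slides toward the part of the object that is still trackable.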
  • the number of second feature points meets the preset condition, including: the number of second feature points is less than the number of first feature points.
  • when the processor determines the second feature point corresponding to the first feature point in the current image, it is specifically used to: based on the first feature point, use a tracking algorithm to obtain the feature points tracked in the current image; and filter the tracked feature points based on the epipolar constraint condition to obtain the second feature point.
  • before the processor controls the camera to face the three-dimensional space point corresponding to the target area of the current image, it is further used to: obtain the third feature point in the target area of the current image, the number of third feature points being greater than or equal to the number of second feature points.
  • when the processor controls the camera to face the three-dimensional space point corresponding to the target area of the current image, it is specifically used to: determine the three-dimensional space point corresponding to the target area of the current image according to the third feature point in the target area of the current image; and control the camera to face the three-dimensional space point corresponding to the target area of the current image.
  • when the processor determines the three-dimensional space point corresponding to the target area of the current image according to the third feature point in the target area of the current image, it is specifically used to: weight and average the depth values of the third feature points in the target area of the current image; and determine the three-dimensional space point corresponding to the target area of the current image according to the weighted average and the shooting direction of the shooting device when shooting the current image.
  • the weight corresponding to the third feature point close to the center of the target area of the current image is greater than the weight corresponding to the third feature point far from the center of the target area of the current image.
  • the processor when the processor obtains the third feature point in the target area of the current image, it is specifically used to:
  • when the processor controls the camera to face the three-dimensional space point corresponding to the target area of the current image, it is specifically used to: when the number of third feature points in the target area of the current image is greater than the preset threshold, control the camera to face the three-dimensional space point corresponding to the target area of the current image.
  • the processor is further configured to: when the number of third feature points in the target area of the current image is less than or equal to the preset threshold, control the camera to face the three-dimensional space point corresponding to the target area of the reference image.
  • the processor is further configured to: control the movement of the movable platform so that the distance between the shooting device and the three-dimensional space point corresponding to the target area of the current image is the surrounding radius.
  • the processor moves the target area of the reference image to obtain the target area of the current image, it is also used to: identify the target type of the target object; if the target type is a preset type, execute moving the target area of the reference image to obtain The step of the target area of the current image.
  • the control device 160 further includes a communication interface 163 connected to the processor.
  • the processor is also used for:
  • the initial image taken by the camera is sent to the control device of the movable platform, so that the control device displays the initial image; the user's instruction information for the target object is obtained through the communication interface 163, the instruction information being generated according to the user's click or frame selection operation on the initial image displayed by the control device; and the target area of the initial image is determined according to the instruction information.
  • when the processor determines the target area of the initial image according to the instruction information, it is specifically configured to: perform image segmentation on the initial image to obtain multiple segmented areas; and determine the target area of the initial image according to the segmented areas and the instruction information.
  • when the processor determines the target area of the initial image according to the segmented area and the instruction information, it is specifically used to: when the proportion of the image area in the initial image indicated by the instruction information that falls in the target segmented area is greater than the preset ratio, determine the target area of the initial image according to the target segmentation area, the target segmentation area being at least one of the multiple segmented areas.
  • the three-dimensional space point corresponding to the target area of the initial image is taken as the center point of the movable platform surrounding the target object, and the distance between the movable platform and the three-dimensional space point corresponding to the target area of the initial image is taken as the surrounding radius.
  • the specific principles and implementations of the control device for the movable platform provided in the embodiments of the present application are similar to those of the foregoing embodiments and will not be repeated here.
  • in the control device provided by this embodiment, the position where the first feature point is mapped in the next frame of image is determined, and the second feature point of the next frame of image is thereby obtained;
  • the target area of the reference image is moved to obtain the target area of the next frame of image;
  • the target area of the image is thus updated;
  • the shooting device is controlled to face the three-dimensional space point corresponding to the target area of the image; furthermore, the shooting device of the movable platform can always shoot the target object.
  • the above process is an image-based processing process, no complicated three-dimensional model needs to be established in advance, the processing method is fast and simple, and the user experience is better.
  • FIG. 17 is a structural diagram of a movable platform provided by an embodiment of the application.
  • the movable platform 170 includes a body, a power system, a camera 174, and a control device 178.
  • the power system includes at least one of the following: a motor 171, a propeller 172, and an electronic speed controller 173; the power system is installed on the fuselage to provide power. The specific principle and implementation of the control device 178 are similar to the foregoing embodiments and will not be repeated here.
  • the movable platform 170 also includes: a sensing system 175, a communication system 176, and a supporting device 177.
  • the supporting device 177 may specifically be a gimbal, and the camera 174 is mounted on the movable platform through the supporting device 177.
  • the control device 178 may specifically be a flight controller of the movable platform 170.
  • the embodiments of the present application also provide a computer-readable storage medium on which a computer program is stored, and the computer program is executed by a processor to implement the control method of the movable platform as described above.
  • the disclosed device and method may be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • the division of the units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit may be implemented in the form of hardware, or may be implemented in the form of hardware plus software functional units.
  • the above-mentioned integrated unit implemented in the form of a software functional unit may be stored in a computer readable storage medium.
  • the above-mentioned software functional unit is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to execute part of the steps of the method described in each embodiment of the present application.
  • the aforementioned storage media include: a USB flash drive, a mobile hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

A method and apparatus for controlling a movable platform, a device, and a storage medium. The method is used for controlling a movable platform to encircle a target object, the movable platform comprises a photographing apparatus, and the method comprises: acquiring first feature points in a target region of a reference image photographed by the photographing apparatus, wherein the reference image is a previous image frame of a current image, and the target region is an image region corresponding to the target object; determining second feature points corresponding to the first feature points within the current image; when the number of second feature points meets a preset condition, moving the target region of the reference image so as to obtain a target region of the current image; and controlling the photographing apparatus to be oriented towards three-dimensional space points corresponding to the target region of the current image. Thus, the photographing apparatus of the movable platform can continuously photograph the target object. Moreover, the described procedure is an image-based processing procedure that does not require a pre-established complicated three-dimensional model, the processing means is quick and simple, and the user experience is good.

Description

可移动平台的控制方法、装置、设备及存储介质Control method, device, equipment and storage medium of movable platform 技术领域Technical field
本申请实施例涉及控制领域,尤其涉及一种可移动平台的控制方法、装置、设备及存储介质。The embodiments of the present application relate to the field of control, and in particular, to a control method, device, device, and storage medium of a movable platform.
背景技术Background technique
环绕拍摄是常见的一种拍摄方案,拍摄过程中,可移动平台会相对于目标进行环绕运动,并可以在此过程利用可移动平台上的拍摄装置拍摄目标。如果需要实现环绕拍摄,不仅需要控制可移动平台在目标附近环绕,还要调整拍摄装置的朝向。操作者在手动完成这种拍摄时,需要较高的操作技巧。Surround shooting is a common shooting scheme. During the shooting process, the movable platform moves around the target, and the shooting device on the movable platform can be used to shoot the target during this process. If you need to achieve surround shooting, you not only need to control the movable platform to surround the target, but also adjust the orientation of the shooting device. When the operator manually completes this kind of shooting, he needs high operating skills.
为了方便操作者完成环绕拍摄,自主的“兴趣点环绕”功能应运而生,简称为POI(英文全称:Point of Interest)。现有技术中基于视觉的POI方案,一般针对于容易看到全貌的小目标,但对于较为庞大的目标,效果比较差,这是由于可移动平台在环绕较为庞大的目标时常常只能看到目标的一部分,难以观测到目标的全貌,局限性较强。一种针对庞大的目标的POI方案是基于预先建立的目标的三维模型,但这就需要操作者在环绕拍摄之前先建立目标的三维模型,比较繁琐,用户体验不友好。In order to facilitate the operator to complete the surround shooting, the autonomous "point of interest surround" function came into being, referred to as POI (English full name: Point of Interest). The vision-based POI solution in the prior art is generally aimed at small targets that are easy to see the whole picture, but for larger targets, the effect is relatively poor. This is because the movable platform often only sees the larger targets. Part of the target, it is difficult to observe the full picture of the target, and the limitation is strong. A POI solution for a huge target is based on a pre-established three-dimensional model of the target, but this requires the operator to establish a three-dimensional model of the target before the surround shooting, which is cumbersome and has an unfriendly user experience.
发明内容Summary of the invention
本申请实施例提供一种可移动平台的控制方法、装置、设备及存储介质,通过更新目标区域,使得可移动平台的拍摄装置始终可以拍摄到目标对象,有利于较为庞大的目标的环绕拍摄。The embodiments of the present application provide a control method, device, device, and storage medium for a movable platform. By updating the target area, the shooting device of the movable platform can always shoot the target object, which is conducive to surrounding shooting of a relatively large target.
A first aspect of the embodiments of the present application provides a method for controlling a movable platform, used to control the movable platform to orbit a target object, the movable platform including a photographing device, the method including:
acquiring a current image captured by the photographing device;
acquiring first feature points in a target area of a reference image, where the reference image is the frame preceding the current image and the target area is the image area corresponding to the target object;
determining second feature points in the current image corresponding to the first feature points;
when the number of second feature points meets a preset condition, moving the target area of the reference image to obtain a target area of the current image; and
controlling the photographing device to face a three-dimensional space point corresponding to the target area of the current image.
A second aspect of the embodiments of the present application provides a control apparatus for a movable platform, the control apparatus being used to control the movable platform to orbit a target object, the movable platform including a photographing device, and the control apparatus including a memory and a processor;
the memory is configured to store program code; and
the processor invokes the program code and, when the program code is executed, performs the following operations:
acquiring a current image captured by the photographing device;
acquiring first feature points in a target area of a reference image, where the reference image is the frame preceding the current image and the target area is the image area corresponding to the target object;
determining second feature points in the current image corresponding to the first feature points;
when the number of second feature points meets a preset condition, moving the target area of the reference image to obtain a target area of the current image; and
controlling the photographing device to face a three-dimensional space point corresponding to the target area of the current image.
A third aspect of the embodiments of the present application provides a movable platform, including:
a body;
a power system mounted on the body and configured to provide power;
a photographing device; and
the control apparatus described in the second aspect.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium on which a computer program is stored, the computer program being executed by a processor to implement the method described in the first aspect.
With the control method, apparatus, device, and storage medium for a movable platform provided in this embodiment, the positions to which the first feature points in the target area of the previous frame map in the following frame are determined based on those first feature points, thereby obtaining the second feature points of the following frame. When the number of second feature points meets a preset condition, for example when it is smaller than the number of first feature points, the target area of the previous frame is moved to obtain the target area of the following frame, thereby updating the target area of the image, and the photographing device is controlled to face the three-dimensional space point corresponding to the updated target area. As a result, the photographing device of the movable platform can always capture the target, which facilitates surround shooting of relatively large targets. Moreover, the above process does not require a three-dimensional model to be established in advance; the processing is fast and simple, and the user experience is good.
Description of the drawings
To describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a flowchart of a method for controlling a movable platform provided by an embodiment of the application;
FIG. 2 is a schematic diagram of an application scenario provided by an embodiment of the application;
FIG. 3 is a first schematic diagram of feature points provided by an embodiment of the application;
FIG. 4 is a second schematic diagram of feature points provided by an embodiment of the application;
FIG. 5 is a flowchart of a method for controlling a movable platform provided by another embodiment of the application;
FIG. 6 is a first schematic diagram of grid areas provided by an embodiment of the application;
FIG. 7 is a second schematic diagram of grid areas provided by an embodiment of the application;
FIG. 8 is a third schematic diagram of grid areas provided by an embodiment of the application;
FIG. 9 is a fourth schematic diagram of grid areas provided by an embodiment of the application;
FIG. 10 is a third schematic diagram of feature points provided by an embodiment of the application;
FIG. 11 is a schematic diagram of movement of a movable platform provided by an embodiment of the application;
FIG. 12 is a flowchart of a method for controlling a movable platform provided by yet another embodiment of the application;
FIG. 13 is a first schematic diagram of an initial image provided by an embodiment of the application;
FIG. 14 is a second schematic diagram of an initial image provided by an embodiment of the application;
FIG. 15 is a third schematic diagram of an initial image provided by an embodiment of the application;
FIG. 16 is a structural diagram of a control apparatus for a movable platform provided by an embodiment of the application;
FIG. 17 is a structural diagram of a movable platform provided by an embodiment of the application.
Reference signs:
20: drone;
21: photographing device;
22: gimbal;
23: wireless communication interface;
24: control apparatus;
31: target object;
41: current image;
42: target area of the reference image;
51, 52, 53: images;
160: control apparatus;
161: memory;
162: processor;
163: communication interface;
170: drone;
171: motor;
172: propeller;
173: electronic speed controller;
174: photographing device;
175: sensing system;
176: communication system;
177: supporting device;
178: control apparatus.
Detailed description of the embodiments
To make the objectives, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
It should be noted that when a component is described as being "fixed to" another component, it may be directly on the other component or an intervening component may be present. When a component is described as being "connected to" another component, it may be directly connected to the other component or an intervening component may be present.
Unless otherwise defined, all technical and scientific terms used herein have the same meanings as commonly understood by those skilled in the technical field of the present application. The terms used in the specification are only for the purpose of describing specific embodiments and are not intended to limit the application. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
Some embodiments of the present application are described in detail below with reference to the accompanying drawings. Provided there is no conflict, the following embodiments and the features in them may be combined with one another.
In surround shooting technology, the autonomous "point of interest orbit" function emerged to make surround shooting easier for the operator. Existing vision-based POI solutions target small objects whose whole outline is easy to see; for larger targets the effect is poor, because a movable platform orbiting a large target can usually see only part of it at a time and cannot observe the whole target, which is a strong limitation. One POI solution for large targets builds a three-dimensional model of the large target before the movable platform orbits it for shooting, and then performs surround shooting based on that pre-established model. However, this approach requires the operator to build the three-dimensional model of the target before surround shooting; it is cumbersome, the algorithm is complex, and the user experience is unfriendly.
The control method, apparatus, device, and storage medium for a movable platform provided in the embodiments of the present application can solve the above problems.
FIG. 1 is a flowchart of a method for controlling a movable platform provided by an embodiment of the application. The method provided in this embodiment is used to control the movable platform to orbit a target object, and the movable platform includes a photographing device. As shown in FIG. 1, the method may include:
Step S101: acquire a current image captured by the photographing device.
Exemplarily, the movable platform of this embodiment may specifically be an unmanned aerial vehicle, an unmanned ground robot, an unmanned ship, a mobile robot, or the like. For ease of explanation, a drone is used as the movable platform in the following illustration. It should be understood that the drone in this application can equally be replaced with any movable platform.
FIG. 2 is a schematic diagram of an application scenario provided by an embodiment of the application. As shown in FIG. 2, a drone 20 carries a photographing device 21, which may specifically be a camera, a video camera, or the like. Specifically, the photographing device 21 may be mounted on the drone 20 via a gimbal 22, or fixed to the drone 20 by some other fixing device. The photographing device 21 can capture video data or image data in real time and send it to a control apparatus 24 through a wireless communication interface 23 of the drone 20. The control apparatus 24 may specifically be a remote controller corresponding to the drone 20, or a user terminal such as a smartphone or a tablet computer. In addition, the drone 20 may also include an on-board control apparatus, which may include a general-purpose or dedicated processor. It should be noted that this is only a schematic description and does not limit the specific structure of the drone.
While the photographing device is capturing images, the movable platform can obtain the current image captured by the photographing device in real time.
Step S102: acquire first feature points in a target area of a reference image, where the reference image is the frame preceding the current image and the target area is the image area corresponding to the target object.
Exemplarily, while the photographing device on the movable platform is shooting the target object, the photographing device outputs the current image, and the movable platform determines that the frame preceding this image is the reference image. The movable platform determines the target area of the reference image, which is the image area corresponding to the target object, and then extracts the first feature points in that image area (for ease of distinction, the feature points of the reference image are called first feature points). For example, the movable platform obtains the first frame output by the photographing device, determines the target area of the first frame, and extracts the first feature points in that target area. The movable platform then obtains the second frame output by the photographing device and determines that the first frame is the reference image of the second frame.
The target object may be a large target; for example, the target object is a building or a group of buildings.
In an example, as shown in FIG. 2, the image captured by the photographing device 21 includes the target object 31 shown in FIG. 2.
In an example, when extracting the first feature points of the reference image, a corner detection algorithm may be used to detect the first feature points in the target area of the reference image. Corner detection algorithms include, for example, the FAST (Features from Accelerated Segment Test) algorithm, the SUSAN (Smallest Univalue Segment Assimilating Nucleus) algorithm, and the Harris operator.
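As an illustration, the segment test at the heart of a FAST-style corner detector can be sketched in a few lines. This is a simplified re-implementation for explanation only, not the algorithm as shipped in any library; the synthetic image, threshold, and arc length are illustrative assumptions.

```python
# 16 offsets of a radius-3 Bresenham circle, in ring order, as used by
# FAST-style segment tests (illustrative re-implementation).
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_fast_corner(img, x, y, t=20, n=9):
    """Segment test: the pixel is a corner if at least n contiguous circle
    pixels are all brighter than center+t or all darker than center-t."""
    c = img[y][x]
    states = []
    for dx, dy in CIRCLE:
        p = img[y + dy][x + dx]
        states.append(1 if p > c + t else (-1 if p < c - t else 0))
    # duplicate the ring so a contiguous run may wrap around the start
    longest, run_val, run_len = 0, 0, 0
    for s in states + states:
        if s != 0 and s == run_val:
            run_len += 1
        else:
            run_val, run_len = s, 1 if s != 0 else 0
        longest = max(longest, run_len if run_val != 0 else 0)
    return longest >= n

# synthetic frame: dark background with a bright square whose top-left
# corner sits at pixel (4, 4)
img = [[255 if (x >= 4 and y >= 4) else 0 for x in range(11)] for y in range(11)]
print(is_fast_corner(img, 4, 4))   # corner of the square -> True
print(is_fast_corner(img, 6, 6))   # interior of the square -> False
print(is_fast_corner(img, 4, 7))   # point on a straight edge -> False
```

Production detectors (e.g. those in common vision libraries) add non-maximum suppression and early-exit tests on top of this basic idea.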
Step S103: determine second feature points in the current image corresponding to the first feature points.
Exemplarily, as the movable platform moves and its position changes, the area that the photographing device on the platform can capture also changes, and so does the image output by the photographing device. For the current image, the feature points in the current image need to be determined (for ease of distinction, the feature points of the current image are called second feature points).
After the first feature points of the reference image are acquired, the positions of the first feature points in the current image need to be determined, thereby obtaining the second feature points in the current image.
In an example, since each first feature point corresponds to a three-dimensional space point on the target object, the position of that three-dimensional space point in the current image can be determined from the point itself, thereby obtaining the second feature point in the current image. For example, the three-dimensional space point corresponding to a first feature point is mapped onto the current image to obtain the second feature point corresponding to that first feature point.
In another example, after the first feature points of the reference image are acquired, a tracking algorithm is used to track the first feature points in the target area to determine their positions in the current image. The tracking algorithm is, for example, the KLT (Kanade-Lucas-Tomasi Feature Tracker) feature tracking algorithm.
For example, FIG. 3 is a first schematic diagram of feature points provided by an embodiment of the application. As shown in FIG. 3, a reference image 40 has a target area 42, and the target area 42 contains the target object 31. First feature points, for example feature point A, feature point B, and feature point C, can be extracted from the target area 42. While the movable platform orbits the target object, the photographing device outputs a current image 41. Using a tracking algorithm, the positions in the current image 41 of the first feature points in the target area 42 of the reference image 40 (for example, feature points A, B, and C) can be determined. The reference image 40 and the current image 41 may be adjacent or non-adjacent frames. FIG. 3 is only illustrative and limits neither the type of target object nor the number of feature points.
For example, FIG. 4 is a second schematic diagram of feature points provided by an embodiment of the application. As shown in FIG. 4, the movable platform moves from left to right (the direction of the arrow in FIG. 4). Reference numeral 31 denotes the target object, and 51 and 52 denote images successively output by the photographing device while it moves around the target object 31 in the direction of the arrow (from left to right). It will be understood that three-dimensional space points on the target object 31 can be mapped into the images 51 and 52: the mapping of such a point in image 51 may specifically be a feature point in the target area of image 51, and its mapping in image 52 may specifically be a feature point in the target area of image 52.
Point A and point B are three-dimensional space points on the target object 31. Points a1 and b1 denote feature points in image 51, with a1 corresponding to A and b1 corresponding to B. Points a2 and b2 denote feature points in image 52, with a2 corresponding to A and b2 corresponding to B.
While the movable platform moves from left to right, the position where point A maps to point a1 in image 51 and the position where point B maps to feature point b1 in image 51 need to be known. After the movable platform has moved, point A maps to the position of feature point a2 in image 52 and point B maps to the position of feature point b2 in image 52; it then follows that feature point a1 maps to feature point a2 in image 52, and feature point b1 maps to feature point b2 in image 52.
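The mapping just described, in which a three-dimensional point A projects to pixel a1 in image 51 and to pixel a2 in image 52 after the platform has moved, can be sketched with an idealized pinhole camera model. The camera pose, focal length, and point coordinates below are illustrative assumptions, not values from the disclosure.

```python
def project(P, cam_x, f=500.0):
    """Pinhole projection of world point P=(X, Y, Z) for a camera at
    (cam_x, 0, 0) looking down the +Z axis (no rotation, illustrative)."""
    X, Y, Z = P
    u = f * (X - cam_x) / Z
    v = f * Y / Z
    return (u, v)

A = (2.0, 1.0, 10.0)         # three-dimensional point on the target object
a1 = project(A, cam_x=0.0)   # pixel a1 in image 51
a2 = project(A, cam_x=1.0)   # pixel a2 in image 52, after moving right
print(a1, a2)                # as the platform moves right, the pixel moves left
```

This also illustrates why boundary feature points eventually leave the frame as the platform orbits: each projection drifts across the image until it crosses the image border.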
Step S104: when the number of second feature points meets a preset condition, move the target area of the reference image to obtain a target area of the current image.
Before the target area of the reference image is moved, it must be determined whether the number of second feature points in the current image meets a preset condition; for example, the preset condition is that the number of second feature points is smaller than the number of first feature points, that is, the number of feature points is decreasing.
Step S105: control the photographing device to face the three-dimensional space point corresponding to the target area of the current image.
Exemplarily, after step S104 the target area of the current image has been obtained. The target area of the current image is an image area that can correspond to a three-dimensional space point on the target object, so the photographing device can be controlled to face the three-dimensional space point corresponding to the target area of the current image, such that the photographing device can always capture the target object.
Furthermore, steps S101 to S105 may be performed repeatedly.
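A toy end-to-end sketch of one pass through the per-frame logic of steps S101 to S105 is given below. Feature tracking is simulated by shifting points with a known camera motion and dropping those that leave the frame; all names and numbers are illustrative assumptions, not the actual implementation.

```python
# Toy sketch of one pass of steps S101-S105. Feature "tracking" stands
# in for a real KLT tracker.

W, H = 100, 100            # image size (illustrative)

def track(points, motion):
    """S103 stand-in: predict each first feature point in the current
    image and keep only those that are still visible."""
    moved = [(x + motion, y) for x, y in points]
    return [(x, y) for x, y in moved if 0 <= x < W and 0 <= y < H]

def update_area(area, n_first, n_second, step=5):
    """S104: when the second-feature count drops below the first-feature
    count, slide the target area (here: to the right)."""
    x, y = area
    return (x + step, y) if n_second < n_first else (x, y)

area = (0, 30)                          # target area's top-left corner
pts = [(5, 40), (20, 50), (35, 45)]     # first feature points inside it
tracked = track(pts, motion=-10)        # platform orbits, scene shifts left
print(len(tracked))                     # (5, 40) left the frame -> 2
area = update_area(area, len(pts), len(tracked))
print(area)                             # area slid right -> (5, 30)
```

In the real method, the updated area would then drive the gimbal command of step S105, and the current frame becomes the reference frame for the next pass.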
In this embodiment, a method for controlling a movable platform to orbit a target object is provided: the current image captured by the photographing device is acquired; first feature points are acquired in the target area of a reference image, the reference image being the frame preceding the current image and the target area being the image area corresponding to the target object; second feature points corresponding to the first feature points are determined in the current image; when the number of second feature points meets a preset condition, the target area of the reference image is moved to obtain the target area of the current image; and the photographing device is controlled to face the three-dimensional space point corresponding to the target area of the current image. Based on the first feature points in the target area of the previous frame, the positions to which they map in the following frame can be determined, yielding the second feature points of the following frame. When the number of second feature points meets the preset condition, for example when there are fewer of them, the target area of the reference image is moved to obtain the target area of the following frame, so that the target area of the image is updated according to the distribution of the feature points. The photographing device is controlled to face the three-dimensional space point corresponding to the target area of the current image; as a result, the photographing device of the movable platform can always capture the target object. Moreover, the above process is an image-based process that does not require a complicated three-dimensional model to be established in advance; the processing is fast and simple, and the user experience is good.
FIG. 5 is a flowchart of a method for controlling a movable platform provided by another embodiment of the application. The method provided in this embodiment is used to control the movable platform to orbit a target object, and the movable platform includes a photographing device. As shown in FIG. 5, the method may include:
Step S201: acquire a current image captured by the photographing device.
Exemplarily, the movable platform of this embodiment may specifically be an unmanned aerial vehicle, an unmanned ground robot, an unmanned ship, a mobile robot, or the like. For ease of explanation, a drone is used as the movable platform in the following illustration. It should be understood that the drone in this application can equally be replaced with any movable platform.
For the application scenario of this embodiment, refer to FIG. 2.
For this step, refer to step S101 in FIG. 1; details are not repeated here.
Step S202: divide the target area of the reference image into multiple grid areas.
In an example, step S202 specifically includes: dividing the target area of the reference image into multiple grid areas according to the direction in which the movable platform orbits the target object.
Exemplarily, before the first feature points of the target area of the reference image are extracted, to reduce algorithm complexity and the amount of computation, the calculation is not performed for every pixel of the target area. Instead, the target area of the reference image can be divided into multiple grid areas, and subsequently, in step S203, no more than a preset number of feature points are extracted from each grid area.
When dividing the grid areas, the division may be performed according to the direction in which the movable platform orbits the target object, so as to divide the target area of the reference image into multiple grid areas.
For example, FIG. 6 is a first schematic diagram of grid areas provided by an embodiment of the application. As shown in FIG. 6, when the movable platform orbits the target object in a horizontal plane, the target area can be divided into a grid by horizontal lines, yielding multiple grid areas.
As another example, when the movable platform orbits the target object in a vertical plane, the target area can be divided into a grid by vertical lines, yielding multiple grid areas.
As yet another example, FIG. 7 is a second schematic diagram of grid areas provided by an embodiment of the application. As shown in FIG. 7, when the movable platform orbits the target object in an inclined plane, the corresponding inclined lines are determined according to the inclination direction indicated by the inclined plane, and the target area is divided into a grid along those inclined lines, yielding multiple grid areas. The inclination direction is not limited to the direction shown in FIG. 7.
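The grid division of step S202 can be sketched as a simple partition of the target area into cells. In practice the numbers of rows and columns would be chosen from the orbit direction (matching the horizontal, vertical, or inclined dividing lines described above); the function and values below are illustrative assumptions.

```python
def grid_cells(area, n_cols, n_rows):
    """Split a target area (x, y, w, h) into n_cols x n_rows grid cells,
    each returned as its own (x, y, w, h) box."""
    x, y, w, h = area
    cw, ch = w / n_cols, h / n_rows
    return [(x + i * cw, y + j * ch, cw, ch)
            for j in range(n_rows) for i in range(n_cols)]

cells = grid_cells((10, 20, 120, 60), n_cols=4, n_rows=2)
print(len(cells))   # 8 cells
print(cells[0])     # top-left cell: (10.0, 20.0, 30.0, 30.0)
```

An inclined orbit could be handled the same way after rotating the area's coordinates by the inclination angle, so that cell boundaries follow the inclined dividing lines.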
Step S203: acquire a preset number of feature points in each grid area as the first feature points in the target area of the reference image. The reference image is the frame preceding the current image, and the target area is the image area corresponding to the target object.
Exemplarily, when extracting first feature points in each grid area of the target area of the reference image, a corner detection algorithm may be used to detect feature points in each grid area while limiting the number of feature points during detection, so that a preset number of feature points is extracted from each grid area; the preset number of feature points in all the grid areas together serve as the first feature points in the target area of the reference image.
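Capping the number of feature points per grid area, as in step S203, can be sketched as keeping the k strongest candidates inside each cell. The corner scores below are toy values standing in for real corner-detector responses; all names are illustrative.

```python
def select_per_cell(candidates, cells, k=2):
    """Keep at most k strongest candidates (point, score) in each cell,
    mirroring the per-grid cap on detected corners."""
    kept = []
    for (cx, cy, cw, ch) in cells:
        inside = [(p, s) for p, s in candidates
                  if cx <= p[0] < cx + cw and cy <= p[1] < cy + ch]
        inside.sort(key=lambda ps: ps[1], reverse=True)   # strongest first
        kept.extend(p for p, _ in inside[:k])
    return kept

cells = [(0, 0, 10, 10), (10, 0, 10, 10)]
cands = [((1, 1), 0.9), ((2, 2), 0.5), ((3, 3), 0.7), ((12, 4), 0.8)]
print(select_per_cell(cands, cells))   # [(1, 1), (3, 3), (12, 4)]
```

The per-cell cap both bounds the computation and spreads the retained features evenly across the target area, which makes the later boundary-loss test of FIG. 8 meaningful for every grid column.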
Step S204: determine second feature points in the current image corresponding to the first feature points.
In an example, step S204 specifically includes:
based on the first feature points, using a tracking algorithm to obtain tracked feature points in the current image; and filtering the tracked feature points based on an epipolar constraint to obtain the second feature points.
Exemplarily, referring to the description of step S103 in FIG. 1, a tracking algorithm is used to track the first feature points in the target area, yielding the tracked feature points in the current image. However, due to computational error inherent in the tracking algorithm or other factors, the tracked feature points may be inaccurate, so they need to be filtered to obtain the second feature points.
In an example, the tracked feature points can be filtered based on an epipolar constraint to obtain the second feature points. In another example, the tracked feature points can be filtered based on the motion relationship of the feature points between the two successive frames. It should be noted that filtering based on the epipolar constraint can be combined with filtering based on the motion relationship of the feature points between the two successive frames to obtain more accurate second feature points.
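The epipolar filtering can be sketched as rejecting tracked pairs whose epipolar residual |x2ᵀ F x1| exceeds a tolerance. The matrix F below is constructed for a pure sideways translation with identity rotation and identity intrinsics, an assumption made only to keep the example self-contained; in practice F would be estimated from the platform's motion or from the matches themselves.

```python
def epipolar_residual(F, p1, p2):
    """|x2^T F x1| for a pixel match, in homogeneous coordinates."""
    x1 = (p1[0], p1[1], 1.0)
    x2 = (p2[0], p2[1], 1.0)
    Fx1 = [sum(F[i][j] * x1[j] for j in range(3)) for i in range(3)]
    return abs(sum(x2[i] * Fx1[i] for i in range(3)))

def filter_matches(F, matches, tol=1e-3):
    """Drop tracked pairs that violate the epipolar constraint."""
    return [(p1, p2) for p1, p2 in matches if epipolar_residual(F, p1, p2) < tol]

# For a pure sideways translation t = (1, 0, 0), identity rotation and
# intrinsics, F reduces to the skew matrix [t]_x (illustrative setup):
F = [[0.0, 0.0, 0.0],
     [0.0, 0.0, -1.0],
     [0.0, 1.0, 0.0]]

matches = [((10.0, 5.0), (8.0, 5.0)),    # stays on the same image row: consistent
           ((10.0, 5.0), (8.0, 9.0))]    # jumps rows: a tracking outlier
print(filter_matches(F, matches))        # keeps only the first pair
```

Under this F the residual reduces to |v1 - v2|, which is why a match that drifts off its epipolar line (here, its image row) is rejected.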
Step S205: When the number of second feature points meets a preset condition, move the target area of the reference image to obtain the target area of the current image.

In an example, the preset condition is that, for a grid area located on the boundary among the multiple grid areas of the reference image, the number of second feature points in the current image corresponding to its first feature points is zero. In this case, step S205 specifically includes: moving the target area of the reference image in a direction away from that boundary grid area.

In another example, the preset condition is that the number of second feature points is less than the number of first feature points.

Exemplarily, as the movable platform moves around the target object, its position relative to the target object changes continuously. For example, while an unmanned aerial vehicle flies around the target, the first feature points in the boundary grid areas of the reference image may be lost, that is, no corresponding second feature points can be found for them in the current image. Therefore, the target area of the reference image needs to be moved to obtain the target area of the current image.
As shown in FIG. 8, which is a third schematic diagram of grid areas provided by an embodiment of this application, diagram a in FIG. 8 shows the grid areas of the target area of the reference image, containing multiple first feature points, and diagram b in FIG. 8 shows the target area of the current image. The target area of the reference image is divided into multiple grid areas, and first feature points are extracted from each grid area. As the movable platform moves around the target object, the first feature points on the left boundary of diagram a in FIG. 8 (the solid points in diagram a) may be lost. Accordingly, when it is determined that the number of second feature points in the current image corresponding to the first feature points of a boundary grid area is zero or below a certain number, the target area of the reference image is moved to obtain the target area of the current image shown in diagram b of FIG. 8. The target area of the reference image is moved in a direction away from that boundary grid area, and the moving distance may be the width of the grid area occupied by the lost feature points. For example, as shown in diagram a of FIG. 8, the lost feature points occupy one grid column, so the target area is moved, relative to the target area of the reference image, by the distance represented by one grid column.
In an example, if the first feature points of the left-boundary grid area of the reference image are lost (that is, the number of corresponding second feature points in the current image is zero), the target area of the reference image is moved toward the right boundary of the image.

Or, if the first feature points of the right-boundary grid area of the reference image are lost (that is, the number of corresponding second feature points in the current image is zero), the target area of the reference image is moved toward the left boundary of the image.

Or, if the first feature points of the upper-boundary grid area of the reference image are lost (that is, the number of corresponding second feature points in the current image is zero), the target area of the reference image is moved toward the lower boundary of the image.

Or, if the first feature points of the upper-left-boundary grid area of the reference image are lost (that is, the number of corresponding second feature points in the current image is zero), the target area of the reference image is moved toward the lower-right boundary of the image.
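The shift itself can be sketched as follows. A minimal pure-Python sketch: the box representation `(x, y, w, h)`, the per-cell count grid, and the function name are assumptions for illustration, and only horizontal shifts are shown (vertical shifts are analogous).

```python
def shift_target_area(box, counts, cell_size):
    """Move the target area away from boundary grid columns in which all
    tracked feature points were lost: one cell width per fully empty
    boundary column.  box = (x, y, w, h); counts[r][c] is the number of
    surviving second feature points in grid cell (r, c)."""
    x, y, w, h = box
    cols = len(counts[0])
    col_sum = [sum(row[c] for row in counts) for c in range(cols)]
    lost_left = 0                      # empty columns on the left edge
    while lost_left < cols and col_sum[lost_left] == 0:
        lost_left += 1
    lost_right = 0                     # empty columns on the right edge
    while lost_right < cols and col_sum[cols - 1 - lost_right] == 0:
        lost_right += 1
    # a lost left edge moves the box right; a lost right edge moves it left
    return (x + (lost_left - lost_right) * cell_size, y, w, h)
```

For the FIG. 8 scenario (one empty left column), the box shifts right by exactly one cell width.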
Step S206: Acquire third feature points in the target area of the current image, where the number of third feature points is greater than or equal to the number of second feature points.

Exemplarily, after step S205, the target area of the current image includes at least one second feature point, a second feature point being a feature point corresponding to a first feature point of the reference image. Further, candidate feature points can be extracted from the target area of the current image. If no candidate feature points are extracted, the second feature points are determined to be all the feature points (that is, the third feature points) in the target area of the current image; in this case, the number of third feature points equals the number of second feature points. If candidate feature points are extracted, both the second feature points and the newly extracted candidate feature points are taken as all the feature points (that is, the third feature points) in the target area of the current image; in this case, the number of third feature points is greater than the number of second feature points.

FIG. 9 is a fourth schematic diagram of grid areas provided by an embodiment of this application. As shown in FIG. 9, diagram a shows the grid areas of the target area of the reference image, containing multiple first feature points, and diagram b shows the target area of the current image. As shown in diagram a of FIG. 9, the first feature points on its left boundary (the solid points in diagram a) are lost in the current image; the target area is therefore moved one grid column to the right, and new candidate feature points (the solid points in diagram b) are then extracted for the target area of the current image.
In an example, step S206 specifically includes: acquiring candidate feature points in the target area of the current image; and filtering the candidate feature points according to their depth values and/or semantic information to determine the third feature points.

The candidate feature points may include feature points that do not belong to the target object, that is, feature points whose corresponding three-dimensional space points lie not on the target object but on other objects. In this case, the candidate feature points need to be filtered. After filtering, all candidate feature points may have been filtered out, in which case the number of third feature points equals the number of second feature points; or only some of the candidate feature points may have been filtered out, in which case the number of third feature points is greater than the number of second feature points.

Filtering the candidate feature points includes the following implementations.
In one implementation, the depth value of each candidate feature point can be calculated, where the depth value represents the distance between the three-dimensional space point corresponding to the feature point and a reference point of the photographing device (for example, the optical center). If the depth value of a candidate feature point falls within a preset depth value range, the candidate feature point is determined to be a feature point in the target area of the current image and does not need to be filtered out; if its depth value falls outside the preset depth value range, the feature point is filtered out. The "preset depth value range" can be an empirical value; it represents the range of depth values of the target area, or, equivalently, the range of depth values of feature points corresponding to three-dimensional space points on the target object.

In another implementation, the semantic information of each candidate feature point can be acquired, where the semantic information represents the category of the three-dimensional space point corresponding to the feature point, for example, building, sky, or grass. Then, when the semantic information of a candidate feature point is determined to correspond to the target object, the candidate feature point is determined to be a feature point in the target area of the current image and does not need to be filtered out; when its semantic information is determined not to correspond to the target object, the feature point is filtered out. For example, if the target object is a building but the semantic information of a candidate feature point indicates that it corresponds to the grass or sky behind the building, that feature point needs to be filtered out.

It should be noted that, when filtering the candidate feature points, the depth values and the semantic information can be combined to decide whether to filter a candidate feature point. For example, if the depth value of a candidate feature point falls within the preset depth value range and its semantic information corresponds to the target object, the candidate feature point is determined to be a feature point in the target area of the current image and does not need to be filtered out; otherwise, the feature point is filtered out.
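The combined depth-plus-semantics filter can be sketched as follows. A minimal sketch: the candidate representation with `depth` and `label` fields is an assumption for illustration, not the patent's data layout.

```python
def filter_candidates(candidates, depth_range, target_label):
    """Keep candidate feature points whose depth value falls inside the
    preset depth range AND whose semantic label matches the target object;
    everything else (e.g. background sky or grass) is filtered out."""
    lo, hi = depth_range
    return [c for c in candidates
            if lo <= c["depth"] <= hi and c["label"] == target_label]
```

Depth-only or semantics-only filtering corresponds to dropping one of the two conditions in the comprehension.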
For example, FIG. 10 is a third schematic diagram of feature points provided by an embodiment of this application. As shown in FIG. 10, the movable platform moves from left to right (the direction indicated by the arrow in FIG. 10). Reference numeral 31 denotes the target object, and 51, 52, and 53 denote images successively output by the photographing device as it moves around the target object 31 in the direction of the arrow (from left to right). It can be understood that three-dimensional space points on the target object 31 can be mapped into the images 51, 52, and 53; the mapped point of such a three-dimensional space point in image 51 may specifically be a feature point in the target area of image 51, its mapped point in image 52 may specifically be a feature point in the target area of image 52, and its mapped point in image 53 may specifically be a feature point in the target area of image 53.

Points A, B, and C are three-dimensional space points on the target object 31. Points a1 and b1 are feature points in image 51, where a1 corresponds to A and b1 corresponds to B. Points a2, b2, and c2 are feature points in image 52, where a2 corresponds to A, b2 corresponds to B, and c2 corresponds to C. Points a3, b3, and c3 are feature points in image 53, where a3 corresponds to A, b3 corresponds to B, and c3 corresponds to C.

As the movable platform moves from left to right, it is necessary to know that point A maps to the position of point a1 in image 51 and point B maps to the position of feature point b1 in image 51. After the movable platform moves, point A maps to the position of feature point a2 in image 52 and point B maps to the position of feature point b2 in image 52; it is thereby known that feature point a1 maps to feature point a2 in image 52 and feature point b1 maps to feature point b2 in image 52, and a new feature point c2 (a new feature point obtained in step S206) is added to the target area of image 52. After the movable platform moves again, point A maps to the position of feature point a3 in image 53, point B maps to the position of feature point b3 in image 53, and point C maps to the position of feature point c3 in image 53; it is thereby known that feature point a2 maps to feature point a3 in image 53, feature point b2 maps to feature point b3 in image 53, and feature point c2 maps to feature point c3 in image 53. By incorporating new feature points, the photographing device can always capture the surface of the target object 31, ensuring that the target object is not lost during the surround process.
Step S207: Determine the three-dimensional space point corresponding to the target area of the current image according to the third feature points in the target area of the current image.

In an example, step S207 specifically includes: calculating a weighted average of the depth values of the third feature points in the target area of the current image; and determining the three-dimensional space point corresponding to the target area of the current image according to the weighted average and the shooting direction of the photographing device when capturing the current image.

In an example, a third feature point close to the center of the target area of the current image is given a larger weight than a third feature point far from the center of the target area of the current image.
Exemplarily, while the movable platform moves, the orientation of the photographing device on the movable platform needs to be adjusted.

After step S206, the target area of the current image has been obtained; if the number of third feature points in it is not zero, the photographing device can be controlled, according to the target area of the current image, to face the three-dimensional space point corresponding to that target area.

In one implementation, there are multiple third feature points in the target area of the current image, and the depth value of each third feature point can be calculated, where the depth value represents the distance between the three-dimensional space point corresponding to the feature point and a reference point of the photographing device (for example, the optical center). A weighted average of the depth values of the third feature points in the target area of the current image can then be computed. Meanwhile, the movable platform can obtain the shooting direction of the photographing device when capturing the current image. The movable platform then determines the three-dimensional space point corresponding to the target area of the current image based on the weighted average and the shooting direction. Since third feature points close to the center of the target area of the current image are adjacent to that center, they are given larger weights than third feature points far from the center.

In another implementation, a third feature point close to the center of the target area of the current image can be selected, and the movable platform determines the three-dimensional space point corresponding to the target area of the current image based on the depth value of that feature point and the shooting direction.
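The weighted-average computation of step S207 can be sketched as follows. The inverse-distance weighting is one plausible choice satisfying the requirement that weights decrease away from the target area's center; the specific weighting function, names, and the simple pinhole placement of the 3D point are assumptions for illustration.

```python
import numpy as np

def target_space_point(pts, depths, center, cam_pos, view_dir):
    """Compute the 3D point the camera should face: weighted-average the
    third feature points' depths (points nearer the target area's center
    get larger weights), then place the point at that distance from the
    camera along the shooting direction."""
    pts = np.asarray(pts, dtype=float)
    dist = np.linalg.norm(pts - np.asarray(center, dtype=float), axis=1)
    w = 1.0 / (1.0 + dist)                        # weight decays with distance
    mean_depth = float(np.sum(w * np.asarray(depths, dtype=float)) / np.sum(w))
    d = np.asarray(view_dir, dtype=float)
    d /= np.linalg.norm(d)                        # unit shooting direction
    return np.asarray(cam_pos, dtype=float) + mean_depth * d
```

The second implementation described above corresponds to replacing the weighted average with the depth of the single feature point nearest the center.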
Step S208: Control the photographing device to face the three-dimensional space point corresponding to the target area of the current image.

Controlling the photographing device to face the three-dimensional space point corresponding to the target area of the current image enables the photographing device of the movable platform to observe the surface of the target object that corresponds to the target area.

For example, FIG. 11 is a schematic diagram of the movement of the movable platform provided by an embodiment of this application. As shown in FIG. 11, the movable platform is, for example, an unmanned aerial vehicle. When the UAV is at starting point A, the target area of the captured image corresponds to region a of the target object in three-dimensional space; while it flies to point B, the target area is continuously updated, so that when the UAV is at point B, the target area of the captured image corresponds to region b of the target object in three-dimensional space.
In an example, step S208 specifically includes: when the number of third feature points in the target area of the current image is greater than a preset threshold, controlling the photographing device to face the three-dimensional space point corresponding to the target area of the current image.

Exemplarily, it has been determined in step S207 that there are multiple third feature points in the target area of the current image. In this step, their number can be further checked: when the number of third feature points is greater than the preset threshold, it is determined that the target area of the current image contains a relatively large number of feature points and the target area can continue to be updated, and the photographing device of the movable platform can then face the three-dimensional space point corresponding to the target area of the current image determined in step S207.
Step S209: When the number of third feature points in the target area of the current image is less than or equal to the preset threshold, control the photographing device to face the three-dimensional space point corresponding to the target area of the reference image.

Exemplarily, after step S206, if the number of third feature points is less than or equal to the preset threshold, it is determined that the target area of the current image contains relatively few feature points. In this case, the three-dimensional space point corresponding to the target area of the reference image (that is, the previous frame of the current image) is determined according to the feature points of that target area, and the photographing device is controlled to face it.

Exemplarily, when the target area of the current image contains few feature points, the movable platform can also be controlled to return home; or the shooting mode of the photographing device of the movable platform can be switched, for example, when the large-target shooting mode of steps S201-S208 can no longer be used, the device switches to another shooting mode.

Alternatively, the photographing device is controlled to face the three-dimensional space point corresponding to the target area of the reference image when the number of third feature points in the target area of the current image is less than or equal to the preset threshold and the numbers of third feature points in the target areas of multiple subsequent consecutive frames are also less than or equal to the preset threshold.
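The exit logic of steps S208/S209, including the consecutive-frame variant, can be sketched as follows. An illustrative sketch: the class name, the `patience` count, keeping the current aim during the interim low frames, and returning the reference point as the fallback action (rather than return-to-home or a mode switch) are all assumptions.

```python
class AimController:
    """Decide which 3D point the camera should face each frame.

    While the current target area keeps more than `threshold` third feature
    points, aim at the current image's 3D point; only after `patience`
    consecutive low-feature frames fall back to the reference image's point
    (at which stage the platform could instead return home or switch modes).
    """

    def __init__(self, threshold, patience=3):
        self.threshold = threshold
        self.patience = patience
        self.low_streak = 0

    def update(self, num_third_points, current_point, reference_point):
        if num_third_points > self.threshold:
            self.low_streak = 0          # healthy frame resets the streak
            return current_point
        self.low_streak += 1
        return reference_point if self.low_streak >= self.patience else current_point
```

Setting `patience=1` reproduces the simpler variant in which a single low-feature frame already triggers the fallback.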
In this embodiment, on the basis of the foregoing embodiments, when extracting the first feature points of the target area of the reference image, fewer than a preset number of feature points are extracted from each grid area of the reference image, rather than performing computation for every pixel or feature point in the target area, which reduces algorithm complexity and the amount of data computation. According to the first feature points of the target area of the reference image, the second feature points corresponding to them in the current image are obtained, and the target area of the reference image is moved to obtain the target area of the current image, thereby updating the target area. In addition, the current image is analyzed to enrich the feature points in its target area, and the orientation of the photographing device on the movable platform is adjusted according to those feature points so that the photographing device faces the three-dimensional space point corresponding to the target area of the current image; as a result, the photographing device of the movable platform can always observe the target object and capture the surface corresponding to the target area. Meanwhile, exit logic is provided: when the target area of the current image contains few feature points, the photographing device is controlled to face the three-dimensional space point corresponding to the target area of the reference image, or the movable platform is controlled to return home and end the current shooting, or the photographing device of the movable platform continues the shooting task in another shooting mode.
FIG. 12 is a flowchart of a method for controlling a movable platform provided by yet another embodiment of this application. The method provided in this embodiment is used to control a movable platform to move around a target object, and the movable platform includes a photographing device. As shown in FIG. 12, the method provided in this embodiment may include:

Step S301: Send an initial image captured by the photographing device to a control device of the movable platform, so that the control device displays the initial image.

Exemplarily, the movable platform of this embodiment may specifically be an unmanned aerial vehicle, an unmanned ground robot, an unmanned ship, a mobile robot, or the like. For convenience of explanation, the movable platform is illustrated here as an unmanned aerial vehicle. It can be understood that the unmanned aerial vehicle in this application can be equally replaced with any movable platform.

For the application scenario of this embodiment, refer to FIG. 2.

The initial image captured by the photographing device is sent to the control device of the movable platform, so that the control device displays the initial image.

Before the movable platform starts its surround movement, a frame of initial image is first captured by the photographing device. The movable platform then sends the initial image to the control device of the movable platform, the control device displays the initial image, and the user can view it. The control device may specifically be a remote controller corresponding to the movable platform, or a user terminal such as a smartphone or a tablet computer.
Step S302: Acquire the user's indication information for the target object, the indication information being generated according to the user's click operation or frame-selection operation on the initial image displayed by the control device.

The user can input indication information into the control device by touch, gesture control, voice, or the like; the indication information is used to indicate the target object in the initial image.

For example, the user clicks a position point on the initial image through an operating medium (for example, a finger or a stylus), and the control device receives the click operation input by the user.

Or, the user frame-selects a region of the initial image through an operating medium (for example, a finger or a stylus), and the control device receives the frame-selection operation input by the user.

For example, FIG. 13 is a first schematic diagram of a reference image provided by an embodiment of this application. As shown in FIG. 13, the user has frame-selected a region on the initial image with a finger.
Step S303: Determine the target area of the initial image according to the indication information.

In an example, step S303 specifically includes the following steps:

First step: perform image segmentation on the initial image to obtain multiple segmented regions.

Second step: determine the target area of the initial image according to the segmented regions and the indication information.

The second step specifically includes: when the proportion of the image region in the initial image indicated by the indication information within a target segmented region is greater than a preset proportion, determining the target area of the initial image according to the target segmented region, where the target segmented region is at least one of the multiple segmented regions.
In an example, image segmentation is performed on the initial image, for example clustering-based image segmentation, to obtain a segmentation result, where the segmentation result includes multiple segmented regions and the pixels within each segmented region have similar features.

Then, the proportion that the image region indicated by the indication information (that is, the image region in the initial image) occupies within a target segmented region is calculated, where the target segmented region is at least one of the multiple segmented regions. If this proportion is determined to be greater than the preset proportion, the target area of the initial image can be determined according to the target segmented region, ensuring that the target area contains the complete target object.

For example, suppose the image region indicated by the indication information is A1, and image segmentation of the initial image yields segmented regions B1, B2, B3, and B4. If the proportion of image region A1 within segmented region B1 is determined to be greater than the preset proportion, the target area of the initial image can be determined according to segmented region B1.

For example, FIG. 14 is a second schematic diagram of a reference image provided by an embodiment of this application. As shown in FIG. 14, the target object frame-selected by the user is a small house; the entire image is segmented, dividing it into multiple segmented regions, so that the small house in diagram a of FIG. 14 falls into one segmented region. Then, according to the user's frame selection and the segmentation result, all image information of the small house can be placed in the target area, as shown in FIG. 15.
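The proportion check of step S303 can be sketched as follows. For simplicity the sketch approximates both the user's selection and each segmented region by axis-aligned rectangles `(x, y, w, h)`; the patent's segmentation yields free-form regions, so this layout, the names, and the 0.5 default proportion are assumptions for illustration.

```python
def rect_intersection_area(a, b):
    """Overlap area of two axis-aligned rectangles given as (x, y, w, h)."""
    w = min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0])
    h = min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1])
    return max(w, 0) * max(h, 0)

def pick_target_area(user_box, segment_boxes, min_ratio=0.5):
    """Choose segmented regions in which the user's selection occupies more
    than min_ratio of the region's area, and return a target area covering
    all of them so the complete object is enclosed; if none qualifies, fall
    back to the user's selection itself."""
    chosen = [s for s in segment_boxes
              if rect_intersection_area(user_box, s) / (s[2] * s[3]) > min_ratio]
    if not chosen:
        return user_box
    x0 = min(s[0] for s in chosen)
    y0 = min(s[1] for s in chosen)
    x1 = max(s[0] + s[2] for s in chosen)
    y1 = max(s[1] + s[3] for s in chosen)
    return (x0, y0, x1 - x0, y1 - y0)
```

In the FIG. 14 example, the region containing the small house would be the chosen segment, so the returned target area covers the whole house even if the user's frame only partly did.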
步骤S304、获取拍摄装置拍摄得到的当前图像。Step S304: Acquire the current image captured by the camera.
示例性,本步骤可以参见图1和图5所示的实施例,不再赘述。Exemplarily, this step may refer to the embodiment shown in FIG. 1 and FIG. 5, and details are not described herein again.
步骤S305、获取参考图像的目标区域中的第一特征点,参考图像为当前图像的前一帧图像,目标区域为目标对象对应的图像区域。Step S305: Acquire the first feature point in the target area of the reference image, the reference image is the previous frame of the current image, and the target area is the image area corresponding to the target object.
示例性,本步骤可以参见图1和图5所示的实施例,不再赘述。Exemplarily, this step may refer to the embodiment shown in FIG. 1 and FIG. 5, and details are not described herein again.
步骤S306、确定当前图像中与第一特征点对应的第二特征点。Step S306: Determine a second feature point corresponding to the first feature point in the current image.
示例性,本步骤可以参见图1和图5所示的实施例,不再赘述。Exemplarily, this step may refer to the embodiment shown in FIG. 1 and FIG. 5, and details are not described herein again.
步骤S307、识别目标对象的目标类型;若目标类型是预设类型,则执行移动参考图像的目标区域以得到当前图像的目标区域的步骤。Step S307: Identify the target type of the target object; if the target type is a preset type, execute the step of moving the target area of the reference image to obtain the target area of the current image.
示例性地，在移动参考图像的目标区域之前，需要识别目标区域中的目标对象是否为预设的目标类型。举例来说，需要识别目标区域中的目标对象是否为大型建筑物。Exemplarily, before moving the target area of the reference image, it is necessary to identify whether the target object in the target area is of a preset target type. For example, it is necessary to identify whether the target object in the target area is a large building.
若是,则执行步骤S308。若否,则不需要执行步骤S308,可以控制拍摄装置朝向参考图像的目标区域对应的三维空间点。If yes, step S308 is executed. If not, there is no need to perform step S308, and the camera can be controlled to face the three-dimensional space point corresponding to the target area of the reference image.
步骤S308、当第二特征点的数量满足预设条件时,移动参考图像的目标区域以得到当前图像的目标区域。Step S308: When the number of second feature points meets the preset condition, move the target area of the reference image to obtain the target area of the current image.
示例性,本步骤可以参见图1和图5所示的实施例,不再赘述。Exemplarily, this step may refer to the embodiment shown in FIG. 1 and FIG. 5, and details are not described herein again.
步骤S309、控制拍摄装置朝向当前图像的目标区域对应的三维空间点。Step S309: Control the shooting device to face the three-dimensional space point corresponding to the target area of the current image.
示例性,本步骤可以参见图1和图5所示的实施例,不再赘述。Exemplarily, this step may refer to the embodiment shown in FIG. 1 and FIG. 5, and details are not described herein again.
步骤S310、控制可移动平台的移动,以使得拍摄装置与当前图像的目标区域对应的三维空间点之间的距离为环绕半径。Step S310: Control the movement of the movable platform so that the distance between the shooting device and the three-dimensional space point corresponding to the target area of the current image is the surrounding radius.
示例性,步骤S309中是控制拍摄装置的朝向,由于拍摄装置承载在可移动平台上,还需要控制可移动平台的移动。For example, in step S309, the orientation of the camera is controlled. Since the camera is carried on the movable platform, it is also necessary to control the movement of the movable platform.
例如，以无人机为例；若基于目标区域中的第三特征点计算得到的距离值小于环绕半径，则控制无人机向后飞行；“向后”指的是目标与无人机之间连线的第一方向，第一方向为目标对象指向无人机的方向。若该距离值大于环绕半径，则控制无人机向前飞行；“向前”指的是目标与无人机之间连线的第二方向，第二方向为无人机指向目标对象的方向。如此，可以通过对无人机的控制使得无人机和目标对象表面的距离始终等于环绕半径。其中，该环绕半径可以是用户通过控制装置输入的，也可以将可移动平台与初始图像的目标区域对应的三维空间点之间的距离作为环绕半径。For example, take a drone as an example: if the distance value calculated based on the third feature points in the target area is less than the surrounding radius, the drone is controlled to fly backward, where "backward" refers to the first direction of the line between the target and the drone, the first direction being the direction from the target object toward the drone. If the distance value is greater than the surrounding radius, the drone is controlled to fly forward, where "forward" refers to the second direction of the line between the target and the drone, the second direction being the direction from the drone toward the target object. In this way, by controlling the drone, the distance between the drone and the surface of the target object can always be kept equal to the surrounding radius. The surrounding radius may be input by the user through the control device, or the distance between the movable platform and the three-dimensional space point corresponding to the target area of the initial image may be used as the surrounding radius.
在上述控制的过程中,使得拍摄装置与当前图像的目标区域对应的三维空间点之间的距离,始终为上述环绕半径。也就是说,在可移动平台环绕建筑群拍摄时,不仅始终可以拍摄到建筑群的表面,还可以与建筑群的表面保持一定的距离。In the above control process, the distance between the shooting device and the three-dimensional space point corresponding to the target area of the current image is always the above-mentioned surrounding radius. That is to say, when the movable platform surrounds the building group to shoot, not only the surface of the building group can always be photographed, but also a certain distance can be kept from the surface of the building group.
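A minimal sketch of the radial rule described above, which keeps the camera-to-surface distance at the surrounding radius. The discrete {-1, 0, +1} command and the tolerance band are assumptions for illustration; an actual flight controller would close a continuous loop on the distance error:

```python
def radial_command(distance_to_surface, surround_radius, tol=0.1):
    """Return +1 to fly forward (toward the target), -1 to fly backward
    (away from it), or 0 to hold, so that the distance between the camera
    and the target-area 3D point tracks the surrounding radius."""
    error = distance_to_surface - surround_radius
    if error > tol:      # too far from the surface: fly forward
        return 1
    if error < -tol:     # too close to the surface: fly backward
        return -1
    return 0
```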
另一个示例中,可以将初始图像的目标区域对应的三维空间点作为可移动平台环绕目标对象的中心点。In another example, the three-dimensional space point corresponding to the target area of the initial image may be used as the center point of the movable platform surrounding the target object.
由于初始图像的目标区域对应了一个三维空间点（例如，初始图像的目标区域中的各特征点的加权平均值所对应的三维空间点；或者，初始图像的目标区域的中心位置所对应的三维空间点），可以将初始图像的目标区域所对应的三维空间点作为可移动平台环绕的中心点。此时，随着可移动平台的移动，环绕的中心点是不变的。也就是说，在可移动平台环绕建筑群拍摄时，拍摄装置始终拍摄到建筑群的表面；然而，可移动平台的环绕轨迹中心始终是初始图像的目标区域所对应的三维空间点。Since the target area of the initial image corresponds to a three-dimensional space point (for example, the point corresponding to the weighted average of the feature points in the target area of the initial image, or the point corresponding to the center position of that target area), the three-dimensional space point corresponding to the target area of the initial image can be used as the center point around which the movable platform orbits. In this case, as the movable platform moves, the center point of the orbit remains unchanged. That is to say, when the movable platform orbits a building group while shooting, the camera always captures the surface of the building group; however, the center of the movable platform's orbiting trajectory is always the three-dimensional space point corresponding to the target area of the initial image.
图16为本申请实施例提供的可移动平台的控制装置的结构图。控制装置用于控制可移动平台环绕目标对象，可移动平台包括拍摄装置。如图16所示，控制装置160包括：存储器161、处理器162；FIG. 16 is a structural diagram of a control device for a movable platform provided by an embodiment of the application. The control device is used to control the movable platform to surround the target object, and the movable platform includes a photographing device. As shown in FIG. 16, the control device 160 includes: a memory 161 and a processor 162;
存储器161用于存储程序代码；The memory 161 is used to store program codes;
处理器162，调用程序代码，当程序代码被执行时，用于执行以下操作：获取拍摄装置拍摄得到的当前图像；获取参考图像的目标区域中的第一特征点，参考图像为当前图像的前一帧图像，目标区域为目标对象对应的图像区域；确定当前图像中与第一特征点对应的第二特征点；当第二特征点的数量满足预设条件时，移动参考图像的目标区域以得到当前图像的目标区域；控制拍摄装置朝向当前图像的目标区域对应的三维空间点。The processor 162 calls the program code, and when the program code is executed, is configured to perform the following operations: obtain the current image captured by the photographing device; obtain the first feature points in the target area of the reference image, where the reference image is the frame preceding the current image and the target area is the image area corresponding to the target object; determine the second feature points in the current image that correspond to the first feature points; when the number of second feature points meets a preset condition, move the target area of the reference image to obtain the target area of the current image; and control the photographing device to face the three-dimensional space point corresponding to the target area of the current image.
一个示例中,处理器在获取参考图像的目标区域中的第一特征点之前,还用于:将参考图像的目标区域划分为多个栅格区域。In an example, before acquiring the first feature point in the target area of the reference image, the processor is further configured to: divide the target area of the reference image into multiple grid areas.
一个示例中,处理器将参考图像的目标区域划分为多个栅格区域时,具体用于:根据可移动平台环绕目标对象的方向,将参考图像的目标区域划分为多个栅格区域。In an example, when the processor divides the target area of the reference image into multiple grid areas, it is specifically configured to: divide the target area of the reference image into multiple grid areas according to the direction in which the movable platform surrounds the target object.
一个示例中,处理器获取参考图像的目标区域中的第一特征点时,具体用于:在每个栅格区域中获取预设数量的特征点作为第一特征点。In an example, when the processor obtains the first feature points in the target area of the reference image, it is specifically configured to: obtain a preset number of feature points in each grid area as the first feature points.
一个示例中，预设条件包括：参考图像的多个栅格区域中位于边界的栅格区域中的第一特征点在当前图像中对应的第二特征点的数量为零；处理器移动参考图像的目标区域时，具体用于：朝远离边界的栅格区域的方向移动参考图像的目标区域。In an example, the preset condition includes: the number of second feature points in the current image corresponding to the first feature points in a boundary grid area among the multiple grid areas of the reference image is zero; when the processor moves the target area of the reference image, it is specifically configured to: move the target area of the reference image in a direction away from the boundary grid area.
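The boundary-grid rule in this example can be sketched as follows. Illustrative only; representing the target area as an axis-aligned rectangle, counting second feature points per grid column, and shifting by a fixed pixel step are assumptions (the orientation of the columns would follow the platform's direction around the target):

```python
def shift_target_region(region, column_counts, step):
    """Move the target area away from a boundary grid column whose first
    feature points no longer have corresponding second feature points.

    region:        (x, y, w, h) of the target area in the reference image.
    column_counts: per-column counts of second feature points in the
                   current image; index 0 is the left boundary column,
                   index -1 the right boundary column.
    step:          shift in pixels (e.g. one grid-cell width, assumed).
    """
    x, y, w, h = region
    if column_counts[0] == 0:      # left boundary lost: move right, away from it
        x += step
    elif column_counts[-1] == 0:   # right boundary lost: move left, away from it
        x -= step
    return (x, y, w, h)
```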
一个示例中,第二特征点的数量满足预设条件,包括:第二特征点的数量小于第一特征点的数量。In an example, the number of second feature points meets the preset condition, including: the number of second feature points is less than the number of first feature points.
一个示例中，处理器确定当前图像中与第一特征点对应的第二特征点时，具体用于：基于第一特征点，利用跟踪算法获取当前图像中跟踪得到的特征点；基于极线约束条件对跟踪得到的特征点进行过滤，以得到第二特征点。In an example, when the processor determines the second feature points in the current image that correspond to the first feature points, it is specifically configured to: based on the first feature points, use a tracking algorithm to obtain tracked feature points in the current image; and filter the tracked feature points based on the epipolar constraint to obtain the second feature points.
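The epipolar filtering step can be sketched with the point-to-epipolar-line distance test below, assuming the fundamental matrix F between the reference and current images is already available (e.g. from the platform's motion); the pixel threshold is an assumption:

```python
import numpy as np

def epipolar_filter(pts1, pts2, F, thresh=1.0):
    """Keep tracked pairs that satisfy the epipolar constraint x2^T F x1 = 0.

    pts1, pts2: Nx2 matched points (first feature points / tracked points).
    F:          3x3 fundamental matrix from the reference to the current image.
    Returns a boolean mask: True where the distance from pts2 to its
    epipolar line is below `thresh` pixels.
    """
    n = len(pts1)
    h1 = np.hstack([np.asarray(pts1, float), np.ones((n, 1))])  # homogeneous
    h2 = np.hstack([np.asarray(pts2, float), np.ones((n, 1))])
    lines = h1 @ F.T                          # epipolar lines l = F x1 in image 2
    num = np.abs(np.sum(lines * h2, axis=1))  # |x2 . l|
    den = np.sqrt(lines[:, 0] ** 2 + lines[:, 1] ** 2)
    return num / np.maximum(den, 1e-12) < thresh
```

The surviving points serve as the second feature points; pairs rejected here are typically mistracks that drifted off the target.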
一个示例中，处理器控制拍摄装置朝向当前图像的目标区域对应的三维空间点之前，还用于：获取当前图像的目标区域中的第三特征点，第三特征点的数量大于或等于第二特征点的数量。In an example, before controlling the photographing device to face the three-dimensional space point corresponding to the target area of the current image, the processor is further configured to: obtain third feature points in the target area of the current image, where the number of third feature points is greater than or equal to the number of second feature points.
一个示例中，处理器控制拍摄装置朝向当前图像的目标区域对应的三维空间点时，具体用于：根据当前图像的目标区域中的第三特征点，确定当前图像的目标区域对应的三维空间点；控制拍摄装置朝向当前图像的目标区域对应的三维空间点。In an example, when the processor controls the photographing device to face the three-dimensional space point corresponding to the target area of the current image, it is specifically configured to: determine the three-dimensional space point corresponding to the target area of the current image according to the third feature points in that target area; and control the photographing device to face that three-dimensional space point.
一个示例中，处理器根据当前图像的目标区域中的第三特征点，确定当前图像的目标区域对应的三维空间点时，具体用于：对当前图像的目标区域中的第三特征点的深度值进行加权平均；根据加权平均值和拍摄装置拍摄当前图像时的拍摄方向，确定当前图像的目标区域对应的三维空间点。In an example, when the processor determines the three-dimensional space point corresponding to the target area of the current image according to the third feature points in that target area, it is specifically configured to: perform a weighted average on the depth values of the third feature points in the target area of the current image; and determine the three-dimensional space point corresponding to the target area of the current image according to the weighted average and the shooting direction of the photographing device when the current image was captured.
一个示例中，靠近当前图像的目标区域的中心的第三特征点对应的权重大于远离当前图像的目标区域的中心的第三特征点对应的权重。In an example, the weight corresponding to a third feature point close to the center of the target area of the current image is greater than the weight corresponding to a third feature point far from that center.
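A sketch of the center-weighted depth average from these examples. The inverse-distance weight function is an assumption; the embodiment only requires that points nearer the center of the target area weigh more:

```python
import numpy as np

def weighted_target_depth(points, depths, center):
    """Weighted average of third-feature-point depths, weighting points
    near the target-area center more heavily.

    points: Nx2 pixel coordinates of the third feature points.
    depths: N depth values of those points.
    center: (cx, cy) center of the current image's target area.
    """
    dist = np.linalg.norm(np.asarray(points, float) - np.asarray(center, float), axis=1)
    w = 1.0 / (1.0 + dist)   # closer to the center -> larger weight (assumed form)
    return float(np.sum(w * np.asarray(depths, float)) / np.sum(w))
```

Combined with the shooting direction of the camera, this averaged depth locates the three-dimensional space point that the photographing device is then controlled to face.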
一个示例中,处理器获取当前图像的目标区域中的第三特征点时,具体用于:In an example, when the processor obtains the third feature point in the target area of the current image, it is specifically used to:
获取当前图像的目标区域中的备选特征点;根据备选特征点的深度值和/或备选特征点的语义信息,对备选特征点进行过滤以确定第三特征点。Obtain candidate feature points in the target area of the current image; filter the candidate feature points according to the depth value of the candidate feature points and/or the semantic information of the candidate feature points to determine the third feature point.
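The candidate filtering in this example can be sketched as below; the depth range, the dict representation, and the label set are assumptions chosen for illustration (e.g. rejecting sky or far-background points whose depth is implausible for the framed target):

```python
def filter_candidates(candidates, depth_range=(1.0, 100.0), allowed_labels=None):
    """Filter candidate feature points by depth value and/or semantic label
    to obtain the third feature points.

    candidates: list of dicts {"pt": (u, v), "depth": float, "label": str}.
    """
    lo, hi = depth_range
    kept = []
    for c in candidates:
        if not (lo <= c["depth"] <= hi):
            continue   # implausible depth (e.g. sky / far background)
        if allowed_labels is not None and c["label"] not in allowed_labels:
            continue   # semantics do not match the target object
        kept.append(c)
    return kept
```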
一个示例中，处理器控制拍摄装置朝向当前图像的目标区域对应的三维空间点时，具体用于：当前图像的目标区域中的第三特征点的数量大于预设阈值时，控制拍摄装置朝向当前图像的目标区域对应的三维空间点。In an example, when the processor controls the photographing device to face the three-dimensional space point corresponding to the target area of the current image, it is specifically configured to: when the number of third feature points in the target area of the current image is greater than a preset threshold, control the photographing device to face the three-dimensional space point corresponding to the target area of the current image.
一个示例中,处理器,还用于:当前图像的目标区域中的第三特征点的数量小于或等于预设阈值时,控制拍摄装置朝向参考图像的目标区域对应的三维空间点。In an example, the processor is further configured to: when the number of third feature points in the target area of the current image is less than or equal to the preset threshold, control the camera to face the three-dimensional space point corresponding to the target area of the reference image.
一个示例中,处理器,还用于:控制可移动平台的移动,以使得拍摄装置与当前图像的目标区域对应的三维空间点之间的距离为环绕半径。In an example, the processor is further configured to: control the movement of the movable platform so that the distance between the shooting device and the three-dimensional space point corresponding to the target area of the current image is the surrounding radius.
一个示例中，处理器移动参考图像的目标区域以得到当前图像的目标区域之前，还用于：识别目标对象的目标类型；若目标类型是预设类型，则执行移动参考图像的目标区域以得到当前图像的目标区域的步骤。In an example, before moving the target area of the reference image to obtain the target area of the current image, the processor is further configured to: identify the target type of the target object; and if the target type is a preset type, perform the step of moving the target area of the reference image to obtain the target area of the current image.
一个示例中,控制装置160还包括:通讯接口163,通讯接口163与处理器连接。处理器,还用于:In an example, the control device 160 further includes a communication interface 163, which is connected to the processor. The processor is also used for:
将拍摄装置拍摄得到的初始图像发送给可移动平台的控制装置，以使得控制装置显示初始图像；通过通讯接口163获取用户对目标对象的指示信息，指示信息是根据用户对控制装置显示的初始图像的点选操作或框选操作生成的；根据指示信息，确定初始图像的目标区域。Send the initial image captured by the photographing device to the control device of the movable platform, so that the control device displays the initial image; obtain, through the communication interface 163, the user's indication information for the target object, where the indication information is generated according to the user's click operation or frame-selection operation on the initial image displayed by the control device; and determine the target area of the initial image according to the indication information.
一个示例中,处理器根据指示信息,确定初始图像的目标区域时,具体用于:对初始图像进行图像分割以获取多个分割区域;根据分割区域和指示信息,确定初始图像的目标区域。In an example, when the processor determines the target area of the initial image according to the instruction information, it is specifically configured to: perform image segmentation on the initial image to obtain multiple segmented areas; and determine the target area of the initial image according to the segmented areas and the instruction information.
一个示例中，处理器根据分割区域和指示信息，确定初始图像的目标区域时，具体用于：当指示信息表示的初始图像中的图像区域在目标分割区域中所占的比例大于预设比例时，根据目标分割区域，确定初始图像的目标区域，目标分割区域为多个分割区域中的至少一个。In an example, when the processor determines the target area of the initial image according to the segmented regions and the indication information, it is specifically configured to: when the proportion that the image area in the initial image indicated by the indication information occupies in a target segmented region is greater than a preset ratio, determine the target area of the initial image according to the target segmented region, where the target segmented region is at least one of the multiple segmented regions.
一个示例中，将初始图像的目标区域对应的三维空间点作为可移动平台环绕目标对象的中心点，将可移动平台与初始图像的目标区域对应的三维空间点之间的距离作为环绕半径。In one example, the three-dimensional space point corresponding to the target area of the initial image is taken as the center point around which the movable platform orbits the target object, and the distance between the movable platform and that three-dimensional space point is taken as the surrounding radius.
本申请实施例提供的可移动平台的控制装置的具体原理和实现方式均与上述实施例类似,此处不再赘述。The specific principles and implementation manners of the control device for the movable platform provided in the embodiments of the present application are similar to the foregoing embodiments, and will not be repeated here.
本实施例,通过基于前一帧图像的目标区域中的第一特征点,确定出第一特征点映射到后一帧图像上的位置,进而得到后一帧图像的第二特征点;在第二特征点的数量满足预设条件时,例如,在第二特征点的个数较少时,移动参考图像的目标区域,得到后一帧图像的目标区域。从而基于特征点的分布情况,更新图像的目标区域。控制拍摄装置朝向图像的目标区域对应的三维空间点;进而,可移动平台的拍摄装置始终可以拍摄到目标对象。并且,上述过程是基于图像的处理过程,不需要预先建立复杂的三维模型,处理方式快速简单,用户体验较好。In this embodiment, based on the first feature point in the target area of the previous frame of image, the position where the first feature point is mapped to the next frame of image is determined, and then the second feature point of the next frame of image is obtained; When the number of the second feature points meets the preset condition, for example, when the number of the second feature points is small, the target area of the reference image is moved to obtain the target area of the next frame of image. Thus, based on the distribution of feature points, the target area of the image is updated. The shooting device is controlled to face the three-dimensional space point corresponding to the target area of the image; furthermore, the shooting device of the movable platform can always shoot the target object. In addition, the above process is an image-based processing process, no complicated three-dimensional model needs to be established in advance, the processing method is fast and simple, and the user experience is better.
本申请实施例提供一种可移动平台，该可移动平台具体可以是无人机。图17为本申请实施例提供的可移动平台的结构图，如图17所示，可移动平台170包括：机身、动力系统、拍摄装置174和控制装置178，动力系统包括如下至少一种：电机171、螺旋桨172和电子调速器173，动力系统安装在机身，用于提供动力；控制装置178的具体原理和实现方式均与上述实施例类似，此处不再赘述。An embodiment of the present application provides a movable platform, which may specifically be an unmanned aerial vehicle. FIG. 17 is a structural diagram of the movable platform provided by an embodiment of the application. As shown in FIG. 17, the movable platform 170 includes: a body, a power system, a photographing device 174, and a control device 178. The power system includes at least one of the following: a motor 171, a propeller 172, and an electronic speed controller 173, and is installed on the body to provide power. The specific principles and implementation of the control device 178 are similar to the foregoing embodiments, and will not be repeated here.
另外，如图17所示，可移动平台170还包括：传感系统175、通信系统176、支撑设备177，其中，支撑设备177具体可以是云台，拍摄装置174通过支撑设备177搭载在可移动平台170上。In addition, as shown in FIG. 17, the movable platform 170 further includes: a sensing system 175, a communication system 176, and a supporting device 177, where the supporting device 177 may specifically be a gimbal, and the photographing device 174 is mounted on the movable platform 170 through the supporting device 177.
在一些实施例中,控制装置178具体可以是可移动平台170的飞行控制器。In some embodiments, the control device 178 may specifically be a flight controller of the movable platform 170.
本申请实施例提供的可移动平台的具体原理和实现方式均与上述实施例类似,此处不再赘述。The specific principles and implementation manners of the movable platform provided in the embodiments of the present application are similar to the foregoing embodiments, and will not be repeated here.
本申请实施例还提供一种计算机可读存储介质,其上存储有计算机程序,所述计算机程序被处理器执行以实现如上所述的可移动平台的控制装置。The embodiments of the present application also provide a computer-readable storage medium on which a computer program is stored, and the computer program is executed by a processor to realize the control device of the movable platform as described above.
在本申请所提供的几个实施例中,应该理解到,所揭露的装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性 的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。In the several embodiments provided in this application, it should be understood that the disclosed device and method may be implemented in other ways. For example, the device embodiments described above are merely illustrative. For example, the division of the units is only a logical function division, and there may be other divisions in actual implementation, for example, multiple units or components may be combined or It can be integrated into another system, or some features can be ignored or not implemented. In addition, the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用硬件加软件功能单元的形式实现。In addition, the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The above-mentioned integrated unit may be implemented in the form of hardware, or may be implemented in the form of hardware plus software functional units.
上述以软件功能单元的形式实现的集成的单元,可以存储在一个计算机可读取存储介质中。上述软件功能单元存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)或处理器(processor)执行本申请各个实施例所述方法的部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。The above-mentioned integrated unit implemented in the form of a software functional unit may be stored in a computer readable storage medium. The above-mentioned software functional unit is stored in a storage medium, and includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to execute the method described in each embodiment of the present application. Part of the steps. The aforementioned storage media include: U disk, mobile hard disk, read-only memory (Read-Only Memory, ROM), random access memory (Random Access Memory, RAM), magnetic disk or optical disk and other media that can store program code .
本领域技术人员可以清楚地了解到,为描述的方便和简洁,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将装置的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。上述描述的装置的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。Those skilled in the art can clearly understand that for the convenience and conciseness of the description, only the division of the above-mentioned functional modules is used as an example. In practical applications, the above-mentioned functions can be allocated by different functional modules as required, that is, the device The internal structure is divided into different functional modules to complete all or part of the functions described above. For the specific working process of the device described above, reference may be made to the corresponding process in the foregoing method embodiment, which is not repeated here.
最后应说明的是:以上各实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述各实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分或者全部技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的范围。Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the application, not to limit them; although the application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that: It is still possible to modify the technical solutions described in the foregoing embodiments, or equivalently replace some or all of the technical features; and these modifications or replacements do not make the essence of the corresponding technical solutions deviate from the technical solutions of the embodiments of the present application. Scope.

Claims (43)

  1. 一种可移动平台的控制方法,用于控制所述可移动平台环绕目标对象,所述可移动平台包括拍摄装置,其特征在于,包括:A control method of a movable platform, which is used to control the movable platform to surround a target object, the movable platform includes a photographing device, and is characterized in that it includes:
    获取所述拍摄装置拍摄得到的当前图像;Acquiring the current image taken by the photographing device;
    获取参考图像的目标区域中的第一特征点,所述参考图像为所述当前图像的前一帧图像,所述目标区域为所述目标对象对应的图像区域;Acquiring a first feature point in a target area of a reference image, where the reference image is an image of a previous frame of the current image, and the target area is an image area corresponding to the target object;
    确定所述当前图像中与所述第一特征点对应的第二特征点;Determining a second feature point corresponding to the first feature point in the current image;
    当所述第二特征点的数量满足预设条件时,移动所述参考图像的目标区域以得到所述当前图像的目标区域;When the number of the second feature points meets a preset condition, moving the target area of the reference image to obtain the target area of the current image;
    控制所述拍摄装置朝向所述当前图像的目标区域对应的三维空间点。Controlling the photographing device to face the three-dimensional space point corresponding to the target area of the current image.
  2. 根据权利要求1所述的方法,其特征在于,所述获取参考图像的目标区域中的第一特征点之前,还包括:The method according to claim 1, wherein before said acquiring the first feature point in the target area of the reference image, the method further comprises:
    将所述参考图像的目标区域划分为多个栅格区域。The target area of the reference image is divided into a plurality of grid areas.
  3. 根据权利要求2所述的方法,其特征在于,所述将所述参考图像的目标区域划分为多个栅格区域,包括:The method according to claim 2, wherein the dividing the target area of the reference image into multiple grid areas comprises:
    根据所述可移动平台环绕所述目标对象的方向,将所述参考图像的目标区域划分为多个栅格区域。According to the direction in which the movable platform surrounds the target object, the target area of the reference image is divided into a plurality of grid areas.
  4. 根据权利要求2所述的方法,其特征在于,所述获取参考图像的目标区域中的第一特征点,包括:The method according to claim 2, wherein said obtaining the first feature point in the target area of the reference image comprises:
    在每个栅格区域中获取预设数量的特征点作为所述第一特征点。A preset number of feature points are acquired in each grid area as the first feature points.
  5. 根据权利要求2所述的方法,其特征在于,所述预设条件包括:所述参考图像的多个栅格区域中位于边界的栅格区域中的所述第一特征点在所述当前图像中对应的所述第二特征点的数量为零;The method according to claim 2, wherein the preset condition comprises: the first feature point located in the grid area of the boundary among the multiple grid areas of the reference image is in the current image The number of corresponding second feature points in is zero;
    所述移动所述参考图像的目标区域,包括:The moving the target area of the reference image includes:
    朝远离所述边界的栅格区域的方向移动所述参考图像的目标区域。Move the target area of the reference image in a direction away from the grid area of the boundary.
  6. 根据权利要求1所述的方法,其特征在于,所述第二特征点的数量满足预设条件,包括:The method according to claim 1, wherein the number of the second feature points meets a preset condition, comprising:
    所述第二特征点的数量小于所述第一特征点的数量。The number of the second feature points is less than the number of the first feature points.
  7. 根据权利要求1所述的方法,其特征在于,所述确定所述当前图像中与所述第一特征点对应的第二特征点,包括:The method according to claim 1, wherein the determining a second feature point corresponding to the first feature point in the current image comprises:
    基于所述第一特征点,利用跟踪算法获取所述当前图像中跟踪得到的特征点;Based on the first feature point, using a tracking algorithm to obtain the feature point obtained by tracking in the current image;
    基于极线约束条件对所述跟踪得到的特征点进行过滤,以得到所述第二特征点。Filtering the feature points obtained by the tracking based on the epipolar constraint condition to obtain the second feature point.
  8. 根据权利要求1所述的方法,其特征在于,所述控制所述拍摄装置朝向所述当前图像的目标区域对应的三维空间点之前,还包括:The method according to claim 1, wherein before the controlling the photographing device to face the three-dimensional space point corresponding to the target area of the current image, the method further comprises:
    获取所述当前图像的目标区域中的第三特征点,所述第三特征点的数量大于或等于所述第二特征点的数量。Acquire a third feature point in the target area of the current image, where the number of the third feature points is greater than or equal to the number of the second feature points.
  9. 根据权利要求8所述的方法,其特征在于,所述控制所述拍摄装置朝向所述当前图像的目标区域对应的三维空间点,包括:The method according to claim 8, wherein the controlling the photographing device to face the three-dimensional space point corresponding to the target area of the current image comprises:
    根据所述当前图像的目标区域中的第三特征点,确定所述当前图像的目标区域对应的三维空间点;Determine the three-dimensional space point corresponding to the target area of the current image according to the third feature point in the target area of the current image;
    控制所述拍摄装置朝向所述当前图像的目标区域对应的三维空间点。Controlling the photographing device to face the three-dimensional space point corresponding to the target area of the current image.
  10. 根据权利要求9所述的方法,其特征在于,所述根据所述当前图像的目标区域中的第三特征点,确定所述当前图像的目标区域对应的三维空间点,包括:The method according to claim 9, wherein the determining a three-dimensional space point corresponding to the target area of the current image according to a third characteristic point in the target area of the current image comprises:
    对所述当前图像的目标区域中的第三特征点的深度值进行加权平均;Performing a weighted average on the depth value of the third feature point in the target area of the current image;
    根据加权平均值和所述拍摄装置拍摄所述当前图像时的拍摄方向,确定所述当前图像的目标区域对应的三维空间点。Determine the three-dimensional space point corresponding to the target area of the current image according to the weighted average value and the shooting direction when the shooting device shoots the current image.
  11. 根据权利要求10所述的方法,其特征在于,靠近所述当前图像的目标区域的中心的第三特征点对应的权重大于远离所述当前图像的目标区域的中心的第三特征点对应的权重。The method according to claim 10, wherein a weight corresponding to a third feature point close to the center of the target area of the current image is greater than a weight corresponding to a third feature point far from the center of the target area of the current image .
  12. 根据权利要求8所述的方法,其特征在于,所述获取所述当前图像的目标区域中的第三特征点,包括:The method according to claim 8, wherein said acquiring a third characteristic point in the target area of the current image comprises:
    获取所述当前图像的目标区域中的备选特征点;Acquiring candidate feature points in the target area of the current image;
    根据所述备选特征点的深度值和/或所述备选特征点的语义信息,对所述备选特征点进行过滤以确定所述第三特征点。According to the depth value of the candidate feature point and/or the semantic information of the candidate feature point, filtering the candidate feature point to determine the third feature point.
  13. 根据权利要求8所述的方法,其特征在于,所述控制所述拍摄装置朝向所述当前图像的目标区域对应的三维空间点,包括:The method according to claim 8, wherein the controlling the photographing device to face the three-dimensional space point corresponding to the target area of the current image comprises:
    在所述当前图像的目标区域中的第三特征点的数量大于预设阈值时, 控制所述拍摄装置朝向所述当前图像的目标区域对应的三维空间点。When the number of third feature points in the target area of the current image is greater than a preset threshold, controlling the photographing device to face the three-dimensional space point corresponding to the target area of the current image.
  14. 根据权利要求8所述的方法,其特征在于,在所述当前图像的目标区域中的第三特征点的数量小于或等于所述预设阈值时,控制所述拍摄装置朝向所述参考图像的目标区域对应的三维空间点。The method according to claim 8, wherein when the number of third feature points in the target area of the current image is less than or equal to the preset threshold, the camera is controlled to face the reference image The three-dimensional space point corresponding to the target area.
  15. 根据权利要求1-14任意一项所述的方法,其特征在于,还包括:The method according to any one of claims 1-14, further comprising:
    控制所述可移动平台的移动,以使得所述拍摄装置与所述当前图像的目标区域对应的三维空间点之间的距离为环绕半径。The movement of the movable platform is controlled so that the distance between the photographing device and the three-dimensional space point corresponding to the target area of the current image is a circle radius.
  16. The method according to any one of claims 1-14, wherein before moving the target area of the reference image to obtain the target area of the current image, the method further comprises:
    identifying a target type of the target object; and
    if the target type is a preset type, performing the step of moving the target area of the reference image to obtain the target area of the current image.
  17. The method according to any one of claims 1-14, further comprising:
    sending an initial image captured by the photographing device to a control device of the movable platform, so that the control device displays the initial image;
    acquiring indication information of the user about the target object, the indication information being generated according to the user's click operation or box-selection operation on the initial image displayed by the control device; and
    determining the target area of the initial image according to the indication information.
  18. The method according to claim 17, wherein determining the target area of the initial image according to the indication information comprises:
    performing image segmentation on the initial image to obtain a plurality of segmented regions; and
    determining the target area of the initial image according to the segmented regions and the indication information.
  19. The method according to claim 18, wherein determining the target area of the initial image according to the segmented regions and the indication information comprises:
    when the proportion of the image area in the initial image indicated by the indication information that falls within a target segmented region is greater than a preset proportion, determining the target area of the initial image according to the target segmented region, the target segmented region being at least one of the plurality of segmented regions.
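Claim 19's proportion test can be sketched for the axis-aligned-rectangle case. This is one plausible reading (the overlap measured against the indicated box's own area) under stated assumptions; the claim does not specify the geometry, the 0.5 default, or the fallback behavior — all are hypothetical.

```python
def pick_target_region(indicated_box, regions, preset_ratio=0.5):
    """If the user-indicated box overlaps a segmented region by more than
    preset_ratio of the box's own area, adopt that region as the target
    area; otherwise keep the raw indication. Rectangles are (x1, y1, x2, y2).
    """
    def area(r):
        return max(0, r[2] - r[0]) * max(0, r[3] - r[1])

    def intersect(a, b):
        return (max(a[0], b[0]), max(a[1], b[1]),
                min(a[2], b[2]), min(a[3], b[3]))

    box_area = area(indicated_box)
    for region in regions:
        if box_area and area(intersect(indicated_box, region)) / box_area > preset_ratio:
            return region  # indicated area mostly lies in this segmented region
    return indicated_box  # fall back to the user's raw indication

# A box fully inside a segmented region snaps to that region.
target = pick_target_region((0, 0, 10, 10), [(0, 0, 20, 20)])
```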
  20. The method according to claim 17, wherein the three-dimensional space point corresponding to the target area of the initial image is used as the center point about which the movable platform surrounds the target object, and the distance between the movable platform and the three-dimensional space point corresponding to the target area of the initial image is used as the surround radius.
  21. A control device for a movable platform, the control device being configured to control the movable platform to surround a target object, the movable platform comprising a photographing device, wherein the control device comprises a memory and a processor;
    the memory is configured to store program code; and
    the processor invokes the program code and, when the program code is executed, performs the following operations:
    acquiring a current image captured by the photographing device;
    acquiring first feature points in a target area of a reference image, the reference image being a previous frame of the current image, and the target area being an image area corresponding to the target object;
    determining second feature points in the current image corresponding to the first feature points;
    when the number of the second feature points meets a preset condition, moving the target area of the reference image to obtain a target area of the current image; and
    controlling the photographing device to face a three-dimensional space point corresponding to the target area of the current image.
  22. The control device according to claim 21, wherein before acquiring the first feature points in the target area of the reference image, the processor is further configured to:
    divide the target area of the reference image into a plurality of grid areas.
  23. The control device according to claim 22, wherein when dividing the target area of the reference image into a plurality of grid areas, the processor is specifically configured to:
    divide the target area of the reference image into a plurality of grid areas according to the direction in which the movable platform surrounds the target object.
  24. The control device according to claim 22, wherein when acquiring the first feature points in the target area of the reference image, the processor is specifically configured to:
    acquire a preset number of feature points in each grid area as the first feature points.
  25. The control device according to claim 22, wherein the preset condition comprises: the number of second feature points in the current image corresponding to the first feature points located in a boundary grid area among the plurality of grid areas of the reference image is zero; and
    when moving the target area of the reference image, the processor is specifically configured to:
    move the target area of the reference image in a direction away from the boundary grid area.
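The boundary-grid test of claim 25 can be sketched in one dimension. This is a simplified illustration under stated assumptions — the per-column count layout, the fixed pixel step, and handling only the horizontal direction are all hypothetical simplifications of the claimed grid scheme.

```python
def shift_target_area(x1, x2, column_counts, step=10):
    """1-D sketch of claim 25: column_counts[i] is the number of tracked
    (second) feature points surviving in the i-th grid column of the
    reference target area, left to right. If the leftmost boundary column
    lost all its points, shift the target area right, away from that
    boundary; symmetrically for the rightmost column.
    """
    if column_counts[0] == 0:
        return x1 + step, x2 + step   # target drifted right: follow it
    if column_counts[-1] == 0:
        return x1 - step, x2 - step   # target drifted left: follow it
    return x1, x2                     # both boundaries still tracked: keep area

# Left boundary column empty -> area moves right by one step.
new_area = shift_target_area(100, 200, [0, 5, 7])
```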
  26. The control device according to claim 21, wherein the number of the second feature points meeting the preset condition comprises:
    the number of the second feature points being less than the number of the first feature points.
  27. The control device according to claim 21, wherein when determining the second feature points in the current image corresponding to the first feature points, the processor is specifically configured to:
    obtain tracked feature points in the current image by applying a tracking algorithm to the first feature points; and
    filter the tracked feature points based on an epipolar constraint to obtain the second feature points.
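The epipolar-constraint filtering of claim 27 (and of the corresponding method claim) can be sketched as a point-to-epipolar-line distance test. This is a generic sketch, not the patented implementation: in practice the fundamental matrix would come from the platform's pose estimate, and the threshold value here is an assumption.

```python
def epipolar_filter(pts_ref, pts_cur, F, threshold=1.0):
    """Keep only correspondences whose distance to the epipolar line induced
    by the fundamental matrix F is below a threshold (in pixels).

    pts_ref, pts_cur: matched (u, v) points in the reference / current image.
    F: 3x3 fundamental matrix as nested lists, mapping reference points to
    epipolar lines in the current image.
    """
    kept = []
    for (pu, pv), (qu, qv) in zip(pts_ref, pts_cur):
        # Epipolar line l = F @ [pu, pv, 1] in the current image.
        a = F[0][0] * pu + F[0][1] * pv + F[0][2]
        b = F[1][0] * pu + F[1][1] * pv + F[1][2]
        c = F[2][0] * pu + F[2][1] * pv + F[2][2]
        # Distance from the current-image point to that line.
        d = abs(a * qu + b * qv + c) / max((a * a + b * b) ** 0.5, 1e-12)
        if d < threshold:
            kept.append((qu, qv))
    return kept

# For a pure horizontal translation, epipolar lines are horizontal: a match
# that changed its row by 8 pixels violates the constraint and is dropped.
F = [[0, 0, 0], [0, 0, -1], [0, 1, 0]]
second_points = epipolar_filter([(10, 20), (30, 40)], [(12, 20), (30, 48)], F)
```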
  28. The control device according to claim 21, wherein before controlling the photographing device to face the three-dimensional space point corresponding to the target area of the current image, the processor is further configured to:
    acquire third feature points in the target area of the current image, the number of the third feature points being greater than or equal to the number of the second feature points.
  29. The control device according to claim 28, wherein when controlling the photographing device to face the three-dimensional space point corresponding to the target area of the current image, the processor is specifically configured to:
    determine the three-dimensional space point corresponding to the target area of the current image according to the third feature points in the target area of the current image; and
    control the photographing device to face the three-dimensional space point corresponding to the target area of the current image.
  30. The control device according to claim 29, wherein when determining the three-dimensional space point corresponding to the target area of the current image according to the third feature points in the target area of the current image, the processor is specifically configured to:
    compute a weighted average of the depth values of the third feature points in the target area of the current image; and
    determine the three-dimensional space point corresponding to the target area of the current image according to the weighted average and the shooting direction of the photographing device when capturing the current image.
  31. The control device according to claim 30, wherein a third feature point close to the center of the target area of the current image is assigned a greater weight than a third feature point far from the center of the target area of the current image.
  32. The control device according to claim 28, wherein when acquiring the third feature points in the target area of the current image, the processor is specifically configured to:
    acquire candidate feature points in the target area of the current image; and
    filter the candidate feature points according to their depth values and/or their semantic information to determine the third feature points.
  33. The control device according to claim 28, wherein when controlling the photographing device to face the three-dimensional space point corresponding to the target area of the current image, the processor is specifically configured to:
    when the number of third feature points in the target area of the current image is greater than a preset threshold, control the photographing device to face the three-dimensional space point corresponding to the target area of the current image.
  34. The control device according to claim 28, wherein the processor is further configured to: when the number of third feature points in the target area of the current image is less than or equal to the preset threshold, control the photographing device to face the three-dimensional space point corresponding to the target area of the reference image.
  35. The control device according to any one of claims 21-34, wherein the processor is further configured to:
    control the movement of the movable platform so that the distance between the photographing device and the three-dimensional space point corresponding to the target area of the current image equals the surround radius.
  36. The control device according to any one of claims 21-34, wherein before moving the target area of the reference image to obtain the target area of the current image, the processor is further configured to:
    identify a target type of the target object; and
    if the target type is a preset type, perform the step of moving the target area of the reference image to obtain the target area of the current image.
  37. The control device according to any one of claims 21-34, wherein the processor is further configured to:
    send an initial image captured by the photographing device to a control device of the movable platform, so that the control device displays the initial image;
    acquire indication information of the user about the target object, the indication information being generated according to the user's click operation or box-selection operation on the initial image displayed by the control device; and
    determine the target area of the initial image according to the indication information.
  38. The control device according to claim 37, wherein when determining the target area of the initial image according to the indication information, the processor is specifically configured to:
    perform image segmentation on the initial image to obtain a plurality of segmented regions; and
    determine the target area of the initial image according to the segmented regions and the indication information.
  39. The control device according to claim 38, wherein when determining the target area of the initial image according to the segmented regions and the indication information, the processor is specifically configured to:
    when the proportion of the image area in the initial image indicated by the indication information that falls within a target segmented region is greater than a preset proportion, determine the target area of the initial image according to the target segmented region, the target segmented region being at least one of the plurality of segmented regions.
  40. The control device according to claim 37, wherein the three-dimensional space point corresponding to the target area of the initial image is used as the center point about which the movable platform surrounds the target object, and the distance between the movable platform and the three-dimensional space point corresponding to the target area of the initial image is used as the surround radius.
  41. A movable platform, comprising:
    a body;
    a power system mounted on the body and configured to provide power;
    a photographing device; and
    the control device according to any one of claims 21-40.
  42. The movable platform according to claim 41, wherein the movable platform comprises an unmanned aerial vehicle.
  43. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1-20.
PCT/CN2020/087423 2020-04-28 2020-04-28 Method and apparatus for controlling movable platform, and device and storage medium WO2021217403A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202080030068.8A CN113853559A (en) 2020-04-28 2020-04-28 Control method, device and equipment of movable platform and storage medium
PCT/CN2020/087423 WO2021217403A1 (en) 2020-04-28 2020-04-28 Method and apparatus for controlling movable platform, and device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/087423 WO2021217403A1 (en) 2020-04-28 2020-04-28 Method and apparatus for controlling movable platform, and device and storage medium

Publications (1)

Publication Number Publication Date
WO2021217403A1 true WO2021217403A1 (en) 2021-11-04

Family

ID=78331565

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/087423 WO2021217403A1 (en) 2020-04-28 2020-04-28 Method and apparatus for controlling movable platform, and device and storage medium

Country Status (2)

Country Link
CN (1) CN113853559A (en)
WO (1) WO2021217403A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114281096A (en) * 2021-11-09 2022-04-05 中时讯通信建设有限公司 Unmanned aerial vehicle tracking control method, device and medium based on target detection algorithm

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115898039B (en) * 2023-03-10 2023-06-02 北京建工四建工程建设有限公司 Reinforcing steel bar hole-aligning visual adjustment method, device, equipment, system and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101502275B1 (en) * 2014-04-11 2015-03-13 중앙대학교 산학협력단 Automatically Driven Control Apparatus for non people helicopters and Control Method the same
CN105043392A (en) * 2015-08-17 2015-11-11 中国人民解放军63920部队 Aircraft pose determining method and aircraft pose determining device
CN107194339A (en) * 2017-05-15 2017-09-22 武汉星巡智能科技有限公司 Obstacle recognition method, equipment and unmanned vehicle
WO2020014987A1 (en) * 2018-07-20 2020-01-23 深圳市大疆创新科技有限公司 Mobile robot control method and apparatus, device, and storage medium

Also Published As

Publication number Publication date
CN113853559A (en) 2021-12-28

Similar Documents

Publication Publication Date Title
EP3579192B1 (en) Method, apparatus and device for determining camera posture information, and storage medium
CN108702444B (en) Image processing method, unmanned aerial vehicle and system
WO2020014909A1 (en) Photographing method and device and unmanned aerial vehicle
CN109102537B (en) Three-dimensional modeling method and system combining two-dimensional laser radar and dome camera
CN102148965B (en) Video monitoring system for multi-target tracking close-up shooting
CN111436208B (en) Planning method and device for mapping sampling points, control terminal and storage medium
CN106548516B (en) Three-dimensional roaming method and device
US20170305546A1 (en) Autonomous navigation method and system, and map modeling method and system
WO2018098824A1 (en) Photographing control method and apparatus, and control device
CN111935393A (en) Shooting method, shooting device, electronic equipment and storage medium
CN105678748A (en) Interactive calibration method and apparatus based on three dimensional reconstruction in three dimensional monitoring system
WO2023093217A1 (en) Data labeling method and apparatus, and computer device, storage medium and program
WO2020014987A1 (en) Mobile robot control method and apparatus, device, and storage medium
KR102398478B1 (en) Feature data management for environment mapping on electronic devices
CN110276768B (en) Image segmentation method, image segmentation device, image segmentation apparatus, and medium
CN110361005B (en) Positioning method, positioning device, readable storage medium and electronic equipment
CN108961423B (en) Virtual information processing method, device, equipment and storage medium
EP3629570A2 (en) Image capturing apparatus and image recording method
WO2021217403A1 (en) Method and apparatus for controlling movable platform, and device and storage medium
CN113379901A (en) Method and system for establishing house live-action three-dimension by utilizing public self-photographing panoramic data
CN111176425A (en) Multi-screen operation method and electronic system using same
JP2020021368A (en) Image analysis system, image analysis method and image analysis program
CN117711130A (en) Factory safety production supervision method and system based on 3D modeling and electronic equipment
KR102644608B1 (en) Camera position initialization method based on digital twin
US11736795B2 (en) Shooting method, apparatus, and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20933610

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20933610

Country of ref document: EP

Kind code of ref document: A1