WO2021243566A1 - Imaging method, imaging device, and computer-readable storage medium - Google Patents

Imaging method, imaging device, and computer-readable storage medium

Info

Publication number
WO2021243566A1
Authority
WO
WIPO (PCT)
Prior art keywords
shooting
target
pixels
point
image
Prior art date
Application number
PCT/CN2020/093940
Other languages
English (en)
French (fr)
Inventor
梁家斌
关雁铭
黄文杰
Original Assignee
深圳市大疆创新科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 filed Critical 深圳市大疆创新科技有限公司
Priority to CN202080005244.2A (published as CN112771842A)
Priority to PCT/CN2020/093940 (published as WO2021243566A1)
Publication of WO2021243566A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/695Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects

Definitions

  • the present disclosure relates to the field of image processing, and in particular to an imaging method, imaging device, computer-readable storage medium, movable platform, and electronic equipment.
  • Movable platforms such as unmanned aerial vehicles usually include camera devices that can be used to capture images.
  • the camera is usually installed on a movable platform through a pan-tilt, and the camera can be rotated relative to the movable platform by controlling the rotation of the pan-tilt.
  • Folded images are often used in various entertainment displays: a movable platform flies over a target area, the camera shoots at different attitude angles during the flight, and the user stitches the captured images with image-processing software to obtain an image whose perspective changes across the frame, producing a distorted, "folded" real-world effect.
  • This manual operation demands strong image-processing skills from the user, making it difficult to popularize among ordinary consumers.
  • An embodiment of the present disclosure provides an imaging method for a movable platform that includes a shooting device. The method includes:
  • Acquiring shooting control parameters; automatically generating a target trajectory of the movable platform according to the shooting control parameters, and determining a target shooting posture of the shooting device when the movable platform moves along the target trajectory;
  • Controlling the movable platform to move along the target trajectory, and controlling the shooting device to shoot according to the target shooting posture during the movement, so as to obtain an image set;
  • Extracting pixels to be spliced from each frame of image in the image set, where the pixels to be spliced include a preset row of pixels or a preset column of pixels, and the shooting content corresponding to the pixels to be spliced of two adjacent frames of images does not overlap;
  • According to the shooting order of the image frames, splicing the pixels to be spliced of each frame of image together to generate a target image.
  • the embodiment of the present disclosure also provides an imaging device, including:
  • a memory, used to store executable instructions; and
  • a processor, configured to execute the executable instructions stored in the memory to perform the following operations:
  • Acquiring shooting control parameters; automatically generating a target trajectory of the movable platform according to the shooting control parameters, and determining a target shooting posture of the shooting device when the movable platform moves along the target trajectory;
  • Controlling the movable platform to move along the target trajectory, and controlling the shooting device to shoot according to the target shooting posture during the movement, so as to obtain an image set;
  • Extracting pixels to be spliced from each frame of image in the image set, where the pixels to be spliced include a preset row of pixels or a preset column of pixels, and the shooting content corresponding to the pixels to be spliced of two adjacent frames of images does not overlap;
  • According to the shooting order of the image frames, splicing the pixels to be spliced of each frame of image together to generate a target image.
  • The embodiments of the present disclosure also provide a computer-readable storage medium storing executable instructions; when the executable instructions are executed by one or more processors, the one or more processors perform the foregoing imaging method.
  • the embodiment of the present disclosure also provides a movable platform, including: the imaging device described above.
  • An embodiment of the present disclosure also provides an electronic device, including: the imaging device described above.
  • FIG. 1 is a flowchart of an imaging method according to an embodiment of the disclosure.
  • Fig. 2 is a schematic structural diagram of a movable platform to which the imaging method of this embodiment is applied.
  • Figure 3a shows the predetermined point and the ground of the target area.
  • Figure 3b shows the predetermined point and the equivalent ground.
  • Figure 3c shows the target trajectory and the target area.
  • Figure 4 shows the twisted coordinate system.
  • Figure 5a shows the geometric relationship of an equivalent ground point in the twisted coordinate system.
  • Figure 5b shows the geometric relationship of a ground point in the twisted coordinate system.
  • Figure 6 shows the positional relationship between the UAV and the target trajectory.
  • Figure 7 shows another positional relationship between the UAV and the target trajectory.
  • Figure 8a shows a preset row of pixels of an image frame.
  • Figure 8b shows the target image after stitching.
  • Figure 8c shows an example of the stitched target image.
  • Figure 9 shows a target trajectory.
  • Figure 10 shows another target trajectory.
  • Figure 11 shows another target trajectory.
  • Figure 12 shows the target trajectory in a twisted coordinate system.
  • Figure 13 shows the process of determining the target attitude angle of the camera.
  • Figure 14 shows another target trajectory.
  • Figure 15 shows another target trajectory.
  • FIG. 16 is a flowchart of an imaging device according to an embodiment of the disclosure.
  • FIG. 17 is a schematic structural diagram of a movable platform according to an embodiment of the disclosure.
  • FIG. 18 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
  • When the user wants to generate a folded image of a target area manually, the user must first control the drone to fly over the target area and control the camera to take images at different pitch angles at several positions during the flight, obtaining a series of images of the target area. The user must then process this series of images with image-processing software, cropping the required part of each image and stitching those parts together to obtain the folded image of the target area.
  • In this manual method, the drone's flight trajectory, the shooting positions, and the pitch angle of the shooting device must all be determined by the user's intuition and experience.
  • The selection of the required part of each image and the stitching between images likewise rely on the user's intuition and experience. The above method of generating a folded image is therefore difficult to operate and depends too heavily on the user's intuition and experience, which raises the user threshold of folded images too high and hinders popularization.
  • the selection of required parts of the image and the splicing process require a lot of manual operations, which is time-consuming, laborious, and inefficient.
  • Moreover, the resulting folded image usually shows segmentation traces and is not smooth enough, which degrades the quality of the folded image.
  • An embodiment of the present disclosure provides an imaging method. As shown in FIG. 1, the imaging method includes:
  • S101 Acquire shooting control parameters, automatically generate a target trajectory of the movable platform according to the shooting control parameters, and determine the target shooting posture of the shooting device when the movable platform moves along the target trajectory;
  • S102 Control the movable platform to move along the target trajectory, and control the shooting device to shoot according to the target shooting posture during the movement, so as to obtain an image collection;
  • S103 Extract pixels to be spliced from each frame of image in the image set, where the pixels to be spliced include a preset row of pixels or a preset column of pixels, and the photographed content corresponding to the pixels to be spliced of two adjacent frames of images does not overlap;
  • S104 According to the shooting order of the image frames, splicing the pixels to be spliced of each frame image together to generate a target image.
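Taken end to end, steps S101-S104 can be sketched as a toy pipeline. Everything below is illustrative: the helper names are invented, the "frames" are synthetic arrays rather than camera output, and real trajectory generation and flight control are of course far more involved.

```python
import numpy as np

def generate_trajectory(n_points=6):
    # S101 (toy): one pitch angle per trajectory point, sweeping from an
    # oblique view (30 degrees) up to a vertical view (90 degrees).
    return np.linspace(np.deg2rad(30), np.deg2rad(90), n_points)

def capture_frames(pitches, h=5, w=8):
    # S102 (toy): pretend each trajectory point yields one h x w frame;
    # frame i is simply filled with the value i.
    return [np.full((h, w), i) for i in range(len(pitches))]

def extract_strip(frame):
    # S103: the "preset row of pixels" -- here the single middle row.
    mid = frame.shape[0] // 2
    return frame[mid : mid + 1, :]

def stitch(strips):
    # S104: splice the strips together in shooting order.
    return np.vstack(strips)

pitches = generate_trajectory()
frames = capture_frames(pitches)
target = stitch([extract_strip(f) for f in frames])
print(target.shape)  # (6, 8): one middle row per frame
```

In the real method the strips come from frames shot at successive trajectory points, so stacking them in shooting order reproduces the progressive change of viewing angle across the target image.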
  • The imaging method of this embodiment can be applied to various devices capable of imaging a target area, such as movable platforms and electronic devices. These devices use the imaging method of this embodiment to obtain an image set of the target area and then use that image set to generate the folded image of the target area.
  • The movable platform may include, but is not limited to: robots, drones, and unmanned vehicles. There is no restriction on this, as long as the platform is a vehicle carrying a shooting device.
  • Electronic equipment may include, but is not limited to: remote controllers, smart phones/mobile phones, tablets, laptops, desktop computers, media content players, video game stations/systems, virtual reality systems, augmented reality systems, wearable devices, or any other electronic device that can provide or render image data.
  • FIG. 2 shows a movable platform 100.
  • the movable platform 100 includes: a movable platform 110, a pan-tilt 140 and a camera 130.
  • the movable platform 110 may include a fuselage 105 and one or more propulsion units 150.
  • The propulsion unit 150 may be configured to generate lift for the movable platform 110.
  • the propulsion unit 150 may include a rotor.
  • the movable platform 110 can fly in a three-dimensional space and can rotate along at least one of a pitch axis, a yaw axis, and a roll axis.
  • the movable platform 100 may include one or more camera devices 130.
  • the photographing device 130 may be a camera or a video camera.
  • the camera 130 can be installed on the pan/tilt 140.
  • the pan/tilt head 140 may allow the camera 130 to rotate around at least one of a pitch axis, a yaw axis, and a roll axis.
  • the movable platform 100 can be controlled by a remote controller 120.
  • the remote controller 120 can communicate with at least one of the movable platform 110, the pan-tilt 140, and the camera 130.
  • the remote controller 120 includes a display. The display is used to display the image of the camera 130.
  • the remote controller 120 also includes an input device. The input device can be used to receive user input information.
  • In S101, the shooting control parameters are acquired, the target trajectory of the drone is automatically generated from them, and the target shooting posture of the camera while the movable platform moves along the target trajectory is determined.
  • the purpose of this embodiment is to automatically generate a folded image of the target area.
  • A folded image of the target area around a predetermined point is, in effect, an image whose viewing angle gradually increases from oblique to vertical along the direction away from the predetermined point.
  • the angle of view refers to the angle between the optical axis of the camera and the horizontal plane, that is, the pitch angle of the camera.
  • As shown in Figure 3b, assuming the camera is located at a predetermined point and remains stationary: if the ground of the target area could be folded toward the predetermined point to form an equivalent ground, the camera would only need to take a single frame of image at the predetermined point to obtain a folded image of the target area.
  • Since the ground cannot actually be folded, the drone needs to move the camera in a way that simulates the folding of the ground of the target area.
  • the drone drives the camera to move along the target track to the end point.
  • Specifically, the distance between each trajectory point and its corresponding ground point is the same as the distance between that ground point and the predetermined point in Figure 3b; at the same time, the pitch angle of the camera at each trajectory point also remains the same as the pitch angle formed by the camera at the predetermined point and the corresponding point of the equivalent ground in Figure 3b.
  • For example, the distance between P1 and T1 equals the distance between P and T1 in Figure 3b, and the pitch angle of the camera at P1 (that is, the angle between the P1-T1 line and the horizontal plane) equals the angle between the P-T1 line and the tangent plane of the equivalent ground at T1. Similarly, the distance between P2 and T2 equals the distance between P and T2, and the pitch angle of the camera at P2 (that is, the angle between the P2-T2 line and the horizontal plane) equals the angle between the P-T2 line and the tangent plane of the equivalent ground at T2. And so on: for the trajectory point P6 and its corresponding ground point T6, the distance between P6 and T6 equals the distance between P and T6, and the pitch angle of the camera at P6 (that is, the angle between the P6-T6 line and the horizontal plane) equals the angle between the P-T6 line and the tangent plane of the equivalent ground at T6.
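The preservation rule above (keep each point's distance to the equivalent ground and its angle to the equivalent ground's tangent) can be made numerical. The sketch below assumes a quarter-arc equivalent ground of radius r centred on the origin O, with the predetermined point P taken at the start point at (H − r, 0); the arc parametrization and axis conventions are our assumptions for illustration, not taken from the patent.

```python
import numpy as np

H, r = 40.0, 15.0              # shooting height and bending radius (example values)
P = np.array([H - r, 0.0])     # predetermined point, at the height of origin O

def distance_and_pitch(theta):
    """theta in [0, pi/2]: 0 is the ground point T1 below O, pi/2 the top of the fold."""
    T_eq = np.array([r * np.sin(theta), -r * np.cos(theta)])  # equivalent ground point
    v = T_eq - P
    dist = float(np.linalg.norm(v))            # |P - T'| is preserved at the trajectory point
    tangent = np.array([np.cos(theta), np.sin(theta)])  # tangent of the arc at T'
    cos_a = abs(float(v @ tangent)) / dist
    pitch = float(np.arccos(np.clip(cos_a, 0.0, 1.0)))  # angle between sight line and tangent
    return dist, pitch

d0, a0 = distance_and_pitch(0.0)        # oblique view near the start
d1, a1 = distance_and_pitch(np.pi / 2)  # vertical (90 degree) view at the fold's top
```

As theta sweeps the arc, the pitch grows from an oblique angle toward 90 degrees, which is exactly the folded image's changing viewing angle.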
  • the target trajectory of this embodiment can be characterized by the following parameters: the position coordinates of the trajectory point.
  • The target shooting posture includes: the pitch angle of the camera at each trajectory point.
  • the position coordinates of the trajectory point of the target trajectory and the pitch angle of the camera at the trajectory point are determined according to the aforementioned rules.
  • the shooting control parameter may include one or more of shooting distance, shooting height, bending radius, starting point of the target trajectory, ending position of the target trajectory, and direction of the target trajectory.
  • this embodiment can determine the position coordinates of the track point of the target track and the pitch angle of the camera at the track point in a variety of ways.
  • the position coordinates of the track point and the pitch angle of the camera at the track point can be determined analytically.
  • a twisted coordinate system can be constructed according to the shooting control parameters, and the position coordinates of the track point in the twisted coordinate system can be determined.
  • The shooting distance d refers to the distance spanned by the target trajectory from start point P1 to end point P6, that is, the distance between the projection point T0 of the start point P1 on the ground and the ground point T6 corresponding to the end point P6.
  • the shooting height H refers to the height of the end point P6 from the ground, that is, the distance between the end point P6 and its corresponding ground point T6.
  • The bending radius r refers to the height of the origin O of the twisted coordinate system above the ground, that is, the distance between O and its ground projection point, which is the ground point T1 corresponding to the starting point P1.
  • The horizontal axis (x axis) of the twisted coordinate system is the horizontal line through the starting point P1 of the target trajectory, extending toward the end point P6; the origin O of the twisted coordinate system lies on this line, and the distance between O and the starting point P1 is the difference H − r between the shooting height H and the bending radius r. It can be seen that, in the example shown in Figure 4, the shooting height H, the shooting distance d, and the bending radius r are interrelated: for these three shooting control parameters, if any two are obtained, the third can be determined. In this embodiment, the shooting height H and the bending radius r sent by the control device can be obtained directly.
  • Since the shooting height H and the shooting distance d are relatively intuitive parameters for the user, the shooting height H and the shooting distance d sent by the control device can also be obtained first, and the bending radius r then calculated from them.
  • the start position and end position of the target track refer to the physical coordinates of the start point P1 and the end point P6.
  • the physical coordinates may be coordinates in the geographic coordinate system, or coordinates in the body coordinate system of the drone.
  • the direction of the target trajectory is the projection direction of the target trajectory on the horizontal plane, that is, the direction between the projection point T0 of the starting point P1 of the target trajectory on the ground and the ground point T6 corresponding to the end point P6.
  • the direction can reflect which direction the user wants to take from the starting point.
  • Obtaining any two of these parameters is sufficient to determine the position coordinates of the trajectory points and the pitch angle of the camera at each trajectory point.
  • For example, the start position and direction of the target trajectory, or the start position and end position, or the end position and direction can be acquired.
  • the equivalent ground point is a point on the equivalent ground
  • the equivalent ground is a quarter arc in a twisted coordinate system, which is formed by folding the ground on the target area.
  • equivalent position coordinates and equivalent pitch angle can be determined through the following steps:
  • the equivalent position coordinates are determined according to the shooting height H, the bending radius r and the equivalent pitch angle.
  • The pitch angle of the camera at the trajectory point is determined according to the above-mentioned equivalent position coordinates (Tx′, Ty′) and the equivalent pitch angle θ2′.
  • Denoting the pitch angle of the camera at the trajectory point P2 as θ2, the pitch angle of the camera at the trajectory point P2 of the target trajectory is thus obtained.
  • the position coordinates of the ground point corresponding to the equivalent ground point in the twisted coordinate system can be determined according to the shooting height H, the bending radius r, and the equivalent position coordinates.
  • Then, according to the position coordinates of the ground point in the twisted coordinate system and the pitch angle of the camera at the trajectory point, the position coordinates of the trajectory point in the twisted coordinate system are determined.
  • the distance between the equivalent ground point and the starting point of the target track is first determined according to the shooting height H, the bending radius r, and the equivalent position coordinates of the equivalent ground point.
  • Then the position coordinates of the ground point in the twisted coordinate system and the pitch angle of the camera at the trajectory point are determined.
  • the position coordinates of the trajectory point P2 of the target trajectory in the twisted coordinate system are obtained.
  • In this way, the position coordinates of all trajectory points and the camera's pitch angle at each of them can be obtained, yielding the target trajectory of the drone and the target shooting posture of the camera as the drone moves along it.
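The last analytic step — going from a ground point and the camera's preserved distance and pitch back to the trajectory point's coordinates — reduces to simple trigonometry. The sketch below assumes the ground lies along the x axis and the camera sits behind and above the ground point it looks at; these axis conventions are illustrative assumptions.

```python
import numpy as np

def place_trajectory_point(ground_x, dist, pitch):
    """Trajectory point for a ground point at (ground_x, 0), given the preserved
    camera-to-ground distance and the pitch angle (radians, measured from the
    horizontal down to the optical axis)."""
    return np.array([ground_x - dist * np.cos(pitch),  # horizontal offset back from T
                     dist * np.sin(pitch)])            # height above the ground

p_oblique = place_trajectory_point(30.0, 20.0, np.deg2rad(0))   # level view: 20 m behind T
p_nadir = place_trajectory_point(30.0, 20.0, np.deg2rad(90))    # nadir view: 20 m above T
```

At 0 degrees the camera sits on the ground 20 m behind the point; at 90 degrees it sits directly above it at a height equal to the preserved distance.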
  • the user can set the shooting distance, shooting height, and bending radius in the shooting control parameters through the remote control of the drone. For example, the user can select the "folding image mode" through the remote control, and input or select the aforementioned shooting control parameter values in the display of the remote control in the "folding image mode".
  • the remote controller sends a "folding image mode” command and the aforementioned shooting control parameter values to the drone. After the drone receives the "folding image mode” command and these shooting control parameter values, it enters the "folding image mode” and determines the target trajectory and the shooting attitude of the camera when the drone moves along the target trajectory.
  • The current position of the drone can be used as the predetermined point; that is, the drone is located at the starting point P1 of the target trajectory, as shown in Figure 6.
  • the shooting distance d refers to the distance between the projection point T0 of the current position of the drone on the ground and the ground point T6 corresponding to the end point P6 of the target trajectory.
  • the drone can move along the target trajectory in that direction from the current position, and take pictures of the target area to obtain an image collection.
  • the user can input or select the direction of the target track through the remote control.
  • the remote controller sends the direction set by the user to the drone.
  • Alternatively, the user need not input or select the direction of the target trajectory; instead, the heading of the drone when it enters the "folded image mode" can be used as the direction of the target trajectory.
  • In other cases, the distance between the drone and the ground is not equal to the bending radius set by the user, and the current position of the drone is not at the starting point of the target trajectory, as shown in Figure 7.
  • the user also needs to set the starting point position of the target trajectory and the direction of the target trajectory, and the starting point position refers to the physical coordinates of the starting point P1.
  • the drone first moves from the current position to the starting point P1 of the target trajectory, and then moves along the target trajectory in this direction from the starting point P1 to capture the target area to obtain an image set.
  • the user can input or select the start position of the target track and the direction of the target track through the remote control, or input or select the end position of the target track and the direction of the target track, and then get the starting point of the target track.
  • the drone can also be raised or lowered so that the distance between the drone and the ground is equal to the bending radius set by the user.
  • Alternatively, the user need not set the bending radius; the drone's distance from the ground when the user selects the "folding image mode" can automatically be taken as the bending radius.
  • In S103, the pixels to be spliced are extracted from each frame of image in the image set; they consist of a preset row or a preset column of pixels, and the shooting content of the pixels to be spliced in adjacent frames does not overlap. In S104, the pixels to be spliced of each frame are spliced together according to the shooting order of the image frames to generate the target image.
  • the camera can shoot at a predetermined rate or frame rate in the shooting posture of the target, thereby obtaining a collection of images.
  • Each frame of image in the image set corresponds to a trajectory point of the target trajectory. Therefore, by extracting a number of pixels from each frame and splicing the extracted pixels together according to a predetermined rule, the folded image of the target area can be obtained.
  • the extracted preset row of pixels includes: one or more rows of pixels in the middle of each frame of image
  • the preset column of pixels includes: one or more columns of pixels in the middle of each frame of image.
  • the image set may include n frames of images.
  • The preset row of pixels of each frame of image is one or more rows in the middle of the frame. This embodiment does not limit the number of rows in a preset row; the number can be set as needed, as long as the shooting content corresponding to the preset rows of two adjacent frames does not overlap.
  • the preset pixels of the images of each frame are spliced together according to the shooting order of the image frames.
  • Fig. 8c shows an example of the stitched target image. From the bottom of the image to the top of the image, a smooth change from the oblique viewing angle gradually increasing to the vertical viewing angle is realized, resulting in a folding effect.
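Steps S103 and S104 amount to taking one (or a few) middle rows from every frame and stacking them in shooting order. A minimal sketch with synthetic frames follows; real frames would come from the camera, and `n_rows` and the frame size here are arbitrary:

```python
import numpy as np

def middle_rows(frame, n_rows=1):
    """The 'preset row of pixels': n_rows rows around the frame's centre row."""
    start = frame.shape[0] // 2 - n_rows // 2
    return frame[start : start + n_rows]

# 100 synthetic 480x640 frames; frame i is filled with the value i so the
# provenance of each stitched row is visible in the result.
frames = [np.full((480, 640), i, dtype=np.uint8) for i in range(100)]
target = np.vstack([middle_rows(f) for f in frames])  # splice in shooting order
print(target.shape)  # (100, 640): one middle row per frame
```

Because adjacent frames' middle rows image non-overlapping strips of ground (guaranteed by the trajectory and frame rate), the vertical stack reads as one continuous image whose viewing angle changes from bottom to top.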
  • In this embodiment, the user only needs to set the shooting control parameters; the target trajectory of the movable platform and the target shooting posture of the camera along it are generated automatically, an image set is obtained, and the pixel selection and stitching of the image set are completed automatically, realizing automatic generation of the folded image.
  • The user does not need to stitch images manually, completely removing the dependence on the user's intuition and experience; operation is simple and quick, which greatly lowers the user threshold of folded images and strongly benefits the popularization and promotion of this function.
  • The image selection and stitching process requires no manual operation, saving time and effort and greatly improving work efficiency; the resulting folded image is smooth, with segmentation traces greatly suppressed or even eliminated, thereby improving the quality of the folded image and the user experience.
  • the target trajectory in the figure includes six trajectory points P1-P6, but those skilled in the art should understand that this is only for the convenience of description, and the target trajectory is composed of a series of trajectory points.
  • The shape of the target trajectory in the drawings is an arc, but those skilled in the art should understand that this is only one possible form; the target trajectory can take various other forms, such as the target trajectories shown in Figures 9 to 11.
  • Another embodiment of the present disclosure provides an imaging method.
  • the same or similar features of this embodiment and the previous embodiment will not be repeated, and only the content that is different from the previous embodiment will be described below.
  • The imaging method of this embodiment can automatically generate the target trajectory of the drone and the target shooting posture of the camera through a fitting method. Specifically, the position coordinates of a group of control points in the twisted coordinate system are obtained; this group of control points includes at least the starting point and the end point of the target trajectory. A fitted curve is then determined from the position coordinates of the group of control points, and the position coordinates of points on the fitted curve in the twisted coordinate system are used as the position coordinates of the target trajectory points in the twisted coordinate system.
  • the fitting method also requires the establishment of a twisted coordinate system.
  • Taking the starting point P1 and the end point P6 of the target trajectory as control points, the position coordinates of the target trajectory in the twisted coordinate system can be determined from the shooting height H and the bending radius r.
  • The coordinates of the starting point P1 in the twisted coordinate system are (H − r, 0).
  • the coordinates of the end point P6 in the twisted coordinate system are
  • the fitted target trajectory may be a spline curve, for example, a third-order spline curve.
  • The group of control points may further include: at least one intermediate trajectory point between the start point P1 and the end point P6.
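As a concrete illustration of the fitting step, the sketch below passes a cubic through four control points (start, two intermediates, end) in the twisted coordinate system and samples it densely. All coordinate values are invented; with exactly four points a single degree-3 polynomial interpolates them exactly, whereas a real implementation would more likely use a piecewise third-order spline.

```python
import numpy as np

H, r = 40.0, 15.0
# Control points in the twisted coordinate system: start P1 at (H - r, 0),
# two assumed intermediate points, and an assumed end point.
ctrl_x = np.array([H - r, 18.0, 8.0, 0.0])
ctrl_y = np.array([0.0, 12.0, 20.0, 25.0])

coeffs = np.polyfit(ctrl_x, ctrl_y, deg=3)   # 4 points + degree 3 -> exact interpolation
xs = np.linspace(ctrl_x[0], ctrl_x[-1], 50)  # dense samples along the fitted curve
ys = np.polyval(coeffs, xs)                  # trajectory points in twisted coordinates
```

Adding more intermediate control points gives the user finer control over the trajectory's shape, at which point a true spline (one cubic per segment) becomes the natural choice.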
  • After fitting the target trajectory of the UAV, the target pitch angle of the camera must also be determined. In some examples, the target pitch angle of the camera while the drone moves along the target trajectory is determined as follows: as the drone moves along the target trajectory at a first predetermined speed toward the end of the trajectory, the intersection of the camera's optical axis with the ground moves at a second predetermined speed toward the ground projection point of the trajectory's end point, and the angle between the camera-to-intersection line and the horizontal plane during this movement is taken as the target pitch angle of the camera. The values of the first predetermined speed and the second predetermined speed can be set by the user through the remote controller, or can be preset in the drone.
  • That is, the drone moves from the starting point P1 along the target trajectory at the first predetermined speed to the end point P6, while the intersection point moves at the second predetermined speed to the ground point T6 corresponding to P6; the angle between the camera-to-intersection line and the horizontal plane throughout this movement serves as the target pitch angle of the camera.
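The two-speed rule just described can be simulated directly: fly the drone along a (here, quarter-arc) trajectory at speed v1, slide the optical-axis/ground intersection along the ground at speed v2, and read the pitch off the geometry at each instant. The arc shape, the speeds, and the timing constraint (both arrive together, so the final view is straight down) are illustrative assumptions.

```python
import numpy as np

R, v1, aim0 = 30.0, 4.0, 10.0  # arc radius, drone speed, initial aim point (assumed)
T = (np.pi / 2) * R / v1       # time for the drone to traverse the quarter arc
end_x = R                      # ground projection of the trajectory's end point
v2 = (end_x - aim0) / T        # chosen so the aim point arrives below the camera at t = T

def pitch_at(t):
    theta = v1 * t / R                                         # arc angle travelled so far
    cam_x, cam_y = R * (1 - np.cos(theta)), R * np.sin(theta)  # camera position on the arc
    aim_x = aim0 + v2 * t                                      # optical-axis/ground intersection
    return np.arctan2(cam_y, abs(aim_x - cam_x))               # angle to the horizontal, in [0, pi/2]
```

In this toy, pitch_at(0) is 0 (the camera starts at ground level), rises steadily, and reaches 90 degrees at t = T when the aim point lies directly below the camera, reproducing the oblique-to-vertical sweep of the folded image.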
  • With the fitting method of this embodiment, the user only needs to set the positions of several control points to automatically generate the target trajectory of the movable platform and the target shooting posture of the camera when the movable platform moves along it, which simplifies the user's operation, improves work efficiency, and improves the quality of the folded image.
  • In addition, the shooting control parameters that the user must input are more intuitive and streamlined, no complicated calculation process is required, and the process of generating the folded image is simpler and more efficient.
  • Yet another embodiment of the present disclosure also provides an imaging device, as shown in FIG. 16, including:
  • a memory, used to store executable instructions; and
  • a processor, configured to execute the executable instructions stored in the memory to perform the following operations:
  • Acquiring shooting control parameters; automatically generating a target trajectory of the movable platform according to the shooting control parameters, and determining a target shooting posture of the shooting device when the movable platform moves along the target trajectory;
  • Controlling the movable platform to move along the target trajectory, and controlling the shooting device to shoot according to the target shooting posture during the movement, so as to obtain an image set;
  • Extracting pixels to be spliced from each frame of image in the image set, where the pixels to be spliced include a preset row of pixels or a preset column of pixels, and the shooting content corresponding to the pixels to be spliced of two adjacent frames of images does not overlap;
  • According to the shooting order of the image frames, splicing the pixels to be spliced of each frame of image together to generate a target image.
  • the processor may perform operations corresponding to the steps in the imaging method of the foregoing embodiment.
  • the target trajectory is characterized by the following parameters: the position coordinates of the trajectory point; the target shooting posture includes: the pitch angle of the photographing device at the trajectory point.
  • the target trajectory of the movable platform is automatically generated through an analytical method and/or a fitting method.
  • the shooting control parameters include: shooting height, bending radius, the position of the starting point of the target trajectory, and the direction of the target trajectory.
  • the shooting distance is the distance between the start point and the end point of the target trajectory; the shooting height is the height of the end point of the target trajectory from the ground.
  • a twisted coordinate system is constructed according to the shooting control parameters, and the position coordinates of the trajectory point of the target track in the twisted coordinate system are determined.
  • the horizontal axis of the twisted coordinate system is a horizontal line extending through the start point of the target trajectory toward the end point of the target trajectory; the origin of the twisted coordinate system lies on this horizontal line, and the distance between the origin and the start point of the target trajectory is the difference between the shooting height and the bending radius.
  • the bending radius is the distance between the origin and the ground.
  • the processor is further configured to perform the following operations: acquiring the position coordinates of a set of control points in the twisted coordinate system, the set of control points including at least the start point and the end point of the target trajectory; determining a fitted curve according to the position coordinates of the set of control points, and taking the position coordinates of the points on the fitted curve in the twisted coordinate system as the position coordinates of the trajectory points of the target trajectory in the twisted coordinate system.
  • the position coordinates of the set of control points in the twisted coordinate system are determined by the shooting height and the bending radius.
  • the set of control points further includes: at least one intermediate point between the start point and the end point.
  • the target shooting posture includes: a target pitch angle; the processor is further configured to perform the following operation: determining the target shooting posture of the photographing device while the movable platform moves along the target trajectory, such that when the movable platform moves along the target trajectory at a first predetermined speed to the end point of the target trajectory, the intersection point of the optical axis of the photographing device with the ground moves at a second predetermined speed toward the projection point of the end point of the target trajectory on the ground.
  • the processor is further configured to perform the following operation: taking the angle between the horizontal plane and the line connecting the photographing device with the intersection point during the movement as the target pitch angle.
  • the preset row of pixels includes: one or more middle rows of pixels of the image; the preset column of pixels includes: one or more middle columns of pixels of the image; the preset rows of pixels of the first frame of image include: all rows of pixels below the preset row; the preset columns of pixels of the first frame of image include: all columns of pixels below the preset column; the preset rows of pixels of the last frame of image include: all rows of pixels above the preset row; the preset columns of pixels of the last frame of image include: all columns of pixels above the preset column.
  • Yet another embodiment of the present disclosure also provides a computer-readable storage medium that stores executable instructions which, when executed by one or more processors, cause the one or more processors to perform the imaging method of the above-mentioned embodiments.
  • a computer-readable storage medium may be any medium that can contain, store, communicate, propagate, or transport instructions.
  • a readable storage medium may include, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium.
  • Specific examples of readable storage media include: magnetic storage devices, such as magnetic tape or hard disks (HDD); optical storage devices, such as optical discs (CD-ROM); memories, such as random access memory (RAM) or flash memory; and/or wired/wireless communication links.
  • the computer program may be configured with computer program code including, for example, computer program modules. It should be noted that the manner of division and the number of modules are not fixed; those skilled in the art may use appropriate program modules or combinations of program modules according to the actual situation. When these combinations of program modules are executed by a computer (or processor), the computer is caused to execute the flow of the imaging method described in the present disclosure and variations thereof.
  • Yet another embodiment of the present disclosure also provides a movable platform, as shown in FIG. 17, including: the imaging device of the foregoing embodiment.
  • the imaging device is mounted on the movable platform directly, or mounted on the movable platform through a gimbal.
  • the movable platform is a drone.
  • Yet another embodiment of the present disclosure further provides an electronic device, as shown in FIG. 18, including: the imaging device of the above-mentioned embodiment.
  • the electronic device is a remote control of a movable platform.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

An imaging method, an imaging apparatus, a computer-readable storage medium, a movable platform, and an electronic device. The imaging method is used for a movable platform that includes a photographing device, and comprises: acquiring shooting control parameters, automatically generating a target trajectory of the movable platform according to the shooting control parameters, and determining a target shooting posture of the photographing device while the movable platform moves along the target trajectory; controlling the movable platform to move along the target trajectory, and controlling the photographing device to shoot according to the target shooting posture during the movement, so as to obtain an image set; extracting pixels to be spliced of each frame of image in the image set, the pixels to be spliced comprising a preset row of pixels or a preset column of pixels, the shooting content corresponding to the pixels to be spliced of two adjacent frames of images not overlapping each other; and splicing the pixels to be spliced of the frames of images together in the shooting order of the image frames to generate a target image.

Description

Imaging Method, Imaging Apparatus, and Computer-Readable Storage Medium — Technical Field
The present disclosure relates to the field of image processing, and in particular to an imaging method, an imaging apparatus, a computer-readable storage medium, a movable platform, and an electronic device.
Background Art
A movable platform such as an unmanned aerial vehicle (UAV) usually includes a photographing device that can be used to capture images. The photographing device is usually mounted on the movable platform through a gimbal, and controlling the rotation of the gimbal rotates the photographing device relative to the movable platform.
As a kind of special-effect image, a folded image is often used in various entertainment displays. A movable platform flies over a target area, the photographing device shoots at different attitude angles during the flight, and the user later splices the captured images with image-processing software, obtaining an image with a changing viewing angle that makes the real world appear bent. However, this manual approach demands considerable image-processing skill from the user and is difficult to popularize among ordinary consumers.
Summary of the Invention
An embodiment of the present disclosure provides an imaging method for a movable platform, the movable platform including a photographing device, the method comprising:
acquiring shooting control parameters, automatically generating a target trajectory of the movable platform according to the shooting control parameters, and determining a target shooting posture of the photographing device while the movable platform moves along the target trajectory;
controlling the movable platform to move along the target trajectory, and controlling the photographing device to shoot according to the target shooting posture during the movement, so as to obtain an image set;
extracting pixels to be spliced of each frame of image in the image set, the pixels to be spliced including a preset row of pixels or a preset column of pixels, the shooting content corresponding to the pixels to be spliced of two adjacent frames of images not overlapping each other;
splicing the pixels to be spliced of the frames of images together in the shooting order of the image frames to generate a target image.
An embodiment of the present disclosure further provides an imaging apparatus, including:
a memory, configured to store executable instructions;
a processor, configured to execute the executable instructions stored in the memory to perform the following operations:
acquiring shooting control parameters, automatically generating the target trajectory of the movable platform according to the shooting control parameters, and determining the target shooting posture of the photographing device while the movable platform moves along the target trajectory;
controlling the movable platform to move along the target trajectory, and controlling the photographing device to shoot according to the target shooting posture during the movement, so as to obtain an image set;
extracting pixels to be spliced of each frame of image in the image set, the pixels to be spliced including a preset row of pixels or a preset column of pixels, the shooting content corresponding to the pixels to be spliced of two adjacent frames of images not overlapping each other;
splicing the pixels to be spliced of the frames of images together in the shooting order of the image frames to generate the target image.
An embodiment of the present disclosure further provides a computer-readable storage medium storing executable instructions which, when executed by one or more processors, cause the one or more processors to perform the above imaging method.
An embodiment of the present disclosure further provides a movable platform, including the above imaging apparatus.
An embodiment of the present disclosure further provides an electronic device, including the above imaging apparatus.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present disclosure more clearly, the drawings needed in the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart of an imaging method according to an embodiment of the present disclosure.
FIG. 2 is a schematic structural diagram of a movable platform to which the imaging method of this embodiment is applied.
FIG. 3a shows a predetermined point and the ground of the target area;
FIG. 3b shows the predetermined point and the equivalent ground;
FIG. 3c shows the target trajectory and the target area.
FIG. 4 shows the twisted coordinate system.
FIG. 5a shows the geometric relations of an equivalent ground point in the twisted coordinate system;
FIG. 5b shows the geometric relations of a ground point in the twisted coordinate system.
FIG. 6 shows a positional relation between the UAV and the target trajectory;
FIG. 7 shows another positional relation between the UAV and the target trajectory;
FIG. 8a shows the preset row of pixels of image frames;
FIG. 8b shows the spliced target image;
FIG. 8c shows an example of the spliced target image.
FIG. 9 shows a target trajectory.
FIG. 10 shows another target trajectory.
FIG. 11 shows yet another target trajectory.
FIG. 12 shows a target trajectory in the twisted coordinate system.
FIG. 13 shows the process of determining the target attitude angle of the photographing device.
FIG. 14 shows another target trajectory.
FIG. 15 shows yet another target trajectory.
FIG. 16 is a schematic diagram of an imaging apparatus according to an embodiment of the present disclosure.
FIG. 17 is a schematic structural diagram of a movable platform according to an embodiment of the present disclosure.
FIG. 18 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
When a user wants to generate a folded image of a target area manually, the user first has to fly the UAV over the target area and, at several positions during the flight, control the photographing device to capture images at different pitch angles, obtaining a series of images of the target area. The user then has to process this series of images with image-processing software, crop the required part of each image, and splice the required parts together to obtain the folded image of the target area.
In the above process, the flight trajectory of the UAV, the shooting positions, and the pitch angles of the photographing device are all determined by the user's feeling and experience, as are the selection of the required part of each image and the splicing between images. This way of generating folded images is therefore difficult to operate and relies too heavily on the user's feeling and experience, which raises the entry barrier for folded images and hinders their popularization. Moreover, the selection and splicing of the image parts require a great deal of manual work, which is time-consuming, labor-intensive, and inefficient, and the resulting folded image usually carries segmentation traces and is not smooth, degrading its quality.
The technical solutions of the present disclosure will be described clearly and completely below with reference to the embodiments and their drawings. Obviously, the described embodiments are only part of the embodiments of the present disclosure, not all of them. Based on the embodiments of the present disclosure, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present disclosure.
An embodiment of the present disclosure provides an imaging method. As shown in FIG. 1, the imaging method includes:
S101: acquiring shooting control parameters, automatically generating the target trajectory of the movable platform according to the shooting control parameters, and determining the target shooting posture of the photographing device while the movable platform moves along the target trajectory;
S102: controlling the movable platform to move along the target trajectory, and controlling the photographing device to shoot according to the target shooting posture during the movement, so as to obtain an image set;
S103: extracting pixels to be spliced of each frame of image in the image set, the pixels to be spliced including a preset row of pixels or a preset column of pixels, the shooting content corresponding to the pixels to be spliced of two adjacent frames of images not overlapping each other;
S104: splicing the pixels to be spliced of the frames of images together in the shooting order of the image frames to generate the target image.
The imaging method of this embodiment can be applied to movable platforms, electronic devices, and other devices capable of imaging a target area. After such a device obtains an image set of the target area with the imaging method of this embodiment, it uses the image set to obtain a folded image of the target area. The movable platform may include, without limitation, a robot, a UAV, or an unmanned vehicle; any vehicle carrying a photographing device will do. The electronic device may include, without limitation, a remote controller, a smartphone/cellphone, a tablet computer, a laptop computer, a desktop computer, a media content player, a video game station/system, a virtual reality system, an augmented reality system, a wearable device, or any electronic device capable of providing or rendering image data.
For convenience of description, this embodiment is explained below taking a UAV as an example. FIG. 2 shows a movable platform 100. The movable platform 100 includes a movable platform 110, a gimbal 140, and a photographing device 130.
The movable platform 110 may include a fuselage 105 and one or more propulsion units 150. The propulsion units 150 may be configured to generate lift for the movable platform 110 and may include rotors. The movable platform 110 can fly in three-dimensional space and can rotate about at least one of a pitch axis, a yaw axis, and a roll axis.
The movable platform 100 may include one or more photographing devices 130. In some examples, the photographing device 130 may be a camera or a video camera. The photographing device 130 may be mounted on the gimbal 140, which may allow the photographing device 130 to rotate about at least one of a pitch axis, a yaw axis, and a roll axis. For convenience of description, this embodiment is explained below taking the photographing device 130 being a camera as an example.
The movable platform 100 may be controlled through a remote controller 120, which may communicate with at least one of the movable platform 110, the gimbal 140, and the photographing device 130. The remote controller 120 includes a display for showing the picture of the photographing device 130, and an input apparatus for receiving input information from the user.
In S101, shooting control parameters are acquired, the target trajectory of the UAV is automatically generated according to the shooting control parameters, and the target shooting posture of the camera while the movable platform moves along the target trajectory is determined.
The purpose of this embodiment is to automatically generate a folded image of the target area. In some examples, as shown in FIG. 3a, a folded image of the target area around a predetermined point is essentially an image whose viewing angle gradually increases, in the direction away from the predetermined point, from an oblique view to a vertical view. The viewing angle refers to the angle between the camera's optical axis and the horizontal plane, i.e., the camera's pitch angle. As shown in FIG. 3b, suppose the camera stays fixed at the predetermined point. If the ground of the target area could be folded up toward the predetermined point to form an equivalent ground, the camera would only need to capture a single frame at the predetermined point to obtain the folded image of the target area. In reality the ground of the target area cannot be folded, so the UAV must carry the camera along a trajectory to simulate the folding effect of the ground. As shown in FIG. 3c, starting from the predetermined point, the UAV carries the camera along the target trajectory to the end point. For each point of the target trajectory, the distance between that point and the corresponding ground point is kept equal to the distance between that ground point and the predetermined point in FIG. 3b; meanwhile, the camera's pitch angle at that point is kept equal to the pitch angle formed in FIG. 3b between the predetermined point and the corresponding ground point. For example, for trajectory point P1 and its corresponding ground point T1, the distance between P1 and T1 equals the distance between P and T1 in FIG. 3b, and the camera's pitch angle at P1 (the angle between the line P1–T1 and the horizontal plane) equals the angle between the line P–T1 and the tangent plane of the equivalent ground at T1. For trajectory point P2 and its corresponding ground point T2, the distance between P2 and T2 equals the distance between P and T2, and the camera's pitch angle at P2 (the angle between the line P2–T2 and the horizontal plane) equals the angle between the line P–T2 and the tangent plane of the equivalent ground at T2. By analogy, for trajectory point P6 and its corresponding ground point T6, the distance between P6 and T6 equals the distance between P and T6, and the camera's pitch angle at P6 (the angle between the line P6–T6 and the horizontal plane) equals the angle between the line P–T6 and the tangent plane of the equivalent ground at T6. It follows that, in some examples, the target trajectory of this embodiment can be characterized by the following parameter: the position coordinates of the trajectory points; and the target shooting posture includes the camera's pitch angle at the trajectory points. This embodiment determines the position coordinates of the trajectory points of the target trajectory and the camera's pitch angles at the trajectory points according to the above rule.
Shooting control parameters are acquired first. In some examples, the shooting control parameters may include one or more of: the shooting distance, the shooting height, the bending radius, the start position of the target trajectory, the end position of the target trajectory, and the direction of the target trajectory. After obtaining the shooting control parameters, this embodiment can determine the position coordinates of the trajectory points of the target trajectory and the camera's pitch angles at the trajectory points in various ways.
In one example, the position coordinates of the trajectory points and the camera's pitch angles at the trajectory points can be determined analytically. In the analytical approach, a twisted coordinate system can be constructed according to the shooting control parameters, and the position coordinates of the trajectory points in the twisted coordinate system are determined.
As shown in FIG. 4, in some examples the shooting distance d refers to the distance between the start point P1 and the end point P6 of the target trajectory, i.e., the distance between the projection point T0 of the start point P1 on the ground and the ground point T6 corresponding to the end point P6. The shooting height H refers to the height of the end point P6 above the ground, i.e., the distance between the end point P6 and its corresponding ground point T6. The bending radius r refers to the height of the origin O of the twisted coordinate system, i.e., the distance between the origin O and its projection point on the ground, which is the ground point T1 corresponding to the start point P1.
The horizontal axis (x-axis) of the twisted coordinate system is the horizontal line extending through the start point P1 of the target trajectory toward the end point P6 of the target trajectory; the origin O of the twisted coordinate system lies on this horizontal line, and the distance between the origin O and the start point P1 is the difference H − r between the shooting height H and the bending radius r. Hence, in the example shown in FIG. 4, the shooting height H, the shooting distance d, and the bending radius r satisfy
d = (H − r) + (π/2) × r
For these three shooting control parameters, once two of them are obtained the third can be determined. This embodiment may directly acquire the shooting height H and the bending radius r sent by the control apparatus. Since the shooting height H and the shooting distance d are more intuitive parameters for the user, the shooting height H and the shooting distance d sent by the control apparatus may instead be acquired first, and the bending radius r then calculated from the shooting height H and the shooting distance d.
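The height/distance relation above can be inverted to recover the bending radius when the user supplies the more intuitive height-and-distance pair. A minimal sketch, assuming the quarter-circle relation d = (H − r) + (π/2)·r reconstructed above (the function name is illustrative, not from the patent):

```python
import math

def bending_radius(shooting_height: float, shooting_distance: float) -> float:
    """Solve d = (H - r) + (pi/2) * r for the bending radius r.

    Assumes the equivalent ground is a quarter circle of radius r, so the
    shooting distance d splits into the straight run (H - r) and the
    unfolded arc length (pi/2) * r.
    """
    r = (shooting_distance - shooting_height) / (math.pi / 2 - 1)
    if not 0 < r <= shooting_height:
        raise ValueError("shooting height and distance are geometrically inconsistent")
    return r
```

For example, under this assumed relation, a shooting height of 100 m and a shooting distance of 100 + (π/2 − 1)·50 ≈ 128.5 m correspond to a bending radius of 50 m.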
The start position and the end position of the target trajectory refer to the physical coordinates of the start point P1 and the end point P6. In some examples, the physical coordinates may be coordinates in a geographic coordinate system or in the body coordinate system of the UAV. The direction of the target trajectory is the direction of the projection of the target trajectory on the horizontal plane, i.e., the direction from the projection point T0 of the start point P1 on the ground to the ground point T6 corresponding to the end point P6; it reflects the direction in which the user wishes to shoot from the start point. Of these three shooting control parameters, in some cases two suffice to determine the position coordinates of the trajectory points and the camera's pitch angles at the trajectory points: for example, the start position and the direction, or the start position and the end position, or the end position and the direction may be acquired.
The process of determining the position coordinates of a trajectory point in the twisted coordinate system and the camera's pitch angle at that trajectory point is described below with reference to FIG. 5.
First, the equivalent position coordinates of the equivalent ground point in the twisted coordinate system, and the equivalent pitch angle between the horizontal plane and the line connecting the start point of the target trajectory with the equivalent ground point, are determined according to the shooting control parameters.
Here an equivalent ground point is a point on the equivalent ground, and the equivalent ground is a quarter circular arc in the twisted coordinate system, formed by folding up the ground of the target area.
Specifically, the equivalent position coordinates and the equivalent pitch angle can be determined through the following steps:
determining the initial pitch angle of the camera at the start point of the target trajectory according to the shooting height H and the bending radius r;
determining the equivalent pitch angle according to the initial pitch angle;
determining the equivalent position coordinates according to the shooting height H, the bending radius r, and the equivalent pitch angle.
As shown in FIG. 5a, the initial pitch angle of the camera at the start point of the target trajectory is α1, and from the geometric relations in FIG. 5a,
tan α1 = r / (H − r)
X denotes the position coordinate of the start point on the x-axis of the twisted coordinate system, with X = H − r. The equivalent ground point corresponding to the ground point T2 is T2′. The equivalent pitch angle between the horizontal plane and the line connecting the start point P1 with the equivalent ground point T2′ is α′2, with α′2 = α1 − ωt, where t is the time taken to move from ground point T1 to ground point T2, and ω is the scan angular velocity, which may be set by the user or preset in the UAV or the camera. The equivalent position coordinates (T_x′, T_y′) of the equivalent ground point T2′ in the twisted coordinate system can be determined from the following equations:
T_x′² + T_y′² = r²
−T_y′ = (X − T_x′) × tan α′2
Then the camera's pitch angle at the trajectory point is determined from the equivalent position coordinates (T_x′, T_y′) and the equivalent pitch angle α′2.
As shown in FIG. 5a, the angle θ of the equivalent ground point T2′ on the quarter circular arc satisfies
sin θ = −T_x′ / r
cos θ = −T_y′ / r
As shown in FIG. 5b, the camera's pitch angle at the trajectory point P2 is α2, with
α2 = α′2 + θ
which gives the camera's pitch angle at the trajectory point P2 of the target trajectory.
Next, the position coordinates, in the twisted coordinate system, of the ground point corresponding to the equivalent ground point can be determined according to the shooting height H, the bending radius r, and the equivalent position coordinates.
Combining FIG. 5a and FIG. 5b, the position coordinates of the ground point T2 corresponding to the equivalent ground point T2′ in the twisted coordinate system are (T_x, T_y), with
T_x = −θ × r
T_y = −r
Then the position coordinates of the trajectory point in the twisted coordinate system are determined from the shooting control parameters, the position coordinates of the ground point in the twisted coordinate system, and the camera's pitch angle at the trajectory point.
Specifically, the distance between the equivalent ground point and the start point of the target trajectory is first determined from the shooting height H, the bending radius r, and the equivalent position coordinates of the equivalent ground point.
As shown in FIG. 5a, the distance between the equivalent ground point T2′ and the start point P1 of the target trajectory is L, with
L = √((X − T_x′)² + T_y′²)
The position coordinates of the trajectory point in the twisted coordinate system are then determined from the distance between the equivalent ground point and the start point of the target trajectory, the position coordinates of the ground point in the twisted coordinate system, and the camera's pitch angle at the trajectory point.
As shown in FIG. 5b, the position coordinates of the trajectory point P2 in the twisted coordinate system are (P_x, P_y), with
P_x = T_x + L × sin β
P_y = T_y + L × cos β
where β = π/2 − α2 is the angle between the line T2–P2 and the vertical direction, as shown in FIG. 5b. This gives the position coordinates of the trajectory point P2 of the target trajectory in the twisted coordinate system. Processing each trajectory point of the target trajectory in the same way yields the position coordinates of all trajectory points and the camera's pitch angles at all trajectory points, i.e., the target trajectory and the target shooting posture of the camera while the UAV moves along the target trajectory.
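The analytical procedure above can be collected into a single loop. The sketch below is a non-authoritative rendering of the derivation in the twisted coordinate system: it assumes H > r, that the sight line from P1 is intersected with the quarter circle x² + y² = r² (taking the far intersection, which lies on the equivalent ground), and that β = π/2 − α2. The symbols X, ω, θ, β, and L follow the text; the function name is illustrative.

```python
import math

def analytic_trajectory(H, r, omega, dt):
    """Sample trajectory points and camera pitch angles in the twisted
    coordinate system (x-axis through the start point P1, origin at height r).
    Assumes H > r; omega is the scan angular velocity, dt the time step."""
    X = H - r                         # start point P1 = (X, 0)
    alpha_eq = math.atan2(r, X)       # initial pitch angle alpha1
    points, pitches = [], []
    while alpha_eq > 1e-9:
        m = math.tan(alpha_eq)
        # far intersection of the sight line from P1 with the circle x^2 + y^2 = r^2
        tx_eq = (m * m * X - math.sqrt(r * r * (1 + m * m) - m * m * X * X)) / (1 + m * m)
        ty_eq = -(X - tx_eq) * m
        theta = math.asin(-tx_eq / r)          # angle of the equivalent ground point on the arc
        pitch = alpha_eq + theta               # camera pitch alpha2 at the trajectory point
        L = math.hypot(X - tx_eq, ty_eq)       # distance from P1 to the equivalent ground point
        tx, ty = -theta * r, -r                # real (unfolded) ground point T
        beta = math.pi / 2 - pitch             # angle of the chord T -> P from the vertical
        points.append((tx + L * math.sin(beta), ty + L * math.cos(beta)))
        pitches.append(pitch)
        alpha_eq -= omega * dt                 # sweep the equivalent pitch angle down
    return points, pitches
```

Under these assumptions, with H = 100 and r = 40 the first sampled point coincides with P1 = (60, 0) at pitch α1 = arctan(40/60), and the pitch approaches π/2 (a vertical view) as the equivalent pitch angle sweeps toward zero.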
In some examples, the user can set the shooting distance, the shooting height, and the bending radius among the shooting control parameters through the UAV's remote controller. For example, the user may select a "folded image mode" through the remote controller, and in the folded image mode input or select the above shooting control parameter values on the remote controller's display. In response to the user's operation, the remote controller sends a "folded image mode" instruction and the above shooting control parameter values to the UAV. Upon receiving the instruction and these parameter values, the UAV enters the folded image mode and determines the target trajectory and the target shooting posture of the camera while the UAV moves along the target trajectory.
In some examples, when the user selects the folded image mode, if the distance between the UAV and the ground equals the bending radius r set by the user, the UAV's current position can serve as the predetermined point, i.e., the UAV is located at the start point P1 of the target trajectory, as shown in FIG. 6. In this case, the shooting distance d refers to the distance between the projection point T0 of the UAV's current position on the ground and the ground point T6 corresponding to the end point P6 of the target trajectory. Once the user has set the direction of the target trajectory, the UAV can move from its current position along the target trajectory in that direction and shoot the target area to obtain the image set. In some cases, the user inputs or selects the direction of the target trajectory through the remote controller, which sends the set direction to the UAV in response to the user's operation. In other cases, the user need not input or select a direction; instead, the UAV's heading upon entering the folded image mode is taken as the direction of the target trajectory.
In some examples, when the user selects the folded image mode, the distance between the UAV and the ground does not equal the bending radius set by the user, so the UAV's current position is not the start point of the target trajectory, as shown in FIG. 7. In this case, the user also needs to set the start position of the target trajectory (the physical coordinates of the start point P1) and the direction of the target trajectory. The UAV then first moves from its current position to the start point P1, and from the start point P1 moves along the target trajectory in that direction and shoots the target area to obtain the image set. In some cases, the user may input or select the start position and the direction of the target trajectory through the remote controller, or may input or select the end position and the direction, from which the start position is then derived. In a feasible implementation, the UAV may also ascend or descend so that its distance to the ground equals the bending radius set by the user. In another feasible implementation, the user need not set a bending radius: the distance between the UAV and the ground at the moment the user selects the folded image mode is automatically taken as the bending radius.
After the image set is obtained through S102, S103 extracts the pixels to be spliced of each frame of image in the image set — the pixels to be spliced include a preset row of pixels or a preset column of pixels, and the shooting content corresponding to the pixels to be spliced of two adjacent frames does not overlap — and S104 splices the pixels to be spliced of the frames together in the shooting order of the image frames to generate the target image.
The camera can shoot at the target shooting posture at a predetermined rate or frame rate to obtain the image set. Each frame of the image set corresponds to a trajectory point of the target trajectory, so by extracting certain pixels from each frame and splicing the extracted pixels of the frames together according to a predetermined rule, the folded image of the target area is obtained.
In some examples, the extracted preset row of pixels includes one or more middle rows of pixels of each frame, and the preset column of pixels includes one or more middle columns of pixels of each frame. The extraction and splicing of preset rows of pixels is taken as an example below.
As shown in FIG. 8a, the image set may include n frames. For the 2nd to the (n−1)-th frames, the preset row of pixels of each frame is one or more middle rows of pixels. This embodiment does not limit the number of rows of the preset row of pixels; the number can be set as needed, as long as the shooting content corresponding to each frame's preset rows does not overlap that of the two adjacent frames' preset rows. After the preset rows of pixels are extracted, as shown in FIG. 8b, the preset pixels of the frames are spliced together in the shooting order of the image frames. This includes splicing together the preset rows of the 2nd to (n−1)-th frames, the preset row of the 1st frame together with all rows of pixels below it, and the preset row of the n-th frame together with all rows of pixels above it, obtaining the complete target image. FIG. 8c shows an example of the spliced target image: from the bottom of the image to the top, the viewing angle changes smoothly from an oblique view to a vertical view, producing a folded effect.
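The extract-and-splice rule just described (a middle band from every frame, plus the bottom remainder of the first frame and the top remainder of the last) can be sketched as follows. Images are represented simply as lists of pixel rows; the stacking order (latest frame on top) follows the oblique-to-vertical layout of FIG. 8c, and the function name and `band` parameter are illustrative:

```python
def splice_folded_image(frames, band=1):
    """frames: images in shooting order, each a list of pixel rows of equal height.
    Returns the spliced target image (a list of rows): the top comes from the
    last (vertical-view) frame, the bottom from the first (oblique-view) frame."""
    if len(frames) < 2:
        raise ValueError("need at least two frames")
    h = len(frames[0])
    lo = (h - band) // 2              # first row of the middle band
    hi = lo + band                    # one past the last row of the band
    rows = list(frames[-1][:hi])      # last frame: its band and all rows above it
    for frame in reversed(frames[1:-1]):
        rows.extend(frame[lo:hi])     # intermediate frames: middle band only
    rows.extend(frames[0][lo:])       # first frame: its band and all rows below it
    return rows
```

For n frames of height h this yields h + (n − 1)·band output rows, with adjacent frames contributing vertically adjacent, non-overlapping strips.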
It can thus be seen that with the imaging method of this embodiment, the user only needs to set the shooting control parameters: the target trajectory of the movable platform and the target shooting posture of the camera while the movable platform moves along the target trajectory are generated automatically, the image set is obtained, and pixel selection and splicing over the image set are completed automatically, realizing automatic generation of the folded image. Compared with the prior art, the user neither has to determine the target trajectory and the target shooting posture nor splice the images, completely removing the dependence on the user's feeling and experience; the operation is simple and fast, the entry barrier for folded images is greatly lowered, and popularization of this function is facilitated. Meanwhile, the selection and splicing of the images require no manual operation, saving time and labor and greatly improving work efficiency; the resulting folded image is smooth and fluent, with segmentation traces greatly suppressed or even eliminated, improving the quality of the folded image and the user experience.
The above is only an exemplary illustration of this embodiment. The target trajectory in the drawings includes six trajectory points P1–P6, but those skilled in the art will understand that this is only for convenience of description; the target trajectory is composed of a series of trajectory points. Likewise, the target trajectory in the drawings is shaped as an arc, but this is only one form of the target trajectory, which can take various other forms, such as the target trajectories shown in FIGS. 9 to 11.
Another embodiment of the present disclosure provides an imaging method. For brevity, features identical or similar to those of the previous embodiment are not repeated; only the differences from the previous embodiment are described below.
The imaging method of this embodiment can automatically generate the target trajectory of the UAV and the target shooting posture of the camera by fitting. Specifically,
first, the position coordinates of a set of control points in the twisted coordinate system are acquired; the set of control points includes at least the start point and the end point of the target trajectory.
A fitted curve is determined according to the position coordinates of the set of control points, and the position coordinates of the points on the fitted curve in the twisted coordinate system are taken as the position coordinates of the trajectory points of the target trajectory in the twisted coordinate system.
The fitting approach likewise requires the twisted coordinate system. Referring to the related description of the previous embodiment and as shown in FIG. 12, the position coordinates in the twisted coordinate system of the start point P1 and the end point P6 of the target trajectory, serving as control points, can be determined from the shooting height H and the bending radius r. The coordinates of the start point P1 in the twisted coordinate system are (H − r, 0), and the coordinates of the end point P6 in the twisted coordinate system are
(−(π/2) × r, H − r)
In some examples, using the position coordinates of the start point and the end point, the fitted target trajectory may be a spline curve, for example a cubic spline. In other examples, the set of control points may further include at least one intermediate trajectory point between the start point P1 and the end point P6.
After fitting the target trajectory of the UAV, the camera's target pitch angle must still be determined. In some examples, the camera's target pitch angle while the UAV moves along the target trajectory is determined such that, while the UAV moves along the target trajectory at a first predetermined speed to the end point of the target trajectory, the intersection point of the camera's optical axis with the ground moves at a second predetermined speed toward the projection point of the end point of the target trajectory on the ground; the angle between the horizontal plane and the line connecting the camera with that intersection point during the movement is taken as the camera's target pitch angle. The values of the first and second predetermined speeds may be set by the user through the remote controller or preset in the UAV. As shown in FIG. 13, the UAV moves from the start point P1 along the target trajectory at the first predetermined speed to the end point P6; the intersection point T1 of the camera's optical axis with the ground (i.e., the ground point corresponding to the start point P1) moves at the second predetermined speed toward the ground point T6 corresponding to the end point P6, and the angle between the horizontal plane and the line connecting the camera with the intersection point during this movement is taken as the camera's target pitch angle.
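A minimal sketch of this pitch schedule, assuming the fitted trajectory has already been sampled into camera positions at a fixed time step and the ground is the horizontal line y = ground_y in the twisted coordinate system (all names are illustrative):

```python
import math

def pitch_schedule(cam_positions, ground_y, x_start, x_end, v2, dt):
    """cam_positions: camera (x, y) samples, one per time step dt, taken while
    the platform flies the fitted trajectory at the first predetermined speed.
    The optical-axis/ground intersection starts at x_start (the start point's
    ground point) and moves toward x_end (the end point's ground projection)
    at the second predetermined speed v2. Returns one pitch angle per sample."""
    pitches = []
    x = x_start
    step = v2 * dt * (1.0 if x_end >= x_start else -1.0)
    for cx, cy in cam_positions:
        # pitch = angle between the horizontal plane and the camera->intersection line
        pitches.append(math.atan2(cy - ground_y, abs(x - cx)))
        x += step
        if (step >= 0 and x > x_end) or (step < 0 and x < x_end):
            x = x_end                  # clamp once the intersection reaches the end
    return pitches
```

When the camera ends up directly above the intersection point, the computed pitch reaches π/2, matching the vertical view required at the end of the trajectory.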
It can thus be seen that with the imaging method of this embodiment, the user only needs to set the positions of several control points, and the target trajectory of the movable platform and the target shooting posture of the camera while the movable platform moves along the target trajectory are generated automatically. This not only simplifies the user's operation, improves work efficiency, and improves the quality of the folded image; the shooting control parameters the user must input are also more intuitive and concise, no complicated calculation process is needed, and the process of generating the folded image is simpler and more efficient.
The above is only an exemplary illustration of this embodiment. The target trajectory in the drawings is shaped as an arc, but those skilled in the art will understand that this is only one form of the target trajectory, which can take various other forms, such as the target trajectories shown in FIGS. 14 and 15.
Yet another embodiment of the present disclosure further provides an imaging apparatus, as shown in FIG. 16, including:
a memory, configured to store executable instructions;
a processor, configured to execute the executable instructions stored in the memory to perform the following operations:
acquiring shooting control parameters, automatically generating the target trajectory of the movable platform according to the shooting control parameters, and determining the target shooting posture of the photographing device while the movable platform moves along the target trajectory;
controlling the movable platform to move along the target trajectory, and controlling the photographing device to shoot according to the target shooting posture during the movement, so as to obtain an image set;
extracting pixels to be spliced of each frame of image in the image set, the pixels to be spliced including a preset row of pixels or a preset column of pixels, the shooting content corresponding to the pixels to be spliced of two adjacent frames of images not overlapping each other;
splicing the pixels to be spliced of the frames of images together in the shooting order of the image frames to generate the target image.
In the imaging apparatus of this embodiment, the processor can perform operations corresponding to the steps of the imaging method of the above embodiments.
In some examples, the target trajectory is characterized by the following parameter: the position coordinates of the trajectory points; and the target shooting posture includes the pitch angle of the photographing device at the trajectory points.
In some examples, the target trajectory of the movable platform is automatically generated analytically and/or by fitting.
In some examples, the shooting control parameters include: the shooting height, the bending radius, the position of the start point of the target trajectory, and the direction of the target trajectory.
In some examples, the shooting height and the bending radius sent by a control apparatus are acquired; or the shooting height and the shooting distance in said direction sent by the control apparatus are acquired, and the bending radius is determined according to the shooting height and the shooting distance.
In some examples, the shooting distance is the distance between the start point and the end point of the target trajectory, and the shooting height is the height of the end point of the target trajectory above the ground.
In some examples, a twisted coordinate system is constructed according to the shooting control parameters, and the position coordinates of the trajectory points of the target trajectory in the twisted coordinate system are determined.
In some examples, the horizontal axis of the twisted coordinate system is a horizontal line extending through the start point of the target trajectory toward the end point of the target trajectory; the origin of the twisted coordinate system lies on this horizontal line, and the distance between the origin and the start point of the target trajectory is the difference between the shooting height and the bending radius.
In some examples, the bending radius is the distance between the origin and the ground.
In some examples, the processor is further configured to perform the following operations: acquiring the position coordinates of a set of control points in the twisted coordinate system, the set of control points including at least the start point and the end point of the target trajectory; determining a fitted curve according to the position coordinates of the set of control points, and taking the position coordinates of the points on the fitted curve in the twisted coordinate system as the position coordinates of the trajectory points of the target trajectory in the twisted coordinate system.
In some examples, the position coordinates of the set of control points in the twisted coordinate system are determined by the shooting height and the bending radius.
In some examples, the set of control points further includes at least one intermediate point between the start point and the end point.
In some examples, the target shooting posture includes a target pitch angle, and the processor is further configured to perform the following operation:
determining the target shooting posture of the photographing device while the movable platform moves along the target trajectory, such that when the movable platform moves along the target trajectory at a first predetermined speed to the end point of the target trajectory, the intersection point of the optical axis of the photographing device with the ground moves at a second predetermined speed toward the projection point of the end point of the target trajectory on the ground.
In some examples, the processor is further configured to perform the following operation: taking the angle between the horizontal plane and the line connecting the photographing device with the intersection point during the movement as the target pitch angle.
In some examples, the preset row of pixels includes one or more middle rows of pixels of the image, and the preset column of pixels includes one or more middle columns of pixels of the image; the preset rows of pixels of the first frame of image include all rows of pixels below the preset row, and the preset columns of pixels of the first frame include all columns of pixels below the preset column; the preset rows of pixels of the last frame of image include all rows of pixels above the preset row, and the preset columns of pixels of the last frame include all columns of pixels above the preset column.
Yet another embodiment of the present disclosure further provides a computer-readable storage medium storing executable instructions which, when executed by one or more processors, cause the one or more processors to perform the imaging method of the above embodiments.
The computer-readable storage medium may be, for example, any medium that can contain, store, communicate, propagate, or transport instructions. For example, the readable storage medium may include, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. Specific examples of readable storage media include: magnetic storage devices, such as magnetic tape or hard disks (HDD); optical storage devices, such as optical discs (CD-ROM); memories, such as random access memory (RAM) or flash memory; and/or wired/wireless communication links.
In addition, a computer program may be configured with computer program code including, for example, computer program modules. It should be noted that the manner of division and the number of modules are not fixed; those skilled in the art may use appropriate program modules or combinations of program modules according to the actual situation. When these combinations of program modules are executed by a computer (or processor), the computer is caused to execute the flow of the imaging method described in the present disclosure and variations thereof.
Yet another embodiment of the present disclosure further provides a movable platform, as shown in FIG. 17, including the imaging apparatus of the above embodiments. The imaging apparatus is mounted on the movable platform directly, or mounted on the movable platform through a gimbal. In some examples, the movable platform is a UAV.
Yet another embodiment of the present disclosure further provides an electronic device, as shown in FIG. 18, including the imaging apparatus of the above embodiments. In some examples, the electronic device is a remote controller of a movable platform.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the above division into functional modules is merely illustrative; in practical applications, the above functions may be assigned to different functional modules as needed, i.e., the internal structure of the apparatus may be divided into different functional modules to accomplish all or part of the functions described above. For the specific working process of the apparatus described above, reference may be made to the corresponding process in the foregoing method embodiments, which is not repeated here.
Finally, it should be noted that the above embodiments are only intended to illustrate, not to limit, the technical solutions of the present disclosure. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced; where no conflict arises, the features in the embodiments of the present disclosure may be combined arbitrarily; and such modifications or replacements do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present disclosure.

Claims (39)

  1. An imaging method for a movable platform, the movable platform including a photographing device, the method comprising:
    acquiring shooting control parameters, automatically generating a target trajectory of the movable platform according to the shooting control parameters, and determining a target shooting posture of the photographing device while the movable platform moves along the target trajectory;
    controlling the movable platform to move along the target trajectory, and controlling the photographing device to shoot according to the target shooting posture during the movement, so as to obtain an image set;
    extracting pixels to be spliced of each frame of image in the image set, the pixels to be spliced comprising a preset row of pixels or a preset column of pixels, the shooting content corresponding to the pixels to be spliced of two adjacent frames of images not overlapping each other;
    splicing the pixels to be spliced of the frames of images together in the shooting order of the image frames to generate a target image.
  2. The imaging method of claim 1, wherein
    the target trajectory is characterized by the following parameter: the position coordinates of the trajectory points;
    the target shooting posture comprises: the pitch angle of the photographing device at the trajectory points.
  3. The imaging method of claim 1, wherein automatically generating the target trajectory of the movable platform according to the shooting control parameters comprises:
    automatically generating the target trajectory of the movable platform analytically and/or by fitting.
  4. The imaging method of claim 1, wherein the shooting control parameters comprise: a shooting height, a bending radius, the position of the start point of the target trajectory, and the direction of the target trajectory.
  5. The imaging method of claim 4, wherein acquiring the shooting control parameters comprises:
    acquiring the shooting height and the bending radius sent by a control apparatus;
    or,
    acquiring the shooting height and the shooting distance in said direction sent by the control apparatus;
    determining the bending radius according to the shooting height and the shooting distance.
  6. The imaging method of claim 5, wherein
    the shooting distance is the distance between the start point and the end point of the target trajectory;
    the shooting height is the height of the end point of the target trajectory above the ground.
  7. The imaging method of claim 4, wherein automatically generating the target trajectory of the movable platform according to the shooting control parameters comprises:
    constructing a twisted coordinate system according to the shooting control parameters, and determining the position coordinates of the trajectory points of the target trajectory in the twisted coordinate system.
  8. The imaging method of claim 7, wherein
    the horizontal axis of the twisted coordinate system is a horizontal line extending through the start point of the target trajectory toward the end point of the target trajectory, the distance between the origin of the twisted coordinate system and the start point of the target trajectory is the difference between the shooting height and the bending radius, and the origin lies on the horizontal line.
  9. The imaging method of claim 8, wherein the bending radius is the distance between the origin and the ground.
  10. The imaging method of claim 7, wherein determining the position coordinates of the trajectory points of the target trajectory in the twisted coordinate system comprises:
    acquiring the position coordinates of a set of control points in the twisted coordinate system, the set of control points comprising at least the start point and the end point of the target trajectory;
    determining a fitted curve according to the position coordinates of the set of control points, and taking the position coordinates of the points on the fitted curve in the twisted coordinate system as the position coordinates of the trajectory points of the target trajectory in the twisted coordinate system.
  11. The imaging method of claim 10, wherein the position coordinates of the set of control points in the twisted coordinate system are determined by the shooting height and the bending radius.
  12. The imaging method of claim 10, wherein the set of control points further comprises: at least one intermediate point between the start point and the end point.
  13. The imaging method of claim 1, wherein the target shooting posture comprises: a target pitch angle;
    and determining the target shooting posture of the photographing device while the movable platform moves along the target trajectory comprises:
    determining the target shooting posture of the photographing device while the movable platform moves along the target trajectory such that, when the movable platform moves along the target trajectory at a first predetermined speed to the end point of the target trajectory, the intersection point of the optical axis of the photographing device with the ground moves at a second predetermined speed toward the projection point of the end point of the target trajectory on the ground.
  14. The imaging method of claim 13, wherein the angle between the horizontal plane and the line connecting the photographing device with the intersection point during the movement is taken as the target pitch angle.
  15. The imaging method of claim 1, wherein
    the preset row of pixels comprises: one or more middle rows of pixels of the image;
    the preset column of pixels comprises: one or more middle columns of pixels of the image.
  16. The imaging method of claim 1, wherein
    the preset rows of pixels of the first frame of image comprise: all rows of pixels below the preset row;
    the preset columns of pixels of the first frame of image comprise: all columns of pixels below the preset column.
  17. The imaging method of claim 1, wherein
    the preset rows of pixels of the last frame of image comprise: all rows of pixels above the preset row;
    the preset columns of pixels of the last frame of image comprise: all columns of pixels above the preset column.
  18. An imaging apparatus, comprising:
    a memory, configured to store executable instructions;
    a processor, configured to execute the executable instructions stored in the memory to perform the following operations:
    acquiring shooting control parameters, automatically generating the target trajectory of the movable platform according to the shooting control parameters, and determining the target shooting posture of the photographing device while the movable platform moves along the target trajectory;
    controlling the movable platform to move along the target trajectory, and controlling the photographing device to shoot according to the target shooting posture during the movement, so as to obtain an image set;
    extracting pixels to be spliced of each frame of image in the image set, the pixels to be spliced comprising a preset row of pixels or a preset column of pixels, the shooting content corresponding to the pixels to be spliced of two adjacent frames of images not overlapping each other;
    splicing the pixels to be spliced of the frames of images together in the shooting order of the image frames to generate the target image.
  19. The imaging apparatus of claim 18, wherein
    the target trajectory is characterized by the following parameter: the position coordinates of the trajectory points;
    the target shooting posture comprises: the pitch angle of the photographing device at the trajectory points.
  20. The imaging apparatus of claim 18, wherein the processor is further configured to perform the following operation:
    automatically generating the target trajectory of the movable platform analytically and/or by fitting.
  21. The imaging apparatus of claim 18, wherein the shooting control parameters comprise: a shooting height, a bending radius, the position of the start point of the target trajectory, and the direction of the target trajectory.
  22. The imaging apparatus of claim 21, wherein the processor is further configured to perform the following operations:
    acquiring the shooting height and the bending radius sent by a control apparatus;
    or,
    acquiring the shooting height and the shooting distance in said direction sent by the control apparatus;
    determining the bending radius according to the shooting height and the shooting distance.
  23. The imaging apparatus of claim 22, wherein
    the shooting distance is the distance between the start point and the end point of the target trajectory;
    the shooting height is the height of the end point of the target trajectory above the ground.
  24. The imaging apparatus of claim 21, wherein the processor is further configured to perform the following operation:
    constructing a twisted coordinate system according to the shooting control parameters, and determining the position coordinates of the trajectory points of the target trajectory in the twisted coordinate system.
  25. The imaging apparatus of claim 24, wherein
    the horizontal axis of the twisted coordinate system is a horizontal line extending through the start point of the target trajectory toward the end point of the target trajectory, the distance between the origin of the twisted coordinate system and the start point of the target trajectory is the difference between the shooting height and the bending radius, and the origin lies on the horizontal line.
  26. The imaging apparatus of claim 25, wherein the bending radius is the distance between the origin and the ground.
  27. The imaging apparatus of claim 24, wherein the processor is further configured to perform the following operations:
    acquiring the position coordinates of a set of control points in the twisted coordinate system, the set of control points comprising at least the start point and the end point of the target trajectory;
    determining a fitted curve according to the position coordinates of the set of control points, and taking the position coordinates of the points on the fitted curve in the twisted coordinate system as the position coordinates of the trajectory points of the target trajectory in the twisted coordinate system.
  28. The imaging apparatus of claim 27, wherein the position coordinates of the set of control points in the twisted coordinate system are determined by the shooting height and the bending radius.
  29. The imaging apparatus of claim 27, wherein the set of control points further comprises: at least one intermediate point between the start point and the end point.
  30. The imaging apparatus of claim 18, wherein the target shooting posture comprises: a target pitch angle;
    and the processor is further configured to perform the following operation:
    determining the target shooting posture of the photographing device while the movable platform moves along the target trajectory such that, when the movable platform moves along the target trajectory at a first predetermined speed to the end point of the target trajectory, the intersection point of the optical axis of the photographing device with the ground moves at a second predetermined speed toward the projection point of the end point of the target trajectory on the ground.
  31. The imaging apparatus of claim 30, wherein the processor is further configured to perform the following operation:
    taking the angle between the horizontal plane and the line connecting the photographing device with the intersection point during the movement as the target pitch angle.
  32. The imaging apparatus of claim 18, wherein
    the preset row of pixels comprises: one or more middle rows of pixels of the image;
    the preset column of pixels comprises: one or more middle columns of pixels of the image.
  33. The imaging apparatus of claim 18, wherein
    the preset rows of pixels of the first frame of image comprise: all rows of pixels below the preset row;
    the preset columns of pixels of the first frame of image comprise: all columns of pixels below the preset column.
  34. The imaging apparatus of claim 18, wherein
    the preset rows of pixels of the last frame of image comprise: all rows of pixels above the preset row;
    the preset columns of pixels of the last frame of image comprise: all columns of pixels above the preset column.
  35. A computer-readable storage medium storing executable instructions which, when executed by one or more processors, cause the one or more processors to perform the imaging method of any one of claims 1 to 17.
  36. A movable platform, comprising the imaging apparatus of any one of claims 18 to 34.
  37. The movable platform of claim 36, wherein the movable platform is an unmanned aerial vehicle.
  38. An electronic device, comprising the imaging apparatus of any one of claims 18 to 34.
  39. The electronic device of claim 38, wherein the electronic device is a remote controller of a movable platform.
PCT/CN2020/093940 2020-06-02 2020-06-02 成像方法、成像装置、计算机可读存储介质 WO2021243566A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202080005244.2A CN112771842A (zh) 2020-06-02 2020-06-02 成像方法、成像装置、计算机可读存储介质
PCT/CN2020/093940 WO2021243566A1 (zh) 2020-06-02 2020-06-02 成像方法、成像装置、计算机可读存储介质

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/093940 WO2021243566A1 (zh) 2020-06-02 2020-06-02 成像方法、成像装置、计算机可读存储介质

Publications (1)

Publication Number Publication Date
WO2021243566A1 true WO2021243566A1 (zh) 2021-12-09

Family

ID=75699520

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/093940 WO2021243566A1 (zh) 2020-06-02 2020-06-02 成像方法、成像装置、计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN112771842A (zh)
WO (1) WO2021243566A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115546111A (zh) * 2022-09-13 2022-12-30 武汉海微科技有限公司 曲面屏检测方法、装置、设备及存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104754228A (zh) * 2015-03-27 2015-07-01 广东欧珀移动通信有限公司 一种利用移动终端摄像头拍照的方法及移动终端
CN105262951A (zh) * 2015-10-22 2016-01-20 努比亚技术有限公司 具有双目摄像头的移动终端及其拍照方法
US20170068246A1 (en) * 2014-07-30 2017-03-09 SZ DJI Technology Co., Ltd Systems and methods for target tracking
CN107945112A (zh) * 2017-11-17 2018-04-20 浙江大华技术股份有限公司 一种全景图像拼接方法及装置
CN108513642A (zh) * 2017-07-31 2018-09-07 深圳市大疆创新科技有限公司 一种图像处理方法、无人机、地面控制台及其图像处理系统
CN110648283A (zh) * 2019-11-27 2020-01-03 成都纵横大鹏无人机科技有限公司 图像拼接方法、装置、电子设备和计算机可读存储介质

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104168455B (zh) * 2014-08-08 2018-03-09 北京航天控制仪器研究所 一种空基大场景摄像系统及方法
KR20160113903A (ko) * 2015-03-23 2016-10-04 엘지전자 주식회사 이동 단말기 및 그것의 제어방법
CN105573341B (zh) * 2016-01-22 2018-08-10 深圳泰山体育科技股份有限公司 一种飞行器光学控制方法及系统
CN106792078A (zh) * 2016-07-12 2017-05-31 乐视控股(北京)有限公司 视频处理方法及装置
CN106331527B (zh) * 2016-10-12 2019-05-17 腾讯科技(北京)有限公司 一种图像拼接方法及装置
CN107343153A (zh) * 2017-08-31 2017-11-10 王修晖 一种无人设备的拍摄方法、装置及无人机
CN108702447B (zh) * 2017-09-29 2021-11-12 深圳市大疆创新科技有限公司 一种视频处理方法、设备、无人机及系统、计算机可读存储介质
CN107516294B (zh) * 2017-09-30 2020-10-13 百度在线网络技术(北京)有限公司 拼接图像的方法和装置
CN110073403A (zh) * 2017-11-21 2019-07-30 深圳市大疆创新科技有限公司 输出影像生成方法、设备及无人机
CN109032184B (zh) * 2018-09-05 2021-07-09 深圳市道通智能航空技术股份有限公司 飞行器的飞行控制方法、装置、终端设备及飞行控制系统
WO2020087346A1 (zh) * 2018-10-31 2020-05-07 深圳市大疆创新科技有限公司 拍摄控制方法、可移动平台、控制设备及存储介质
CN111192286A (zh) * 2018-11-14 2020-05-22 西安中兴新软件有限责任公司 一种图像合成方法、电子设备及存储介质
CN110751683A (zh) * 2019-10-28 2020-02-04 北京地平线机器人技术研发有限公司 轨迹预测方法、装置、可读存储介质及电子设备
CN110717861B (zh) * 2019-12-12 2020-03-20 成都纵横大鹏无人机科技有限公司 图像拼接方法、装置、电子设备和计算机可读存储介质

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170068246A1 (en) * 2014-07-30 2017-03-09 SZ DJI Technology Co., Ltd Systems and methods for target tracking
CN104754228A (zh) * 2015-03-27 2015-07-01 广东欧珀移动通信有限公司 一种利用移动终端摄像头拍照的方法及移动终端
CN105262951A (zh) * 2015-10-22 2016-01-20 努比亚技术有限公司 具有双目摄像头的移动终端及其拍照方法
CN108513642A (zh) * 2017-07-31 2018-09-07 深圳市大疆创新科技有限公司 一种图像处理方法、无人机、地面控制台及其图像处理系统
CN107945112A (zh) * 2017-11-17 2018-04-20 浙江大华技术股份有限公司 一种全景图像拼接方法及装置
CN110648283A (zh) * 2019-11-27 2020-01-03 成都纵横大鹏无人机科技有限公司 图像拼接方法、装置、电子设备和计算机可读存储介质

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115546111A (zh) * 2022-09-13 2022-12-30 武汉海微科技有限公司 曲面屏检测方法、装置、设备及存储介质
CN115546111B (zh) * 2022-09-13 2023-12-05 武汉海微科技有限公司 曲面屏检测方法、装置、设备及存储介质

Also Published As

Publication number Publication date
CN112771842A (zh) 2021-05-07

Similar Documents

Publication Publication Date Title
US11120261B2 (en) Imaging control method and device
US10871258B2 (en) Method and system for controlling gimbal
JP7020522B2 (ja) 情報処理装置、情報処理方法、コンピュータ読み取り可能な媒体、撮像システム、および飛行体
US11914370B2 (en) System and method for providing easy-to-use release and auto-positioning for drone applications
US11340606B2 (en) System and method for controller-free user drone interaction
CN113038016B (zh) 无人机图像采集方法及无人机
US11513511B2 (en) Techniques for image recognition-based aerial vehicle navigation
CN108702444B (zh) 一种图像处理方法、无人机及系统
WO2020014909A1 (zh) 拍摄方法、装置和无人机
WO2019127395A1 (zh) 一种无人机拍照方法、图像处理方法和装置
WO2021098453A1 (zh) 目标跟踪方法及无人飞行器
US11961407B2 (en) Methods and associated systems for managing 3D flight paths
WO2019227333A1 (zh) 集体照拍摄方法和装置
WO2020249088A1 (zh) 一种移动目标的追踪方法、装置及无人机
WO2022141369A1 (en) Systems and methods for supporting automatic video capture and video editing
WO2021243566A1 (zh) 成像方法、成像装置、计算机可读存储介质
WO2021056352A1 (zh) 无人机的仿真方法、仿真装置和计算机可读存储介质
WO2022152050A1 (zh) 一种对象检测方法、装置、计算机设备及存储介质
WO2021217403A1 (zh) 可移动平台的控制方法、装置、设备及存储介质
WO2021056411A1 (zh) 航线调整方法、地面端设备、无人机、系统和存储介质
WO2022056683A1 (zh) 视场确定方法、视场确定装置、视场确定系统和介质
CN113781524A (zh) 一种基于二维标签的目标追踪系统及方法
WO2020000386A1 (zh) 一种飞行控制方法、设备、系统及存储介质
WO2023123254A1 (zh) 无人机的控制方法、装置、无人机及存储介质
KR102481122B1 (ko) 드론 제어 시스템 및 그것의 제어방법

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20938612

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20938612

Country of ref document: EP

Kind code of ref document: A1