WO2022141123A1 - Movable platform and control method, apparatus, terminal device and storage medium therefor - Google Patents

Movable platform and control method, apparatus, terminal device and storage medium therefor

Info

Publication number
WO2022141123A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
feature point
distance
target
pose
Prior art date
Application number
PCT/CN2020/141086
Other languages
English (en)
French (fr)
Inventor
宋春林
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to PCT/CN2020/141086
Publication of WO2022141123A1

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/10Simultaneous control of position or course in three dimensions

Definitions

  • the present application relates to the technical field of image capturing, and in particular, to a movable platform and its control method, device, terminal device and storage medium.
  • the present application provides a movable platform and a control method, device, terminal device and storage medium thereof, which can accurately realize repeated shooting according to reference images.
  • an embodiment of the present application provides a method for controlling a movable platform, wherein the movable platform includes a first photographing device, including:
  • the actual pose of the first photographing device is adjusted according to the position of the first feature point in the current image and the position of the second feature point in the reference image.
  • an embodiment of the present application provides a control device for a movable platform for controlling the movable platform, wherein the movable platform includes a first photographing device, including a memory and one or more processors;
  • the memory for storing program instructions
  • the one or more processors operating individually or collectively, invoke and execute the program instructions for performing the steps of:
  • the actual pose of the first photographing device is adjusted according to the position of the first feature point in the current image and the position of the second feature point in the reference image.
  • an embodiment of the present application provides a terminal device capable of communicating with a movable platform
  • the terminal device includes a memory and one or more processors
  • the memory is used to store program instructions
  • the one or more processors operating individually or collectively, invoke and execute the program instructions for performing the steps of:
  • the actual pose of the first photographing device is adjusted according to the position of the first feature point in the current image and the position of the second feature point in the reference image.
  • an embodiment of the present application provides a movable platform, including a first photographing device, a memory, and one or more processors, where the first photographing device is used to acquire an image;
  • the memory for storing program instructions
  • the one or more processors operating individually or collectively, invoke and execute the program instructions for performing the steps of:
  • the actual pose of the first photographing device is adjusted according to the position of the first feature point in the current image and the position of the second feature point in the reference image.
  • an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores program instructions, and when the program instructions are executed by a processor, the processor implements the foregoing method.
  • the embodiments of the present application provide a movable platform and a control method, device, terminal device and storage medium thereof, which adjust the actual pose of the first shooting device according to the reference pose indication parameter with which the reference image was captured, and acquire the current image captured by the first shooting device; determine the first feature point of the target image area in the current image and the second feature point in the reference image that matches the first feature point; and adjust the actual pose of the first photographing device according to the position of the first feature point in the current image and the position of the second feature point in the reference image.
  • in this way, the actual pose of the first photographing device tends toward the pose at which the reference image was captured, so that the current image acquired by the first photographing device can be closer to the reference image.
  • FIG. 1 is a schematic flowchart of a control method for a movable platform provided by an embodiment of the present application
  • FIG. 2 is a schematic diagram of data transmission between a terminal device and a movable platform
  • FIG. 3 is a schematic diagram of a scene in which repeated shooting is realized according to a reference image
  • FIG. 4 is a schematic diagram of a first photographing device photographing an image at a target pose
  • FIG. 5 is a schematic diagram of matched feature points in a target image area and a reference image
  • FIG. 6 is a schematic diagram of a current image captured after adjusting the actual pose
  • FIG. 7 is a schematic diagram of determining the image distance at the time of focusing
  • FIG. 8 is a schematic block diagram of a control device of a movable platform provided by an embodiment of the present application.
  • FIG. 9 is a schematic block diagram of a terminal device provided by an embodiment of the present application.
  • FIG. 10 is a schematic block diagram of a movable platform provided by an embodiment of the present application.
  • FIG. 1 is a schematic flowchart of a method for controlling a movable platform provided by an embodiment of the present application.
  • the control method of the movable platform can be applied to the movable platform and/or the terminal device, for processes such as repeatedly photographing the scene at a certain place according to a reference image.
  • the movable platform may include at least one of an unmanned aerial vehicle, a gimbal, an unmanned vehicle, and the like.
  • the aerial vehicle can be a rotary-wing UAV, such as a quad-rotor, hexa-rotor or octa-rotor UAV, or it can be a fixed-wing UAV.
  • the terminal device may include at least one of a mobile phone, a tablet computer, a notebook computer, a desktop computer, a wearable device, a remote control, and the like.
  • FIG. 2 is a schematic diagram of a scenario for implementing the control method provided by the embodiment of the present application.
  • the scenario includes the UAV 100 and a terminal device 200 , the UAV 100 is connected to the terminal device 200 in communication, and the terminal device 200 is used to control the UAV 100 .
  • the UAV 100 includes a body 110 and a power system 120 disposed on the body 110.
  • the power system 120 may include one or more propellers 121, one or more motors 122 corresponding to the one or more propellers, and one or more electronic speed controllers (referred to as ESCs for short).
  • the motor 122 is connected between the electronic speed controller and the propeller 121, and the motor 122 and the propeller 121 are arranged on the body 110 of the unmanned aerial vehicle 100; the electronic speed controller is used to receive the driving signal generated by the control system and provide a driving current to the motor 122 according to the driving signal, so as to control the rotational speed of the motor 122.
  • the motor 122 is used to drive the propeller 121 to rotate, thereby providing power for the flight of the unmanned aerial vehicle 100, and the power enables the unmanned aerial vehicle 100 to realize the movement of one or more degrees of freedom.
  • UAV 100 may rotate about one or more axes of rotation.
  • the above-mentioned rotation axes may include a roll axis, a yaw axis, and a pitch axis.
  • the motor 122 may be a DC motor or an AC motor.
  • the motor 122 may be a brushless motor or a brushed motor.
  • the unmanned aerial vehicle 100 further includes a controller and a sensing system (not shown in FIG. 2); the sensing system is used to measure the attitude information of the unmanned aerial vehicle, that is, the position information and state information of the unmanned aerial vehicle 100 in space, for example, 3D position, 3D angle, 3D velocity, 3D acceleration, 3D angular velocity, etc.
  • the sensing system may include at least one of a gyroscope, an ultrasonic sensor, an electronic compass, an inertial measurement unit (Inertial Measurement Unit, IMU), a visual sensor, a global navigation satellite system, a barometer, and other sensors.
  • the global navigation satellite system may be the Global Positioning System (GPS).
  • the controller is used to control the movement of the unmanned aerial vehicle 100, for example, the movement of the unmanned aerial vehicle 100 can be controlled according to the attitude information measured by the sensing system. It should be understood that the controller may control the UAV 100 according to pre-programmed instructions.
  • data is transmitted between the terminal device and the movable platform through a wireless channel.
  • the wireless channel from the movable platform to the terminal device is called the downlink channel and is used to transmit data of the movable platform, such as videos and pictures acquired by the first photographing device, sensor data acquired by sensors, and telemetry data such as the state information (OSD) of the movable platform, e.g. of the drone.
  • the wireless channel from the terminal device to the movable platform is called the uplink channel, which is used to transmit remote control data; for example, when the movable platform is an unmanned aerial vehicle, the uplink channel is used to transmit flight control instructions and control commands such as taking pictures, recording video and returning home.
  • the movable platform includes a first photographing device, and the first photographing device is used for acquiring images.
  • the first photographing device is mounted on the body of the unmanned aerial vehicle through the gimbal, and the pose of the first photographing device can be adjusted by adjusting the position of the unmanned aerial vehicle, the attitude of the unmanned aerial vehicle and/or the pose of the gimbal.
  • the first photographing device may be directly mounted on the body of the unmanned aerial vehicle, and the pose of the first photographing device may be adjusted by adjusting the pose of the unmanned aerial vehicle.
  • the pose includes a position and/or an attitude.
  • FIG. 3 is a schematic diagram of a scene in which repeated shooting is currently implemented according to a reference image.
  • the left side of FIG. 3 shows a scene when a reference image is captured.
  • the reference image is obtained by aiming at a tree by a second capturing device, and the reference image also includes a house behind the tree.
  • as shown on the right side of FIG. 3, when the scene at this place is re-photographed, a difference between the pose of the first photographing device and the pose of the second photographing device when the reference image was taken, or the influence of objects near the tree, such as the house, may cause the first photographing device to aim at the house for the repeated shooting.
  • the image obtained by the repeated shooting, which may be called the re-shot image, is shown on the right side of FIG. 3; its deviation from the reference image is large, and the accuracy of the repeated shooting is poor.
  • the inventor of the present application has improved the control method of the movable platform to improve the accuracy of repeated shooting according to the reference image.
  • the control method of the movable platform according to the embodiment of the present application includes steps S110 to S150.
  • for ease of description, the photographing device that performs the repeated shooting is referred to as the first photographing device, and the photographing device that captured the reference image is referred to as the second photographing device. It can be understood that the first photographing device and the second photographing device may be the same photographing device, or may be different photographing devices.
  • the second photographing device records its own pose information when capturing the reference image, such as the position of the unmanned aerial vehicle, the nose direction and the attitude of the gimbal; this pose information may be referred to as the reference pose indication parameter.
  • the reference image and the reference pose indication parameter corresponding to the reference image can be sent to the terminal device, so that the terminal device can determine the reference image and the reference pose indication parameter corresponding to the reference image.
  • the user may view the image captured by the second photographing device on the terminal device, and may determine any frame of images as the reference image, for example, may determine the image containing the target object of interest as the reference image.
  • alternatively, a reference image can be determined according to a preset artificial intelligence algorithm, for example, an image of a target object with a potential safety hazard is determined as the reference image; or images to be screened can be determined according to a preset artificial intelligence algorithm, for example, images of target objects with potential safety hazards are determined as the images to be screened, and the user then determines the reference image among the images to be screened.
  • S120: Determine the pose indicated by the reference pose indication parameter as the target pose of the first photographing device, control and adjust the actual pose of the first photographing device according to the target pose, and acquire the current image captured by the first photographing device.
  • the first photographing device is mounted on the body of the unmanned aerial vehicle through the gimbal, and the pose of the first photographing device can be adjusted by adjusting the position of the unmanned aerial vehicle, the attitude of the unmanned aerial vehicle and/or the pose of the gimbal.
  • the first photographing device may be directly mounted on the body of the unmanned aerial vehicle, and the pose of the first photographing device may be adjusted by adjusting the pose of the unmanned aerial vehicle, where a pose includes a position and/or an attitude.
  • FIG. 4 is a schematic diagram of the current image captured by the first capturing device after the actual pose of the first capturing device is controlled and adjusted according to the target pose.
  • the reference pose indication parameter is sent to the unmanned aerial vehicle, so that the unmanned aerial vehicle moves to the position indicated by the reference pose indication parameter, turns to the nose direction corresponding to the reference pose indication parameter, and adjusts the attitude of the gimbal to be the same as the gimbal attitude indicated by the reference pose indication parameter, whereby the actual pose of the first camera can be adjusted to be the same as or close to the pose of the second camera when the reference image was captured.
  • after the actual pose is adjusted, the current image captured by the first photographing device is acquired. It can be understood that the current image has a relatively high similarity with the reference image; for example, it may include an image area of the same target object. As shown in FIG. 4, the current image is similar to the reference image and includes image areas of the tree and/or the house.
  • the field of view (FOV) angle of the current image is greater than or equal to the field of view angle of the reference image. Therefore, even when the pose adjustment has a deviation, the current image can still include the image area of the same target object as the reference image.
  • the focal length at which the current image is obtained is less than or equal to the focal length at which the reference image is obtained.
  • controlling the first photographing device to zoom to the focal length of the reference image, or to the minimum focal length, can give the captured current image a larger field of view.
  • an area in the current image that has the same or similar features as the reference image may be determined as the target image area.
  • an image area in the current image that has the same or similar characteristics as the target object in the reference image is determined as the target image area, for example, an image area of the target object in the current image is determined as the target image area.
  • the current image and/or the reference image include multiple objects, such as a tree and a house, at least one of which may be determined as a target object. For example, it is determined that the object closest to the camera during shooting is the target object, or at least one of the objects can be determined to be the target object according to a user's selection operation. For example, if a tree is determined as a target object that needs to be repeatedly photographed, as shown in FIG. 4 , the target image area in the current image is the image area where the tree is located.
  • the feature points refer to representative points in the image, which remain unchanged after the camera angle of view is changed.
  • the feature points in the image can also be called visual features.
  • acquiring feature points in an image may include two processes of extracting key points and calculating descriptors.
  • for example, the first feature point may be determined in the target image area of the current image, and the second feature point may be determined in the reference image, through at least one of SIFT (Scale-Invariant Feature Transform), SURF (Speeded Up Robust Features), ORB (Oriented FAST and Rotated BRIEF) and convolutional neural networks (CNN).
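As a concrete illustration of the key-point extraction and descriptor computation just described, the following Python sketch uses OpenCV's ORB implementation, one of the listed options; the image file names are placeholders, not part of the patent.

```python
import cv2

orb = cv2.ORB_create(nfeatures=1000)  # key-point detector + descriptor

# Placeholder file names; any pair of overlapping grayscale images works.
current = cv2.imread("current.png", cv2.IMREAD_GRAYSCALE)
reference = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)

# One call covers both processes named above: extracting key points
# (oriented FAST corners) and computing descriptors (rotated BRIEF).
kp_cur, desc_cur = orb.detectAndCompute(current, None)
kp_ref, desc_ref = orb.detectAndCompute(reference, None)
```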
  • a second feature point in the reference image that matches the first feature point may be determined, and the first feature point and the matched second feature point form a matching pair.
  • the second feature point in the reference image that matches the first feature point may be acquired by a brute force matching method.
  • the similarity between feature points can be described by the degree of similarity between their descriptors: for each feature point in the reference image, or in the reference area of the reference image, the feature point with the closest descriptor is searched for within the target image area of the current image, and each such pair is used as a matching pair.
  • the matching first feature point and the second feature point may be screened to improve the matching accuracy.
  • a matching pair that conforms to the reprojection model is screened out according to the reprojection error, that is, the first feature point in the target image region and the second feature point in the reference image that matches the first feature point.
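Continuing the ORB sketch above, the brute-force matching and the screening of matching pairs might look as follows; RANSAC over a homography is used here as one common way to realize the reprojection-error screening, an assumption rather than the patent's prescribed model.

```python
import numpy as np
import cv2

# Hamming distance suits ORB's binary descriptors; crossCheck keeps only
# mutually nearest descriptor pairs (a simple brute-force criterion).
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(desc_cur, desc_ref)

pts_cur = np.float32([kp_cur[m.queryIdx].pt for m in matches])
pts_ref = np.float32([kp_ref[m.trainIdx].pt for m in matches])

# Screen the matching pairs with RANSAC: keep only pairs consistent with
# one geometric model (here a homography, reprojection threshold 3 px).
H, mask = cv2.findHomography(pts_cur, pts_ref, cv2.RANSAC, 3.0)
first_points = pts_cur[mask.ravel() == 1]   # first feature points (current)
second_points = pts_ref[mask.ravel() == 1]  # second feature points (reference)
```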
  • as shown in FIG. 5, the first feature points and second feature points are shown as cross marks (×). It can be determined that the first feature points in the target image area and the second feature points matching them in the reference image correspond to the same target object, such as the tree, and the matching accuracy is high.
  • the method further comprises determining a reference area in the reference image.
  • the acquiring the first feature point of the target image area and the second feature point matching the first feature point in the reference image includes: acquiring the first feature point of the target image area and a second feature point in the reference area that matches the first feature point. This can improve the efficiency and accuracy of feature point determination and feature point matching.
  • the reference area is an image area determined in the reference image according to a user's selection operation.
  • the reference area can be determined according to the user's selection operation.
  • the user can box-select the reference area in the reference image displayed by the terminal device, or the user can determine the target object in the reference image, and the image area of the target object in the reference image is determined as the reference area according to the target object determined by the user.
  • the type of the target object determined by the user, such as a tree, may be acquired, and the reference area may be determined in the reference image according to the type.
  • the object closest to the first shooting device during shooting may be determined as the target object, and the image area of the target object in the reference image is determined to be the reference area.
  • a foreground area in the reference image is determined as the reference area.
  • the reference area is an image area of the target object in the reference image.
  • an area of the target object in the reference image may be determined, then feature points may be determined in this area, and the second feature points matching the first feature points may be determined by feature point matching within this area of the reference image.
  • the pose deviation between the poses of the first photographing device corresponding to the two images is solved from the coordinates, in the respective images, of the two feature points in each matching pair; according to this pose deviation, the actual pose of the first photographing device can be adjusted by adjusting the pose of the gimbal and/or the unmanned aerial vehicle.
  • the adjusting the actual pose of the first photographing device according to the position of the first feature point in the current image and the position of the second feature point in the reference image includes: determining, according to the position of the first feature point in the current image and the position of the second feature point in the reference image, the pose deviation between the current actual pose of the first photographing device and the pose indicated by the indication parameter, and adjusting the actual pose of the first photographing device according to the pose deviation, for example adjusting the position of the drone, the yaw angle of the drone and/or the attitude of the gimbal, so that the actual pose of the first photographing device tends toward the pose indicated by the indication parameter and the pose of the second photographing device when the reference image was taken is restored, whereby the current image acquired by the first photographing device can be closer to the reference image.
  • as shown in FIG. 5, the position of the first feature point in the current image is to the left of the position of the second feature point in the reference image; by adjusting the drone and/or the gimbal, the actual pose of the first photographing device can be moved to the left, so that the position of the first feature point in the current image moves to the right, yielding the current image shown in FIG. 6.
  • the current image captured after adjusting the actual pose may be determined as the target image, which may be used as the image for re-shooting the reference image.
  • for example, the current image may be determined as the target image when the pose deviation corresponding to the current image is less than or equal to a preset deviation threshold, or the currently captured image may be determined as the target image according to a shooting operation performed by the user when the pose deviation corresponding to the current image is small.
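The adjust-and-reshoot logic described above can be summarized as a control loop. The sketch below is schematic only: the injected callables and the threshold value are placeholders standing in for steps S120 to S150, not names from the patent.

```python
def reshoot(capture, match, estimate_deviation, move_to_pose,
            apply_correction, reference_image, reference_pose,
            threshold=0.05):
    """Schematic loop for steps S120-S150; all callables are injected.

    capture() -> current image; match(cur, ref) -> matched point pairs;
    estimate_deviation(pairs) -> (scalar deviation, correction);
    move_to_pose / apply_correction command the UAV and/or gimbal.
    The threshold plays the role of the preset deviation threshold
    mentioned in the text (its units are an assumption).
    """
    move_to_pose(reference_pose)                  # coarse adjustment (S120)
    while True:
        current_image = capture()
        pairs = match(current_image, reference_image)
        deviation, correction = estimate_deviation(pairs)
        if deviation <= threshold:
            return current_image                  # accept as target image
        apply_correction(correction)              # fine adjustment (S150)
```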
  • the position of the i-th feature point in the image can be represented by the two-dimensional coordinates [u_i, v_i], where one of u_i and v_i represents the abscissa of the feature point in the image and the other represents the ordinate.
  • the homogeneous coordinate form corresponding to these two-dimensional coordinates is p_i,uv = [u_i, v_i, 1]^T.
  • the three-dimensional coordinates p_i of the spatial point corresponding to the feature point in the camera coordinate system can then be determined according to the formula p_i = z · K^(-1) · p_i,uv, where z represents the object distance corresponding to the image, and K is the intrinsic parameter matrix of the camera, produced by camera calibration and supplied as input.
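The back-projection formula just given translates directly into code. In this sketch the intrinsic matrix K and the object distance z are assumed inputs, with made-up example values.

```python
import numpy as np

def back_project(u, v, z, K):
    """p_i = z * K^-1 * [u, v, 1]^T: pixel -> 3-D point in the camera frame."""
    return z * np.linalg.inv(K) @ np.array([u, v, 1.0])

# Assumed example intrinsics: 1000 px focal length, principal point (640, 360).
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])
print(back_project(700.0, 400.0, z=25.0, K=K))  # -> [ 1.5  1.  25.]
```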
  • the Euclidean transformation is determined according to the three-dimensional coordinates of the first spatial point and the three-dimensional coordinates of the second spatial point; the Euclidean transformation is used to indicate the transformation from the camera coordinate system corresponding to the current image to the camera coordinate system corresponding to the reference image. The pose deviation between the current actual pose of the first photographing device and the pose indicated by the indication parameter may include the Euclidean transformation.
  • the actual pose of the first photographing device can be adjusted according to the Euclidean transformation.
  • the Euclidean transformation includes a rotation matrix R and/or a translation vector t, and the rotation matrix R is used to indicate the attitude difference between the attitude of the first photographing device when the current image is captured and its attitude when the reference image was captured.
  • the translation vector t is used to indicate the position difference between the position when the first photographing device collects the current image and the position when the reference image is collected.
  • the UAV can be controlled to move in the left-right and/or up-down directions according to the translation vector t
  • the gimbal can be controlled to adjust the attitude according to the rotation matrix R to adjust the orientation of the first photographing device.
  • the three-dimensional coordinates of the first spatial points corresponding to the first feature points in the current image can be expressed as one set of n three-dimensional points, and the three-dimensional coordinates of the second spatial points corresponding to the second feature points in the reference image can be expressed as the corresponding set of n three-dimensional points, where n is the number of matched feature points in the images.
  • the optimal estimates R* and t* of the rotation matrix R and the translation vector t can be determined according to the ICP method based on SVD decomposition; in the standard form of that method, the centroids of the two point sets are computed, both sets are de-meaned, the SVD of the 3×3 correlation matrix of the de-meaned coordinates yields R*, and t* is recovered from the two centroids and R*.
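The SVD-based closed-form step just outlined can be sketched as follows. The convention that R* and t* map the reference-frame points onto the current-frame points is an assumption, since the text does not fix it.

```python
import numpy as np

def estimate_rigid_transform(p_current, p_reference):
    """Closed-form least-squares R*, t* with p_current ≈ R* p_reference + t*.

    p_current, p_reference: (n, 3) arrays of matched 3-D spatial points.
    """
    c_cur = p_current.mean(axis=0)        # centroids of the two point sets
    c_ref = p_reference.mean(axis=0)
    q_cur = p_current - c_cur             # de-meaned coordinates
    q_ref = p_reference - c_ref

    W = q_cur.T @ q_ref                   # 3x3 correlation matrix
    U, _, Vt = np.linalg.svd(W)
    R = U @ Vt
    if np.linalg.det(R) < 0:              # guard against a reflection
        U[:, -1] *= -1
        R = U @ Vt
    t = c_cur - R @ c_ref
    return R, t
```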
  • the method further includes: after the actual pose of the first photographing device is adjusted according to the position of the first feature point in the current image and the position of the second feature point in the reference image, adjusting the focal length of the first photographing device according to the focal length at which the reference image was captured, and controlling the first photographing device to capture an observation image.
  • after the pose adjustment, the first camera can be aimed at the target object; by adjusting the focal length of the first photographing device to be the same as the focal length at which the reference image was captured, the obtained observation image is closer to the reference image.
  • the observation image may be referred to as an image required to retake the reference image, or may be referred to as a target image.
  • the first photographing device may adjust the focal length automatically and/or according to a setting operation of the user; for example, by adjusting the focal length, the quality of the current image is improved, e.g. the target object becomes clearer in the current image.
  • the reference image includes an image area of the target object.
  • the reference image is obtained by photographing the target object.
  • the method further comprises: acquiring a first distance between the target object and the second photographing device when the reference image is photographed.
  • the second photographing device is equipped with a distance sensor, such as a time-of-flight sensor and/or a binocular camera, through which the distance between the target object and the second photographing device when the image is captured can be determined.
  • the first distance may be referred to as the target distance.
  • the image distance of the second photographing device when the target object is in focus is determined, and the object distance corresponding to this image distance can be determined according to the imaging formula; this object distance can be used as the first distance. It can be understood that when the target object is in focus, the image of the target object falls exactly on the photosensitive element, such as the image sensor, of the second photographing device, so that the target object is sufficiently clear in the captured image.
  • for example, with a second shooting device that uses contrast focusing, when shooting toward the target object, the motor in the lens module drives the lens to move from the bottom to the top; during this sweep, the photosensitive element performs comprehensive detection over the entire scene range in the depth direction, and contrast values are continuously recorded.
  • the image distance corresponding to the maximum contrast value can be determined as the image distance of the second photographing device when the target object is in focus.
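A minimal sketch of that contrast-focusing search: sweep the image distance, record a contrast value per frame, and keep the image distance with the largest value. Variance of the Laplacian is used as the contrast value here, which is one common choice rather than the patent's specific measure.

```python
import cv2

def contrast_value(gray_region):
    """Variance of the Laplacian: larger when edges are sharp (near focus)."""
    return cv2.Laplacian(gray_region, cv2.CV_64F).var()

def in_focus_image_distance(frames_by_image_distance):
    """frames_by_image_distance: {image distance: grayscale frame}, recorded
    while the lens motor sweeps; returns the image distance whose frame has
    the largest contrast value."""
    return max(frames_by_image_distance,
               key=lambda v: contrast_value(frames_by_image_distance[v]))
```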
  • the determining the target image area in the current image includes: determining the target image area in the current image according to the first distance.
  • the image area, in the current image, of the object whose distance from the first photographing device is the first distance is determined as the target image area.
  • the target image area in the current image includes the area of the tree, but not the area of the house.
  • the determining the target image area in the current image according to the first distance includes: determining multiple candidate image areas in the current image; determining a second distance between the objects in the multiple candidate image areas and the first photographing device when the current image is captured; and determining the target image area from the multiple candidate image areas according to the first distance and the second distance.
  • An area in the current image that has the same or similar features as the reference image can be determined as the target image area.
  • the current image may be divided into multiple candidate image regions, for example, into multiple candidate image regions with m rows and n columns, where m and n are natural numbers greater than zero, and at least one of m and n is greater than 1.
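The m-row, n-column division just described might be realized as follows; m = n = 3 is an arbitrary example choice.

```python
def split_into_candidates(image, m=3, n=3):
    """Split an image (NumPy array) into an m-row, n-column grid and return
    (row, col, region) tuples, one per candidate image area."""
    h, w = image.shape[:2]
    return [(i, j, image[i * h // m:(i + 1) * h // m,
                         j * w // n:(j + 1) * w // n])
            for i in range(m) for j in range(n)]
```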
  • the contours of different objects in the current image may be determined by a machine learning algorithm, and the current image may be divided into multiple candidate image regions according to the contours, for example, one of the candidate image regions includes a tree, and the other candidate image region includes a house.
  • the second distance may be determined by a distance sensor mounted on the first photographing device.
  • the distance sensor is aimed at objects in each candidate image area, such as a tree and a house in sequence, to obtain the second distance between the objects in different candidate image areas and the first photographing device.
  • the determining the second distance between the objects in the multiple candidate image areas and the first photographing device when the current image is captured includes: adjusting the image distance of the first photographing device and determining the image distance at which each of the multiple candidate image areas is in focus; and determining, according to the image distance at focus, the second distance between the objects in the multiple candidate image areas and the first photographing device.
  • determining the image distance at which a candidate image area is in focus includes: determining the contrast value of the pixel parameters in the candidate image area while the image distance of the first photographing device is adjusted, and determining the image distance of the first photographing device at which the contrast value is largest as the image distance at which the candidate image area is in focus.
  • the object distance corresponding to the image distance can be determined according to the imaging formula.
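The "imaging formula" referred to here is presumably the thin-lens relation 1/f = 1/u + 1/v, with u the object distance and v the image distance; under that assumption, the object distance follows directly:

```python
def object_distance(focal_length, image_distance):
    """Thin-lens formula solved for the object distance: u = f*v / (v - f).
    All lengths share one unit; requires image_distance > focal_length."""
    return focal_length * image_distance / (image_distance - focal_length)

# e.g. a 50 mm lens in focus at an image distance of 51 mm images
# objects at 2550 mm.
print(object_distance(50.0, 51.0))  # -> 2550.0
```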
  • the target image area includes a candidate image area where the difference between the second distance and the first distance is less than or equal to a preset threshold.
  • the target image area includes a candidate image area with the smallest difference between the second distance and the first distance.
  • the distance between the object in the target image area and the first photographing device can be close to the first distance, for example, the object in the target image area is the target object in the reference image.
  • as shown in FIG. 7, the candidate image area containing tree A, which is farther from the first photographing device, comes into focus first, and the object distance corresponding to the image distance at focus, i.e. the second distance, is d1; the candidate image area containing tree B then comes into focus, with an object distance d2 corresponding to the image distance at focus; finally the candidate image area containing tree C, which is closer to the first photographing device, comes into focus, with an object distance d3 corresponding to the image distance at focus.
  • if d3 is close to the first distance, the candidate image area that is in focus at object distance d3 is determined as the target image area.
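Matching the tree A/B/C example above, picking the candidate area whose second distance is closest to the first distance is a one-line selection; the numeric distances below are made up for illustration.

```python
def pick_target_region(second_distances, first_distance):
    """second_distances: {region label: second distance}; returns the label
    whose second distance is closest to the first (target) distance."""
    return min(second_distances,
               key=lambda label: abs(second_distances[label] - first_distance))

# Hypothetical values for the three trees of FIG. 7 and a target distance.
d = {"tree A": 30.0, "tree B": 22.0, "tree C": 12.0}
print(pick_target_region(d, first_distance=12.5))  # -> tree C
```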
  • in other embodiments, the determining the target image area from the current image includes: determining a plurality of candidate image areas in the current image; determining a second distance between the objects in the plurality of candidate image areas and the first photographing device when the current image is captured; and determining the target image area from the plurality of candidate image areas according to the second distance.
  • the determining the target image area from the plurality of candidate image areas according to the second distance includes: determining the candidate image area with the smallest second distance as the target image area. As shown in FIG. 7, the candidate image area with the smallest second distance is determined as the target image area.
  • the method further includes: acquiring a first distance between the target object and the second photographing device when the reference image is photographed.
  • the determining the target image area from the plurality of candidate image areas according to the second distance includes: determining the target image area from the plurality of candidate image areas according to the first distance and the second distance. In this way, an area in the current image that has the same or similar features as the reference image can be determined as the target image area.
  • the target image area includes a candidate image area where the difference between the second distance and the first distance is less than or equal to a preset threshold.
  • the target image area includes a candidate image area with the smallest difference between the second distance and the first distance. Therefore, the distance between the object in the target image area and the first photographing device can be close to the first distance, for example, the object in the target image area is the target object in the reference image.
  • the target image area is the image area corresponding to a target object in the current image, where the target object is the object, within the field of view of the first photographing device when the current image is captured, that is closest to the first photographing device.
  • generally, when the image is captured, the target object is relatively close to the photographing device compared with the background.
  • the area corresponding to the target object in the image may be called the foreground.
  • the target object in the reference image is located in the foreground area, and the target image area may be the foreground area in the current image.
  • An area in the current image that has the same or similar features as the reference image can be determined as the target image area.
  • the target image area is determined in the current image through a preset image segmentation model.
  • the image segmentation model may be a trained neural network model for segmenting a foreground area in an image, and determining the target image area according to the foreground area in the current image.
  • the method of determining the target image area according to the foreground in the current image is not limited to this.
  • for example, by adjusting the position of the first photographing device, the depth of each pixel in the current image can be determined through multi-view observation, and the image area with the minimum depth can be determined as the target image area.
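A sketch of that multi-view alternative, assuming a per-pixel depth map is already available: the candidate area with the smallest mean depth is taken as the (foreground) target image area.

```python
import numpy as np

def min_depth_region(regions):
    """regions: (row, col, depth patch) tuples, e.g. from the grid split
    above applied to the depth map; returns the tuple with the smallest
    mean depth, i.e. the foreground candidate."""
    return min(regions, key=lambda r: float(np.mean(r[2])))
```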
  • the method further comprises: acquiring the type of the target object in the reference image.
  • the determining the target image area from the current image includes: determining the image area of the target object in the current image according to the type of the target object, and determining the image area of the target object as the target image area.
  • the region in the current image that has the same or similar characteristics as the reference image can be determined as the target image region.
  • the target object and/or the type of the target object is determined according to a user's operation; and/or the target object and/or the type of the target object is obtained by recognizing the reference image.
  • the type of the target object in the reference image may be determined according to the user's operation on the terminal device and/or by identifying the reference image.
  • the type of the target object input by the user in the terminal device may be acquired, or determined by identifying the reference image or the target object in the reference image.
  • the object closest to the second photographing device when the reference image is captured is determined as the target object, or at least one of the objects can be determined as the target object according to a user's selection operation.
  • in the embodiments of the present application, the actual pose of the first shooting device is adjusted according to the reference pose indication parameter with which the reference image was captured, and the current image captured by the first shooting device is acquired; the first feature point of the target image area in the current image and the second feature point in the reference image matching the first feature point are determined; and the actual pose of the first photographing device is adjusted according to the position of the first feature point in the current image and the position of the second feature point in the reference image, so that the current image obtained by the first shooting device can be closer to the reference image.
  • adjusting the actual pose of the first shooting device according to the feature points of the target image area can prevent non-target objects in the field of view from affecting the adjustment of the pose of the first shooting device during shooting.
  • the adaptability to scenes with similar targets is better, the influence of other parts of the field of view on the position and attitude calculation can be avoided, and more accurate re-shot results can be obtained.
  • using the focal plane to distinguish objects at different distances in the image can play the role of screening the target image area.
  • the target image area is determined by the image distance when each area in the current image is in focus during contrast focusing, and feature points are extracted in the target image area and feature matching is performed.
  • objects at different distances in the image can be segmented without changing the position of the first shooting device before feature point matching is performed in the repeated shooting, and the accurate target image area can also be screened in the current image in combination with prior information on the target distance or position, shielding the influence of useless information.
  • FIG. 8 is a schematic block diagram of a control apparatus 500 for a movable platform provided by an embodiment of the present application.
  • the control device 500 is used to control the movable platform.
  • the movable platform may include at least one of an unmanned aerial vehicle, a gimbal, an unmanned vehicle, and the like.
  • the aircraft may be a rotary-wing UAV, such as a quad-rotor UAV, a hexa-rotor UAV, an octa-rotor UAV, or a fixed-wing UAV.
  • the movable platform includes a first camera for capturing images.
  • the first photographing device is mounted on the body of the unmanned aerial vehicle through the gimbal, and the pose of the first photographing device can be adjusted by adjusting the position of the unmanned aerial vehicle, the attitude of the unmanned aerial vehicle and/or the pose of the gimbal.
  • the first photographing device may be directly mounted on the body of the unmanned aerial vehicle, and the pose of the first photographing device may be adjusted by adjusting the pose of the unmanned aerial vehicle.
  • the pose includes a position and/or an attitude.
  • the movable platform includes a control device 500; the control device 500 includes one or more processors 501, and the one or more processors 501, working individually or collectively, are configured to execute the steps of the aforementioned control method of the movable platform.
  • the control device 500 further includes a memory 502 for storing program instructions.
  • the processor 501 and the memory 502 are connected through a bus 503, and the bus 503 is, for example, an I2C (Inter-integrated Circuit) bus.
  • the processor 501 may be a micro-controller unit (MCU), a central processing unit (CPU), a digital signal processor (DSP), or the like.
  • the memory 502 may be a Flash chip, a read-only memory (ROM), a magnetic disk, an optical disc, a USB flash disk, a removable hard disk, or the like.
  • the one or more processors 501 are configured to call the program instructions stored in the memory 502, and execute the steps of the aforementioned control method of the movable platform when the program instructions are executed.
  • the processor 501 is configured to call program instructions stored in the memory 502, and perform the following steps when executing the program instructions:
  • the actual pose of the first photographing device is adjusted according to the position of the first feature point in the current image and the position of the second feature point in the reference image.
  • the specific principles and implementations of the control apparatus provided in the embodiments of the present application are similar to those of the control methods of the movable platform in the foregoing embodiments, and are not described herein again.
  • Embodiments of the present application further provide a computer-readable storage medium, where program instructions are stored in the computer-readable storage medium, and when the program instructions are executed by a processor, the program instructions cause the processor to implement the steps of the control method of the movable platform provided by the foregoing embodiments.
  • the computer-readable storage medium may be an internal storage unit of the control device described in any of the foregoing embodiments, such as a hard disk or a memory of the control device.
  • the computer-readable storage medium may also be an external storage device of the control device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the control device.
  • FIG. 9 is a schematic block diagram of a terminal device 600 provided by an embodiment of the present application.
  • the terminal device 600 may include at least one of a mobile phone, a tablet computer, a notebook computer, a desktop computer, a personal digital assistant, a wearable device, a remote control, and the like.
  • the terminal device 600 can be communicatively connected with a movable platform, and the movable platform includes a first photographing device.
  • the movable platform can send its own pose information and the image captured by the first photographing device to the terminal device 600, and the terminal device 600 can control the movable platform to adjust the pose according to the pose information and the image.
  • the terminal device 600 includes one or more processors 601, and the one or more processors 601 work individually or together to execute the steps of the aforementioned control method of the movable platform.
  • the terminal device 600 further includes a memory 602, and the memory 602 is used for storing program instructions.
  • the processor 601 and the memory 602 are connected through a bus 603, and the bus 603 is, for example, an I2C (Inter-integrated Circuit) bus.
  • the processor 601 may be a micro-controller unit (MCU), a central processing unit (CPU), a digital signal processor (DSP), or the like.
  • the memory 602 may be a Flash chip, a read-only memory (ROM), a magnetic disk, an optical disc, a USB flash disk, a removable hard disk, or the like.
  • the one or more processors 601 are configured to call the program instructions stored in the memory 602, and execute the steps of the aforementioned method for controlling a movable platform when the program instructions are executed.
  • the processor 601 is configured to call program instructions stored in the memory 602, and perform the following steps when executing the program instructions:
  • the actual pose of the first photographing device is adjusted according to the position of the first feature point in the current image and the position of the second feature point in the reference image.
  • Embodiments of the present application further provide a computer-readable storage medium, where program instructions are stored in the computer-readable storage medium, and when the program instructions are executed by a processor, the program instructions cause the processor to implement the steps of the control method of the movable platform provided by the foregoing embodiments.
  • the computer-readable storage medium may be an internal storage unit of the terminal device described in any of the foregoing embodiments, such as a hard disk or a memory of the terminal device.
  • the computer-readable storage medium may also be an external storage device of the terminal device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the terminal device.
  • FIG. 10 is a schematic block diagram of a movable platform 700 provided by an embodiment of the present application.
  • the movable platform may include at least one of an unmanned aerial vehicle, a gimbal, an unmanned vehicle, and the like.
  • the unmanned aerial vehicle may be a rotary-wing drone, such as a quad-rotor drone, a hexa-rotor drone, an octa-rotor drone, or a fixed-wing drone.
  • the movable platform 700 includes a first photographing device; the movable platform 700 can acquire images captured by the first photographing device, and the movable platform can adjust its own pose according to pose information and the images captured by the first photographing device.
  • the movable platform 700 includes one or more processors 701, and the one or more processors 701 work individually or together to perform the steps of the aforementioned control method of the movable platform.
  • the removable platform 700 further includes a memory 702 for storing program instructions.
  • the processor 701 and the memory 702 are connected through a bus 703, and the bus 703 is, for example, an I2C (Inter-integrated Circuit) bus.
  • the processor 701 may be a micro-controller unit (MCU), a central processing unit (CPU), a digital signal processor (DSP), or the like.
  • the memory 702 may be a Flash chip, a read-only memory (ROM), a magnetic disk, an optical disc, a USB flash disk, a removable hard disk, or the like.
  • the one or more processors 701 are configured to call the program instructions stored in the memory 702, and when executing the program instructions, execute the steps of the aforementioned method for controlling a movable platform.
  • the processor 701 is configured to call program instructions stored in the memory 702, and perform the following steps when executing the program instructions:
  • the actual pose of the first photographing device is adjusted according to the position of the first feature point in the current image and the position of the second feature point in the reference image.
  • An embodiment of the present application further provides a computer-readable storage medium, where the computer-readable storage medium stores program instructions, and when the program instructions are executed by a processor, the program instructions cause the processor to implement the steps of the control method of the movable platform provided by the foregoing embodiments.
  • the computer-readable storage medium may be an internal storage unit of the movable platform described in any of the foregoing embodiments, such as a hard disk or a memory of the movable platform.
  • the computer-readable storage medium may also be an external storage device of the movable platform, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the movable platform.

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Studio Devices (AREA)

Abstract

A control method for a movable platform, comprising: acquiring a reference image and a reference pose indication parameter of a second photographing device that captured the reference image (S110); determining the pose indicated by the reference pose indication parameter as a target pose of a first photographing device, controlling and adjusting the actual pose of the first photographing device according to the target pose, and acquiring a current image captured by the first photographing device (S120); determining a target image area from the current image (S130); acquiring first feature points of the target image area and second feature points in the reference image that match the first feature points (S140); and adjusting the actual pose of the first photographing device according to the positions of the first feature points in the current image and the positions of the second feature points in the reference image (S150), so as to realize repeated shooting according to the reference image.

Description

Movable platform and control method, apparatus, terminal device and storage medium therefor
TECHNICAL FIELD
The present application relates to the technical field of image capturing, and in particular to a movable platform and a control method, apparatus, terminal device and storage medium therefor.
BACKGROUND
In some application scenarios, for example when inspecting power facilities, roads, bridges and the like, after the scene at a certain place has been photographed, the scene at that place needs to be photographed again. However, when the scene is re-photographed, the position or attitude of the photographing device usually differs from that of the previous shooting, so the re-shot image differs considerably from the previously captured image; the repeated shooting works poorly, which affects work efficiency and user experience.
SUMMARY
The present application provides a movable platform and a control method, apparatus, terminal device and storage medium therefor, which can accurately realize repeated shooting according to a reference image.
In a first aspect, an embodiment of the present application provides a control method for a movable platform, wherein the movable platform includes a first photographing device, the method including:
acquiring a reference image and a reference pose indication parameter of a second photographing device that captured the reference image;
determining the pose indicated by the reference pose indication parameter as a target pose of the first photographing device, controlling and adjusting the actual pose of the first photographing device according to the target pose, and acquiring a current image captured by the first photographing device;
determining a target image area from the current image;
acquiring first feature points of the target image area and second feature points in the reference image that match the first feature points;
adjusting the actual pose of the first photographing device according to the positions of the first feature points in the current image and the positions of the second feature points in the reference image.
In a second aspect, an embodiment of the present application provides a control apparatus for a movable platform, for controlling the movable platform, wherein the movable platform includes a first photographing device, and the apparatus includes a memory and one or more processors;
the memory is used to store program instructions;
the one or more processors, working individually or collectively, invoke and execute the program instructions to perform the following steps:
acquiring a reference image and a reference pose indication parameter of a second photographing device that captured the reference image;
determining the pose indicated by the reference pose indication parameter as a target pose of the first photographing device, controlling and adjusting the actual pose of the first photographing device according to the target pose, and acquiring a current image captured by the first photographing device;
determining a target image area from the current image;
acquiring first feature points of the target image area and second feature points in the reference image that match the first feature points;
adjusting the actual pose of the first photographing device according to the positions of the first feature points in the current image and the positions of the second feature points in the reference image.
In a third aspect, an embodiment of the present application provides a terminal device capable of being communicatively connected with a movable platform;
the terminal device includes a memory and one or more processors;
the memory is used to store program instructions;
the one or more processors, working individually or collectively, invoke and execute the program instructions to perform the following steps:
acquiring a reference image and a reference pose indication parameter of a second photographing device that captured the reference image;
determining the pose indicated by the reference pose indication parameter as a target pose of a first photographing device of the movable platform, controlling and adjusting the actual pose of the first photographing device according to the target pose, and acquiring a current image captured by the first photographing device;
determining a target image area from the current image;
acquiring first feature points of the target image area and second feature points in the reference image that match the first feature points;
adjusting the actual pose of the first photographing device according to the positions of the first feature points in the current image and the positions of the second feature points in the reference image.
In a fourth aspect, an embodiment of the present application provides a movable platform, including a first photographing device, a memory and one or more processors, where the first photographing device is used to acquire images;
the memory is used to store program instructions;
the one or more processors, working individually or collectively, invoke and execute the program instructions to perform the following steps:
acquiring a reference image and a reference pose indication parameter of a second photographing device that captured the reference image;
determining the pose indicated by the reference pose indication parameter as a target pose of the first photographing device, controlling and adjusting the actual pose of the first photographing device according to the target pose, and acquiring a current image captured by the first photographing device;
determining a target image area from the current image;
acquiring first feature points of the target image area and second feature points in the reference image that match the first feature points;
adjusting the actual pose of the first photographing device according to the positions of the first feature points in the current image and the positions of the second feature points in the reference image.
In a fifth aspect, an embodiment of the present application provides a computer-readable storage medium storing program instructions which, when executed by a processor, cause the processor to implement the above method.
The embodiments of the present application provide a movable platform and a control method, apparatus, terminal device and storage medium therefor, which adjust the actual pose of a first photographing device according to a reference pose indication parameter with which a reference image was captured and acquire a current image captured by the first photographing device; determine first feature points of a target image area in the current image and second feature points in the reference image that match the first feature points; and adjust the actual pose of the first photographing device according to the positions of the first feature points in the current image and the positions of the second feature points in the reference image, so that the actual pose of the first photographing device tends toward the pose at which the reference image was captured and the current image acquired by the first photographing device can be closer to the reference image.
It should be understood that the above general description and the following detailed description are only exemplary and explanatory, and do not limit the disclosure of the embodiments of the present application.
BRIEF DESCRIPTION OF THE DRAWINGS
To describe the technical solutions in the embodiments of the present application more clearly, the following briefly introduces the accompanying drawings needed in the description of the embodiments. Obviously, the accompanying drawings in the following description show some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a schematic flowchart of a control method for a movable platform provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of data transmission between a terminal device and a movable platform;
FIG. 3 is a schematic diagram of a scene in which repeated shooting is realized according to a reference image;
FIG. 4 is a schematic diagram of a first photographing device capturing an image at a target pose;
FIG. 5 is a schematic diagram of matched feature points in a target image area and a reference image;
FIG. 6 is a schematic diagram of a current image captured after the actual pose is adjusted;
FIG. 7 is a schematic diagram of determining the image distance at the time of focusing;
FIG. 8 is a schematic block diagram of a control apparatus for a movable platform provided by an embodiment of the present application;
FIG. 9 is a schematic block diagram of a terminal device provided by an embodiment of the present application;
FIG. 10 is a schematic block diagram of a movable platform provided by an embodiment of the present application.
DETAILED DESCRIPTION
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings in the embodiments of the present application. Obviously, the described embodiments are some rather than all of the embodiments of the present application. Based on the embodiments of the present application, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present application.
The flowcharts shown in the accompanying drawings are merely illustrative; they need not include all contents and operations/steps, nor must they be performed in the order described. For example, some operations/steps may be decomposed, combined or partially merged, so the actual execution order may change according to the actual situation.
Some embodiments of the present application are described in detail below with reference to the accompanying drawings. Where no conflict arises, the following embodiments and the features in the embodiments may be combined with one another.
Referring to FIG. 1, FIG. 1 is a schematic flowchart of a control method for a movable platform provided by an embodiment of the present application. The control method for a movable platform can be applied to a movable platform and/or a terminal device, for processes such as repeatedly photographing the scene at a certain place according to a reference image.
Exemplarily, the movable platform may include at least one of an unmanned aerial vehicle, a gimbal, an unmanned vehicle, and the like. Further, the aerial vehicle may be a rotary-wing UAV, such as a quad-rotor, hexa-rotor or octa-rotor UAV, or it may be a fixed-wing UAV. The terminal device may include at least one of a mobile phone, a tablet computer, a notebook computer, a desktop computer, a wearable device, a remote control, and the like.
Referring to FIG. 2, FIG. 2 is a schematic diagram of a scenario in which the control method provided by the embodiments of the present application is implemented. As shown in FIG. 2, the scenario includes an unmanned aerial vehicle 100 and a terminal device 200; the unmanned aerial vehicle 100 is communicatively connected with the terminal device 200, and the terminal device 200 is used to control the unmanned aerial vehicle 100.
The unmanned aerial vehicle 100 includes a body 110 and a power system 120 provided on the body 110. The power system 120 may include one or more propellers 121, one or more motors 122 corresponding to the one or more propellers, and one or more electronic speed controllers (referred to as ESCs for short). The motor 122 is connected between the electronic speed controller and the propeller 121, and the motor 122 and the propeller 121 are arranged on the body 110 of the unmanned aerial vehicle 100; the electronic speed controller is used to receive a driving signal generated by the control system and provide a driving current to the motor 122 according to the driving signal, so as to control the rotational speed of the motor 122. The motor 122 is used to drive the propeller 121 to rotate, thereby providing power for the flight of the unmanned aerial vehicle 100, and this power enables the unmanned aerial vehicle 100 to achieve motion in one or more degrees of freedom. In some embodiments, the unmanned aerial vehicle 100 may rotate about one or more rotation axes; for example, the rotation axes may include a roll axis, a yaw axis and a pitch axis. It should be understood that the motor 122 may be a DC motor or an AC motor, and may be a brushless motor or a brushed motor.
The unmanned aerial vehicle 100 further includes a controller and a sensing system (not shown in FIG. 2). The sensing system is used to measure the attitude information of the unmanned aerial vehicle, that is, the position information and state information of the unmanned aerial vehicle 100 in space, for example, the three-dimensional position, three-dimensional angle, three-dimensional velocity, three-dimensional acceleration and three-dimensional angular velocity. The sensing system may include, for example, at least one of a gyroscope, an ultrasonic sensor, an electronic compass, an inertial measurement unit (IMU), a visual sensor, a global navigation satellite system, a barometer and other sensors; for example, the global navigation satellite system may be the Global Positioning System (GPS). The controller is used to control the movement of the unmanned aerial vehicle 100; for example, it may control the movement of the unmanned aerial vehicle 100 according to the attitude information measured by the sensing system. It should be understood that the controller may control the unmanned aerial vehicle 100 according to pre-programmed instructions.
Further, data is transmitted between the terminal device and the movable platform through wireless channels.
Exemplarily, the wireless channel from the movable platform to the terminal device, called the downlink channel, is used to transmit data of the movable platform, for example videos and pictures acquired by the first photographing device, sensor data acquired by sensors, and telemetry data such as the state information (OSD) of the movable platform, e.g. of the drone.
Exemplarily, the wireless channel from the terminal device to the movable platform, called the uplink channel, is used to transmit remote control data; for example, when the movable platform is an unmanned aerial vehicle, the uplink channel is used to transmit flight control instructions and control commands such as taking pictures, recording video and returning home.
Specifically, the movable platform includes a first photographing device, and the first photographing device is used to acquire images. Exemplarily, the first photographing device is mounted on the body of the unmanned aerial vehicle through a gimbal, and the pose of the first photographing device can be adjusted by adjusting the position of the unmanned aerial vehicle, the attitude of the unmanned aerial vehicle and/or the pose of the gimbal. Alternatively, the first photographing device may be directly mounted on the body of the unmanned aerial vehicle, and the pose of the first photographing device may be adjusted by adjusting the pose of the unmanned aerial vehicle. Here, a pose includes a position and/or an attitude.
FIG. 3 is a schematic diagram of a scene in which repeated shooting is currently realized according to a reference image. The left side of FIG. 3 shows the scene when the reference image is captured: the reference image is obtained by the second photographing device aiming at a tree, and the reference image also includes a house behind the tree. As shown on the right side of FIG. 3, when the scene at this place is re-photographed, a difference between the pose of the first photographing device and the pose of the second photographing device when the reference image was captured, or the influence of objects near the tree, such as the house, may cause the first photographing device to aim at the house for the repeated shooting. The right side of FIG. 3 shows the image obtained by the repeated shooting, which may be called the re-shot image; it deviates considerably from the reference image, and the accuracy of the repeated shooting is poor.
In view of this finding, the inventor of the present application has improved the control method for a movable platform to improve the accuracy of repeated shooting according to a reference image. Specifically, as shown in FIG. 1, the control method for a movable platform of the embodiment of the present application includes steps S110 to S150.
For ease of description, the photographing device that performs the repeated shooting is referred to as the first photographing device, and the photographing device that captured the reference image is referred to as the second photographing device. It can be understood that the first photographing device and the second photographing device may be the same photographing device or different photographing devices.
S110. Acquire a reference image and a reference pose indication parameter of a second photographing device that captured the reference image.
Exemplarily, the second photographing device records its own pose information when capturing the reference image, such as the position of the unmanned aerial vehicle, the nose direction and the attitude of the gimbal; this pose information may be called the reference pose indication parameter.
Exemplarily, the reference image and the reference pose indication parameter corresponding to the reference image can be sent to the terminal device, so that the reference image and the reference pose indication parameter corresponding to the reference image can be determined on the terminal device. For example, the user may view the images captured by the second photographing device on the terminal device and may determine any frame among them as the reference image, for instance an image containing a target object of interest. Of course, this is not limiting: the reference image may also be determined according to a preset artificial-intelligence algorithm, for example by determining an image whose target object has a potential safety hazard as the reference image; or images to be screened may be determined according to a preset artificial-intelligence algorithm, for example by determining images whose target objects have potential safety hazards as the images to be screened, and the user then determines the reference image among the images to be screened.
S120: Determine the pose indicated by the reference pose indication parameter as a target pose of the first photographing device, control adjustment of the actual pose of the first photographing device according to the target pose, and acquire a current image captured by the first photographing device.
Exemplarily, the first photographing device is mounted on the body of the UAV through a gimbal, and the pose of the first photographing device can be adjusted by adjusting the position of the UAV, the attitude of the UAV, and/or the pose of the gimbal. Alternatively, the first photographing device may be mounted directly on the body of the UAV, and its pose can be adjusted by adjusting the pose of the UAV, where a pose includes a position and/or an attitude.
Exemplarily, when it is determined that the actual pose of the first photographing device has been adjusted to the target pose, the current image captured by the first photographing device is acquired. For example, FIG. 4 is a schematic diagram of the first photographing device capturing the current image after its actual pose has been adjusted according to the target pose.
Exemplarily, the reference pose indication parameter is sent to the UAV, so that the UAV moves to the position indicated by the reference pose indication parameter, turns to the corresponding nose heading, and adjusts the attitude of the gimbal to be the same as the gimbal attitude indicated by the reference pose indication parameter. In this way, the actual pose of the first photographing device can be adjusted to be the same as, or close to, the pose of the second photographing device when the reference image was captured.
Exemplarily, after the actual pose of the first photographing device is adjusted, the current image captured by the first photographing device is acquired. It can be understood that the current image has a high similarity to the reference image; for example, it may include an image region of the same target object. As shown in FIG. 4, the current image is similar to the reference image and includes image regions of the tree and/or the house.
In some embodiments, the field-of-view (FOV) angle of the current image is greater than or equal to that of the reference image, so that even when the pose adjustment deviates, the current image can still include an image region of the same target object as the reference image.
Exemplarily, the focal length when acquiring the current image is less than or equal to the focal length when acquiring the reference image. For example, when the first photographing device acquires the current image, it is controlled to zoom to the focal length of the reference image, or to the minimum focal length, so that the captured current image has a larger field-of-view angle.
S130: Determine a target image region from the current image.
Exemplarily, a region in the current image that has the same or similar features as the reference image may be determined as the target image region. For example, an image region in the current image that has the same or similar features as the target object in the reference image is determined as the target image region, for example the image region of the target object in the current image.
In some embodiments, the current image and/or the reference image includes multiple objects, such as a tree and a house, and at least one of them may be determined as the target object. For example, the object closest to the photographing device at shooting time is determined as the target object, or at least one of the objects may be determined as the target object according to a user's selection operation. For example, if the tree is determined as the target object to be shot again, then, as shown in FIG. 4, the target image region in the current image is the region where the tree is located.
S140: Acquire a first feature point of the target image region and a second feature point in the reference image that matches the first feature point.
Exemplarily, feature points are representative points in an image that remain stable when the camera viewpoint changes; feature points in an image may also be called visual features.
Exemplarily, acquiring feature points in an image may include two processes: extracting keypoints and computing descriptors. For example, the first feature point may be determined in the target image region of the current image, and the second feature point in the reference image, through at least one of SIFT (Scale-Invariant Feature Transform), SURF (Speeded Up Robust Features), ORB (Oriented FAST and Rotated BRIEF), and a convolutional neural network (CNN).
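As a concrete illustration (not part of the claimed method), a minimal Python sketch of this keypoint-plus-descriptor step using OpenCV's ORB is given below; the function and variable names are hypothetical, and restricting detection to the target image region via a binary mask is one possible realization.

```python
import cv2

def extract_features(gray_img, region_mask=None):
    """Detect ORB keypoints and compute their descriptors, optionally
    restricted to a region of interest given as a binary mask
    (255 inside the region, 0 elsewhere)."""
    orb = cv2.ORB_create(nfeatures=1000)
    # detectAndCompute accepts an optional mask limiting where keypoints are found
    keypoints, descriptors = orb.detectAndCompute(gray_img, region_mask)
    return keypoints, descriptors

# Hypothetical usage: restrict detection in the current image to the target
# image region, while searching the whole reference image (or a reference region mask):
# kp1, des1 = extract_features(current_gray, target_mask)
# kp2, des2 = extract_features(reference_gray)
```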
Exemplarily, the second feature point in the reference image that matches the first feature point can be determined through feature point matching; the first feature point and its matching second feature point form a matching pair.
For example, the second feature point in the reference image that matches the first feature point may be obtained by brute-force matching. Specifically, the similarity between feature points can be described by the similarity between their descriptors: for each feature point in the reference image, or in a reference region of the reference image, the feature point with the most similar descriptor is searched for in the target image region of the current image to form a matching pair.
Exemplarily, the matched first and second feature points may be filtered to improve matching accuracy, for example by selecting, according to the reprojection error, the matching pairs that fit a reprojection model, i.e., the first feature points of the target image region and the matching second feature points in the reference image.
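Continuing the sketch above, a hedged example of brute-force matching followed by reprojection-based filtering with RANSAC; the homography model and the 3-pixel threshold are assumptions, since the text here does not fix a particular reprojection model:

```python
import cv2
import numpy as np

def match_and_filter(kp1, des1, kp2, des2, reproj_thresh=3.0):
    """Brute-force match binary descriptors, then keep only the pairs
    consistent with a homography estimated by RANSAC (a simple
    reprojection model); the surviving pairs are the matching pairs."""
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # RANSAC rejects pairs whose reprojection error exceeds the threshold
    _, inlier_mask = cv2.findHomography(pts1, pts2, cv2.RANSAC, reproj_thresh)
    if inlier_mask is None:           # too few or degenerate matches
        return []
    return [m for m, ok in zip(matches, inlier_mask.ravel()) if ok]
```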
As shown in FIG. 5, the first and second feature points are shown as crosses. It can be seen that the first feature points of the target image region and the matching second feature points in the reference image correspond to the same target object, the tree, so the matching accuracy is high.
In some embodiments, the method further includes: determining a reference region in the reference image. Exemplarily, acquiring the first feature point of the target image region and the second feature point in the reference image that matches the first feature point includes: acquiring the first feature point of the target image region and the second feature point in the reference region that matches the first feature point. This can improve the efficiency and accuracy of feature point determination and matching.
Exemplarily, the reference region is an image region determined in the reference image according to a user's selection operation. For example, the user may draw a box around the reference region in the reference image displayed on the terminal device; or the user may designate a target object in the reference image, and the image region of that target object in the reference image is determined as the reference region. Exemplarily, the type of the target object designated by the user, such as a tree, may be acquired, and the reference region may be determined in the reference image according to the type.
Exemplarily, the object closest to the second photographing device at shooting time may be determined as the target object, and the image region of the target object in the reference image may be determined as the reference region, for example the foreground region of the reference image.
Exemplarily, the reference region is the image region of the target object in the reference image. For example, the region of the target object in the reference image may be determined, feature points are then determined in that region, and the second feature point that matches the first feature point is determined in that region of the reference image through feature point matching.
S150: Adjust the actual pose of the first photographing device according to the position of the first feature point in the current image and the position of the second feature point in the reference image.
For example, the pose deviation of the first photographing device between the two images is solved from the coordinates of the two feature points of each matching pair in their respective images, and the actual pose of the first photographing device can then be adjusted according to this pose deviation by adjusting the pose of the gimbal and/or the UAV.
In some embodiments, adjusting the actual pose of the first photographing device according to the position of the first feature point in the current image and the position of the second feature point in the reference image includes: determining, according to the position of the first feature point in the current image and the position of the second feature point in the reference image, a pose deviation between the current actual pose of the first photographing device and the pose indicated by the indication parameter, and adjusting the actual pose of the first photographing device according to the pose deviation, for example adjusting the position of the UAV, the yaw angle of the UAV, and/or the attitude of the gimbal, so that the actual pose of the first photographing device approaches the pose indicated by the indication parameter and restores the pose of the second photographing device when the reference image was captured; the current image acquired by the first photographing device can thus be closer to the reference image.
Referring to FIG. 5, the position of the first feature point in the current image is to the left of the position of the second feature point in the reference image; the actual pose of the first photographing device can be moved to the left by adjusting the UAV and/or the gimbal, so that the position of the first feature point in the current image moves to the right, yielding the current image shown in FIG. 6.
Exemplarily, the current image captured after the actual pose is adjusted may be determined as the target image, which can serve as the image for re-shooting the reference image. For example, the current image may be determined as the target image when its corresponding pose deviation is less than or equal to a preset deviation threshold, or the currently captured image may be determined as the target image according to a shooting operation performed by the user when the pose deviation corresponding to the current image is small.
In some embodiments, determining the pose deviation between the current actual pose of the first photographing device and the pose indicated by the indication parameter according to the position of the first feature point in the current image and the position of the second feature point in the reference image includes: determining three-dimensional coordinates of a first spatial point corresponding to the first feature point according to the position of the first feature point in the current image; determining three-dimensional coordinates of a second spatial point corresponding to the second feature point according to the position of the second feature point in the reference image; and determining, according to the three-dimensional coordinates of the first spatial point and the three-dimensional coordinates of the second spatial point, the pose deviation between the current actual pose of the first photographing device and the pose indicated by the indication parameter. The actual pose of the first photographing device is adjusted according to the pose deviation.
Exemplarily, the position of the i-th feature point in an image can be represented by the two-dimensional coordinates

$[u_i, v_i]$

where one of $u_i$ and $v_i$ represents the horizontal coordinate of the feature point in the image and the other represents the vertical coordinate.

The homogeneous form of these two-dimensional coordinates is

$p_{i,uv} = [u_i, v_i, 1]^T$

Then the three-dimensional coordinates $p_i$, in the camera coordinate system, of the spatial point corresponding to this feature point can be determined from

$p_i = z K^{-1} p_{i,uv}$

where $z$ is the object distance corresponding to the image, and $K$ is the camera intrinsic matrix, produced by camera calibration and supplied as input.
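A minimal numpy sketch of the relation $p_i = z K^{-1} p_{i,uv}$, assuming for illustration that a single object distance z applies to all feature points in the region:

```python
import numpy as np

def backproject(points_uv, K, z):
    """Lift pixel coordinates [u, v] to 3-D camera-frame points
    p = z * K^-1 * [u, v, 1]^T.
    points_uv: (n, 2) array of pixel coordinates; K: (3, 3) intrinsic
    matrix; z: object distance (one depth assumed for the whole region)."""
    n = points_uv.shape[0]
    homog = np.hstack([points_uv, np.ones((n, 1))])   # rows [u, v, 1]
    rays = np.linalg.solve(K, homog.T).T              # apply K^-1 to each point
    return z * rays                                   # (n, 3) spatial points

# Hypothetical usage with a pinhole intrinsic matrix:
# K = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1]], dtype=float)
# P_current = backproject(pts1, K, z_current)
```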
Exemplarily, a Euclidean transform is determined according to the three-dimensional coordinates of the first spatial points and the three-dimensional coordinates of the second spatial points, where the Euclidean transform indicates the transformation from the camera coordinate system of the current image to the camera coordinate system of the reference image; the pose deviation between the current actual pose of the first photographing device and the pose indicated by the indication parameter may include this Euclidean transform.
The actual pose of the first photographing device can be adjusted according to the Euclidean transform. The Euclidean transform includes a rotation matrix R and/or a translation vector t, where the rotation matrix R indicates the attitude difference between the attitude of the first photographing device when capturing the current image and its attitude when the reference image was captured, and the translation vector t indicates the position difference between the position of the first photographing device when capturing the current image and its position when the reference image was captured. For example, the UAV can be controlled to move left/right and/or up/down according to the translation vector t, and the gimbal can be controlled to adjust its attitude according to the rotation matrix R, so as to adjust the orientation of the first photographing device.
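As one possible way (not specified here) to map R and t onto gimbal and UAV commands, the rotation matrix can be decomposed into yaw/pitch/roll angles; the "ZYX" convention in this sketch is an assumption that must match the gimbal's own angle definition:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def pose_deviation_to_commands(R, t):
    """Decompose the Euclidean transform into intuitive correction commands.
    Returns yaw/pitch/roll in degrees (assumed intrinsic ZYX convention)
    for the gimbal, plus the translation vector, which maps to UAV
    position adjustments."""
    yaw, pitch, roll = Rotation.from_matrix(R).as_euler("ZYX", degrees=True)
    return {"yaw_deg": yaw, "pitch_deg": pitch, "roll_deg": roll,
            "translation_m": np.asarray(t).ravel()}
```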
For example, the three-dimensional coordinates of the first spatial points (one set of spatial points) corresponding to the first feature points in the current image can be written as

$P' = \{p'_1, \ldots, p'_n\}$

and the three-dimensional coordinates of the second spatial points (the other set of spatial points) corresponding to the second feature points in the reference image can be written as

$P = \{p_1, \ldots, p_n\}$

where $n$ is the number of matched feature points in the images.
For example, the optimal estimates $R^*$ and $t^*$ of the rotation matrix R and the translation vector t can be determined by the ICP method based on SVD decomposition, with the following computation steps:

1) Compute the centroids $p$ and $p'$ of the two sets of spatial points, and the de-centered coordinates $q_i$ and $q'_i$:

$p = \frac{1}{n}\sum_{i=1}^{n} p_i, \quad p' = \frac{1}{n}\sum_{i=1}^{n} p'_i$

$q_i = p_i - p, \quad q'_i = p'_i - p'$

2) Compute the optimal estimate $R^*$ of the rotation matrix in the Euclidean transform:

$R^* = U V^T$

where

$W = \sum_{i=1}^{n} q_i q'^T_i$

and the SVD decomposition of $W$ is $W = U \Sigma V^T$.

3) Compute the optimal estimate $t^*$ of the translation vector in the Euclidean transform:

$t^* = p - R^* p'$
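The three steps above translate directly into a short numpy routine; the determinant-sign guard against reflections is a common safeguard added here as an assumption, since the text does not mention it:

```python
import numpy as np

def icp_svd(P, P_prime):
    """Estimate R*, t* aligning P' to P (p ≈ R* p' + t*) from matched
    3-D points. P, P_prime: (n, 3) arrays of corresponding spatial points."""
    p = P.mean(axis=0)                  # centroid of reference-image points
    p_prime = P_prime.mean(axis=0)      # centroid of current-image points
    Q = P - p                           # de-centered coordinates q_i
    Q_prime = P_prime - p_prime         # de-centered coordinates q'_i

    W = Q.T @ Q_prime                   # W = sum_i q_i q'_i^T  (3 x 3)
    U, _, Vt = np.linalg.svd(W)
    R = U @ Vt                          # R* = U V^T
    if np.linalg.det(R) < 0:            # reflection guard (assumed safeguard)
        U[:, -1] *= -1
        R = U @ Vt

    t = p - R @ p_prime                 # t* = p - R* p'
    return R, t
```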
In some embodiments, the method further includes: after adjusting the actual pose of the first photographing device according to the position of the first feature point in the current image and the position of the second feature point in the reference image, adjusting the focal length of the first photographing device according to the focal length at which the reference image was captured, and controlling the first photographing device to capture an observation image.
After the actual pose of the first photographing device is adjusted according to the position of the first feature point in the current image and the position of the second feature point in the reference image, the current image of the first photographing device can be aimed at the target object; by adjusting the focal length of the first photographing device to be the same as the focal length at which the reference image was captured, the acquired observation image is made closer to the reference image. In some embodiments, the observation image may be called the image required for re-shooting the reference image, or the target image.
In other embodiments, after the actual pose of the first photographing device is adjusted according to the position of the first feature point in the current image and the position of the second feature point in the reference image, the first photographing device may adjust the focal length automatically and/or according to a user's setting operation, for example to improve the quality of the current image, such as making the target object clearer in the current image.
In some embodiments, the reference image includes an image region of a target object. Exemplarily, the reference image was obtained by shooting the target object.
In some embodiments, the method further includes: acquiring a first distance between the target object and the second photographing device when the reference image was captured.
Exemplarily, the second photographing device carries a distance sensor, such as a time-of-flight sensor and/or a binocular camera, through which the distance between the target object and the second photographing device at shooting time can be determined. The first distance may also be called the target distance.
Exemplarily, when the second photographing device captures an image, its image distance is adjusted to determine the image distance of the second photographing device when the target object is in focus; the object distance corresponding to this image distance can then be determined from the imaging formula and used as the first distance. It can be understood that when the target object is in focus, its image falls exactly on the photosensitive element of the second photographing device, so that the target object is sufficiently sharp in the captured image. For example, with a second photographing device using contrast-detection autofocus, when it is aimed at the target object, the motor in the lens module drives the lens from bottom to top; during this process the photosensitive element, such as the image sensor, performs a full depth-wise scan of the whole scene and continuously records contrast values, and the image distance corresponding to the maximum contrast value can be determined as the image distance of the second photographing device when the target object is in focus.
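The imaging formula referred to here is, under a thin-lens assumption, 1/f = 1/u + 1/v, with object distance u and image distance v; a minimal sketch solving it for the object distance:

```python
def object_distance(focal_length_mm, image_distance_mm):
    """Thin-lens equation 1/f = 1/u + 1/v solved for the object distance u.
    Valid only when the image distance exceeds the focal length (v > f)."""
    f, v = focal_length_mm, image_distance_mm
    if v <= f:
        raise ValueError("image distance must exceed focal length for a real object")
    return f * v / (v - f)

# Example with hypothetical numbers: f = 50 mm and an in-focus image
# distance v = 50.5 mm give an object distance of about 5.05 m.
print(object_distance(50.0, 50.5))  # 5050.0 (mm)
```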
Exemplarily, determining the target image region in the current image includes: determining the target image region in the current image according to the first distance.
Exemplarily, when the first photographing device acquires the current image, the image region in the current image of an object whose distance from the first photographing device equals the first distance is determined as the target image region. For example, if the distance between the tree and the second photographing device when the reference image was captured equals the first distance, while the distance between the house and the second photographing device does not, then the target image region in the current image includes the region of the tree but not the region of the house.
Exemplarily, determining the target image region in the current image according to the first distance includes: determining multiple candidate image regions in the current image; determining second distances between the objects in the multiple candidate image regions and the first photographing device when the current image is captured; and determining the target image region from the multiple candidate image regions according to the first distance and the second distances. A region in the current image that has the same or similar features as the reference image can thus be determined as the target image region.
Exemplarily, the current image may be divided equally into multiple candidate image regions, for example into m rows and n columns, where m and n are natural numbers greater than zero and at least one of m and n is greater than 1. Exemplarily, the contours of different objects in the current image may be determined by a machine learning algorithm, and the current image may be segmented into multiple candidate image regions according to the contours, for example one candidate image region containing the tree and another containing the house.
Exemplarily, the second distance can be determined by a distance sensor carried by the first photographing device, for example by aiming the distance sensor at the objects in the candidate image regions in turn, such as first at the tree and then at the house, to obtain the second distance between the object in each candidate image region and the first photographing device.
Exemplarily, determining the second distances between the objects in the multiple candidate image regions and the first photographing device when the current image is captured includes: adjusting the image distance of the first photographing device and determining the image distance at which each of the multiple candidate image regions is in focus; and determining, according to the image distance at focus, the second distance between the object in each candidate image region and the first photographing device.
For example, determining the image distance at which a candidate image region is in focus includes: while the image distance of the first photographing device is being adjusted, determining the contrast value of pixel parameters in the candidate image region; and determining the image distance of the first photographing device at which the contrast value is maximal as the image distance at which the candidate image region is in focus. The object distance corresponding to this image distance can be determined from the imaging formula.
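A hedged sketch of this per-region contrast evaluation: the image is divided into an m x n grid, the variance of the Laplacian is used as the contrast value (one common choice; no specific metric is fixed here), and each cell's second distance follows from its best-focus image distance via the thin-lens relation used earlier:

```python
import cv2
import numpy as np

def region_focus_distances(frames, image_distances, f_mm, grid=(3, 3)):
    """frames: grayscale images taken while sweeping the image distance;
    image_distances: the image distance (mm) at which each frame was taken.
    Returns an (m, n) array of second distances, one per grid cell."""
    m, n = grid
    h, w = frames[0].shape
    best_contrast = np.zeros((m, n))
    best_v = np.zeros((m, n))
    for frame, v in zip(frames, image_distances):
        lap = cv2.Laplacian(frame, cv2.CV_64F)
        for i in range(m):
            for j in range(n):
                cell = lap[i * h // m:(i + 1) * h // m,
                           j * w // n:(j + 1) * w // n]
                contrast = cell.var()               # contrast value of the cell
                if contrast > best_contrast[i, j]:  # keep the sharpest frame per cell
                    best_contrast[i, j] = contrast
                    best_v[i, j] = v
    return f_mm * best_v / (best_v - f_mm)          # thin lens: u = f*v/(v-f)
```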
Exemplarily, the target image region includes the candidate image regions whose second distance differs from the first distance by no more than a preset threshold; alternatively, the target image region includes the candidate image region whose second distance differs least from the first distance.
In this way, the distance between the objects in the target image region and the first photographing device is close to the first distance; for example, the object in the target image region is the target object in the reference image.
As shown in FIG. 7, while the image distance of the first photographing device is being adjusted, the candidate image region containing tree A, which is farther from the first photographing device, comes into focus first, and the object distance corresponding to its in-focus image distance, i.e., its second distance, is d1; then the candidate image region containing tree B comes into focus, with an in-focus object distance of d2; and then the candidate image region containing tree C, which is closer to the first photographing device, comes into focus, with an in-focus object distance of d3. When d3 is close to the first distance, the candidate image region whose in-focus object distance is d3 is determined as the target image region.
In other embodiments, determining the target image region from the current image includes: determining multiple candidate image regions in the current image; determining second distances between the objects in the multiple candidate image regions and the first photographing device when the current image is captured; and determining the target image region from the multiple candidate image regions according to the second distances.
Exemplarily, determining the target image region from the multiple candidate image regions according to the second distances includes: determining the candidate image region with the smallest second distance as the target image region. As shown in FIG. 7, the several candidate image regions with the smallest second distances are determined as the target image region.
Exemplarily, the method further includes: acquiring a first distance between the target object and the second photographing device when the reference image was captured. Exemplarily, determining the target image region from the multiple candidate image regions according to the second distances includes: determining the target image region from the multiple candidate image regions according to the first distance and the second distances. A region in the current image that has the same or similar features as the reference image can thus be determined as the target image region.
Exemplarily, the target image region includes the candidate image regions whose second distance differs from the first distance by no more than a preset threshold; alternatively, the target image region includes the candidate image region whose second distance differs least from the first distance. In this way, the distance between the objects in the target image region and the first photographing device is close to the first distance; for example, the object in the target image region is the target object in the reference image.
In still other embodiments, the target image region is the image region in the current image corresponding to the target object, where the target object is the object closest to the first photographing device within its field of view when the current image is captured.
Usually, when an image is captured, the target object is closer to the photographing device than the background; for example, the region corresponding to the target object in the image may be called the foreground. Exemplarily, the target object in the reference image is located in the foreground region, and the target image region may be the foreground region of the current image. A region in the current image that has the same or similar features as the reference image can thus be determined as the target image region.
Exemplarily, the target image region is determined in the current image through a preset image segmentation model. The image segmentation model may be a trained neural network model used to segment the foreground region of an image, and the target image region is determined according to the foreground region of the current image. Of course, the manner of determining the target image region according to the foreground of the current image is not limited to this; for example, the position of the first photographing device may be adjusted, the depth of each pixel in the current image may be determined through multi-view observation, and the image region with the smallest depth may be determined as the target image region.
In still other embodiments, the method further includes: acquiring the type of the target object in the reference image.
Exemplarily, determining the target image region from the current image includes: determining the image region of the target object in the current image according to the type of the target object, and determining the image region of the target object as the target image region. A region in the current image that has the same or similar features as the reference image can thus be determined as the target image region.
Exemplarily, the target object and/or the type of the target object is determined according to a user's operation; and/or the target object and/or the type of the target object is obtained by recognizing the reference image.
Exemplarily, the type of the target object in the reference image may be determined according to the user's operation on the terminal device and/or by recognizing the reference image. For example, the type of the target object input by the user on the terminal device may be acquired, or the type may be determined by recognizing the reference image or the target object in the reference image.
Exemplarily, the object closest to the second photographing device when the reference image was captured is determined as the target object, or at least one of the objects may be determined as the target object according to a user's selection operation.
According to the control method for a movable platform provided by the embodiments of the present application, the actual pose of the first photographing device is adjusted according to the reference pose indication parameter recorded when the reference image was captured, and the current image captured by the first photographing device is acquired; the first feature point of the target image region in the current image and the second feature point in the reference image that matches the first feature point are determined; and the actual pose of the first photographing device is adjusted according to the position of the first feature point in the current image and the position of the second feature point in the reference image, so that the actual pose of the first photographing device approaches the pose at which the reference image was captured, and the current image acquired by the first photographing device can be closer to the reference image.
By distinguishing the target image region that the user actually cares about in the current image, extracting feature points and performing feature matching within the target image region, and adjusting the actual pose of the first photographing device according to the positions of the feature points of the target image region in the current image and the positions of the matching feature points in the reference image, non-target objects in the field of view can be prevented from affecting the pose adjustment of the first photographing device. The method adapts better to scenarios with pose changes of the first photographing device, background texture changes in the field of view, and similar targets at different distances in the field of view, and can prevent other parts of the field of view from affecting the position and attitude calculation, so that more accurate re-shooting results can be obtained.
In some embodiments, using the focal plane to distinguish objects at different distances in the image serves to screen the target image region. The target image region is determined from the image distances at which the respective regions of the current image come into focus during contrast-detection autofocus, and feature points are extracted and matched within the target image region. Objects at different distances in the image can thus be separated before feature point matching for re-shooting, without changing the position of the first photographing device; prior information about the target distance or position can also be combined to screen out the accurate target image region in the current image and shield the influence of useless information.
Referring to FIG. 8 in conjunction with the above embodiments, FIG. 8 is a schematic block diagram of a control apparatus 500 for a movable platform provided by an embodiment of the present application. The control apparatus 500 is used to control the movable platform.
Exemplarily, the movable platform may include at least one of a UAV, a gimbal, an unmanned vehicle, and the like. Further, the aerial vehicle may be a rotorcraft UAV, such as a quadrotor, hexarotor, or octorotor UAV, or a fixed-wing UAV.
The movable platform includes a first photographing device, and the first photographing device is used to acquire images.
Exemplarily, the first photographing device is mounted on the body of the UAV through a gimbal, and the pose of the first photographing device can be adjusted by adjusting the position of the UAV, the attitude of the UAV, and/or the pose of the gimbal. Alternatively, the first photographing device may be mounted directly on the body of the UAV, and its pose can be adjusted by adjusting the pose of the UAV. Here, a pose includes a position and/or an attitude.
Exemplarily, the movable platform includes the control apparatus 500, and the control apparatus 500 includes one or more processors 501, working individually or collectively, for performing the steps of the foregoing control method for a movable platform.
Exemplarily, the control apparatus 500 further includes a memory 502, and the memory 502 is used to store program instructions.
Exemplarily, the processor 501 and the memory 502 are connected through a bus 503, such as an I2C (Inter-Integrated Circuit) bus.
Specifically, the processor 501 may be a micro-controller unit (MCU), a central processing unit (CPU), a digital signal processor (DSP), or the like.
Specifically, the memory 502 may be a Flash chip, a read-only memory (ROM) disk, an optical disc, a USB flash drive, a removable hard disk, or the like.
The one or more processors 501 are used to invoke the program instructions stored in the memory 502 and, when executing the program instructions, perform the steps of the foregoing control method for a movable platform.
Exemplarily, the processor 501 is used to invoke the program instructions stored in the memory 502 and, when executing the program instructions, perform the following steps:
acquiring a reference image and a reference pose indication parameter of a second photographing device that captured the reference image;
determining the pose indicated by the reference pose indication parameter as a target pose of the first photographing device, controlling adjustment of the actual pose of the first photographing device according to the target pose, and acquiring a current image captured by the first photographing device;
determining a target image region from the current image;
acquiring a first feature point of the target image region and a second feature point in the reference image that matches the first feature point;
adjusting the actual pose of the first photographing device according to the position of the first feature point in the current image and the position of the second feature point in the reference image.
The specific principles and implementations of the control apparatus provided by the embodiments of the present application are similar to those of the control method for a movable platform of the foregoing embodiments and are not repeated here.
An embodiment of the present application further provides a computer-readable storage medium, where the computer-readable storage medium stores program instructions that, when executed by a processor, cause the processor to implement the steps of the control method for a movable platform provided by the above embodiments.
The computer-readable storage medium may be an internal storage unit of the control apparatus of any of the foregoing embodiments, such as a hard disk or memory of the control apparatus. The computer-readable storage medium may also be an external storage device of the control apparatus, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the control apparatus.
Referring to FIG. 9 in conjunction with the above embodiments, FIG. 9 is a schematic block diagram of a terminal device 600 provided by an embodiment of the present application.
The terminal device 600 may include at least one of a mobile phone, a tablet computer, a laptop computer, a desktop computer, a personal digital assistant, a wearable device, a remote controller, and the like.
Specifically, the terminal device 600 can be communicatively connected to a movable platform, and the movable platform includes a first photographing device. The movable platform can send its own pose information and the images captured by the first photographing device to the terminal device 600, and the terminal device 600 can control the movable platform to adjust its pose according to the pose information and the images.
The terminal device 600 includes one or more processors 601, working individually or collectively, for performing the steps of the foregoing control method for a movable platform.
Exemplarily, the terminal device 600 further includes a memory 602, and the memory 602 is used to store program instructions.
Exemplarily, the processor 601 and the memory 602 are connected through a bus 603, such as an I2C (Inter-Integrated Circuit) bus.
Specifically, the processor 601 may be a micro-controller unit (MCU), a central processing unit (CPU), a digital signal processor (DSP), or the like.
Specifically, the memory 602 may be a Flash chip, a read-only memory (ROM) disk, an optical disc, a USB flash drive, a removable hard disk, or the like.
The one or more processors 601 are used to invoke the program instructions stored in the memory 602 and, when executing the program instructions, perform the steps of the foregoing control method for a movable platform.
Exemplarily, the processor 601 is used to invoke the program instructions stored in the memory 602 and, when executing the program instructions, perform the following steps:
acquiring a reference image and a reference pose indication parameter of a second photographing device that captured the reference image;
determining the pose indicated by the reference pose indication parameter as a target pose of a first photographing device of the movable platform, controlling adjustment of the actual pose of the first photographing device according to the target pose, and acquiring a current image captured by the first photographing device;
determining a target image region from the current image;
acquiring a first feature point of the target image region and a second feature point in the reference image that matches the first feature point;
adjusting the actual pose of the first photographing device according to the position of the first feature point in the current image and the position of the second feature point in the reference image.
The specific principles and implementations of the terminal device provided by the embodiments of the present application are similar to those of the control method for a movable platform of the foregoing embodiments and are not repeated here.
An embodiment of the present application further provides a computer-readable storage medium, where the computer-readable storage medium stores program instructions that, when executed by a processor, cause the processor to implement the steps of the control method for a movable platform provided by the above embodiments.
The computer-readable storage medium may be an internal storage unit of the terminal device of any of the foregoing embodiments, such as a hard disk or memory of the terminal device. The computer-readable storage medium may also be an external storage device of the terminal device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the terminal device.
Referring to FIG. 10, FIG. 10 is a schematic block diagram of a movable platform 700 provided by an embodiment of the present application. Exemplarily, the movable platform may include at least one of a UAV, a gimbal, an unmanned vehicle, and the like. Further, the UAV may be a rotorcraft UAV, such as a quadrotor, hexarotor, or octorotor UAV, or a fixed-wing UAV.
Specifically, the movable platform 700 includes a first photographing device; the movable platform 700 can acquire the images captured by the first photographing device, and can adjust its own pose according to pose information and the images captured by the first photographing device.
The movable platform 700 includes one or more processors 701, working individually or collectively, for performing the steps of the foregoing control method for a movable platform.
Exemplarily, the movable platform 700 further includes a memory 702, and the memory 702 is used to store program instructions.
Exemplarily, the processor 701 and the memory 702 are connected through a bus 703, such as an I2C (Inter-Integrated Circuit) bus.
Specifically, the processor 701 may be a micro-controller unit (MCU), a central processing unit (CPU), a digital signal processor (DSP), or the like.
Specifically, the memory 702 may be a Flash chip, a read-only memory (ROM) disk, an optical disc, a USB flash drive, a removable hard disk, or the like.
The one or more processors 701 are used to invoke the program instructions stored in the memory 702 and, when executing the program instructions, perform the steps of the foregoing control method for a movable platform.
Exemplarily, the processor 701 is used to invoke the program instructions stored in the memory 702 and, when executing the program instructions, perform the following steps:
acquiring a reference image and a reference pose indication parameter of a second photographing device that captured the reference image;
determining the pose indicated by the reference pose indication parameter as a target pose of the first photographing device, controlling adjustment of the actual pose of the first photographing device according to the target pose, and acquiring a current image captured by the first photographing device;
determining a target image region from the current image;
acquiring a first feature point of the target image region and a second feature point in the reference image that matches the first feature point;
adjusting the actual pose of the first photographing device according to the position of the first feature point in the current image and the position of the second feature point in the reference image.
The specific principles and implementations of the movable platform provided by the embodiments of the present application are similar to those of the control method for a movable platform of the foregoing embodiments and are not repeated here.
An embodiment of the present application further provides a computer-readable storage medium, where the computer-readable storage medium stores program instructions that, when executed by a processor, cause the processor to implement the steps of the control method for a movable platform provided by the above embodiments.
The computer-readable storage medium may be an internal storage unit of the movable platform of any of the foregoing embodiments, such as a hard disk or memory of the movable platform. The computer-readable storage medium may also be an external storage device of the movable platform, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the movable platform.
It should be understood that the terms used in this application are for the purpose of describing particular embodiments only and are not intended to limit the present application.
It should also be understood that the term "and/or" used in this application and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
The above are only specific embodiments of the present application, but the protection scope of the present application is not limited thereto. Any person skilled in the art can easily conceive of various equivalent modifications or replacements within the technical scope disclosed in the present application, and such modifications or replacements shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (47)

  1. A control method for a movable platform, wherein the movable platform includes a first photographing device, the method comprising:
    acquiring a reference image and a reference pose indication parameter of a second photographing device that captured the reference image;
    determining the pose indicated by the reference pose indication parameter as a target pose of the first photographing device, controlling adjustment of the actual pose of the first photographing device according to the target pose, and acquiring a current image captured by the first photographing device;
    determining a target image region from the current image;
    acquiring a first feature point of the target image region and a second feature point in the reference image that matches the first feature point;
    adjusting the actual pose of the first photographing device according to the position of the first feature point in the current image and the position of the second feature point in the reference image.
  2. The control method according to claim 1, wherein the reference image includes an image region of a target object, and the method further comprises:
    acquiring a first distance between the target object and the second photographing device when the reference image was captured;
    wherein determining the target image region in the current image comprises:
    determining the target image region in the current image according to the first distance.
  3. The control method according to claim 2, wherein determining the target image region in the current image according to the first distance comprises:
    determining multiple candidate image regions in the current image;
    determining second distances between objects in the multiple candidate image regions and the first photographing device when the current image is captured;
    determining the target image region from the multiple candidate image regions according to the first distance and the second distances.
  4. The control method according to claim 3, wherein the target image region includes a candidate image region whose second distance differs from the first distance by no more than a preset threshold; or
    the target image region includes the candidate image region whose second distance differs least from the first distance.
  5. The control method according to claim 1, wherein determining the target image region from the current image comprises:
    determining multiple candidate image regions in the current image;
    determining second distances between objects in the multiple candidate image regions and the first photographing device when the current image is captured;
    determining the target image region from the multiple candidate image regions according to the second distances.
  6. The control method according to claim 5, wherein determining the target image region from the multiple candidate image regions according to the second distances comprises:
    determining the candidate image region with the smallest second distance as the target image region.
  7. The control method according to claim 5, wherein the method further comprises:
    acquiring a first distance between the target object and the second photographing device when the reference image was captured;
    wherein determining the target image region from the multiple candidate image regions according to the second distances comprises:
    determining the target image region from the multiple candidate image regions according to the first distance and the second distances.
  8. The control method according to claim 7, wherein the target image region includes a candidate image region whose second distance differs from the first distance by no more than a preset threshold; or
    the target image region includes the candidate image region whose second distance differs least from the first distance.
  9. The control method according to claim 1, wherein the target image region is the image region in the current image corresponding to a target object, and the target object is the object closest to the first photographing device within its field of view when the first photographing device captures the current image.
  10. The control method according to claim 9, wherein the target image region is determined in the current image through a preset image segmentation model.
  11. The control method according to claim 1, wherein the method further comprises:
    acquiring a type of a target object in the reference image;
    wherein determining the target image region from the current image comprises:
    determining the image region of the target object in the current image according to the type of the target object, and determining the image region of the target object as the target image region.
  12. The control method according to claim 11, wherein the target object and/or the type of the target object is determined according to a user's operation; and/or
    the target object and/or the type of the target object is obtained by recognizing the reference image.
  13. The control method according to any one of claims 1-12, wherein the method further comprises:
    determining a reference region in the reference image;
    wherein acquiring the first feature point of the target image region and the second feature point in the reference image that matches the first feature point comprises:
    acquiring the first feature point of the target image region and the second feature point in the reference region that matches the first feature point.
  14. The control method according to claim 13, wherein the reference region is the image region of a target object in the reference image; or
    the reference region is an image region determined in the reference image according to a user's selection operation.
  15. The control method according to claim 3 or 5, wherein determining the second distances between the objects in the multiple candidate image regions and the first photographing device when the current image is captured comprises:
    adjusting the image distance of the first photographing device, and determining the image distance at which each of the multiple candidate image regions is in focus;
    determining, according to the image distance at focus, the second distances between the objects in the multiple candidate image regions and the first photographing device.
  16. The control method according to claim 3 or 5, wherein the second distance between an object in a candidate image region and the first photographing device is determined through a distance sensor carried by the first photographing device.
  17. The control method according to any one of claims 1-16, wherein the field-of-view angle of the current image is greater than or equal to the field-of-view angle of the reference image.
  18. The control method according to any one of claims 1-16, wherein the focal length when acquiring the current image is less than or equal to the focal length when acquiring the reference image.
  19. The control method according to any one of claims 1-18, wherein adjusting the actual pose of the first photographing device according to the position of the first feature point in the current image and the position of the second feature point in the reference image comprises:
    determining, according to the position of the first feature point in the current image and the position of the second feature point in the reference image, a pose deviation between the current actual pose of the first photographing device and the pose indicated by the indication parameter, and adjusting the actual pose of the first photographing device according to the pose deviation.
  20. The control method according to claim 19, wherein determining the pose deviation between the current actual pose of the first photographing device and the pose indicated by the indication parameter according to the position of the first feature point in the current image and the position of the second feature point in the reference image comprises:
    determining three-dimensional coordinates of a first spatial point corresponding to the first feature point according to the position of the first feature point in the current image;
    determining three-dimensional coordinates of a second spatial point corresponding to the second feature point according to the position of the second feature point in the reference image;
    determining the pose deviation between the current actual pose of the first photographing device and the pose indicated by the indication parameter according to the three-dimensional coordinates of the first spatial point and the three-dimensional coordinates of the second spatial point.
  21. The control method according to any one of claims 1-20, wherein the movable platform includes at least one of: an unmanned aerial vehicle, a gimbal, an unmanned vehicle.
  22. The control method according to any one of claims 1-21, wherein the method further comprises: after adjusting the actual pose of the first photographing device according to the position of the first feature point in the current image and the position of the second feature point in the reference image, adjusting the focal length of the first photographing device according to the focal length at which the reference image was captured, and controlling the first photographing device to capture an observation image.
  23. A control apparatus for a movable platform, for controlling the movable platform, wherein the movable platform includes a first photographing device, the control apparatus comprising a memory and one or more processors;
    the memory is used to store program instructions;
    the one or more processors, working individually or collectively, invoke and execute the program instructions to perform the following steps:
    acquiring a reference image and a reference pose indication parameter of a second photographing device that captured the reference image;
    determining the pose indicated by the reference pose indication parameter as a target pose of the first photographing device, controlling adjustment of the actual pose of the first photographing device according to the target pose, and acquiring a current image captured by the first photographing device;
    determining a target image region from the current image;
    acquiring a first feature point of the target image region and a second feature point in the reference image that matches the first feature point;
    adjusting the actual pose of the first photographing device according to the position of the first feature point in the current image and the position of the second feature point in the reference image.
  24. The control apparatus according to claim 23, wherein the reference image includes an image region of a target object, and the processor is further configured to perform:
    acquiring a first distance between the target object and the second photographing device when the reference image was captured;
    wherein, when determining the target image region in the current image, the processor is configured to perform:
    determining the target image region in the current image according to the first distance.
  25. The control apparatus according to claim 24, wherein, when determining the target image region in the current image according to the first distance, the processor is configured to perform:
    determining multiple candidate image regions in the current image;
    determining second distances between objects in the multiple candidate image regions and the first photographing device when the current image is captured;
    determining the target image region from the multiple candidate image regions according to the first distance and the second distances.
  26. The control apparatus according to claim 25, wherein the target image region includes a candidate image region whose second distance differs from the first distance by no more than a preset threshold; or the target image region includes the candidate image region whose second distance differs least from the first distance.
  27. The control apparatus according to claim 23, wherein, when determining the target image region from the current image, the processor is configured to perform:
    determining multiple candidate image regions in the current image;
    determining second distances between objects in the multiple candidate image regions and the first photographing device when the current image is captured;
    determining the target image region from the multiple candidate image regions according to the second distances.
  28. The control apparatus according to claim 27, wherein, when determining the target image region from the multiple candidate image regions according to the second distances, the processor is configured to perform:
    determining the candidate image region with the smallest second distance as the target image region.
  29. The control apparatus according to claim 27, wherein the processor is further configured to perform:
    acquiring a first distance between the target object and the second photographing device when the reference image was captured;
    wherein, when determining the target image region from the multiple candidate image regions according to the second distances, the processor is configured to perform:
    determining the target image region from the multiple candidate image regions according to the first distance and the second distances.
  30. The control apparatus according to claim 29, wherein the target image region includes a candidate image region whose second distance differs from the first distance by no more than a preset threshold; or the target image region includes the candidate image region whose second distance differs least from the first distance.
  31. The control apparatus according to claim 23, wherein the target image region is the image region in the current image corresponding to a target object, and the target object is the object closest to the first photographing device within its field of view when the first photographing device captures the current image.
  32. The control apparatus according to claim 31, wherein the target image region is determined in the current image through a preset image segmentation model.
  33. The control apparatus according to claim 23, wherein the processor is further configured to perform:
    acquiring a type of a target object in the reference image;
    wherein, when determining the target image region from the current image, the processor is configured to perform:
    determining the image region of the target object in the current image according to the type of the target object, and determining the image region of the target object as the target image region.
  34. The control apparatus according to claim 33, wherein the target object and/or the type of the target object is determined according to a user's operation; and/or
    the target object and/or the type of the target object is obtained by recognizing the reference image.
  35. The control apparatus according to any one of claims 23-34, wherein the processor is further configured to perform:
    determining a reference region in the reference image;
    wherein, when acquiring the first feature point of the target image region and the second feature point in the reference image that matches the first feature point, the processor is configured to perform:
    acquiring the first feature point of the target image region and the second feature point in the reference region that matches the first feature point.
  36. The control apparatus according to claim 35, wherein the reference region is the image region of a target object in the reference image; or
    the reference region is an image region determined in the reference image according to a user's selection operation.
  37. The control apparatus according to claim 25 or 27, wherein, when determining the second distances between the objects in the multiple candidate image regions and the first photographing device when the current image is captured, the processor is configured to perform:
    adjusting the image distance of the first photographing device, and determining the image distance at which each of the multiple candidate image regions is in focus;
    determining, according to the image distance at focus, the second distances between the objects in the multiple candidate image regions and the first photographing device.
  38. The control apparatus according to claim 25 or 27, wherein the second distance between an object in a candidate image region and the first photographing device is determined through a distance sensor carried by the first photographing device.
  39. The control apparatus according to any one of claims 23-38, wherein the field-of-view angle of the current image is greater than or equal to the field-of-view angle of the reference image.
  40. The control apparatus according to any one of claims 23-39, wherein the focal length when acquiring the current image is less than or equal to the focal length when acquiring the reference image.
  41. The control apparatus according to any one of claims 23-40, wherein, when adjusting the actual pose of the first photographing device according to the position of the first feature point in the current image and the position of the second feature point in the reference image, the processor is configured to perform:
    determining, according to the position of the first feature point in the current image and the position of the second feature point in the reference image, a pose deviation between the current actual pose of the first photographing device and the pose indicated by the indication parameter, and adjusting the actual pose of the first photographing device according to the pose deviation.
  42. The control apparatus according to claim 41, wherein, when determining the pose deviation between the current actual pose of the first photographing device and the pose indicated by the indication parameter according to the position of the first feature point in the current image and the position of the second feature point in the reference image, the processor is configured to perform:
    determining three-dimensional coordinates of a first spatial point corresponding to the first feature point according to the position of the first feature point in the current image;
    determining three-dimensional coordinates of a second spatial point corresponding to the second feature point according to the position of the second feature point in the reference image;
    determining the pose deviation between the current actual pose of the first photographing device and the pose indicated by the indication parameter according to the three-dimensional coordinates of the first spatial point and the three-dimensional coordinates of the second spatial point.
  43. The control apparatus according to any one of claims 23-42, wherein the movable platform includes at least one of: an unmanned aerial vehicle, a gimbal, an unmanned vehicle.
  44. The control apparatus according to any one of claims 23-43, wherein the processor is further configured to perform: after adjusting the actual pose of the first photographing device according to the position of the first feature point in the current image and the position of the second feature point in the reference image, adjusting the focal length of the first photographing device according to the focal length at which the reference image was captured, and controlling the first photographing device to capture an observation image.
  45. A terminal device, capable of being communicatively connected to a movable platform;
    the terminal device comprising a memory and one or more processors;
    the memory is used to store program instructions;
    the one or more processors, working individually or collectively, invoke and execute the program instructions to perform the following steps:
    acquiring a reference image and a reference pose indication parameter of a second photographing device that captured the reference image;
    determining the pose indicated by the reference pose indication parameter as a target pose of a first photographing device of the movable platform, controlling adjustment of the actual pose of the first photographing device according to the target pose, and acquiring a current image captured by the first photographing device;
    determining a target image region from the current image;
    acquiring a first feature point of the target image region and a second feature point in the reference image that matches the first feature point;
    adjusting the actual pose of the first photographing device according to the position of the first feature point in the current image and the position of the second feature point in the reference image.
  46. A movable platform, comprising a first photographing device, a memory, and one or more processors, wherein the first photographing device is used to acquire images;
    the memory is used to store program instructions;
    the one or more processors, working individually or collectively, invoke and execute the program instructions to perform the following steps:
    acquiring a reference image and a reference pose indication parameter of a second photographing device that captured the reference image;
    determining the pose indicated by the reference pose indication parameter as a target pose of the first photographing device, controlling adjustment of the actual pose of the first photographing device according to the target pose, and acquiring a current image captured by the first photographing device;
    determining a target image region from the current image;
    acquiring a first feature point of the target image region and a second feature point in the reference image that matches the first feature point;
    adjusting the actual pose of the first photographing device according to the position of the first feature point in the current image and the position of the second feature point in the reference image.
  47. A computer-readable storage medium, wherein the computer-readable storage medium stores program instructions that, when executed by a processor, cause the processor to implement the control method according to any one of claims 1-22.
PCT/CN2020/141086 2020-12-29 2020-12-29 Movable platform and control method and apparatus therefor, terminal device, and storage medium WO2022141123A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/141086 WO2022141123A1 (zh) 2020-12-29 2020-12-29 Movable platform and control method and apparatus therefor, terminal device, and storage medium


Publications (1)

Publication Number Publication Date
WO2022141123A1 true WO2022141123A1 (zh) 2022-07-07

Family ID: 82259905

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/141086 WO2022141123A1 (zh) Movable platform and control method and apparatus therefor, terminal device, and storage medium 2020-12-29 2020-12-29

Country Status (1)

Country Link
WO (1) WO2022141123A1 (zh)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105959625A (zh) * 2016-05-04 2016-09-21 北京博瑞爱飞科技发展有限公司 控制无人机追踪拍摄的方法及装置
US20200250429A1 (en) * 2017-10-26 2020-08-06 SZ DJI Technology Co., Ltd. Attitude calibration method and device, and unmanned aerial vehicle
CN111316632A (zh) * 2019-01-17 2020-06-19 深圳市大疆创新科技有限公司 拍摄控制方法及可移动平台
CN111316185A (zh) * 2019-02-26 2020-06-19 深圳市大疆创新科技有限公司 可移动平台的巡检控制方法和可移动平台
CN111429517A (zh) * 2020-03-23 2020-07-17 Oppo广东移动通信有限公司 重定位方法、重定位装置、存储介质与电子设备


Legal Events

121 EP: the EPO has been informed by WIPO that EP was designated in this application (ref document number: 20967471; country of ref document: EP; kind code of ref document: A1)
NENP: non-entry into the national phase (ref country code: DE)
122 EP: PCT application non-entry in European phase (ref document number: 20967471; country of ref document: EP; kind code of ref document: A1)