WO2021051227A1 - Method and device for determining pose information of an image in three-dimensional reconstruction - Google Patents


Info

Publication number
WO2021051227A1
WO2021051227A1 (PCT/CN2019/105923)
Authority
WO
WIPO (PCT)
Prior art keywords: frame image, current frame, image, pose information, information
Application number: PCT/CN2019/105923
Other languages: English (en), French (fr)
Inventors: 朱晏辰 (Zhu Yanchen), 薛唐立 (Xue Tangli)
Original assignee: 深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Application filed by 深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Priority application: PCT/CN2019/105923 (WO2021051227A1)
Related application: CN201980030549.6A (CN112106113A)
Publication: WO2021051227A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T7/00 - Image analysis
    • G06T7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 - Determination of transform parameters for the alignment of images using feature-based methods
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10004 - Still image; Photographic image

Definitions

  • The present invention relates generally to the technical field of three-dimensional reconstruction, and more specifically to a method and device for determining the pose information of an image in three-dimensional reconstruction.
  • Real-time 3D reconstruction runs two algorithm modules simultaneously: a front-end tracking thread and a back-end mapping thread.
  • An important role of the front-end tracking thread is to determine, in real time, the pose information of the camera at the moment each image is taken.
  • The back-end mapping thread uses the pose information to build the map. If the tracking process cannot provide the corresponding pose, the reconstruction fails. Therefore, the robustness of real-time 3D reconstruction depends largely on the robustness of the simultaneous localization and mapping (SLAM) algorithm.
  • The traditional front-end tracking thread is divided into three stages: initialization, tracking, and relocation.
  • When tracking fails, the relocation process is initiated: a similarity search is performed over all image frames in the global map (such as key frame images) to find the frame that best matches the current frame image, and tracking is performed again using the found frame to calculate the pose information of the camera corresponding to the current frame image.
  • However, the required image frame may not be found during relocation, in which case the pose information of the current frame image cannot be determined. This causes the real-time 3D reconstruction to be unable to continue, or the current frame image to be unusable.
  • In view of this, the present invention provides a scheme for determining the pose information of an image in three-dimensional reconstruction, which effectively avoids the problem that three-dimensional reconstruction fails when relocation fails, and improves the robustness of real-time three-dimensional reconstruction.
  • A method for determining the pose information of an image in three-dimensional reconstruction includes: acquiring a current frame image taken by a camera carried on a gimbal of a movable platform, where the movable platform includes a sensor for measuring the position information of the movable platform and the attitude information of the gimbal; when relocation of the current frame image fails, extracting the feature points of the previous frame image of the current frame image, and executing a feature point matching algorithm on the feature points of the previous frame image to determine the feature point matching pairs between the current frame image and the previous frame image; and, when the feature point matching pairs satisfy a preset matching success condition, determining the relative pose information of the current frame image with respect to the previous frame image from the feature point matching pairs, and determining the pose information of the current frame image from the relative pose information together with the position information and attitude information measured by the sensor when the current frame image was taken.
  • A device for determining the pose information of an image in three-dimensional reconstruction includes a storage device and a processor, where the storage device is configured to store program code and the processor executes the program code. When executed, the program code is used to: acquire the current frame image taken by the camera carried on the gimbal of the movable platform, where the movable platform includes a sensor for measuring the position information of the movable platform and the attitude information of the gimbal; when relocation of the current frame image fails, extract the feature points of the previous frame image of the current frame image and execute a feature point matching algorithm on them to determine the feature point matching pairs between the current frame image and the previous frame image; and, when the feature point matching pairs satisfy a preset matching success condition, determine the relative pose information of the current frame image with respect to the previous frame image from the matching pairs, and determine the pose information of the current frame image from the relative pose information and the position information and attitude information measured by the sensor.
  • a movable platform includes the above-mentioned device for determining the pose information of an image in three-dimensional reconstruction.
  • A storage medium has a computer program stored thereon, and the computer program, when run, executes the above-mentioned method for determining the pose information of an image in three-dimensional reconstruction.
  • In the case that relocation of the current frame image fails, the method, device, movable platform, and storage medium according to the embodiments of the present invention obtain the relative pose information of the current frame image with respect to the previous frame image through feature matching between the current frame image and the previous frame image, and determine the pose information of the current frame image based on that relative pose information and the sensor pose information of the movable platform carrying the camera that took the image. This effectively avoids the problem that 3D reconstruction fails when relocation fails, and improves the robustness of real-time 3D reconstruction.
  • FIG. 1 shows a schematic flowchart of a method for determining the pose information of an image in three-dimensional reconstruction according to an embodiment of the present invention;
  • FIG. 2 shows a schematic flowchart of a method for determining the pose information of an image in three-dimensional reconstruction according to another embodiment of the present invention;
  • FIG. 3 shows a schematic flowchart of a method for determining the pose information of an image in three-dimensional reconstruction according to still another embodiment of the present invention;
  • FIG. 4 shows a schematic flowchart of a method for determining the pose information of an image in three-dimensional reconstruction according to yet another embodiment of the present invention;
  • Fig. 5 shows a schematic block diagram of a device for determining the pose information of an image in three-dimensional reconstruction according to an embodiment of the present invention.
  • Fig. 6 shows a schematic block diagram of a movable platform according to an embodiment of the present invention.
  • FIG. 1 shows a schematic flowchart of a method 100 for determining the pose information of an image in three-dimensional reconstruction according to an embodiment of the present invention.
  • the method 100 for determining the pose information of an image in 3D reconstruction may include the following steps:
  • In step S110, the current frame image taken by the camera carried on the gimbal of the movable platform is acquired, where the movable platform includes a sensor for measuring the position information of the movable platform and the attitude information of the gimbal.
  • The image acquired in step S110 is captured by a camera carried on a gimbal of a movable platform (such as a drone).
  • The pose information refers to the position information and attitude information of the camera at the moment the frame image was captured.
  • In some embodiments, tracking can be attempted on the current frame image first, and if tracking fails, the current frame image is relocated. The tracking and relocation processes are described below.
  • The tracking process can be as follows: for the three-dimensional points (that is, the map points in space) obtained from the feature points of the previous frame image, the corresponding matching feature points are determined in the current frame image; that is, the same-named feature points are found in the current frame image. This is a 3D-2D tracking process.
  • One implementation is to perform feature extraction on the current frame image to obtain its feature points, and then match the feature points of the three-dimensional points obtained from the previous frame image against the feature points of the current frame image.
  • Another implementation is to apply the KLT (Kanade-Lucas-Tomasi) optical flow algorithm to the feature points of the three-dimensional points (that is, the map points in space) obtained from the previous frame image to determine the corresponding matching feature points in the current frame image.
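As an illustration of the first, descriptor-based implementation, feature point matching can be sketched as a nearest-neighbour search over descriptors with a ratio test to reject ambiguous matches. This is a minimal sketch under assumed inputs, not the patent's actual algorithm; the descriptor format and the ratio threshold are illustrative:

```python
import numpy as np

def match_features(desc_prev, desc_curr, ratio=0.8):
    """Nearest-neighbour descriptor matching with a ratio test.

    desc_prev: (N, D) descriptors of the previous frame's feature points.
    desc_curr: (M, D) descriptors of the current frame's feature points.
    Returns (i, j) pairs: feature i in the previous frame matched to
    feature j in the current frame.
    """
    matches = []
    for i, d in enumerate(desc_prev):
        # Distance from this descriptor to every current-frame descriptor.
        dists = np.linalg.norm(desc_curr - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        # Keep the match only if it is clearly better than the runner-up.
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```

Whether tracking succeeds would then depend on whether the number of returned pairs reaches the preset threshold.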
  • If the feature point matching pairs satisfy the preset matching success condition (for example, the number of matching pairs reaches a predetermined threshold), the pose of the current frame image can be solved as follows. Using an assumed pose for the current frame image (which can be predicted, for example, from the pose information of the previous frame image), the three-dimensional points obtained from the previous frame image are projected into the current frame image to obtain the two-dimensional points corresponding to those three-dimensional points. The visual reprojection error is then computed from the matching relationship between these two-dimensional points and the feature points of the current frame image, and the pose information of the current frame image is solved by minimizing the reprojection error.
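The projection and reprojection-error computation described above can be sketched as follows, assuming a standard pinhole camera model with intrinsic matrix K and pose (R, t); the function names are illustrative, not the patent's:

```python
import numpy as np

def project(K, R, t, points_3d):
    """Pinhole projection of 3D map points into the image: x = K (R X + t)."""
    cam = points_3d @ R.T + t            # map points in the camera frame
    uv = cam[:, :2] / cam[:, 2:3]        # perspective division
    return uv @ K[:2, :2].T + K[:2, 2]   # apply focal lengths and principal point

def reprojection_error(K, R, t, points_3d, observed_2d):
    """Mean pixel distance between projected map points and their matched
    feature points; pose estimation minimizes this over (R, t)."""
    return float(np.linalg.norm(project(K, R, t, points_3d) - observed_2d,
                                axis=1).mean())
```

In a full tracker, (R, t) would be iteratively adjusted (e.g. by nonlinear least squares) to minimize this error.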
  • If the obtained feature point matching pairs do not satisfy the preset matching success condition (for example, the number of matching pairs does not reach the predetermined threshold), tracking fails and the current frame image is relocated. It should be understood that a three-dimensional point obtained from the feature points of the previous frame image is jointly observed by at least the previous frame image and the frame before it.
  • The relocation process can be as follows: a similarity search is performed over the images before the previous frame image to find the frame most similar to the current frame image, and tracking is performed again on the current frame image based on the found frame to calculate its pose information. If no frame similar to the current frame image is found, relocation fails. In the traditional scheme, if relocation fails, the pose information of the current frame image cannot be given and the three-dimensional reconstruction cannot continue.
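The similarity search at the heart of relocation is often implemented with global image descriptors (for example, bag-of-words vectors). A minimal sketch using cosine similarity, in which the descriptor type and the acceptance threshold are assumptions rather than the patent's specifics:

```python
import numpy as np

def find_most_similar(query_desc, keyframe_descs, min_score=0.9):
    """Similarity search over key frames using global image descriptors.

    Returns the index of the most similar key frame by cosine similarity,
    or None if no key frame exceeds min_score -- i.e. relocation failure.
    """
    q = query_desc / np.linalg.norm(query_desc)
    best_idx, best_score = None, min_score
    for idx, v in enumerate(keyframe_descs):
        score = float(q @ (v / np.linalg.norm(v)))
        if score > best_score:
            best_idx, best_score = idx, score
    return best_idx
```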
  • In step S120, when relocation of the current frame image fails, the feature points of the previous frame image of the current frame image are extracted, and a feature point matching algorithm is executed on the feature points of the previous frame image to determine the feature point matching pairs between the current frame image and the previous frame image.
  • In step S130, when the feature point matching pairs satisfy the preset matching success condition, the relative pose information of the current frame image with respect to the previous frame image is determined from the feature point matching pairs, and the pose information of the current frame image is determined from the relative pose information together with the position information and attitude information measured by the sensor when the current frame image was taken.
  • When relocation fails, the feature points of the current frame image and the previous frame image can be matched again.
  • The difference from the matching in the aforementioned tracking is that the matching here does not match the feature points of the three-dimensional points obtained from the previous frame image against the feature points of the current frame image; instead, all the feature points of the previous frame image are matched against the feature points of the current frame image. This is 2D-2D tracking, or two-dimensional tracking.
  • In this document, this operation performed after relocation fails is called two-dimensional tracking. That is, in the embodiment of the present invention, when relocation of the current frame image fails, two-dimensional tracking can be performed on the current frame image.
  • The two-dimensional tracking process can be as follows. The feature points of the previous frame image are matched against the feature points of the current frame image by executing a feature point matching algorithm, which yields the feature point matching pairs between the current frame image and the previous frame image. If these matching pairs satisfy the preset matching success condition (for example, the number of matching pairs reaches a predetermined threshold), the relative pose information of the current frame image with respect to the previous frame image can be determined from the matching pairs. The pose information of the current frame image is then determined from that relative pose information and the position information of the movable platform and attitude information of the gimbal measured by the sensor when the current frame image was taken (for convenience of description, these measurements are referred to below as the sensor pose information).
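Recovering the relative pose from 2D-2D matches is commonly posed through the essential matrix E = [t]×R, whose epipolar constraint x2ᵀ E x1 = 0 every correct match must satisfy; relative-pose solvers recover (R, t) by driving these residuals to zero. A sketch of the constraint itself, assuming normalized image coordinates (the solver is omitted; the patent does not specify this particular formulation):

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix such that skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def epipolar_residuals(R, t, x1, x2):
    """Residuals x2^T E x1 with E = [t]x R, for matched points given as
    homogeneous normalized image coordinates (one row per match).
    A correct relative pose (R, t) drives all residuals to zero."""
    E = skew(t) @ R
    return np.einsum('ij,jk,ik->i', x2, E, x1)
```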
  • The sensor here may include a real-time kinematic (RTK) carrier-phase differential positioning device for measuring the position information of the movable platform.
  • In summary, in the case that relocation of the current frame image fails, the method 100 for determining the pose information of an image in three-dimensional reconstruction obtains the relative pose information of the current frame image with respect to the previous frame image through feature matching between the two images, and determines the pose information of the current frame image based on that relative pose information and the sensor pose information of the movable platform carrying the camera that took the image. This requires only that the current frame image and the previous frame image jointly observe the three-dimensional points, which effectively avoids the problem that three-dimensional reconstruction cannot continue after relocation failure and improves the robustness of real-time three-dimensional reconstruction.
  • FIG. 2 shows a schematic flowchart of a method 200 for determining the pose information of an image in three-dimensional reconstruction according to another embodiment of the present invention.
  • the method 200 for determining the pose information of an image in 3D reconstruction may include the following steps:
  • In step S210, the current frame image taken by the camera carried on the gimbal of the movable platform is acquired, where the movable platform includes a sensor for measuring the position information of the movable platform and the attitude information of the gimbal.
  • In step S220, when relocation of the current frame image fails, the feature points of the previous frame image of the current frame image are extracted, and a feature point matching algorithm is executed on the feature points of the previous frame image to determine the feature point matching pairs between the current frame image and the previous frame image.
  • In step S230, when the feature point matching pairs satisfy the preset matching success condition, the relative pose information of the current frame image with respect to the previous frame image is determined from the feature point matching pairs, and the pose information of the current frame image is determined from the relative pose information together with the position information and attitude information measured by the sensor when the current frame image was taken.
  • In step S240, when the feature point matching pairs do not satisfy the preset matching success condition, the position information and attitude information measured by the sensor when the current frame image was taken are determined as the pose information of the current frame image.
  • That is, in step S240, when the feature point matching pairs between the current frame image and the previous frame image do not satisfy the preset matching success condition (for example, the number of matching pairs does not reach the predetermined threshold), two-dimensional tracking of the current frame image also fails. In this case, the position information of the movable platform and the attitude information of the gimbal measured by the sensor when the current frame image was taken (that is, the sensor pose information) are determined as the pose information of the current frame image.
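The overall per-frame decision logic of steps S210 to S240 can be sketched as a cascade of fallbacks. The stage callbacks here are hypothetical placeholders for the tracking, relocation, and two-dimensional tracking procedures described above:

```python
def determine_pose(frame, prev_frame, sensor_pose, track, relocalize, track_2d):
    """Per-frame pose decision cascade:
    3D-2D tracking -> relocation -> two-dimensional tracking -> sensor fallback.
    Each stage callback returns pose information on success or None on failure;
    sensor_pose is the RTK position plus gimbal attitude measured at capture time.
    """
    pose = track(frame, prev_frame)         # normal 3D-2D tracking
    if pose is None:
        pose = relocalize(frame)            # similarity search over the global map
    if pose is None:
        pose = track_2d(frame, prev_frame)  # 2D-2D tracking (steps S220/S230)
    if pose is None:
        pose = sensor_pose                  # sensor pose information (step S240)
    return pose
```

Because the last stage always yields a value, a pose is produced under any circumstances, which is the point the method makes.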
  • In this way, the method 200 for determining the pose information of an image in three-dimensional reconstruction can obtain the pose information of the current frame image under any circumstances.
  • In the case that relocation of the current frame image fails, the method 200 obtains the relative pose information of the current frame image with respect to the previous frame image based on feature matching between the two images, and determines the pose information of the current frame image based on that relative pose information and the sensor pose information of the movable platform carrying the camera, effectively avoiding the problem that three-dimensional reconstruction cannot continue after relocation failure and improving the robustness of real-time 3D reconstruction.
  • Furthermore, in the case that two-dimensional tracking of the current frame image also fails, the method 200 determines the position information of the movable platform and the attitude information of the gimbal measured by the sensor when the current frame image was taken (that is, the sensor pose information) as the pose information of the current frame image. Thus the pose information of the current frame image can be obtained under any circumstances, which further improves the robustness of real-time 3D reconstruction.
  • After the pose information of the current frame image is acquired, the next frame image can be acquired. Depending on how the pose information of the current frame image was obtained, the processing of the next frame image can also differ.
  • FIG. 3 shows a schematic flowchart of a method 300 for determining the pose information of an image in three-dimensional reconstruction according to still another embodiment of the present invention.
  • the method 300 for determining the pose information of the image in the three-dimensional reconstruction may include the following steps:
  • In step S310, the current frame image taken by the camera carried on the gimbal of the movable platform is acquired, where the movable platform includes a sensor for measuring the position information of the movable platform and the attitude information of the gimbal.
  • In step S320, when relocation of the current frame image fails, the feature points of the previous frame image of the current frame image are extracted, and a feature point matching algorithm is executed on the feature points of the previous frame image to determine the feature point matching pairs between the current frame image and the previous frame image.
  • In step S330, when the feature point matching pairs satisfy the preset matching success condition, the relative pose information of the current frame image with respect to the previous frame image is determined from the feature point matching pairs, and the pose information of the current frame image is determined from the relative pose information together with the position information and attitude information measured by the sensor when the current frame image was taken.
  • In step S340, the next frame image of the current frame image captured by the camera is acquired, the feature point matching pairs between the next frame image and the current frame image are determined, and the pose information of the next frame image is determined from the feature point matching pairs between the next frame image and the current frame image.
  • Steps S310 to S330 in the method 300 described with reference to FIG. 3 are similar to steps S110 to S130 in the method 100 described with reference to FIG. 1, and for brevity are not repeated here.
  • The method 300 described with reference to FIG. 3 differs in that it further includes step S340: acquiring the next frame image, determining the feature point matching pairs between the next frame image and the current frame image, and determining the pose information of the next frame image from those matching pairs.
  • If the current frame image mentioned in this document is the i-th frame image, then the previous frame image is the (i-1)-th frame image and the next frame image is the (i+1)-th frame image, where i is a natural number.
  • In the method 300, the pose information of the current frame image has been obtained, so three-dimensional points can be reconstructed based on the pose information of the current frame image and the feature point matching pairs between the current frame image and its previous frame image. On this basis, after the next frame image of the current frame image is acquired, the next frame image can be tracked: feature extraction is performed on the next frame image to obtain its feature points, and the feature points of the three-dimensional points obtained from the current frame image are matched against the feature points of the next frame image.
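Reconstructing a three-dimensional point from a feature point matching pair and the two frames' poses can be sketched with linear (DLT) triangulation. The projection matrices combine the intrinsics and poses; this is an illustrative sketch rather than the patent's specific method:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 projection matrices K[R|t] of the two frames.
    x1, x2: (u, v) pixel observations of the same feature in each frame.
    Stacks the constraints u*(p2 . X) - p0 . X = 0 and solves A X = 0 by SVD.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]   # dehomogenize
```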
  • If the obtained feature point matching pairs satisfy the preset matching success condition (for example, the number of matching pairs reaches a predetermined threshold), the three-dimensional points obtained from the current frame image are projected into the next frame image using the assumed pose information of the next frame image (which can be obtained, for example, from the pose information of the current frame image and the moving speed of the movable platform), yielding the two-dimensional points corresponding to the three-dimensional points. These two-dimensional points are matched against the feature points of the next frame image, the visual reprojection error is computed from the matching relationship, and the pose information of the next frame image is solved by minimizing the reprojection error.
  • If the obtained feature point matching pairs do not satisfy the preset matching success condition (for example, the number of matching pairs does not reach the predetermined threshold), tracking of the next frame image fails, and the next frame image is relocated. If relocation of the next frame image fails, the above-described two-dimensional tracking is performed on the next frame image. If two-dimensional tracking of the next frame image also fails, the sensor pose information is used as the pose information of the next frame image.
  • FIG. 4 shows a schematic flowchart of a method 400 for determining the pose information of an image in three-dimensional reconstruction according to another embodiment of the present invention.
  • the method 400 for determining the pose information of an image in 3D reconstruction may include the following steps:
  • In step S410, the current frame image taken by the camera carried on the gimbal of the movable platform is acquired, where the movable platform includes a sensor for measuring the position information of the movable platform and the attitude information of the gimbal.
  • In step S420, when relocation of the current frame image fails, the feature points of the previous frame image of the current frame image are extracted, and a feature point matching algorithm is executed on the feature points of the previous frame image to determine the feature point matching pairs between the current frame image and the previous frame image.
  • In step S430, when the feature point matching pairs satisfy the preset matching success condition, the relative pose information of the current frame image with respect to the previous frame image is determined from the feature point matching pairs, and the pose information of the current frame image is determined from the relative pose information together with the position information and attitude information measured by the sensor when the current frame image was taken.
  • In step S440, when the feature point matching pairs do not satisfy the preset matching success condition, the position information and attitude information measured by the sensor when the current frame image was taken are determined as the pose information of the current frame image.
  • In step S450, the next frame image is acquired, and the next frame image is relocated.
  • In the method 400, the sensor pose information is determined as the pose information of the current frame image because two-dimensional tracking of the current frame image failed, which means that the feature point matching pairs between the current frame image and the previous frame image are insufficient to establish new three-dimensional points. Therefore, after the next frame image is acquired, it cannot be tracked and can only be relocated: a similarity search is performed over the images before the current frame image to find the frame most similar to the next frame image, and tracking is performed again on the next frame image based on the found frame to calculate its pose information. If no frame similar to the next frame image is found, relocation fails. If relocation of the next frame image fails, two-dimensional tracking is performed on the next frame image, and if two-dimensional tracking also fails, the sensor pose information is used as the pose information of the next frame image.
  • It should be noted that if the pose information of the current frame image was determined from the relative pose information between the current frame image and the previous frame image together with the position information of the movable platform and the attitude information of the gimbal measured by the sensor when the current frame image was taken (that is, if it was determined by two-dimensional tracking), then the map points corresponding to the feature point matching pairs between the current frame image and the previous frame image can be determined based on the pose information of the current frame image (that is, three-dimensional point reconstruction can be performed).
  • If the pose information of the current frame image was determined as the position information of the movable platform and the attitude information of the gimbal measured by the sensor when the current frame image was taken (that is, as the sensor pose information), then, because the accuracy of the sensor pose information is limited, the pose information of the current frame image can be optimized using images subsequent to the current frame image.
  • a subsequent image captured by the photographing device after the current frame image may be acquired, and a matching pair of feature points corresponding to the subsequent image and the current frame image may be determined, and based on the subsequent image and the current frame
  • the feature point matching corresponding to the image optimizes the pose information of the current frame image.
  • the subsequent image may be all images after the current frame image, or part of the image after the current frame image (such as some key frame images).
  • the pose information of the subsequent image may be obtained first, and the preset area range may be determined according to the pose information of the subsequent image.
  • if the current frame image was shot within the preset area range, the feature point matching pairs corresponding to the subsequent image and the current frame image can be determined; if the current frame image was not shot within the preset area range, it may not be possible to determine the feature point matching pairs corresponding to the subsequent image and the current frame image, or the matching pairs determined may not satisfy the preset condition for successful matching.
  • the pose information of the current frame image can be optimized according to the subsequent images to further improve the robustness of the 3D reconstruction.
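The preset-area gating described above can be sketched as a simple distance test: feature matching between a subsequent image and the current frame is only attempted when the current frame was shot inside a preset range around the subsequent image's measured position. The radius value and function name below are assumptions for illustration; the patent does not specify how the range is defined.

```python
import math

def within_preset_area(current_pos, subsequent_pos, radius=50.0):
    """Return True if the current frame was shot within `radius` of the
    subsequent image's measured position (positions are (x, y, z) tuples)."""
    return math.dist(current_pos, subsequent_pos) <= radius
```

A caller would skip the (comparatively expensive) feature matching step entirely whenever this check fails.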
  • when relocalization of the current frame image fails, the method for determining the pose information of the image in the three-dimensional reconstruction according to the embodiment of the present invention obtains the relative pose information of the current frame image with respect to the previous frame image based on the feature matching between the current frame image and the previous frame image, and determines the pose information of the current frame image based on the relative pose information and the sensor pose information on the movable platform where the imaging device that took the image is located, effectively avoiding the problem that three-dimensional reconstruction is impossible when relocalization fails and improving the robustness of real-time three-dimensional reconstruction.
  • when two-dimensional tracking of the current frame image also fails, the method for determining the pose information of the image in the three-dimensional reconstruction according to the embodiment of the present invention determines the position information of the movable platform and the attitude information of the pan/tilt measured by the sensor when the current frame image was taken (i.e., the sensor pose information) as the pose information of the current frame image, so that the pose information of the current frame image can be obtained under any circumstances, further improving the robustness of real-time three-dimensional reconstruction.
  • the method for determining the pose information of the image in the three-dimensional reconstruction according to the embodiment of the present invention can also optimize the pose information of the current frame image according to the subsequent images, so as to further improve the robustness of the three-dimensional reconstruction.
  • FIG. 5 shows a schematic block diagram of a device 500 for determining the pose information of an image in three-dimensional reconstruction according to an embodiment of the present invention.
  • the device 500 for determining the pose information of the image in the three-dimensional reconstruction includes a storage device 510 and a processor 520.
  • the storage device 510 stores the program code used to implement the corresponding steps in the method for determining the pose information of the image in the three-dimensional reconstruction according to the embodiment of the present invention.
  • the processor 520 is configured to run the program code stored in the storage device 510 to execute the corresponding steps of the method for determining the pose information of the image in the three-dimensional reconstruction according to the embodiment of the present invention.
  • the device 500 for determining the pose information of the image in the three-dimensional reconstruction executes the following steps: acquiring the current frame image taken by the photographing device carried on the pan/tilt of the movable platform, wherein the movable platform includes a sensor for measuring the position information of the movable platform and the attitude information of the pan/tilt; when relocalization of the current frame image fails, extracting the feature points of the previous frame image of the current frame image, and executing a feature point matching algorithm on the feature points of the previous frame image to determine the feature point matching pairs corresponding to the current frame image and the previous frame image; and, when the feature point matching pairs satisfy the preset condition for successful matching, determining the relative pose information of the current frame image with respect to the previous frame image according to the feature point matching pairs, and determining the pose information of the current frame image according to the relative pose information and the position information and the attitude information measured by the sensor when the current frame image was taken.
  • the device 500 for determining the pose information of the image in the three-dimensional reconstruction is also caused to perform the following step: when the feature point matching pairs do not satisfy the preset condition for successful matching, determining the position information and the attitude information measured by the sensor when the current frame image was taken as the pose information of the current frame image.
  • the device 500 for determining the pose information of the image in the three-dimensional reconstruction is also caused to perform the following steps: acquiring the next frame image of the current frame image captured by the photographing device; and, when the pose information of the current frame image is the position information and the attitude information measured by the sensor when the current frame image was taken, relocalizing the next frame image.
  • the device 500 for determining the pose information of the image in the three-dimensional reconstruction is also caused to perform the following steps: acquiring the next frame image of the current frame image captured by the photographing device; when the pose information of the current frame image is determined based on the relative pose information and the position information and the attitude information measured by the sensor when the current frame image was taken, determining the feature point matching pairs corresponding to the next frame image and the current frame image; and determining the pose information of the next frame image according to the feature point matching pairs of the next frame image and the current frame image.
  • when the pose information of the current frame image is determined based on the relative pose information and the position information and the attitude information measured by the sensor when the current frame image was taken, the pose information of the current frame image is used to determine the map points corresponding to the feature point matching pairs of the current frame image and the previous frame image.
  • the device 500 for determining the pose information of the image in the three-dimensional reconstruction is also caused to perform the following steps: acquiring a subsequent image taken by the photographing device after the current frame image, and determining the feature point matching pairs corresponding to the subsequent image and the current frame image; and optimizing the pose information of the current frame image according to the feature point matching pairs corresponding to the subsequent image and the current frame image.
  • the device 500 for determining the pose information of the image in the three-dimensional reconstruction is also caused to perform the following steps: before determining the feature point matching pairs corresponding to the subsequent image and the current frame image, obtaining the pose information of the subsequent image and determining the preset area range according to the pose information of the subsequent image; and determining whether the current frame image was shot within the preset area range, and, if it is determined that the current frame image was shot within the preset area range, determining the feature point matching pairs corresponding to the subsequent image and the current frame image.
  • the sensor includes a real-time kinematic carrier-phase differential positioning device for measuring the position information of the movable platform.
  • a storage medium is provided, on which program instructions are stored; when the program instructions are executed by a computer or processor, they are used to execute the corresponding steps of the method for determining the pose information of the image in the three-dimensional reconstruction according to the embodiments of the present invention. The storage medium may include, for example, a memory card of a smart phone, a storage component of a tablet computer, a hard disk of a personal computer, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a portable compact disc read-only memory (CD-ROM), a USB memory, or any combination of the above storage media.
  • the computer-readable storage medium may be any combination of one or more computer-readable storage media.
  • the computer program instructions can execute the method for determining the pose information of an image in three-dimensional reconstruction according to an embodiment of the present invention when the computer program instructions are executed by a computer.
  • when run by the computer or processor, the computer program instructions cause the computer or processor to perform the following steps: acquiring the current frame image taken by the photographing device carried on the pan/tilt of the movable platform, wherein the movable platform includes a sensor for measuring the position information of the movable platform and the attitude information of the pan/tilt; when relocalization of the current frame image fails, extracting the feature points of the previous frame image of the current frame image, and executing a feature point matching algorithm on the feature points of the previous frame image to determine the feature point matching pairs corresponding to the current frame image and the previous frame image; and, when the feature point matching pairs satisfy the preset condition for successful matching, determining the relative pose information of the current frame image with respect to the previous frame image according to the feature point matching pairs, and determining the pose information of the current frame image according to the relative pose information and the position information and the attitude information measured by the sensor when the current frame image was taken.
  • when the computer program instructions are run by the computer or processor, the computer or processor also executes the following step: when the feature point matching pairs do not satisfy the preset condition for successful matching, determining the position information and the attitude information measured by the sensor when the current frame image was taken as the pose information of the current frame image.
  • when run by the computer or processor, the computer program instructions also cause the computer or processor to perform the following steps: acquiring the next frame image of the current frame image captured by the photographing device; and, when the pose information of the current frame image is the position information and the attitude information measured by the sensor when the current frame image was taken, relocalizing the next frame image.
  • when the computer program instructions are run by the computer or processor, the computer or processor further executes the following steps: acquiring the next frame image of the current frame image captured by the photographing device; when the pose information of the current frame image is determined based on the relative pose information and the position information and the attitude information measured by the sensor when the current frame image was taken, determining the feature point matching pairs corresponding to the next frame image and the current frame image; and determining the pose information of the next frame image according to the feature point matching pairs of the next frame image and the current frame image.
  • when the pose information of the current frame image is determined based on the relative pose information and the position information and the attitude information measured by the sensor when the current frame image was taken, the pose information of the current frame image is used to determine the map points corresponding to the feature point matching pairs of the current frame image and the previous frame image.
  • when the computer program instructions are run by a computer or processor, the computer or processor also executes the following steps: acquiring the subsequent image taken by the photographing device after the current frame image, and determining the feature point matching pairs corresponding to the subsequent image and the current frame image; and optimizing the pose information of the current frame image according to the feature point matching pairs corresponding to the subsequent image and the current frame image.
  • when the computer program instructions are run by the computer or processor, the computer or processor also executes the following steps: before determining the feature point matching pairs corresponding to the subsequent image and the current frame image, obtaining the pose information of the subsequent image and determining the preset area range according to the pose information of the subsequent image; and determining whether the current frame image was shot within the preset area range, and, if it is determined that the current frame image was shot within the preset area range, determining the feature point matching pairs corresponding to the subsequent image and the current frame image.
  • the sensor includes a real-time kinematic carrier-phase differential positioning device for measuring the position information of the movable platform.
  • a movable platform is also provided.
  • the movable platform may include the aforementioned device for determining the pose information of an image in three-dimensional reconstruction according to the embodiment of the present invention.
  • the movable platform provided according to another aspect of the present invention will be described below in conjunction with FIG. 6.
  • Fig. 6 shows a schematic block diagram of a movable platform 600 according to an embodiment of the present invention.
  • the movable platform 600 includes a pan-tilt 610, a camera 620, a sensor 630, and a device 640 for determining the pose information of the image in the three-dimensional reconstruction.
  • the camera 620 is mounted on the pan/tilt 610 of the movable platform 600 and takes images of the target survey area during the movement of the movable platform 600.
  • the device 640 for determining the pose information of the image in the three-dimensional reconstruction is used to determine the pose information of each frame image based on the images captured by the camera 620 and the position information of the movable platform 600 and the attitude information of the pan/tilt 610 measured by the sensor 630.
  • those skilled in the art can refer to FIG. 5, in combination with the previous description of FIG. 5, to understand the detailed operation of the device 640 for determining the pose information of the image in the three-dimensional reconstruction in the movable platform 600 according to the embodiment of the present invention; for brevity, details are not repeated here.
  • the above exemplarily describes the method, device, storage medium, and movable platform for determining the pose information of the image in the three-dimensional reconstruction according to the embodiments of the present invention.
  • the method, device, storage medium, and movable platform for determining the pose information of the image in the three-dimensional reconstruction according to the embodiments of the present invention, when relocalization of the current frame image fails, obtain the relative pose information of the current frame image with respect to the previous frame image based on the feature matching between the current frame image and the previous frame image, and determine the pose information of the current frame image based on the relative pose information and the sensor pose information on the movable platform where the imaging device that took the image is located, effectively avoiding the problem that three-dimensional reconstruction is impossible when relocalization fails and improving the robustness of real-time three-dimensional reconstruction.
  • the method, device, storage medium, and movable platform for determining the pose information of the image in the three-dimensional reconstruction according to the embodiments of the present invention, when two-dimensional tracking of the current frame image also fails, determine the position information of the movable platform and the attitude information of the pan/tilt measured by the sensor when the current frame image was taken (i.e., the sensor pose information) as the pose information of the current frame image, so that the pose information of the current frame image can be obtained under any circumstances, further improving the robustness of real-time three-dimensional reconstruction.
  • the method, device, storage medium, and movable platform for determining the pose information of the image in three-dimensional reconstruction can also optimize the pose information of the current frame image according to subsequent images, so as to further improve the robustness of the three-dimensional reconstruction.
  • the disclosed device and method may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another device, or some features may be ignored or not implemented.
  • the various component embodiments of the present invention may be implemented by hardware, or by software modules running on one or more processors, or by a combination of them.
  • a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some modules according to the embodiments of the present invention.
  • the present invention can also be implemented as a device program (for example, a computer program and a computer program product) for executing part or all of the methods described herein.
  • Such a program for realizing the present invention may be stored on a computer-readable medium, or may have the form of one or more signals.
  • Such a signal can be downloaded from an Internet website, or provided on a carrier signal, or provided in any other form.


Abstract

A method and device for determining pose information of an image in three-dimensional reconstruction. The method includes: acquiring a current frame image taken by a photographing device carried on a pan/tilt of a movable platform, wherein the movable platform includes a sensor for measuring position information of the movable platform and attitude information of the pan/tilt (S110); when relocalization of the current frame image fails, extracting feature points of the previous frame image and determining feature point matching pairs corresponding to the current frame image and the previous frame image (S120); and, when the feature point matching pairs satisfy a preset condition for successful matching, determining relative pose information of the current frame image with respect to the previous frame image according to the feature point matching pairs, and determining the pose information of the current frame image according to the relative pose information and the position information and attitude information measured by the sensor when the current frame image was taken (S130). The method effectively avoids the problem that three-dimensional reconstruction cannot proceed once relocalization fails, and improves the robustness of real-time three-dimensional reconstruction.

Description

Method and device for determining pose information of an image in three-dimensional reconstruction
Specification
Technical Field
The present invention relates generally to the technical field of three-dimensional reconstruction, and more particularly to a method and device for determining pose information of an image in three-dimensional reconstruction.
Background
At present, real-time three-dimensional reconstruction runs two module algorithms simultaneously: a front-end tracking thread and a back-end mapping thread. An important role of the front-end tracking thread is to determine in real time the pose information of the photographing device at the moment each image is taken; the back-end mapping thread uses that pose information to build the map. If the tracking process cannot provide the corresponding position and attitude, reconstruction fails, so the robustness of real-time three-dimensional reconstruction depends heavily on the robustness of the simultaneous localization and mapping algorithm.
A traditional front-end tracking thread has three stages: initialization, tracking, and relocalization. When tracking of the current frame image fails, the relocalization process is started, that is, a similarity search is performed over all image frames in the global map (for example, key frame images) to find the image frame that best matches the current frame image, and tracking is performed again using the found frame to compute the pose information of the photographing device corresponding to the current frame image. In some cases, however, no qualifying image frame can be found during relocalization and the pose information of the current frame image cannot be determined, with the result that real-time three-dimensional reconstruction cannot continue or the current frame image cannot be used.
Summary of the Invention
The present invention is proposed to solve the above problem. The present invention provides a scheme for determining pose information of an image in three-dimensional reconstruction, which effectively avoids the problem that three-dimensional reconstruction cannot proceed once relocalization fails, and improves the robustness of real-time three-dimensional reconstruction. The scheme proposed by the present invention is briefly described below; more details will be described in the detailed description with reference to the accompanying drawings.
According to one aspect of the present invention, a method for determining pose information of an image in three-dimensional reconstruction is provided. The method includes: acquiring a current frame image taken by a photographing device carried on a pan/tilt of a movable platform, wherein the movable platform includes a sensor for measuring position information of the movable platform and attitude information of the pan/tilt; when relocalization of the current frame image fails, extracting feature points of the previous frame image of the current frame image, and executing a feature point matching algorithm on the feature points of the previous frame image to determine feature point matching pairs corresponding to the current frame image and the previous frame image; and, when the feature point matching pairs satisfy a preset condition for successful matching, determining relative pose information of the current frame image with respect to the previous frame image according to the feature point matching pairs, and determining the pose information of the current frame image according to the relative pose information and the position information and attitude information measured by the sensor when the current frame image was taken.
According to another aspect of the present invention, a device for determining pose information of an image in three-dimensional reconstruction is provided. The device includes a storage device for storing program code and a processor for executing the program code. When executed, the program code is used for: acquiring a current frame image taken by a photographing device carried on a pan/tilt of a movable platform, wherein the movable platform includes a sensor for measuring position information of the movable platform and attitude information of the pan/tilt; when relocalization of the current frame image fails, extracting feature points of the previous frame image of the current frame image, and executing a feature point matching algorithm on the feature points of the previous frame image to determine feature point matching pairs corresponding to the current frame image and the previous frame image; and, when the feature point matching pairs satisfy a preset condition for successful matching, determining relative pose information of the current frame image with respect to the previous frame image according to the feature point matching pairs, and determining the pose information of the current frame image according to the relative pose information and the position information and attitude information measured by the sensor when the current frame image was taken.
According to yet another aspect of the present invention, a movable platform is provided, which includes the above device for determining pose information of an image in three-dimensional reconstruction.
According to still another aspect of the present invention, a storage medium is provided, on which a computer program is stored; when run, the computer program executes the above method for determining pose information of an image in three-dimensional reconstruction.
The method, device, movable platform, and storage medium for determining pose information of an image in three-dimensional reconstruction according to embodiments of the present invention, when relocalization of the current frame image fails, obtain the relative pose information of the current frame image with respect to the previous frame image based on feature matching between the two images, and determine the pose information of the current frame image based on that relative pose information and the sensor pose information on the movable platform carrying the photographing device that took the images. This effectively avoids the problem that three-dimensional reconstruction cannot proceed once relocalization fails, and improves the robustness of real-time three-dimensional reconstruction.
Brief Description of the Drawings
FIG. 1 shows a schematic flowchart of a method for determining pose information of an image in three-dimensional reconstruction according to an embodiment of the present invention;
FIG. 2 shows a schematic flowchart of a method for determining pose information of an image in three-dimensional reconstruction according to another embodiment of the present invention;
FIG. 3 shows a schematic flowchart of a method for determining pose information of an image in three-dimensional reconstruction according to yet another embodiment of the present invention;
FIG. 4 shows a schematic flowchart of a method for determining pose information of an image in three-dimensional reconstruction according to still another embodiment of the present invention;
FIG. 5 shows a schematic block diagram of a device for determining pose information of an image in three-dimensional reconstruction according to an embodiment of the present invention; and
FIG. 6 shows a schematic block diagram of a movable platform according to an embodiment of the present invention.
Detailed Description
To make the objectives, technical solutions, and advantages of the present invention clearer, example embodiments of the present invention are described in detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention, and it should be understood that the present invention is not limited by the example embodiments described here. Based on the embodiments of the present invention described herein, all other embodiments obtained by those skilled in the art without creative effort shall fall within the protection scope of the present invention.
In the following description, numerous specific details are given to provide a more thorough understanding of the present invention. However, it will be obvious to those skilled in the art that the present invention can be implemented without one or more of these details. In other examples, some technical features well known in the art are not described in order to avoid confusion with the present invention.
It should be understood that the present invention can be implemented in different forms and should not be construed as limited to the embodiments presented here. On the contrary, these embodiments are provided so that the disclosure will be thorough and complete and will fully convey the scope of the present invention to those skilled in the art.
The terms used here are only for describing specific embodiments and are not intended to limit the present invention. As used here, the singular forms "a", "an", and "the/said" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the terms "composed of" and/or "including", when used in this specification, specify the presence of the stated features, integers, steps, operations, elements, and/or components, but do not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups. As used here, the term "and/or" includes any and all combinations of the associated listed items.
For a thorough understanding of the present invention, detailed steps and structures are presented in the following description to explain the technical solution proposed by the present invention. Preferred embodiments of the present invention are described in detail below; however, in addition to these detailed descriptions, the present invention may have other implementations.
A method for determining pose information of an image in three-dimensional reconstruction according to an embodiment of the present invention is described below with reference to FIG. 1. FIG. 1 shows a schematic flowchart of a method 100 for determining pose information of an image in three-dimensional reconstruction according to an embodiment of the present invention. As shown in FIG. 1, the method 100 may include the following steps:
In step S110, a current frame image taken by a photographing device carried on a pan/tilt of a movable platform is acquired, wherein the movable platform includes a sensor for measuring position information of the movable platform and attitude information of the pan/tilt.
In the embodiment of the present invention, the image acquired in step S110 is taken by a photographing device carried on the pan/tilt of a movable platform (such as an unmanned aerial vehicle). To perform three-dimensional reconstruction based on the captured images, the pose information of each frame image needs to be acquired; this pose information is the position information and attitude information of the photographing device when it takes that frame image. Generally, after the current frame image is acquired, tracking may first be performed on the current frame image; if tracking fails, the current frame image is relocalized. The tracking and relocalization processes are described below.
The tracking process may be as follows. For those feature points of the previous frame image from which three-dimensional points (i.e., map points in space) with known three-dimensional positions have been obtained, corresponding matched feature points are determined in the current frame image, i.e., feature points of the same name are found in the current frame image; this is a 3D-2D tracking process. One implementation is to perform feature extraction on the current frame image to obtain its feature points, and then match the feature points of the previous frame image from which three-dimensional points have been obtained against the feature points of the current frame image. Another implementation is to apply the KLT algorithm to the feature points of the previous frame image from which the three-dimensional positions of three-dimensional points have been obtained, to determine corresponding matched feature points in the current frame image. On the one hand, if the obtained feature point matching pairs satisfy the preset condition for successful matching (for example, the number of matching pairs reaches a predetermined threshold), the three-dimensional points obtained based on the previous frame image are projected onto the current frame image according to an assumed pose of the current frame image (which may, for example, be predicted from the pose information of the previous frame image) to obtain two-dimensional points corresponding to the three-dimensional points. The visual reprojection error is then computed based on the matching relationship between these two-dimensional points and the feature points of the current frame image, and the pose information of the current frame image is computed by minimizing the reprojection error. On the other hand, if the obtained matching pairs do not satisfy the preset condition for successful matching (for example, the number of matching pairs does not reach the predetermined threshold), tracking of the current frame image has failed, and the current frame image is relocalized. It can be understood that a three-dimensional point obtained from the feature points of the previous frame image is observed jointly by at least the previous frame image and the frame before the previous frame image. If a feature point of the previous frame image associated with a three-dimensional point finds a matched feature point in the current frame image, then the current frame image, the previous frame image, and the frame before the previous frame image are required to jointly observe that three-dimensional point. In some cases, however, these three frames cannot jointly observe the three-dimensional point, which causes tracking to fail. The relocalization process may be as follows: a similarity search is performed over the images before the previous frame image to find the frame most similar to the current frame image, and the above tracking process is performed again on the current frame image based on the found frame to compute the pose information of the current frame image. If no frame similar to the current frame image is found, relocalization has failed. In traditional schemes, if relocalization fails, the pose information of the current frame image cannot be given, and three-dimensional reconstruction cannot continue.
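The projection and reprojection-error computation at the heart of the 3D-2D tracking step above can be illustrated numerically with a pinhole camera model. All intrinsics and coordinates below are made-up illustration values; minimizing this error over the pose (for example with an iterative least-squares solver) is what yields the frame's pose, which is not shown here.

```python
import math

def project(point_w, R, t, fx, fy, cx, cy):
    """Project a world point into pixel coordinates under pose (R, t)."""
    # camera-frame coordinates: X_c = R @ X_w + t
    xc = sum(R[0][j] * point_w[j] for j in range(3)) + t[0]
    yc = sum(R[1][j] * point_w[j] for j in range(3)) + t[1]
    zc = sum(R[2][j] * point_w[j] for j in range(3)) + t[2]
    return fx * xc / zc + cx, fy * yc / zc + cy

def reprojection_error(point_w, observed_px, R, t, fx, fy, cx, cy):
    """Pixel distance between the projected map point and its matched feature."""
    u, v = project(point_w, R, t, fx, fy, cx, cy)
    return math.hypot(u - observed_px[0], v - observed_px[1])
```

For example, with identity rotation, zero translation, focal lengths of 500 and principal point (320, 240), the map point (1, 0, 2) projects to pixel (570, 240), and an observation at (573, 244) gives an error of 5 pixels.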
In the scheme of the present application, if relocalization fails there is still a corresponding processing scheme, as the following steps describe.
In step S120, when relocalization of the current frame image fails, feature points of the previous frame image of the current frame image are extracted, and a feature point matching algorithm is executed on the feature points of the previous frame image to determine feature point matching pairs corresponding to the current frame image and the previous frame image.
In step S130, when the feature point matching pairs satisfy a preset condition for successful matching, relative pose information of the current frame image with respect to the previous frame image is determined according to the feature point matching pairs, and the pose information of the current frame image is determined according to the relative pose information and the position information and attitude information measured by the sensor when the current frame image was taken.
In the embodiment of the present invention, when relocalization of the current frame image fails, the current frame image may again be matched against the feature points of the previous frame image. Unlike the matching in the tracking process described above, the feature point matching here does not match only those feature points of the previous frame image from which three-dimensional points have been obtained against the feature points of the current frame image; instead, all feature points of the previous frame image are matched against the feature points of the current frame image, i.e., 2D-2D tracking or two-dimensional tracking. To distinguish it from the tracking process described above, this operation after relocalization failure is called two-dimensional tracking. That is, in the embodiment of the present invention, when relocalization of the current frame image fails, two-dimensional tracking may be performed on the current frame image as follows: all feature points of the previous frame image are matched against the feature points of the current frame image, and a feature point matching algorithm is executed on the feature points of the previous frame image to determine the feature point matching pairs corresponding to the current frame image and the previous frame image. If the feature point matching pairs corresponding to the current frame image and the previous frame image satisfy the preset condition for successful matching (for example, the number of matching pairs reaches a predetermined threshold), the relative pose information of the current frame image with respect to the previous frame image can be determined according to the matching pairs, and the pose information of the current frame image can be determined according to the relative pose information and the position information of the movable platform and attitude information of the pan/tilt measured, when the current frame image was taken, by the sensor included on the movable platform for measuring the position information of the movable platform and the attitude information of the pan/tilt (for convenience of description, the position information of the movable platform and the attitude information of the pan/tilt measured by the sensor are hereinafter referred to as the sensor pose information). Here, the sensor may include a real-time kinematic (RTK) carrier-phase differential positioning device for measuring the position information of the movable platform. Based on the above description, tracking in the prior art requires the current frame image, the previous frame image, and the frame before the previous frame image to jointly observe the three-dimensional point. In the embodiment of the present invention, when relocalization of the current frame image fails, the method 100 obtains the relative pose information of the current frame image with respect to the previous frame image based on the feature matching between the two images, and determines the pose information of the current frame image based on that relative pose information and the sensor pose information on the movable platform carrying the imaging device that took the images. The embodiment of the present invention only requires that the current frame image and the previous frame image jointly observe three-dimensional points, which effectively avoids the problem that three-dimensional reconstruction cannot proceed once relocalization fails, and improves the robustness of real-time three-dimensional reconstruction.
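One way to picture combining the 2D-2D result with the sensor measurements: two-view epipolar geometry yields the relative rotation and only a unit-scale translation direction, and the sensor-measured positions can supply the missing metric scale. The patent does not fix the fusion formula, so the sketch below is a hedged illustration only; in particular, `t_dir_world` (the translation direction expressed in the world frame) and the function name are assumptions.

```python
import numpy as np

def compose_current_pose(R_prev, p_prev, R_rel, t_dir_world, p_sensor_curr):
    """Chain the relative rotation onto the previous frame's orientation and
    recover the translation scale from the sensor-measured positions."""
    R_curr = R_rel @ R_prev                          # chained orientation
    scale = np.linalg.norm(p_sensor_curr - p_prev)   # metric scale from RTK positions
    p_curr = p_prev + scale * t_dir_world            # scaled camera position
    return R_curr, p_curr
```

With identity rotations, a previous position at the origin, a translation direction along x, and a sensor position of (2, 0, 0), the sketch recovers a current position of (2, 0, 0).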
A method for determining pose information of an image in three-dimensional reconstruction according to another embodiment of the present invention is described below with reference to FIG. 2. FIG. 2 shows a schematic flowchart of a method 200 for determining pose information of an image in three-dimensional reconstruction according to another embodiment of the present invention. As shown in FIG. 2, the method 200 may include the following steps:
In step S210, a current frame image taken by a photographing device carried on a pan/tilt of a movable platform is acquired, wherein the movable platform includes a sensor for measuring position information of the movable platform and attitude information of the pan/tilt.
In step S220, when relocalization of the current frame image fails, feature points of the previous frame image of the current frame image are extracted, and a feature point matching algorithm is executed on the feature points of the previous frame image to determine feature point matching pairs corresponding to the current frame image and the previous frame image.
In step S230, when the feature point matching pairs satisfy a preset condition for successful matching, relative pose information of the current frame image with respect to the previous frame image is determined according to the feature point matching pairs, and the pose information of the current frame image is determined according to the relative pose information and the position information and attitude information measured by the sensor when the current frame image was taken.
In step S240, when the feature point matching pairs do not satisfy the preset condition for successful matching, the position information and attitude information measured by the sensor when the current frame image was taken are determined as the pose information of the current frame image.
Steps S210 to S230 of the method 200 described with reference to FIG. 2 are similar to steps S110 to S130 of the method 100 described with reference to FIG. 1 and, for brevity, are not repeated here. The difference from the method 100 described with reference to FIG. 1 is that the method 200 described with reference to FIG. 2 further includes step S240: when the feature point matching pairs corresponding to the current frame image and the previous frame image do not satisfy the preset condition for successful matching (for example, the number of matching pairs obtained does not reach the predetermined threshold), the position information of the movable platform and the attitude information of the pan/tilt measured by the sensor when the current frame image was taken (i.e., the sensor pose information) are determined as the pose information of the current frame image.
If the feature point matching pairs corresponding to the current frame image and the previous frame image do not satisfy the preset condition for successful matching, it means that two-dimensional tracking of the current frame image has also failed. In the embodiment of the present invention, when two-dimensional tracking of the current frame image also fails, the position information of the movable platform and the attitude information of the pan/tilt measured by the sensor when the current frame image was taken (i.e., the sensor pose information) may be determined as the pose information of the current frame image. Since the position information of the movable platform and the attitude information of the pan/tilt measured by the sensor are always available, the method 200 for determining pose information of an image in three-dimensional reconstruction according to the embodiment of the present invention can obtain the pose information of the current frame image under any circumstances.
Based on the above description, the method 200 for determining pose information of an image in three-dimensional reconstruction according to the embodiment of the present invention, when relocalization of the current frame image fails, obtains the relative pose information of the current frame image with respect to the previous frame image based on feature matching between the current frame image and the previous frame image, and determines the pose information of the current frame image based on that relative pose information and the sensor pose information on the movable platform carrying the photographing device that took the images, effectively avoiding the problem that three-dimensional reconstruction cannot proceed once relocalization fails and improving the robustness of real-time three-dimensional reconstruction. In addition, when two-dimensional tracking of the current frame image also fails, the method 200 determines the position information of the movable platform and the attitude information of the pan/tilt measured by the sensor when the current frame image was taken (i.e., the sensor pose information) as the pose information of the current frame image, so that the pose information of the current frame image can be obtained under any circumstances, further improving the robustness of real-time three-dimensional reconstruction.
In a further embodiment of the present invention, after the pose information of the current frame image is acquired, the next frame image may then be acquired; depending on how the pose information of the current frame image was obtained, the next frame image may be processed differently.
A method for determining pose information of an image in three-dimensional reconstruction according to yet another embodiment of the present invention is described below with reference to FIG. 3. FIG. 3 shows a schematic flowchart of a method 300 for determining pose information of an image in three-dimensional reconstruction according to yet another embodiment of the present invention. As shown in FIG. 3, the method 300 may include the following steps:
In step S310, a current frame image taken by a photographing device carried on a pan/tilt of a movable platform is acquired, wherein the movable platform includes a sensor for measuring position information of the movable platform and attitude information of the pan/tilt.
In step S320, when relocalization of the current frame image fails, feature points of the previous frame image of the current frame image are extracted, and a feature point matching algorithm is executed on the feature points of the previous frame image to determine feature point matching pairs corresponding to the current frame image and the previous frame image.
In step S330, when the feature point matching pairs satisfy a preset condition for successful matching, relative pose information of the current frame image with respect to the previous frame image is determined according to the feature point matching pairs, and the pose information of the current frame image is determined according to the relative pose information and the position information and attitude information measured by the sensor when the current frame image was taken.
In step S340, the next frame image of the current frame image captured by the photographing device is acquired, feature point matching pairs corresponding to the next frame image and the current frame image are determined, and the pose information of the next frame image is determined according to the feature point matching pairs of the next frame image and the current frame image.
Steps S310 to S330 of the method 300 described with reference to FIG. 3 are similar to steps S110 to S130 of the method 100 described with reference to FIG. 1 and, for brevity, are not repeated here. The difference from the method 100 described with reference to FIG. 1 is that the method 300 described with reference to FIG. 3 further includes step S340: the next frame image is acquired, feature point matching pairs corresponding to the next frame image and the current frame image are determined, and the pose information of the next frame image is determined according to those matching pairs. Here, it should be noted that, throughout this document, the current frame image is the i-th frame image, the previous frame image is the (i-1)-th frame image, and the next frame image is the (i+1)-th frame image, where i is a natural number.
In this embodiment, after two-dimensional tracking of the current frame image succeeds, the pose information of the current frame image is obtained, so three-dimensional points can be reconstructed based on the pose information of the current frame image and the feature point matching pairs of the current frame image and the previous frame image. On this basis, after the next frame image of the current frame image is acquired, the next frame image can be tracked: feature extraction is performed on the next frame image to obtain its feature points, and the feature points of the current frame image from which three-dimensional points have been obtained are matched against the feature points of the next frame image. On the one hand, if the obtained matching pairs satisfy the preset condition for successful matching (for example, the number of matching pairs reaches a predetermined threshold), the three-dimensional points obtained based on the current frame image are projected onto the next frame image according to an assumed pose of the next frame image (which may, for example, be derived from the pose information of the current frame image and the moving speed of the movable platform) to obtain two-dimensional points corresponding to the three-dimensional points; these two-dimensional points are then matched against the feature points of the next frame image, the visual reprojection error is computed based on the matching relationship, and the pose information of the next frame image is computed by minimizing the reprojection error. On the other hand, if the obtained matching pairs do not satisfy the preset condition for successful matching (for example, the number of matching pairs does not reach the predetermined threshold), tracking of the next frame image has failed, and the next frame image is relocalized. If relocalization of the next frame image fails, the above two-dimensional tracking is performed on the next frame image; if two-dimensional tracking also fails, the sensor pose information is used as the pose information of the next frame image.
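Once the poses of two frames are known, a matched feature pair can be lifted to a new three-dimensional map point, as in the reconstruction step described above. The linear (DLT) triangulation below is a generic textbook method offered only as an illustration of that step; `P1` and `P2` are the 3x4 projection matrices of the two frames, and all numbers in the usage note are made up.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one point from two pixel observations."""
    A = np.array([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)   # the solution is the null vector of A
    X = vt[-1]
    return X[:3] / X[3]           # homogeneous -> Euclidean
```

For instance, with a camera of focal length 500 and principal point (320, 240), a second camera translated 0.5 units along x, and pixel observations (370, 265) and (245, 265), the recovered point is approximately (0.2, 0.1, 2.0).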
A method for determining pose information of an image in three-dimensional reconstruction according to still another embodiment of the present invention is described below with reference to FIG. 4. FIG. 4 shows a schematic flowchart of a method 400 for determining pose information of an image in three-dimensional reconstruction according to still another embodiment of the present invention. As shown in FIG. 4, the method 400 may include the following steps:
In step S410, a current frame image taken by a photographing device carried on a pan/tilt of a movable platform is acquired, wherein the movable platform includes a sensor for measuring position information of the movable platform and attitude information of the pan/tilt.
In step S420, when relocalization of the current frame image fails, feature points of the previous frame image of the current frame image are extracted, and a feature point matching algorithm is executed on the feature points of the previous frame image to determine feature point matching pairs corresponding to the current frame image and the previous frame image.
In step S430, when the feature point matching pairs satisfy a preset condition for successful matching, relative pose information of the current frame image with respect to the previous frame image is determined according to the feature point matching pairs, and the pose information of the current frame image is determined according to the relative pose information and the position information and attitude information measured by the sensor when the current frame image was taken.
In step S440, when the feature point matching pairs do not satisfy the preset condition for successful matching, the position information and attitude information measured by the sensor when the current frame image was taken are determined as the pose information of the current frame image.
In step S450, the next frame image is acquired, and the next frame image is relocalized.
Steps S410 to S440 of the method 400 described with reference to FIG. 4 are similar to steps S210 to S240 of the method 200 described with reference to FIG. 2 and, for brevity, are not repeated here. The difference from the method 200 described with reference to FIG. 2 is that the method 400 described with reference to FIG. 4 further includes step S450: the next frame image is acquired and relocalized.
In this embodiment, after two-dimensional tracking of the current frame image fails, the sensor pose information is determined as the pose information of the current frame image. The failure of two-dimensional tracking means that the feature point matching pairs between the current frame image and the previous frame image are not sufficient to establish new three-dimensional points; therefore, after the next frame image is acquired it cannot be tracked and can only be relocalized, that is, a similarity search is performed over the images before the current frame image to find the frame most similar to the next frame image, and the above tracking process is performed again on the next frame image based on the found frame to compute the pose information of the next frame image. If no frame similar to the next frame image is found, relocalization has failed. If relocalization of the next frame image fails, two-dimensional tracking is performed on the next frame image; if two-dimensional tracking also fails, the sensor pose information is used as the pose information of the next frame image.
The above describes, with reference to FIGS. 1 and 2, how the methods for determining pose information of an image in three-dimensional reconstruction according to embodiments of the present invention determine the pose information of the current frame image, and, with reference to FIGS. 3 and 4, how the pose information of the next frame image is determined under the different ways of determining the pose information of the current frame image. The operations after the pose information of the current frame is determined are described below.
In a further embodiment of the present invention, if the pose information of the current frame image is determined according to the relative pose information between the current frame image and the previous frame image and the position information of the movable platform and attitude information of the pan/tilt measured by the sensor when the current frame image was taken (i.e., the pose information of the current frame image is determined by two-dimensional tracking), then the map points corresponding to the feature point matching pairs of the current frame image and the previous frame image can be determined based on the pose information of the current frame image (i.e., three-dimensional points are reconstructed). If the pose information of the current frame image is determined to be the position information of the movable platform and the attitude information of the pan/tilt measured by the sensor when the current frame image was taken (i.e., the pose information of the current frame image is determined to be the sensor pose information), then, because the accuracy of the sensor pose information is limited, the pose information of the current frame image can be optimized according to images subsequent to the current frame image.
Specifically, a subsequent image taken by the photographing device after the current frame image may be acquired, the feature point matching pairs corresponding to the subsequent image and the current frame image may be determined, and the pose information of the current frame image may be optimized according to those matching pairs. Further, the subsequent images may be all images after the current frame image, or only some of the images after the current frame image (such as certain key frame images). Further, after the subsequent image is acquired, the pose information of the subsequent image may first be obtained, and a preset area range may be determined according to the pose information of the subsequent image. If the current frame image was shot within the preset area range, the feature point matching pairs corresponding to the subsequent image and the current frame image can be determined; if the current frame image was not shot within the preset area range, it may not be possible to determine such matching pairs, or the matching pairs determined may not satisfy the preset condition for successful matching. In short, the pose information of the current frame image can be optimized according to subsequent images, to further improve the robustness of three-dimensional reconstruction.
The above exemplarily describes the methods for determining pose information of an image in three-dimensional reconstruction according to embodiments of the present invention. Based on the above description, the methods according to embodiments of the present invention, when relocalization of the current frame image fails, obtain the relative pose information of the current frame image with respect to the previous frame image based on feature matching between the two images, and determine the pose information of the current frame image based on that relative pose information and the sensor pose information on the movable platform carrying the photographing device that took the images, effectively avoiding the problem that three-dimensional reconstruction cannot proceed once relocalization fails and improving the robustness of real-time three-dimensional reconstruction. In addition, when two-dimensional tracking of the current frame image also fails, the methods determine the position information of the movable platform and attitude information of the pan/tilt measured by the sensor when the current frame image was taken (i.e., the sensor pose information) as the pose information of the current frame image, so that the pose information of the current frame image can be obtained under any circumstances, further improving the robustness of real-time three-dimensional reconstruction. Further, the methods according to embodiments of the present invention can also optimize the pose information of the current frame image according to subsequent images, to further improve the robustness of three-dimensional reconstruction.
A device for determining pose information of an image in three-dimensional reconstruction provided according to another aspect of the present invention is described below with reference to FIG. 5. FIG. 5 shows a schematic block diagram of a device 500 for determining pose information of an image in three-dimensional reconstruction according to an embodiment of the present invention. The device 500 includes a storage device 510 and a processor 520.
The storage device 510 stores program code for implementing the corresponding steps of the method for determining pose information of an image in three-dimensional reconstruction according to embodiments of the present invention. The processor 520 is configured to run the program code stored in the storage device 510 to execute the corresponding steps of that method. Those skilled in the art can understand the detailed operation of the processor 520 in the device 500 with reference to FIGS. 1 to 4 in combination with the foregoing description thereof; for brevity, only the main operations of the processor 520 are briefly described here.
In one embodiment, when the program is run by the processor 520, the device 500 is caused to perform the following steps: acquiring a current frame image taken by a photographing device carried on a pan/tilt of a movable platform, wherein the movable platform includes a sensor for measuring position information of the movable platform and attitude information of the pan/tilt; when relocalization of the current frame image fails, extracting feature points of the previous frame image of the current frame image, and executing a feature point matching algorithm on the feature points of the previous frame image to determine feature point matching pairs corresponding to the current frame image and the previous frame image; and, when the feature point matching pairs satisfy a preset condition for successful matching, determining relative pose information of the current frame image with respect to the previous frame image according to the feature point matching pairs, and determining the pose information of the current frame image according to the relative pose information and the position information and attitude information measured by the sensor when the current frame image was taken.
In one embodiment, when the program is run by the processor 520, the device 500 is further caused to perform the following step: when the feature point matching pairs do not satisfy the preset condition for successful matching, determining the position information and attitude information measured by the sensor when the current frame image was taken as the pose information of the current frame image.
In one embodiment, when the program is run by the processor 520, the device 500 is further caused to perform the following steps: acquiring the next frame image of the current frame image captured by the photographing device; and, when the pose information of the current frame image is the position information and attitude information measured by the sensor when the current frame image was taken, relocalizing the next frame image.
In one embodiment, when the program is run by the processor 520, the device 500 is further caused to perform the following steps: acquiring the next frame image of the current frame image captured by the photographing device; when the pose information of the current frame image is determined according to the relative pose information and the position information and attitude information measured by the sensor when the current frame image was taken, determining feature point matching pairs corresponding to the next frame image and the current frame image; and determining the pose information of the next frame image according to the feature point matching pairs of the next frame image and the current frame image.
In one embodiment, when the pose information of the current frame image is determined according to the relative pose information and the position information and attitude information measured by the sensor when the current frame image was taken, the pose information of the current frame image is used to determine the map points corresponding to the feature point matching pairs of the current frame image and the previous frame image.
In one embodiment, when the pose information of the current frame image is the position information and attitude information measured by the sensor when the current frame image was taken, the device 500 is further caused, when the program is run by the processor 520, to perform the following steps: acquiring a subsequent image taken by the photographing device after the current frame image, and determining feature point matching pairs corresponding to the subsequent image and the current frame image; and optimizing the pose information of the current frame image according to the feature point matching pairs corresponding to the subsequent image and the current frame image.
In one embodiment, when the program is run by the processor 520, the device 500 is further caused to perform the following steps: before determining the feature point matching pairs corresponding to the subsequent image and the current frame image, obtaining the pose information of the subsequent image and determining a preset area range according to the pose information of the subsequent image; and determining whether the current frame image was shot within the preset area range, and, if it is determined that the current frame image was shot within the preset area range, determining the feature point matching pairs corresponding to the subsequent image and the current frame image.
In one embodiment, the sensor includes a real-time kinematic carrier-phase differential positioning device for measuring the position information of the movable platform.
In addition, according to an embodiment of the present invention, a storage medium is further provided, on which program instructions are stored; when run by a computer or processor, the program instructions are used to execute the corresponding steps of the method for determining pose information of an image in three-dimensional reconstruction according to embodiments of the present invention. The storage medium may include, for example, a memory card of a smart phone, a storage component of a tablet computer, a hard disk of a personal computer, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a portable compact disc read-only memory (CD-ROM), a USB memory, or any combination of the above storage media. The computer-readable storage medium may be any combination of one or more computer-readable storage media.
In one embodiment, when run by a computer, the computer program instructions can execute the method for determining pose information of an image in three-dimensional reconstruction according to embodiments of the present invention.
In one embodiment, when run by a computer or processor, the computer program instructions cause the computer or processor to perform the following steps: acquiring a current frame image taken by a photographing device carried on a pan/tilt of a movable platform, wherein the movable platform includes a sensor for measuring position information of the movable platform and attitude information of the pan/tilt; when relocalization of the current frame image fails, extracting feature points of the previous frame image of the current frame image, and executing a feature point matching algorithm on the feature points of the previous frame image to determine feature point matching pairs corresponding to the current frame image and the previous frame image; and, when the feature point matching pairs satisfy a preset condition for successful matching, determining relative pose information of the current frame image with respect to the previous frame image according to the feature point matching pairs, and determining the pose information of the current frame image according to the relative pose information and the position information and attitude information measured by the sensor when the current frame image was taken.
In one embodiment, when run by a computer or processor, the computer program instructions further cause the computer or processor to perform the following step: when the feature point matching pairs do not satisfy the preset condition for successful matching, determining the position information and attitude information measured by the sensor when the current frame image was taken as the pose information of the current frame image.
In one embodiment, when run by a computer or processor, the computer program instructions further cause the computer or processor to perform the following steps: acquiring the next frame image of the current frame image captured by the photographing device; and, when the pose information of the current frame image is the position information and attitude information measured by the sensor when the current frame image was taken, relocalizing the next frame image.
In one embodiment, when run by a computer or processor, the computer program instructions further cause the computer or processor to perform the following steps: acquiring the next frame image of the current frame image captured by the photographing device; when the pose information of the current frame image is determined according to the relative pose information and the position information and attitude information measured by the sensor when the current frame image was taken, determining feature point matching pairs corresponding to the next frame image and the current frame image; and determining the pose information of the next frame image according to the feature point matching pairs of the next frame image and the current frame image.
In one embodiment, when the pose information of the current frame image is determined according to the relative pose information and the position information and attitude information measured by the sensor when the current frame image was taken, the pose information of the current frame image is used to determine the map points corresponding to the feature point matching pairs of the current frame image and the previous frame image.
In one embodiment, when the pose information of the current frame image is the position information and attitude information measured by the sensor when the current frame image was taken, the computer program instructions, when run by a computer or processor, further cause the computer or processor to perform the following steps: acquiring a subsequent image taken by the photographing device after the current frame image, and determining feature point matching pairs corresponding to the subsequent image and the current frame image; and optimizing the pose information of the current frame image according to the feature point matching pairs corresponding to the subsequent image and the current frame image.
In one embodiment, when run by a computer or processor, the computer program instructions further cause the computer or processor to perform the following steps: before determining the feature point matching pairs corresponding to the subsequent image and the current frame image, obtaining the pose information of the subsequent image and determining a preset area range according to the pose information of the subsequent image; and determining whether the current frame image was shot within the preset area range, and, if it is determined that the current frame image was shot within the preset area range, determining the feature point matching pairs corresponding to the subsequent image and the current frame image.
In one embodiment, the sensor includes a real-time kinematic carrier-phase differential positioning device for measuring the position information of the movable platform.
In addition, according to an embodiment of the present invention, a movable platform is further provided, which may include the aforementioned apparatus for determining pose information of images in three-dimensional reconstruction according to an embodiment of the present invention. A movable platform provided according to another aspect of the present invention is described below with reference to FIG. 6. FIG. 6 shows a schematic block diagram of a movable platform 600 according to an embodiment of the present invention. As shown in FIG. 6, the movable platform 600 includes a gimbal 610, a photographing device 620, a sensor 630, and an apparatus 640 for determining pose information of images in three-dimensional reconstruction. The photographing device 620 is carried on the gimbal 610 of the movable platform 600 and captures images of a target survey area while the movable platform 600 moves. The apparatus 640 is configured to determine the pose information of each frame image based on the images captured by the photographing device 620 and the position information of the movable platform 600 and the attitude information of the gimbal 610 measured by the sensor 630. Those skilled in the art can understand the detailed operation of the apparatus 640 in the movable platform 600 with reference to FIG. 5 and the foregoing description thereof; for brevity, details are not repeated here.
The method, apparatus, storage medium, and movable platform for determining pose information of images in three-dimensional reconstruction according to embodiments of the present invention have been described above by way of example. Based on the above description, when relocalization of the current frame image fails, the method, apparatus, storage medium, and movable platform according to embodiments of the present invention obtain the relative pose information of the current frame image with respect to the previous frame image through feature matching between the current frame image and the previous frame image, and determine the pose information of the current frame image based on that relative pose information and the sensor pose information from the movable platform on which the photographing device is located. This effectively avoids the problem that three-dimensional reconstruction becomes impossible once relocalization fails, and improves the robustness of real-time three-dimensional reconstruction. In addition, when two-dimensional tracking of the current frame image also fails, the method, apparatus, storage medium, and movable platform according to embodiments of the present invention determine the position information of the movable platform and the attitude information of the gimbal measured by the sensor when the current frame image was captured (i.e., the sensor pose information) as the pose information of the current frame image, so that the pose information of the current frame image can be obtained under any circumstances, further improving the robustness of real-time three-dimensional reconstruction. Furthermore, the method, apparatus, storage medium, and movable platform according to embodiments of the present invention may also optimize the pose information of the current frame image based on subsequent images, to further improve the robustness of three-dimensional reconstruction.
Although example embodiments have been described herein with reference to the accompanying drawings, it should be understood that the above example embodiments are merely illustrative and are not intended to limit the scope of the present invention thereto. Those of ordinary skill in the art may make various changes and modifications therein without departing from the scope and spirit of the present invention. All such changes and modifications are intended to be included within the scope of the present invention as claimed in the appended claims.

Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the particular application and design constraints of the technical solution. Skilled artisans may implement the described functions in different ways for each particular application, but such implementations should not be considered beyond the scope of the present invention.

In the several embodiments provided in this application, it should be understood that the disclosed device and method may be implemented in other ways. For example, the device embodiments described above are merely illustrative; for instance, the division of the units is only a logical functional division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another device, or some features may be omitted or not performed.

Numerous specific details are set forth in the description provided herein. However, it is understood that embodiments of the present invention may be practiced without these specific details. In some instances, well-known methods, structures, and techniques have not been shown in detail so as not to obscure the understanding of this description.

Similarly, it should be understood that, in order to streamline the present disclosure and aid understanding of one or more of the various inventive aspects, various features of the present invention are sometimes grouped together into a single embodiment, figure, or description thereof in the description of exemplary embodiments of the present invention. However, the method of this disclosure is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the corresponding claims reflect, the inventive point lies in that the corresponding technical problem can be solved with fewer than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of the present invention.

Those skilled in the art will understand that all features disclosed in this specification (including the accompanying claims, abstract, and drawings), and all processes or units of any method or device so disclosed, may be combined in any combination, except where such features are mutually exclusive. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract, and drawings) may be replaced by an alternative feature serving the same, equivalent, or similar purpose.

Furthermore, those skilled in the art will understand that, although some embodiments described herein include certain features included in other embodiments but not other features, combinations of features of different embodiments are meant to be within the scope of the present invention and form different embodiments. For example, in the claims, any one of the claimed embodiments may be used in any combination.

The various component embodiments of the present invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art should understand that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some modules according to embodiments of the present invention. The present invention may also be implemented as a device program (e.g., a computer program and a computer program product) for performing part or all of the methods described herein. Such a program implementing the present invention may be stored on a computer-readable medium, or may take the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.

It should be noted that the above embodiments illustrate rather than limit the present invention, and those skilled in the art may devise alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The present invention may be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a unit claim enumerating several devices, several of these devices may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not indicate any ordering; these words may be interpreted as names.

The above is only a specific implementation of the present invention or a description thereof, and the protection scope of the present invention is not limited thereto. Any person skilled in the art can readily conceive of changes or substitutions within the technical scope disclosed by the present invention, and such changes or substitutions shall all be covered within the protection scope of the present invention. The protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (18)

  1. A method for determining pose information of images in three-dimensional reconstruction, characterized in that the method comprises:
    acquiring a current frame image captured by a photographing device carried on a gimbal of a movable platform, wherein the movable platform comprises a sensor for measuring position information of the movable platform and attitude information of the gimbal;
    when relocalization of the current frame image fails, extracting feature points of a previous frame image of the current frame image, and performing a feature point matching algorithm based on the feature points of the previous frame image to determine matched feature point pairs between the current frame image and the previous frame image; and
    when the matched feature point pairs satisfy a preset matching-success condition, determining relative pose information of the current frame image with respect to the previous frame image based on the matched feature point pairs, and determining pose information of the current frame image based on the relative pose information and the position information and the attitude information measured by the sensor when the current frame image was captured.
  2. The method according to claim 1, characterized in that the method further comprises:
    when the matched feature point pairs do not satisfy the preset matching-success condition, determining the position information and the attitude information measured by the sensor when the current frame image was captured as the pose information of the current frame image.
  3. The method according to claim 2, characterized in that the method further comprises:
    acquiring a next frame image of the current frame image captured by the photographing device; and
    when the pose information of the current frame image is the position information and the attitude information measured by the sensor when the current frame image was captured, relocalizing the next frame image.
  4. The method according to any one of claims 1-3, characterized in that the method further comprises:
    acquiring a next frame image of the current frame image captured by the photographing device;
    when the pose information of the current frame image was determined based on the relative pose information and the position information and the attitude information measured by the sensor when the current frame image was captured, determining matched feature point pairs between the next frame image and the current frame image; and
    determining pose information of the next frame image based on the matched feature point pairs between the next frame image and the current frame image.
  5. The method according to any one of claims 1-4, characterized in that, when the pose information of the current frame image was determined based on the relative pose information and the position information and the attitude information measured by the sensor when the current frame image was captured, the pose information of the current frame image is used to determine map points corresponding to the matched feature point pairs between the current frame image and the previous frame image.
  6. The method according to any one of claims 1-4, characterized in that, when the pose information of the current frame image is the position information and the attitude information measured by the sensor when the current frame image was captured, the method further comprises:
    acquiring subsequent images captured by the photographing device after the current frame image, and determining matched feature point pairs between the subsequent images and the current frame image; and
    optimizing the pose information of the current frame image based on the matched feature point pairs between the subsequent images and the current frame image.
  7. The method according to claim 6, characterized in that the method further comprises:
    before determining the matched feature point pairs between the subsequent image and the current frame image, acquiring pose information of the subsequent image, and determining a preset region range based on the pose information of the subsequent image; and
    determining whether the current frame image was captured within the preset region range, and if so, determining the matched feature point pairs between the subsequent image and the current frame image.
  8. The method according to any one of claims 1-7, characterized in that the sensor comprises a real-time kinematic carrier-phase differential positioning device for measuring the position information of the movable platform.
  9. An apparatus for determining pose information of images in three-dimensional reconstruction, characterized in that the apparatus comprises a storage device and a processor, wherein
    the storage device is configured to store program code; and
    the processor executes the program code and, when the program code is executed, is configured to:
    acquire a current frame image captured by a photographing device carried on a gimbal of a movable platform, wherein the movable platform comprises a sensor for measuring position information of the movable platform and attitude information of the gimbal;
    when relocalization of the current frame image fails, extract feature points of a previous frame image of the current frame image, and perform a feature point matching algorithm based on the feature points of the previous frame image to determine matched feature point pairs between the current frame image and the previous frame image; and
    when the matched feature point pairs satisfy a preset matching-success condition, determine relative pose information of the current frame image with respect to the previous frame image based on the matched feature point pairs, and determine pose information of the current frame image based on the relative pose information and the position information and the attitude information measured by the sensor when the current frame image was captured.
  10. The apparatus according to claim 9, characterized in that the processor is further configured to:
    when the matched feature point pairs do not satisfy the preset matching-success condition, determine the position information and the attitude information measured by the sensor when the current frame image was captured as the pose information of the current frame image.
  11. The apparatus according to claim 10, characterized in that the processor is further configured to:
    acquire a next frame image of the current frame image captured by the photographing device; and
    when the pose information of the current frame image is the position information and the attitude information measured by the sensor when the current frame image was captured, relocalize the next frame image.
  12. The apparatus according to any one of claims 9-11, characterized in that the processor is further configured to:
    acquire a next frame image of the current frame image captured by the photographing device;
    when the pose information of the current frame image was determined based on the relative pose information and the position information and the attitude information measured by the sensor when the current frame image was captured, determine matched feature point pairs between the next frame image and the current frame image; and
    determine pose information of the next frame image based on the matched feature point pairs between the next frame image and the current frame image.
  13. The apparatus according to any one of claims 9-12, characterized in that, when the pose information of the current frame image was determined based on the relative pose information and the position information and the attitude information measured by the sensor when the current frame image was captured, the pose information of the current frame image is used to determine map points corresponding to the matched feature point pairs between the current frame image and the previous frame image.
  14. The apparatus according to any one of claims 9-12, characterized in that, when the pose information of the current frame image is the position information and the attitude information measured by the sensor when the current frame image was captured, the processor is further configured to:
    acquire subsequent images captured by the photographing device after the current frame image, and determine matched feature point pairs between the subsequent images and the current frame image; and
    optimize the pose information of the current frame image based on the matched feature point pairs between the subsequent images and the current frame image.
  15. The apparatus according to claim 14, characterized in that the processor is further configured to:
    before determining the matched feature point pairs between the subsequent image and the current frame image, acquire pose information of the subsequent image, and determine a preset region range based on the pose information of the subsequent image; and
    determine whether the current frame image was captured within the preset region range, and if so, determine the matched feature point pairs between the subsequent image and the current frame image.
  16. The apparatus according to any one of claims 9-15, characterized in that the sensor comprises a real-time kinematic carrier-phase differential positioning device for measuring the position information of the movable platform.
  17. A movable platform, characterized in that the movable platform comprises the apparatus for determining pose information of images in three-dimensional reconstruction according to any one of claims 9-16.
  18. A storage medium, characterized in that a computer program is stored on the storage medium, and the computer program, when run, performs the method for determining pose information of images in three-dimensional reconstruction according to any one of claims 1-8.
PCT/CN2019/105923 2019-09-16 2019-09-16 Method and apparatus for determining pose information of images in three-dimensional reconstruction WO2021051227A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2019/105923 WO2021051227A1 (zh) 2019-09-16 2019-09-16 Method and apparatus for determining pose information of images in three-dimensional reconstruction
CN201980030549.6A CN112106113A (zh) 2019-09-16 2019-09-16 Method and apparatus for determining pose information of images in three-dimensional reconstruction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/105923 WO2021051227A1 (zh) 2019-09-16 2019-09-16 Method and apparatus for determining pose information of images in three-dimensional reconstruction

Publications (1)

Publication Number Publication Date
WO2021051227A1 true WO2021051227A1 (zh) 2021-03-25

Family

ID=73748378

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/105923 WO2021051227A1 (zh) 2019-09-16 2019-09-16 Method and apparatus for determining pose information of images in three-dimensional reconstruction

Country Status (2)

Country Link
CN (1) CN112106113A (zh)
WO (1) WO2021051227A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112950710A (zh) * 2021-02-24 2021-06-11 广州极飞科技股份有限公司 Pose determination method and apparatus, electronic device, and computer-readable storage medium
CN113409388A (zh) * 2021-05-18 2021-09-17 深圳市乐纯动力机器人有限公司 Sweeping robot pose determination method and apparatus, computer device, and storage medium
CN116758157B (zh) * 2023-06-14 2024-01-30 深圳市华赛睿飞智能科技有限公司 Indoor three-dimensional space surveying and mapping method and system for unmanned aerial vehicle, and storage medium
CN117726687B (zh) * 2023-12-29 2024-06-21 重庆市地理信息和遥感应用中心(重庆市测绘产品质量检验测试中心) Visual relocalization method fusing real-scene 3D models and video

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1975646A2 (en) * 2007-03-28 2008-10-01 Honeywell International Inc. Ladar-based motion estimation for navigation
CN108303099A (zh) * 2018-06-14 2018-07-20 江苏中科院智能科学技术应用研究院 Indoor autonomous navigation method for unmanned aerial vehicle based on three-dimensional visual SLAM
CN108648235A (zh) * 2018-04-27 2018-10-12 腾讯科技(深圳)有限公司 Relocalization method and apparatus for camera attitude tracking process, and storage medium
CN109073385A (zh) * 2017-12-20 2018-12-21 深圳市大疆创新科技有限公司 Vision-based positioning method and aircraft
CN110068335A (zh) * 2019-04-23 2019-07-30 中国人民解放军国防科技大学 Real-time positioning method and system for unmanned aerial vehicle swarm in GPS-denied environment


Also Published As

Publication number Publication date
CN112106113A (zh) 2020-12-18


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19945751

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19945751

Country of ref document: EP

Kind code of ref document: A1