WO2015139574A1 - Static object reconstruction method and system - Google Patents

Static object reconstruction method and system

Info

Publication number
WO2015139574A1
Authority
WO
WIPO (PCT)
Prior art keywords
frame image
feature point
dimensional feature
current frame
external parameter
Prior art date
Application number
PCT/CN2015/074074
Other languages
English (en)
French (fr)
Inventor
章国锋
鲍虎军
王康侃
周炯
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司
Priority to EP15764364.4A (EP3093823B1)
Publication of WO2015139574A1
Priority to US15/232,229 (US9830701B2)


Classifications

    All classifications fall under G (Physics) › G06 (Computing; Calculating or Counting) › G06T (Image data processing or generation, in general):

    • G06T 7/85 - Stereo camera calibration (analysis of captured images to determine intrinsic or extrinsic camera parameters)
    • G06T 17/00 - Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 7/337 - Image registration using feature-based methods involving reference images or patches
    • G06T 7/579 - Depth or shape recovery from multiple images, from motion
    • G06T 7/593 - Depth or shape recovery from multiple images, from stereo images
    • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G06T 2207/10028 - Range image; depth image; 3D point clouds
    • G06T 2207/20221 - Image fusion; image merging
    • G06T 2207/30244 - Camera pose

Definitions

  • the present invention relates to the field of graphic image processing technologies, and in particular, to a static object reconstruction method and system.
  • Object reconstruction has many applications in computer graphics and computer vision, such as movie special effects, three-dimensional (3D) games, virtual reality, and human-computer interaction.
  • Most reconstruction systems can reconstruct detailed 3D models by using multiple synchronized cameras or 3D scanning devices (such as laser or structured-light cameras) to capture information from objects and then model them, but expensive equipment and complex, cumbersome user interaction greatly limit the application of these reconstruction systems.
  • Microsoft introduced the Kinect depth camera; the RGB-D camera derived from it has been widely used in object-modeling research owing to its low price and easy operation.
  • An RGB-D camera collects two-dimensional image information and depth information of an object, and the reconstruction system then models the object according to the two-dimensional image and depth information; however, when the depth information collected by the RGB-D camera is lost, reconstruction of the static object fails.
  • Embodiments of the present invention provide a static object reconstruction method and system, which can realize reconstruction of a static object when depth data collected by a depth camera is lost.
  • a first aspect of the embodiments of the present invention provides a static object reconstruction method, including:
  • matching the two-dimensional feature points with the three-dimensional feature points among the reference feature points, or with the two-dimensional feature points in the previous frame image of the current frame image of the static object, and calculating a camera external parameter of the current frame image;
  • matching the three-dimensional feature points with the reference feature points and calculating the camera external parameter of the current frame image specifically includes:
  • cyclically performing the steps of selecting candidate corresponding points, calculating model parameters, and scoring, and taking the model parameters of the highest-scoring model as the camera external parameter of the current frame image, wherein:
  • if the probability that the partial three-dimensional feature points and candidate corresponding points selected in K_s consecutive iterations of the loop all contain an outlier is less than a preset value, or the number of iterations exceeds a preset value, or the time spent executing the cyclic steps exceeds a preset value, the cyclic steps of selecting candidate corresponding points, calculating model parameters, and scoring are stopped.
  • selecting candidate corresponding points for some of the three-dimensional feature points among all the three-dimensional feature points of the current frame image specifically includes: selecting the partial three-dimensional feature points according to the correct-correspondence probabilities, in each spatial region, between feature points of the previous frame image and the reference feature points, and selecting candidate corresponding points according to the probability that the candidate corresponding points of the feature points in the previous frame image are selected.
  • before the three-dimensional feature points are matched with the reference feature points and the camera external parameter of the current frame image is calculated, the method further includes: matching the three-dimensional feature points with feature points in the previous frame image of the current frame image of the static object to obtain an initial camera external parameter;
  • the three-dimensional feature points are then matched with the reference feature points of the static object with the initial camera external parameter used as a condition for determining the camera external parameter according to the model score, and the camera external parameter of the current frame image is finally obtained.
  • matching the two-dimensional feature points with the three-dimensional feature points among the reference feature points and calculating the camera external parameter of the current frame image specifically includes:
  • the determined camera external parameter is used as a camera external parameter of the current frame image.
  • the two-dimensional feature points are matched with the two-dimensional feature points of the previous frame image of the current frame image of the static object, and the camera external parameters of the current frame image are calculated, which specifically includes:
  • the method further includes:
  • the target model is formed by the depth data of the current frame image of the static object;
  • the transformation matrix is a matrix that minimizes a first energy function, the first energy function including a distance term and a smooth term;
  • the distance term is used to represent the distance from a vertex in the source model to the corresponding vertex in the target model;
  • the smooth term is used to constrain the transformations of adjacent vertices.
  • the method further includes:
  • converting, by using the calculated camera external parameter, the point cloud of the current frame image to the reference coordinate system formed by the reference feature points specifically includes: converting the point cloud of the current frame image to the reference coordinate system by using the camera external parameter obtained by the adjustment.
  • the method further includes:
  • the second energy function further includes, for the two-dimensional feature points in the i-th frame image, the distance from the corresponding reference feature point, converted into the i-th frame image coordinate system, to the feature point.
  • the method further includes:
  • converting, by using the calculated camera external parameter, the point cloud of the current frame image to the reference coordinate system formed by the reference feature points specifically includes: converting the point cloud of the current frame image to the reference coordinate system by using the updated camera external parameter.
  • a second aspect of the embodiments of the present invention provides a static object reconstruction system, including:
  • a feature acquiring unit configured to respectively acquire a three-dimensional feature point and a two-dimensional feature point in a current frame image of the static object
  • a first external parameter calculation unit configured to match the three-dimensional feature points acquired by the feature acquiring unit with reference feature points and calculate a camera external parameter of the current frame image, wherein the reference feature points are the accumulation of feature points over multiple frame images of the static object;
  • a second external parameter calculation unit configured to: if the first external parameter calculation unit, when calculating the camera external parameter of the current frame image based on the three-dimensional feature points, does not obtain the camera external parameter within a preset time, match the two-dimensional feature points acquired by the feature acquiring unit with the three-dimensional feature points among the reference feature points, or with the two-dimensional feature points in the previous frame image of the current frame image of the static object, and calculate a camera external parameter of the current frame image;
  • a conversion unit configured to convert a point cloud of the current frame image to a reference coordinate system composed of the reference feature points by using the camera external parameter calculated by the first external parameter calculation unit or the second external parameter calculation unit, to model the static object.
  • the first external parameter calculation unit specifically includes:
  • a candidate selecting unit configured to select, in the reference feature point, a plurality of candidate corresponding points that are closest to a three-dimensional feature point of the current frame image
  • a selecting unit configured to select a candidate corresponding point corresponding to a part of the three-dimensional feature points of all the three-dimensional feature points of the current frame image
  • a model calculation unit configured to calculate a model parameter according to a candidate corresponding point corresponding to a part of the three-dimensional feature points selected by the selecting unit;
  • a scoring unit configured to score the model corresponding to the model parameters calculated by the model calculation unit;
  • an external parameter determining unit configured to take, among the scores obtained by cyclically performing the steps of selecting candidate corresponding points, calculating model parameters, and scoring, the model parameters of the highest-scoring model as the camera external parameter of the current frame image, where:
  • the external parameter determining unit is further configured to, when the probability that the partial three-dimensional feature points and candidate corresponding points selected in K_s consecutive iterations of the loop all contain an outlier is less than a preset value, or the number of iterations exceeds a preset value, or the time spent executing the cyclic steps exceeds a preset value, notify the selecting unit, the model calculation unit, and the scoring unit to stop performing the cyclic steps of selecting candidate corresponding points, calculating model parameters, and scoring.
  • the selecting unit is specifically configured to: select the partial three-dimensional feature points from all the three-dimensional feature points of the current frame image according to the correct-correspondence probabilities, in each spatial region, between the feature points of the previous frame image of the static object and the reference feature points; and select the candidate corresponding points of the three-dimensional feature points from the plurality of candidate corresponding points according to the probability that the candidate corresponding points of the feature points in the previous frame image are selected.
  • the system further includes:
  • the first external parameter calculation unit is further configured to match the three-dimensional feature point with a feature point in a previous frame image of a current frame image of the static object to obtain an initial camera external parameter;
  • the initial camera external parameter is used as a condition for determining a camera external parameter according to the model score, and the three-dimensional feature point is matched with the reference feature point of the static object to finally obtain a camera external parameter of the current frame image.
  • the second external parameter calculation unit specifically includes:
  • a feature matching unit configured to match the two-dimensional feature point with the three-dimensional feature point of the reference feature point, and determine a three-dimensional reference feature point corresponding to the two-dimensional feature point;
  • an external parameter obtaining unit configured to determine a camera external parameter that minimizes a function of the camera pose in the reference coordinate system, wherein the function of the camera pose includes the correspondences between the two-dimensional feature points and the three-dimensional reference feature points;
  • the determined camera external parameter is used as a camera external parameter of the current frame image.
  • the second external parameter calculation unit further includes a corresponding selecting unit;
  • the feature matching unit is further configured to match the two-dimensional feature points with the two-dimensional feature points of the previous frame image of the current frame image of the static object, and determine the two-dimensional feature points in the previous frame image corresponding to the two-dimensional feature points;
  • the corresponding selecting unit is configured to select a two-dimensional feature point of the current frame image and a plurality of pairs of corresponding feature points having depth data at the feature points corresponding to the two-dimensional feature points in the previous frame image.
  • the external parameter obtaining unit is further configured to determine, according to the depth change information of the plurality of pairs of corresponding feature points selected by the corresponding selecting unit, the relative camera external parameter of the current frame image and the previous frame image, and to determine the camera external parameter of the current frame image according to the relative camera external parameter and the camera external parameter of the previous frame image.
  • a model generating unit configured to generate, when calculation of the camera external parameter of the current frame image based on the three-dimensional feature points fails, a source model including depth data according to the collected two-dimensional data of the plurality of frame images of the static object;
  • the target model is formed by depth data of a current frame image of the static object
  • the transformation matrix is a matrix that minimizes a first energy function, the first energy function comprising a distance term and a smooth term, where the distance term is used to represent the distance from a vertex in the source model to the corresponding vertex in the target model, and the smooth term is used to constrain the transformations of adjacent vertices;
  • a model conversion unit configured to convert the source model generated by the model generation unit to a target model through a transformation matrix
  • a completion unit configured to complete, according to the target model converted by the model conversion unit, the depth data lost in the current frame image.
  • the system also includes:
  • a corresponding establishing unit configured to establish, by using the camera external parameter, a correspondence between the three-dimensional feature point in the current frame image and the reference feature point;
  • an adjusting unit configured to adjust the camera external parameters of the N frame images of the static object so as to minimize a second energy function, wherein the second energy function includes, for the three-dimensional feature points in the i-th frame image, the distance from the corresponding reference feature point, converted into the i-th frame image coordinate system, to the feature point, where i is a positive integer from 0 to N;
  • the conversion unit is specifically configured to convert the point cloud of the current frame image to the reference coordinate system by using the adjusted camera external parameter of the current frame image obtained by the adjusting unit.
  • the corresponding establishing unit is further configured to, if the camera external parameter is calculated based on the two-dimensional feature points, establish a correspondence between the two-dimensional feature points in the current frame image and the reference feature points by using the camera external parameter;
  • in that case, the second energy function used in adjusting the camera external parameters further includes, for the two-dimensional feature points in the i-th frame image, the distance from the corresponding reference feature point, converted into the i-th frame image coordinate system, to the feature point.
  • the system also includes:
  • a merging unit configured to, if feature points in a certain frame image of the static object overlap with feature points in another frame image, merge the feature points in the certain frame image that match those in the other frame image;
  • an updating unit configured to obtain an updated reference feature point according to the merged feature points of the merging unit, and update a camera external parameter of each frame image of the static object according to the updated reference feature point;
  • the conversion unit is specifically configured to convert the point cloud of the current frame image to the reference coordinate system by using the updated camera external parameter of the current frame image obtained by the updating unit.
  • In this way, when the static object reconstruction system calculates the camera external parameter based on the three-dimensional feature points but the camera external parameter is not obtained within the preset time, indicating that the depth data collected by the depth camera is lost or damaged, the two-dimensional feature points are used to calculate the camera external parameter, so that the point cloud of each frame image can still be aligned according to the camera external parameter. By combining two-dimensional and three-dimensional feature points in this way, the static object can be successfully reconstructed even when the depth data collected by the depth camera is lost or damaged.
  • FIG. 1 is a flowchart of a static object reconstruction method provided in an embodiment of the present invention.
  • FIG. 2 is a flow chart of a method for calculating a camera external parameter based on a three-dimensional feature point in an embodiment of the present invention
  • FIG. 3 is a schematic diagram showing a comparison of selecting a corresponding point in a reference feature point and selecting a plurality of candidate corresponding points in a reference feature point in the embodiment of the present invention
  • FIG. 4 is a flow chart of a method for calculating camera external parameters and depth data complementation based on two-dimensional feature points in an embodiment of the present invention
  • FIG. 5 is a schematic diagram of depth data completion in an embodiment of the present invention.
  • FIG. 6 is a flow chart of another method for reconstructing a static object provided in an embodiment of the present invention.
  • FIG. 7 is a flowchart of another method for reconstructing a static object provided in an embodiment of the present invention.
  • FIG. 8 is a flowchart of another method for reconstructing a static object according to an embodiment of the present invention.
  • FIG. 9 is a global schematic diagram of static object reconstruction provided in an embodiment of the present invention.
  • FIG. 10 is a schematic structural diagram of a static object reconstruction system according to an embodiment of the present invention.
  • FIG. 11 is a schematic structural diagram of another static object reconstruction system provided in an embodiment of the present invention.
  • FIG. 12 is a schematic structural diagram of another static object reconstruction system according to an embodiment of the present invention.
  • FIG. 13 is a schematic structural diagram of another static object reconstruction system according to an embodiment of the present invention.
  • FIG. 14 is a schematic structural diagram of another static object reconstruction system according to an embodiment of the present invention.
  • A feature point may be a three-dimensional feature point and/or a two-dimensional feature point; which type is used depends on the processing method.
  • Embodiments of the present invention provide a static object reconstruction method.
  • The method of the present invention mainly uses a depth camera, such as an RGB-D camera, to capture multi-frame images of a static object from various directions, for example by shooting around the static object, where each frame image is an image of the static object captured by the depth camera in a certain direction; one frame of image data may include two-dimensional information such as color, and may also include three-dimensional information such as depth data.
  • the system models the static object according to the multi-frame image captured by the depth camera described above.
  • the method of the present invention is a method performed by a static object reconstruction system.
  • the method flow chart is as shown in FIG. 1 and includes:
  • Step 101 Acquire three-dimensional feature points and two-dimensional feature points in the current frame image of the static object, respectively.
  • the static object reconstruction system acquires a multi-frame image taken by the depth camera, and processes the data of each frame image captured by the depth camera according to steps 101 to 105.
  • The static object reconstruction system needs to perform feature extraction on the current frame image. Since the data of one frame image captured by the depth camera includes both two-dimensional information and three-dimensional information, in this embodiment it is necessary to extract feature points having three-dimensional information, that is, three-dimensional feature points, and feature points having only two-dimensional information, that is, two-dimensional feature points.
  • The surface texture of the static object may be used to extract two-dimensional features, for example using the Scale-Invariant Feature Transform (SIFT) method to extract two-dimensional feature points in the current frame image.
  • However, some static objects have little surface texture, so only a few traditional feature points can be extracted from them.
  • Therefore, the static object reconstruction system also uses corner points extracted from geometric information, or from texture and geometric information together, as three-dimensional feature points, for example extracting 3D feature points with Fast Point Feature Histograms (FPFH).
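  • As an illustration, the following is a minimal sketch of this feature-extraction step using OpenCV for SIFT and Open3D for FPFH; the input names (`color_bgr`, `pcd`) and the search radii are placeholder assumptions, not values from the patent.

```python
import cv2
import numpy as np
import open3d as o3d

def extract_features(color_bgr, pcd):
    # 2D feature points: SIFT keypoints and descriptors on the color image
    gray = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create()
    keypoints, desc_2d = sift.detectAndCompute(gray, None)  # desc_2d: (n, 128)

    # 3D feature points: FPFH histograms computed from point-cloud geometry
    pcd.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        pcd, o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=100))
    desc_3d = np.asarray(fpfh.data).T  # (m, 33) FPFH descriptors
    return keypoints, desc_2d, desc_3d
```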
  • Each extracted two-dimensional feature point and three-dimensional feature point needs to correspond to one feature description quantity.
  • the feature description quantity of the two-dimensional feature points may include information such as color information and two-dimensional coordinates.
  • the feature description amount of the three-dimensional feature point may include information such as depth data and three-dimensional coordinates, and may also include some color information and the like.
  • A corner point refers to a point that is representative and robust in the image (that is, a point that can be detected stably even in the presence of noise), such as a local bright or dark spot, a line-segment endpoint, or a point of maximum curvature on a curve.
  • For a three-dimensional feature point, the static object reconstruction system may splice the two-dimensional feature description quantity and the three-dimensional feature description quantity into one feature description quantity.
  • Specifically, the static object reconstruction system first normalizes the two-dimensional and three-dimensional feature description quantities separately, that is, divides them by the standard deviations of the two-dimensional and three-dimensional feature description quantities in a training set, to obtain standardized feature description quantities; the standardized two-dimensional and three-dimensional feature description quantities are then combined to obtain the feature description quantity of the three-dimensional feature point.
  • The combined feature description quantity is f = (αf_2D, βf_3D), where f_2D is the normalized two-dimensional feature description quantity, f_3D is the normalized three-dimensional feature description quantity, and α and β are the coefficients of the two-dimensional and three-dimensional feature description quantities, respectively.
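  • A minimal sketch of this descriptor fusion f = (αf_2D, βf_3D), assuming the training-set statistics and the weights α, β are given as inputs:

```python
import numpy as np

def fuse_descriptor(f2d, f3d, stats, alpha=1.0, beta=1.0):
    # Standardize each part with statistics gathered from a training set
    f2d_n = (f2d - stats["mean_2d"]) / stats["std_2d"]
    f3d_n = (f3d - stats["mean_3d"]) / stats["std_3d"]
    # Weighted concatenation gives the fused feature description quantity
    return np.concatenate([alpha * f2d_n, beta * f3d_n])
```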
  • Step 102: Match the three-dimensional feature points with reference feature points and calculate the camera external parameter of the current frame image, where the reference feature points are formed by accumulating feature points over the multiple frame images of the static object, and the camera external parameter refers to parameter information such as the position and orientation of the depth camera in three-dimensional space when shooting the current frame image, with one camera external parameter corresponding to each frame image.
  • Matching the three-dimensional feature points with the reference feature points is mainly performed by comparing the feature description quantities of the three-dimensional feature points and the reference feature points, finding the corresponding reference feature point for each three-dimensional feature point in the current frame image.
  • The three-dimensional feature points of the current frame image are thereby aligned to the reference coordinate system formed by the reference feature points, and the camera external parameter can be calculated from the corresponding reference feature points found; if no correspondence is found for a certain three-dimensional feature point, that three-dimensional feature point is added to the reference feature points.
  • Specifically, the Euclidean distances between a three-dimensional feature point and all the reference feature points may be calculated separately, and the reference feature point with the smallest Euclidean distance is used as the reference feature point corresponding to that three-dimensional feature point.
  • Step 103: When calculating the camera external parameter of the current frame image based on the three-dimensional feature points in step 102, determine whether the camera external parameter is obtained within a preset time; if it is obtained, perform step 105 with the camera external parameter calculated in step 102; if not, perform step 104 and then perform step 105 with a camera external parameter calculated based on the two-dimensional feature points.
  • If the camera external parameter is not calculated within the preset time by step 102, it is considered that calculating the camera external parameter based on the three-dimensional feature points has failed. The calculation can fail for various reasons; in this embodiment it is considered that some or all of the depth data in the current frame image captured by the depth camera is lost or damaged, so step 104 needs to be performed, that is, the camera external parameter is calculated from the two-dimensional feature points.
  • Step 104 Match the two-dimensional feature point with the three-dimensional feature point in the reference feature point, or match the two-dimensional feature point in the previous frame image of the current frame image of the static object, and calculate a camera external parameter of the current frame image.
  • In one case, the static object reconstruction system can align the two-dimensional feature points of the current frame image to the three-dimensional feature points among the reference feature points, and then calculate the camera external parameter from the corresponding three-dimensional reference feature points found.
  • In another case, the static object reconstruction system may align the two-dimensional feature points of the current frame image to the previous frame image and then calculate the camera external parameter: specifically, a relative camera external parameter is obtained from the corresponding two-dimensional feature points in the current frame image and the previous frame image, and is then combined with the previously calculated camera external parameter of the previous frame image to obtain the camera external parameter of the current frame image.
  • Step 105 Convert the point cloud of the current frame image to a reference coordinate system composed of reference feature points by using the calculated camera external parameter to model the static object.
  • the point cloud of the current frame image refers to a set of massive points that express the target spatial distribution and the target surface characteristics in the same spatial coordinate system.
  • the above two-dimensional feature points and three-dimensional feature points are only partial points in the point cloud.
  • Thus, when the static object reconstruction system calculates the camera external parameter based on the three-dimensional feature points but does not obtain it within the preset time, indicating that the depth data collected by the depth camera is lost or damaged, the two-dimensional feature points are used to calculate the camera external parameter, so that the point cloud of the frame image can still be aligned according to the camera external parameter. This combination of two-dimensional and three-dimensional feature points makes it possible to successfully reconstruct the static object when the depth data collected by the depth camera is lost or damaged.
  • When the static object reconstruction system performs the above step 102 to calculate the camera external parameter of the current frame image based on the three-dimensional feature points, the following steps may specifically be used:
  • A1: Among the reference feature points, select a plurality of candidate corresponding points closest to each three-dimensional feature point of the current frame image.
  • The static object reconstruction system uses nearest-neighbor feature matching to match feature points. If a single reference feature point were directly assigned to each three-dimensional feature point by the matching algorithm, wrong correspondences could be found. To improve the rate of correct correspondence, in the embodiment of the present invention several candidate corresponding points may be found for each three-dimensional feature point and processed further. Specifically, for a certain three-dimensional feature point of the current frame image, the Euclidean distances between that point and all reference feature points are calculated and sorted, and several reference feature points with the smallest Euclidean distances are selected as candidate corresponding points.
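  • A minimal sketch of step A1, assuming descriptor arrays for the reference and current-frame feature points; `b` is the number of candidates kept per point:

```python
import numpy as np
from scipy.spatial import cKDTree

def select_candidates(ref_desc, cur_desc, b=3):
    # For each current-frame 3D feature, keep the b reference features with
    # the smallest descriptor distance as candidate corresponding points,
    # instead of committing to the single nearest neighbor.
    tree = cKDTree(ref_desc)
    dist, idx = tree.query(cur_desc, k=b)  # both arrays have shape (n, b)
    return dist, idx
```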
  • As shown in FIG. 3(a), a feature point on the f1 frame image may find a wrong corresponding feature point on the f2 frame image (wrong correspondences are indicated by broken lines); for example, a feature point of the head in the f1 frame image corresponds to a feature point of the abdomen in the f2 frame image, so that f1 and f2 cannot be correctly aligned.
  • As shown in FIG. 3(b), each feature point on the f1 frame image instead finds a plurality of nearest-neighbor candidate points on the f2 frame image; for example, for a feature point of the head in the f1 frame image, candidate corresponding points 1, 2, and 3 are found in the f2 frame image, and the correct corresponding feature point is among them (indicated by the solid line).
  • A2 Select candidate corresponding points corresponding to some three-dimensional feature points of all three-dimensional feature points of the current frame image.
  • the feature points corresponding to the three-dimensional feature points of the current frame image may be selected from the plurality of candidate corresponding points corresponding to each three-dimensional feature point selected in the above step A1, and the camera external parameters are calculated.
  • A model-parameter calculation method can be used to obtain the camera external parameter, such as the model-parameter calculation method based on Random Sample Consensus (RANSAC).
  • However, since each three-dimensional feature point corresponds to a plurality of candidate corresponding points, the ordinary RANSAC method is complicated and time-consuming here.
  • In this embodiment, a Prior-based Multi-Candidates RANSAC (PMCSAC) method is therefore proposed, which exploits the similarity between the current frame image and the previous frame image: the probability distributions recorded between the previous frame image and the reference feature points guide the random selection for the current frame image. Specifically, this can be implemented by steps A2 to A4, and step A2 can be implemented by the following steps:
  • A21: Select some three-dimensional feature points from all the three-dimensional feature points of the current frame image according to the correct-correspondence probabilities, in each spatial region, between the feature points of the previous frame image and the reference feature points; how many three-dimensional feature points are selected is mainly determined by the model-parameter calculation method.
  • A22: According to the probability that a certain candidate corresponding point of a feature point in the previous frame image is selected, select a candidate corresponding point for each selected three-dimensional feature point of the current frame image from the plurality of candidate corresponding points obtained in step A1.
  • Specifically, the space is divided into grids, and for each grid the static object reconstruction system records the probability that feature points of a frame image falling in that grid matched the reference feature points correctly; one grid is one spatial region.
  • Normalizing the correct-correspondence probability p_i of grid G_i over all grids gives the probability of selecting a correct corresponding feature point from G_i, P(G_i) = p_i / Σ_j p_j. If this probability is high, the correct-correspondence probability of that spatial region in the previous frame image, that is, of G_i, is high; accordingly, the correct-correspondence probability of the same spatial region in the current frame image is also high, and some of the partial three-dimensional feature points can be selected from near that spatial region of the current frame image.
  • Similarly, each feature point in the previous frame image has b candidate corresponding points, and the system records the probability that the k-th candidate corresponding point was the correct one.
  • The probability of selecting the k-th candidate corresponding point is obtained by normalizing these records, where an adjustment parameter σ controls the effect of the spatial distance d on the probability. If this probability is high, the probability that a feature point in the previous frame image correctly corresponds to its k-th candidate is high, and correspondingly the probability that a three-dimensional feature point in the current frame image correctly corresponds to its k-th candidate is also high, so a candidate corresponding point may be selected from near the k-th candidate among the plurality of candidate corresponding points obtained in step A1.
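  • A hedged sketch of steps A21 to A22; the bookkeeping arrays (`point_grid`, `grid_correct`, `cand_prior`) are assumed structures recording the per-grid and per-candidate statistics from the previous frame:

```python
import numpy as np

rng = np.random.default_rng()

def guided_sample(point_grid, grid_correct, cand_prior, m=4):
    # A21: probability of drawing a point from grid G_i is its recorded
    # correct-correspondence probability, normalized over all grids
    p_grid = grid_correct / grid_correct.sum()
    p_point = p_grid[point_grid]            # map grid prob to each point
    p_point = p_point / p_point.sum()
    picked = rng.choice(len(point_grid), size=m, replace=False, p=p_point)

    # A22: for each picked point choose one of its b candidates, weighted
    # by how often the k-th candidate was correct in the previous frame
    pairs = []
    for i in picked:
        p_cand = cand_prior[i] / cand_prior[i].sum()
        pairs.append((i, rng.choice(cand_prior.shape[1], p=p_cand)))
    return pairs
```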
  • A3 Calculate the model parameters according to the candidate corresponding points corresponding to the selected three-dimensional feature points in step A2.
  • A4: Score the model corresponding to the model parameters calculated in step A3.
  • The score s of the model corresponding to a set of model parameters can be described by a covariance matrix C: it compares the volume of the ellipsoid spanned by the distribution of correctly corresponding feature points (that is, the three-dimensional feature points for which a correct correspondence can be found among the reference feature points) against the volume A of the entire grid, which is used for normalization.
  • The covariance matrix can be written as C = (1/N) Σ_{i=1..N} (f_i − f̄)(f_i − f̄)^T, where N is the number of correctly corresponding feature points, f_i is the spatial position of the i-th correctly corresponding feature point, and f̄ is the average position of all correctly corresponding feature points. A higher score s means the model finds more correct corresponding three-dimensional feature points among the reference feature points, and that these points are evenly distributed over the whole space of the current frame image.
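  • A sketch of this scoring, under the assumption that the score is the ratio of the inlier ellipsoid volume (derived from C) to the grid volume A; the exact normalization constant is an assumption:

```python
import numpy as np

def model_score(inlier_positions, grid_volume):
    f = np.asarray(inlier_positions)       # (N, 3) correctly matched points
    if len(f) < 4:
        return 0.0
    centered = f - f.mean(axis=0)          # f_i minus the average position
    C = centered.T @ centered / len(f)     # covariance matrix C
    # Volume of the 1-sigma ellipsoid spanned by the inlier distribution
    ellipsoid_vol = 4.0 / 3.0 * np.pi * np.sqrt(max(np.linalg.det(C), 0.0))
    return ellipsoid_vol / grid_volume     # many, evenly spread inliers score high
```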
  • A5: Steps A2 to A4 above are executed cyclically, that is, the steps of selecting candidate corresponding points, calculating model parameters, and scoring are performed cyclically, and the model parameters of the highest-scoring model are taken as the camera external parameter of the current frame image.
  • Before the loop, the static object reconstruction system needs to set an initial value of the camera external parameter, and this initial value is used as a condition for determining the camera external parameter according to the model score: the score of the model corresponding to the finally determined camera external parameter of the current frame image must be higher than the score of the model corresponding to the initial value.
  • After the highest-scoring model is found, the static object reconstruction system may first use that model's parameters to transform all three-dimensional feature points of the current frame image into the reference coordinate system and determine their corresponding reference feature points; then, from all three-dimensional feature points for which a correct correspondence can be found among the reference feature points, the model parameters are recalculated, and the recalculated model parameters are used as the final camera external parameter. The more correct correspondences between three-dimensional feature points and reference feature points are used when calculating the model parameters, the more accurate the final model parameters, and hence the camera external parameter obtained above, will be.
  • During the loop, the static object reconstruction system repeatedly performs steps A2 to A4, selecting candidate corresponding points for different partial three-dimensional feature points of the current frame image and obtaining and scoring different model parameters, and then takes the model parameters of the highest-scoring model as the camera external parameter of the current frame image.
  • The static object reconstruction system can set a condition for stopping the loop of steps A2 to A4: with b the number of candidate corresponding points of a three-dimensional feature point, the probability that the candidate selected from the plurality of candidate corresponding points correctly corresponds to a three-dimensional feature point of the current image can be estimated, and the probability that all the selected three-dimensional feature points correctly correspond to reference feature points is the product of these individual probabilities, where m is the number of selected three-dimensional feature points.
  • When, for K_s consecutive iterations, the probability that every sample contained an outlier is less than a preset value, the loop of steps A2 to A4 is stopped, that is, the cyclic steps of selecting candidate corresponding points, calculating model parameters, and scoring are ended.
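  • A sketch of this stopping rule, under the stated assumptions: if a single selected correspondence is correct with probability p_single and m points form one sample, the chance that one sample is outlier-free is p_single ** m, and the loop stops once the chance that K_s consecutive samples all contained an outlier drops below a preset value eta:

```python
def should_stop(p_single, m, consecutive_failures, eta=0.01):
    # Probability that all m correspondences in one sample are correct
    p_all = p_single ** m
    # Probability that `consecutive_failures` samples in a row each had an outlier
    return (1.0 - p_all) ** consecutive_failures < eta
```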
  • In some embodiments, the static object reconstruction system can directly obtain the camera external parameter of the current frame image by steps A1 to A5; in other embodiments, the system can first calculate an initial camera external parameter and then optimize it through steps A1 to A5, specifically:
  • The static object reconstruction system first matches the three-dimensional feature points with the feature points in the previous frame image of the current frame image of the static object to obtain an initial camera external parameter.
  • The specific method is similar to steps A1 to A5, the difference being that the static object reconstruction system selects the plurality of candidate corresponding points closest to each three-dimensional feature point of the current frame image from among the feature points of the previous frame image before performing the other processing. The static object reconstruction system then uses the initial camera external parameter as a condition for determining the camera external parameter according to the model score, matches the three-dimensional feature points with the reference feature points of the static object according to steps A1 to A5, and finally obtains the camera external parameter of the current frame image.
  • The initial camera external parameter is mainly used when the static object reconstruction system performs step A5, to require that the score of the model corresponding to the finally determined camera external parameter be higher than the score of the model corresponding to the initial camera external parameter.
  • When step 104 matches the two-dimensional feature points against the reference feature points, it can specifically include: B1: Match the two-dimensional feature points of the current frame image with the three-dimensional feature points among the reference feature points, and determine the three-dimensional reference feature points corresponding to the two-dimensional feature points. Since the feature description quantity of a three-dimensional feature point includes a two-dimensional description quantity, the two-dimensional description quantity of a two-dimensional feature point can be matched against the two-dimensional description quantities of the three-dimensional feature points among the reference feature points.
  • B2: Determine the camera external parameter that minimizes a function of the camera pose in the reference coordinate system, where the function of the camera pose includes the correspondences between the two-dimensional feature points and the three-dimensional reference feature points, and use the determined camera external parameter as the camera external parameter of the current frame image.
  • The function of the camera pose can be min over R, t of Σ_i || x_i − π(K(R X_i + t)) ||², where (x_i, X_i) is a correspondence between a two-dimensional feature point and a three-dimensional reference feature point, K is the camera internal reference matrix, π(·) denotes perspective projection, R is the camera rotation matrix, and t is the camera displacement vector; R and t together are equivalent to the camera external parameter. Minimizing this function means determining the values of R and t that minimize its value.
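  • One standard way to minimize this reprojection energy over R and t is a PnP solver; the sketch below uses OpenCV's RANSAC variant. The inputs (2D points `x_2d`, matched 3D reference points `X_3d`, internal reference matrix `K`) are placeholders:

```python
import cv2
import numpy as np

def pose_from_2d3d(x_2d, X_3d, K):
    # Estimate R, t from 2D-3D correspondences (PnP with RANSAC)
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        X_3d.astype(np.float64), x_2d.astype(np.float64), K, None)
    R, _ = cv2.Rodrigues(rvec)  # rotation matrix R and displacement vector t
    return ok, R, tvec
```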
  • When step 104 matches against the previous frame image, it can specifically include: C1: Match the two-dimensional feature points of the current frame image with the two-dimensional feature points of the previous frame image of the current frame image of the static object, and determine the two-dimensional feature points in the previous frame image corresponding to the two-dimensional feature points.
  • C2: Select multiple pairs of corresponding feature points for which depth data is available both at the two-dimensional feature point of the current frame image and at the corresponding two-dimensional feature point of the previous frame image.
  • C3: Determine the relative camera external parameter of the current frame image and the previous frame image according to the depth change information of the selected pairs of corresponding feature points; specifically, the scale of the length of the displacement vector between the feature points of the two frame images is calculated according to the ratio of the depth change, so that the relative camera external parameter can be obtained.
  • The depth change information of the plurality of pairs of corresponding feature points is obtained from the depth data at those pairs of corresponding feature points.
  • C4: Determine the camera external parameter of the current frame image according to the relative camera external parameter and the camera external parameter of the previous frame image.
  • Specifically, the relative camera external parameter is obtained by the five-point method over steps C2 to C3, and the camera external parameter of the current frame image is then obtained from the known camera external parameter of the previous frame image.
  • The five-point method computes the relative transformation between two frames by establishing five correct two-dimensional corresponding points across the two frames.
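  • A hedged sketch of steps C1 to C4 using OpenCV's five-point essential-matrix estimation; the depth-ratio scale recovery shown here is an assumed concrete realization of "scaling from the depth change", and all inputs are placeholders:

```python
import cv2
import numpy as np

def relative_pose_with_scale(pts_prev, pts_cur, depth_prev, depth_cur, K):
    # Five-point method: relative R, t up to an unknown translation scale
    E, _ = cv2.findEssentialMat(pts_prev, pts_cur, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts_prev, pts_cur, K)  # ||t|| == 1

    # Back-project previous-frame points to 3D using their measured depths
    z_prev = np.asarray(depth_prev, dtype=np.float64)
    z_cur = np.asarray(depth_cur, dtype=np.float64)
    norm = cv2.undistortPoints(
        pts_prev.reshape(-1, 1, 2).astype(np.float64), K, None).reshape(-1, 2)
    P_prev = np.hstack([norm * z_prev[:, None], z_prev[:, None]])

    # Depth after rotation alone; the remaining change is s * t_z per point,
    # so the depth ratio fixes the scale s of the displacement vector
    z_rot = (R @ P_prev.T).T[:, 2]
    tz = float(t[2, 0])
    s = float(np.median((z_cur - z_rot) / tz)) if abs(tz) > 1e-9 else 1.0
    return R, s * t
```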
  • In still another embodiment, the two methods above are combined. After obtaining a camera external parameter of the current frame image by each of the two methods, the three-dimensional feature points of the current frame image are first converted into the reference coordinate system (the coordinate system composed of the reference feature points) using each camera external parameter; then, for each camera external parameter, the proportion of the three-dimensional feature points of the current frame image that find a correspondence among the reference feature points is calculated, where a converted feature point is considered to find a correspondence if its distance to the nearest reference feature point is less than a preset value; finally, the camera external parameter with the higher proportion is used as the camera external parameter of the current frame image.
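  • A minimal sketch of this selection, assuming each candidate pose is a pair (R, t) and the distance threshold corresponds to the preset value (6 mm is an assumption borrowed from the correspondence threshold mentioned later):

```python
import numpy as np
from scipy.spatial import cKDTree

def pick_pose(poses, pts_3d, ref_pts, thresh=0.006):
    tree = cKDTree(ref_pts)
    best, best_ratio = None, -1.0
    for R, t in poses:
        # Convert the current frame's 3D feature points into the reference frame
        transformed = pts_3d @ R.T + np.asarray(t).reshape(1, 3)
        d, _ = tree.query(transformed)
        ratio = float(np.mean(d < thresh))  # fraction finding a correspondence
        if ratio > best_ratio:
            best, best_ratio = (R, t), ratio
    return best, best_ratio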
  • As described above, when the static object reconstruction system calculates the camera external parameter based on the three-dimensional feature points but does not obtain it within the preset time, the depth data of the image captured by the depth camera may be lost, and the camera external parameter is calculated based on the two-dimensional feature points, that is, step 104. Further, in order to reconstruct a complete static object model, the static object reconstruction system also needs to perform geometric completion: specifically, it merges the depth data of all frame images, completes the missing part of each frame image by ray-tracing technology, and then uses the image sequence to recover the depth data of each frame, thereby further filling in the missing depth data. This is achieved by the following steps:
  • D1: Generate a source model including depth data according to the collected two-dimensional data of the plurality of frame images of the static object.
  • D2: Convert the source model to the target model through a transformation matrix, where the target model is formed by the depth data of the current frame image of the static object, the transformation matrix is the matrix that minimizes a first energy function, and the first energy function includes a distance term and a smooth term.
  • Each vertex v_i in the source model is assigned a transform X_i.
  • The distance term in the first energy function represents the distance from each vertex in the source model to the corresponding vertex in the target model, E_d(X) = Σ_i w_i ||X_i v_i − u_i||², where W = diag(w_1, ..., w_n) is a weight matrix; for a vertex missing in the target model (that is, depth data lost in the current frame image), the corresponding weight w_i is set to 0.
  • The smooth term is used to ensure smooth deformation; it constrains the transforms of adjacent vertices and can be defined as E_s(X) = Σ over edges (i, j) of ||(X_i − X_j) G||²_F, where G = diag(1, 1, 1, γ) is used to weigh the rotational part of the transform against the displacement part, and the edge set of the model is obtained from adjacent pixels, each vertex corresponding to a pixel.
  • The first energy function can then be written as E(X) = E_d(X) + α E_s(X), and the transformation matrix is obtained by minimizing it.
  • The smooth weight α in the first energy function is gradually reduced during the deformation.
  • A larger weight constrains the deformation first, so that the source model deforms toward the target model as a whole; more local deformation is then completed by continuously reducing the weight. Owing to the constraint of the smooth term, the vertices of the source model do not move straight to the corresponding vertices of the target model but move parallel to the target surface; vertices of the source model without corresponding vertices are constrained by their neighbors and deform smoothly. When the change of X falls below a threshold, the deformation is terminated.
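  • A minimal sketch of evaluating this first energy E(X) = E_d + α·E_s, assuming each X_i is a 3x4 affine transform, `src`/`tgt` are (n, 3) vertex arrays, `w` the per-vertex weights, and `edges` the adjacency list; a real solver would minimize this over X:

```python
import numpy as np

def first_energy(X, src, tgt, w, edges, alpha, gamma=1.0):
    # Apply each vertex's affine transform X_i to v_i (homogeneous coords)
    v_h = np.hstack([src, np.ones((len(src), 1))])   # (n, 4)
    moved = np.einsum('nij,nj->ni', X, v_h)          # (n, 3): X_i v_i

    # Distance term: weighted squared distances to target vertices u_i;
    # w_i = 0 where the target (depth data) is missing
    e_dist = np.sum(w * np.sum((moved - tgt) ** 2, axis=1))

    # Smooth term: adjacent vertices should carry similar transforms,
    # with G = diag(1, 1, 1, gamma) weighing rotation vs. displacement
    G = np.diag([1.0, 1.0, 1.0, gamma])
    e_smooth = sum(np.sum(((X[i] - X[j]) @ G) ** 2) for i, j in edges)
    return e_dist + alpha * e_smooth
```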
  • D3 Complement the missing depth data in the current frame image according to the converted target model.
  • As shown in FIG. 5, during the conversion each vertex v_i of the source model S is gradually deformed by its transformation matrix X_i toward the corresponding vertex u_i of the target model T, with similar transforms for adjacent vertices of the same model to ensure a smooth deformation.
  • The smooth-constraint weight gradually decreases during the deformation, so as to maintain the overall shape of the source model while completing the local deformation, and the deformation is repeated until a stable state is reached.
  • The missing part of the target model is completed with the corresponding vertices of the deformed model (that is, the points connected by dashed lines in FIG. 5), and the completed vertices provide the completed depth data.
  • It should be noted that the step of calculating the camera external parameter based on the two-dimensional feature points has no strict order relative to the geometric completion of steps D1 to D3; they may be performed simultaneously or sequentially, and the order shown in FIG. 4 is just one specific implementation.
  • the camera external parameters can be optimized in the following ways:
  • In one way, the camera external parameters are optimized by bundle adjustment, that is, the camera external parameters of N frame images (N consecutive frames) are optimized jointly:
  • After performing steps 101 to 102 for each frame image of an N-frame sequence (such as 30 frame images), the static object reconstruction system further performs the following step 201 and then the adjustment in step 202; when subsequently performing step 105, the point cloud of the current frame image is converted to the reference coordinate system by using the camera external parameter of the current frame image obtained by adjusting the calculated camera external parameter, specifically:
  • Step 201 Establish a correspondence between the three-dimensional feature point and the reference feature point in the current frame image by using a camera external parameter.
  • Specifically, the three-dimensional feature points in the current frame image may first be converted into the reference coordinate system by using the camera external parameter; then the spatial distance between each converted feature point and each reference feature point is calculated and the nearest reference feature point found. If the nearest spatial distance is less than a preset value, such as 6 mm, a correspondence between that reference feature point and the three-dimensional feature point is established.
  • Step 202: Adjust the camera external parameters of the N frame images of the static object so as to minimize a second energy function, and in the process of minimizing the second energy function also adjust the positions of the reference feature points corresponding to the feature points in the N frame images.
  • In this way, the feature points on each frame image are aligned to their corresponding reference feature points with all correspondence distances minimized, so that the alignment error is dispersed over the whole optimization sequence.
  • The second energy function includes, for the three-dimensional feature points in the i-th frame image, the distance from the corresponding reference feature point, converted into the i-th frame image coordinate system, to the feature point, where i is a positive integer from 0 to N.
  • The second energy function can be written as E = Σ_{i=0..N} Σ_k d(P_ik, Q(T_i, g_j))², where P_ik is the k-th three-dimensional feature point on the i-th frame image, g_j (j ∈ [1, ..., L]) is the reference feature point corresponding to P_ik, Q(T_i, g_j) transforms the reference feature point g_j in the reference coordinate system into the i-th frame image coordinate system via T_i, T_i is the rigid transformation that transfers the depth data of the i-th frame image from the reference coordinate system into the i-th frame image coordinate system, and d(x, y) denotes the Euclidean distance.
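  • A minimal sketch of evaluating this second energy function, assuming squared Euclidean distances, per-frame rigid transforms T_i = (R_i, t_i), and `frames[i]` as a list of (P_ik, j) observation pairs; an optimizer would minimize this over the T_i and the reference points g:

```python
import numpy as np

def second_energy_3d(frames, T, g):
    total = 0.0
    for i, obs in enumerate(frames):
        R_i, t_i = T[i]
        for P_ik, j in obs:
            q = R_i @ g[j] + t_i                  # Q(T_i, g_j) in frame i
            total += float(np.sum((P_ik - q) ** 2))
    return total
```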
  • In another way, the camera external parameters of the N frame images are optimized by bundle adjustment as follows:
  • After performing steps 101 to 104 for each frame image of the N frame images, the static object reconstruction system performs the following step 301 and then the adjustment in step 302;
  • when subsequently performing step 105, the point cloud of the current frame image is converted to the reference coordinate system by using the camera external parameter of the current frame image obtained by adjusting the calculated camera external parameter, specifically:
  • Step 301 Establish, by using a camera external parameter, a correspondence between the two-dimensional feature point and the three-dimensional feature point in the current frame image and the reference feature point, respectively.
  • Step 302 Adjust a camera external parameter of the N-frame image of the static object to minimize the second energy function, and further adjust a position of the reference feature point corresponding to each feature point in the N-frame image, where the second energy function includes the first a three-dimensional feature point in the i-frame image, and a distance from the corresponding reference feature point to the feature point in the ith frame image coordinate system; and a two-dimensional feature point in the i-th frame image, and the corresponding reference feature point is converted to the The distance of the feature points in the i-th frame image coordinate system, where i is a positive integer from 0 to N.
In this case the second energy function can be written as (again reconstructed from the symbol definitions):

E_2 = Σ_{i=1}^{N} [ Σ_k d( P_ik , Q(T_i, g_j) )² + λ Σ_k d( X_ik , K · Q(T_i, g_r) )² ]

where X_ik is the k-th two-dimensional feature point on the i-th frame image and g_r is its corresponding reference feature point; K is the camera intrinsic matrix, which projects a three-dimensional point in the camera coordinate system onto a pixel on the image; the second inner sum runs over the two-dimensional feature points on the i-th frame image; λ is a weight value used to control the influence of the alignment errors of the two-dimensional feature points and the three-dimensional feature points on the total energy of each frame image; the other symbols have the same meaning as in the second energy function described above and are not described here again.
If the camera external parameters of one part of the frame images were calculated based on the two-dimensional feature points and those of another part based on the three-dimensional feature points, then, in the second energy function used for the beam optimization of the N frame images, the portions for these frame images are calculated according to the second method (that is, the above steps 301 to 302) and the first method (that is, the above steps 201 to 202) respectively; details are not described here again. A sketch of evaluating such a combined energy follows.
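The following sketch evaluates the combined second energy function. It assumes each frame object carries `pts3d` (n, 3), `pts2d` (m, 2) and the indices of their corresponding reference feature points, that `T[i]` is the rigid 4x4 transform taking reference coordinates into frame i, that `K` is the 3x3 intrinsic matrix, and that `lam` is the 2D-term weight λ; all of these names are assumptions for illustration.

```python
import numpy as np

def to_frame(T, g):
    """Transform a reference point g into the frame coordinate system."""
    return (T @ np.append(g, 1.0))[:3]

def second_energy(frames, T, ref_points, K, lam):
    E = 0.0
    for i, f in enumerate(frames):
        # 3D term: distance between feature point and transformed reference point.
        for P, j in zip(f.pts3d, f.ref3d_idx):
            E += np.sum((P - to_frame(T[i], ref_points[j])) ** 2)
        # 2D term: reprojection distance in pixels, weighted by lam.
        for x, r in zip(f.pts2d, f.ref2d_idx):
            q = K @ to_frame(T[i], ref_points[r])
            E += lam * np.sum((x - q[:2] / q[2]) ** 2)
    return E
```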
In another embodiment, after processing multiple frame images according to the above steps 101 to 104, the static object reconstruction system may perform global optimization according to the following steps 401 to 403; when step 105 is then performed, the point cloud conversion and the modeling are performed with the camera external parameters obtained by optimizing the calculated camera external parameters.

Step 401: Determine whether the feature points in a certain frame image of the static object overlap with the feature points in another frame image (a sketch of this test is given after step 403). An overlap indicates that, while taking photos around the static object, the depth camera has returned to the position where shooting started, so that the feature points of two frame images overlap and form a closed loop; global optimization is then required and steps 402 to 403 are performed. If they do not overlap, global optimization is not performed.

Specifically, the first reference feature points corresponding to the feature points in the one frame image and the second reference feature points corresponding to the feature points in the other frame image may be obtained separately; if more than a preset number of the first reference feature points can find correspondences among the second reference feature points, overlap is determined, and otherwise it can be considered that there is no overlap.
Step 402: Merge the feature points of the one frame image that match the other frame image, merging each matching pair into one feature point. Since the reference feature points are accumulated from the feature points of each frame image, the reference feature points need to be updated according to the merged feature points.

Step 403: Update the camera external parameter of each frame image of the static object according to the updated reference feature points. Specifically, first update the correspondences between the feature points in each frame image and the updated reference feature points, and then update the camera external parameter of each frame image according to the updated correspondences, so that the accumulated error can be effectively dispersed within the closed loop. For how to obtain the camera external parameter of a frame image from the correspondences between its feature points and the reference feature points, refer to the PMCSAC method described in the above embodiments; no further description is given here.
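A minimal sketch of the overlap test in step 401 follows, assuming reference feature points are (n, 3) numpy arrays; the threshold values are illustrative assumptions (the patent only says "a preset value").

```python
import numpy as np
from scipy.spatial import cKDTree

def frames_overlap(first_ref_pts, second_ref_pts, min_count=50, max_dist=0.006):
    """True if more than a preset number of the first frame's reference
    feature points find a correspondence among the second frame's."""
    dist, _ = cKDTree(second_ref_pts).query(first_ref_pts)
    return int(np.sum(dist < max_dist)) > min_count
```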
To summarize, as shown in FIG. 9, the reconstruction of the static object can be performed through the steps of extracting feature points, acquiring camera external parameters, optimizing the camera external parameters, and converting point clouds, including:

A static object is photographed from every direction by a depth camera to obtain multiple frame images. For a given frame image, three-dimensional feature points are extracted and matched with the reference feature points to calculate the camera external parameter of the current frame image; any three-dimensional feature point for which no matching reference feature point is found is added to the reference feature points.

If the camera external parameter cannot be calculated based on the three-dimensional feature points, the two-dimensional feature points of the current frame image are extracted and matched with the reference feature points, or with the feature points of the previous frame image, to calculate the camera external parameter of the current frame image; depth data completion may also be performed.

The camera external parameters can then be optimized, including beam optimization and global optimization, after which the next frame image is processed according to the above steps until all frame images are processed. Finally, the point clouds of all frame images are aligned into the reference coordinate system through the camera external parameters of the corresponding frame images, and the static object is modeled, for example by using the Poisson modeling method to reconstruct a three-dimensional model of the static object. A sketch of this overall flow is given below.
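The flow in FIG. 9 can be summarized in pseudocode as follows. Every function name here is a placeholder standing for a step described in the text, not an actual API; the structure, not the implementations, is the point of the sketch.

```python
def reconstruct(frames):
    ref = ReferenceSet()                          # accumulated reference feature points
    poses, prev = [], None
    for frame in frames:
        pts3d = extract_3d_features(frame)        # step 101
        T = match_3d_to_reference(pts3d, ref)     # step 102 (PMCSAC); may time out
        if T is None:                             # depth data lost or damaged
            pts2d = extract_2d_features(frame)
            T = pose_from_2d(pts2d, ref, prev)    # step 104 (2D-3D or 2D-2D)
            complete_depth(frame, frames)         # optional geometric completion
        ref.add_unmatched(pts3d, T)               # unmatched points join the reference
        poses.append(T)
        prev = frame
    poses = beam_optimize(poses, frames, ref)     # minimize the second energy function
    poses = global_optimize(poses, frames, ref)   # loop-closure handling, steps 401-403
    cloud = fuse_point_clouds(frames, poses)      # step 105: align to reference frame
    return poisson_reconstruct(cloud)             # e.g. Poisson surface modeling
```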
The method of the embodiments of the present invention can be applied to most active depth acquisition devices currently available. Because active depth acquisition devices use infrared LEDs or lasers so as to avoid visible interference from the light source, they are easily affected by outdoor sunlight, which leads to missing depth data; the present method remains applicable in such cases.
An embodiment of the present invention further provides a static object reconstruction system, whose schematic structural diagram is shown in FIG. 10, including:

a feature acquiring unit 10, configured to acquire three-dimensional feature points and two-dimensional feature points in the current frame image of a static object, respectively.
The feature acquiring unit 10 can obtain a feature description quantity for each feature point when acquiring the feature points. Specifically, the feature acquiring unit 10 can standardize the two-dimensional feature description quantity and the three-dimensional feature description quantity separately, that is, in the training sets of the two-dimensional and three-dimensional feature description quantities, obtain the standard deviations of the two-dimensional and three-dimensional feature description quantities respectively and divide each feature description quantity by the corresponding standard deviation to obtain the standardized feature description quantity; the standardized two-dimensional and three-dimensional feature description quantities are then combined to obtain the feature description quantity of a three-dimensional feature point, as sketched below.
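A minimal sketch of this standardization and concatenation, following the combined descriptor f = (α·f_2D, β·f_3D) given later in the description; the array layout and the default weights are assumptions.

```python
import numpy as np

def combine_descriptors(f2d, f3d, train_2d, train_3d, a=1.0, b=1.0):
    """Standardize by training-set standard deviation, then concatenate.
    a and b weigh the 2D and 3D parts of the combined descriptor."""
    s2 = train_2d.std(axis=0)        # per-dimension std over the 2D training set
    s3 = train_3d.std(axis=0)        # per-dimension std over the 3D training set
    return np.concatenate([a * (f2d / s2), b * (f3d / s3)])
```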
a first external parameter calculation unit 11, configured to match the three-dimensional feature points acquired by the feature acquiring unit 10 with reference feature points and calculate the camera external parameter of the current frame image, where the reference feature points are accumulated from the feature points on multiple frame images of the static object.

The first external parameter calculation unit 11 can compare the feature description quantities of the three-dimensional feature points and the reference feature points, find the reference feature points corresponding to all the three-dimensional feature points in the current frame image, and then calculate the camera external parameter from the corresponding reference feature points found. If no corresponding reference feature point is found for a certain three-dimensional feature point, the first external parameter calculation unit 11 may further add that three-dimensional feature point to the reference feature points.
a second external parameter calculation unit 12, configured to: if the first external parameter calculation unit 11, when calculating the camera external parameter of the current frame image based on the three-dimensional feature points, does not obtain the camera external parameter within a preset time, match the two-dimensional feature points acquired by the feature acquiring unit 10 with the three-dimensional feature points among the reference feature points, or with the two-dimensional feature points of the previous frame image of the current frame image of the static object, and calculate the camera external parameter of the current frame image.

The second external parameter calculation unit 12 may align the two-dimensional feature points of the current frame image to the three-dimensional feature points among the reference feature points and then calculate the camera external parameter from the corresponding three-dimensional reference feature points found. It may also align the two-dimensional feature points of the current frame image to the previous frame image, obtain a relative camera external parameter from the corresponding two-dimensional feature points in the current and previous frame images, and then combine it with the previously calculated camera external parameter of the previous frame image to obtain the camera external parameter of the current frame image.
a conversion unit 13, configured to convert the point cloud of the current frame image into the reference coordinate system composed of the reference feature points by using the camera external parameter calculated by the first external parameter calculation unit 11 or the second external parameter calculation unit 12, so as to model the static object.

In this embodiment, when the first external parameter calculation unit 11 calculates the camera external parameter based on the three-dimensional feature points but does not obtain it within the preset time, the depth data collected by the depth camera is lost or damaged, and the second external parameter calculation unit 12 calculates the camera external parameter by using the two-dimensional feature points, so that the conversion unit 13 aligns the point cloud of the frame image according to that camera external parameter. In this way two-dimensional and three-dimensional feature points are fused, and a static object can still be successfully reconstructed when the depth data collected by the depth camera is lost or damaged.
Referring to FIG. 11, in a specific embodiment the static object reconstruction system may include the structure shown in FIG. 10, where the first external parameter calculation unit 11 may specifically be implemented by a candidate selection unit 110, a selection unit 111, a model calculation unit 112, a scoring unit 113, and an external parameter determination unit 114, where:

the candidate selection unit 110 is configured to select, among the reference feature points, multiple candidate corresponding points closest to the three-dimensional feature points of the current frame image. Specifically, for a three-dimensional feature point of the current frame image, the candidate selection unit 110 may calculate the Euclidean distances between that three-dimensional feature point and all the reference feature points, sort these Euclidean distances, and select several reference feature points with the smallest Euclidean distances as candidate corresponding points, as sketched below.
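A minimal sketch of this candidate selection; the descriptor layout and the default of three candidates (the number used as an example in FIG. 3) are assumptions.

```python
import numpy as np

def candidate_points(feature, ref_descriptors, n_candidates=3):
    """Indices of the reference feature points with the smallest Euclidean
    distance to the given feature, taken as candidate corresponding points."""
    d = np.linalg.norm(ref_descriptors - feature, axis=1)
    return np.argsort(d)[:n_candidates]
```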
the selection unit 111 is configured to select, from the multiple candidate corresponding points corresponding to each three-dimensional feature point selected by the candidate selection unit 110, the candidate corresponding points respectively corresponding to some of all the three-dimensional feature points of the current frame image. Specifically, the selection unit 111 is configured to select the some three-dimensional feature points from all the three-dimensional feature points of the current frame image according to the correct correspondence probability, with the reference feature points, of the feature points in each spatial region of the previous frame image, and to select the candidate corresponding point of each such three-dimensional feature point from the multiple candidate corresponding points selected by the candidate selection unit 110 according to the probability that the candidate corresponding points of the feature points in the previous frame image are selected.
the model calculation unit 112 is configured to calculate model parameters according to the candidate corresponding points respectively corresponding to the some three-dimensional feature points selected by the selection unit 111; the scoring unit 113 is configured to score the model corresponding to the model parameters calculated by the model calculation unit 112.
the external parameter determination unit 114 is configured to take, among the scores obtained as the selection unit 111, the model calculation unit 112, and the scoring unit 113 cyclically perform the steps of selecting candidate corresponding points, calculating model parameters, and scoring, the model parameters of the highest-scoring model as the camera external parameter of the current frame image, where:

the external parameter determination unit 114 is further configured to notify the selection unit 111, the model calculation unit 112, and the scoring unit 113 to stop performing the cyclic steps of selecting candidate corresponding points, calculating model parameters, and scoring if, in the loop, the probability that the some three-dimensional feature points and candidate corresponding points selected in K_s consecutive iterations contain an abnormal correspondence is less than a preset value, or the number of loop iterations exceeds a preset value, or the time spent performing the loop exceeds a preset value.

In this process an initial value of the camera external parameter needs to be set; when the external parameter determination unit 114 determines the camera external parameter of the current frame image, the score of the model corresponding to the finally determined camera external parameter is required to be higher than the score of the model corresponding to the initial value of the camera external parameter.
Further, since the model calculation unit 112 calculates the model parameters using the candidate corresponding points of only some (for example three) three-dimensional feature points, when the scoring unit 113 has obtained the highest-scoring model, the external parameter determination unit 114 may not take the model parameters of that model as the camera external parameter directly. Instead, the candidate selection unit 110 first converts all the three-dimensional feature points of the current frame image into the reference coordinate system using the model parameters of the highest-scoring model and calculates the corresponding reference feature points; the model calculation unit 112 then recalculates the model parameters from all the three-dimensional feature points that can find correct corresponding feature points among the reference feature points, and the external parameter determination unit 114 takes the recalculated model parameters as the final camera external parameter. Because more correct corresponding point pairs between the three-dimensional feature points and the reference feature points are used when calculating the model parameters, the final model parameters, and hence the camera external parameter obtained above, are more accurate.
In some embodiments, the first external parameter calculation unit 11 can directly obtain the camera external parameter of the current frame image through the above units according to steps A1 to A5; in other embodiments, the first external parameter calculation unit 11 may first calculate an initial camera external parameter and then obtain the optimized camera external parameter according to steps A1 to A5. Specifically:

the first external parameter calculation unit 11 is further configured to first match the three-dimensional feature points with the feature points in the previous frame image of the current frame image of the static object to obtain an initial camera external parameter; the specific method is similar to that of steps A1 to A5, except that the candidate selection unit 110 in the first external parameter calculation unit 11 needs to select, among the feature points of the previous frame image, the multiple candidate corresponding points closest to the three-dimensional feature points of the current frame image. The initial camera external parameter is then used as a condition for determining the camera external parameter according to the model scores, the three-dimensional feature points are matched with the reference feature points of the static object according to steps A1 to A5, and the camera external parameter of the current frame image is finally obtained.

Here the initial camera external parameter is used mainly in that, when the external parameter determination unit 114 determines the camera external parameter of the current frame image according to the model scores, the score of the model corresponding to the finally determined camera external parameter needs to be higher than the score of the model corresponding to the initial camera external parameter.
Referring to FIG. 12, in a specific embodiment, the static object reconstruction system may include, in addition to the structure shown in FIG. 10, a model generation unit 14, a model conversion unit 15, and a completion unit 16, and the second external parameter calculation unit 12 can specifically be implemented by a feature matching unit 120 and an external parameter obtaining unit 121. Specifically:

the feature matching unit 120 is configured to match the two-dimensional feature points with the three-dimensional feature points among the reference feature points and determine the three-dimensional reference feature points corresponding to the two-dimensional feature points;

the external parameter obtaining unit 121 is configured to determine the camera external parameter that minimizes a function of the camera pose in the reference coordinate system, where the function of the camera pose includes the correspondences between the two-dimensional feature points and the three-dimensional reference feature points, that is, the three-dimensional reference feature points corresponding to the two-dimensional feature points determined by the feature matching unit 120, and to take the determined camera external parameter as the camera external parameter of the current frame image. One concrete way to realize this 2D-3D pose step is sketched below.
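The patent does not name a specific solver; one concrete realization, under that caveat, is OpenCV's solvePnP, which minimizes the reprojection error of matched (2D point, 3D reference point) pairs and returns a pose mapping reference coordinates into the camera frame.

```python
import cv2
import numpy as np

def pose_from_2d3d(pts2d, ref3d, K):
    """Camera external parameter from 2D-3D matches (needs enough pairs)."""
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(ref3d, dtype=np.float32),   # 3D reference feature points
        np.asarray(pts2d, dtype=np.float32),   # matched 2D feature points
        K, distCoeffs=None)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)                 # rotation vector -> matrix
    T = np.eye(4); T[:3, :3] = R; T[:3, 3] = tvec.ravel()
    return T                                   # reference -> camera transform
```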
Further, the second external parameter calculation unit 12 may also include a correspondence selection unit 122:

the feature matching unit 120 is further configured to match the two-dimensional feature points with the two-dimensional feature points of the previous frame image of the current frame image of the static object and determine the two-dimensional feature points in the previous frame image corresponding to the two-dimensional feature points;

the correspondence selection unit 122 is configured to select multiple pairs of corresponding feature points that have depth data both at the two-dimensional feature points of the current frame image and at the corresponding feature points in the previous frame image;

the external parameter obtaining unit 121 is further configured to determine the relative camera external parameter between the current frame image and the previous frame image according to the depth change information of the pairs of corresponding feature points selected by the correspondence selection unit 122, and to determine the camera external parameter of the current frame image according to the relative camera external parameter and the camera external parameter of the previous frame image.
the model generation unit 14 is configured to generate, when it is determined that the first external parameter calculation unit 11 has failed to calculate the camera external parameter of the current frame image based on the three-dimensional feature points, a source model containing depth data from the collected two-dimensional data of multiple frame images of the static object, where the target model is formed from the depth data of the current frame image of the static object, and the transformation matrix is the matrix that minimizes a first energy function; the first energy function includes a distance term and a smooth term, the distance term representing the distances from vertices in the source model to the corresponding vertices in the target model, and the smooth term constraining the transformations of adjacent vertices. The model conversion unit 15 is configured to convert the source model generated by the model generation unit 14 into the target model through the transformation matrix, and the completion unit 16 is configured to complete the depth data lost in the current frame image according to the target model converted by the model conversion unit 15. In this way, by completing the depth data, the static object can still be reconstructed when depth data is lost. The general form of this first energy function is sketched below.
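For reference, a first energy function of the kind described can be written in the following general form. This is a sketch consistent with the two-term description only: the balancing weight α and the neighborhood set 𝒩 are assumptions, since the patent does not reproduce the formula.

```latex
% Distance term: source vertices v pulled toward their target correspondences c(v).
% Smooth term: transformations of adjacent vertices must agree when applied to
% the same vertex; alpha (assumed) balances the two terms.
E_1(\{T_v\}) =
  \sum_{v \in S} \lVert T_v v - c(v) \rVert^2
  + \alpha \sum_{(u,v) \in \mathcal{N}} \lVert T_u v - T_v v \rVert^2
```

Here S is the source model vertex set, c(v) the corresponding target model vertex, and T_v the transformation applied at vertex v; minimizing E_1 over the transformations yields the matrix used to convert the source model to the target model.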
In this embodiment, the feature matching unit 120 and the external parameter obtaining unit 121 can implement the camera pose calculation method based on 2D-3D matching points, and the feature matching unit 120, the external parameter obtaining unit 121, and the correspondence selection unit 122 can implement the camera pose calculation method based on 2D-2D matching points.

The second external parameter calculation unit 12 may also combine the two methods, and may further include an external parameter selection unit configured to first convert the three-dimensional feature points of the current frame image into the reference coordinate system with the camera external parameters obtained by the two methods respectively, and then calculate, for each, the proportion of the three-dimensional feature points of the current frame image that can find correspondences among the reference feature points; that is, if the distance between a three-dimensional feature point converted into the reference coordinate system and the nearest reference feature point is less than a preset value, it is considered that a correspondence can be found among the reference feature points. The camera external parameter with the higher proportion is then taken as the final camera external parameter of the current frame image, as sketched below.
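A minimal sketch of this external parameter selection, assuming both candidate poses map frame coordinates into the reference coordinate system and the 6 mm threshold from earlier in the text; the function names are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def choose_pose(pts3d, ref_points, pose_a, pose_b, max_dist=0.006):
    """Keep the pose whose inlier ratio against the reference points is higher."""
    tree = cKDTree(ref_points)
    homo = np.hstack([pts3d, np.ones((len(pts3d), 1))])
    def inlier_ratio(T):
        moved = (T @ homo.T).T[:, :3]          # frame points in reference coords
        dist, _ = tree.query(moved)
        return np.mean(dist < max_dist)        # fraction finding a correspondence
    return pose_a if inlier_ratio(pose_a) >= inlier_ratio(pose_b) else pose_b
```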
Referring to FIG. 13, in a specific embodiment, the static object reconstruction system may include, in addition to the structure shown in FIG. 10, a correspondence establishing unit 17, an adjustment unit 18, a merging unit 19, and an updating unit 20. Specifically:

the correspondence establishing unit 17 is configured to establish, by using the camera external parameter, the correspondences between the three-dimensional feature points in the current frame image and the reference feature points. The correspondence establishing unit 17 may first convert the three-dimensional feature points in the current frame image into the reference coordinate system through the camera external parameter, then calculate the spatial distances between the converted feature points and the respective reference feature points and find the nearest spatial distance; if the nearest spatial distance is less than a preset value, such as 6 mm, it establishes the correspondence between the reference feature point at the nearest spatial distance and the corresponding three-dimensional feature point.

the adjustment unit 18 is configured to adjust the camera external parameters of the N frame images of the static object so that the second energy function is minimized; in this process the adjustment unit 18 can also adjust the positions of the reference feature points corresponding to the feature points in the N frame images. Here the second energy function includes, for the three-dimensional feature points in the i-th frame image, the distances to the corresponding reference feature points converted into the i-th frame image coordinate system, where i is a positive integer from 0 to N.
In this way, after the first external parameter calculation unit 11 obtains the camera external parameter, the correspondence establishing unit 17 can establish the correspondences between the three-dimensional feature points in the current frame image and the reference feature points; after the first external parameter calculation unit 11 and the correspondence establishing unit 17 have processed the N frame images accordingly, the adjustment unit 18 can adjust the camera external parameters of the N frame images in the beam optimization manner. The conversion unit 13 is then specifically configured to convert the point cloud of the current frame image into the reference coordinate system through the camera external parameter of the current frame image obtained by the adjustment unit 18 adjusting the camera external parameter calculated by the first external parameter calculation unit 11.

If a camera external parameter was calculated by the second external parameter calculation unit 12 based on the two-dimensional feature points, the correspondence establishing unit 17 not only needs to establish the correspondences between the three-dimensional feature points and the reference feature points, but is also configured to establish, by using the camera external parameter, the correspondences between the two-dimensional feature points in the current frame image and the reference feature points; after the second external parameter calculation unit 12 and the correspondence establishing unit 17 have processed the N frame images accordingly, the adjustment unit 18 can adjust the camera external parameters of the N frame images in the beam optimization manner, where the second energy function used further includes, for the two-dimensional feature points in the i-th frame image, the distances to the corresponding reference feature points converted into the i-th frame image coordinate system.

If the camera external parameters of one part of the frame images were calculated based on the three-dimensional feature points and those of another part based on the two-dimensional feature points, the portion of the second energy function for the former frame images is calculated according to the first method described above (that is, steps 201 to 202) and the portion for the latter frame images according to the second method (that is, steps 301 to 302); details are not described here again.
In addition, global optimization may be implemented by the merging unit 19 and the updating unit 20. Specifically, the merging unit 19 is configured to merge, if the feature points in a certain frame image of the static object overlap with the feature points in another frame image, the feature points in that frame image that match the other frame image; the updating unit 20 is configured to obtain the updated reference feature points according to the feature points merged by the merging unit 19 and to update the camera external parameter of each frame image of the static object according to the updated reference feature points. The conversion unit 13 is then specifically configured to convert the point cloud of the current frame image into the reference coordinate system by using the camera external parameter of the current frame image obtained through the update by the updating unit 20.

Here, if more than a preset number of the first reference feature points corresponding to the feature points in the one frame image can find correspondences among the second reference feature points corresponding to the feature points in the other frame image, overlap is determined; alternatively, the camera external parameter of the one frame image can be compared with that of the other frame image, and if they are close the two frames can be considered to overlap.
An embodiment of the present invention further provides a static object reconstruction system, shown in FIG. 14, which includes a memory 22 and a processor 21 connected to a bus, and may further include an input/output device 23 connected to the bus, where:

the memory 22 is used to store data input from the input/output device 23, and may also store information such as files necessary for the processor 21 to process the data; the input/output device 23 may include external devices such as a display, a keyboard, a mouse, and a printer, and may include ports through which the static object reconstruction system communicates with other devices.
the processor 21 is configured to: acquire three-dimensional feature points and two-dimensional feature points in the current frame image of a static object, respectively; match the acquired three-dimensional feature points with reference feature points and calculate the camera external parameter of the current frame image, where the reference feature points are accumulated from the feature points on multiple frame images of the static object; if, when the camera external parameter of the current frame image is calculated based on the three-dimensional feature points, the camera external parameter is not obtained within a preset time, match the acquired two-dimensional feature points with the three-dimensional feature points among the reference feature points, or with the two-dimensional feature points of the previous frame image of the current frame image of the static object, and calculate the camera external parameter of the current frame image; and convert the point cloud of the current frame image into the reference coordinate system formed by the reference feature points through the calculated camera external parameter, so as to model the static object.
The processor 21 can obtain a feature description quantity for each feature point when acquiring the feature points. Specifically, the processor 21 can standardize the two-dimensional feature description quantity and the three-dimensional feature description quantity separately, that is, obtain the standard deviations of the two-dimensional and three-dimensional feature description quantities over their respective training sets and divide each feature description quantity by the corresponding standard deviation to obtain the standardized feature description quantity, and then combine the standardized two-dimensional and three-dimensional feature description quantities to obtain the feature description quantity of a three-dimensional feature point.

When calculating the camera external parameter based on the three-dimensional feature points, the processor 21 compares the feature description quantities of the three-dimensional feature points and the reference feature points, finds the reference feature points corresponding to all the three-dimensional feature points in the current frame image, and then calculates the camera external parameter from the corresponding reference feature points found; if no corresponding reference feature point is found for a certain three-dimensional feature point, the processor 21 may further add that three-dimensional feature point to the reference feature points.

When calculating the camera external parameter based on the two-dimensional feature points, the processor 21 may align the two-dimensional feature points of the current frame image to the three-dimensional feature points among the reference feature points and then calculate the camera external parameter from the corresponding three-dimensional reference feature points found. It may also align the two-dimensional feature points of the current frame image to the previous frame image, obtain a relative camera external parameter from the corresponding two-dimensional feature points in the current and previous frame images, and combine it with the previously calculated camera external parameter of the previous frame image to obtain the camera external parameter of the current frame image.
In this embodiment, when the processor 21 calculates the camera external parameter based on the three-dimensional feature points but does not obtain the camera external parameter within the preset time, the depth data collected by the depth camera is lost or damaged; the processor 21 then uses the two-dimensional feature points to calculate the camera external parameter and aligns the point cloud of the frame image according to that camera external parameter. By fusing two-dimensional and three-dimensional feature points in this way, a static object can still be successfully reconstructed when the depth data collected by the depth camera is lost or damaged.
Further, when calculating the camera external parameter based on the three-dimensional feature points, the processor 21 specifically: selects, among the reference feature points, multiple candidate corresponding points closest to the three-dimensional feature points of the current frame image; selects, from the multiple candidate corresponding points corresponding to each selected three-dimensional feature point, the candidate corresponding points respectively corresponding to some of all the three-dimensional feature points of the current frame image; calculates model parameters according to the candidate corresponding points respectively corresponding to the some three-dimensional feature points; scores the models corresponding to the calculated model parameters; and, cyclically performing the steps of selecting candidate corresponding points, calculating model parameters, and scoring, takes the model parameters of the highest-scoring model as the camera external parameter of the current frame image. In this process an initial value of the camera external parameter needs to be set, and when the processor 21 determines the camera external parameter of the current frame image, the score of the model corresponding to the finally determined camera external parameter is required to be higher than the score of the model corresponding to the initial value of the camera external parameter. Further, if, in the loop, the probability that the some three-dimensional feature points and candidate corresponding points selected in K_s consecutive iterations contain an abnormal correspondence is less than a preset value, or the number of loop iterations exceeds a preset value, or the time spent performing the loop exceeds a preset value, the processor 21 stops performing the cyclic steps of selecting candidate corresponding points, calculating model parameters, and scoring.

When selecting candidate corresponding points, the processor 21 may specifically, for a three-dimensional feature point of the current frame image, calculate the Euclidean distances between that three-dimensional feature point and all the reference feature points, sort these distances, and select several reference feature points with the smallest Euclidean distances as candidate corresponding points. When performing the selection, the processor 21 is specifically configured to select the some three-dimensional feature points from all the three-dimensional feature points of the current frame image according to the correct correspondence probability, with the reference feature points, of the feature points in each spatial region containing the static object in the previous frame image, and to select the candidate corresponding point of a three-dimensional feature point from the selected multiple candidate corresponding points according to the probability that the candidate corresponding points of the feature points in the previous frame image are selected.

Since the processor 21 calculates the model parameters using the candidate corresponding points of only some (for example three) three-dimensional feature points, when the highest-scoring model is obtained the processor 21 may not take its model parameters as the camera external parameter directly; it first uses the model parameters of the highest-scoring model to convert all the three-dimensional feature points of the current frame image into the reference coordinate system and calculates the corresponding reference feature points, then recalculates the model parameters from all the three-dimensional feature points that can find correct corresponding feature points among the reference feature points, and finally takes the recalculated model parameters as the final camera external parameter. Because more correct corresponding point pairs between the three-dimensional feature points and the reference feature points are used when calculating the model parameters, the final model parameters, and hence the camera external parameter obtained above, are more accurate.
In some embodiments, the processor 21 can directly obtain the camera external parameter of the current frame image according to the above steps A1 to A5; in other embodiments, the processor 21 can also first calculate an initial camera external parameter and then obtain the final camera external parameter of the current frame image according to steps A1 to A5. Specifically:

the processor 21 is further configured to first match the three-dimensional feature points with the feature points in the previous frame image of the current frame image of the static object to obtain an initial camera external parameter; the specific method is similar to that of steps A1 to A5, except that the processor 21 needs to select, among the feature points of the previous frame image, the multiple candidate corresponding points closest to the three-dimensional feature points of the current frame image. The processor 21 then uses the initial camera external parameter as a condition for determining the camera external parameter according to the model scores, and matches the three-dimensional feature points with the reference feature points of the static object according to steps A1 to A5 to obtain the optimized camera external parameter of the current frame image.

Here the initial camera external parameter is used mainly in that, when the processor 21 determines the camera external parameter of the current frame image according to the model scores, the score of the model corresponding to the finally determined camera external parameter is required to be higher than the score of the model corresponding to the initial camera external parameter.
When calculating the camera external parameter based on the two-dimensional feature points, the processor 21 may adopt the camera pose calculation method based on 2D-3D matching points, being specifically configured to match the two-dimensional feature points with the three-dimensional feature points among the reference feature points, determine the three-dimensional reference feature points corresponding to the two-dimensional feature points, determine the camera external parameter that minimizes a function of the camera pose in the reference coordinate system, where the function of the camera pose includes the correspondences between the two-dimensional feature points and the three-dimensional reference feature points, and take the determined camera external parameter as the camera external parameter of the current frame image.

The processor 21 may also adopt the camera pose calculation method based on 2D-2D matching points, specifically matching the two-dimensional feature points with the two-dimensional feature points of the previous frame image of the current frame image of the static object, selecting pairs of corresponding feature points that both have depth data, determining the relative camera external parameter from their depth change information, and determining the camera external parameter of the current frame image from the relative camera external parameter and the camera external parameter of the previous frame image.

The processor 21 may also combine the two methods of calculating the camera external parameter based on the two-dimensional feature points, being configured to first convert the three-dimensional feature points of the current frame image into the reference coordinate system with the camera external parameters obtained by the two methods respectively, then calculate, for each, the proportion of the three-dimensional feature points of the current frame image that can find correspondences among the reference feature points (that is, a correspondence is considered found if the distance between a three-dimensional feature point converted into the reference coordinate system and the nearest reference feature point is less than a preset value), and take the camera external parameter with the higher proportion as the final camera external parameter of the current frame image.
Further, the processor 21 is further configured to: when it is determined that calculating the camera external parameter of the current frame image based on the three-dimensional feature points has failed, generate a source model containing depth data from the collected two-dimensional data of multiple frame images of the static object, where the target model is formed from the depth data of the current frame image of the static object and the transformation matrix is the matrix that minimizes the first energy function, the first energy function including a distance term and a smooth term, the distance term representing the distances from vertices in the source model to the corresponding vertices in the target model and the smooth term constraining the transformations of adjacent vertices; convert the generated source model into the target model through the transformation matrix; and complete the depth data lost in the current frame image according to the converted target model.
In a specific embodiment, the processor 21 can optimize the camera external parameters in the following ways. After calculating a camera external parameter, the processor 21 may establish the correspondences between the three-dimensional feature points in the current frame image and the reference feature points by using the camera external parameter, and may then adjust the camera external parameters of the N frame images in the beam optimization manner, specifically adjusting the camera external parameters of the N frame images of the static object so that the second energy function is minimized. During the adjustment, the positions of the reference feature points corresponding to the feature points in the N frame images may also be adjusted, where the second energy function includes, for the three-dimensional feature points in the i-th frame image, the distances to the corresponding reference feature points converted into the i-th frame image coordinate system, i being a positive integer from 0 to N. Finally the processor 21 is configured to convert the point cloud of the current frame image into the reference coordinate system through the adjusted camera external parameter of the current frame image.

When establishing the correspondences, the processor 21 may first convert the three-dimensional feature points in the current frame image into the reference coordinate system by using the camera external parameter, then calculate the spatial distances between the converted feature points and the respective reference feature points and find the nearest spatial distance; if the nearest spatial distance is less than a preset value, such as 6 mm, the correspondence between the reference feature point at the nearest spatial distance and the corresponding three-dimensional feature point is established.
If a camera external parameter was calculated by the processor 21 based on the two-dimensional feature points, it is necessary not only to establish the correspondences between the three-dimensional feature points and the reference feature points, but also to establish, by using the camera external parameter, the correspondences between the two-dimensional feature points in the current frame image and the reference feature points; after the three-dimensional and two-dimensional feature points of the N frame images have been associated with the reference feature points in this way, the processor 21 may adjust the camera external parameters of the N frame images in the beam optimization manner. In this case the second energy function used by the processor 21 when adjusting the camera external parameters further includes, for the two-dimensional feature points in the i-th frame image, the distances to the corresponding reference feature points converted into the i-th frame image coordinate system.

If the camera external parameters of one part of the frame images were calculated based on the three-dimensional feature points and those of another part based on the two-dimensional feature points, then, in the second energy function used by the processor 21 for the beam optimization of the N frame images, the portion for the former frame images is calculated according to the first method described above (that is, steps 201 to 202) and the portion for the latter frame images according to the second method (that is, steps 301 to 302); this is not repeated here.
In another specific embodiment, the processor 21 may be further configured to: if the feature points in a certain frame image of the static object overlap with the feature points in another frame image, merge the feature points in that frame image that match the other frame image to obtain updated reference feature points, and then update the camera external parameter of each frame image of the static object according to the updated reference feature points; the point cloud of the current frame image is then converted into the reference coordinate system through the updated camera external parameter of the current frame image.

Here, if more than a preset number of the first reference feature points corresponding to the feature points in the one frame image can find correspondences among the second reference feature points corresponding to the feature points in the other frame image, overlap is determined; alternatively, the camera external parameters of the two frame images can be compared, and if they are close the frames can be considered to overlap.
A person of ordinary skill in the art may understand that all or some of the steps of the methods in the above embodiments may be implemented by a program instructing relevant hardware; the program may be stored in a computer-readable storage medium, and the storage medium may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and the like.

Abstract

The embodiments of the present invention disclose a static object reconstruction method and system, applied to the technical field of graphics and image processing. In the embodiments of the present invention, when the static object reconstruction system calculates the camera external parameter based on three-dimensional feature points but does not obtain the camera external parameter within a preset time, this indicates that the depth data collected by the depth camera is lost or damaged; two-dimensional feature points are then used to calculate the camera external parameter, and the point cloud of a frame image is aligned according to that camera external parameter. By fusing two-dimensional and three-dimensional feature points in this way, a static object can still be successfully reconstructed when the depth data collected by the depth camera is lost or damaged.

Description

Static object reconstruction method and system

This application claims priority to Chinese Patent Application No. 201410101540.9, filed with the Chinese Patent Office on March 18, 2014 and entitled "STATIC OBJECT RECONSTRUCTION METHOD AND SYSTEM", which is incorporated herein by reference in its entirety.
TECHNICAL FIELD

The present invention relates to the technical field of graphics and image processing, and in particular to a static object reconstruction method and system.
BACKGROUND

Object reconstruction has many applications in computer graphics and computer vision, such as film special effects, three-dimensional (Three Dimensions, 3D) games, virtual reality, and human-computer interaction. Most reconstruction systems can reconstruct detailed three-dimensional models; these systems mainly use multiple synchronized cameras or three-dimensional scanning devices (such as laser and structured-light cameras) to collect information about an object and then perform modeling. However, the expensive equipment and the complex, cumbersome user interaction interfaces greatly limit the application of these reconstruction systems. Since Microsoft launched the Kinect depth camera, RGB-D cameras derived from it, being inexpensive and easy to operate, have been widely used in research related to object modeling.

Specifically, an RGB-D camera collects two-dimensional image information and depth information of an object, and the reconstruction system then performs modeling based on the two-dimensional image and the depth information; however, when the depth information collected by the RGB-D camera is lost, reconstruction of the static object fails.
SUMMARY

The embodiments of the present invention provide a static object reconstruction method and system, so that a static object can be reconstructed when depth data collected by a depth camera is lost.
A first aspect of the embodiments of the present invention provides a static object reconstruction method, including:

acquiring three-dimensional feature points and two-dimensional feature points in a current frame image of a static object, respectively;

matching the three-dimensional feature points with reference feature points and calculating a camera external parameter of the current frame image, where the reference feature points are accumulated from feature points on multiple frame images of the static object;

if, when the camera external parameter of the current frame image is calculated based on the three-dimensional feature points, the camera external parameter is not obtained within a preset time, matching the two-dimensional feature points with three-dimensional feature points among the reference feature points, or with two-dimensional feature points in a previous frame image of the current frame image of the static object, and calculating the camera external parameter of the current frame image;

converting, through the calculated camera external parameter, a point cloud of the current frame image into a reference coordinate system composed of the reference feature points, so as to model the static object.
In a first possible implementation of the first aspect of the embodiments of the present invention, the matching the three-dimensional feature points with reference feature points and calculating a camera external parameter of the current frame image specifically includes:

selecting, among the reference feature points, multiple candidate corresponding points closest to the three-dimensional feature points of the current frame image;

selecting candidate corresponding points respectively corresponding to some of all the three-dimensional feature points of the current frame image;

calculating model parameters according to the candidate corresponding points respectively corresponding to the some three-dimensional feature points;

scoring the model corresponding to the calculated model parameters;

cyclically performing the steps of selecting candidate corresponding points, calculating model parameters, and scoring, and taking the model parameters of the highest-scoring model as the camera external parameter of the current frame image, where:

if, in the loop, the probability that the some three-dimensional feature points and candidate corresponding points selected in K_s consecutive iterations contain an abnormal correspondence is less than a preset value, or the number of loop iterations exceeds a preset value, or the time spent performing the loop exceeds a preset value, the cyclic steps of selecting candidate corresponding points, calculating model parameters, and scoring are stopped.
With reference to the first possible implementation of the first aspect of the embodiments of the present invention, in a second possible implementation of the first aspect, the selecting candidate corresponding points respectively corresponding to some of all the three-dimensional feature points of the current frame image specifically includes:

selecting the some three-dimensional feature points from all the three-dimensional feature points of the current frame image according to the correct correspondence probability, with the reference feature points, of the feature points in each spatial region containing the static object in the previous frame image;

selecting the candidate corresponding points of the three-dimensional feature points from the multiple candidate corresponding points according to the probability that candidate corresponding points of the feature points in the previous frame image are selected.
With reference to the first or second possible implementation of the first aspect of the embodiments of the present invention, in a third possible implementation of the first aspect, before the matching the three-dimensional feature points with reference feature points and calculating the camera external parameter of the current frame image, the method further includes:

matching the three-dimensional feature points with feature points in the previous frame image of the current frame image of the static object to obtain an initial camera external parameter;

and the matching the three-dimensional feature points with reference feature points and calculating the camera external parameter of the current frame image specifically includes: using the initial camera external parameter as a condition for determining the camera external parameter according to model scores, matching the three-dimensional feature points with the reference feature points of the static object, and finally obtaining the camera external parameter of the current frame image.
With reference to the first aspect of the embodiments of the present invention, or any one of the first to third possible implementations of the first aspect, in a fourth possible implementation of the first aspect, the matching the two-dimensional feature points with three-dimensional feature points among the reference feature points and calculating the camera external parameter of the current frame image specifically includes:

matching the two-dimensional feature points with the three-dimensional feature points among the reference feature points, and determining three-dimensional reference feature points corresponding to the two-dimensional feature points;

determining the camera external parameter that minimizes a function of the camera pose in the reference coordinate system, where the function of the camera pose includes the correspondence between the two-dimensional feature points and the three-dimensional reference feature points;

taking the determined camera external parameter as the camera external parameter of the current frame image.
With reference to the first aspect of the embodiments of the present invention, or any one of the first to third possible implementations of the first aspect, in a fifth possible implementation of the first aspect, the matching the two-dimensional feature points with two-dimensional feature points of the previous frame image of the current frame image of the static object and calculating the camera external parameter of the current frame image specifically includes:

matching the two-dimensional feature points with the two-dimensional feature points of the previous frame image of the current frame image of the static object, and determining the two-dimensional feature points in the previous frame image corresponding to the two-dimensional feature points;

selecting multiple pairs of corresponding feature points that have depth data both at the two-dimensional feature points of the current frame image and at the corresponding two-dimensional feature points in the previous frame image;

determining the relative camera external parameter between the current frame image and the previous frame image according to the depth change information of the selected pairs of corresponding feature points;

determining the camera external parameter of the current frame image according to the relative camera external parameter and the camera external parameter of the previous frame image.
With reference to the first aspect of the embodiments of the present invention, or any one of the first to fifth possible implementations of the first aspect, in a sixth possible implementation of the first aspect, if, when the camera external parameter of the current frame image is calculated based on the three-dimensional feature points, the camera external parameter is not obtained within the preset time, the method further includes:

generating a source model containing depth data from the collected two-dimensional data of multiple frame images of the static object;

converting the source model into a target model through a transformation matrix;

completing the depth data lost in the current frame image according to the converted target model;

where the target model is formed from the depth data of the current frame image of the static object, the transformation matrix is a matrix that minimizes a first energy function, and the first energy function includes a distance term and a smooth term, the distance term being used to represent the distances from vertices in the source model to the corresponding vertices in the target model and the smooth term being used to constrain the transformations of adjacent vertices.
With reference to the first aspect of the embodiments of the present invention, or any one of the first to sixth possible implementations of the first aspect, in a seventh possible implementation of the first aspect, after the calculating the camera external parameter of the current frame image, the method further includes:

establishing, through the camera external parameter, correspondences between the three-dimensional feature points in the current frame image and the reference feature points;

adjusting the camera external parameters of N frame images of the static object so that a second energy function is minimized, where the second energy function includes, for the three-dimensional feature points in the i-th frame image, the distances to the corresponding reference feature points converted into the i-th frame image coordinate system, i being a positive integer from 0 to N;

and the converting, through the calculated camera external parameter, the point cloud of the current frame image into the reference coordinate system composed of the reference feature points specifically includes: converting the point cloud of the current frame image into the reference coordinate system through the camera external parameter obtained by performing the adjustment on the calculated camera external parameter.
With reference to the seventh possible implementation of the first aspect of the embodiments of the present invention, in an eighth possible implementation of the first aspect, if the camera external parameter was calculated based on the two-dimensional feature points, after the calculating the camera external parameter of the current frame image the method further includes:

establishing, through the camera external parameter, correspondences between the two-dimensional feature points in the current frame image and the reference feature points;

and the second energy function further includes, for the two-dimensional feature points in the i-th frame image, the distances to the corresponding reference feature points converted into the i-th frame image coordinate system.
With reference to the first aspect of the embodiments of the present invention, or any one of the first to sixth possible implementations of the first aspect, in a ninth possible implementation of the first aspect, after the calculating the camera external parameter of the current frame image, the method further includes:

if the feature points in a certain frame image of the static object overlap with the feature points in another frame image, merging the feature points in that frame image that match the other frame image, and obtaining updated reference feature points;

updating the camera external parameter of each frame image of the static object according to the updated reference feature points;

and the converting, through the calculated camera external parameter, the point cloud of the current frame image into the reference coordinate system composed of the reference feature points specifically includes: converting the point cloud of the current frame image into the reference coordinate system through the camera external parameter obtained by performing the update on the calculated camera external parameter.
A second aspect of the embodiments of the present invention provides a static object reconstruction system, including:

a feature acquiring unit, configured to acquire three-dimensional feature points and two-dimensional feature points in a current frame image of a static object, respectively;

a first external parameter calculation unit, configured to match the three-dimensional feature points acquired by the feature acquiring unit with reference feature points and calculate a camera external parameter of the current frame image, where the reference feature points are accumulated from feature points on multiple frame images of the static object;

a second external parameter calculation unit, configured to: if the first external parameter calculation unit, when calculating the camera external parameter of the current frame image based on the three-dimensional feature points, does not obtain the camera external parameter within a preset time, match the two-dimensional feature points acquired by the feature acquiring unit with three-dimensional feature points among the reference feature points, or with two-dimensional feature points in the previous frame image of the current frame image of the static object, and calculate the camera external parameter of the current frame image;

a conversion unit, configured to convert a point cloud of the current frame image into a reference coordinate system composed of the reference feature points through the camera external parameter calculated by the first external parameter calculation unit or the second external parameter calculation unit, so as to model the static object.
In a first possible implementation of the second aspect of the present invention, the first external parameter calculation unit specifically includes:

a candidate selection unit, configured to select, among the reference feature points, multiple candidate corresponding points closest to the three-dimensional feature points of the current frame image;

a selection unit, configured to select candidate corresponding points respectively corresponding to some of all the three-dimensional feature points of the current frame image;

a model calculation unit, configured to calculate model parameters according to the candidate corresponding points respectively corresponding to the some three-dimensional feature points selected by the selection unit;

a scoring unit, configured to score the model corresponding to the model parameters calculated by the model calculation unit;

an external parameter determination unit, configured to take, among the scores obtained as the selection unit, the model calculation unit, and the scoring unit cyclically perform the steps of selecting candidate corresponding points, calculating model parameters, and scoring, the model parameters of the highest-scoring model as the camera external parameter of the current frame image, where:

the external parameter determination unit is further configured to notify the selection unit, the model calculation unit, and the scoring unit to stop performing the cyclic steps of selecting candidate corresponding points, calculating model parameters, and scoring if, in the loop, the probability that the some three-dimensional feature points and candidate corresponding points selected in K_s consecutive iterations contain an abnormal correspondence is less than a preset value, or the number of loop iterations exceeds a preset value, or the time spent performing the loop exceeds a preset value.
With reference to the first possible implementation of the second aspect of the embodiments of the present invention, in a second possible implementation of the second aspect:

the selection unit is specifically configured to select the some three-dimensional feature points from all the three-dimensional feature points of the current frame image according to the correct correspondence probability, with the reference feature points, of the feature points in each spatial region containing the static object in the previous frame image, and to select the candidate corresponding points of the three-dimensional feature points from the multiple candidate corresponding points according to the probability that the candidate corresponding points of the feature points in the previous frame image are selected.
With reference to the first or second possible implementation of the second aspect of the embodiments of the present invention, in a third possible implementation of the second aspect:

the first external parameter calculation unit is further configured to match the three-dimensional feature points with the feature points in the previous frame image of the current frame image of the static object to obtain an initial camera external parameter, use the initial camera external parameter as a condition for determining the camera external parameter according to model scores, match the three-dimensional feature points with the reference feature points of the static object, and finally obtain the camera external parameter of the current frame image.
With reference to the second aspect of the embodiments of the present invention, or any one of the first to third possible implementations of the second aspect, in a fourth possible implementation of the second aspect, the second external parameter calculation unit specifically includes:

a feature matching unit, configured to match the two-dimensional feature points with the three-dimensional feature points among the reference feature points and determine three-dimensional reference feature points corresponding to the two-dimensional feature points;

an external parameter obtaining unit, configured to determine the camera external parameter that minimizes a function of the camera pose in the reference coordinate system, where the function of the camera pose includes the correspondence between the two-dimensional feature points and the three-dimensional reference feature points, and to take the determined camera external parameter as the camera external parameter of the current frame image.
With reference to the fourth possible implementation of the second aspect of the embodiments of the present invention, in a fifth possible implementation of the second aspect, the second external parameter calculation unit further includes a correspondence selection unit;

the feature matching unit is further configured to match the two-dimensional feature points with the two-dimensional feature points of the previous frame image of the current frame image of the static object and determine the two-dimensional feature points in the previous frame image corresponding to the two-dimensional feature points;

the correspondence selection unit is configured to select multiple pairs of corresponding feature points that have depth data both at the two-dimensional feature points of the current frame image and at the feature points in the previous frame image corresponding to the two-dimensional feature points;

the external parameter obtaining unit is further configured to determine the relative camera external parameter between the current frame image and the previous frame image according to the depth change information of the pairs of corresponding feature points selected by the correspondence selection unit, and to determine the camera external parameter of the current frame image according to the relative camera external parameter and the camera external parameter of the previous frame image.
With reference to the second aspect of the embodiments of the present invention, or any one of the first to fifth possible implementations of the second aspect, in a sixth possible implementation of the second aspect, the system further includes:

a model generation unit, configured to generate, when it is determined that calculating the camera external parameter of the current frame image based on the three-dimensional feature points has failed, a source model containing depth data from the collected two-dimensional data of multiple frame images of the static object, where the target model is formed from the depth data of the current frame image of the static object, the transformation matrix is a matrix that minimizes a first energy function, and the first energy function includes a distance term and a smooth term, the distance term being used to represent the distances from vertices in the source model to the corresponding vertices in the target model and the smooth term being used to constrain the transformations of adjacent vertices;

a model conversion unit, configured to convert the source model generated by the model generation unit into the target model through the transformation matrix;

a completion unit, configured to complete the depth data lost in the current frame image according to the target model converted by the model conversion unit.
With reference to the second aspect of the embodiments of the present invention, or any one of the first to sixth possible implementations of the second aspect, in a seventh possible implementation of the second aspect, the system further includes:

a correspondence establishing unit, configured to establish, through the camera external parameter, correspondences between the three-dimensional feature points in the current frame image and the reference feature points;

an adjustment unit, configured to adjust the camera external parameters of N frame images of the static object so that a second energy function is minimized, where the second energy function includes, for the three-dimensional feature points in the i-th frame image, the distances to the corresponding reference feature points converted into the i-th frame image coordinate system, i being a positive integer from 0 to N;

and the conversion unit is specifically configured to convert the point cloud of the current frame image into the reference coordinate system through the camera external parameter of the current frame image obtained through the adjustment by the adjustment unit.
With reference to the seventh possible implementation of the second aspect of the embodiments of the present invention, in an eighth possible implementation of the second aspect:

the correspondence establishing unit is further configured to establish, if the camera external parameter was calculated based on the two-dimensional feature points, correspondences between the two-dimensional feature points in the current frame image and the reference feature points through the camera external parameter;

and the second energy function used by the adjustment unit when adjusting the camera external parameters further includes, for the two-dimensional feature points in the i-th frame image, the distances to the corresponding reference feature points converted into the i-th frame image coordinate system.
With reference to the second aspect of the embodiments of the present invention, or any one of the first to sixth possible implementations of the second aspect, in a ninth possible implementation of the second aspect, the system further includes:

a merging unit, configured to merge, if the feature points in a certain frame image of the static object overlap with the feature points in another frame image, the feature points in that frame image that match the other frame image;

an updating unit, configured to obtain updated reference feature points according to the feature points merged by the merging unit and update the camera external parameter of each frame image of the static object according to the updated reference feature points;

and the conversion unit is specifically configured to convert the point cloud of the current frame image into the reference coordinate system through the camera external parameter of the current frame image obtained through the update by the updating unit.
In the embodiments of the present invention, when the static object reconstruction system calculates the camera external parameter based on the three-dimensional feature points but does not obtain the camera external parameter within the preset time, this indicates that the depth data collected by the depth camera is lost or damaged; the two-dimensional feature points are then used to calculate the camera external parameter, and the point cloud of the frame image is aligned according to the camera external parameter. By fusing two-dimensional and three-dimensional feature points in this way, a static object can still be successfully reconstructed when the depth data collected by the depth camera is lost or damaged.
BRIEF DESCRIPTION OF THE DRAWINGS

To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Apparently, the accompanying drawings in the following description show merely some embodiments of the present invention, and a person of ordinary skill in the art may still derive other drawings from these accompanying drawings without creative effort.
FIG. 1 is a flowchart of a static object reconstruction method provided in an embodiment of the present invention;

FIG. 2 is a flowchart of a method for calculating a camera external parameter based on three-dimensional feature points in an embodiment of the present invention;

FIG. 3 is a schematic comparison, in an embodiment of the present invention, between selecting one corresponding point among the reference feature points and selecting multiple candidate corresponding points among the reference feature points;

FIG. 4 is a flowchart of a method for calculating a camera external parameter based on two-dimensional feature points and performing depth data completion in an embodiment of the present invention;

FIG. 5 is a schematic diagram of depth data completion in an embodiment of the present invention;

FIG. 6 is a flowchart of another static object reconstruction method provided in an embodiment of the present invention;

FIG. 7 is a flowchart of another static object reconstruction method provided in an embodiment of the present invention;

FIG. 8 is a flowchart of another static object reconstruction method provided in an embodiment of the present invention;

FIG. 9 is a global schematic diagram of static object reconstruction provided in an embodiment of the present invention;

FIG. 10 is a schematic structural diagram of a static object reconstruction system provided in an embodiment of the present invention;

FIG. 11 is a schematic structural diagram of another static object reconstruction system provided in an embodiment of the present invention;

FIG. 12 is a schematic structural diagram of another static object reconstruction system provided in an embodiment of the present invention;

FIG. 13 is a schematic structural diagram of another static object reconstruction system provided in an embodiment of the present invention;

FIG. 14 is a schematic structural diagram of another static object reconstruction system provided in an embodiment of the present invention.
DESCRIPTION OF EMBODIMENTS

The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are merely some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.

The terms "first", "second", "third", "fourth", and the like (if any) in the specification, the claims, and the above accompanying drawings of the present invention are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data termed in this way are interchangeable in appropriate circumstances, so that the embodiments of the present invention described here can, for example, be implemented in orders other than those illustrated or described here. Moreover, the terms "include" and "have" and any variants of them are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that includes a series of steps or units is not necessarily limited to those steps or units expressly listed, but may include other steps or units that are not expressly listed or that are inherent to the process, method, product, or device.

Feature points are mentioned in many places in the following text; unless it is specifically indicated whether a feature point is a three-dimensional feature point or a two-dimensional feature point, the feature point may include three-dimensional feature points and/or two-dimensional feature points, the specific kind depending on the processing method used.
An embodiment of the present invention provides a static object reconstruction method. The method mainly uses a depth camera, such as an RGB-D camera, to capture multiple frame images of a static object from various directions, for example from every direction around a full circle of the static object, where each frame image is an image of the static object captured by the depth camera in a certain direction; one frame of image data may include two-dimensional information such as color, and may also include three-dimensional information such as depth data. A static object reconstruction system then models the static object according to the multiple frame images captured by the depth camera. The method of the present invention is performed by the static object reconstruction system; its flowchart, shown in FIG. 1, includes:
Step 101: Acquire three-dimensional feature points and two-dimensional feature points in the current frame image of the static object, respectively.

It can be understood that the static object reconstruction system acquires the multiple frame images captured by the depth camera and processes the data of each frame image according to steps 101 to 105. For a given frame image (that is, the current frame image), the static object reconstruction system first needs to perform feature extraction on the current frame image. Since the data of one frame image captured by the depth camera includes both two-dimensional and three-dimensional information, in this embodiment both the feature points having three-dimensional information, namely three-dimensional feature points, and the feature points having only two-dimensional information, namely two-dimensional feature points, need to be extracted.
Specifically, when acquiring the feature points, the surface texture of the static object can be used to extract the two-dimensional feature points, for example by using the Scale-invariant feature transform (SIFT) method to extract the two-dimensional feature points in the current frame image. However, some static objects have little surface texture, and only a few traditional feature points can be extracted from them. To extract more, and stable, features, in this embodiment the static object reconstruction system, when extracting three-dimensional feature points, uses geometric information, or corner points extracted using texture and geometric information, as the three-dimensional feature points, for example by using the Fast Point Feature Histograms (FPFH) method. To facilitate later computation, each extracted two-dimensional and three-dimensional feature point needs to correspond to a feature description quantity; the feature description quantity of a two-dimensional feature point may include information such as color information and two-dimensional coordinates, and the feature description quantity of a three-dimensional feature point may include information such as depth data and three-dimensional coordinates, and may also include some color information. A corner point is a point in an image that is representative and robust (that is, it can be located stably even under noise interference), such as a local bright or dark point, the endpoint of a line segment, or a point of maximum curvature on a curve.
Specifically, in this embodiment of the present invention, for the feature description quantity of a three-dimensional feature point, the static object reconstruction system may concatenate the two-dimensional feature description quantity and the three-dimensional feature description quantity of the three-dimensional feature point into one set. First the static object reconstruction system standardizes the two-dimensional feature description quantity and the three-dimensional feature description quantity separately, that is, in the training sets of the two-dimensional and three-dimensional feature description quantities, the standard deviations of the two-dimensional and three-dimensional feature description quantities are obtained respectively, and each feature description quantity is divided by the corresponding standard deviation to obtain the standardized feature description quantity; then the standardized two-dimensional and three-dimensional feature description quantities are combined to obtain the feature description quantity of the three-dimensional feature point. For example:

f = (α·f_2D, β·f_3D), where f_2D is the standardized two-dimensional feature description quantity, f_3D is the standardized three-dimensional feature description quantity, and α and β are the coefficients of the two-dimensional and three-dimensional feature description quantities respectively, which adjust the influence of the two parts on the whole feature description quantity.
步骤102，将三维特征点与参考特征点进行匹配，计算当前帧图像的相机外参，其中，参考特征点是静态物体的多个帧图像上的特征点累积形成的，相机外参是指深度相机在拍摄当前帧图像时，深度相机在三维空间中的位置和朝向等参数信息，每一帧图像对应一个相机外参。
将三维特征点与参考特征点进行匹配主要是通过三维特征点与参考特征点各自的特征描述量进行比较，找到当前帧图像中所有三维特征点分别对应的参考特征点，即将当前帧图像的三维特征点对齐到参考特征点所组成的参考特征坐标系下，然后就可以通过找到的对应的参考特征点计算相机外参；若某一三维特征点没有找到对应的参考特征点，则将该三维特征点加入到参考特征点中。其中，对于任一个三维特征点，可以分别计算该三维特征点与所有参考特征点之间的欧式距离，并将欧式距离最近的参考特征点作为该三维特征点对应的参考特征点。
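上述最近邻匹配过程可以用如下片段示意（基于SciPy的cKDTree；其中的距离阈值max_dist为本文举例假设，并非本发明给定的参数）：

```python
import numpy as np
from scipy.spatial import cKDTree

def align_or_extend(desc_cur, desc_ref, max_dist=0.5):
    """为每个三维特征点找欧式距离最近的参考特征点;找不到对应的加入参考集(示意)。

    desc_cur: (m, d) 当前帧三维特征点的特征描述量
    desc_ref: (n, d) 参考特征点的特征描述量
    """
    dist, idx = cKDTree(desc_ref).query(desc_cur, k=1)
    matches, new_refs = [], []
    for j, (d, i) in enumerate(zip(dist, idx)):
        if d <= max_dist:
            matches.append((j, int(i)))      # 当前帧特征点 j -> 参考特征点 i
        else:
            new_refs.append(desc_cur[j])     # 未匹配上的三维特征点加入参考特征点
    if new_refs:
        desc_ref = np.vstack([desc_ref, np.asarray(new_refs)])
    return matches, desc_ref
```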
步骤103,判断上述步骤102中基于三维特征点计算当前帧图像的相机外参时,是否在预置的时间内得到该相机外参,如果得到,则通过上述步骤102中计算的相机外参,执行步骤105;如果未得到,则执行步骤104,通过基于二维特征点计算的相机外参,执行步骤105。
如果通过上述步骤102在预置的时间都没有计算出相机外参,则认为基于三维特征点计算相机外参失败,而造成相机外参计算失败的原因有多种,本实施例中,是认为深度相机拍摄的当前帧图像中的部分或全部的深度数据丢失或损坏,则需要执行步骤104,即通过二维特征点来计算相机外参。
步骤104,将二维特征点与参考特征点中的三维特征点匹配,或与静态物体的当前帧图像的前一帧图像中的二维特征点进行匹配,计算当前帧图像的相机外参。
由于三维特征点中包括了二维特征描述量，则静态物体重建系统能够将当前帧图像的二维特征点对齐到参考特征点中的三维特征点，然后根据找到的对应的参考特征点中的三维特征点，计算相机外参。在另一种具体实施例中，静态物体重建系统可以将当前帧图像的二维特征点对齐到前一帧图像中，然后再计算相机外参，具体地，是根据当前帧图像与前一帧图像中对应的二维特征点得到相对相机外参，然后再结合之前计算的前一帧图像的相机外参，从而就可以得到当前帧图像的相机外参。
步骤105,通过计算的相机外参,将当前帧图像的点云转换到参考特征点组成的参考坐标系下,以对静态物体进行建模。其中当前帧图像的点云是指在同一空间坐标系下表达目标空间分布和目标表面特性的海量点集合,上述二维特征点和三维特征点只是点云中的部分点。
可见,在本发明实施例中,当静态物体重建系统在基于三维特征点计算相 机外参时,未在预置的时间内得到该相机外参,则说明深度相机采集的深度数据丢失或损坏,会采用二维特征点来计算相机外参,从而根据相机外参实现某一帧图像中点云的对齐,这样融合了二维特征点和三维特征点,可以实现当深度相机采集的深度数据丢失或损坏时,也能成功重建静态物体。
参考图2所示,在一个具体的实施例中,静态物体重建系统在执行上述步骤102中基于三维特征点计算当前帧图像的相机外参时,具体可以通过如下步骤来实现:
A1:在参考特征点中,选择与当前帧图像的三维特征点距离最近的多个候选对应点。
在具体实施例中,静态物体重建系统采用最近邻特征匹配方法进行特征点之间的匹配,如果直接根据相关算法在参考特征点中确定三维特征点对应一个参考特征点,可能会找到错误对应的参考特征点,则为了提高正确对应率,本发明实施例中,可以先为每一个三维特征点找到几个候选对应点,然后再进行进一步地计算。具体地,对于当前帧图像的某一个三维特征点,计算该三维特征点与所有参考特征点之间的欧式距离,并对这些欧式距离排序,选取欧式距离较小的多个参考特征点作为候选对应点。
例如图3(a)所示,f1帧图像上的特征点在f2帧图像上可能找到错误的对应特征点,图3(a)中用虚线表示错误的对应特征点,比如f1帧图像中头部的特征点对应了f2帧图像中腹部的特征点,使得f1和f2不能正确对齐。又如图3(b)所示,如果f1帧图像上每一个特征点在f2帧图像上找多个最近邻的候选对应点,比如对于f1帧图像中头部的特征点,在f2帧图像中找到候选对应点1、2和3,则正确的对应特征点就包含了在里面(用实线表示)。
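下面的片段示意步骤A1中为每个三维特征点选取 b 个最近候选对应点的做法（b 的取值仅为示例）：

```python
import numpy as np
from scipy.spatial import cKDTree

def select_candidates(desc_cur, desc_ref, b=3):
    """按特征描述量的欧式距离,为当前帧每个三维特征点选取 b 个最近候选对应点(示意)。"""
    dist, idx = cKDTree(desc_ref).query(desc_cur, k=b)
    return idx, dist   # 均为 (m, b),按距离从小到大排列
```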
A2:选取当前帧图像的所有三维特征点中部分三维特征点分别对应的候选对应点。
为了计算当前帧的相机外参，可以从上述步骤A1选择的每个三维特征点对应的多个候选对应点中，选取当前帧图像的三维特征点正确对应的特征点，并计算出相机外参。在具体实现时，可以采用模型参数计算方法来得到相机外参，比如使用随机样本一致性（Random Sample Consensus，RANSAC）的模型参数计算方法。但是由于在本实施例中，每一个三维特征点都对应多个候选对应点，普通的RANSAC方法计算复杂且耗时，因此提出了一个基于先验知识的多个候选对应RANSAC（Prior-based Multi-Candidates RANSAC，PMCSAC）方法，主要思想是：由于当前帧图像和前一帧图像的数据相似，可以利用前一帧图像与参考特征点匹配的概率分布来引导对当前帧图像的随机选取。具体可以通过步骤A2到A4来实现，而其中的步骤A2又可以通过如下步骤来实现：
A21,根据前一帧图像中包含上述静态物体的各个空间区域内特征点,与参考特征点的正确对应概率,从当前帧图像的所有三维特征点中选取部分三维特征点,其中选择多少个三维特征点主要根据后续计算模型参数的方法来决定。
A22,根据所述前一帧图像中特征点对应的某一候选对应点被选取的概率,从上述步骤A1中得到的多个候选对应点中选择当前帧图像的三维特征点的候选对应点。
假设将包含静态物体的空间分成30*30*30网格(grid),静态物体重建系统对每一个grid记录上一帧图像中特征点与参考特征点相匹配时的正确对应概率,一个grid为一个空间区域。
比如 grid $G_i$ 内有 n 个特征点，有 x 个特征点在参考特征点中能找到正确的对应，那么 $G_i$ 的正确对应概率：

$$c_i = \frac{x}{n}$$

对所有的 grid 的正确对应概率做归一化，得到从 $G_i$ 选取正确对应特征点的概率：

$$P(G_i) = \frac{c_i}{\sum_j c_j}$$
如果该概率较高,则表示前一帧图像中某一个空间区域即Gi内的正确对应概率较高,则相应地当前帧图像中该空间区域的正确对应概率也较高,可以从当前帧图像的该空间区域附近选取部分三维特征点。
又假设前一帧图像中每一个特征点有 b 个候选对应点，根据该特征点与第 k 个候选对应点的空间距离 $d_k$，定义第 k 个候选对应点的被选取概率为：

$$w_k = \exp\left(-\frac{d_k^2}{\delta^2}\right)$$

归一化得到

$$P(k) = \frac{w_k}{\sum_{j=1}^{b} w_j}$$

其中，δ是一个调节参数，控制上述空间距离 d 对概率的影响。如果该概率较高，则表示前一帧图像中特征点与第k个候选对应点正确对应的概率较高，则相应地当前帧图像中三维特征点与第k个候选对应点正确对应的概率也较高，可以从上述步骤A1中得到的多个候选对应点中第k个候选对应点附近选取一个候选对应点。
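基于上述两个概率的引导式随机选取可以示意如下（指数衰减形式沿用上文重构的公式形式，属本文假设；随机抽样的写法仅为示例）：

```python
import numpy as np

def grid_probs(correct_counts, totals):
    """c_i = x/n 并归一化为 P(G_i)。入参为各 grid 内的正确对应数与特征点总数。"""
    c = np.asarray(correct_counts, float) / np.maximum(np.asarray(totals, float), 1.0)
    return c, c / c.sum()

def candidate_probs(dists, delta=1.0):
    """按特征点到各候选对应点的空间距离 d_k 计算被选取概率并归一化。"""
    w = np.exp(-np.asarray(dists, float) ** 2 / delta ** 2)
    return w / w.sum()

# 用法示例:先按 P(G_i) 抽取一个空间区域,再按候选概率抽取一个候选对应点
rng = np.random.default_rng(0)
_, p_grid = grid_probs(correct_counts=[8, 2, 5], totals=[10, 10, 10])
gi = rng.choice(len(p_grid), p=p_grid)
k = rng.choice(3, p=candidate_probs([0.2, 0.5, 1.1], delta=0.8))
```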
A3:根据上述步骤A2中选取的部分三维特征点分别对应的候选对应点,计算模型参数。
A4:对步骤A3中计算的模型参数对应模型进行评分。
具体地，一个模型参数对应模型的评分分数 s 可以通过一个协方差矩阵 C 来描述，即

$$s = \frac{V}{A}$$

其中，A 为用来标准化的整个 grid 的体积，

$$V = \frac{4}{3}\pi\sqrt{\det(C)}$$

表示正确对应特征点（即能在参考特征点中找到正确对应的三维特征点）分布的椭球体积，而协方差矩阵 C 可以为：

$$C = \frac{1}{N}\sum_{i=1}^{N}\left(f_i - \bar{f}\right)\left(f_i - \bar{f}\right)^\top$$

其中，N 是正确对应特征点的个数，$f_i$ 是第 i 个正确对应特征点的空间位置，

$$\bar{f} = \frac{1}{N}\sum_{i=1}^{N} f_i$$

是所有正确对应特征点的平均位置。其中如果评分分数较高，则说明该模型中能在参考特征点中找到正确对应的三维特征点较多，且这些三维特征点能均匀分布在当前帧图像的整个空间。
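评分部分的一个示意实现如下（椭球体积取上文重构的形式，grid 体积 A 假定已知）：

```python
import numpy as np

def score_model(inlier_pos, A):
    """按正确对应特征点分布椭球体积与 grid 体积之比对模型评分(示意)。

    inlier_pos: (N, 3) 能在参考特征点中找到正确对应的三维特征点位置
    A:          用于标准化的整个 grid 的体积
    """
    f_bar = inlier_pos.mean(axis=0)            # 所有正确对应特征点的平均位置
    d = inlier_pos - f_bar
    C = d.T @ d / len(inlier_pos)              # 协方差矩阵 C
    V = 4.0 / 3.0 * np.pi * np.sqrt(max(np.linalg.det(C), 0.0))
    return V / A                               # 正确对应点越多且分布越均匀,评分越高
```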
A5:循环执行上述步骤A2到A4,即循环执行上述选取候选对应点、计算模型参数和评分的步骤,并将评分最高的模型的模型参数作为当前帧图像的相机外参。在这个过程中,静态物体重建系统需要设置一个相机外参的初始值,将该初始值作为根据模型评分确定相机外参的条件,则在确定当前帧图像的相 机外参时,需要使得最终确定的相机外参对应模型的评分要比该相机外参的初始值对应模型的评分高。
由于上述计算模型参数时,是通过部分三维特征点(比如三个)对应的候选对应点得到的,如果得到评分最高的模型,则静态物体重建系统还可以先利用该模型的模型参数将当前帧图像的所有三维特征点变换到参考坐标系下,计算得到对应的参考特征点;然后根据所有能在参考特征点中找到正确对应特征点的三维特征点,重新计算一次模型参数,然后将重新计算的模型参数作为最终的相机外参。其中,如果在计算模型参数时,三维特征点和参考特征点中正确对应点的个数越多,则最终计算得到的模型参数也越准确,则上述最终得到的相机外参也较为准确。
可见,本实施例中,静态物体重建系统可以重复执行上述步骤A2到A4,来选取当前帧图像中不同部分的三维特征点分别对应的候选对应点,进一步得到不同的模型参数进行评分后,将评分最高的模型的模型参数作为当前帧图像的相机外参。为了避免循环的次数过多造成计算的复杂性,在这种情况下,静态物体重建系统可以设置一个停止循环执行上述步骤A2到A4的条件,比如:如果在上述循环中,连续Ks次选取的部分三维特征点分别对应的候选对应点包含异常对应的概率小于预置的值η时,则终止上述循环流程,即:
$$\left(1 - \varepsilon^m\right)^{K_s} < \eta$$

其中，$c_i$ 是当前帧图像中 grid $G_i$ 的正确对应概率，b 是一个三维特征点对应的候选对应点个数，从多个候选对应点中选取一个候选对应点与当前图像的三维特征点正确对应的概率为

$$\varepsilon = \frac{c_i}{b}$$

而一次选取的 m 个三维特征点与参考特征点全部正确对应的概率为 $\varepsilon^m$，其中 m 为多个三维特征点的个数。
又比如,上述循环步骤的次数超过预置的值,或执行上述循环步骤的时间超过预置的值,则停止执行上述步骤A2到A4的循环,即选取候选对应点、计算模型参数和评分的循环步骤。
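这三个终止条件可以合并为一个简单的判断函数（示意，符号含义同上文）：

```python
import time

def should_stop(eps, m, ks, eta, iters, max_iters, start, max_seconds):
    """PMCSAC 循环的终止判断(示意)。eps 为单个对应正确的概率, m 为一次选取的点数。"""
    fail_prob = (1.0 - eps ** m) ** ks   # 连续 Ks 次选取都包含异常对应的概率
    return (fail_prob < eta
            or iters > max_iters
            or time.time() - start > max_seconds)
```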
需要说明的是,静态物体重建系统可以直接按照上述步骤A1到A5,得到当前帧图像的相机外参;在其它具体实施例中,静态物体重建系统可以先计算一个初始相机外参,然后再通过上述步骤A1到A5优化相机外参,具体地:
静态物体重建系统先将三维特征点,与静态物体的当前帧图像的前一帧图像中的特征点进行匹配,得到初始相机外参,具体方法与上述步骤A1到A5的方法类似,不同的是,静态物体重建系统需要在前一帧图像的特征点中,选择与当前帧图像的三维特征点距离最近的多个候选对应点,然后再进行其它处理;然后静态物体重建系统会以该初始相机外参作为根据模型评分确定相机外参的条件,并按照上述步骤A1到A5的方法,将三维特征点,与静态物体的参考特征点进行匹配,最终得到当前帧图像的相机外参。其中,初始相机外参主要是在静态物体重建系统执行上述步骤A5时,使得最终确定的相机外参对应模型的评分要比该初始相机外参对应模型的评分高。
参考图4所示,在另一个具体的实施例中,静态物体重建系统在执行上述步骤104时,具体可以通过如下几种方式来实现:
(1)基于2D-3D匹配点的相机姿态计算方法
B1:将当前帧图像的二维特征点与参考特征点中三维特征点进行匹配,确定与二维特征点对应的三维参考特征点。由于在三维特征点的特征描述量中 包括了二维描述量,则可以将二维特征点的二维描述量与参考特征点中三维特征点的二维描述量进行匹配。
B2:确定使得参考坐标系下的相机姿态的函数最小化的相机外参,在相机姿态的函数中包括二维特征点与三维参考特征点的对应,并将确定的相机外参作为当前帧图像的相机外参。
具体地,相机姿态的函数可以为
$$E(R, t) = \sum_i \left\| x_i - \pi\left(K\left(R X_i + t\right)\right) \right\|^2$$

其中，$(x_i, X_i)$ 为二维特征点与三维参考特征点的对应，K 为内参矩阵，R 为相机旋转矩阵，t 为相机位移向量，π 表示透视投影（即齐次像点除以其深度分量），这里 R 和 t 就相当于相机外参。最小化该函数，就是确定 R 和 t 的值使得函数的值最小。
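该最小化问题可以用非线性最小二乘求解，下面是一个基于SciPy的示意片段（旋转用轴角参数化，初值取零位姿仅为举例）：

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def solve_pose_2d3d(x2d, X3d, K):
    """最小化 Σ||x_i − π(K(R·X_i + t))||² 求相机外参 (R, t)(示意)。

    x2d: (n, 2) 二维特征点像素坐标; X3d: (n, 3) 对应的三维参考特征点; K: 内参矩阵
    """
    def residual(p):
        R = Rotation.from_rotvec(p[:3]).as_matrix()
        proj = (K @ (R @ X3d.T + p[3:, None])).T   # 投影到图像平面
        proj = proj[:, :2] / proj[:, 2:3]          # 透视除法 π(·)
        return (proj - x2d).ravel()

    sol = least_squares(residual, x0=np.zeros(6))
    return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]
```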
(2)基于2D-2D匹配点的相机姿态计算方法
C1:将当前帧图像的二维特征点与静态物体的当前帧图像的前一帧图像的二维特征点进行匹配,确定在前一帧图像中与二维特征点对应的二维特征点。
C2:选取当前帧图像的二维特征点,及在前一帧图像中与二维特征点对应的二维特征点处都具有深度数据的多对对应特征点。
C3:根据选取的多对对应特征点的深度变化信息,确定所述当前帧图像与前一帧图像的相对相机外参,具体地,是根据深度变化的比例计算出两帧图像对应的特征点之间位移向量的长度的缩放比例,从而可以得到相对相机外参。其中多对对应特征点的深度变化信息,具体是根据多对对应特征点处的深度数据得到的。
C4:根据相对相机外参与前一帧图像的相机外参,确定当前帧图像的相机外参。
可见,本实施例中,是通过上述步骤C2到C3的五点法(Five-Point)得到相对相机外参,然后再根据已知的前一帧图像的相机外参,从而可以得到当前帧图像的相机外参。五点法通过在两帧上建立五个正确的二维对应点,就可以计算出两帧的相对转换。
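五点法求相对位姿在OpenCV中有现成接口，下面的片段示意其与深度尺度恢复的组合（用对应点深度比值的中位数估计位移尺度只是一种简化处理，属本文假设）：

```python
import numpy as np
import cv2

def relative_pose_2d2d(pts_prev, pts_cur, K, depth_prev, depth_cur):
    """五点法求相对位姿,再用两帧对应特征点处的深度变化恢复位移尺度(示意)。

    pts_prev, pts_cur:     (n, 2) 前后帧中对应的二维特征点
    depth_prev, depth_cur: (n,)   这些特征点处的深度值(要求两帧都有深度)
    """
    E, _ = cv2.findEssentialMat(pts_prev, pts_cur, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts_prev, pts_cur, K)  # t 只有方向,无尺度
    scale = np.median(np.asarray(depth_cur) / np.asarray(depth_prev))
    return R, scale * t
```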
(3)混合计算方法
本方法是将上述两种方式结合起来：当通过上述两种方法分别得到当前帧图像的相机外参后，可以先将当前帧图像的三维特征点分别通过这两个相机外参转换到参考坐标系（参考特征点所组成的坐标系）下；然后计算当前帧图像的三维特征点在参考特征点中能找到对应的比例，即三维特征点转换到参考坐标系下的特征点与最近的参考特征点的距离小于预置的值，则认为在参考特征点中能找到对应；然后将比例较高的那个相机外参作为当前帧图像最终的相机外参。
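比例的计算可以示意如下（6mm 阈值沿用下文束优化部分的示例取值，并非固定参数）：

```python
import numpy as np
from scipy.spatial import cKDTree

def correspondence_ratio(pts3d, R, t, ref_pts, thresh=0.006):
    """把当前帧三维特征点经外参 (R, t) 变换到参考坐标系,统计能找到对应的比例(示意)。"""
    warped = (R @ pts3d.T).T + t
    dist, _ = cKDTree(ref_pts).query(warped, k=1)
    return float(np.mean(dist < thresh))   # 比例高者对应的外参胜出
```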
需要说明的是,在具体实施例中,当基于三维特征点计算相机外参时,在预置的时间内未计算得到该相机外参,则说明深度相机采集的图像的深度数据可能丢失,需要基于二维特征点来计算相机外参,即上述步骤104;同时,进一步地,为了能重建一个完整的静态物体的模型,静态物体重建系统还需要进行几何补全,具体是融合所有帧图像的深度数据,通过光线跟踪技术补全每一帧图像丢失的部分,然后利用图像序列恢复出每一帧的深度数据,从而进一步补全缺失的深度数据。具体通过如下步骤来实现:
D1:根据采集的静态物体的多个帧图像的二维数据生成包含深度数据的源模型。
D2：将源模型通过变换矩阵转换到目标模型。其中，目标模型是由静态物体的当前帧图像的深度数据形成的，且变换矩阵是使得第一能量函数最小化的矩阵，第一能量函数包括距离项和光滑项。
具体地，对源模型中的每一个顶点（vertex）分配一个变换 $X_i$，将源模型的顶点变形到目标模型对应的顶点，所有顶点的变换整合在一起，得到一个 4n×3 的矩阵，即为变换矩阵 $X := [X_1 \ldots X_n]^\top$，其中，n 是源模型的顶点个数。
(1)对于第一能量函数中的距离项,是用于表示源模型中顶点到目标模型中相应顶点的距离。
假设源模型和目标模型之间固定的对应点为 $(v_i, u_i)$，稀疏矩阵 D 定义为：

$$D := \begin{bmatrix} v_1^\top & & \\ & \ddots & \\ & & v_n^\top \end{bmatrix}$$

该 D 将变换矩阵 X 转换到变形的顶点，$v_i$ 用齐次坐标表示为 $v_i = [x, y, z, 1]^\top$，而目标模型上与源模型对应的顶点排列成一个矩阵 $U := [u_1, \ldots, u_n]^\top$。则距离项通过 Frobenius 范数表示成

$$E_d(X) := \left\| W\left(DX - U\right) \right\|_F^2$$

其中，W 是一个权重矩阵，表示为 $\mathrm{diag}(w_1, \ldots, w_n)$，对于目标模型中丢失的顶点（即当前帧图像中丢失的深度数据），相应的权重 $w_i$ 设为 0。
(2)对于第一能量函数中的光滑项用于约束相邻顶点的变换。
为了使得源模型中相邻顶点分别到目标模型的变换相似，不产生突变，则可以通过光滑项来保证变形光滑，具体可以定义为：

$$E_s(X) := \sum_{(i,j)\in\mathcal{E}} \left\| \left(X_i - X_j\right) G \right\|_F^2$$

其中，$G := \mathrm{diag}(1, 1, 1, \gamma)$，γ 是用来权衡变换的旋转部分和位移部分，模型的边集合 $\mathcal{E}$ 通过相邻像素得到，相邻像素对应的顶点之间就有一条边。

源模型所有的边和顶点都编上了索引，边从较低的顶点索引指向较高的顶点索引，如果边 r 连接顶点 (i, j)，节点-弧矩阵 M 中第 r 行的非零项是 $M_{ri} = -1$，$M_{rj} = 1$。则光滑项可以表示成如下矩阵形式：

$$E_s(X) = \left\| \left(M \otimes G\right) X \right\|_F^2$$

其中，$\otimes$ 是 Kronecker 乘积。
结合上述的距离项和光滑项,可以得到第一能量函数为:
$$E(X) := E_d(X) + \alpha E_s(X)$$

其中，α 是光滑项权重。该第一能量函数又可以写成矩阵形式：

$$E(X) = \left\| \begin{bmatrix} WD \\ \alpha \left(M \otimes G\right) \end{bmatrix} X - \begin{bmatrix} WU \\ 0 \end{bmatrix} \right\|_F^2$$
通过将该第一能量函数最小化可以得到变换矩阵。本实施例中，为了保持源模型的几何形状，并使局部细节化地变形到目标模型，上述第一能量函数中的光滑项权重α在变形过程中逐步减小：在变形开始阶段，用一个较大的权重值约束，使得源模型能够整体变形到目标模型；之后通过连续降低权重值，完成更多的局部变形。由于光滑项的约束，源模型的顶点不是直接移动到目标模型对应的顶点，而是平行移动到目标表面；源模型上没有对应的顶点，则因受到相邻顶点的约束而光滑地变形。如果X的改变低于一个阈值，则变形终止。
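在对应关系与权重固定的情况下，该最小二乘问题对X是线性的，可以一次性求解。下面给出一个稀疏求解的示意实现（矩阵的组装方式按上文重构的公式，接口与参数为本文举例）：

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def solve_deformation(V, U, w, edges, alpha, gamma=1.0):
    """对固定的对应关系解一次线性最小二乘,得到 4n x 3 的变换矩阵 X(示意)。

    V: (n, 3) 源模型顶点      U: (n, 3) 目标模型对应顶点(丢失处 w=0)
    w: (n,)   对应点权重      edges: (e, 2) 源模型边(低索引指向高索引)
    """
    n, e = len(V), len(edges)
    Vh = np.hstack([V, np.ones((n, 1))])                       # v_i = [x, y, z, 1]
    D = sp.csr_matrix((Vh.ravel(),
                       (np.repeat(np.arange(n), 4), np.arange(4 * n))),
                      shape=(n, 4 * n))                        # D = diag(v_1^T,...,v_n^T)
    M = sp.csr_matrix((np.tile([-1.0, 1.0], e),
                       (np.repeat(np.arange(e), 2), np.asarray(edges).ravel())),
                      shape=(e, n))                            # 节点-弧矩阵 M
    G = sp.diags([1.0, 1.0, 1.0, gamma])                       # G = diag(1,1,1,γ)
    A = sp.vstack([sp.diags(w) @ D, alpha * sp.kron(M, G)]).tocsr()
    B = np.vstack([sp.diags(w) @ U, np.zeros((4 * e, 3))])
    AtA, AtB = (A.T @ A).tocsc(), np.asarray(A.T @ B)
    return np.column_stack([spsolve(AtA, AtB[:, c]) for c in range(3)])
```

实际变形时可在外层循环中逐步减小α并更新对应关系，直到X的改变低于阈值为止。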
D3:根据转换后的目标模型补全当前帧图像中丢失的深度数据。
在补全丢失的深度数据时，静态物体重建系统依据变形转换后得到的目标模型进行补全。例如，参考图5所示，源模型S每一个顶点vi通过一个变换矩阵Xi逐步变形到目标模型T对应的顶点ui，同一模型中相邻的顶点具有相近的变换，以保证变形光滑。光滑约束权重在变形过程中逐渐下降，以保持源模型的整体形状和完成局部变形，变形不断重复，直到到达一个稳定状态。目标模型上丢失的部分，通过其对应的变形之后模型的顶点（即图5中用虚线连起来的点）补全，则补全的顶点即为补全的深度数据。
需要说明的是,上述步骤104中基于二维特征点计算相机外参的步骤,与上述步骤D1到D3进行几何补全的步骤没有绝对的顺序关系,可以同时执行,也可以顺序执行,上述图4中所示的只是其中一种具体实现方式。
在其它具体的实施例中,当静态物体重建系统在计算出各帧图像的相机外参后,可以通过如下几种方式来对相机外参进行优化:
(1)基于三维特征点计算出相机外参后,通过束优化的方式进行优化相机外参,即对N帧图像的相机外参进行优化,这N帧图像是连续的N帧图像:
参考图6所示,静态物体重建系统在对N帧图像(比如30帧图像)中每一帧图像,都执行上述步骤101到102之后,还执行如下步骤201,然后再执行步骤202中的调整;则在执行上述步骤105时,主要是通过对计算的相机外参进行调整后得到的当前帧图像的相机外参,将当前帧图像的点云转换到所述参考坐标系下,具体地:
步骤201,通过相机外参,建立当前帧图像中三维特征点与参考特征点的对应。
具体地,可以先通过相机外参,将当前帧图像中的三维特征点转换到参考坐标系下;然后计算转换后的特征点与各个参考特征点之间的空间距离,并找到最近的空间距离,如果该最近的空间距离小于预置的值比如6mm,则建立了该最近的空间距离对应参考特征点与对应三维特征点之间的对应。
步骤202,调整静态物体的N帧图像的相机外参,使得第二能量函数最小化,且在这个最小化第二能量函数的过程中,还可以调整N帧图像中各个特征点对应的参考特征点的位置。这样将每一帧图像上的每一个特征点对齐到对应的参考特征点上,最小化所有对应点距离,达到将对齐误差分散在整个优化序列中。其中,第二能量函数中包括第i帧图像中三维特征点,与对应的参考特征点转换到所述第i帧图像坐标系下特征点的距离,其中,所述i为从0到N的正整数。
该第二能量函数可以为:
$$E = \sum_{i=1}^{N} \sum_{k} d\left(P_{ik},\ Q(T_i, g_j)\right)^2$$
其中,Pik是第i帧图像上的第k个三维特征点,gj(j∈[1,...,L])是Pik对应的参考特征点,Q(Ti,gj)是将参考坐标系下的参考特征点gj通过Ti变换到第i帧图像坐标系下的三维特征点,Ti是将第i帧图像的深度数据从参考坐标系转到第i帧图像坐标系下的刚性变换,d(x,y)表示欧几里得距离。
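该能量的累加过程可以示意如下（此处对距离取平方和；对应关系以索引数组给出，均为本文举例的组织方式）：

```python
import numpy as np

def second_energy(T_list, P_list, g, corr):
    """按上式累加各帧三维特征点与对应参考特征点(变换到该帧坐标系后)的距离平方(示意)。

    T_list[i]: (4, 4)   把参考坐标系的点变换到第 i 帧坐标系的刚性变换 T_i
    P_list[i]: (m_i, 3) 第 i 帧的三维特征点
    g:         (L, 3)   参考特征点
    corr[i]:   (m_i,)   每个特征点对应的参考特征点索引,无对应处为 -1
    """
    E = 0.0
    for Ti, Pi, ci in zip(T_list, P_list, corr):
        ci = np.asarray(ci)
        valid = ci >= 0
        gj = np.hstack([g[ci[valid]], np.ones((int(valid.sum()), 1))])  # 齐次坐标
        Q = (Ti @ gj.T).T[:, :3]                                        # Q(T_i, g_j)
        E += float(np.sum(np.linalg.norm(Pi[valid] - Q, axis=1) ** 2))
    return E
```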
(2)基于二维特征点计算出相机外参后,通过束优化的方式进行优化相机外参,即对N帧图像的相机外参进行优化:
参考图7所示,静态物体重建系统在对N帧图像中每一帧图像,都执行上述步骤101到104之后,还执行如下步骤301,然后再执行步骤302中的调整;则在执行上述步骤105时,主要是通过对计算的相机外参进行调整后得到的当前帧图像的相机外参,将当前帧图像的点云转换到所述参考坐标系下,具体地:
步骤301,通过相机外参,建立当前帧图像中二维特征点和三维特征点分别与参考特征点的对应。
步骤302,调整静态物体的N帧图像的相机外参,使得第二能量函数最小化,且还可以调整N帧图像中各个特征点对应的参考特征点的位置,其中第二能量函数中包括第i帧图像中三维特征点,与对应的参考特征点转换到所述第i帧图像坐标系下特征点的距离;及第i帧图像中二维特征点,与对应的参考特征点转换到所述第i帧图像坐标系下特征点的距离,这里i为从0到N的正整数。
第二能量函数可以为:
$$E = \sum_{i=1}^{N} \left( \sum_{k} d\left(P_{ik},\ Q(T_i, g_j)\right)^2 + \lambda \sum_{k=1}^{l_i} d\left(X_{ik},\ K\, Q(T_i, g_r)\right)^2 \right)$$
其中,Xik是第i帧图像上的第k个二维特征点,其对应的参考特征点为gr;K是相机内参矩阵,将坐标系下的三维点投影到图像上的像素;li是第i帧图像上的二维特征点个数;λ是一个权值,用来控制每帧图像中二维特征点和三维特征点的对齐误差对总体能量的影响;该第二能量函数中其它符号与上述第二能量函数中其 它符号类似,在此不进行赘述。
(3)在得到的N帧图像的相机外参中,部分(比如第一部分)是基于二维特征点计算的,另一部分(比如第二部分)是基于三维特征点计算出的,则在对N帧图像进行束优化时所使用的第二能量函数中,关于第一部分帧图像的部分,是按照上述第一种方法(即上述步骤201到202)进行计算;关于第二部分帧图像的部分,是按照上述第二种方法(即上述步骤301到302)进行计算,在此不进行赘述。
可见,通过上述三种方法,对每隔若干帧(比如三十帧)图像的特征点进行局部束优化来减少局部误差,则在建立当前帧图像中特征点与参考特征点的对应时,首先在对前N帧图像进行束优化后得到的参考特征点中查找对应;没有找到对应的参考特征点,则在根据当前N帧图像得到的未优化的参考特征点中查找对应,这样可以局部建立准确对应,而没有引入累积误差。
(4)全局优化,参考图8所示,静态物体重建系统可以在按照上述步骤101到104针对多帧图像进行处理后,按照下述步骤401到403进行全局优化;然后在执行步骤105时,是根据对计算的相机外参进行优化后得到的相机外参,完成点云的转换以进行建模。
步骤401，判断静态物体的某一帧图像中特征点与另一帧图像中的特征点是否重叠：如果重叠，说明深度相机在围绕着静态物体拍摄照片的过程中又回到了初始拍摄的位置，使得某两帧图像的特征点重叠，形成闭合环，需要进行全局优化，具体执行步骤402到403；如果不重叠，则不进行全局优化。
在判断是否重叠时,可以分别获取某一帧图像中特征点对应的第一参考特征点,及另一帧图像中特征点对应的第二参考特征点,如果第一参考特征点中有超过预置值的特征点都能在第二参考特征点中找到对应,则确定重叠,如果没有超过,则可以认为没有重叠。在判断是否重叠时,还可以通过某一帧图像的相机外参和另一帧图像的相机外参进行比较,如果相近,则可以认为重叠。
步骤402,合并某一帧图像与另一帧图像中相匹配的特征点,合并为一个特征点,而由于参考特征点是通过各个帧图像的特征点积累而成的,则这里需要根据合并后的特征点来对应更新参考特征点。
步骤403,根据更新后的参考特征点,更新静态物体的每一帧图像的相机外参,具体地,先更新每一帧图像中特征点与更新后的参考特征点的对应,进而根据更新后的对应来更新每一帧图像的相机外参,这样累积误差能够有效分散在闭合环中,具体如何通过根据一帧图像中特征点与参考特征点的对应,得到该帧图像的相机外参的方法,见上述实施例中所述的PMCSAC方法,在此不进行赘述。
综上所述,静态物体的重建可以经过提取特征点、获取相机外参、优化相机外参和点云的转换步骤,具体如图9所示,包括:
静态物体重建系统中通过深度相机围绕着静态物体从各个方向拍摄静态物体，得到多帧图像；针对某一帧图像，提取三维特征点，并将三维特征点与参考特征点进行匹配（即对齐特征点）以计算当前帧图像的相机外参，其中如果没有匹配的参考特征点，则将未能匹配上的三维特征点加入到参考特征点中。若基于三维特征点计算相机外参成功，则直接进入后续的优化和转换步骤；若基于三维特征点计算相机外参失败，则需要提取当前帧图像的二维特征点，并将二维特征点与参考特征点，或与前一帧图像的特征点进行匹配以计算当前帧图像的相机外参，同时还可以进行深度数据补全。通过前面的步骤获取了当前帧图像的相机外参后，可以对该相机外参进行优化，可以包括束优化和全局优化，然后按照上述步骤继续处理下一帧图像，直到所有帧图像都处理完后，就将所有帧图像中的点云分别通过对应帧图像的相机外参对齐到参考坐标系下，进行静态物体的建模，比如采用泊松建模的方法重建静态物体的三维模型。
通过本实施例中静态物体的重建,可以使得:
在深度数据缺失时,有效解决了现有方法无法处理深度数据丢失的问题;且由于本发明实施例中可以处理深度数据丢失的图像帧,提高了重建的能力。这样本发明实施例的方法可以应用到目前绝大多数的主动式深度采集设备上,由于主动式深度采集设备为了避免光源在视觉上的影响,都使用红外波段的LED或激光,这样很容易受到室外太阳光影响,出现深度数据缺失。
本发明实施例还提供一种静态物体重建系统,其结构示意图如图10所示,包括:
特征获取单元10，用于分别获取静态物体的当前帧图像中的三维特征点和二维特征点，该特征获取单元10在获取各个特征点时，可以得到各个特征点的特征描述量，特别地，对于三维特征点的特征描述量，特征获取单元10可以将二维特征描述量和三维特征描述量分别标准化，即在二维特征描述量和三维特征描述量的训练集中，分别求得二维特征描述量和三维特征描述量的标准差，并用特征描述量除以对应的标准差，就可以得到标准化后的特征描述量；然后将标准化之后的二维特征描述量和三维特征描述量结合，得到三维特征点的特征描述量。
第一外参计算单元11,用于将所述特征获取单元10获取的三维特征点与参考特征点进行匹配,计算所述当前帧图像的相机外参,其中,所述参考特征点是所述静态物体的多个帧图像上的特征点累积形成的。
第一外参计算单元11可以通过三维特征点与参考特征点各自的特征描述量进行比较，找到当前帧图像中所有三维特征点分别对应的参考特征点，然后就可以通过找到的对应的参考特征点计算相机外参；若某一三维特征点没有找到对应的参考特征点，则该第一外参计算单元11还可以将该三维特征点加入到参考特征点中。
第二外参计算单元12,用于如果第一外参计算单元11基于所述三维特征点计算所述当前帧图像的相机外参时,未在预置的时间内计算得到所述相机外参,则将所述特征获取单元10获取的二维特征点,与所述参考特征点中三维特征点匹配,或与所述静态物体的当前帧图像的前一帧图像中的二维特征点进行匹配,计算所述当前帧图像的相机外参。
第二外参计算单元12可以将当前帧图像的二维特征点对齐到参考特征点中的三维特征点，然后根据找到的对应的参考特征点中的三维特征点，计算相机外参。还可以将当前帧图像的二维特征点对齐到前一帧图像中，然后根据当前帧图像与前一帧图像中对应的二维特征点得到相对相机外参，再结合之前计算的前一帧图像的相机外参，从而就可以得到当前帧图像的相机外参。
转换单元13,用于通过所述第一外参计算单元11或第二外参计算单元12计算的相机外参,将所述当前帧图像的点云转换到所述参考特征点组成的参考坐标系下,以对所述静态物体进行建模。
可见,在本实施例的静态物体重建系统中,当第一外参计算单元11在基于 三维特征点计算相机外参时,未在预置的时间内计算得到相机外参,则说明深度相机采集的深度数据丢失或损坏,则第二外参计算单元12采用二维特征点来计算相机外参,从而转换单元13根据相机外参实现某一帧图像中点云的对齐,这样融合了二维特征点和三维特征点,可以实现当深度相机采集的深度数据丢失或损坏时,也能成功重建静态物体。
参考图11所示,在一个具体的实施例中,静态物体重建系统除了可以包括如图10所示的结构外,其中的第一外参计算单元11具体可以通过候选选择单元110、选取单元111、模型计算单元112、评分单元113和外参确定单元114,其中:
候选选择单元110,用于在所述参考特征点中,选择与所述当前帧图像的三维特征点距离最近的多个候选对应点,具体地,候选选择单元110可以对于当前帧图像的某一个三维特征点,计算该三维特征点与所有参考特征点之间的欧式距离,并对这些欧式距离排序,选取欧式距离较小的多个参考特征点作为候选对应点。
选取单元111,用于从候选选择单元110选择的每一个三维特征点对应的多个候选对应点中,选取所述当前帧图像的所有三维特征点中部分三维特征点分别对应的候选对应点;该选取单元111具体用于根据所述前一帧图像中包含所述静态物体的各个空间区域内特征点,与参考特征点的正确对应概率,从所述当前帧图像的所有三维特征点中选取所述部分三维特征点;根据所述前一帧图像中特征点的候选对应点被选取的概率,从所述候选选择单元110选取的多个候选对应点中选择所述三维特征点的候选对应点。
模型计算单元112,用于根据所述选取单元111选取的部分三维特征点分别对应的候选对应点,计算模型参数。
评分单元113,用于对所述模型计算单元112计算的模型参数对应模型进行评分。
外参确定单元114,用于在所述选取单元111、模型计算单元112和评分单元113循环执行所述选取候选对应点、计算模型参数和评分的步骤得到的评分中,将评分最高的模型的模型参数作为所述当前帧图像的相机外参,其中:
所述外参确定单元114,还用于在所述循环中,连续Ks次选取的所述部分 三维特征点与候选对应点包含异常对应的概率小于预置的值,或所述循环步骤的次数超过预置的值,或执行所述循环步骤的时间超过预置的值,则通知所述选取单元111、模型计算单元112和评分单元113停止执行所述选取候选对应点、计算模型参数和评分的循环步骤。
在这个过程中,需要设置一个相机外参的初始值,则外参确定单元114在确定当前帧图像的相机外参时,需要使得最终确定的相机外参对应模型的评分要比该相机外参的初始值对应模型的评分高。
进一步地,在本实施例中,由于上述模型计算单元112在计算模型参数时,是通过部分三维特征点(比如三个)对应的候选对应点得到的,如果经过评分单元113后得到评分最高的模型,外参确定单元114可以先不将该模型的模型参数作为相机外参,而是先由候选选择单元110利用该评分最高模型的模型参数将当前帧图像的所有三维特征点变换到参考坐标系下,计算得到对应的参考特征点;然后模型计算单元112根据所有能在参考特征点中找到正确对应特征点的三维特征点,重新计算一次模型参数,然后外参确定单元114再将重新计算的模型参数作为最终的相机外参。其中,由于在计算模型参数时,三维特征点和参考特征点中正确对应点的个数越多,则最终计算得到的模型参数也越准确,则上述最终得到的相机外参也较为准确。
另外,需要说明的是,第一外参计算单元11可以直接通过上述几个单元,并按照上述步骤A1到A5得到当前帧图像的相机外参;在其它具体实施例中,第一外参计算单元11可以先计算一个初始相机外参,然后再按照上述步骤A1到A5得到优化后的相机外参,具体地:
第一外参计算单元11，还用于先将三维特征点，与静态物体的当前帧图像的前一帧图像中的特征点进行匹配，得到初始相机外参，具体方法与上述步骤A1到A5的方法类似，不同的是，第一外参计算单元11中的候选选择单元110需要在前一帧图像的特征点中，选择与当前帧图像的三维特征点距离最近的多个候选对应点，然后再由其它单元进行相应的处理；然后第一外参计算单元11会以该初始相机外参作为根据模型评分确定相机外参的条件，并按照上述步骤A1到A5的方法，将三维特征点，与静态物体的参考特征点进行匹配，最终得到当前帧图像的相机外参。其中，初始相机外参主要是外参确定单元114在根据模型的评分确定当前帧图像的相机外参时，需要使得最终确定的相机外参对应模型的评分要比该初始相机外参对应模型的评分高。
参考图12所示,在一个具体的实施例中,静态物体重建系统除了可以包括如图10所示的结构外,还可以包括模型生成单元14、模型转换单元15和补全单元16,且其中的第二外参计算单元12具体可以通过特征匹配单元120和外参获得单元121实现,具体地:
特征匹配单元120,用于将所述二维特征点与所述参考特征点中三维特征点进行匹配,确定与所述二维特征点对应的三维参考特征点;
外参获得单元121,用于确定使得所述参考坐标系下的相机姿态的函数最小化的相机外参,所述相机姿态的函数中包括所述二维特征点与三维参考特征点的对应,即包括了上述特征匹配单元120确定的二维特征点对应的三维参考特征点;将所述确定的相机外参作为所述当前帧图像的相机外参。
进一步地，第二外参计算单元还可以包括对应选取单元122，在这种情况下，上述特征匹配单元120还用于将所述二维特征点与所述静态物体的当前帧图像的前一帧图像的二维特征点进行匹配，确定在所述前一帧图像中与所述二维特征点对应的二维特征点；对应选取单元122用于选取所述当前帧图像的二维特征点及在所述前一帧图像中与所述二维特征点对应的特征点处都具有深度数据的多对对应特征点；则外参获得单元121还用于根据所述对应选取单元122选取的多对对应特征点的深度变化信息，确定所述当前帧图像与前一帧图像的相对相机外参；根据所述相对相机外参与所述前一帧图像的相机外参，确定所述当前帧图像的相机外参。
模型生成单元14,用于当确定第一外参计算单元11基于所述三维特征点计算所述当前帧图像的相机外参失败时,根据采集的所述静态物体的多个帧图像的二维数据生成包含深度数据的源模型;其中,所述目标模型是由所述静态物 体的当前帧图像的深度数据形成的,且所述变换矩阵是使得第一能量函数最小化的矩阵,所述第一能量函数是包括距离项和光滑项,所述距离项用于表示所述源模型中顶点到所述目标模型中相应顶点的距离,所述光滑项用于约束相邻顶点的变换;模型转换单元15,用于将所述模型生成单元14生成的源模型通过变换矩阵转换到目标模型;补全单元16,用于根据所述模型转换单元15转换后的目标模型补全所述当前帧图像中丢失的深度数据。这样通过深度数据的补全,可以重建一个完整的静态物体的模型,提高了静态物体重建的精度。
需要说明的是,第二外参计算单元12中可以通过特征匹配单元120和外参获得单元121,能实现基于2D-3D匹配点的相机姿态计算方法,而通过特征匹配单元120、外参获得单元121和对应选取单元122实现基于2D-2D匹配点的相机姿态计算方法。且进一步地:
第二外参计算单元12还可以将这两种方法结合起来,还可以包括一个外参选择单元,用于先将当前帧图像的三维特征点分别通过这两种方法得到的相机外参转换到参考坐标系下;然后计算当前帧图像的三维特征点在参考特征点中能找到对应的比例,即三维特征点转换到参考坐标系下的特征点与最近的参考特征点的距离小于预置的值,则认为在参考特征点中能找到对应;然后将比例较高对应的相机外参作为最终当前帧图像的相机外参。
参考图13所示,在一个具体的实施例中,静态物体重建系统除了可以包括如图10所示的结构外,还可以包括对应建立单元17、调整单元18、合并单元19和更新单元20,具体地:
对应建立单元17,用于通过所述相机外参,建立所述当前帧图像中三维特征点与所述参考特征点的对应。对应建立单元17可以先通过相机外参,将当前帧图像中的三维特征点转换到参考坐标系下;然后计算转换后的特征点与各个参考特征点之间的空间距离,并找到最近的空间距离,如果该最近的空间距离小于预置的值比如6mm,则建立了该最近的空间距离对应参考特征点与对应三维特征点之间的对应。
调整单元18,用于调整所述静态物体的N帧图像的相机外参,使得第二能量函数最小化,在这个过程中,调整单元18还可以调整N帧图像中各个特征点对应的参考特征点的位置。其中所述第二能量函数中包括所述第i帧图像中三 维特征点,与对应的参考特征点转换到所述第i帧图像坐标系下特征点的距离,其中,所述i为从0到N的正整数。
本实施例中,当通过第一外参计算单元11得到相机外参后,就可以通过对应建立单元17建立当前帧图像中三维特征点与参考特征点的对应;然后第一外参计算单元11和对应建立单元17针对N帧图像都进行相应处理后,可以通过调整单元18采用束优化的方式对N帧图像的相机外参进行调整;最后转换单元13具体用于通过所述调整单元18对第一外参计算单元11计算的相机外参进行调整后得到的所述当前帧图像的相机外参,将所述当前帧图像的点云转换到所述参考坐标系下。
当第二外参计算单元12得到相机外参后,对应建立单元17不仅需要建立三维特征点与参考特征点的对应,还用于通过所述相机外参,建立所述当前帧图像中二维特征点与所述参考特征点的对应;然后第二外参计算单元12和对应建立单元17针对N帧图像都进行相应处理后,可以通过调整单元18采用束优化的方式对N帧图像的相机外参进行调整,其中使用的第二能量函数中还包括第i帧图像中二维特征点,与对应的参考特征点转换到所述第i帧图像坐标系下特征点的距离。
另一种情况下,如果N帧图像的相机外参中,部分(比如第一部分)是基于二维特征点计算的,另一部分(比如第二部分)是基于三维特征点计算出的,则上述调整单元18对N帧图像进行束优化时所使用的第二能量函数中,关于第一部分帧图像的部分,是按照上述第一种方法(即上述步骤201到202)进行计算;关于第二部分帧图像的部分,是按照上述第二种方法(即上述步骤301到302)进行计算,在此不进行赘述。
进一步地,在本实施例中,还可以通过合并单元19和更新单元20实现全局优化,具体地:合并单元19,用于如果所述静态物体的某一帧图像中特征点与另一帧图像中的特征点重叠,合并所述某一帧图像中与所述另一帧图像匹配的特征点;更新单元20,用于根据所述合并单元19合并后的特征点得到更新后的参考特征点,并根据更新后的参考特征点,更新所述静态物体的每一帧图像的相机外参;这样转换单元13具体用于通过所述更新单元20更新后得到的所述当前帧图像的相机外参,将所述当前帧图像的点云转换到所述参考坐标系下。其 中,如果某一帧图像中特征点对应的第一参考特征点中,有超过预置值的特征点都能在另一帧图像中特征点对应的第二参考特征点中找到对应,则认为重叠;或者如果某一帧图像的相机外参和另一帧图像的相机外参进行比较,如果相近,则可以认为重叠。
本发明实施例还提供一种静态物体重建系统,其结构示意图如图14所示,包括:包括分别连接到总线上的存储器22和处理器21,且还可以包括连接在总线的输入输出装置23,其中:
存储器22中用来储存从输入输出装置23输入的数据,且还可以储存处理器21处理数据的必要文件等信息;输入输出装置23可以包括外接的设备比如显示器、键盘、鼠标和打印机等,还可以包括静态物体重建系统与其它设备通信的端口。
处理器21,用于分别获取静态物体的当前帧图像中的三维特征点和二维特征点;将所述获取的三维特征点与参考特征点进行匹配,计算所述当前帧图像的相机外参,其中,所述参考特征点是所述静态物体的多个帧图像上的特征点累积形成的;如果基于所述三维特征点计算所述当前帧图像的相机外参时,未在预置的时间内计算得到该相机外参,则将所述获取的二维特征点,与所述参考特征点中三维特征点匹配,或与所述静态物体的当前帧图像的前一帧图像中的二维特征点进行匹配,计算所述当前帧图像的相机外参;通过所述计算的相机外参,将所述当前帧图像的点云转换到所述参考特征点组成的参考坐标系下,以对所述静态物体进行建模。
其中，处理器21在获取各个特征点时，可以得到各个特征点的特征描述量，特别地，对于三维特征点的特征描述量，处理器21可以将二维特征描述量和三维特征描述量分别标准化，即在二维特征描述量和三维特征描述量的训练集中，分别求得二维特征描述量和三维特征描述量的标准差，并用特征描述量除以对应的标准差，就可以得到标准化后的特征描述量；然后将标准化之后的二维特征描述量和三维特征描述量结合，得到三维特征点的特征描述量。
处理器21在基于三维特征点计算相机外参时，可以通过三维特征点与参考特征点各自的特征描述量进行比较，找到当前帧图像中所有三维特征点分别对应的参考特征点，然后就可以通过找到的对应的参考特征点计算相机外参；若某一三维特征点没有找到对应的参考特征点，则该处理器21还可以将该三维特征点加入到参考特征点中。
处理器21在基于二维特征点计算相机外参时，可以将当前帧图像的二维特征点对齐到参考特征点中的三维特征点，然后根据找到的对应的参考特征点中的三维特征点，计算相机外参。还可以将当前帧图像的二维特征点对齐到前一帧图像中，然后根据当前帧图像与前一帧图像中对应的二维特征点得到相对相机外参，再结合之前计算的前一帧图像的相机外参，从而就可以得到当前帧图像的相机外参。
可见,在本实施例的静态物体重建系统中,当处理器21在基于三维特征点计算相机外参时,未在预置的时间内计算得到该相机外参,则说明深度相机采集的深度数据丢失或损坏,则处理器21采用二维特征点来计算相机外参,从而根据相机外参实现某一帧图像中点云的对齐,这样融合了二维特征点和三维特征点,可以实现当深度相机采集的深度数据丢失或损坏时,也能成功重建静态物体。
在一个具体的实施例中,处理器21在基于三维特征点计算相机外参时,具体地:在所述参考特征点中,选择与所述当前帧图像的三维特征点距离最近的多个候选对应点;并从选择的每一个三维特征点对应的多个候选对应点中,选取所述当前帧图像的所有三维特征点中部分三维特征点分别对应的候选对应点;根据所述选取的部分三维特征点分别对应的候选对应点,计算模型参数;对所述计算的模型参数对应模型进行评分;在所述循环执行所述选取候选对应点、计算模型参数和评分的步骤得到的评分中,将评分最高的模型的模型参数作为所述当前帧图像的相机外参,在这个过程中,需要设置一个相机外参的初始值,则处理器21在确定当前帧图像的相机外参时,需要使得最终确定的相机外参对应模型的评分要比该相机外参的初始值对应模型的评分高。进一步地,如果确定在执行所述循环中,连续Ks次选取的所述部分三维特征点与候选对应点包含异常对应的概率小于预置的值,或循环步骤的次数超过预置的值,或执行循环步骤的时间超过预置的值,则处理器21停止执行所述选取候选对应点、计算模型参数和评分的循环步骤。
其中,处理器21在选择候选对应点时,具体可以对于当前帧图像的某一个 三维特征点,计算该三维特征点与所有参考特征点之间的欧式距离,并对这些欧式距离排序,选取欧式距离较小的多个参考特征点作为候选对应点;且处理器21在选取所述当前帧图像的所有三维特征点中部分三维特征点分别对应的候选对应点时,具体用于根据所述前一帧图像中包含所述静态物体的各个空间区域内特征点与参考特征点的正确对应概率,从所述当前帧图像的所有三维特征点中选取所述部分三维特征点;根据所述前一帧图像中特征点的候选对应点被选取的概率,从所述选择的多个候选对应点中选择所述三维特征点的候选对应点。
进一步地,在本实施例中,由于处理器21在计算模型参数时,是通过部分三维特征点(比如三个)对应的候选对应点得到的,如果得到评分最高的模型,处理器21可以先不将该模型的模型参数作为相机外参,而是先利用该评分最高模型的模型参数将当前帧图像的所有三维特征点变换到参考坐标系下,计算得到对应的参考特征点;然后处理器21根据所有能在参考特征点中找到正确对应特征点的三维特征点,重新计算一次模型参数,然后处理器21再将重新计算的模型参数作为最终的相机外参。其中,由于在计算模型参数时,三维特征点和参考特征点中正确对应点的个数越多,则最终计算得到的模型参数也越准确,则上述最终得到的相机外参也较为准确。
另外,需要说明的是,处理器21可以直接按照上述步骤A1到A5得到当前帧图像的相机外参;在其它具体实施例中,处理器21也可以先计算一个初始相机外参,然后再按照上述步骤A1到A5最终得到当前帧图像的相机外参,具体地:
处理器21,还用于先将三维特征点,与静态物体的当前帧图像的前一帧图像中的特征点进行匹配,得到初始相机外参,具体方法与上述步骤A1到A5的方法类似,不同的是,处理器21需要在前一帧图像的特征点中,选择与当前帧图像的三维特征点距离最近的多个候选对应点,然后再由其它单元进行相应地处理;然后处理器21会以该初始相机外参作为根据模型评分确定相机外参的条件,并按照上述步骤A1到A5的方法,将三维特征点,与静态物体的参考特征点进行匹配,得到优化后的当前帧图像的相机外参。其中,初始相机外参主要 是处理器21在根据模型的评分确定当前帧图像的相机外参时,需要使得最终确定的相机外参对应模型的评分要比该初始相机外参对应模型的评分高。
在另一个具体的实施例中,处理器21在基于二维特征点计算相机外参时,可以采用基于2D-3D匹配点的相机姿态计算方法,具体用于将所述二维特征点与所述参考特征点中三维特征点进行匹配,确定与所述二维特征点对应的三维参考特征点;确定使得所述参考坐标系下的相机姿态的函数最小化的相机外参,所述相机姿态的函数中包括所述二维特征点与三维参考特征点的对应;将所述确定的相机外参作为所述当前帧图像的相机外参。在另一方面,处理器21在基于二维特征点计算相机外参时,还可以采用基于2D-2D匹配点的相机姿态计算方法,具体将所述二维特征点与所述静态物体的当前帧图像的前一帧图像的二维特征点进行匹配,确定在所述前一帧图像中与所述二维特征点对应的二维特征点;并选取所述当前帧图像的二维特征点及在所述前一帧图像中与所述二维特征点对应的特征点处都具有深度数据的多对对应特征点;根据所述选取的多对对应特征点的深度变化信息,确定所述当前帧图像与前一帧图像的相对相机外参;根据所述相对相机外参与所述前一帧图像的相机外参,确定所述当前帧图像的相机外参。
另一种情况下,处理器21还可以将上述两种基于二维特征点计算相机外参的方法结合起来,还可以用于先将当前帧图像的三维特征点分别通过这两种方法得到的相机外参转换到参考坐标系下;然后计算当前帧图像的三维特征点在参考特征点中能找到对应的比例,即三维特征点转换到参考坐标系下的特征点与最近的参考特征点的距离小于预置的值,则认为在参考特征点中能找到对应;然后将比例较高对应的相机外参作为最终当前帧图像的相机外参。
进一步地，为了重建一个完整的静态物体的模型、提高静态物体重建的精度，处理器21还用于当确定基于所述三维特征点计算所述当前帧图像的相机外参失败时，根据采集的所述静态物体的多个帧图像的二维数据生成包含深度数据的源模型；其中，所述目标模型是由所述静态物体的当前帧图像的深度数据形成的，且所述变换矩阵是使得第一能量函数最小化的矩阵，所述第一能量函数包括距离项和光滑项，所述距离项用于表示所述源模型中顶点到所述目标模型中相应顶点的距离，所述光滑项用于约束相邻顶点的变换；将所述生成的源模型通过变换矩阵转换到目标模型；根据所述转换后的目标模型补全所述当前帧图像中丢失的深度数据。
在另一个具体的实施例中,处理器21可以通过如下几种方式对相机外参进行优化:
(1)当通过处理器21基于三维特征点计算了相机外参后,就可以通过所述相机外参,建立所述当前帧图像中三维特征点与所述参考特征点的对应;如果针对N帧图像都建立三维特征点与所述参考特征点的对应后,处理器21可以通过采用束优化的方式对N帧图像的相机外参进行调整,具体地,调整所述静态物体的N帧图像的相机外参,使得第二能量函数最小化,在这个过程中,还可以调整N帧图像中各个特征点对应的参考特征点的位置,其中所述第二能量函数中包括所述第i帧图像中三维特征点,与对应的参考特征点转换到所述第i帧图像坐标系下特征点的距离,其中,所述i为从0到N的正整数;最后处理器21用于通过所述调整后的所述当前帧图像的相机外参,将所述当前帧图像的点云转换到所述参考坐标系下。
其中,处理器21在建立对应时,可以先通过相机外参,将当前帧图像中的三维特征点转换到参考坐标系下;然后计算转换后的特征点与各个参考特征点之间的空间距离,并找到最近的空间距离,如果该最近的空间距离小于预置的值比如6mm,则建立了该最近的空间距离对应参考特征点与对应三维特征点之间的对应。
(2)当通过处理器21基于二维特征点计算了相机外参后,不仅需要建立三维特征点与参考特征点的对应,还用于通过所述相机外参,建立所述当前帧图像中二维特征点与所述参考特征点的对应;如果针对N帧图像都建立三维特征点和二维特征点分别与所述参考特征点的对应后,处理器21可以通过采用束优化的方式对N帧图像的相机外参进行调整,具体地,处理器21在调整相机外参时使用的第二能量函数中还包括第i帧图像中二维特征点,与对应的参考特征点转换到所述第i帧图像坐标系下特征点的距离。
(3)如果N帧图像的相机外参中,部分(比如第一部分)是基于二维特征点计算的,另一部分(比如第二部分)是基于三维特征点计算出的,则上述处理器21对N帧图像进行束优化时所使用的第二能量函数中,关于第一部分帧 图像的部分,是按照上述第一种方法(即上述步骤201到202)进行计算;关于第二部分帧图像的部分,是按照上述第二种方法(即上述步骤301到302)进行计算,在此不进行赘述。
(4)全局优化
处理器21还可以用于如果所述静态物体的某一帧图像中特征点与另一帧图像中的特征点重叠,合并所述某一帧图像中与所述另一帧图像匹配的特征点,并得到更新后的参考特征点;然后根据所述更新后的参考特征点,更新所述静态物体的每一帧图像的相机外参;这样转换单元具体用于通过所述更新后的所述当前帧图像的相机外参,将所述当前帧图像的点云转换到所述参考坐标系下。其中,如果某一帧图像中特征点对应的第一参考特征点中,有超过预置值的特征点都能在另一帧图像中特征点对应的第二参考特征点中找到对应,则认为重叠;或者如果某一帧图像的相机外参和另一帧图像的相机外参进行比较,如果相近,则可以认为重叠。
本领域普通技术人员可以理解上述实施例的各种方法中的全部或部分步骤是可以通过程序来指令相关的硬件来完成,该程序可以存储于一计算机可读存储介质中,存储介质可以包括:只读存储器(ROM)、随机存取存储器(RAM)、磁盘或光盘等。
以上对本发明实施例所提供的静态物体重建方法和系统进行了详细介绍,本文中应用了具体个例对本发明的原理及实施方式进行了阐述,以上实施例的说明只是用于帮助理解本发明的方法及其核心思想;同时,对于本领域的一般技术人员,依据本发明的思想,在具体实施方式及应用范围上均会有改变之处,综上所述,本说明书内容不应理解为对本发明的限制。

Claims (20)

  1. 一种静态物体重建方法,其特征在于,包括:
    分别获取静态物体的当前帧图像中的三维特征点和二维特征点;
    将所述三维特征点与参考特征点进行匹配,计算所述当前帧图像的相机外参,其中,所述参考特征点是所述静态物体的多个帧图像上的特征点累积形成的;
    如果基于所述三维特征点计算所述当前帧图像的相机外参时,未在预置的时间内得到所述相机外参,则将所述二维特征点,与所述参考特征点中三维特征点匹配,或与所述静态物体的当前帧图像的前一帧图像中的二维特征点进行匹配,计算所述当前帧图像的相机外参;
    通过所述计算的相机外参,将所述当前帧图像的点云转换到所述参考特征点组成的参考坐标系下,以对所述静态物体进行建模。
  2. 如权利要求1所述的方法,其特征在于,所述将所述三维特征点与参考特征点进行匹配,计算所述当前帧图像的相机外参,具体包括:
    在所述参考特征点中,选择与所述当前帧图像的三维特征点距离最近的多个候选对应点;
    选取所述当前帧图像的所有三维特征点中部分三维特征点分别对应的候选对应点;
    根据所述部分三维特征点分别对应的候选对应点,计算模型参数;
    对所述计算的模型参数对应模型进行评分;
    循环执行所述选取候选对应点、计算模型参数和评分的步骤,并将评分最高的模型的模型参数作为所述当前帧图像的相机外参,其中:
    如果在所述循环中连续Ks次选取的所述部分三维特征点与候选对应点包含异常对应的概率小于预置的值,或所述循环步骤的次数超过预置的值,或执行所述循环步骤的时间超过预置的值,则停止执行所述选取候选对应点、计算模型参数和评分的循环步骤。
  3. 如权利要求2所述的方法,其特征在于,所述选取所述当前帧图像的所有三维特征点中部分三维特征点分别对应的候选对应点,具体包括:
    根据所述前一帧图像中包含所述静态物体的各个空间区域内特征点,与参 考特征点的正确对应概率,从所述当前帧图像的所有三维特征点中选取所述部分三维特征点;
    根据所述前一帧图像中特征点的候选对应点被选取的概率,从所述多个候选对应点中选择所述三维特征点的候选对应点。
  4. 如权利要求2或3所述的方法,其特征在于,所述将所述三维特征点与参考特征点进行匹配,计算所述当前帧图像的相机外参之前,所述方法还包括:
    将所述三维特征点,与所述静态物体的当前帧图像的前一帧图像中的特征点进行匹配,得到初始相机外参;
    则将所述三维特征点与参考特征点进行匹配,计算所述当前帧图像的相机外参,具体包括:以所述初始相机外参作为根据模型评分确定相机外参的条件,将所述三维特征点,与所述静态物体的参考特征点进行匹配,最终得到所述当前帧图像的相机外参。
  5. 如权利要求1至3任一项所述的方法,其特征在于,所述将所述二维特征点与所述参考特征点中三维特征点进行匹配,计算所述当前帧图像的相机外参,具体包括:
    将所述二维特征点与所述参考特征点中三维特征点进行匹配,确定与所述二维特征点对应的三维参考特征点;
    确定使得所述参考坐标系下的相机姿态的函数最小化的相机外参,所述相机姿态的函数中包括所述二维特征点与三维参考特征点的对应;
    将所述确定的相机外参作为所述当前帧图像的相机外参。
  6. 如权利要求1至4任一项所述的方法，其特征在于，所述将所述二维特征点与所述静态物体的当前帧图像的前一帧图像的二维特征点进行匹配，计算所述当前帧图像的相机外参，具体包括：
    将所述二维特征点与所述静态物体的当前帧图像的前一帧图像的二维特征点进行匹配,确定在所述前一帧图像中与所述二维特征点对应的二维特征点;
    选取所述当前帧图像的二维特征点及在所述前一帧图像中与所述二维特征点对应的二维特征点处都具有深度数据的多对对应特征点;
    根据所述选取的多对对应特征点的深度变化信息,确定所述当前帧图像与 前一帧图像的相对相机外参;
    根据所述相对相机外参与所述前一帧图像的相机外参,确定所述当前帧图像的相机外参。
  7. 如权利要求1至6任一项所述的方法,其特征在于,如果基于所述三维特征点计算所述当前帧图像的相机外参时,未在预置的时间内得到所述相机外参,所述方法还包括:
    根据采集的所述静态物体的多个帧图像的二维数据生成包含深度数据的源模型;
    将所述源模型通过变换矩阵转换到目标模型;
    根据所述转换后的目标模型补全所述当前帧图像中丢失的深度数据;
    其中,所述目标模型是由所述静态物体的当前帧图像的深度数据形成的,且所述变换矩阵是使得第一能量函数最小化的矩阵,所述第一能量函数是包括距离项和光滑项,所述距离项用于表示所述源模型中顶点到所述目标模型中相应顶点的距离,所述光滑项用于约束相邻顶点的变换。
  8. 如权利要求1至7任一项所述的方法,其特征在于,所述计算所述当前帧图像的相机外参之后,所述方法还包括:
    通过所述相机外参,建立所述当前帧图像中三维特征点与所述参考特征点的对应;
    调整所述静态物体的N帧图像的相机外参,使得第二能量函数最小化,其中所述第二能量函数中包括所述第i帧图像中三维特征点,与对应的参考特征点转换到所述第i帧图像坐标系下特征点的距离,其中,所述i为从0到N的正整数;
    则所述通过所述计算的相机外参,将所述当前帧图像的点云转换到所述参考特征点组成的参考坐标系下具体包括:通过对所述计算的相机外参进行所述调整后得到的相机外参,将所述当前帧图像的点云转换到所述参考坐标系下。
  9. 如权利要求8所述的方法,其特征在于,如果所述相机外参是在基于所述二维特征点计算的,则所述计算所述当前帧图像的相机外参后,所述方法还包括:
    通过所述相机外参,建立所述当前帧图像中二维特征点与所述参考特征点 的对应;
    则所述第二能量函数中还包括第i帧图像中二维特征点,与对应的参考特征点转换到所述第i帧图像坐标系下特征点的距离。
  10. 如权利要求1至7任一项所述的方法,其特征在于,所述计算所述当前帧图像的相机外参之后,所述方法还包括:
    如果所述静态物体的某一帧图像中特征点与另一帧图像中的特征点重叠,合并所述某一帧图像中与所述另一帧图像匹配的特征点,并得到更新的参考特征点;
    根据所述更新的参考特征点,更新所述静态物体的每一帧图像的相机外参;
    则所述通过所述计算的相机外参,将所述当前帧图像的点云转换到所述参考特征点组成的参考坐标系下具体包括:通过对所述计算的相机外参进行所述更新后得到的相机外参,将所述当前帧图像的点云转换到所述参考坐标系下。
  11. 一种静态物体重建系统,其特征在于,包括:
    特征获取单元,用于分别获取静态物体的当前帧图像中的三维特征点和二维特征点;
    第一外参计算单元,用于将所述特征获取单元获取的三维特征点与参考特征点进行匹配,计算所述当前帧图像的相机外参,其中,所述参考特征点是所述静态物体的多个帧图像上的特征点累积形成的;
    第二外参计算单元,用于如果所述第一外参计算单元基于所述三维特征点计算所述当前帧图像的相机外参时,未在预置的时间内计算得到所述相机外参,则将所述特征获取单元获取的二维特征点,与所述参考特征点中三维特征点匹配,或与所述静态物体的当前帧图像的前一帧图像中的二维特征点进行匹配,计算所述当前帧图像的相机外参;
    转换单元,用于通过所述第一外参计算单元或第二外参计算单元计算的相机外参,将所述当前帧图像的点云转换到所述参考特征点组成的参考坐标系下,以对所述静态物体进行建模。
  12. 如权利要求11所述的系统,其特征在于,所述第一外参计算单元具体包括:
    候选选择单元，用于在所述参考特征点中，选择与所述当前帧图像的三维特征点距离最近的多个候选对应点；
    选取单元,用于选取所述当前帧图像的所有三维特征点中部分三维特征点分别对应的候选对应点;
    模型计算单元,用于根据所述选取单元选取的部分三维特征点分别对应的候选对应点,计算模型参数;
    评分单元,用于对所述模型计算单元计算的模型参数对应模型进行评分;
    外参确定单元,用于在所述选取单元、模型计算单元和评分单元循环执行所述选取候选对应点、计算模型参数和评分的步骤得到的评分中,将评分最高的模型的模型参数作为所述当前帧图像的相机外参,其中:
    所述外参确定单元,还用于在所述循环中,连续Ks次选取的所述部分三维特征点与候选对应点包含异常对应的概率小于预置的值,或所述循环步骤的次数超过预置的值,或执行所述循环步骤的时间超过预置的值,则通知所述选取单元、模型计算单元和评分单元停止执行所述选取候选对应点、计算模型参数和评分的循环步骤。
  13. 如权利要求12所述的系统,其特征在于,
    所述选取单元,具体用于根据所述前一帧图像中包含所述静态物体的各个空间区域内特征点,与参考特征点的正确对应概率,从所述当前帧图像的所有三维特征点中选取所述部分三维特征点;根据所述前一帧图像中特征点的候选对应点被选取的概率,从所述多个候选对应点中选择所述三维特征点的候选对应点。
  14. 如权利要求12或13所述的系统,其特征在于,还包括:
    所述第一外参计算单元,还用于将所述三维特征点,与所述静态物体的当前帧图像的前一帧图像中的特征点进行匹配,得到初始相机外参;并以所述初始相机外参作为根据模型评分确定相机外参的条件,将所述三维特征点,与所述静态物体的参考特征点进行匹配,最终得到所述当前帧图像的相机外参。
  15. 如权利要求11至14任一项所述的系统,其特征在于,所述第二外参计算单元,具体包括:
    特征匹配单元,用于将所述二维特征点与所述参考特征点中三维特征点进 行匹配,确定与所述二维特征点对应的三维参考特征点;
    外参获得单元,用于确定使得所述参考坐标系下的相机姿态的函数最小化的相机外参,所述相机姿态的函数中包括所述二维特征点与三维参考特征点的对应;将所述确定的相机外参作为所述当前帧图像的相机外参。
  16. 如权利要求15所述的系统,其特征在于,所述第二外参计算单元还包括对应选取单元;
    所述特征匹配单元,还用于将所述二维特征点与所述静态物体的当前帧图像的前一帧图像的二维特征点进行匹配,确定在所述前一帧图像中与所述二维特征点对应的二维特征点;
    所述对应选取单元,用于选取所述当前帧图像的二维特征点及在所述前一帧图像中与所述二维特征点对应的特征点处都具有深度数据的多对对应特征点;
    所述外参获得单元,还用于根据所述对应选取单元选取的多对对应特征点的深度变化信息,确定所述当前帧图像与前一帧图像的相对相机外参;根据所述相对相机外参与所述前一帧图像的相机外参,确定所述当前帧图像的相机外参。
  17. 如权利要求11至16任一项所述的系统,其特征在于,还包括:
    模型生成单元,用于当确定基于所述三维特征点计算所述当前帧图像的相机外参失败时,根据采集的所述静态物体的多个帧图像的二维数据生成包含深度数据的源模型;其中,所述目标模型是由所述静态物体的当前帧图像的深度数据形成的,且所述变换矩阵是使得第一能量函数最小化的矩阵,所述第一能量函数是包括距离项和光滑项,所述距离项用于表示所述源模型中顶点到所述目标模型中相应顶点的距离,所述光滑项用于约束相邻顶点的变换;
    模型转换单元,用于将所述模型生成单元生成的源模型通过变换矩阵转换到目标模型;
    补全单元,用于根据所述模型转换单元转换后的目标模型补全所述当前帧图像中丢失的深度数据。
  18. 如权利要求11至17任一项所述的系统,其特征在于,还包括:
    对应建立单元,用于通过所述相机外参,建立所述当前帧图像中三维特征 点与所述参考特征点的对应;
    调整单元,用于调整所述静态物体的N帧图像的相机外参,使得第二能量函数最小化,其中所述第二能量函数中包括所述第i帧图像中三维特征点,与对应的参考特征点转换到所述第i帧图像坐标系下特征点的距离,其中,所述i为从0到N的正整数;
    则所述转换单元,具体用于通过所述调整单元调整得到的所述当前帧图像的相机外参,将所述当前帧图像的点云转换到所述参考坐标系下。
  19. 如权利要求18所述的系统,其特征在于,
    所述对应建立单元,还用于如果所述相机外参是在基于所述二维特征点计算的,通过所述相机外参,建立所述当前帧图像中二维特征点与所述参考特征点的对应;
    则所述调整单元在进行相机外参调整时使用的第二能量函数中还包括第i帧图像中二维特征点,与对应的参考特征点转换到所述第i帧图像坐标系下特征点的距离。
  20. 如权利要求11至17任一项所述的系统,其特征在于,还包括:
    合并单元,用于如果所述静态物体的某一帧图像中特征点与另一帧图像中的特征点重叠,合并所述某一帧图像中与所述另一帧图像匹配的特征点;
    更新单元，用于根据所述合并单元合并后的特征点得到更新后的参考特征点，并根据所述更新后的参考特征点，更新所述静态物体的每一帧图像的相机外参；
    则所述转换单元,具体用于通过所述更新单元更新得到的所述当前帧图像的相机外参,将所述当前帧图像的点云转换到所述参考坐标系下。