WO2022183657A1 - Point cloud model construction method and apparatus, electronic device, storage medium, and program

Info

Publication number
WO2022183657A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
feature
point
feature point
pairs
Prior art date
Application number
PCT/CN2021/105574
Other languages
French (fr)
Chinese (zh)
Inventor
冯友计
江明轩
周立阳
姜翰青
章国锋
Original Assignee
浙江商汤科技开发有限公司
Priority date
Filing date
Publication date
Application filed by 浙江商汤科技开发有限公司
Priority to KR1020227013015A (KR102638632B1)
Priority to JP2022525128A (JP2023519466A)
Publication of WO2022183657A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20: Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T7/00: Image analysis
    • G06T7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33: Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T2200/00: Indexing scheme for image data processing or generation, in general
    • G06T2200/04: Indexing scheme for image data processing or generation, in general involving 3D image data
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10028: Range image; Depth image; 3D point clouds

Definitions

  • the present disclosure relates to the technical field of three-dimensional modeling, and relates to, but is not limited to, a method, apparatus, electronic device, computer storage medium and computer program for constructing a point cloud model.
  • the structure-from-motion (SfM) technology can construct a 3D point cloud model from multiple images, which can be widely used in the fields of physical space digitization, high-precision map construction, and augmented reality. Therefore, how to improve the accuracy and quality of point cloud model construction is of great significance in the field of 3D modeling technology.
  • Embodiments of the present disclosure provide a method, apparatus, electronic device, computer storage medium, and computer program for constructing a point cloud model.
  • a method for constructing a point cloud model is provided, which is applied to an electronic device, including:
  • At least one set of image pairs in the panoramic image set and corresponding matching results are determined according to the first feature, wherein the image pair includes two panoramic images matched by the first feature, and the matching result indicates the correspondence between the first features of the two panoramic images;
  • a point cloud model is constructed according to the at least one set of image pairs and the corresponding matching results.
  • the obtaining the first feature of the panoramic image in the panoramic image set includes:
  • a first feature of the panoramic image is determined based on at least one of the first sub-features.
  • the determining a plurality of perspective images corresponding to the panoramic image includes:
  • the third mapping relationship between the pixel point coordinates of the panoramic image and the pixel point coordinates of the perspective image is determined according to the first mapping relationship and the second mapping relationship, and the pixel information of the pixel points of the perspective image is determined according to the pixel information of the pixel points of the panoramic image and the third mapping relationship.
  • the second feature of the perspective image includes a second feature point and a corresponding second descriptor
  • the determining of the first sub-feature of the corresponding position of the panoramic image according to the second feature of the perspective image includes:
  • the first descriptor corresponding to the first feature point of the panoramic image is determined according to the second descriptor corresponding to the second feature point of the perspective image.
  • the first feature includes a first feature point and a corresponding first descriptor
  • the determining of at least one group of image pairs in the panoramic image set and the corresponding matching results according to the first feature includes:
  • a first essential matrix is determined according to the multiple sets of feature point pairs, and the multiple sets of feature point pairs are filtered by using the first essential matrix to obtain matching results corresponding to the image pairs.
  • the method further includes:
  • the image pairs whose number of feature point pairs meet the preset first condition are filtered.
  • the first essential matrix is determined according to the multiple sets of feature point pairs, including:
  • the angle error of the feature point pair is determined according to the angle errors of the two first feature points in the feature point pair, wherein the angle error of a first feature point is the angle between the epipolar plane and the line connecting the spherical point of the unit sphere corresponding to the first feature point and the optical center of the unit sphere;
  • the number of interior points of each of the essential matrices is determined, and the essential matrix with the largest number of interior points is determined as the first essential matrix.
  • the determining the number of interior points corresponding to each of the essential matrices includes:
  • the number of interior points corresponding to the essential matrix is determined according to all interior points.
  • the filtering of multiple sets of feature point pairs by using the first essential matrix includes:
  • the feature point pairs whose angle errors meet the preset third condition are filtered.
  • it also includes:
  • a perspective image related to the feature point pair is determined according to the third mapping relationship corresponding to the panoramic image and the coordinates, on the panoramic image, of the first feature point belonging to the feature point pair, wherein the perspective image related to the feature point pair is a perspective image in which there is a second feature point corresponding to the first feature point belonging to the feature point pair;
  • the image pairs are filtered using the perspective images related to the feature point pairs.
  • the filtering of the image pair using the perspective image that is related to the feature point pair includes:
  • the constructing a point cloud model according to the at least one set of image pairs and corresponding matching results includes:
  • an unregistered image is determined as a registered image, until each panoramic image in the panoramic image set is a registered image.
  • a registered image, wherein the unregistered image is a panoramic image in which none of the first feature points have been triangulated, the registered image is a panoramic image with triangulated first feature points, and the first three-dimensional points include the initial three-dimensional points, or the initial three-dimensional points together with three-dimensional points formed by triangulating the first feature points of registered images;
  • the camera pose of the registration image is determined, the first feature point of the registration image is triangulated to form a corresponding three-dimensional point, and the third feature point of the registered image is triangulated to form a corresponding three-dimensional point, wherein the third feature point is a first feature point in the registered image that matches the first feature point of the registration image.
  • determining a group of image pairs as initial image pairs according to preset initialization conditions, feature point pairs of each group of image pairs and the first essential matrix including:
  • determining whether the image pair satisfies the initialization condition according to the feature point pair and the first essential matrix includes:
  • the three-dimensional points are filtered according to the reprojection error and triangulation angle, wherein the displacement variables include a rotation variable and a translation variable;
  • essential matrices whose number of interior points is greater than or equal to a point-number threshold are selected from the essential matrices obtained by multiple calculations; at least one set of displacement variables is determined according to each selected essential matrix, the feature points of the feature point pairs are triangulated for each set of displacement variables to form the three-dimensional points corresponding to each set of displacement variables, the three-dimensional points are filtered according to the reprojection error and triangulation angle of each set of three-dimensional points, and for each essential matrix the displacement variables corresponding to the largest set of three-dimensional points are retained;
  • it also includes:
  • the feature points of the registered image are triangulated to form corresponding three-dimensional points
  • the third feature points of the registered images are triangulated to form corresponding three-dimensional points, and the camera pose of each registered image and the position of each three-dimensional point are optimized by minimizing the reprojection error of each three-dimensional point on each registered image.
  • it also includes:
  • the panorama image to be matched corresponding to each panorama image is determined according to a preset collocation rule.
  • an apparatus for constructing a point cloud model including:
  • an acquisition module configured to acquire the first feature of the panoramic image in the panoramic image set
  • a matching module configured to determine at least one set of image pairs in the panoramic image set and corresponding matching results according to the first feature, wherein the image pairs include two panoramic images matched by the first feature, and the matching the result indicates the correspondence between the first features of the two panoramic images;
  • the building module is configured to build a point cloud model according to the at least one set of image pairs and corresponding matching results.
  • the obtaining module is specifically configured as:
  • a first feature of the panoramic image is determined based on at least one of the first sub-features.
  • the specific configuration is:
  • the third mapping relationship between the pixel point coordinates of the panoramic image and the pixel point coordinates of the perspective image is determined according to the first mapping relationship and the second mapping relationship, and the pixel information of the pixel points of the perspective image is determined according to the pixel information of the pixel points of the panoramic image and the third mapping relationship.
  • the second feature of the perspective image includes a second feature point and a corresponding second descriptor
  • the acquisition module is configured to determine the first sub-feature of the corresponding position of the panoramic image according to the second feature of the perspective image
  • the specific configuration is as follows:
  • the first descriptor corresponding to the first feature point of the panoramic image is determined according to the second descriptor corresponding to the second feature point of the perspective image.
  • the first feature includes a first feature point and a corresponding first descriptor
  • the matching module is specifically configured as:
  • a plurality of sets of feature point pairs are determined according to the first descriptors of the two panoramic images of each set of the image pairs, wherein each set of the feature point pairs includes two correspondingly matched first feature points belonging to the two panoramic images ;
  • a first essential matrix is determined according to the multiple sets of feature point pairs, and the multiple sets of feature point pairs are filtered by using the first essential matrix to obtain matching results corresponding to the image pairs.
  • when the matching module is configured to determine multiple sets of feature point pairs according to the first descriptors of the two panoramic images of each set of the image pairs, it is further configured to:
  • the image pairs whose number of feature point pairs meet the preset first condition are filtered.
  • the specific configuration is:
  • the angle error of the feature point pair is determined according to the angle errors of the two first feature points in the feature point pair, wherein the angle error of a first feature point is the angle between the epipolar plane and the line connecting the spherical point of the unit sphere corresponding to the first feature point and the optical center of the unit sphere;
  • the number of interior points corresponding to each of the essential matrices is determined, and the essential matrix with the largest number of interior points is determined as the first essential matrix.
  • the specific configuration is:
  • the number of interior points corresponding to the essential matrix is determined according to all interior points.
  • the specific configuration is:
  • the feature point pairs whose angle errors meet the preset third condition are filtered.
  • the matching module is further configured to:
  • a perspective image related to the feature point pair is determined according to the third mapping relationship corresponding to the panoramic image and the coordinates, on the panoramic image, of the first feature point belonging to the feature point pair, wherein the perspective image related to the feature point pair is a perspective image in which there is a second feature point corresponding to the first feature point belonging to the feature point pair;
  • the image pairs are filtered using the perspective images related to the feature point pairs.
  • the specific configuration is:
  • the building module is specifically configured as:
  • an unregistered image is determined as a registered image, until each panoramic image in the panoramic image set is a registered image.
  • a registered image, wherein the unregistered image is a panoramic image in which none of the first feature points have been triangulated, the registered image is a panoramic image with triangulated first feature points, and the first three-dimensional points include the initial three-dimensional points, or the initial three-dimensional points together with three-dimensional points formed by triangulating the first feature points of registered images;
  • the camera pose of the registration image is determined, the first feature point of the registration image is triangulated to form a corresponding three-dimensional point, and the third feature point of the registered image is triangulated to form a corresponding three-dimensional point, wherein the third feature point is a first feature point in the registered image that matches the first feature point of the registration image.
  • when the building module is configured to determine a group of image pairs as initial image pairs according to preset initialization conditions, the feature point pairs of each group of image pairs, and the first essential matrix, the specific configuration is:
  • the specific configuration is:
  • the three-dimensional points are filtered according to the reprojection error and triangulation angle, wherein the displacement variables include a rotation variable and a translation variable;
  • essential matrices whose number of interior points is greater than or equal to a point-number threshold are selected from the essential matrices obtained by multiple calculations; at least one set of displacement variables is determined according to each selected essential matrix, the feature points of the feature point pairs are triangulated for each set of displacement variables to form the three-dimensional points corresponding to each set of displacement variables, the three-dimensional points are filtered according to the reprojection error and triangulation angle of each set of three-dimensional points, and for each essential matrix the displacement variables corresponding to the largest set of three-dimensional points are retained;
  • the building module is further configured to:
  • the feature points of the registered image are triangulated to form corresponding three-dimensional points
  • the third feature points of the registered images are triangulated to form corresponding three-dimensional points, and the camera pose of each registered image and the position of each three-dimensional point are optimized by minimizing the reprojection error of each three-dimensional point on each registered image.
  • the matching module is further configured to:
  • the panorama image to be matched corresponding to each panorama image is determined according to a preset collocation rule.
  • an electronic device, the device including a memory and a processor, the memory being used for storing computer instructions executable on the processor, and the processor being used for constructing a point cloud model based on the method described in the first aspect when executing the computer instructions.
  • a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, implements the method of the first aspect.
  • a computer program comprising computer-readable codes, wherein when the computer-readable codes are executed in an electronic device, a processor in the electronic device executes the method of any one of the above aspects.
  • a panoramic image set is formed by panoramic images, the panoramic images in the panoramic image set are further matched according to the first feature, and two panoramic images that have completed the first feature matching are regarded as a set of image pairs, so as to determine at least one group of image pairs and corresponding matching results, and finally a point cloud model is constructed according to the determined at least one group of image pairs and corresponding matching results. Since panoramic images are used for matching and the point cloud model is further constructed according to the matching results, the number of images in the image set can be reduced, thereby improving the matching efficiency and modeling efficiency; and since the spatial range corresponding to a panoramic image is large, the matching effect between images can be improved, thereby improving the accuracy and quality of the point cloud model.
  • FIG. 1 is a flowchart of a method for constructing a point cloud model according to an embodiment of the present disclosure
  • FIG. 2 is a schematic diagram of a panoramic image according to an embodiment of the present disclosure
  • FIG. 3 is a schematic diagram of a manner of acquiring a first feature of a panoramic image according to an embodiment of the present disclosure
  • FIG. 4 is a schematic diagram of a manner for determining an image pair and a corresponding matching result according to an embodiment of the present disclosure
  • FIG. 5 is a schematic diagram of an angle error shown in an embodiment of the present disclosure.
  • FIG. 6 is a schematic diagram of a manner of determining an initial image pair according to preset initialization conditions, feature point pairs and a first essential matrix according to an embodiment of the present disclosure;
  • FIG. 7 is a structural diagram of an apparatus for constructing a point cloud model according to an embodiment of the present disclosure.
  • FIG. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
  • Although the terms first, second, third, etc. may be used in this disclosure to describe various pieces of information, such information should not be limited by these terms. These terms are only used to distinguish the same type of information from each other.
  • the first information may also be referred to as the second information, and similarly, the second information may also be referred to as the first information, without departing from the scope of the present disclosure.
  • the word "if" as used herein can be interpreted as "at the time of", "when" or "in response to determining".
  • the structure-from-motion (SfM) technology can construct a 3D point cloud model from multiple images, which can be widely used in the fields of physical space digitization, high-precision map construction, and augmented reality.
  • however, the matching effect between multiple images is not good, which leads to low precision and poor quality of the point cloud models constructed subsequently.
  • At least one embodiment of the present disclosure provides a method for constructing a point cloud model. Please refer to FIG. 1 , which shows a flow of the method, including steps S101 to S103 .
  • the point cloud model is a three-dimensional model corresponding to the space, and the space may refer to the real world.
  • Each object in the real world is represented by a corresponding point cloud in the model, and a point cloud is a collection of points composed of three-dimensional points.
  • the method can be executed by an electronic device such as a terminal device or a server, and the terminal device can be a user equipment (UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, etc.
  • the method can be implemented by the processor calling the computer-readable instructions stored in the memory.
  • the method may be performed by a server, and the server may be a local server, a cloud server, or the like.
  • step S101 the first feature of the panoramic image in the panoramic image set is acquired.
  • the panoramic image set includes multiple panoramic images, and the panoramic images may be panoramic images covering various angles, such as 360° panoramic images; a panoramic image may be an image obtained by a spherical camera, or an image obtained by stitching the images of multiple fisheye cameras using the equirectangular projection method. Please refer to FIG. 2, which exemplarily shows a panoramic image.
  • Each panoramic image corresponds to a local subspace of the modeling space, and the size of the local subspace corresponding to a panoramic image is larger than the size of the local subspace corresponding to an ordinary image under the same parameters; the local subspaces corresponding to all the panoramic images of the panoramic image set can form the entire modeling space, and the local subspaces corresponding to different panoramic images can have overlapping regions.
  • a pre-trained neural network may be used to obtain the first feature of the panoramic image, or other methods may be used to obtain the first feature of the panoramic image, which is not intended to be specifically limited in the present disclosure.
  • the first feature of each panorama image in the panorama image set may be acquired.
  • step S102 at least one set of image pairs in the panoramic image set and corresponding matching results are determined according to the first feature, wherein the image pair includes two panoramic images matched by the first feature, and the matching result indicates the correspondence between the first features of the two panoramic images.
  • the first feature matching refers to the existence of first feature points corresponding to the same space in the two panoramic images, that is, at least one first feature point of one panoramic image and at least one first feature point of the other panoramic image correspond to the same space in the real world.
  • the panorama images in the panorama image set can be matched by means of brute force matching, that is, the way of traversal is used for matching.
  • when two panoramic images are matched, the first features of the two panoramic images are used for matching, that is, the matching result of the panoramic images is determined according to the matching result of the first features of the two panoramic images.
  • if the first features of the two panoramic images are matched, that is, the feature matching of the two panoramic images is completed, then the two images can be determined as a set of image pairs, and the correspondence between the first features of the two panoramic images is determined as the matching result at the same time.
  • each panoramic image can form an image pair with another panoramic image, and can also form multiple sets of image pairs with other panoramic images respectively. That is to say, after a panoramic image forms an image pair, it is not locked; it can still form new image pairs with other panoramic images.
  • step S103 a point cloud model is constructed according to the at least one set of image pairs and the corresponding matching results.
  • the point cloud model is constructed by using the matching result in the above step S102.
  • the modeling process is also the process of structure from motion, including camera registration and point cloud reconstruction.
  • Camera registration is to restore the camera motion parameters of each panoramic image in the panoramic image set (for example, they can be represented by camera poses), and point cloud reconstruction is to restore the point cloud of the three-dimensional structure of the corresponding local subspace (that is, the local subspace mentioned in step S102).
  • the point cloud model can be constructed by means of incremental reconstruction.
  • a panoramic image set is composed of panoramic images, the panoramic images in the panoramic image set are further matched according to the first feature, and two panoramic images whose features have been matched are regarded as a set of image pairs, thereby determining at least one set of image pairs and corresponding matching results, and finally a point cloud model is constructed according to the determined at least one set of image pairs and corresponding matching results. Since panoramic images are used for matching and the point cloud model is further constructed according to the matching results, the number of images in the image set can be reduced, thereby improving the matching efficiency and modeling efficiency; and since the spatial range corresponding to a panoramic image is large, the matching effect between images can be improved, thereby improving the accuracy and quality of the point cloud model.
  • the first feature of the panoramic image of the panoramic image set can be obtained in the following manner; please refer to FIG. 3, which shows the flow of this manner, including steps S301 to S304.
  • step S301 multiple perspective images corresponding to the panoramic image are determined, wherein a set of spaces corresponding to the multiple perspective images is a space corresponding to the panoramic image.
  • multiple perspective images may be determined in the following manner: first, the unit sphere corresponding to the panoramic image is acquired, and a first mapping relationship between the pixel point coordinates of the panoramic image and the point coordinates of the unit sphere is determined; next, a plurality of perspective images are determined according to the unit sphere, and a second mapping relationship between the pixel point coordinates of the perspective images and the point coordinates of the unit sphere is determined, wherein the set of spherical points corresponding to the plurality of perspective images is the unit spherical surface; finally, a third mapping relationship between the pixel coordinates of the panoramic image and the pixel coordinates of the perspective images is determined according to the first mapping relationship and the second mapping relationship, and the pixel information of the pixel points of the perspective images is determined according to the pixel information of the pixel points of the panoramic image and the third mapping relationship.
  • acquiring the unit sphere corresponding to the panoramic image may be by back-projecting the panoramic image back to the unit sphere.
  • a virtual perspective camera with the center of the unit sphere as its optical center can shoot the unit sphere, and the obtained image can be determined as a perspective image; when shooting the unit sphere, the virtual camera can be rotated at equal intervals by a certain angle, and a perspective image is taken after each rotation, until the entire spherical surface is covered, so that multiple perspective images covering all viewing angles are obtained. For example, if a perspective image is taken every 60° of rotation, six perspective images are required to cover the entire spherical surface. The field of view (FOV) and focal length of the virtual camera can be set to the parameters of commonly used image acquisition devices (such as mobile phones and digital cameras).
  • the first mapping relationship between the pixel point coordinates of the panoramic image and the point coordinates of the unit sphere can be expressed by the following formula (1):
  • in formula (1), (s_x, s_y, s_z) are the point coordinates of the unit sphere, (u, v) are the pixel coordinates of the panoramic image, w is the width of the panoramic image, and h is the height of the panoramic image.
  • in formula (2), (s_x, s_y, s_z) are the point coordinates of the unit sphere, (x, y) are the pixel coordinates of the perspective image, f is the focal length of the virtual camera, and the width and height of the perspective image also appear in the formula;
  • in formula (3), the point coordinates of the unit sphere are additionally transformed by the rotation matrix R of the virtual camera, and (x, y), the width and height of the perspective image, and the focal length f have the same meanings as in formula (2).
  • at one of the multiple shooting angles of the virtual camera, the camera coordinate system coincides with the coordinate system of the unit sphere; therefore, at this angle, the third mapping relationship between the pixel coordinates of the panoramic image and the pixel coordinates of the perspective image can be obtained by combining the above formula (1) and formula (2); at the other angles, the third mapping relationship between the pixel coordinates of the panoramic image and the pixel coordinates of the perspective image can be obtained by combining the above formula (1) and formula (3). One common form of these mappings is sketched below.
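  • The original formulas (1) to (3) are not reproduced in this text; the following is a minimal sketch of one common convention for these mappings, consistent with the variables listed above. The axis conventions and the symbols W′ and H′ for the perspective image size are assumptions, not the patent's exact formulas.

```latex
% A sketch of formulas (1)-(3) under one common convention (an assumption;
% the patent's exact axis conventions are not reproduced in this text).
% Formula (1): panoramic pixel (u, v) -> unit-sphere point (s_x, s_y, s_z).
\[
\theta = 2\pi\,\frac{u}{w}, \qquad \varphi = \pi\,\frac{v}{h}, \qquad
(s_x,\, s_y,\, s_z) = (\sin\varphi\cos\theta,\; \sin\varphi\sin\theta,\; \cos\varphi).
\]
% Formula (2): perspective pixel (x, y) -> unit-sphere point, camera aligned with the sphere;
% W' and H' denote the perspective image width and height, f the focal length.
\[
d = \begin{pmatrix} x - W'/2 \\ y - H'/2 \\ f \end{pmatrix}, \qquad
(s_x,\, s_y,\, s_z)^{\top} = \frac{d}{\lVert d \rVert}.
\]
% Formula (3): the same back-projection for a virtual camera rotated by R.
\[
(s_x,\, s_y,\, s_z)^{\top} = R\,\frac{d}{\lVert d \rVert}.
\]
```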
  • determining the pixel information of the pixel points of the perspective image according to the pixel information of the pixel points of the panoramic image and the third mapping relationship may be directly determining the pixel information of a pixel point of the panoramic image as the pixel information of the corresponding pixel point of the perspective image;
  • alternatively, the pixel information of the perspective image can be obtained by sampling and/or bilinear interpolation of the pixel points of the panoramic image.
  • the pixel information may be the luminance value of the pixel, or the value of each color channel (eg, three channels of red, green, and blue) of the pixel.
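  • The following is a minimal sketch of how such a third mapping relationship could be applied in practice to render one perspective view from an equirectangular panorama: each perspective pixel is back-projected to the unit sphere, rotated into the sphere frame, mapped into the panorama, and bilinearly sampled. The function name, the axis conventions, and the use of numpy/OpenCV are illustrative assumptions, not the patent's reference implementation.

```python
import numpy as np
import cv2  # used only for the bilinear remapping step


def perspective_from_panorama(pano, fov_deg=60.0, size=640, R=np.eye(3)):
    """Render one perspective view from an equirectangular panorama.

    pano: HxWx3 equirectangular image; R: rotation of the virtual camera
    (with R = identity the camera looks along the sphere's +z axis).
    The sphere/axis conventions below are illustrative assumptions.
    """
    h, w = pano.shape[:2]
    f = 0.5 * size / np.tan(np.radians(fov_deg) / 2.0)  # focal length from the FOV

    # Pixel grid of the perspective image, back-projected to unit-sphere rays.
    x, y = np.meshgrid(np.arange(size), np.arange(size))
    d = np.stack([x - size / 2.0, y - size / 2.0,
                  np.full_like(x, f, dtype=float)], axis=-1)
    d = d / np.linalg.norm(d, axis=-1, keepdims=True)    # points on the unit sphere
    s = d @ R.T                                          # rotate into the sphere frame

    # Unit-sphere point -> panorama pixel (equirectangular convention, assumed).
    theta = np.arctan2(s[..., 1], s[..., 0]) % (2 * np.pi)   # longitude in [0, 2*pi)
    phi = np.arccos(np.clip(s[..., 2], -1.0, 1.0))           # latitude in [0, pi]
    u = theta / (2 * np.pi) * w
    v = phi / np.pi * h

    # Bilinear sampling of the panorama's pixel information.
    return cv2.remap(pano, u.astype(np.float32), v.astype(np.float32),
                     interpolation=cv2.INTER_LINEAR, borderMode=cv2.BORDER_WRAP)
```

  • Calling such a function for several preset rotations R (for example, every 60° of yaw plus upward- and downward-facing views) would produce a set of perspective images covering the whole sphere, as described above.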
  • step S302 a second feature of at least one perspective image in the plurality of perspective images is acquired.
  • a pre-trained neural network can be used to extract the second feature of the perspective image, or other methods can be used to extract the second feature of the perspective image, and the present disclosure does not intend to impose specific restrictions on the extraction method.
  • the second feature of each perspective image corresponding to the panoramic image may be acquired.
  • the second feature includes the second feature points and the corresponding second descriptors, that is, all the second feature points and the corresponding second descriptors in the perspective image constitute the second feature of the perspective image.
  • step S303 the first sub-feature of the corresponding position of the panoramic image is determined according to the second feature of the perspective image, wherein the perspective image and the corresponding position of the panoramic image correspond to the same space.
  • the corresponding positions of the perspective image and the panoramic image correspond to the same space, that is, the corresponding positions of the perspective image and the panoramic image correspond to the same set of spherical points on the unit sphere.
  • the first sub-feature may include all the first feature points and corresponding first descriptors in the corresponding positions of the panoramic image.
  • the first sub-feature of the corresponding position of the panoramic image may be determined in the following manner: first, the coordinates of the first feature point of the panoramic image are determined according to the coordinates of the second feature point of the perspective image and the third mapping relationship; next, the first descriptor corresponding to the first feature point of the panoramic image is determined according to the second descriptor corresponding to the second feature point of the perspective image.
  • the point on the panoramic image corresponding to the second feature point is the first feature point, that is, the first feature point corresponds to the second feature point, or in other words, the spherical point of the unit sphere corresponding to the first feature point is the same as the spherical point of the unit sphere corresponding to the second feature point.
  • the second descriptor corresponding to the second feature point may be directly used as the first descriptor of the corresponding first feature point.
  • step S304 a first feature of the panoramic image is determined according to at least one of the first sub-features.
  • the first feature of the panoramic image includes all the first feature points in the panoramic image and the corresponding first descriptors.
  • In this way, the mapping relationship between the panoramic image and the perspective images is determined through the mapping relationship between the panoramic image and the unit sphere and the mapping relationship between the unit sphere and the perspective images, that is, the unit sphere is used as the medium to map the panoramic image into a plurality of perspective images, and the extraction of the first feature of the panoramic image is further achieved by extracting the second features of the perspective images and inversely mapping the second feature points back to the first feature points of the panoramic image.
  • the first feature includes a first feature point and a corresponding first descriptor.
  • at least one set of image pairs in the panoramic image set and the corresponding matching results may be determined according to the first feature in the following manner.
  • FIG. 4 shows the flow of the above determination method, including steps S401 to S403 .
  • step S401 multiple groups of image pairs are determined according to each panoramic image and the corresponding panoramic image to be matched.
  • the two panoramic images form a set of image pairs.
  • the panoramic image to be matched corresponding to each panoramic image may be determined according to the space corresponding to each panoramic image, or may be determined according to a preset collocation rule. That is to say, when determining the panoramic images to be matched for a panoramic image, the panoramic images whose corresponding spaces overlap with the corresponding space of that panoramic image may be used as the panoramic images to be matched, or they may be determined according to a preset collocation rule.
  • the collocation rule can be determined according to the above principle; for example, the panoramic images are numbered according to their corresponding spatial order, and then a preset number (e.g., 10) of panoramic images following each panoramic image are used as its panoramic images to be matched (a small sketch of such a rule is given below). Alternatively, all panoramic images other than a given panoramic image may be used as its panoramic images to be matched.
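  • A small sketch of the window-based collocation rule described above; the window size of 10 is the example value from the text, and the function name is illustrative.

```python
def candidate_pairs(num_images, window=10):
    """Pair each panorama with the next `window` panoramas in spatial order."""
    pairs = []
    for i in range(num_images):
        for j in range(i + 1, min(i + 1 + window, num_images)):
            pairs.append((i, j))
    return pairs
```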
  • step S402 multiple sets of feature point pairs are determined according to the first descriptors of the two panoramic images of the image pair, wherein each set of feature point pairs includes two correspondingly matched first feature points belonging to the two panoramic images.
  • the first descriptor with the closest Euclidean distance can be searched in the second panoramic image for each first descriptor in the first panoramic image of the image pair, and then, in reverse, the first descriptor with the closest Euclidean distance can be searched in the first panoramic image for each first descriptor in the second panoramic image of the image pair; if a certain first descriptor in the first panoramic image and a certain first descriptor in the second panoramic image are each the first descriptor with the closest Euclidean distance to the other in the other panoramic image, the two first descriptors are considered to match, and the two corresponding first feature points are then determined to match, that is, the two first feature points complete feature matching and form a feature point pair. A sketch of this cross-check is given below.
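  • A minimal sketch of the mutual nearest-neighbour (cross-check) test described above, using Euclidean distances between descriptors; the numpy-based implementation and function name are illustrative assumptions.

```python
import numpy as np


def mutual_nn_matches(desc1, desc2):
    """Cross-checked nearest-neighbour matching of two descriptor sets.

    desc1: (N1, D) first descriptors of panorama 1; desc2: (N2, D) of panorama 2.
    Returns index pairs (i, j) that are each other's closest descriptor in
    Euclidean distance, i.e. the feature point pairs described above.
    """
    # Pairwise squared Euclidean distances, shape (N1, N2).
    d2 = (np.sum(desc1 ** 2, axis=1)[:, None]
          + np.sum(desc2 ** 2, axis=1)[None, :]
          - 2.0 * desc1 @ desc2.T)
    nn12 = np.argmin(d2, axis=1)   # best match in image 2 for each descriptor of image 1
    nn21 = np.argmin(d2, axis=0)   # best match in image 1 for each descriptor of image 2
    return [(i, j) for i, j in enumerate(nn12) if nn21[j] == i]
```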
  • a first condition is preset, and the first condition is used to filter the multiple sets of image pairs determined in step S401, that is, some image pairs can be removed by using the first condition.
  • the first condition may be that the number of feature point pairs is less than a second number threshold, that is, image pairs whose number of feature point pairs is less than the second number threshold are filtered out, that is, removed; for example, the second number threshold may be set to 5 or 10, etc.
  • the embodiments of the present disclosure do not intend to impose specific limitations on the specific value of the second quantity threshold.
  • each group of feature point pairs represents the corresponding relationship between the two first feature points, and multiple groups of feature point pairs constitute the matching result of the image pairs.
  • step S403 a first essential matrix is determined according to the multiple sets of feature point pairs, and the multiple sets of feature point pairs are filtered by using the first essential matrix to obtain a matching result corresponding to the image pair.
  • the first essential matrix may be determined in the following manner: first, the angle error of the feature point pair is determined according to the angle errors of the two first feature points in the feature point pair, wherein the angle error of a first feature point is the angle between the epipolar plane and the line connecting the spherical point of the unit sphere corresponding to the first feature point and the optical center of the unit sphere; next, with the angle error of the feature point pair as the residual term, the essential matrix is calculated multiple times, each time from a preset number of feature point pairs; finally, the number of interior points corresponding to each essential matrix is determined, and the essential matrix with the largest number of interior points is determined as the first essential matrix.
  • E is the essential matrix
  • S = (s_x, s_y, s_z) are the point coordinates of the unit sphere corresponding to the first feature point in the second panoramic image.
  • the larger of the angle errors of the two first feature points is determined as the angle error of the feature point pair, that is, the angle error of the feature point pair is determined according to the following formula (5):
  • the angle error of the first feature point of the second panoramic image is the angle between the line connecting the spherical point S′ corresponding to that first feature point on the second panoramic image with the optical center O2 and the corresponding epipolar plane, wherein the corresponding epipolar plane is the plane formed by the line connecting the spherical point S corresponding to the first feature point on the first panoramic image with the optical center O1 and the line connecting the two optical centers O1 and O2;
  • the angle error of the first feature point of the first panoramic image is the angle between the line connecting the spherical point corresponding to that first feature point on the first panoramic image with the optical center and the corresponding epipolar plane, wherein the corresponding epipolar plane is the plane formed by the line connecting the spherical point corresponding to the first feature point on the second panoramic image with the optical center and the line connecting the two optical centers. One assumed form of this angle error is sketched below.
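  • Formulas (4) and (5) are not reproduced in this text; the following is a sketch of an angle error consistent with the definitions above, where S and S′ are the unit-sphere points of the matched first feature points on the first and second panoramic images and E is the essential matrix. The exact form is an assumption.

```latex
% Assumed forms consistent with the surrounding description; the patent's
% exact formulas (4) and (5) are not reproduced in this text.
\[
S'^{\top} E\, S = 0 \qquad \text{(spherical epipolar constraint, cf. formula (4))}
\]
\[
e(S') = \arcsin\frac{\lvert S'^{\top} E\, S \rvert}{\lVert E\, S \rVert}, \qquad
e(S) = \arcsin\frac{\lvert S^{\top} E^{\top} S' \rvert}{\lVert E^{\top} S' \rVert}, \qquad
e_{\text{pair}} = \max\bigl(e(S),\, e(S')\bigr) \quad \text{(cf. formula (5))}.
\]
```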
  • using the angle error as the error measure on the sphere can better adapt to the camera model of the panoramic image.
  • multiple essential matrices can be calculated by RANSAC (Random Sample Consensus) and the 5-point algorithm with the angle error of the feature point pair as the residual term, and an essential matrix can be calculated from every 5 sets of feature point pairs; therefore, multiple essential matrices can be obtained using the above method.
  • the number of interior points corresponding to an essential matrix can be determined in the following manner: first, the angle error of each group of feature point pairs of the image pair is calculated according to the essential matrix; next, the feature point pairs whose angle errors meet a preset second condition are determined to be interior points; finally, the number of interior points corresponding to the essential matrix is determined according to all the interior points. That is, the above formula (5) and the essential matrix are used to determine the angle error of each group of feature point pairs; the second condition is preset and is used to screen the interior points.
  • the second condition may be that the angle error is less than a first angle threshold, that is, the feature point pairs whose angle errors are smaller than the first angle threshold are determined as interior points.
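  • A hedged sketch of the RANSAC loop described above: essential matrices are repeatedly estimated from 5 random feature point pairs and the one with the most interior points under the angle-error test is kept. The five-point solver itself is abstracted behind the `solve_5pt` placeholder, and the angle-error formula and the example threshold are assumptions consistent with the text, not the patent's exact implementation.

```python
import numpy as np


def angle_error(E, S1, S2):
    """Angle (radians) between each ray and its epipolar plane, per pair.

    S1, S2: (N, 3) unit-sphere points of matched first feature points in the
    two panoramas. Returns the max of the two per-point errors (cf. formula (5));
    the exact formula is an assumption consistent with the text.
    """
    n2 = S1 @ E.T   # epipolar-plane normals for the rays of the second panorama
    n1 = S2 @ E     # epipolar-plane normals for the rays of the first panorama
    e2 = np.arcsin(np.abs(np.sum(S2 * n2, axis=1)) / (np.linalg.norm(n2, axis=1) + 1e-12))
    e1 = np.arcsin(np.abs(np.sum(S1 * n1, axis=1)) / (np.linalg.norm(n1, axis=1) + 1e-12))
    return np.maximum(e1, e2)


def ransac_essential(S1, S2, solve_5pt, iters=1000, thresh=np.radians(0.4)):
    """RANSAC over 5-point samples; `solve_5pt` is a placeholder solver that
    returns candidate essential matrices for 5 correspondences. The threshold
    of 0.4 degrees reuses the example angle threshold quoted in the text."""
    best_E, best_inliers = None, np.zeros(len(S1), dtype=bool)
    for _ in range(iters):
        idx = np.random.choice(len(S1), 5, replace=False)
        for E in solve_5pt(S1[idx], S2[idx]):
            inliers = angle_error(E, S1, S2) < thresh   # interior points (second condition)
            if inliers.sum() > best_inliers.sum():
                best_E, best_inliers = E, inliers
    return best_E, best_inliers
```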
  • the first essential matrix may be used to filter the multiple sets of feature point pairs: first, the angle error of each set of feature point pairs of the image pair is determined according to the first essential matrix ; Next, filter the feature point pairs whose angle error meets the preset third condition.
  • the above-mentioned formula (5) and the first essential matrix can be used to calculate the angle error of each group of feature point pairs; a third condition is preset, and the feature point pair is screened by the third condition.
  • the third condition may be that the angle error is greater than or equal to a second angle threshold (for example, greater than or equal to 0.4 degrees), that is, the feature point pairs whose angle errors are greater than or equal to the second angle threshold are filtered out, and the feature point pairs whose angle errors are smaller than the second angle threshold are retained.
  • the feature point pairs between an image pair are determined by performing feature matching on the image pair, the first essential matrix is further determined according to the feature point pairs, and finally the first essential matrix is used to filter the feature point pairs; the angle error is used both when determining the first essential matrix and when filtering the feature point pairs. Therefore, compared with the other essential matrices, the first essential matrix is consistent with the largest number of feature point pairs, and the filtering step removes the feature point pairs that are inconsistent with the first essential matrix. This not only improves the accuracy of the first essential matrix, but also retains as many feature point pairs as possible on the premise of removing wrong feature point pairs, thereby improving the matching precision and accuracy of the two panoramic images of the image pair. Moreover, the 360-degree viewing-angle coverage of the panoramic image is used to increase the number of feature matches between images and reduce the probability of camera registration failure in weak-texture areas.
  • the distribution of the first feature points can also be used to determine whether the matching of the two panoramic images is caused by repeated texture, so as to further filter the multiple groups of image pairs. Specifically, the following manner can be used: first, the coordinates, in the panoramic images to which they belong, of the two first feature points of each feature point pair of the image pair are obtained; next, a perspective image related to the feature point pair is determined according to the third mapping relationship corresponding to the panoramic image and the coordinates of the first feature point belonging to the feature point pair on the panoramic image, wherein the perspective image related to the feature point pair is a perspective image in which there is a second feature point corresponding to the first feature point belonging to the feature point pair; finally, the image pair is filtered using the perspective images related to the feature point pairs.
  • the above-mentioned third number threshold can be set to 5, which can avoid erroneous statistics caused by a small number of noise matches.
  • the filtering threshold can be determined according to the total number of perspective images and a preset first ratio. For example, if the total number of perspective images is 6 and the preset first ratio is 0.5, then when the number of perspective images related to the feature point pairs is less than 3, the image pair is filtered out.
  • the first feature points for which feature matching is completed all correspond to second feature points of the perspective images. Therefore, by determining the distribution of the second feature points, it can be determined whether the matching of the two panoramic images is caused by repeated textures, and by removing noise matches, the accuracy of the above judgment can be further improved and erroneously matched image pairs can be excluded. Most repeated textures are local, so the global matching between panoramic images is used to eliminate false matches as much as possible and avoid camera registration errors caused by them. A sketch of such a distribution test is given below.
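  • A minimal sketch of one way to implement the distribution test described above: count how many distinct perspective images are related to the feature point pairs of an image pair and keep the pair only if that count is large enough. The function and argument names are illustrative, and the values 6, 5 and 0.5 are the example values quoted in the text.

```python
from collections import Counter


def repeated_texture_filter(pair_perspective_ids, num_perspectives=6,
                            min_pairs=5, ratio=0.5):
    """Decide whether an image pair is kept under the distribution test above.

    pair_perspective_ids: for each feature point pair, the index of the
    perspective image containing the corresponding second feature point.
    """
    counts = Counter(pair_perspective_ids)
    # A perspective image counts as "related" only if enough pairs fall into it,
    # avoiding erroneous statistics caused by a small number of noise matches.
    related = [pid for pid, c in counts.items() if c >= min_pairs]
    return len(related) >= num_perspectives * ratio
```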
  • a point cloud model may be constructed according to the at least one group of image pairs and the corresponding matching results in the following manner: first, a set of image pairs is determined as the initial image pair according to preset initialization conditions, the feature point pairs of each group of image pairs and the first essential matrix, the camera pose of each panoramic image of the initial image pair is determined, and the first feature point pairs of the initial image pair are triangulated to form the initial three-dimensional points; next, according to the matching relationship between the first feature points corresponding to the first three-dimensional points and the first feature points of each unregistered image, an unregistered image is determined as a registered image, until each panoramic image in the panoramic image set is a registered image, wherein the unregistered image is a panoramic image in which none of the first feature points have been triangulated, and the registered image is a panoramic image with triangulated first feature points;
  • the first three-dimensional points include the initial three-dimensional points, or the initial three-dimensional points together with three-dimensional points formed by triangulating the first feature points of registered images
  • when determining the initial image pair, image pairs may be selected in descending order of the number of feature point pairs, and after each image pair is selected, it is judged whether the image pair satisfies the initialization conditions, until a selected image pair satisfies the initialization conditions, and that image pair is determined to be the initial image pair. A high-level sketch of this incremental process is given below.
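  • A high-level sketch of the incremental loop described above (initial pair, then registering one image at a time). Every sub-step is injected through the `ops` object as a placeholder callable; this is a structural sketch under assumed helper names, not the patent's reference implementation.

```python
def incremental_reconstruction(images, pairs, matches, ops):
    """Skeleton of the incremental construction described above.

    `ops` bundles the sub-steps (initial-pair selection, pose estimation,
    triangulation, bundle adjustment) as callables supplied by the caller.
    """
    init = ops.select_initial_pair(pairs, matches)        # initialization conditions
    poses, points3d = ops.initialize(init, matches)       # poses + initial 3D points
    registered = set(init)

    while len(registered) < len(images):
        img = ops.next_image(points3d, matches, registered)       # most 2D-3D matches first
        poses[img] = ops.register(img, points3d, matches)         # e.g. RANSAC + P3P
        points3d = ops.triangulate_new(img, registered, matches, poses, points3d)
        poses, points3d = ops.bundle_adjust(poses, points3d)      # minimize reprojection error
        registered.add(img)
    return poses, points3d
```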
  • whether an image pair satisfies the initialization conditions can be determined through step S601 to step S604.
  • step S601 at least one group (for example, four groups) of displacement variables is determined according to the first essential matrix of the image pair, the feature points of the feature point pairs are triangulated for each group of displacement variables to form the three-dimensional points corresponding to each group of displacement variables, and the three-dimensional points are filtered according to the reprojection error and triangulation angle of each group of three-dimensional points, wherein the displacement variables include rotation variables and translation variables.
  • the rotation variable can be represented by a 3*3 matrix R
  • the translation variable can be represented by a 3-dimensional vector T.
  • a third angle threshold and a fourth angle threshold may be set, and then the three-dimensional points whose reprojection errors on the two panoramic images are both smaller than the third angle threshold and whose triangulation angles are larger than the fourth angle threshold are retained.
  • step S602 in response to the number of the largest group of three-dimensional points being greater than the preset first number threshold, determine the corresponding displacement variable as the first displacement variable.
  • step S603 essential matrices whose number of interior points is greater than or equal to the point-number threshold are selected from the essential matrices obtained by the multiple calculations, at least one group (for example, four groups) of displacement variables is determined according to each selected essential matrix, and for each group of displacement variables the feature points of the feature point pairs are triangulated to form the three-dimensional points corresponding to that group of displacement variables; the three-dimensional points are filtered according to the reprojection error and triangulation angle of each group of three-dimensional points, and for each essential matrix the displacement variables corresponding to the group with the largest number of three-dimensional points are retained.
  • the essential matrices other than the first essential matrix, obtained when determining the first essential matrix, can be reserved for use in this step.
  • a second ratio can be preset, and the point-number threshold can then be determined using the number of interior points of the first essential matrix and the second ratio; for example, the product of the number of interior points of the first essential matrix and the second ratio can be used as the point-number threshold.
  • the second ratio is preset to be 0.6, but the present disclosure does not intend to limit the specific value of the second ratio.
  • the operations performed on the selected essential matrix in this step are the same as the operations in steps S601 to S602, and a displacement variable is reserved for each essential matrix.
  • step S604 if the difference between the displacement variable retained by each essential matrix and the first displacement variable satisfies a preset range, it is determined that the image pair satisfies the initialization condition.
  • the difference between the displacement variable retained by the essential matrix and the first displacement variable can be represented by the angle between the directions of the two displacement variables, and the angle between the directions is obtained by multiplying the rotation matrices of the two displacement variables;
  • the preset range can be expressed by a preset fifth angle threshold, where a value smaller than the fifth angle threshold satisfies the preset range; therefore, when the angle between the direction of the displacement variable retained by each essential matrix and the direction of the first displacement variable is smaller than the fifth angle threshold, it is determined that the image pair satisfies the initialization condition.
  • if the image pair does not satisfy the initialization condition, steps S601 to S604 are used to continue to judge whether other image pairs satisfy the initialization condition.
  • the camera pose of each panoramic image and the position of the initial 3D point can also be optimized by minimizing the reprojection error of the initial 3D point on the two panoramic images of the initial image pair;
  • the camera pose of the registration image can also be optimized by minimizing the reprojection error of the three-dimensional point on the registration image;
  • X_i is the coordinate vector of the i-th three-dimensional point;
  • f denotes the projection of the virtual perspective camera;
  • P_i = [R_i T_i] is the camera matrix of the i-th panoramic image, where R_i and T_i are the corresponding rotation variable and translation variable, respectively;
  • m is the number of panoramic images and n is the number of three-dimensional points.
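  • The reprojection-error objective referenced above is not reproduced in this text; the following is a generic bundle-adjustment form consistent with the listed symbols, with x_{ij} (the observed feature of the i-th three-dimensional point in the j-th panoramic image) introduced here as an assumed symbol.

```latex
% A generic bundle-adjustment objective (assumed form); P_j = [R_j  T_j] is the
% camera matrix of the j-th panoramic image, f(.) the virtual perspective
% camera's projection, and x_{ij} the observed feature point (assumed symbol).
\[
\min_{\{P_j\},\,\{X_i\}} \;\sum_{j=1}^{m}\sum_{i=1}^{n}
\bigl\lVert\, f\!\left(P_j X_i\right) - x_{ij} \,\bigr\rVert^{2}.
\]
```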
  • the above-mentioned determination of the camera pose of the image can be performed using RANSAC (Random Sample Consensus) and P3P algorithms.
  • this method can use panoramic images for camera registration and point cloud reconstruction, thereby completing the construction of the point cloud model; compared with the traditional point cloud model constructed based on ordinary perspective images, the point cloud model constructed based on panoramic images has higher accuracy, better robustness to repeated textures, and more comprehensive scene reconstruction.
  • This method can be used to build a high-precision visual map that provides visual features and 3D landmark points for positioning in autonomous driving and AR. It can also be used to build a 3D model of a specific scene for scene display and VR applications, such as AR/VR tours of tourist attractions, museums and exhibition halls, or to build a 3D model of a building, a block or a city for AR special effects, etc.
  • FIG. 7 shows the structure of the apparatus, including:
  • the obtaining module 701 is configured to obtain the first feature of the panoramic image in the panoramic image set;
  • a matching module 702 configured to determine at least one set of image pairs in the panoramic image set and corresponding matching results according to the first feature, wherein the image pair includes two panoramic images matched by the first feature, the The matching result indicates the correspondence between the first features of the two panoramic images;
  • the construction module 703 is configured to construct a point cloud model according to the at least one set of image pairs and corresponding matching results.
  • the obtaining module is specifically configured to determine the first feature of the panoramic image based on at least one of the first sub-features.
  • when the obtaining module is configured to determine the multiple perspective images corresponding to the panoramic image, it is specifically configured to: determine a third mapping relationship between the pixel point coordinates of the panoramic image and the pixel point coordinates of the perspective image according to the first mapping relationship and the second mapping relationship, and determine the pixel information of the pixel points of the perspective image according to the pixel information of the pixel points of the panoramic image and the third mapping relationship.
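A minimal sketch of these mapping relationships for an equirectangular panorama is given below: the perspective pixel grid is back-projected to the unit sphere, converted to panorama pixel coordinates, and sampled with remap. The field of view, output size, and rotation convention are illustrative assumptions.

```python
import cv2
import numpy as np

def perspective_from_panorama(pano, yaw_deg, pitch_deg, fov_deg=90.0, size=512):
    """Render one virtual perspective view from an equirectangular panorama.

    The perspective pixel grid is back-projected to the unit sphere (second
    mapping), the sphere point is converted to panorama pixel coordinates
    (first mapping), and the composition gives the third mapping used by remap.
    """
    h, w = pano.shape[:2]
    f = 0.5 * size / np.tan(np.radians(fov_deg) / 2.0)
    # Rays of the virtual perspective camera.
    u, v = np.meshgrid(np.arange(size), np.arange(size))
    rays = np.stack([(u - size / 2) / f, (v - size / 2) / f,
                     np.ones_like(u, dtype=np.float64)], axis=-1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)
    # Rotate the rays to the chosen viewing direction on the unit sphere.
    R = cv2.Rodrigues(np.array([np.radians(pitch_deg), np.radians(yaw_deg), 0.0]))[0]
    rays = rays @ R.T
    # Spherical coordinates -> equirectangular pixel coordinates.
    lon = np.arctan2(rays[..., 0], rays[..., 2])
    lat = np.arcsin(np.clip(rays[..., 1], -1.0, 1.0))
    map_x = ((lon / (2 * np.pi) + 0.5) * w).astype(np.float32)
    map_y = ((lat / np.pi + 0.5) * h).astype(np.float32)
    return cv2.remap(pano, map_x, map_y, cv2.INTER_LINEAR)
```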
  • the second feature of the perspective image includes a second feature point and a corresponding second descriptor;
  • when the obtaining module is configured to determine the first sub-feature of the corresponding position of the panoramic image according to the second feature of the perspective image, it is specifically configured to: determine the coordinates of the first feature point of the panoramic image according to the coordinates of the second feature point of the perspective image and the third mapping relationship, and determine the first descriptor corresponding to the first feature point of the panoramic image according to the second descriptor corresponding to the second feature point of the perspective image.
  • the first feature includes a first feature point and a corresponding first descriptor
  • the matching module is specifically configured as:
  • a plurality of sets of feature point pairs are determined according to the first descriptors of the two panoramic images of each set of the image pairs, wherein each set of the feature point pairs includes two correspondingly matched first feature points belonging to the two panoramic images ;
  • a first essential matrix is determined according to the multiple sets of feature point pairs, and the multiple sets of feature point pairs are filtered by using the first essential matrix to obtain matching results corresponding to the image pairs.
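Forming candidate feature point pairs from the first descriptors can be done, for example, by mutual nearest-neighbour matching; the brute-force distance computation and the ratio-test value below are illustrative choices, not requirements of the method.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Mutual nearest-neighbour matching with a ratio test.

    desc_a, desc_b: (Na, D) and (Nb, D) first descriptors of the two panoramas.
    Returns index pairs (i, j) forming candidate feature point pairs.
    """
    dists = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=-1)
    nn_ab = np.argsort(dists, axis=1)       # neighbours of each a-descriptor in b
    nn_ba = np.argmin(dists, axis=0)        # nearest neighbour of each b-descriptor in a
    pairs = []
    for i in range(desc_a.shape[0]):
        j, j2 = nn_ab[i, 0], nn_ab[i, 1]
        if dists[i, j] < ratio * dists[i, j2] and nn_ba[j] == i:
            pairs.append((i, j))
    return pairs
```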
  • after the matching module determines multiple sets of feature point pairs according to the first descriptors of the two panoramic images of each image pair, it is further configured to: obtain the number of feature point pairs of the two panoramic images of each image pair, and filter the image pairs whose number of feature point pairs meets the preset first condition.
  • when the matching module is configured to determine the first essential matrix according to the multiple sets of feature point pairs, it is specifically configured to: determine the angle error of each feature point pair according to the angle errors of the two first feature points in the pair, wherein the angle error of a first feature point is the angle between the epipolar plane and the line connecting the optical center of the unit sphere with the spherical point of the unit sphere corresponding to the first feature point; take the angle errors of the corresponding feature point pairs as residual terms and determine an essential matrix multiple times from a preset number of feature point pairs; and determine the number of interior points corresponding to each essential matrix, and determine the essential matrix with the largest number of interior points as the first essential matrix.
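The angle error and inlier count used to select the first essential matrix can be sketched as follows for bearing vectors on the unit sphere; taking the larger of the two per-view errors and the 1-degree threshold for the second condition are illustrative assumptions.

```python
import numpy as np

def epipolar_angle_errors(E, bearings1, bearings2):
    """Angle (radians) between each bearing and the epipolar plane induced by E.

    bearings1, bearings2: (N, 3) unit-sphere points of matched first feature points.
    The normal of the epipolar plane for x1 in the second view is E @ x1, so the
    angular error is the arcsine of the normalized |x2 . (E x1)|.
    """
    n2 = bearings1 @ E.T            # plane normals seen from view 2 (E x1)
    n1 = bearings2 @ E              # plane normals seen from view 1 (E^T x2)

    def angle_to_plane(x, n):
        s = np.abs(np.sum(x * n, axis=1)) / (
            np.linalg.norm(x, axis=1) * np.linalg.norm(n, axis=1))
        return np.arcsin(np.clip(s, 0.0, 1.0))

    # Take the larger of the two symmetric errors for each feature point pair.
    return np.maximum(angle_to_plane(bearings2, n2), angle_to_plane(bearings1, n1))

def count_inliers(E, bearings1, bearings2, second_condition_rad=np.radians(1.0)):
    return int(np.sum(epipolar_angle_errors(E, bearings1, bearings2) < second_condition_rad))
```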
  • when the matching module is configured to determine the number of interior points corresponding to each essential matrix, it is specifically configured to: determine the angle error of each feature point pair of the image pair according to the essential matrix, determine the feature point pairs whose angle error meets the preset second condition as interior points, and determine the number of interior points corresponding to the essential matrix according to all the interior points.
  • when the matching module is configured to filter the multiple sets of feature point pairs by using the first essential matrix, it is specifically configured to: determine the angle error of each feature point pair of the image pair according to the first essential matrix, and filter the feature point pairs whose angle error meets the preset third condition.
  • the matching module is further configured to: obtain the coordinates of the two first feature points of each feature point pair of the image pair in the panoramic images to which they belong; determine the perspective images related to the feature point pair according to the third mapping relationship corresponding to the panoramic image and the coordinates of the first feature points belonging to the feature point pair on the panoramic image, wherein a perspective image related to a feature point pair is a perspective image in which there is a second feature point corresponding to a first feature point belonging to the feature point pair; and filter the image pairs using the perspective images related to the feature point pairs.
  • when the matching module is configured to filter the image pair using the perspective images related to the feature point pairs, it is specifically configured to: in response to the perspective images related to the feature point pairs corresponding to at least one panoramic image of the image pair being multiple consecutive images whose number is less than a preset filtering threshold, filter the image pair.
  • the building module is specifically configured to: determine a group of image pairs as the initial image pair according to preset initialization conditions, the feature point pairs of each group of image pairs, and the first essential matrix, determine the camera pose of each panoramic image of the initial image pair, and triangulate the first feature point pairs of the initial image pair to form initial three-dimensional points;
  • determine, multiple times, an unregistered image as a registration image according to the matching relationship between the first feature points corresponding to the first three-dimensional points and the first feature points of each unregistered image, until each panoramic image in the panoramic image set is a registered image, wherein an unregistered image is a panoramic image in which none of the first feature points has been triangulated, a registered image is a panoramic image in which at least one first feature point has been triangulated, and the first three-dimensional points include the initial three-dimensional points, or the initial three-dimensional points together with the three-dimensional points formed by triangulating the first feature points of the registered images;
  • each time a registration image is determined, determine the camera pose of the registration image, triangulate the first feature points of the registration image to form corresponding three-dimensional points, and triangulate the third feature points of the registered images to form corresponding three-dimensional points, where a third feature point is a first feature point in a registered image that matches a first feature point of the registration image.
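Triangulating a matched feature point pair from two registered panoramas can be sketched as a closest-point (midpoint) intersection of the two bearing rays; this is one common linear choice rather than necessarily the triangulation used in the embodiments, and the returned angle can serve for the triangulation-angle filtering mentioned elsewhere.

```python
import numpy as np

def triangulate_bearings(R1, t1, b1, R2, t2, b2):
    """Triangulate one matched feature point pair from two panoramas.

    (R, t) map world points into each camera frame; b1, b2 are the unit-sphere
    bearings of the matched first feature points. Returns the midpoint of the
    closest points on the two rays and the triangulation angle in degrees.
    """
    c1, c2 = -R1.T @ t1, -R2.T @ t2          # camera centres in world coordinates
    d1, d2 = R1.T @ b1, R2.T @ b2            # ray directions in world coordinates
    # Solve for the ray parameters minimising the distance between the two rays.
    A = np.stack([d1, -d2], axis=1)
    s1, s2 = np.linalg.lstsq(A, c2 - c1, rcond=None)[0]
    p1, p2 = c1 + s1 * d1, c2 + s2 * d2
    angle = np.degrees(np.arccos(np.clip(np.dot(d1, d2), -1.0, 1.0)))
    return 0.5 * (p1 + p2), angle            # keep the point only if the angle is large enough
```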
  • when the building module is configured to determine a group of image pairs as the initial image pair according to the preset initialization conditions, the feature point pairs of each group of image pairs, and the first essential matrix, it is specifically configured to: select image pairs in descending order of the number of feature point pairs, and each time an image pair is selected, determine whether the image pair satisfies the initialization condition according to its feature point pairs and its first essential matrix, until a selected image pair satisfies the initialization condition and is determined as the initial image pair;
  • when the building module is configured to determine whether the image pair satisfies the initialization condition, it is specifically configured to: determine at least one group of displacement variables according to the first essential matrix of the image pair, triangulate the feature points of the feature point pairs for each group of displacement variables to form the three-dimensional points corresponding to that group, and filter the three-dimensional points according to the reprojection errors and triangulation angles of each group of three-dimensional points, wherein a displacement variable includes a rotation variable and a translation variable; in response to the number of the largest group of three-dimensional points being greater than a preset first number threshold, determine the corresponding displacement variable as the first displacement variable; select, from the essential matrices obtained by multiple calculations, the essential matrices whose number of interior points is greater than or equal to a point-number threshold, determine at least one group of displacement variables according to each of these essential matrices, triangulate the feature points of the feature point pairs for each group of displacement variables to form the corresponding three-dimensional points, filter the three-dimensional points according to their reprojection errors and triangulation angles, and retain, for each essential matrix, the displacement variable corresponding to its largest group of three-dimensional points.
  • the building module is further configured to: each time the feature points of the registration image are triangulated to form corresponding three-dimensional points and the third feature points of the registered images are triangulated to form corresponding three-dimensional points, optimize the camera pose of each registered image and the position of each three-dimensional point by minimizing the reprojection error of each three-dimensional point on each registered image.
  • the matching module is further configured to: determine the panoramic image to be matched corresponding to each panoramic image according to the space corresponding to each panoramic image; or determine the panoramic image to be matched corresponding to each panoramic image according to a preset collocation rule.
  • the specific manner in which each module performs the operation has been described in detail in the embodiment of the method related to the first aspect, and will not be described in detail here.
  • an electronic device includes a memory 801 and a processor 802, where the memory 801 is used to store computer instructions that can run on the processor 802.
  • the processor 802 is configured to construct a point cloud model based on the method described in the first aspect when executing the computer instructions.
  • the memory 801 may use a volatile or non-volatile storage medium.
  • a computer-readable storage medium has a computer program stored thereon; when the program is executed by a processor, the method of the first aspect is implemented.
  • Computer-readable storage media can be volatile or non-volatile storage media.
  • the present disclosure also provides a computer program product, including computer-readable code, when the computer-readable code is run on a device, a processor in the device executes instructions for implementing the method for constructing a point cloud model provided in any of the above embodiments .
  • the computer program product can be specifically implemented by hardware, software or a combination thereof.
  • in an optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK), and the like.
  • first and second are used for descriptive purposes only, and should not be construed as indicating or implying relative importance.
  • the term “plurality” refers to two or more, unless expressly limited otherwise.
  • Embodiments of the present disclosure provide a point cloud model construction method, apparatus, electronic device, and storage medium, wherein the method includes: acquiring a first feature of panoramic images in a panoramic image set; determining at least one group of image pairs in the panoramic image set and corresponding matching results according to the first feature, wherein an image pair includes two panoramic images matched by the first feature, and the matching result indicates the correspondence between the first features of the two panoramic images; and constructing a point cloud model according to the at least one group of image pairs and the corresponding matching results.
  • since panoramic images are used for matching and the point cloud model is further constructed according to the matching results, the number of images in the image set can be reduced, thereby improving the matching efficiency and the modeling efficiency; and because the spatial range corresponding to a panoramic image is large, the matching effect between panoramic images can be improved, thereby improving the accuracy and quality of the constructed point cloud model.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

Embodiments of the present disclosure relate to a point cloud model construction method and apparatus, a device, and a storage medium. The method comprises: obtaining first features of panoramic images in a panoramic image set; determining at least one group of image pairs in the panoramic image set and a corresponding matching result according to the first features, wherein the image pairs comprise two panoramic images subjected to first feature matching, and the matching result indicates the correspondence between the first features of the two panoramic images; and constructing a point cloud model according to the at least one group of image pairs and the corresponding matching result. The panoramic images are adopted for matching, and the point cloud model is further constructed according to the matching result, such that the number of images in the image set can be decreased, and the matching efficiency and the modeling efficiency are improved; and the range of a space corresponding to the panoramic images is relatively large, such that the matching effect between the panoramic images can be improved, and the precision and quality of the constructed point cloud model are improved.

Description

点云模型构建方法、装置、电子设备、存储介质和程序Point cloud model construction method, device, electronic device, storage medium and program
相关申请的交叉引用CROSS-REFERENCE TO RELATED APPLICATIONS
本公开基于申请号为202110240320.4、申请日为2021年03月04日、专利名称为“点云模型构建方法、装置、设备及存储介质”的中国专利申请提出,并要求该中国专利申请的优先权,该中国专利申请的全部内容在此引入本申请作为参考。The present disclosure is based on the Chinese patent application with the application number of 202110240320.4, the application date of March 04, 2021, and the patent name of "point cloud model construction method, device, equipment and storage medium", and claims the priority of the Chinese patent application , the entire content of the Chinese patent application is incorporated herein by reference.
技术领域technical field
本公开涉及三维建模技术领域,涉及但不限于一种点云模型构建方法、装置、电子设备、计算机存储介质和计算机程序。The present disclosure relates to the technical field of three-dimensional modeling, and relates to, but is not limited to, a method, apparatus, electronic device, computer storage medium and computer program for constructing a point cloud model.
背景技术Background technique
随着人工智能技术的发展,空间建模技术原来越丰富,精度越来越高。运动恢复结构技术能够利用多张图像构建三维点云模型,其可以广泛用于物理空间数字化、高精度地图构建和增强现实等领域。因此,如何提高构建点云模型的精度和质量在三维建模技术领域具有重要意义。With the development of artificial intelligence technology, the spatial modeling technology has become richer and more accurate. The motion recovery structure technology can construct a 3D point cloud model from multiple images, which can be widely used in the fields of physical space digitization, high-precision map construction, and augmented reality. Therefore, how to improve the accuracy and quality of point cloud model construction is of great significance in the field of 3D modeling technology.
发明内容SUMMARY OF THE INVENTION
本公开实施例提供一种点云模型构建方法、装置、电子设备、计算机存储介质和计算机程序。Embodiments of the present disclosure provide a method, apparatus, electronic device, computer storage medium, and computer program for constructing a point cloud model.
根据本公开实施例的第一方面,提供一种点云模型构建方法,应用于电子设备中,包括:According to a first aspect of the embodiments of the present disclosure, a method for constructing a point cloud model is provided, which is applied to an electronic device, including:
获取全景图像集中的全景图像的第一特征;obtaining the first feature of the panoramic image in the panoramic image set;
根据所述第一特征确定所述全景图像集中的至少一组图像对及对应的匹配结果,其中,所述图像对包括第一特征匹配的两个全景图像,所述匹配结果指示两个全景图像的第一特征间的对应关系;At least one set of image pairs in the panoramic image set and corresponding matching results are determined according to the first feature, wherein the image pair includes two panoramic images matched by the first feature, and the matching results indicate two panoramic images the correspondence between the first features of the image;
根据所述至少一组图像对及对应的匹配结果构建点云模型。A point cloud model is constructed according to the at least one set of image pairs and the corresponding matching results.
在一个实施例中,所述获取全景图像集中的全景图像的第一特征,包括:In one embodiment, the obtaining the first feature of the panoramic image in the panoramic image set includes:
确定所述全景图像对应的多张透视图像,其中,多张透视图像对应的空间的集合为全景图像对应的空间;determining multiple perspective images corresponding to the panoramic image, wherein the set of spaces corresponding to the multiple perspective images is the space corresponding to the panoramic image;
获取所述多张透视图像中的至少一张透视图像的第二特征;acquiring a second feature of at least one fluoroscopic image in the plurality of fluoroscopic images;
根据所述透视图像的第二特征确定所述全景图像的对应位置的第一子特征,其中,所述透视图像与所述全景图像的对应位置对应相同的空间;Determine the first sub-feature of the corresponding position of the panoramic image according to the second feature of the fluoroscopic image, wherein the fluoroscopic image corresponds to the same space as the corresponding position of the panoramic image;
根据至少一个所述第一子特征确定所述全景图像的第一特征。A first feature of the panoramic image is determined based on at least one of the first sub-features.
在一个实施例中,所述确定所述全景图像对应的多张透视图像,包括:In one embodiment, the determining a plurality of perspective images corresponding to the panoramic image includes:
获取所述全景图像对应的单位球面,并确定全景图像的像素点坐标与单位球面的点坐标间的第一映射关系;acquiring the unit sphere corresponding to the panoramic image, and determining the first mapping relationship between the pixel point coordinates of the panoramic image and the point coordinates of the unit sphere;
根据所述单位球面确定多张透视图像,并确定透视图像的像素点坐标与单位球面的点坐标间的第二映射关系,其中,所述多张透视图像对应的球面点的集合为单位球面;Determine a plurality of perspective images according to the unit sphere, and determine the second mapping relationship between the pixel point coordinates of the perspective image and the point coordinates of the unit sphere, wherein the set of spherical points corresponding to the plurality of perspective images is the unit sphere;
根据所述第一映射关系和所述第二映射关系确定全景图像的像素点坐标与透视图像的像素点坐标间的第三映射关系，并根据全景图像的像素点的像素信息和所述第三映射关系确定透视图像的像素点的像素信息。A third mapping relationship between the pixel point coordinates of the panoramic image and the pixel point coordinates of the perspective image is determined according to the first mapping relationship and the second mapping relationship, and the pixel information of the pixel points of the perspective image is determined according to the pixel information of the pixel points of the panoramic image and the third mapping relationship.
在一个实施例中,所述透视图像的第二特征包括第二特征点和对应的第二描述子;In one embodiment, the second feature of the fluoroscopic image includes a second feature point and a corresponding second descriptor;
所述根据所述透视图像的第二特征确定所述全景图像的对应位置的第一子特征,包括:The determining of the first sub-feature of the corresponding position of the panoramic image according to the second feature of the fluoroscopic image includes:
根据所述透视图像的第二特征点的坐标和所述第三映射关系确定所述全景图像的第一特征点的坐标;Determine the coordinates of the first feature point of the panoramic image according to the coordinates of the second feature point of the fluoroscopic image and the third mapping relationship;
根据所述透视图像的第二特征点对应的第二描述子确定所述全景图像的第一特征点对应的第一描述子。The first descriptor corresponding to the first feature point of the panoramic image is determined according to the second descriptor corresponding to the second feature point of the fluoroscopic image.
在一个实施例中,所述第一特征包括第一特征点和对应的第一描述子;In one embodiment, the first feature includes a first feature point and a corresponding first descriptor;
所述根据所述第一特征确定所述全景图像集中的至少一组图像对及对应的匹配结果,包括:The determining of at least one group of image pairs in the panoramic image set and the corresponding matching results according to the first feature includes:
根据每个全景图像与对应的待匹配的全景图像确定多组图像对;Determine multiple groups of image pairs according to each panoramic image and the corresponding panoramic image to be matched;
根据所述图像对的两个全景图像的第一描述子确定多组特征点对,其中,每组所述特征点对包括两个对应匹配且分属两个全景图像的第一特征点;Determine a plurality of sets of feature point pairs according to the first descriptors of the two panoramic images of the image pair, wherein each group of the feature point pairs includes two correspondingly matched first feature points belonging to the two panoramic images;
根据所述多组特征点对确定第一本质矩阵,并利用所述第一本质矩阵对所述多组特征点对进行过滤,得到所述图像对对应的匹配结果。A first essential matrix is determined according to the multiple sets of feature point pairs, and the multiple sets of feature point pairs are filtered by using the first essential matrix to obtain matching results corresponding to the image pairs.
在一个实施例中,所述根据每组所述图像对的两个全景图像的第一描述子确定多组特征点对后,还包括:In one embodiment, after determining multiple groups of feature point pairs according to the first descriptors of the two panoramic images of each group of the image pairs, the method further includes:
获取所述图像对的两个全景图像的特征点对的数量;obtaining the number of feature point pairs of the two panoramic images of the image pair;
过滤特征点对的数量符合预设的第一条件的图像对。The image pairs whose number of feature point pairs meet the preset first condition are filtered.
在一个实施例中,根据所述多组特征点对确定第一本质矩阵,包括:In one embodiment, the first essential matrix is determined according to the multiple sets of feature point pairs, including:
根据所述特征点对中的两个第一特征点的角度误差确定所述特征点对的角度误差,其中,所述第一特征点的角度误差均为所述第一特征点对应的单位球面的球面点与单位球面的光心的连线和外极平面间的夹角;The angle error of the pair of feature points is determined according to the angle errors of the two first feature points in the pair of feature points, wherein the angle errors of the first feature points are the unit spheres corresponding to the first feature points The angle between the line connecting the spherical point of the unit sphere and the optical center of the unit sphere and the outer polar plane;
以对应的所述特征点对的角度误差为残差项,多次根据预设数目的所述特征点对确定本质矩阵;Taking the angular error of the corresponding pair of feature points as the residual term, determining the essential matrix according to the preset number of pairs of the feature points for many times;
确定每个所述本质矩阵的内点数,并确定内点数最多的本质矩阵为所述第一本质矩阵。The number of interior points of each of the essential matrices is determined, and the essential matrix with the largest number of interior points is determined as the first essential matrix.
在一个实施例中,所述确定每个所述本质矩阵对应的内点数,包括:In one embodiment, the determining the number of interior points corresponding to each of the essential matrices includes:
根据所述本质矩阵确定所述图像对的每组特征点对的角度误差;Determine the angle error of each group of feature point pairs of the image pair according to the essential matrix;
确定所述角度误差符合预设的第二条件的特征点对为内点;It is determined that the feature point pair whose angle error meets the preset second condition is an interior point;
根据所有内点确定所述本质矩阵对应的内点数。The number of interior points corresponding to the essential matrix is determined according to all interior points.
在一个实施例中,所述利用所述第一本质矩阵对多组特征点对进行过滤,包括:In one embodiment, the filtering of multiple sets of feature point pairs by using the first essential matrix includes:
根据所述第一本质矩阵确定所述图像对的每组特征点对的角度误差;Determine the angle error of each group of feature point pairs of the image pair according to the first essential matrix;
过滤所述角度误差符合预设的第三条件的特征点对。The feature point pairs whose angle errors meet the preset third condition are filtered.
在一个实施例中,还包括:In one embodiment, it also includes:
获取所述图像对的特征点对的两个第一特征点在所属全景图像的坐标;Obtaining the coordinates of the two first feature points of the feature point pair of the image pair in the panorama image to which they belong;
根据所述全景图像对应的第三映射关系以及所述全景图像上的属于特征点对的第一特征点的坐标,确定与特征点对相关的透视图像,其中,所述与特征点对相关的透视图像为存在与属于特征点对的第一特征点相对应的第二特征点的透视图像;A perspective image related to the feature point pair is determined according to the third mapping relationship corresponding to the panoramic image and the coordinates of the first feature point belonging to the feature point pair on the panoramic image, wherein the feature point pair-related The fluoroscopic image is a fluoroscopic image in which there is a second feature point corresponding to the first feature point belonging to the feature point pair;
利用所述与特征点对相关的透视图像对所述图像对进行过滤。The image pairs are filtered using the fluoroscopic images associated with the feature point pairs.
在一个实施例中,所述利用所述属于与特征点对相关的透视图像对所述图像对进行过滤,包括:In one embodiment, the filtering of the image pair using the fluoroscopic image that is related to the feature point pair includes:
响应于所述图像对的至少一个全景图像对应的与特征点对相关的透视图像为连续的多张图像,且所述连续的多张图像的数量小于预设的过滤阈值,过滤所述图像对。In response to the fluoroscopic images related to the feature point pair corresponding to at least one panoramic image of the image pair being consecutive multiple images, and the number of the consecutive multiple images is less than a preset filtering threshold, filtering the image pair .
在一个实施例中,所述根据所述至少一组图像对及对应的匹配结果构建点云模型,包括:In one embodiment, the constructing a point cloud model according to the at least one set of image pairs and corresponding matching results includes:
根据预设的初始化条件和每组图像对的特征点对及第一本质矩阵确定一组图像对为初始图像对,并确定所述初始图像对的每个全景图像的相机位姿,以及将所述初始图像对的第一特征点对进行三角化,形成初始三维点;Determine a group of image pairs as initial image pairs according to preset initialization conditions, feature point pairs of each group of image pairs and the first essential matrix, and determine the camera pose of each panoramic image of the initial image pair, and triangulating the first feature point pair of the initial image pair to form an initial three-dimensional point;
多次根据第一三维点对应的第一特征点和每个未注册图像的第一特征点间的匹配关系,确定一个未注册图像为注册图像,直至全景图像集中的每个全景图像均为已注册图像,其中,所述未注册图像为全部第一特征点均未三角化的全景图像,所述已注册图像为存在第一特征点被三角化的全景图像,所述第一三维点包括所述初始三维点,或包括所述初始三维点和所述已注册图像的第一特征点三角化形成的三维点;According to the matching relationship between the first feature point corresponding to the first three-dimensional point and the first feature point of each unregistered image, an unregistered image is determined as a registered image, until each panoramic image in the panoramic image set is a registered image. A registered image, wherein the unregistered image is a panoramic image in which all the first feature points are not triangulated, the registered image is a panoramic image with triangulated first feature points, and the first three-dimensional point includes all the initial three-dimensional point, or a three-dimensional point formed by triangulating the initial three-dimensional point and the first feature point of the registered image;
每次确定注册图像后,确定所述注册图像的相机位姿,并将所述注册图像的第一特征点进行三角化以形成对应的三维点,以及将已注册图像的第三特征点进行三角化以形成对应的三维点,所述第三特征点为所述已注册图像中与所述注册图像的第一特征点匹配的第一特征点。After each registration image is determined, the camera pose of the registration image is determined, the first feature point of the registration image is triangulated to form a corresponding three-dimensional point, and the third feature point of the registered image is triangulated. to form a corresponding three-dimensional point, and the third feature point is a first feature point in the registered image that matches the first feature point of the registered image.
在一个实施例中,所述根据预设的初始化条件和每组图像对的特征点对及第一本质矩阵确定一组图像对为初始图像对,包括:In one embodiment, determining a group of image pairs as initial image pairs according to preset initialization conditions, feature point pairs of each group of image pairs and the first essential matrix, including:
按照特征点对的数量从大到小的顺序依次选择图像对,每次选择图像对后均根据所述特征点对及所述第一本质矩阵,确定所述图像对是否满足所述初始化条件,直至选择的图像对满足所述初始化条件,确定选择的图像对为初始图像对。Select image pairs in descending order of the number of feature point pairs. After each image pair is selected, it is determined whether the image pair satisfies the initialization condition according to the feature point pair and the first essential matrix. Until the selected image pair satisfies the initialization condition, it is determined that the selected image pair is an initial image pair.
在一个实施例中,所述根据所述特征点对及所述第一本质矩阵,确定所述图像对是否满足所述初始化条件,包括:In one embodiment, determining whether the image pair satisfies the initialization condition according to the feature point pair and the first essential matrix includes:
根据所述图像对的第一本质矩阵确定至少一组位移变量,并针对每组位移变量分别三角化特征点对的特征点,以形成各组位移变量对应的三维点,以及根据各组三维点的重投影误差和三角化角度过滤所述三维点,其中,所述位移变量包括旋转变量和平移变量;Determine at least one group of displacement variables according to the first essential matrix of the image pair, and triangulate the feature points of the feature point pair for each group of displacement variables to form three-dimensional points corresponding to each group of displacement variables, and according to each group of three-dimensional points The reprojection error and triangulation angle filter the three-dimensional point, wherein the displacement variable includes a rotation variable and a translation variable;
响应于数量最多的一组三维点的数量大于预设的第一数量阈值,确定对应的位移变量为第一位移变量;In response to the number of the largest group of three-dimensional points being greater than the preset first number threshold, determining the corresponding displacement variable as the first displacement variable;
从多次计算得到的本质矩阵中选择内点数大于或等于点数阈值的本质矩阵,分别根据每个本质矩阵确定至少一组位移变量,并针对每组位移变量分别三角化特征点对的特征点,以形成各组位移变量对应的三维点,以及根据各组三维点的重投影误差和三角化角度过滤所述三维点,保留每个本质矩阵的数量最多的一组三维点对应的位移变量;Select an essential matrix with the number of inner points greater than or equal to the threshold of the number of points from the essential matrices obtained by multiple calculations, determine at least one set of displacement variables according to each essential matrix, and triangulate the feature points of the feature point pair for each set of displacement variables respectively, To form three-dimensional points corresponding to each group of displacement variables, and filter the three-dimensional points according to the reprojection error and triangulation angle of each group of three-dimensional points, and retain the displacement variables corresponding to a group of three-dimensional points with the largest number of each essential matrix;
在每个本质矩阵保留的位移变量与所述第一位移变量间的差异满足预设范围的情况下,确定所述图像对满足所述初始化条件。In the case that the difference between the displacement variable retained by each essential matrix and the first displacement variable satisfies a preset range, it is determined that the image pair satisfies the initialization condition.
在一个实施例中,还包括:In one embodiment, it also includes:
通过最小化初始的三维点在初始图像对的两个全景图像上的重投影误差,优化每个全景图像的相机位姿以及初始的三维点的位置;和/或,optimize the camera pose of each panorama image and the position of the initial 3D point by minimizing the reprojection error of the initial 3D point on the two panorama images of the initial image pair; and/or,
每次确定注册图像的相机位姿后,通过最小化三维点在所述注册图像上的重投影误差优化所述注册图像的相机位姿;和/或,After each determination of the camera pose of the registration image, optimize the camera pose of the registration image by minimizing the reprojection error of 3D points on the registration image; and/or,
每次将所述注册图像的特征点进行三角化以形成对应的三维点,以及将已注册图像的第三特征点进行三角化以形成对应的三维点后,通过最小化每个三维点在每个已注册图像上的重投影误差,优化每个已注册图像的相机位姿以及每个三维点的位置。After each time the feature points of the registered image are triangulated to form corresponding three-dimensional points, and the third feature points of the registered images are triangulated to form corresponding three-dimensional points, by minimizing each three-dimensional point in each The reprojection error on each registered image, optimizes the camera pose for each registered image and the position of each 3D point.
在一个实施例中,还包括:In one embodiment, it also includes:
根据每个全景图像对应的空间确定对应的待匹配的全景图像;或,Determine the corresponding panoramic image to be matched according to the space corresponding to each panoramic image; or,
根据预设的搭配规则确定每个全景图像对应的待匹配的全景图像。The panorama image to be matched corresponding to each panorama image is determined according to a preset collocation rule.
根据本公开实施例的第二方面,提供一种点云模型构建装置,包括:According to a second aspect of the embodiments of the present disclosure, there is provided an apparatus for constructing a point cloud model, including:
获取模块,配置为获取全景图像集中的全景图像的第一特征;an acquisition module, configured to acquire the first feature of the panoramic image in the panoramic image set;
匹配模块,配置为根据所述第一特征确定所述全景图像集中的至少一组图像对及对应的匹配结果,其中,所述图像对包括第一特征匹配的两个全景图像,所述匹配结果指示两个全景图像的第一特征间的对应关系;a matching module, configured to determine at least one set of image pairs in the panoramic image set and corresponding matching results according to the first feature, wherein the image pairs include two panoramic images matched by the first feature, and the matching the result indicates the correspondence between the first features of the two panoramic images;
构建模块,配置为根据所述至少一组图像对及对应的匹配结果构建点云模型。The building module is configured to build a point cloud model according to the at least one set of image pairs and corresponding matching results.
在一个实施例中,所述获取模块具体配置为:In one embodiment, the obtaining module is specifically configured as:
确定所述全景图像对应的多张透视图像,其中,多张透视图像对应的空间的集合为全景图像对应的空间;determining multiple perspective images corresponding to the panoramic image, wherein the set of spaces corresponding to the multiple perspective images is the space corresponding to the panoramic image;
获取所述多张透视图像中的至少一张透视图像的第二特征;acquiring a second feature of at least one fluoroscopic image in the plurality of fluoroscopic images;
根据所述透视图像的第二特征确定所述全景图像的对应位置的第一子特征,其中,所述透视图像与所述全景图像的对应位置对应相同的空间;Determine the first sub-feature of the corresponding position of the panoramic image according to the second feature of the fluoroscopic image, wherein the fluoroscopic image corresponds to the same space as the corresponding position of the panoramic image;
根据至少一个所述第一子特征确定所述全景图像的第一特征。A first feature of the panoramic image is determined based on at least one of the first sub-features.
在一个实施例中,所述获取模块配置为确定所述全景图像对应的多张透视图像时,具体配置为:In one embodiment, when the acquisition module is configured to determine multiple perspective images corresponding to the panoramic image, the specific configuration is:
获取所述全景图像对应的单位球面,并确定全景图像的像素点坐标与单位球面的点坐标间的第一映射关系;acquiring the unit sphere corresponding to the panoramic image, and determining the first mapping relationship between the pixel point coordinates of the panoramic image and the point coordinates of the unit sphere;
根据所述单位球面确定多张透视图像,并确定透视图像的像素点坐标与单位球面的点坐标间的第二映射关系,其中,所述多张透视图像对应的球面点的集合为单位球面;Determine a plurality of perspective images according to the unit sphere, and determine the second mapping relationship between the pixel point coordinates of the perspective image and the point coordinates of the unit sphere, wherein the set of spherical points corresponding to the plurality of perspective images is the unit sphere;
根据所述第一映射关系和所述第二映射关系确定全景图像的像素点坐标与透视图像的像素点坐标间的第三映射关系，并根据全景图像的像素点的像素信息和所述第三映射关系确定透视图像的像素点的像素信息。A third mapping relationship between the pixel point coordinates of the panoramic image and the pixel point coordinates of the perspective image is determined according to the first mapping relationship and the second mapping relationship, and the pixel information of the pixel points of the perspective image is determined according to the pixel information of the pixel points of the panoramic image and the third mapping relationship.
在一个实施例中,所述透视图像的第二特征包括第二特征点和对应的第二描述子;In one embodiment, the second feature of the fluoroscopic image includes a second feature point and a corresponding second descriptor;
所述获取模块配置为根据所述透视图像的第二特征确定所述全景图像的对应位置的第一子特征时,具体配置为:When the acquisition module is configured to determine the first sub-feature of the corresponding position of the panoramic image according to the second feature of the fluoroscopic image, the specific configuration is as follows:
根据所述透视图像的第二特征点的坐标和所述第三映射关系确定所述全景图像的第一特征点的坐标;Determine the coordinates of the first feature point of the panoramic image according to the coordinates of the second feature point of the fluoroscopic image and the third mapping relationship;
根据所述透视图像的第二特征点对应的第二描述子确定所述全景图像的第一特征点对应的第一描述子。The first descriptor corresponding to the first feature point of the panoramic image is determined according to the second descriptor corresponding to the second feature point of the fluoroscopic image.
在一个实施例中,所述第一特征包括第一特征点和对应的第一描述子;In one embodiment, the first feature includes a first feature point and a corresponding first descriptor;
所述匹配模块具体配置为:The matching module is specifically configured as:
根据每个全景图像与对应的待匹配的全景图像确定多组图像对;Determine multiple groups of image pairs according to each panoramic image and the corresponding panoramic image to be matched;
根据每组所述图像对的两个全景图像的第一描述子确定多组特征点对,其中,每组所述特征点对包括两个对应匹配且分属两个全景图像的第一特征点;A plurality of sets of feature point pairs are determined according to the first descriptors of the two panoramic images of each set of the image pairs, wherein each set of the feature point pairs includes two correspondingly matched first feature points belonging to the two panoramic images ;
根据所述多组特征点对确定第一本质矩阵,并利用所述第一本质矩阵对所述多组特征点对进行过滤,得到所述图像对对应的匹配结果。A first essential matrix is determined according to the multiple sets of feature point pairs, and the multiple sets of feature point pairs are filtered by using the first essential matrix to obtain matching results corresponding to the image pairs.
在一个实施例中,所述匹配模块配置为根据每组所述图像对的两个全景图像的第一描述子确定多组特征点对后,还配置为:In one embodiment, after the matching module is configured to determine multiple sets of feature point pairs according to the first descriptors of the two panoramic images of each set of the image pairs, it is further configured to:
获取每组所述图像对的两个全景图像的特征点对的数量;Obtain the number of feature point pairs of the two panoramic images of each group of the image pairs;
过滤特征点对的数量符合预设的第一条件的图像对。The image pairs whose number of feature point pairs meet the preset first condition are filtered.
在一个实施例中,所述匹配模块配置为根据所述多组特征点对确定第一本质矩阵时,具体配置为:In one embodiment, when the matching module is configured to determine the first essential matrix according to the multiple sets of feature point pairs, the specific configuration is:
根据所述特征点对中的两个第一特征点的角度误差确定所述特征点对的角度误差,其中,所述第一特征点的角度误差为所述第一特征点对应的单位球面的球面点与单位球面的光心的连线和外极平面间的夹角;The angle error of the feature point pair is determined according to the angle error of the two first feature points in the feature point pair, wherein the angle error of the first feature point is the angular error of the unit sphere corresponding to the first feature point The angle between the line connecting the spherical point and the optical center of the unit sphere and the outer polar plane;
以对应的所述特征点对的角度误差为残差项,多次根据预设数目的所述特征点对确定本质矩阵;Taking the angular error of the corresponding pair of feature points as the residual term, determining the essential matrix according to the preset number of pairs of the feature points for many times;
确定每个所述本质矩阵对应的内点数,并确定内点数最多的本质矩阵为所述第一本质矩阵。The number of interior points corresponding to each of the essential matrices is determined, and the essential matrix with the largest number of interior points is determined as the first essential matrix.
在一个实施例中,所述匹配模块配置为确定每个所述本质矩阵对应的内点数时,具体配置为:In one embodiment, when the matching module is configured to determine the number of interior points corresponding to each of the essential matrices, the specific configuration is:
根据所述本质矩阵确定所述图像对的每组特征点对的角度误差;Determine the angle error of each group of feature point pairs of the image pair according to the essential matrix;
确定所述角度误差符合预设的第二条件的特征点对为内点;It is determined that the feature point pair whose angle error meets the preset second condition is an interior point;
根据所有内点确定所述本质矩阵对应的内点数。The number of interior points corresponding to the essential matrix is determined according to all interior points.
在一个实施例中,所述匹配模块配置为利用所述第一本质矩阵对多组特征点对进行过滤时,具体配置为:In one embodiment, when the matching module is configured to use the first essential matrix to filter multiple sets of feature point pairs, the specific configuration is:
根据所述第一本质矩阵确定所述图像对的每组特征点对的角度误差;Determine the angle error of each group of feature point pairs of the image pair according to the first essential matrix;
过滤所述角度误差符合预设的第三条件的特征点对。The feature point pairs whose angle errors meet the preset third condition are filtered.
在一个实施例中,所述匹配模块还配置为:In one embodiment, the matching module is further configured to:
获取所述图像对的特征点对的两个第一特征点在所属全景图像的坐标;Obtaining the coordinates of the two first feature points of the feature point pair of the image pair in the panorama image to which they belong;
根据所述全景图像对应的第三映射关系以及所述全景图像上的属于特征点对的第一特征点的坐标,确定与特征点对相关的透视图像,其中,所述与特征点对相关的透视图像为存在与属于特征点对的第一特征点相对应的第二特征点的透视图像;A perspective image related to the feature point pair is determined according to the third mapping relationship corresponding to the panoramic image and the coordinates of the first feature point belonging to the feature point pair on the panoramic image, wherein the feature point pair-related The fluoroscopic image is a fluoroscopic image in which there is a second feature point corresponding to the first feature point belonging to the feature point pair;
利用所述与特征点对相关的透视图像对所述图像对进行过滤。The image pairs are filtered using the fluoroscopic images associated with the feature point pairs.
在一个实施例中,所述匹配模块配置为利用所述与特征点对相关的透视图像对所述图像对进行过滤时,具体配置为:In one embodiment, when the matching module is configured to filter the image pair by using the perspective image related to the feature point pair, the specific configuration is:
响应于所述图像对的至少一个全景图像对应的与特征点对相关的透视图像为连续的多张图像,且所述连续的多张图像的数量小于预设的过滤阈值,过滤所述图像对。In response to the fluoroscopic images related to the feature point pair corresponding to at least one panoramic image of the image pair being consecutive multiple images, and the number of the consecutive multiple images is less than a preset filtering threshold, filtering the image pair .
在一个实施例中,所述构建模块具体配置为:In one embodiment, the building module is specifically configured as:
根据预设的初始化条件和每组图像对的特征点对及第一本质矩阵确定一组图像对为初始图像对,并确定所述初始图像对的每个全景图像的相机位姿,以及将所述初始图像对的第一特征点对进行三角化,形成初始三维点;Determine a group of image pairs as initial image pairs according to preset initialization conditions, feature point pairs of each group of image pairs and the first essential matrix, and determine the camera pose of each panoramic image of the initial image pair, and triangulating the first feature point pair of the initial image pair to form an initial three-dimensional point;
多次根据第一三维点对应的第一特征点和每个未注册图像的第一特征点间的匹配关系,确定一个未注册图像为注册图像,直至全景图像集中的每个全景图像均为已注册图像,其中,所述未注册图像为全部第一特征点均未三角化的全景图像,所述已注册图像为存在第一特征点被三角化的全景图像,所述第一三维点包括所述初始三维点,或包括所述初始三维点和所述已注册图像的第一特征点三角化形成的三维点;According to the matching relationship between the first feature point corresponding to the first three-dimensional point and the first feature point of each unregistered image, an unregistered image is determined as a registered image, until each panoramic image in the panoramic image set is a registered image. A registered image, wherein the unregistered image is a panoramic image in which all the first feature points are not triangulated, the registered image is a panoramic image with triangulated first feature points, and the first three-dimensional point includes all the initial three-dimensional point, or a three-dimensional point formed by triangulating the initial three-dimensional point and the first feature point of the registered image;
每次确定注册图像后,确定所述注册图像的相机位姿,并将所述注册图像的第一特征点进行三角化以形成对应的三维点,以及将已注册图像的第三特征点进行三角化以形成对应的三维点,所述第三特征点为所述已注册图像中与所述注册图像的第一特征点匹配的第一特征点。After each registration image is determined, the camera pose of the registration image is determined, the first feature point of the registration image is triangulated to form a corresponding three-dimensional point, and the third feature point of the registered image is triangulated. to form a corresponding three-dimensional point, and the third feature point is a first feature point in the registered image that matches the first feature point of the registered image.
在一个实施例中,所述构建模块配置为根据预设的初始化条件和每组图像对的特征点对及第一本质矩阵确定一组图像对为初始图像对时,具体配置为:In one embodiment, when the building module is configured to determine a group of image pairs as initial image pairs according to preset initialization conditions, feature point pairs of each group of image pairs, and the first essential matrix, the specific configuration is:
按照特征点对的数量从大到小的顺序依次选择图像对,每次选择图像对后均根据所述特征点对及所述第一本质矩阵,确定所述图像对是否满足所述初始化条件,直至选择的图像对满足所述初始化条件,确定选择的图像对为初始图像对。Select image pairs in descending order of the number of feature point pairs. After each image pair is selected, it is determined whether the image pair satisfies the initialization condition according to the feature point pair and the first essential matrix. Until the selected image pair satisfies the initialization condition, it is determined that the selected image pair is an initial image pair.
在一个实施例中,所述构建模块配置为根据所述特征点对及所述第一本质矩阵,确定所述图像对是否满足所述初始化条件时,具体配置为:In one embodiment, when the building module is configured to determine whether the image pair satisfies the initialization condition according to the feature point pair and the first essential matrix, the specific configuration is:
根据所述图像对的第一本质矩阵确定至少一组位移变量,并针对每组位移变量分别三角化特征点对的特征点,以形成各组位移变量对应的三维点,以及根据各组三维点的重投影误差和三角化角度过滤所述三维点,其中,所述位移变量包括旋转变量和平移变量;Determine at least one group of displacement variables according to the first essential matrix of the image pair, and triangulate the feature points of the feature point pair for each group of displacement variables to form three-dimensional points corresponding to each group of displacement variables, and according to each group of three-dimensional points The reprojection error and triangulation angle filter the three-dimensional point, wherein the displacement variable includes a rotation variable and a translation variable;
响应于数量最多的一组三维点的数量大于预设的第一数量阈值,确定对应的位移变量为第一位移变量;In response to the number of the largest group of three-dimensional points being greater than the preset first number threshold, determining the corresponding displacement variable as the first displacement variable;
从多次计算得到的本质矩阵中选择内点数大于或等于点数阈值的本质矩阵,分别根据每个本质矩阵确定至少一组位移变量,并针对每组位移变量分别三角化特征点对的特征点,以形成各组位移变量对应的三维点,以及根据各组三维点的重投影误差和三角化角度过滤所述三维点,保留每个本质矩阵的数量最多的一组三维点对应的位移变量;Select an essential matrix with the number of inner points greater than or equal to the threshold of the number of points from the essential matrices obtained by multiple calculations, determine at least one set of displacement variables according to each essential matrix, and triangulate the feature points of the feature point pair for each set of displacement variables respectively, To form three-dimensional points corresponding to each group of displacement variables, and filter the three-dimensional points according to the reprojection error and triangulation angle of each group of three-dimensional points, and retain the displacement variables corresponding to a group of three-dimensional points with the largest number of each essential matrix;
在每个本质矩阵保留的位移变量与所述第一位移变量间的差异满足预设范围的情况下,确定所述图像对满足所述初始化条件。In the case that the difference between the displacement variable retained by each essential matrix and the first displacement variable satisfies a preset range, it is determined that the image pair satisfies the initialization condition.
在一个实施例中,所述构建模块还配置为:In one embodiment, the building block is further configured to:
通过最小化初始的三维点在初始图像对的两个全景图像上的重投影误差,优化每个全景图像的相机位姿以及初始的三维点的位置;和/或,optimize the camera pose of each panorama image and the position of the initial 3D point by minimizing the reprojection error of the initial 3D point on the two panorama images of the initial image pair; and/or,
每次确定注册图像的相机位姿后,通过最小化三维点在所述注册图像上的重投影误差优化所述注册图像的相机位姿;和/或,After each determination of the camera pose of the registration image, optimize the camera pose of the registration image by minimizing the reprojection error of 3D points on the registration image; and/or,
每次将所述注册图像的特征点进行三角化以形成对应的三维点,以及将已注册图像的第三特征点进行三角化以形成对应的三维点后,通过最小化每个三维点在每个已注册图像上的重投影误差,优化每个已注册图像的相机位姿以及每个三维点的位置。After each time the feature points of the registered image are triangulated to form corresponding three-dimensional points, and the third feature points of the registered images are triangulated to form corresponding three-dimensional points, by minimizing each three-dimensional point in each The reprojection error on each registered image, optimizes the camera pose for each registered image and the position of each 3D point.
在一个实施例中,所述匹配模块还配置为:In one embodiment, the matching module is further configured to:
根据每个全景图像对应的空间确定对应的待匹配的全景图像;或,Determine the corresponding panoramic image to be matched according to the space corresponding to each panoramic image; or,
根据预设的搭配规则确定每个全景图像对应的待匹配的全景图像。The panorama image to be matched corresponding to each panorama image is determined according to a preset collocation rule.
根据本公开实施例的第三方面,提供一种电子设备,所述设备包括存储器、处理器,所述存储器用于存储可在处理器上运行的计算机指令,所述处理器用于在执行所述计算机指令时基于第一方面所述的方法构建点云模型。According to a third aspect of the embodiments of the present disclosure, there is provided an electronic device, the device includes a memory and a processor, the memory is used for storing computer instructions executable on the processor, the processor is used for executing the A point cloud model is constructed based on the method described in the first aspect when instructed by the computer.
根据本公开实施例的第四方面,提供一种计算机可读存储介质,其上存储有计算机程序,所述程序被处理器执行时实现第一方面所述的方法。According to a fourth aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, implements the method of the first aspect.
根据本公开实施例的第五方面，提供一种计算机程序，包括计算机可读代码，当所述计算机可读代码在电子设备中运行时，所述电子设备中的处理器执行用于实现上述任意一种方法。According to a fifth aspect of the embodiments of the present disclosure, there is provided a computer program, comprising computer-readable code; when the computer-readable code runs in an electronic device, a processor in the electronic device executes instructions for implementing any one of the above methods.
根据上述实施例可知,通过全景图像组成全景图像集,进一步根据第一特征匹配全景图像集中的全景图像,并将完成第一特征匹配的两个全景图像作为一组图像对,从而确定至少一组图像对及对应的匹配结果,最后根据确定的至少一组图像对及对应的匹配结果构建点云模型。由于采用全景图像进行匹配并进一步根据匹配的结果构建点云模型,因此能够减少图像集的图像数量,从而提高匹配效率和建模效率,且全景图像对应的空间的范围较大,因此能够提高全景图像间的匹配效果,进而提高构建点云模型的精度和质量。According to the above embodiment, a panoramic image set is formed by panoramic images, the panoramic images in the panoramic image set are further matched according to the first feature, and the two panoramic images that have completed the first feature matching are regarded as a set of image pairs, so as to determine at least one group of image pairs and corresponding matching results, and finally construct a point cloud model according to the determined at least one group of image pairs and corresponding matching results. Since the panoramic image is used for matching and the point cloud model is further constructed according to the matching result, the number of images in the image set can be reduced, thereby improving the matching efficiency and modeling efficiency, and the spatial range corresponding to the panoramic image is large, so the panoramic image can be improved. The matching effect between images, thereby improving the accuracy and quality of the point cloud model.
应当理解的是,以上的一般描述和后文的细节描述仅是示例性和解释性的,并不能限制本公开。It is to be understood that the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the present disclosure.
附图说明Description of drawings
此处的附图被并入说明书中并构成本说明书的一部分,示出了符合本公开的实施例,并与说明书一起用于解释本公开的原理。The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description serve to explain the principles of the disclosure.
图1为本公开实施例示出的点云模型构建方法的流程图;1 is a flowchart of a method for constructing a point cloud model according to an embodiment of the present disclosure;
图2为本公开实施例示出的全景图像的示意图;2 is a schematic diagram of a panoramic image according to an embodiment of the present disclosure;
图3为本公开实施例示出的获取全景图像的第一特征的方式的示意图;3 is a schematic diagram of a manner of acquiring a first feature of a panoramic image according to an embodiment of the present disclosure;
图4为本公开实施例示出的确定图像对及对应的匹配结果的方式的示意图;4 is a schematic diagram of a manner for determining an image pair and a corresponding matching result according to an embodiment of the present disclosure;
图5为本公开实施例示出的角度误差的示意图;5 is a schematic diagram of an angle error shown in an embodiment of the present disclosure;
图6为本公开实施例示出的比较初始化条件和特征点对及第一本质矩阵的方式的示意图;6 is a schematic diagram of a method for comparing initialization conditions, feature point pairs and a first essential matrix according to an embodiment of the present disclosure;
图7为本公开实施例示出的点云模型构建装置的结构图;7 is a structural diagram of an apparatus for constructing a point cloud model according to an embodiment of the present disclosure;
图8为本公开实施例示出的电子设备的结构示意图。FIG. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
具体实施方式Detailed ways
这里将详细地对示例性实施例进行说明,其示例表示在附图中。下面的描述涉及附图时,除非另有表示,不同附图中的相同数字表示相同或相似的要素。以下示例性实施例中所描述的实施方式并不代表与本公开相一致的所有实施方式。相反,它们仅是与如所附权利要求书中所详述的、本公开的一些方面相一致的装置和方法的例子。Exemplary embodiments will be described in detail herein, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numerals in different drawings refer to the same or similar elements unless otherwise indicated. The implementations described in the illustrative examples below are not intended to represent all implementations consistent with this disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as recited in the appended claims.
在本公开使用的术语是仅仅出于描述特定实施例的目的,而非旨在限制本公开。在本公开和所附权利要求书中所使用的单数形式的“一种”、“所述”和“该”也旨在包括多数形式,除非上下文清楚地表示其他含义。还应当理解,本文中使用的术语“和/或”是指并包含一个或多个相关联的列出项目的任何或所有可能组合。The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to limit the present disclosure. As used in this disclosure and the appended claims, the singular forms "a," "the," and "the" are intended to include the plural forms as well, unless the context clearly dictates otherwise. It will also be understood that the term "and/or" as used herein refers to and includes any and all possible combinations of one or more of the associated listed items.
应当理解,尽管在本公开可能采用术语第一、第二、第三等来描述各种信息,但这些信息不应限于这些术语。这些术语仅用来将同一类型的信息彼此区分开。例如,在不脱离本公开范围的情况下,第一信息也可以被称为第二信息,类似地,第二信息也可以被称为第一信息。取决于语境,如在此所使用的词语“如果”可以被解释成为“在……时”或“当……时”或“响应于确定”。It should be understood that although the terms first, second, third, etc. may be used in this disclosure to describe various pieces of information, such information should not be limited by these terms. These terms are only used to distinguish the same type of information from each other. For example, the first information may also be referred to as the second information, and similarly, the second information may also be referred to as the first information, without departing from the scope of the present disclosure. Depending on the context, the word "if" as used herein can be interpreted as "at the time of" or "when" or "in response to determining."
随着人工智能技术的发展,空间建模技术原来越丰富,精度越来越高。运动恢复结构技术能够利用多张图像构建三维点云模型,其可以广泛用于物理空间数字化、高精度地图构建和增强现实等领域。相关技术中,多张图像间的匹配效果不佳,进而导致依次构建的点云模型精度低,质量差。With the development of artificial intelligence technology, the spatial modeling technology has become richer and more accurate. The motion recovery structure technology can construct a 3D point cloud model from multiple images, which can be widely used in the fields of physical space digitization, high-precision map construction, and augmented reality. In the related art, the matching effect between multiple images is not good, which leads to low precision and poor quality of the point cloud models constructed sequentially.
基于此,第一方面,本公开至少一个实施例提供了一种点云模型构建方法,请参照图1,其示出了该方法的流程,包括步骤S101至步骤S103。Based on this, in the first aspect, at least one embodiment of the present disclosure provides a method for constructing a point cloud model. Please refer to FIG. 1 , which shows a flow of the method, including steps S101 to S103 .
其中,所述点云模型为与空间对应的三维模型,空间可以指现实世界。现实世界中的各个物体在模型中以对应的点云表示,点云即为三维点构成的点的集合。Wherein, the point cloud model is a three-dimensional model corresponding to the space, and the space may refer to the real world. Each object in the real world is represented by a corresponding point cloud in the model, and a point cloud is a collection of points composed of three-dimensional points.
另外,该方法可以由终端设备或服务器等电子设备执行,终端设备可以为用户设备(User Equipment,UE)、移动设备、用户终端、终端、蜂窝电话、无绳电话、个人数字处理(Personal Digital Assistant,PDA)手持设备、计算设备、车载设备、可穿戴设备等,该方法可以通过处理器调用存储器中存储的计算机可读指令的方式来实现。或者,可以通过服务器执行该方法,服务器可以为本地服务器、云端服务器等。In addition, the method can be executed by electronic equipment such as terminal equipment or server, and the terminal equipment can be user equipment (User Equipment, UE), mobile equipment, user terminal, terminal, cellular phone, cordless phone, Personal Digital Assistant (Personal Digital Assistant, PDA) handheld device, computing device, vehicle-mounted device, wearable device, etc., the method can be implemented by the processor calling the computer-readable instructions stored in the memory. Alternatively, the method may be performed by a server, and the server may be a local server, a cloud server, or the like.
在步骤S101中,获取全景图像集中的全景图像的第一特征。In step S101, the first feature of the panoramic image in the panoramic image set is acquired.
其中,全景图像集内包括多个全景图像,全景图像可以是多种角度的全景图像,例如360°全景图像;全景图像可以是由球面相机所获取的图像,或者多个鱼眼相机获取的拼接的图像按照等距矩形投影方式得到的图像,请参照附图2,其示例性的示出了一个全景图像。每个全景图像对应建模空间的一个局部子空间,而且全景图像对应的局部子空间的尺寸大于相同参数下的普通图像对应的局部子空间的尺寸,所述全景图像集的全部全景图像对应的局部子空间可以组成整个建模空间;不同的全景图像对应的局部子空间可以存在重合区域。The panoramic image set includes multiple panoramic images, and the panoramic images may be panoramic images from various angles, such as 360° panoramic images; the panoramic images may be images obtained by spherical cameras, or stitched images obtained by multiple fisheye cameras The image obtained by the equidistant rectangular projection method, please refer to FIG. 2, which exemplarily shows a panoramic image. Each panorama image corresponds to a local subspace of the modeling space, and the size of the local subspace corresponding to the panorama image is larger than the size of the local subspace corresponding to the ordinary image under the same parameters, and all the panorama images of the panorama image set correspond to Local subspaces can form the entire modeling space; local subspaces corresponding to different panoramic images can have overlapping regions.
本步骤中,可以采用预先训练的神经网络获取全景图像的第一特征,也可以采用其他方式获取全景图像的第一特征,本公开对获取方式无意作出具体限制。可以获取全景图像集中每个全景图像的第一特征。In this step, a pre-trained neural network may be used to obtain the first feature of the panoramic image, or other methods may be used to obtain the first feature of the panoramic image, which is not intended to be specifically limited in the present disclosure. The first feature of each panorama image in the panorama image set may be acquired.
在步骤S102中,根据所述第一特征确定所述全景图像集中的至少一组图像对及对应的匹配结果,其中,所述图像对包括第一特征匹配的两个全景图像,所述匹配结果指示两个全景图像的第一特征间的对应关系。In step S102, at least one set of image pairs in the panoramic image set and corresponding matching results are determined according to the first feature, wherein the image pair includes two panoramic images matched by the first feature, and the matching The result indicates the correspondence between the first features of the two panoramic images.
Here, first-feature matching means that the two panoramic images contain first feature points corresponding to the same space, that is, at least one first feature point of one panoramic image and at least one first feature point of the other panoramic image correspond to the same space in the real world.
In this step, the panoramic images in the panoramic image set may be matched by brute force, that is, by traversal; for example, each panoramic image may be taken in turn and matched against every other panoramic image. When two panoramic images are matched, the matching is performed using their first features, that is, the matching result of the two panoramic images is determined from the matching result of their first features. If the first features of the two panoramic images are matched, i.e., the two panoramic images have completed feature matching, the two images can be determined as a set of image pairs, and the correspondence between the first features of the two panoramic images is determined as the matching result.
其中,每个全景图像可以和另一个全景图像组成图像对,也可以分别和另多个全景图像组成多组图像对,也就是说,每个全景图像组成图像对后,并未被锁定,还可以继续与其他全景图像组成新的图像对。Among them, each panoramic image can form an image pair with another panoramic image, and can also form multiple sets of image pairs with other panoramic images respectively. That is to say, after each panoramic image forms an image pair, it is not locked. You can also continue to compose new image pairs with other panoramic images.
在步骤S103中,根据所述至少一组图像对及对应的匹配结果构建点云模型。In step S103, a point cloud model is constructed according to the at least one set of image pairs and the corresponding matching results.
In this step, the point cloud model is constructed using the matching results obtained in step S102. The modeling process is a structure-from-motion process, including camera registration and point cloud reconstruction. Camera registration recovers the camera motion parameters of each panoramic image in the panoramic image set (for example, represented by the camera pose), and point cloud reconstruction recovers the point cloud describing the three-dimensional structure of the corresponding local subspace (i.e., the local subspace mentioned in step S102).
在一个示例中,可以采用增量式重建的方式构建点云模型。In one example, the point cloud model can be constructed by means of incremental reconstruction.
In the embodiments of the present disclosure, panoramic images form a panoramic image set, the panoramic images in the set are matched according to the first features, and two panoramic images that have completed feature matching are taken as a set of image pairs, thereby determining at least one set of image pairs and the corresponding matching results; finally, the point cloud model is constructed according to the determined at least one set of image pairs and the corresponding matching results. Since panoramic images are used for matching and the point cloud model is further constructed from the matching results, the number of images in the image set can be reduced, which improves matching efficiency and modeling efficiency; moreover, the spatial range corresponding to a panoramic image is large, so the matching effect between images can be improved, which in turn improves the accuracy and quality of the constructed point cloud model.
In some embodiments of the present disclosure, the first feature of a panoramic image in the panoramic image set may be acquired in the following manner. Please refer to FIG. 3, which shows the flow of this acquisition manner, including steps S301 to S304.
在步骤S301中,确定所述全景图像对应的多张透视图像,其中,多张透视图像对应的空间的集合为全景图像对应的空间。In step S301, multiple perspective images corresponding to the panoramic image are determined, wherein a set of spaces corresponding to the multiple perspective images is a space corresponding to the panoramic image.
In one example, the plurality of perspective images may be determined in the following manner: first, the unit sphere corresponding to the panoramic image is acquired, and a first mapping relationship between the pixel coordinates of the panoramic image and the point coordinates of the unit sphere is determined; next, a plurality of perspective images are determined according to the unit sphere, and a second mapping relationship between the pixel coordinates of the perspective images and the point coordinates of the unit sphere is determined, wherein the set of spherical points corresponding to the plurality of perspective images is the unit sphere; finally, a third mapping relationship between the pixel coordinates of the panoramic image and the pixel coordinates of the perspective images is determined according to the first mapping relationship and the second mapping relationship, and the pixel information of the pixels of the perspective images is determined according to the pixel information of the pixels of the panoramic image and the third mapping relationship.
Here, acquiring the unit sphere corresponding to the panoramic image may be back-projecting the panoramic image onto the unit sphere. Determining a plurality of perspective images according to the unit sphere may be capturing the unit sphere with a virtual perspective camera whose optical center is the center of the unit sphere, and determining the captured images as the perspective images. When capturing the unit sphere, the virtual camera may be rotated at equal angular intervals, capturing one perspective image per rotation until the entire sphere has been covered, so that multiple perspective images covering all viewing angles are obtained; for example, if one perspective image is captured every 60° of rotation, six perspective images are needed to cover the whole sphere. The field of view (FOV) and focal length of the virtual camera may be set to the parameters of commonly used image acquisition devices (such as mobile phones or digital cameras), so that the perspective images are close to images acquired by such devices.
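For illustration only, the short sketch below generates a set of equally spaced virtual-camera orientations following the 60° example above. The choice of rotating purely about the vertical axis is an assumption made here for simplicity and is not specified by the text; depending on the FOV, additional upward- and downward-looking views may be needed to cover the poles of the sphere.

```python
import numpy as np

def yaw_rotation(deg):
    """Rotation matrix for a yaw of `deg` degrees about the vertical (y) axis."""
    a = np.radians(deg)
    return np.array([[ np.cos(a), 0.0, np.sin(a)],
                     [ 0.0,       1.0, 0.0      ],
                     [-np.sin(a), 0.0, np.cos(a)]])

# Six virtual-camera orientations spaced 60 degrees apart, as in the example above;
# the first one coincides with the coordinate system of the unit sphere.
virtual_camera_rotations = [yaw_rotation(60.0 * k) for k in range(6)]
```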
The first mapping relationship between the pixel coordinates of the panoramic image and the point coordinates of the unit sphere can be expressed by formula (1) (the formula appears as image PCTCN2021105574-appb-000001 in the original and is not reproduced here), where (s_x, s_y, s_z) are the point coordinates on the unit sphere, (u, v) are the pixel coordinates of the panoramic image, w is the width of the panoramic image, and h is the height of the panoramic image.
The second mapping relationship between the pixel coordinates of a perspective image acquired by a virtual camera whose coordinate system coincides with that of the unit sphere and the point coordinates of the unit sphere can be expressed by formula (2) (the formula appears as image PCTCN2021105574-appb-000002 in the original and is not reproduced here), where (s_x, s_y, s_z) are the point coordinates on the unit sphere, (x, y) are the pixel coordinates of the perspective image, w' is the width of the perspective image, h' is the height of the perspective image, and f is the focal length of the virtual camera.
The second mapping relationship between the pixel coordinates of a perspective image acquired by a virtual camera rotated by a certain angle relative to the coordinate system of the unit sphere and the point coordinates of the unit sphere can be expressed by formula (3) (the formula appears as image PCTCN2021105574-appb-000005 in the original and is not reproduced here), where (s'_x, s'_y, s'_z)^T = R(s_x, s_y, s_z)^T are the coordinates of the unit-sphere point (s_x, s_y, s_z) after transformation by the inverse rotation matrix R of the virtual camera, (x, y) are the coordinates on the perspective image, w' is the width of the perspective image, h' is the height of the perspective image, and f is the focal length of the virtual camera.
In the embodiments of the present disclosure, among the multiple shooting angles of the virtual camera, one angle coincides with the coordinate system of the unit sphere; at this angle, the third mapping relationship between the pixel coordinates of the panoramic image and the pixel coordinates of the perspective image can be obtained by combining the above formula (1) and formula (2). At the other angles, the third mapping relationship between the pixel coordinates of the panoramic image and the pixel coordinates of the perspective image can be obtained by combining the above formula (1) and formula (3).
Here, determining the pixel information of the pixels of the perspective image according to the pixel information of the pixels of the panoramic image and the third mapping relationship may be directly taking the pixel information of a pixel of the panoramic image as the pixel information of the corresponding pixel of the perspective image, or obtaining the pixel information of the perspective image by sampling and/or bilinear interpolation of the pixels of the panoramic image. The pixel information may be the luminance value of a pixel, or the values of the individual color channels of a pixel (such as the red, green, and blue channels).
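For illustration, the following minimal sketch renders one perspective view from an equirectangular panorama by going through the unit sphere, in the spirit of formulas (1) to (3). The exact formulas of the present disclosure are given only as images, so a common equirectangular convention is assumed here; the function name, the default FOV, the output size and the nearest-neighbour sampling (instead of bilinear interpolation) are all illustrative choices.

```python
import numpy as np

def panorama_to_perspective(pano, fov_deg=90.0, out_size=640, R=np.eye(3)):
    """Render one perspective view of an equirectangular panorama.

    pano: H x W x 3 array (equirectangular panorama).
    R: rotation of the virtual perspective camera relative to the sphere frame.
    Returns an out_size x out_size perspective image (nearest-neighbour sampling).
    """
    h, w = pano.shape[:2]
    f = 0.5 * out_size / np.tan(np.radians(fov_deg) / 2.0)  # focal length of the virtual camera

    # Pixel grid of the perspective image, centred on the principal point.
    xs, ys = np.meshgrid(np.arange(out_size) - out_size / 2.0,
                         np.arange(out_size) - out_size / 2.0)
    rays = np.stack([xs, ys, np.full_like(xs, f)], axis=-1)   # camera rays
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)      # points on the unit sphere
    rays = rays @ R.T                                         # rotate into the sphere frame

    # Unit-sphere point -> panorama pixel (one common equirectangular convention).
    lon = np.arctan2(rays[..., 0], rays[..., 2])              # longitude in (-pi, pi]
    lat = np.arcsin(np.clip(rays[..., 1], -1.0, 1.0))         # latitude in [-pi/2, pi/2]
    u = (lon / (2 * np.pi) + 0.5) * (w - 1)
    v = (lat / np.pi + 0.5) * (h - 1)

    return pano[v.round().astype(int) % h, u.round().astype(int) % w]
```

In practice the nearest-neighbour lookup in the last line could be replaced by bilinear interpolation of the four surrounding panorama pixels, as mentioned above.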
In step S302, the second feature of at least one perspective image among the plurality of perspective images is acquired.
In this step, a pre-trained neural network may be used to extract the second feature of a perspective image, or other manners may be used, which the present disclosure does not intend to specifically limit. The second feature of each perspective image corresponding to the panoramic image may be acquired.
In one example, the second feature is second feature points and the corresponding second descriptors; that is, all the second feature points in a perspective image and the corresponding second descriptors constitute the second feature of the perspective image.
In step S303, the first sub-feature of the corresponding position of the panoramic image is determined according to the second feature of the perspective image, wherein the perspective image and the corresponding position of the panoramic image correspond to the same space.

Here, the perspective image and the corresponding position of the panoramic image correspond to the same space, that is, the perspective image and the corresponding position of the panoramic image correspond to the same set of spherical points on the unit sphere.
其中,第一子特征可以包括全景图像的对应位置内全部的第一特征点及对应的第一描述子。The first sub-feature may include all the first feature points and corresponding first descriptors in the corresponding positions of the panoramic image.
In an example corresponding to the example of step S302, the first sub-feature of the corresponding position of the panoramic image may be determined in the following manner: first, the coordinates of the first feature points of the panoramic image are determined according to the coordinates of the second feature points of the perspective image and the third mapping relationship; next, the first descriptors corresponding to the first feature points of the panoramic image are determined according to the second descriptors corresponding to the second feature points of the perspective image.
Here, the point on the panoramic image corresponding to a second feature point is a first feature point; that is, the first feature point corresponds to the second feature point, or in other words, the spherical point on the unit sphere corresponding to the first feature point coincides with the spherical point on the unit sphere corresponding to the second feature point. The second descriptor corresponding to the second feature point may be directly used as the first descriptor of the corresponding first feature point.
在步骤S304中,根据至少一个所述第一子特征确定所述全景图像的第一特征。In step S304, a first feature of the panoramic image is determined according to at least one of the first sub-features.
其中,全景图像的第一特征包括全景图像内全部的第一特征点及对应的第一描述子。Wherein, the first feature of the panoramic image includes all the first feature points in the panoramic image and the corresponding first descriptors.
In the embodiments of the present disclosure, the mapping relationship between the panoramic image and the perspective images is determined through the mapping relationship between the panoramic image and the unit sphere and the mapping relationship between the unit sphere and the perspective images; that is, the panoramic image is split into a plurality of perspective images with the unit sphere as the intermediary, and the extraction of the first feature of the panoramic image is achieved by extracting the second features of the perspective images and mapping the second feature points back to the first feature points of the panoramic image.
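As a rough sketch of this back-mapping step, the code below detects second feature points on one perspective view and maps them back to panorama coordinates, reusing the descriptors unchanged. SIFT via OpenCV is only an illustrative stand-in for whichever detector or pre-trained network is actually used, the equirectangular convention is the same assumption as in the rendering sketch above, and a square perspective view is assumed.

```python
import numpy as np
import cv2  # OpenCV, used here only as an illustrative keypoint detector

def perspective_keypoints_to_panorama(persp_gray, R, f, pano_w, pano_h):
    """Detect second feature points on one perspective view and map them back
    to first feature points (panorama pixel coordinates)."""
    sift = cv2.SIFT_create()
    kps, descs = sift.detectAndCompute(persp_gray, None)
    if not kps:
        return np.empty((0, 2)), np.empty((0, 128))

    s = persp_gray.shape[0]  # assuming a square perspective view of size s x s
    pts = np.array([kp.pt for kp in kps])                  # (x, y) in the perspective image
    rays = np.column_stack([pts[:, 0] - s / 2.0,
                            pts[:, 1] - s / 2.0,
                            np.full(len(pts), float(f))])
    rays /= np.linalg.norm(rays, axis=1, keepdims=True)    # points on the unit sphere
    rays = rays @ R.T                                      # express the rays in the sphere frame

    lon = np.arctan2(rays[:, 0], rays[:, 2])
    lat = np.arcsin(np.clip(rays[:, 1], -1.0, 1.0))
    u = (lon / (2 * np.pi) + 0.5) * (pano_w - 1)           # first feature point coordinates
    v = (lat / np.pi + 0.5) * (pano_h - 1)
    return np.column_stack([u, v]), descs                  # descriptors are reused unchanged
```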
In some embodiments of the present disclosure, the first feature includes first feature points and the corresponding first descriptors. Correspondingly, the at least one set of image pairs in the panoramic image set and the corresponding matching results may be determined according to the first features in the following manner. Please refer to FIG. 4, which shows the flow of this determination manner, including steps S401 to S403.
在步骤S401中,根据每个全景图像与对应的待匹配的全景图像确定多组图像对。In step S401, multiple groups of image pairs are determined according to each panoramic image and the corresponding panoramic image to be matched.
In this step, two panoramic images form a set of image pairs. The panoramic images to be matched corresponding to each panoramic image may be determined according to the space corresponding to the panoramic image, or according to a preset collocation rule. That is to say, when determining the panoramic images to be matched for a given panoramic image, panoramic images whose corresponding spaces overlap with the corresponding space of the given panoramic image may be taken as the panoramic images to be matched; alternatively, the panoramic images to be matched may be determined according to a preset collocation rule, which may be set based on the above principle, for example by numbering the panoramic images according to the corresponding spatial order and taking a preset number (for example, 10) of panoramic images following each panoramic image as its panoramic images to be matched. It is also possible to take all panoramic images other than a given panoramic image as its panoramic images to be matched.
In step S402, a plurality of sets of feature point pairs are determined according to the first descriptors of the two panoramic images of the image pair, wherein each set of feature point pairs includes two matched first feature points belonging respectively to the two panoramic images.
In this step, for each first descriptor in the first panoramic image of the image pair, the first descriptor with the smallest Euclidean distance in the second panoramic image may be found first; then, conversely, for each first descriptor in the second panoramic image, the first descriptor with the smallest Euclidean distance in the first panoramic image is found. If a first descriptor in the first panoramic image and a first descriptor in the second panoramic image are each the nearest first descriptor (in Euclidean distance) of the other, the two first descriptors are considered to match, and the two first feature points corresponding to the two first descriptors are determined to match; that is, the two first feature points complete feature matching and form a feature point pair.
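A minimal sketch of this mutual nearest-neighbour criterion is shown below; it is a brute-force illustration (the descriptor arrays and the function name are assumed for the example), whereas a practical implementation would typically use an approximate nearest-neighbour index for large descriptor sets.

```python
import numpy as np

def mutual_nearest_matches(desc1, desc2):
    """Mutual nearest-neighbour matching of first descriptors by Euclidean distance.

    desc1: N1 x D descriptors of the first panoramic image.
    desc2: N2 x D descriptors of the second panoramic image.
    Returns an M x 2 array of index pairs (i, j), one row per feature point pair.
    """
    # Pairwise Euclidean distances between the two descriptor sets.
    dists = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=-1)

    nn12 = dists.argmin(axis=1)   # for each descriptor in image 1, its nearest in image 2
    nn21 = dists.argmin(axis=0)   # for each descriptor in image 2, its nearest in image 1

    # Keep only mutual nearest neighbours.
    idx1 = np.arange(len(desc1))
    mutual = nn21[nn12[idx1]] == idx1
    return np.column_stack([idx1[mutual], nn12[idx1][mutual]])
```

The number of rows of the returned array is the feature point pair count that the first condition described below can then be applied to.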
In this step, after all the feature point pairs between the two panoramic images of an image pair have been determined in the above manner, the number of feature point pairs may also be counted, a first condition may be preset, and the first condition may be used to filter the plurality of sets of image pairs determined in step S401, that is, some image pairs may be removed using the first condition. In one example, the first condition may be that the number is smaller than a second number threshold, that is, image pairs whose number of feature point pairs is smaller than the second number threshold are filtered out (removed); for example, the second number threshold may be set to 5 or 10, and the embodiments of the present disclosure do not intend to specifically limit its value. By filtering out some image pairs, subsequent operations on image pairs with a low degree of matching can be reduced, which lowers the complexity of the operations and improves processing efficiency.
其中,每组特征点对表示了两个第一特征点的对应关系,多组特征点对构成了图像对的匹配结果。Among them, each group of feature point pairs represents the corresponding relationship between the two first feature points, and multiple groups of feature point pairs constitute the matching result of the image pairs.
在步骤S403中,根据所述多组特征点对确定第一本质矩阵,并利用所述第一本质矩阵对所述多组特征点对进行过滤,得到所述图像对对应的匹配结果。In step S403, a first essential matrix is determined according to the multiple sets of feature point pairs, and the multiple sets of feature point pairs are filtered by using the first essential matrix to obtain a matching result corresponding to the image pair.
In this step, the first essential matrix may be determined in the following manner: first, the angle error of a feature point pair is determined according to the angle errors of the two first feature points in the feature point pair, wherein the angle error of a first feature point is the angle between the line connecting the spherical point on the unit sphere corresponding to the first feature point and the optical center of the unit sphere, and the epipolar plane; next, with the angle errors of the corresponding feature point pairs as the residual terms, essential matrices are computed multiple times, each time from a preset number of feature point pairs; finally, the number of inliers corresponding to each essential matrix is determined, and the essential matrix with the largest number of inliers is determined as the first essential matrix.
The point coordinates on the unit sphere corresponding to the two first feature points of each feature point pair, together with the essential matrix, satisfy the relationship shown in the following formula (4):

(s_x', s_y', s_z') E (s_x, s_y, s_z)^T = 0        (4)

In formula (4), E is the essential matrix, S' = (s_x', s_y', s_z') are the point coordinates on the unit sphere corresponding to the first feature point in the first panoramic image, and S = (s_x, s_y, s_z) are the point coordinates on the unit sphere corresponding to the first feature point in the second panoramic image.
Here, the larger of the angle errors of the two first feature points is determined as the angle error of the feature point pair, that is, the angle error of the feature point pair is determined according to formula (5) (the formula appears as image PCTCN2021105574-appb-000008 in the original and is not reproduced here; it takes the maximum of the two per-point angle errors). In formula (5), one term is the angle error of the first feature point of the second panoramic image, namely the angle between the line connecting the spherical point S' corresponding to the first feature point on the second panoramic image and the optical center O_2, and the corresponding epipolar plane, where the corresponding epipolar plane is the plane formed by the line connecting the spherical point S corresponding to the first feature point on the first panoramic image and the optical center O_1, and the line connecting the two optical centers O_1 and O_2. Correspondingly, please refer to FIG. 5: the other term is the angle error of the first feature point of the first panoramic image, namely the angle between the line connecting the spherical point corresponding to the first feature point on the first panoramic image and its optical center, and the corresponding epipolar plane, where the corresponding epipolar plane is the plane formed by the line connecting the spherical point corresponding to the first feature point on the second panoramic image and its optical center, and the line connecting the two optical centers. In the embodiments of the present disclosure, using the angle error and the spherical error can better fit the camera model of panoramic images.
Here, with the angle error of the feature point pairs as the residual term, a plurality of essential matrices can be computed by RANSAC (Random Sample Consensus) together with the 5-point algorithm: one essential matrix can be computed from every 5 sets of feature point pairs, so a plurality of essential matrices can be obtained in this way.
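The minimal 5-point solver itself is not reproduced here. The sketch below, a rough illustration only, assumes such a solver is available as a callable (solve_minimal is a hypothetical placeholder) and shows one common way of scoring candidate essential matrices with an angular error between each bearing vector and its epipolar plane, in the spirit of formula (5); the iteration count and the 0.4-degree inlier threshold are illustrative values.

```python
import numpy as np

def angular_errors(E, S1, S2):
    """Angle (radians) between each bearing vector and its epipolar plane,
    taken as the larger of the two per-point errors (cf. formula (5)).
    S1, S2: N x 3 unit-sphere points of the matched first feature points."""
    n2 = S1 @ E.T          # normals of the epipolar planes seen from the second camera
    n1 = S2 @ E            # normals of the epipolar planes seen from the first camera
    err2 = np.arcsin(np.clip(np.abs(np.sum(S2 * n2, axis=1)) /
                             (np.linalg.norm(n2, axis=1) + 1e-12), 0.0, 1.0))
    err1 = np.arcsin(np.clip(np.abs(np.sum(S1 * n1, axis=1)) /
                             (np.linalg.norm(n1, axis=1) + 1e-12), 0.0, 1.0))
    return np.maximum(err1, err2)

def ransac_essential(S1, S2, solve_minimal, iters=1000, inlier_thresh=np.radians(0.4)):
    """RANSAC over 5-point minimal samples; keeps the hypothesis with the most inliers.
    solve_minimal: callable returning candidate essential matrices from 5 correspondences."""
    rng = np.random.default_rng(0)
    best_E, best_inliers = None, np.zeros(len(S1), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(S1), size=5, replace=False)
        for E in solve_minimal(S1[idx], S2[idx]):          # a minimal sample may yield several solutions
            inliers = angular_errors(E, S1, S2) < inlier_thresh
            if inliers.sum() > best_inliers.sum():
                best_E, best_inliers = E, inliers
    return best_E, best_inliers
```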
Here, the number of inliers corresponding to an essential matrix may be determined in the following manner: first, the angle error of each feature point pair of the image pair is computed according to the essential matrix; next, the feature point pairs whose angle errors meet a preset second condition are determined to be inliers; finally, the number of inliers corresponding to the essential matrix is determined from all the inliers. That is, the angle error of each feature point pair is determined using the above formula (5) and the essential matrix, and the preset second condition is used to screen the inliers. In one example, the second condition may be that the angle error is smaller than a first angle threshold, that is, the feature point pairs whose angle errors are smaller than the first angle threshold are determined to be inliers.
本步骤中,可以采用下述方式利用所述第一本质矩阵对所述多组特征点对进行过滤:首先,根据所述第一本质矩阵确定所述图像对的每组特征点对的角度误差;接下来,过滤角度误差符合预设的第三条件的特征点对。In this step, the first essential matrix may be used to filter the multiple sets of feature point pairs: first, the angle error of each set of feature point pairs of the image pair is determined according to the first essential matrix ; Next, filter the feature point pairs whose angle error meets the preset third condition.
Here, the angle error of each feature point pair may be computed using the above formula (5) and the first essential matrix, and the preset third condition is used to screen the feature point pairs. In one example, the third condition may be that the angle error is greater than or equal to a second angle threshold (for example, greater than or equal to 0.4 degrees); that is, the feature point pairs whose angle errors are greater than or equal to the second angle threshold are filtered out (removed), and the feature point pairs whose angle errors are smaller than the second angle threshold are retained.
In the embodiments of the present disclosure, the feature point pairs of an image pair are determined by feature matching, the first essential matrix is further determined according to the feature point pairs, and the feature point pairs are then filtered using the first essential matrix; the angle error is used both when determining the first essential matrix and when filtering the feature point pairs. Therefore, compared with the other essential matrices, the first essential matrix is consistent with the most feature point pairs, and the filtering step removes the feature point pairs that are inconsistent with the first essential matrix. This not only improves the accuracy of the first essential matrix, but also, on the premise of removing erroneous feature point pairs, retains as many feature point pairs as possible, thereby improving the matching precision and accuracy of the two panoramic images of the image pair. Moreover, the 360-degree viewing-angle coverage of panoramic images increases the number of feature matches between images and reduces the probability of camera registration failure in weakly textured areas.
In some embodiments of the present disclosure, after the first essential matrix of each image pair has been computed and the feature point pairs have been filtered, the distribution of the first feature points can also be used to judge whether the matching of the two panoramic images is caused by repeated textures, and the plurality of image pairs can be further filtered. Specifically, the following manner may be used: first, the coordinates, in the panoramic image to which they belong, of the two first feature points of each feature point pair of the image pair are acquired; next, the perspective images related to the feature point pairs are determined according to the third mapping relationship corresponding to the panoramic image and the coordinates of the first feature points belonging to feature point pairs on the panoramic image, wherein a perspective image related to the feature point pairs is a perspective image in which there is a second feature point corresponding to a first feature point belonging to a feature point pair; finally, the image pair is filtered using the perspective images related to the feature point pairs.
A third number threshold may also be preset, and the number of second feature points corresponding to feature point pairs included in a perspective image may be determined; only when this number is greater than or equal to the third number threshold is the perspective image determined to be a perspective image related to the feature point pairs. For example, the third number threshold may be set to 5, which avoids erroneous statistics caused by a small amount of noisy matches.
Here, in response to the perspective images related to the feature point pairs corresponding to at least one panoramic image of the image pair being a plurality of consecutive images, and the number of these consecutive images being smaller than a preset filtering threshold, the image pair is filtered out. That is to say, when the second feature points corresponding to the first feature points related to the feature point pairs are all concentrated in a subset of the perspective images, and the number of these perspective images is smaller than the preset filtering threshold, the matching of the two panoramic images is considered to be an erroneous match caused by repeated textures, and the image pair is therefore filtered out, i.e., removed. The filtering threshold may be determined according to the total number of perspective images and a preset first ratio; for example, if the total number of perspective images is 6 and the preset first ratio is 0.5, the image pair is filtered out when the number of perspective images related to the feature point pairs is smaller than 3.
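A small sketch of this heuristic for one panoramic image of an image pair follows; the input representation (a view index per matched point), the notion of a consecutive run, and the default thresholds are assumptions made for the illustration and are not prescribed by the text.

```python
import numpy as np

def repeated_texture_suspect(view_ids, n_views, first_ratio=0.5, min_pts_per_view=5):
    """Heuristic repeated-texture check for one panoramic image of an image pair.

    view_ids: for every matched first feature point, the index of the perspective
              view that contains the corresponding second feature point.
    Returns True when the matches concentrate in few consecutive views, which the
    method treats as a likely false match caused by repeated texture."""
    counts = np.bincount(np.asarray(view_ids), minlength=n_views)
    related = np.flatnonzero(counts >= min_pts_per_view)   # views "related" to the feature point pairs
    if related.size == 0:
        return True
    consecutive = np.all(np.diff(related) == 1)            # the related views form one consecutive run
    return bool(consecutive and related.size < first_ratio * n_views)
```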
In the embodiments of the present disclosure, the first feature points that have completed feature matching all correspond to second feature points of the perspective images, so by determining the distribution of the second feature points it can be judged whether the matching of the two panoramic images is caused by repeated textures; removing noisy matches further improves the accuracy of this judgment and excludes erroneously matched image pairs. Repeated textures are mostly local, so the global matching situation between panoramic images is used to exclude erroneous matches as far as possible and to avoid the registration errors that would otherwise result.
In some embodiments of the present disclosure, the point cloud model may be constructed according to the at least one set of image pairs and the corresponding matching results in the following manner. First, one set of image pairs is determined as the initial image pair according to preset initialization conditions, the feature point pairs of each set of image pairs and the first essential matrix; the camera pose of each panoramic image of the initial image pair is determined, and the first feature point pairs of the initial image pair are triangulated to form initial three-dimensional points. Next, according to the matching relationship between the first feature points corresponding to the first three-dimensional points and the first feature points of each unregistered image, one unregistered image at a time is determined as a registration image, until every panoramic image in the panoramic image set is a registered image, wherein an unregistered image is a panoramic image in which none of the first feature points have been triangulated, a registered image is a panoramic image in which some first feature points have been triangulated, and the first three-dimensional points include the initial three-dimensional points, or include the initial three-dimensional points and the three-dimensional points formed by triangulating the first feature points of the registered images. Moreover, each time a registration image is determined, the camera pose of the registration image is determined, the first feature points of the registration image are triangulated to form corresponding three-dimensional points, and the third feature points of the registered images are triangulated to form corresponding three-dimensional points, where a third feature point is a first feature point of a registered image that matches a first feature point of the registration image.
Here, when determining the initial image pair, image pairs may be selected one by one in descending order of the number of feature point pairs; each time an image pair is selected, whether it satisfies the initialization conditions is determined according to its feature point pairs and its first essential matrix, until a selected image pair satisfies the initialization conditions, and that selected image pair is determined as the initial image pair.
Moreover, whether an image pair satisfies the initialization conditions may be determined according to the feature point pairs and the first essential matrix in the following manner. Please refer to FIG. 6, which shows the flow of this manner, including steps S601 to S604.
In step S601, at least one group (for example, four groups) of displacement variables is determined according to the first essential matrix of the image pair; for each group of displacement variables, the feature points of the feature point pairs are triangulated to form the three-dimensional points corresponding to that group of displacement variables, and the three-dimensional points are filtered according to their reprojection errors and triangulation angles, wherein a displacement variable includes a rotation variable and a translation variable.
Here, the rotation variable can be represented by a 3x3 matrix R, and the translation variable by a 3-dimensional vector T. When computing the reprojection error of each group of three-dimensional points X in the i-th panoramic image (i.e., the first or second panoramic image), the following formula can be used (it appears as image PCTCN2021105574-appb-000011 in the original and is not reproduced here). In this formula, S = (s_x, s_y, s_z) are the point coordinates on the unit sphere corresponding to the first feature point, X is the coordinate vector of the three-dimensional point, f is the focal length of the virtual perspective camera, and P = [R_i | T_i] is the camera matrix of the i-th panoramic image; at initialization, R_1 = I is the identity matrix, T_1 = 0 is the zero vector, R_2 = R and T_2 = T, where R and T are the rotation variable and translation variable corresponding to this group of three-dimensional points.
Here, when filtering the three-dimensional points according to the reprojection errors and triangulation angles of each group of three-dimensional points, a third angle threshold and a fourth angle threshold may be set, and only the three-dimensional points whose reprojection errors in both panoramic images are smaller than the third angle threshold and whose triangulation angles are larger than the fourth angle threshold are retained.
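For illustration, the sketch below triangulates one bearing-vector pair and applies the two angular checks just described. Midpoint triangulation is used only as a simple stand-in (the disclosure does not specify the triangulation method), angular reprojection error is used in place of the exact error of the formula above, and the threshold values are illustrative.

```python
import numpy as np

def triangulate_midpoint(s1, s2, R, T):
    """Midpoint triangulation of one bearing-vector pair (an illustrative stand-in).

    s1, s2: unit bearing vectors of the matched feature points in camera 1 and 2.
    R, T: pose of camera 2, i.e. x_cam2 = R @ x_cam1 + T (camera 1 is the reference)."""
    d1 = s1                                    # ray direction in camera-1 frame
    d2 = R.T @ s2                              # ray of camera 2 expressed in camera-1 frame
    o2 = -R.T @ T                              # optical center of camera 2 in camera-1 frame
    # Solve for the two depths that bring the rays closest together.
    A = np.column_stack([d1, -d2])
    t1, t2 = np.linalg.lstsq(A, o2, rcond=None)[0]
    return 0.5 * ((t1 * d1) + (o2 + t2 * d2))  # midpoint of the closest segment

def keep_point(X, s1, s2, R, T, max_reproj_deg=2.0, min_tri_deg=2.0):
    """Keep a 3D point only if both angular reprojection errors are small and the
    triangulation angle between the two viewing rays is large enough."""
    v1 = X / np.linalg.norm(X)                          # direction of X seen from camera 1
    X2 = R @ X + T
    v2 = X2 / np.linalg.norm(X2)                        # direction of X seen from camera 2
    err1 = np.degrees(np.arccos(np.clip(v1 @ s1, -1, 1)))
    err2 = np.degrees(np.arccos(np.clip(v2 @ s2, -1, 1)))
    tri = np.degrees(np.arccos(np.clip(v1 @ (R.T @ v2), -1, 1)))   # angle between the two rays
    return err1 < max_reproj_deg and err2 < max_reproj_deg and tri > min_tri_deg
```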
在步骤S602中,响应于数量最多的一组三维点的数量大于预设的第一数量阈值,确定对应的位移变量为第一位移变量。In step S602, in response to the number of the largest group of three-dimensional points being greater than the preset first number threshold, determine the corresponding displacement variable as the first displacement variable.
In step S603, from the essential matrices obtained by the multiple computations, those whose number of inliers is greater than or equal to a point-number threshold are selected; at least one group (for example, four groups) of displacement variables is determined from each selected essential matrix, and for each group of displacement variables the feature points of the feature point pairs are triangulated to form the corresponding three-dimensional points; the three-dimensional points are filtered according to their reprojection errors and triangulation angles, and for each essential matrix the displacement variable corresponding to the largest group of three-dimensional points is retained.
Here, after the plurality of essential matrices have been computed and the first essential matrix has been determined in step S403, the remaining essential matrices may be kept for use in this step; alternatively, a plurality of essential matrices may be computed again in this step in the same manner as in step S403.
其中,可以预设第二比例,然后利用第一本质矩阵的内点数与上述第二比例确定点数阈值;可以将第一本质矩阵的内点数与第二比例的乘积作为点数阈值,例如,可以将第二比例预设为0.6,但本公开 无意对第二比例的具体数值做出限制。Wherein, a second ratio can be preset, and then the threshold of points can be determined by using the number of inner points of the first essential matrix and the above-mentioned second ratio; the product of the number of inner points of the first essential matrix and the second ratio can be used as the threshold of points. The second ratio is preset to be 0.6, but the present disclosure does not intend to limit the specific value of the second ratio.
本步骤对选择出的本质矩阵执行的操作与步骤S601至步骤S602的操作相同,为每个本质矩阵保留一个位移变量。The operations performed on the selected essential matrix in this step are the same as the operations in steps S601 to S602, and a displacement variable is reserved for each essential matrix.
在步骤S604中,在每个本质矩阵保留的位移变量与所述第一位移变量间的差异满足预设范围的情况下,确定所述图像对满足所述初始化条件。In step S604, if the difference between the displacement variable retained by each essential matrix and the first displacement variable satisfies a preset range, it is determined that the image pair satisfies the initialization condition.
Here, the difference between the displacement variable retained by an essential matrix and the first displacement variable can be expressed by the direction angle between the two displacement variables, where the direction angle is obtained from the product of the rotation matrices of the two displacement variables; the preset range can be expressed by a preset fifth angle threshold, that is, a direction angle smaller than the fifth angle threshold satisfies the preset range. Therefore, when the direction angle between the displacement variable retained by every essential matrix and the first displacement variable is smaller than the fifth angle threshold, it is determined that the image pair satisfies the initialization conditions.
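A brief sketch of this consistency check is given below; it measures the difference between two rotation estimates by the angle of their relative rotation, which is one common interpretation of the direction angle described above, and the fifth angle threshold value is purely illustrative.

```python
import numpy as np

def rotation_angle_between(R_a, R_b):
    """Angle (degrees) of the relative rotation R_a @ R_b.T, a common way to
    measure how far apart two rotation estimates are."""
    R_rel = R_a @ R_b.T
    cos_angle = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle))

def satisfies_initialization(first_R, kept_Rs, fifth_angle_threshold_deg=5.0):
    """The image pair passes only if every retained rotation agrees with the first
    displacement variable within the (illustrative) fifth angle threshold."""
    return all(rotation_angle_between(R, first_R) < fifth_angle_threshold_deg
               for R in kept_Rs)
```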
在图像对不满足初始化条件的情况下,采用步骤S601至步骤S604,继续判断其他图像对是否满足初始化条件。In the case that the image pair does not satisfy the initialization condition, steps S601 to S604 are used to continue to judge whether other image pairs satisfy the initialization condition.
上述判断初始化条件的过程中,对每个本质矩阵的多个解的判定,可以让初始化条件的判断结果更加稳定。In the above process of judging the initialization conditions, the judgment of multiple solutions of each essential matrix can make the judgment result of the initialization conditions more stable.
另外,还可以通过最小化初始的三维点在初始图像对的两个全景图像上的重投影误差,优化每个全景图像的相机位姿以及初始的三维点的位置;In addition, the camera pose of each panoramic image and the position of the initial 3D point can also be optimized by minimizing the reprojection error of the initial 3D point on the two panoramic images of the initial image pair;
另外,每次确定注册图像的相机位姿后,还可以通过最小化三维点在所述注册图像上的重投影误差优化所述注册图像的相机位姿;In addition, after each time the camera pose of the registration image is determined, the camera pose of the registration image can also be optimized by minimizing the reprojection error of the three-dimensional point on the registration image;
In addition, each time after the feature points of the registration image have been triangulated to form the corresponding three-dimensional points and the third feature points of the registered images have been triangulated to form the corresponding three-dimensional points, the camera pose of every registered image and the position of every three-dimensional point can also be optimized by minimizing the reprojection error of every three-dimensional point on every registered image.
The above optimization can be performed with the loss function shown in formula (7) (the formula appears as image PCTCN2021105574-appb-000012 in the original and is not reproduced here). In formula (7), S = (s_x, s_y, s_z) are the point coordinates on the unit sphere corresponding to the first feature point, X_i is the coordinate vector of the i-th three-dimensional point, f is the focal length of the virtual perspective camera, and P = [R_i | T_i] is the camera matrix of the i-th panoramic image; at initialization, R_1 = I is the identity matrix, T_1 = 0 is the zero vector, R_2 = R and T_2 = T, where R and T are the rotation variable and translation variable corresponding to the three-dimensional point; m is the number of panoramic images, and n is the number of three-dimensional points.
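Since formula (7) itself is only available as an image, the sketch below illustrates one common way of building such a bundle-adjustment objective for spherical cameras: an angular residual between each observed unit-sphere bearing and the direction of the predicted point. The data layout and the use of a generic least-squares optimizer (for example scipy.optimize.least_squares) are assumptions of this illustration, not necessarily the exact loss of the present disclosure.

```python
import numpy as np

def angular_reprojection_residuals(poses, points3d, observations):
    """Stack of angular reprojection residuals over all observations, suitable for
    feeding to a generic least-squares optimizer.

    poses: list of (R_i, T_i) per panoramic image.
    points3d: n x 3 array of reconstructed points X_j.
    observations: list of (i, j, s) with s the observed unit bearing vector of
                  point j in image i."""
    residuals = []
    for i, j, s in observations:
        R_i, T_i = poses[i]
        x_cam = R_i @ points3d[j] + T_i                 # point in camera-i coordinates
        direction = x_cam / np.linalg.norm(x_cam)       # predicted bearing on the unit sphere
        residuals.append(np.arccos(np.clip(direction @ s, -1.0, 1.0)))
    return np.asarray(residuals)
```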
上述确定图像的相机位姿,可以采用RANSAC(Random Sample Consensus)和P3P算法进行。The above-mentioned determination of the camera pose of the image can be performed using RANSAC (Random Sample Consensus) and P3P algorithms.
Based on the above description of the point cloud model construction method, it can be seen that the method can use panoramic images for camera registration and point cloud reconstruction, thereby completing the construction of the point cloud model; moreover, a point cloud model constructed from panoramas has higher accuracy, better robustness to repeated textures and more complete scene reconstruction than a traditional point cloud model constructed from ordinary perspective images. The method can be used to construct high-precision visual maps that provide visual features and three-dimensional landmark points for positioning in autonomous driving and AR; it can also be used to construct three-dimensional models of specific scenes for scene display and VR applications, such as AR/VR tours of tourist attractions, museums and exhibition halls, or to construct a three-dimensional model of a building, a block or a city, AR special effects, and the like.
根据本公开实施例的第二方面,提供一种点云模型构建装置,请参照附图7,其示出了该装置的结构,包括:According to a second aspect of the embodiments of the present disclosure, an apparatus for constructing a point cloud model is provided. Please refer to FIG. 7 , which shows the structure of the apparatus, including:
获取模块701,配置为获取全景图像集中的全景图像的第一特征;The obtaining module 701 is configured to obtain the first feature of the panoramic image in the panoramic image set;
匹配模块702,配置为根据所述第一特征确定所述全景图像集中的至少一组图像对及对应的匹配结果,其中,所述图像对包括第一特征匹配的两个全景图像,所述匹配结果指示两个全景图像的第一特征间的对应关系;A matching module 702, configured to determine at least one set of image pairs in the panoramic image set and corresponding matching results according to the first feature, wherein the image pair includes two panoramic images matched by the first feature, the The matching result indicates the correspondence between the first features of the two panoramic images;
构建模块703,配置为根据所述至少一组图像对及对应的匹配结果构建点云模型。The construction module 703 is configured to construct a point cloud model according to the at least one set of image pairs and corresponding matching results.
在一个实施例中,所述获取模块具体配置为:In one embodiment, the obtaining module is specifically configured as:
确定所述全景图像对应的多张透视图像,其中,多张透视图像对应的空间的集合为全景图像对应的空间;determining multiple perspective images corresponding to the panoramic image, wherein the set of spaces corresponding to the multiple perspective images is the space corresponding to the panoramic image;
获取所述多张透视图像中的至少一张透视图像的第二特征;acquiring a second feature of at least one fluoroscopic image in the plurality of fluoroscopic images;
determine the first sub-feature of the corresponding position of the panoramic image according to the second feature of the perspective image, wherein the perspective image and the corresponding position of the panoramic image correspond to the same space;
根据至少一个所述第一子特征确定所述全景图像的第一特征。A first feature of the panoramic image is determined based on at least one of the first sub-features.
在一个实施例中,所述获取模块配置为确定所述全景图像对应的多张透视图像时,具体配置为:In one embodiment, when the acquisition module is configured to determine multiple perspective images corresponding to the panoramic image, the specific configuration is:
获取所述全景图像对应的单位球面,并确定全景图像的像素点坐标与单位球面的点坐标间的第一映射关系;acquiring the unit sphere corresponding to the panoramic image, and determining the first mapping relationship between the pixel point coordinates of the panoramic image and the point coordinates of the unit sphere;
根据所述单位球面确定多张透视图像,并确定透视图像的像素点坐标与单位球面的点坐标间的第二 映射关系,其中,所述多张透视图像对应的球面点的集合为单位球面;Determine multiple perspective images according to the unit sphere, and determine the second mapping relationship between the pixel point coordinates of the perspective image and the point coordinates of the unit sphere, wherein, the set of spherical points corresponding to the multiple perspective images is the unit sphere;
determine a third mapping relationship between the pixel coordinates of the panoramic image and the pixel coordinates of the perspective images according to the first mapping relationship and the second mapping relationship, and determine the pixel information of the pixels of the perspective images according to the pixel information of the pixels of the panoramic image and the third mapping relationship.
In one embodiment, the second feature of a perspective image includes second feature points and corresponding second descriptors;

when the acquisition module is configured to determine the first sub-feature of the corresponding position of the panoramic image according to the second feature of the perspective image, it is specifically configured to:

determine the coordinates of the first feature points of the panoramic image according to the coordinates of the second feature points of the perspective image and the third mapping relationship;

determine the first descriptors corresponding to the first feature points of the panoramic image according to the second descriptors corresponding to the second feature points of the perspective image.
在一个实施例中,所述第一特征包括第一特征点和对应的第一描述子;In one embodiment, the first feature includes a first feature point and a corresponding first descriptor;
所述匹配模块具体配置为:The matching module is specifically configured as:
根据每个全景图像与对应的待匹配的全景图像确定多组图像对;Determine multiple groups of image pairs according to each panoramic image and the corresponding panoramic image to be matched;
根据每组所述图像对的两个全景图像的第一描述子确定多组特征点对,其中,每组所述特征点对包括两个对应匹配且分属两个全景图像的第一特征点;A plurality of sets of feature point pairs are determined according to the first descriptors of the two panoramic images of each set of the image pairs, wherein each set of the feature point pairs includes two correspondingly matched first feature points belonging to the two panoramic images ;
根据所述多组特征点对确定第一本质矩阵,并利用所述第一本质矩阵对所述多组特征点对进行过滤,得到所述图像对对应的匹配结果。A first essential matrix is determined according to the multiple sets of feature point pairs, and the multiple sets of feature point pairs are filtered by using the first essential matrix to obtain matching results corresponding to the image pairs.
在一个实施例中,所述匹配模块配置为根据每组所述图像对的两个全景图像的第一描述子确定多组特征点对后,还配置为:In one embodiment, after the matching module is configured to determine multiple sets of feature point pairs according to the first descriptors of the two panoramic images of each set of the image pairs, it is further configured to:
获取每组所述图像对的两个全景图像的特征点对的数量;Obtain the number of feature point pairs of the two panoramic images of each group of the image pairs;
过滤特征点对的数量符合预设的第一条件的图像对。The image pairs whose number of feature point pairs meet the preset first condition are filtered.
在一个实施例中,所述匹配模块配置为根据所述多组特征点对确定第一本质矩阵时,具体配置为:In one embodiment, when the matching module is configured to determine the first essential matrix according to the multiple sets of feature point pairs, the specific configuration is:
determine the angle error of a feature point pair according to the angle errors of the two first feature points in the feature point pair, wherein the angle error of a first feature point is the angle between the line connecting the spherical point on the unit sphere corresponding to the first feature point and the optical center of the unit sphere, and the epipolar plane;
以对应的所述特征点对的角度误差为残差项,多次根据预设数目的所述特征点对确定本质矩阵;Taking the angular error of the corresponding pair of feature points as the residual term, determining the essential matrix according to the preset number of pairs of the feature points for many times;
确定每个所述本质矩阵对应的内点数,并确定内点数最多的本质矩阵为所述第一本质矩阵。The number of interior points corresponding to each of the essential matrices is determined, and the essential matrix with the largest number of interior points is determined as the first essential matrix.
在一个实施例中,所述匹配模块配置为确定每个所述本质矩阵对应的内点数时,具体配置为:In one embodiment, when the matching module is configured to determine the number of interior points corresponding to each of the essential matrices, the specific configuration is:
根据所述本质矩阵确定所述图像对的每组特征点对的角度误差;Determine the angle error of each group of feature point pairs of the image pair according to the essential matrix;
确定所述角度误差符合预设的第二条件的特征点对为内点;It is determined that the feature point pair whose angle error meets the preset second condition is an interior point;
根据所有内点确定所述本质矩阵对应的内点数。The number of interior points corresponding to the essential matrix is determined according to all interior points.
在一个实施例中,所述匹配模块配置为利用所述第一本质矩阵对多组特征点对进行过滤时,具体配置为:In one embodiment, when the matching module is configured to use the first essential matrix to filter multiple sets of feature point pairs, the specific configuration is:
根据所述第一本质矩阵确定所述图像对的每组特征点对的角度误差;Determine the angle error of each group of feature point pairs of the image pair according to the first essential matrix;
过滤所述角度误差符合预设的第三条件的特征点对。The feature point pairs whose angle errors meet the preset third condition are filtered.
在一个实施例中,所述匹配模块还配置为:In one embodiment, the matching module is further configured to:
获取所述图像对的特征点对的两个第一特征点在所属全景图像的坐标;Obtaining the coordinates of the two first feature points of the feature point pair of the image pair in the panorama image to which they belong;
determine the perspective images related to the feature point pairs according to the third mapping relationship corresponding to the panoramic image and the coordinates of the first feature points belonging to feature point pairs on the panoramic image, wherein a perspective image related to the feature point pairs is a perspective image in which there is a second feature point corresponding to a first feature point belonging to a feature point pair;

filter the image pair using the perspective images related to the feature point pairs.
In one embodiment, when the matching module is configured to filter the image pair using the perspective images related to the feature point pairs, it is specifically configured to:
filter out the image pair in response to the perspective images related to the feature point pairs for at least one panoramic image of the image pair being a run of consecutive images whose number is smaller than a preset filtering threshold.
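By way of a non-limiting illustration, this consistency check can be sketched as follows; how a first feature point is mapped back to the index of its source perspective image, and the value of the filtering threshold, are assumptions for the example only:

    def keep_image_pair(persp_indices_a, persp_indices_b, filter_threshold=3):
        # persp_indices_a / persp_indices_b: for each panoramic image of the pair, the
        # indices of the perspective images that contain the second feature points
        # corresponding to the matched first feature points.
        # The image pair is filtered out if, for either panorama, those perspective
        # images form a single consecutive run shorter than the threshold.
        for indices in (persp_indices_a, persp_indices_b):
            s = sorted(set(indices))
            is_consecutive = all(b == a + 1 for a, b in zip(s, s[1:]))
            if is_consecutive and len(s) < filter_threshold:
                return False
        return True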
In one embodiment, the construction module is specifically configured to:
determine one image pair as the initial image pair according to the preset initialization condition, the feature point pairs of each image pair and the first essential matrix, determine the camera pose of each panoramic image of the initial image pair, and triangulate the first feature point pairs of the initial image pair to form initial three-dimensional points;
repeatedly determine one unregistered image as the newly registered image according to the matching relationship between the first feature points corresponding to the first three-dimensional points and the first feature points of each unregistered image, until every panoramic image in the panoramic image set is a registered image, where an unregistered image is a panoramic image none of whose first feature points has been triangulated, a registered image is a panoramic image in which at least one first feature point has been triangulated, and the first three-dimensional points include the initial three-dimensional points, or the initial three-dimensional points together with the three-dimensional points formed by triangulating the first feature points of the registered images;
each time a newly registered image is determined, determine the camera pose of the newly registered image, triangulate its first feature points to form corresponding three-dimensional points, and triangulate the third feature points of the previously registered images to form corresponding three-dimensional points, where a third feature point is a first feature point in a previously registered image that matches a first feature point of the newly registered image.
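By way of a non-limiting illustration, the incremental loop can be sketched as follows; estimate_pose, triangulate and count_2d3d_matches are hypothetical callables standing in for the pose estimation, triangulation and 2D-3D matching steps described above, not an interface defined by the disclosure:

    def incremental_reconstruction(images, initial_pair, estimate_pose, triangulate,
                                   count_2d3d_matches):
        a, b = initial_pair
        registered = [a, b]
        points3d = triangulate(b, [a])                 # initial 3D points from the pair
        unregistered = [im for im in images if im not in registered]
        while unregistered:
            # Register the image whose first feature points match the most 3D points.
            nxt = max(unregistered, key=lambda im: count_2d3d_matches(im, points3d))
            estimate_pose(nxt, points3d)               # camera pose of the new image
            points3d += triangulate(nxt, registered)   # new and shared observations
            registered.append(nxt)
            unregistered.remove(nxt)
        return registered, points3d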
In one embodiment, when the construction module is configured to determine one image pair as the initial image pair according to the preset initialization condition, the feature point pairs of each image pair and the first essential matrix, it is specifically configured to:
select image pairs one by one in descending order of the number of feature point pairs, determine after each selection, according to the feature point pairs and the first essential matrix, whether the selected image pair satisfies the initialization condition, and, once a selected image pair satisfies the initialization condition, determine that image pair as the initial image pair.
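A minimal sketch of this selection, assuming a hypothetical satisfies_initialization predicate that implements the check described below, might read:

    def choose_initial_pair(image_pairs, satisfies_initialization):
        # image_pairs: iterable of (pair, feature_point_pairs, E_first) tuples.
        # Try candidates in descending order of the number of feature point pairs.
        ranked = sorted(image_pairs, key=lambda t: len(t[1]), reverse=True)
        for pair, fp_pairs, E_first in ranked:
            if satisfies_initialization(pair, fp_pairs, E_first):
                return pair
        return None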
In one embodiment, when the construction module is configured to determine, according to the feature point pairs and the first essential matrix, whether the image pair satisfies the initialization condition, it is specifically configured to:
determine at least one set of displacement variables according to the first essential matrix of the image pair, triangulate the feature points of the feature point pairs for each set of displacement variables to form the three-dimensional points corresponding to that set, and filter the three-dimensional points according to their reprojection errors and triangulation angles, where a set of displacement variables includes a rotation variable and a translation variable;
in response to the number of three-dimensional points in the largest set being greater than a preset first number threshold, determine the corresponding displacement variables as the first displacement variables;
select, from the essential matrices obtained by the multiple computations, the essential matrices whose number of inliers is greater than or equal to a point-count threshold, determine at least one set of displacement variables according to each selected essential matrix, triangulate the feature points of the feature point pairs for each set of displacement variables to form the corresponding three-dimensional points, filter the three-dimensional points according to their reprojection errors and triangulation angles, and retain, for each essential matrix, the displacement variables corresponding to its largest set of three-dimensional points;
determine that the image pair satisfies the initialization condition in a case where the difference between the displacement variables retained for each essential matrix and the first displacement variables falls within a preset range.
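By way of a non-limiting illustration, the candidate displacement variables (a rotation and a translation) can be obtained from an essential matrix with the standard decomposition below; which candidate is retained is then decided by triangulating the feature point pairs and filtering the resulting three-dimensional points as described above:

    import numpy as np

    def decompose_essential(E):
        # Returns the four candidate (rotation, translation) pairs of an essential
        # matrix; the translation is recovered only up to scale.
        U, _, Vt = np.linalg.svd(E)
        if np.linalg.det(U) < 0:
            U = -U
        if np.linalg.det(Vt) < 0:
            Vt = -Vt
        W = np.array([[0., -1., 0.],
                      [1., 0., 0.],
                      [0., 0., 1.]])
        R1, R2 = U @ W @ Vt, U @ W.T @ Vt
        t = U[:, 2]
        return [(R1, t), (R1, -t), (R2, t), (R2, -t)]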
In one embodiment, the construction module is further configured to:
optimize the camera pose of each panoramic image of the initial image pair and the positions of the initial three-dimensional points by minimizing the reprojection errors of the initial three-dimensional points on the two panoramic images of the initial image pair; and/or,
each time the camera pose of a newly registered image is determined, optimize that camera pose by minimizing the reprojection errors of the three-dimensional points on the newly registered image; and/or,
each time the first feature points of the newly registered image are triangulated to form corresponding three-dimensional points and the third feature points of the previously registered images are triangulated to form corresponding three-dimensional points, optimize the camera pose of every registered image and the position of every three-dimensional point by minimizing the reprojection error of every three-dimensional point on every registered image.
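By way of a non-limiting illustration, one possible form of the reprojection error for a panoramic (unit-sphere) camera is the angle between the observed bearing of a first feature point and the bearing predicted from the current pose and three-dimensional point; stacking such residuals over all observations and handing them to a nonlinear least-squares solver (for example scipy.optimize.least_squares, or a Ceres-style solver) realizes the refinement described above. The pose convention below is an assumption for the example:

    import numpy as np

    def reprojection_residual(R, t, X, observed_bearing):
        # R, t: world-to-camera pose of a registered panoramic image; X: 3D point;
        # observed_bearing: unit-sphere point of the corresponding first feature point.
        predicted = R @ X + t
        predicted = predicted / np.linalg.norm(predicted)
        cos_angle = np.clip(predicted @ observed_bearing, -1.0, 1.0)
        return np.arccos(cos_angle)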
In one embodiment, the matching module is further configured to:
determine the panoramic images to be matched with each panoramic image according to the space corresponding to each panoramic image; or,
determine the panoramic images to be matched with each panoramic image according to a preset pairing rule. With regard to the apparatus in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the method embodiments of the first aspect, and is not elaborated here.
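For illustration only, two simple pairing strategies consistent with the above (the distance bound, the window size and the use of capture positions are assumptions, not requirements of the disclosure) are a space-based rule and a preset sliding-window rule:

    import itertools, math

    def pairs_by_space(images, positions, max_dist=10.0):
        # positions: capture position of each panorama; pair panoramas whose
        # corresponding spaces are likely to overlap (centers within max_dist).
        out = []
        for i, j in itertools.combinations(range(len(images)), 2):
            if math.dist(positions[i], positions[j]) <= max_dist:
                out.append((images[i], images[j]))
        return out

    def pairs_by_rule(images, window=3):
        # Preset pairing rule: match each panorama with its next `window`
        # neighbours in capture order.
        return [(images[i], images[j])
                for i in range(len(images))
                for j in range(i + 1, min(i + 1 + window, len(images)))]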
Referring to FIG. 8, according to a third aspect of the embodiments of the present disclosure, an electronic device is provided. The electronic device includes a memory 801 and a processor 802. The memory 801 is configured to store computer instructions executable on the processor 802, and the processor 802 is configured to construct a point cloud model based on the method of the first aspect when executing the computer instructions. The memory 801 may be a volatile or non-volatile storage medium.
According to a fourth aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored; when the program is executed by a processor, the method of the first aspect is implemented. The computer-readable storage medium may be a volatile or non-volatile storage medium.
The present disclosure further provides a computer program product including computer-readable code. When the computer-readable code runs on a device, a processor in the device executes instructions for implementing the point cloud model construction method provided in any of the above embodiments.
The computer program product may be implemented in hardware, software, or a combination thereof. In an optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, it is embodied as a software product, such as a Software Development Kit (SDK).
In the present disclosure, the terms "first" and "second" are used for descriptive purposes only and should not be construed as indicating or implying relative importance. The term "plurality" means two or more, unless expressly limited otherwise.
Other embodiments of the present disclosure will readily occur to those skilled in the art upon consideration of the specification and practice of the disclosure herein. The present disclosure is intended to cover any variations, uses, or adaptations that follow its general principles and include common knowledge or customary technical means in the art not disclosed herein. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It should be understood that the present disclosure is not limited to the precise structures described above and illustrated in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.
Industrial Applicability
Embodiments of the present disclosure provide a point cloud model construction method, apparatus, device, and storage medium. The method includes: acquiring a first feature of each panoramic image in a panoramic image set; determining at least one image pair in the panoramic image set and the corresponding matching result according to the first features, where an image pair includes two panoramic images whose first features match and the matching result indicates the correspondence between the first features of the two panoramic images; and constructing a point cloud model according to the at least one image pair and the corresponding matching results. Because panoramic images are used for matching and the point cloud model is then built from the matching results, the number of images in the image set can be reduced, which improves matching and modeling efficiency; and because each panoramic image covers a large spatial range, the matching between images is improved, which in turn improves the accuracy and quality of the constructed point cloud model.
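As a purely illustrative, high-level reading of this pipeline (every helper below is a hypothetical placeholder rather than an interface defined by the disclosure), the three stages can be sketched as:

    def build_point_cloud(panoramas, extract_first_features, match_pairs, reconstruct):
        # Stage 1: first features of every panorama, obtained via its perspective images.
        features = {p: extract_first_features(p) for p in panoramas}
        # Stage 2: image pairs and their matching results (feature point pairs filtered
        # with the first essential matrix).
        pairs, matches = match_pairs(features)
        # Stage 3: incremental reconstruction of the point cloud model.
        return reconstruct(pairs, matches)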

Claims (35)

  1. A point cloud model construction method, applied to an electronic device, comprising:
    acquiring a first feature of a panoramic image in a panoramic image set;
    determining at least one image pair in the panoramic image set and a corresponding matching result according to the first feature, wherein the image pair comprises two panoramic images whose first features match, and the matching result indicates a correspondence between the first features of the two panoramic images;
    constructing a point cloud model according to the at least one image pair and the corresponding matching result.
  2. The point cloud model construction method according to claim 1, wherein acquiring the first feature of the panoramic image in the panoramic image set comprises:
    determining a plurality of perspective images corresponding to the panoramic image, wherein the union of the spaces corresponding to the plurality of perspective images is the space corresponding to the panoramic image;
    acquiring a second feature of at least one of the plurality of perspective images;
    determining a first sub-feature at a corresponding position of the panoramic image according to the second feature of the perspective image, wherein the perspective image and the corresponding position of the panoramic image correspond to the same space;
    determining the first feature of the panoramic image according to at least one first sub-feature.
  3. The point cloud model construction method according to claim 2, wherein determining the plurality of perspective images corresponding to the panoramic image comprises:
    acquiring a unit sphere corresponding to the panoramic image, and determining a first mapping relationship between pixel coordinates of the panoramic image and point coordinates on the unit sphere;
    determining the plurality of perspective images according to the unit sphere, and determining a second mapping relationship between pixel coordinates of the perspective images and point coordinates on the unit sphere, wherein the set of spherical points corresponding to the plurality of perspective images is the unit sphere;
    determining a third mapping relationship between the pixel coordinates of the panoramic image and the pixel coordinates of the perspective images according to the first mapping relationship and the second mapping relationship, and determining pixel information of the pixels of the perspective images according to the pixel information of the pixels of the panoramic image and the third mapping relationship.
  4. The point cloud model construction method according to claim 3, wherein the second feature of the perspective image comprises second feature points and corresponding second descriptors;
    and determining the first sub-feature at the corresponding position of the panoramic image according to the second feature of the perspective image comprises:
    determining coordinates of first feature points of the panoramic image according to coordinates of the second feature points of the perspective image and the third mapping relationship;
    determining first descriptors corresponding to the first feature points of the panoramic image according to the second descriptors corresponding to the second feature points of the perspective image.
  5. The point cloud model construction method according to claim 4, wherein the first feature comprises first feature points and corresponding first descriptors;
    and determining the at least one image pair in the panoramic image set and the corresponding matching result according to the first feature comprises:
    determining a plurality of image pairs according to each panoramic image and the corresponding panoramic images to be matched;
    determining a plurality of feature point pairs according to the first descriptors of the two panoramic images of each image pair, wherein each feature point pair comprises two mutually matched first feature points belonging respectively to the two panoramic images;
    determining a first essential matrix according to the plurality of feature point pairs, and filtering the plurality of feature point pairs using the first essential matrix to obtain the matching result corresponding to the image pair.
  6. The point cloud model construction method according to claim 5, wherein after determining the plurality of feature point pairs according to the first descriptors of the two panoramic images of each image pair, the method further comprises:
    acquiring the number of feature point pairs of the two panoramic images of each image pair;
    filtering out the image pairs whose number of feature point pairs meets a preset first condition.
  7. The point cloud model construction method according to claim 5 or 6, wherein determining the first essential matrix according to the plurality of feature point pairs comprises:
    determining an angle error of each feature point pair according to angle errors of the two first feature points in the pair, wherein the angle error of a first feature point is the angle between the epipolar plane and the line connecting the optical center of the unit sphere with the spherical point on the unit sphere that corresponds to the first feature point;
    determining an essential matrix multiple times, each time from a preset number of the feature point pairs, taking the angle errors of the corresponding feature point pairs as residual terms;
    determining the number of inliers corresponding to each essential matrix, and determining the essential matrix with the largest number of inliers as the first essential matrix.
  8. The point cloud model construction method according to claim 7, wherein determining the number of inliers corresponding to each essential matrix comprises:
    determining the angle error of each feature point pair of the image pair according to the essential matrix;
    determining the feature point pairs whose angle error meets a preset second condition as inliers;
    determining the number of inliers corresponding to the essential matrix from all the inliers.
  9. The point cloud model construction method according to claim 7 or 8, wherein filtering the plurality of feature point pairs using the first essential matrix comprises:
    determining the angle error of each feature point pair of the image pair according to the first essential matrix;
    filtering out the feature point pairs whose angle error meets a preset third condition.
  10. The point cloud model construction method according to any one of claims 5 to 9, further comprising:
    acquiring the coordinates, in the panoramic images to which they belong, of the two first feature points of a feature point pair of the image pair;
    determining perspective images related to the feature point pair according to the third mapping relationship corresponding to the panoramic image and the coordinates, on the panoramic image, of the first feature points belonging to the feature point pair, wherein a perspective image related to the feature point pair is a perspective image containing a second feature point that corresponds to a first feature point belonging to the feature point pair;
    filtering the image pair using the perspective images related to the feature point pairs.
  11. The point cloud model construction method according to claim 10, wherein filtering the image pair using the perspective images related to the feature point pairs comprises:
    filtering out the image pair in response to the perspective images related to the feature point pairs for at least one panoramic image of the image pair being a run of consecutive images whose number is smaller than a preset filtering threshold.
  12. The point cloud model construction method according to any one of claims 5 to 11, wherein constructing the point cloud model according to the at least one image pair and the corresponding matching result comprises:
    determining one image pair as an initial image pair according to a preset initialization condition, the feature point pairs of each image pair and the first essential matrix, determining a camera pose of each panoramic image of the initial image pair, and triangulating the first feature point pairs of the initial image pair to form initial three-dimensional points;
    repeatedly determining one unregistered image as a newly registered image according to a matching relationship between first feature points corresponding to first three-dimensional points and the first feature points of each unregistered image, until every panoramic image in the panoramic image set is a registered image, wherein an unregistered image is a panoramic image none of whose first feature points has been triangulated, a registered image is a panoramic image in which at least one first feature point has been triangulated, and the first three-dimensional points comprise the initial three-dimensional points, or the initial three-dimensional points and the three-dimensional points formed by triangulating the first feature points of the registered images;
    each time a newly registered image is determined, determining a camera pose of the newly registered image, triangulating the first feature points of the newly registered image to form corresponding three-dimensional points, and triangulating third feature points of the previously registered images to form corresponding three-dimensional points, wherein a third feature point is a first feature point in a previously registered image that matches a first feature point of the newly registered image.
  13. The point cloud model construction method according to claim 12, wherein determining one image pair as the initial image pair according to the preset initialization condition, the feature point pairs of each image pair and the first essential matrix comprises:
    selecting image pairs one by one in descending order of the number of feature point pairs, determining after each selection, according to the feature point pairs and the first essential matrix, whether the selected image pair satisfies the initialization condition, and, once a selected image pair satisfies the initialization condition, determining the selected image pair as the initial image pair.
  14. The point cloud model construction method according to claim 13, wherein determining, according to the feature point pairs and the first essential matrix, whether the image pair satisfies the initialization condition comprises:
    determining at least one set of displacement variables according to the first essential matrix of the image pair, triangulating the feature points of the feature point pairs for each set of displacement variables to form the three-dimensional points corresponding to that set, and filtering the three-dimensional points according to their reprojection errors and triangulation angles, wherein a set of displacement variables comprises a rotation variable and a translation variable;
    in response to the number of three-dimensional points in the largest set being greater than a preset first number threshold, determining the corresponding displacement variables as first displacement variables;
    selecting, from the essential matrices obtained by the multiple computations, the essential matrices whose number of inliers is greater than or equal to a point-count threshold, determining at least one set of displacement variables according to each selected essential matrix, triangulating the feature points of the feature point pairs for each set of displacement variables to form the corresponding three-dimensional points, filtering the three-dimensional points according to their reprojection errors and triangulation angles, and retaining, for each essential matrix, the displacement variables corresponding to its largest set of three-dimensional points;
    determining that the image pair satisfies the initialization condition in a case where the difference between the displacement variables retained for each essential matrix and the first displacement variables falls within a preset range.
  15. The point cloud model construction method according to claim 12, further comprising:
    optimizing the camera pose of each panoramic image of the initial image pair and the positions of the initial three-dimensional points by minimizing the reprojection errors of the initial three-dimensional points on the two panoramic images of the initial image pair; and/or,
    each time the camera pose of a newly registered image is determined, optimizing the camera pose of the newly registered image by minimizing the reprojection errors of the three-dimensional points on the newly registered image; and/or,
    each time the feature points of the newly registered image are triangulated to form corresponding three-dimensional points and the third feature points of the previously registered images are triangulated to form corresponding three-dimensional points, optimizing the camera pose of every registered image and the position of every three-dimensional point by minimizing the reprojection error of every three-dimensional point on every registered image.
  16. The point cloud model construction method according to claim 5, further comprising:
    determining the panoramic images to be matched with each panoramic image according to the space corresponding to each panoramic image; or,
    determining the panoramic images to be matched with each panoramic image according to a preset pairing rule.
  17. A point cloud model construction apparatus, comprising:
    an acquisition module, configured to acquire a first feature of a panoramic image in a panoramic image set;
    a matching module, configured to determine at least one image pair in the panoramic image set and a corresponding matching result according to the first feature, wherein the image pair comprises two panoramic images whose first features match, and the matching result indicates a correspondence between the first features of the two panoramic images;
    a construction module, configured to construct a point cloud model according to the at least one image pair and the corresponding matching result.
  18. The point cloud model construction apparatus according to claim 17, wherein the acquisition module, when acquiring the first feature of the panoramic image in the panoramic image set, is configured to:
    determine a plurality of perspective images corresponding to the panoramic image, wherein the union of the spaces corresponding to the plurality of perspective images is the space corresponding to the panoramic image;
    acquire a second feature of at least one of the plurality of perspective images;
    determine a first sub-feature at a corresponding position of the panoramic image according to the second feature of the perspective image, wherein the perspective image and the corresponding position of the panoramic image correspond to the same space;
    determine the first feature of the panoramic image according to at least one first sub-feature.
  19. The point cloud model construction apparatus according to claim 18, wherein the acquisition module, when determining the plurality of perspective images corresponding to the panoramic image, is configured to:
    acquire a unit sphere corresponding to the panoramic image, and determine a first mapping relationship between pixel coordinates of the panoramic image and point coordinates on the unit sphere;
    determine the plurality of perspective images according to the unit sphere, and determine a second mapping relationship between pixel coordinates of the perspective images and point coordinates on the unit sphere, wherein the set of spherical points corresponding to the plurality of perspective images is the unit sphere;
    determine a third mapping relationship between the pixel coordinates of the panoramic image and the pixel coordinates of the perspective images according to the first mapping relationship and the second mapping relationship, and determine pixel information of the pixels of the perspective images according to the pixel information of the pixels of the panoramic image and the third mapping relationship.
  20. The point cloud model construction apparatus according to claim 19, wherein the second feature of the perspective image comprises second feature points and corresponding second descriptors;
    and the acquisition module, when determining the first sub-feature at the corresponding position of the panoramic image according to the second feature of the perspective image, is configured to:
    determine coordinates of first feature points of the panoramic image according to coordinates of the second feature points of the perspective image and the third mapping relationship;
    determine first descriptors corresponding to the first feature points of the panoramic image according to the second descriptors corresponding to the second feature points of the perspective image.
  21. The point cloud model construction apparatus according to claim 20, wherein the first feature comprises first feature points and corresponding first descriptors;
    and the matching module, when determining the at least one image pair in the panoramic image set and the corresponding matching result according to the first feature, is configured to:
    determine a plurality of image pairs according to each panoramic image and the corresponding panoramic images to be matched;
    determine a plurality of feature point pairs according to the first descriptors of the two panoramic images of each image pair, wherein each feature point pair comprises two mutually matched first feature points belonging respectively to the two panoramic images;
    determine a first essential matrix according to the plurality of feature point pairs, and filter the plurality of feature point pairs using the first essential matrix to obtain the matching result corresponding to the image pair.
  22. The point cloud model construction apparatus according to claim 21, wherein after determining the plurality of feature point pairs according to the first descriptors of the two panoramic images of each image pair, the matching module is further configured to:
    acquire the number of feature point pairs of the two panoramic images of each image pair;
    filter out the image pairs whose number of feature point pairs meets a preset first condition.
  23. The point cloud model construction apparatus according to claim 21 or 22, wherein the matching module, when determining the first essential matrix according to the plurality of feature point pairs, is configured to:
    determine an angle error of each feature point pair according to angle errors of the two first feature points in the pair, wherein the angle error of a first feature point is the angle between the epipolar plane and the line connecting the optical center of the unit sphere with the spherical point on the unit sphere that corresponds to the first feature point;
    determine an essential matrix multiple times, each time from a preset number of the feature point pairs, taking the angle errors of the corresponding feature point pairs as residual terms;
    determine the number of inliers corresponding to each essential matrix, and determine the essential matrix with the largest number of inliers as the first essential matrix.
  24. The point cloud model construction apparatus according to claim 23, wherein the matching module, when determining the number of inliers corresponding to each essential matrix, is configured to:
    determine the angle error of each feature point pair of the image pair according to the essential matrix;
    determine the feature point pairs whose angle error meets a preset second condition as inliers;
    determine the number of inliers corresponding to the essential matrix from all the inliers.
  25. The point cloud model construction apparatus according to claim 23 or 24, wherein filtering the plurality of feature point pairs using the first essential matrix comprises:
    determining the angle error of each feature point pair of the image pair according to the first essential matrix;
    filtering out the feature point pairs whose angle error meets a preset third condition.
  26. The point cloud model construction apparatus according to any one of claims 21 to 25, further comprising:
    acquiring the coordinates, in the panoramic images to which they belong, of the two first feature points of a feature point pair of the image pair;
    determining perspective images related to the feature point pair according to the third mapping relationship corresponding to the panoramic image and the coordinates, on the panoramic image, of the first feature points belonging to the feature point pair, wherein a perspective image related to the feature point pair is a perspective image containing a second feature point that corresponds to a first feature point belonging to the feature point pair;
    filtering the image pair using the perspective images related to the feature point pairs.
  27. The point cloud model construction apparatus according to claim 26, wherein filtering the image pair using the perspective images related to the feature point pairs comprises:
    filtering out the image pair in response to the perspective images related to the feature point pairs for at least one panoramic image of the image pair being a run of consecutive images whose number is smaller than a preset filtering threshold.
  28. The point cloud model construction apparatus according to any one of claims 21 to 27, wherein constructing the point cloud model according to the at least one image pair and the corresponding matching result comprises:
    determining one image pair as an initial image pair according to a preset initialization condition, the feature point pairs of each image pair and the first essential matrix, determining a camera pose of each panoramic image of the initial image pair, and triangulating the first feature point pairs of the initial image pair to form initial three-dimensional points;
    repeatedly determining one unregistered image as a newly registered image according to a matching relationship between first feature points corresponding to first three-dimensional points and the first feature points of each unregistered image, until every panoramic image in the panoramic image set is a registered image, wherein an unregistered image is a panoramic image none of whose first feature points has been triangulated, a registered image is a panoramic image in which at least one first feature point has been triangulated, and the first three-dimensional points comprise the initial three-dimensional points, or the initial three-dimensional points and the three-dimensional points formed by triangulating the first feature points of the registered images;
    each time a newly registered image is determined, determining a camera pose of the newly registered image, triangulating the first feature points of the newly registered image to form corresponding three-dimensional points, and triangulating third feature points of the previously registered images to form corresponding three-dimensional points, wherein a third feature point is a first feature point in a previously registered image that matches a first feature point of the newly registered image.
  29. The point cloud model construction apparatus according to claim 28, wherein determining one image pair as the initial image pair according to the preset initialization condition, the feature point pairs of each image pair and the first essential matrix comprises:
    selecting image pairs one by one in descending order of the number of feature point pairs, determining after each selection, according to the feature point pairs and the first essential matrix, whether the selected image pair satisfies the initialization condition, and, once a selected image pair satisfies the initialization condition, determining the selected image pair as the initial image pair.
  30. The point cloud model construction apparatus according to claim 29, wherein determining, according to the feature point pairs and the first essential matrix, whether the image pair satisfies the initialization condition comprises:
    determining at least one set of displacement variables according to the first essential matrix of the image pair, triangulating the feature points of the feature point pairs for each set of displacement variables to form the three-dimensional points corresponding to that set, and filtering the three-dimensional points according to their reprojection errors and triangulation angles, wherein a set of displacement variables comprises a rotation variable and a translation variable;
    in response to the number of three-dimensional points in the largest set being greater than a preset first number threshold, determining the corresponding displacement variables as first displacement variables;
    selecting, from the essential matrices obtained by the multiple computations, the essential matrices whose number of inliers is greater than or equal to a point-count threshold, determining at least one set of displacement variables according to each selected essential matrix, triangulating the feature points of the feature point pairs for each set of displacement variables to form the corresponding three-dimensional points, filtering the three-dimensional points according to their reprojection errors and triangulation angles, and retaining, for each essential matrix, the displacement variables corresponding to its largest set of three-dimensional points;
    determining that the image pair satisfies the initialization condition in a case where the difference between the displacement variables retained for each essential matrix and the first displacement variables falls within a preset range.
  31. The point cloud model construction apparatus according to claim 28, further comprising:
    optimizing the camera pose of each panoramic image of the initial image pair and the positions of the initial three-dimensional points by minimizing the reprojection errors of the initial three-dimensional points on the two panoramic images of the initial image pair; and/or,
    each time the camera pose of a newly registered image is determined, optimizing the camera pose of the newly registered image by minimizing the reprojection errors of the three-dimensional points on the newly registered image; and/or,
    each time the feature points of the newly registered image are triangulated to form corresponding three-dimensional points and the third feature points of the previously registered images are triangulated to form corresponding three-dimensional points, optimizing the camera pose of every registered image and the position of every three-dimensional point by minimizing the reprojection error of every three-dimensional point on every registered image.
  32. The point cloud model construction apparatus according to claim 21, further comprising:
    determining the panoramic images to be matched with each panoramic image according to the space corresponding to each panoramic image; or,
    determining the panoramic images to be matched with each panoramic image according to a preset pairing rule.
  33. An electronic device, comprising a memory and a processor, wherein the memory is configured to store computer instructions executable on the processor, and the processor is configured to implement the method of any one of claims 1 to 16 when executing the computer instructions.
  34. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1 to 16.
  35. A computer program, comprising computer-readable code, wherein when the computer-readable code runs in an electronic device, a processor in the electronic device executes instructions for implementing the method of any one of claims 1 to 16.
PCT/CN2021/105574 2021-03-04 2021-07-09 Point cloud model construction method and apparatus, electronic device, storage medium, and program WO2022183657A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
KR1020227013015A KR102638632B1 (en) 2021-03-04 2021-07-09 Methods, devices, electronic devices, storage media and programs for building point cloud models
JP2022525128A JP2023519466A (en) 2021-03-04 2021-07-09 POINT CLOUD MODEL CONSTRUCTION METHOD, APPARATUS, ELECTRONIC DEVICE, STORAGE MEDIUM AND PROGRAM

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110240320.4 2021-03-04
CN202110240320.4A CN112837419B (en) 2021-03-04 2021-03-04 Point cloud model construction method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
WO2022183657A1 true WO2022183657A1 (en) 2022-09-09

Family

ID=75934603

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/105574 WO2022183657A1 (en) 2021-03-04 2021-07-09 Point cloud model construction method and apparatus, electronic device, storage medium, and program

Country Status (4)

Country Link
JP (1) JP2023519466A (en)
KR (1) KR102638632B1 (en)
CN (1) CN112837419B (en)
WO (1) WO2022183657A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116858215A (en) * 2023-09-05 2023-10-10 武汉大学 AR navigation map generation method and device
CN117437289A (en) * 2023-12-20 2024-01-23 绘见科技(深圳)有限公司 Space calculation method based on multi-source sensor and related equipment

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112837419B (en) * 2021-03-04 2022-06-24 浙江商汤科技开发有限公司 Point cloud model construction method, device, equipment and storage medium
CN113920263A (en) * 2021-10-18 2022-01-11 浙江商汤科技开发有限公司 Map construction method, map construction device, map construction equipment and storage medium
CN114429495B (en) * 2022-03-14 2022-08-30 荣耀终端有限公司 Three-dimensional scene reconstruction method and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101398937A (en) * 2008-10-29 2009-04-01 北京航空航天大学 Three-dimensional reconstruction method based on fringe photograph collection of same scene
CN110021065A (en) * 2019-03-07 2019-07-16 杨晓春 A kind of indoor environment method for reconstructing based on monocular camera
CN110211223A (en) * 2019-05-28 2019-09-06 哈工大新材料智能装备技术研究院(招远)有限公司 A kind of increment type multiview three-dimensional method for reconstructing
CN110363838A (en) * 2019-06-06 2019-10-22 浙江大学 Big field-of-view image three-dimensionalreconstruction optimization method based on more spherical surface camera models
CN112837419A (en) * 2021-03-04 2021-05-25 浙江商汤科技开发有限公司 Point cloud model construction method, device, equipment and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002216163A (en) * 2001-01-12 2002-08-02 Oojisu Soken:Kk Generating method of panoramic image by optional viewpoint, computer program and recording medium
CN101692187A (en) * 2008-05-11 2010-04-07 捷讯研究有限公司 Mobile electronic device and associated method enabling transliteration of a text input
EP3422711A1 (en) * 2017-06-29 2019-01-02 Koninklijke Philips N.V. Apparatus and method for generating an image
KR102015099B1 (en) * 2018-01-25 2019-10-21 전자부품연구원 Apparatus and method for providing wrap around view monitoring using dis information

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101398937A (en) * 2008-10-29 2009-04-01 北京航空航天大学 Three-dimensional reconstruction method based on fringe photograph collection of same scene
CN110021065A (en) * 2019-03-07 2019-07-16 杨晓春 A kind of indoor environment method for reconstructing based on monocular camera
CN110211223A (en) * 2019-05-28 2019-09-06 哈工大新材料智能装备技术研究院(招远)有限公司 A kind of increment type multiview three-dimensional method for reconstructing
CN110363838A (en) * 2019-06-06 2019-10-22 浙江大学 Big field-of-view image three-dimensionalreconstruction optimization method based on more spherical surface camera models
CN112837419A (en) * 2021-03-04 2021-05-25 浙江商汤科技开发有限公司 Point cloud model construction method, device, equipment and storage medium

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116858215A (en) * 2023-09-05 2023-10-10 武汉大学 AR navigation map generation method and device
CN116858215B (en) * 2023-09-05 2023-12-05 武汉大学 AR navigation map generation method and device
CN117437289A (en) * 2023-12-20 2024-01-23 绘见科技(深圳)有限公司 Space calculation method based on multi-source sensor and related equipment
CN117437289B (en) * 2023-12-20 2024-04-02 绘见科技(深圳)有限公司 Space calculation method based on multi-source sensor and related equipment

Also Published As

Publication number Publication date
JP2023519466A (en) 2023-05-11
CN112837419B (en) 2022-06-24
CN112837419A (en) 2021-05-25
KR20220125714A (en) 2022-09-14
KR102638632B1 (en) 2024-02-20

Similar Documents

Publication Publication Date Title
WO2022183657A1 (en) Point cloud model construction method and apparatus, electronic device, storage medium, and program
JP7328366B2 (en) Information processing method, positioning method and device, electronic device and storage medium
Sturm et al. Camera models and fundamental concepts used in geometric computer vision
WO2020206903A1 (en) Image matching method and device, and computer readable storage medium
US11816810B2 (en) 3-D reconstruction using augmented reality frameworks
CN111127524A (en) Method, system and device for tracking trajectory and reconstructing three-dimensional image
EP3274964B1 (en) Automatic connection of images using visual features
CN112686877B (en) Binocular camera-based three-dimensional house damage model construction and measurement method and system
CN110580720B (en) Panorama-based camera pose estimation method
US20090232415A1 (en) Platform for the production of seamless orthographic imagery
Aly et al. Street view goes indoors: Automatic pose estimation from uncalibrated unordered spherical panoramas
WO2023024393A1 (en) Depth estimation method and apparatus, computer device, and storage medium
CN115082617A (en) Pipeline three-dimensional reconstruction method and device based on multi-view optimization and storage medium
US8509522B2 (en) Camera translation using rotation from device
CN113902802A (en) Visual positioning method and related device, electronic equipment and storage medium
CN113808269A (en) Map generation method, positioning method, system and computer readable storage medium
WO2024032101A1 (en) Feature map generation method and apparatus, storage medium, and computer device
KR100944293B1 (en) Mechanism for reconstructing full 3D model using single-axis turntable images
CN113034347A (en) Oblique photographic image processing method, device, processing equipment and storage medium
Ventura et al. Structure and motion in urban environments using upright panoramas
CN110135474A (en) A kind of oblique aerial image matching method and system based on deep learning
CN113223163A (en) Point cloud map construction method and device, equipment and storage medium
CN114937123B (en) Building modeling method and device based on multi-source image fusion
US20210385428A1 (en) System and method for identifying a relative position and direction of a camera relative to an object
CA3239769A1 (en) System and methods for validating imagery pipelines

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2022525128

Country of ref document: JP

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21928735

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21928735

Country of ref document: EP

Kind code of ref document: A1