CN109472741B - Three-dimensional splicing method and device - Google Patents

Three-dimensional splicing method and device

Info

Publication number
CN109472741B
Authority
CN
China
Prior art keywords
dimensional
frame depth
relationship
determining
current frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811163271.3A
Other languages
Chinese (zh)
Other versions
CN109472741A (en)
Inventor
刘增艺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shining 3D Technology Co Ltd
Original Assignee
Shining 3D Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shining 3D Technology Co Ltd filed Critical Shining 3D Technology Co Ltd
Priority to CN201811163271.3A priority Critical patent/CN109472741B/en
Publication of CN109472741A publication Critical patent/CN109472741A/en
Application granted granted Critical
Publication of CN109472741B publication Critical patent/CN109472741B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a three-dimensional stitching method and device. The method comprises the following steps: determining a first mapping relation between the three-dimensional points of the current frame depth map and the pixel points of a texture map, and a first corresponding relation between identification points in different texture maps; determining a first relative position relation between the current frame depth map and other frame depth maps according to the first mapping relation and the first corresponding relation, wherein the other frame depth maps are depth maps different from the current frame depth map; and performing three-dimensional stitching on the three-dimensional points in the current frame depth map and the other frame depth maps based on the first relative position relation. The invention solves the technical problem of the high stitching failure rate of the three-dimensional stitching schemes adopted in the prior art.

Description

Three-dimensional splicing method and device
Technical Field
The invention relates to the field of three-dimensional scanning, in particular to a three-dimensional splicing method and device.
Background
In existing handheld three-dimensional scanning schemes, the basic stitching principle is to perform ICP stitching based on the overlapping surfaces between the three-dimensional points reconstructed from the front and rear frames. Such schemes place a relatively high requirement on the initial relative position between the front and rear frames, and stitching easily fails on planar, featureless, or symmetric objects, so that scanning cannot proceed.
In view of the above problems, no effective solution has been proposed at present.
Disclosure of Invention
The embodiments of the invention provide a three-dimensional stitching method and device, which at least solve the technical problem of the high stitching failure rate of the three-dimensional stitching schemes adopted in the prior art.
According to an aspect of an embodiment of the present invention, there is provided a three-dimensional stitching method, including: determining a first mapping relation between three-dimensional points of a depth map of a current frame and pixel points of a texture map and a first corresponding relation between identification points in different texture maps; determining a first relative position relationship between the current frame depth map and other frame depth maps according to the first mapping relationship and the first corresponding relationship, wherein the other frame depth maps are different from the current frame depth map; and based on the first relative position relation, performing three-dimensional stitching on three-dimensional points in the current frame depth map and the other frame depth maps.
Further, determining a first mapping relationship between the three-dimensional point of the current frame depth map and the pixel point of the texture map includes: determining a second relative positional relationship between the depth camera and the texture camera by calibrating the depth camera and the texture camera; performing three-dimensional reconstruction based on the depth camera to obtain three-dimensional points of the current frame depth map; based on the second relative positional relationship, the first mapping relationship is obtained by projecting the three-dimensional points onto the texture map acquired by the texture camera.
Further, determining a first relative positional relationship between the current frame depth map and other frame depth maps according to the first mapping relationship and the first correspondence relationship includes: determining a second mapping relation between the identification points and the pixel points; determining a third mapping relation between the identification point and the three-dimensional point according to the first mapping relation and the second mapping relation; and determining the first relative position relationship according to the third mapping relationship and the first corresponding relationship.
Further, determining the first relative positional relationship according to the third mapping relationship and the first correspondence relationship includes: determining a second corresponding relation between the three-dimensional points in the current frame depth map and the other frame depth maps according to the third mapping relation and the first corresponding relation; and determining the first relative position relationship based on the second corresponding relationship.
Further, based on the first relative positional relationship, performing three-dimensional stitching on the three-dimensional points in the current frame depth map and the other frame depth maps, including: performing initial three-dimensional stitching on three-dimensional points in the current frame depth map and the other frame depth maps based on the first relative position relation; and on the basis of the initial three-dimensional stitching, adopting an iterative nearest point algorithm to accurately stitch three-dimensional points in the current frame depth map and the other frame depth maps.
Further, the identification point includes at least one of the following: a corner point and a mark point. In a case where the identification point is the corner point, before determining a second mapping relation between the identification point and the pixel point, the method further includes: extracting each of the corner points in the texture map by a target extraction algorithm, wherein the target extraction algorithm includes at least one of the following: Harris corner extraction algorithm, SIFT algorithm, SURF feature extraction algorithm, FAST corner detection algorithm, AGAST corner detection algorithm, BRISK feature extraction algorithm, FREAK feature extraction algorithm, ORB feature extraction algorithm.
Further, the identification points in different texture maps are subjected to point-to-point matching through a RANSAC algorithm, and the first corresponding relation is obtained.
According to another aspect of the embodiment of the present invention, there is also provided a three-dimensional stitching device, including: the first determining module is used for determining a first mapping relation between the three-dimensional point of the current frame depth map and the pixel point of the texture map and a first corresponding relation between the identification points in different texture maps; the second determining module is configured to determine a first relative positional relationship between the current frame depth map and another frame depth map according to the first mapping relationship and the first correspondence relationship, where the other frame depth map is a depth map different from the current frame depth map; and the splicing module is used for carrying out three-dimensional splicing on the three-dimensional points in the current frame depth map and the other frame depth maps based on the first relative position relation.
Further, the first determining module includes: the determining unit is used for determining a second relative position relationship between the depth camera and the texture camera through calibrating the depth camera and the texture camera; the acquisition unit is used for carrying out three-dimensional reconstruction based on the depth camera and acquiring three-dimensional points of the current frame depth map; and a mapping unit, configured to obtain the first mapping relationship by projecting the three-dimensional point onto the texture map acquired by the texture camera based on the second relative positional relationship.
According to another aspect of the embodiments of the present invention, there is further provided a storage medium, where the storage medium includes a stored program, and when the program runs, the device in which the storage medium is located is controlled to execute any one of the three-dimensional stitching methods described above.
According to another aspect of the embodiment of the present invention, there is further provided a processor, where the processor is configured to execute a program, and when the program is executed, execute any one of the three-dimensional stitching methods described above.
In the embodiment of the invention, a first mapping relation between the three-dimensional points of the current frame depth map and the pixel points of the texture map, and a first corresponding relation between identification points in different texture maps, are determined; a first relative position relation between the current frame depth map and other frame depth maps is determined according to the first mapping relation and the first corresponding relation, where the other frame depth maps are depth maps different from the current frame depth map; and three-dimensional stitching is performed on the three-dimensional points in the current frame depth map and the other frame depth maps based on the first relative position relation. This achieves the purpose of improving the accuracy of three-dimensional stitching, attains the technical effects of reducing the number of three-dimensional stitching failures and improving the smoothness of scanning, and thereby solves the technical problem of the high stitching failure rate of the three-dimensional stitching schemes adopted in the prior art.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiments of the invention and together with the description serve to explain the invention and do not constitute a limitation on the invention. In the drawings:
FIG. 1 is a flow chart of a three-dimensional stitching method according to an embodiment of the present invention;
FIG. 2 is a flow chart of an alternative three-dimensional stitching method according to an embodiment of the present invention;
FIG. 3 is a flow chart of an alternative three-dimensional stitching method according to an embodiment of the present invention;
FIG. 4 is a flow chart of an alternative three-dimensional stitching method according to an embodiment of the present invention; and
FIG. 5 is a schematic structural diagram of a three-dimensional stitching device according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
According to an embodiment of the present invention, an embodiment of a three-dimensional stitching method is provided. It should be noted that the steps shown in the flowcharts of the figures may be performed in a computer system, such as one executing a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from the one described herein.
Fig. 1 is a flowchart of a three-dimensional stitching method according to an embodiment of the present invention, as shown in fig. 1, the method includes the following steps:
step S102, determining a first mapping relation between the three-dimensional points of the current frame depth map and the pixel points of the texture map, and a first corresponding relation between identification points in different texture maps;
step S104, determining a first relative position relation between the current frame depth map and other frame depth maps according to the first mapping relation and the first corresponding relation, wherein the other frame depth maps are different from the current frame depth map;
and step S106, based on the first relative position relation, performing three-dimensional stitching on the three-dimensional points in the current frame depth map and the other frame depth maps.
It should be noted that the embodiments of the present application may be, but are not limited to being, applied in the field of three-dimensional scanning based on structured light, especially in the field of handheld scanners, e.g., for consumer-grade scanning, intraoral dental scanning, etc. In the field of handheld scanners, the method can be used in three-dimensional scanning schemes both with and without mark points. By performing two-dimensional rough matching with the texture information of the texture map, the ambiguity that arises on symmetric objects when relying purely on three-dimensional ICP stitching can be reduced to the greatest extent, thereby improving the accuracy of three-dimensional stitching.
Optionally, the three-dimensional points are the three-dimensional points of the current frame depth map obtained by three-dimensional reconstruction based on a depth camera. The identification point comprises at least one of the following: corner points and mark points, where a corner point lies at a location in the texture map whose intensity varies strongly in both the x and y directions, and a mark point is a manually placed marker. The other frame depth map may be, but is not limited to, the depth map of the frame preceding the current frame, the depth map of the frame following the current frame, and so on.
In an alternative embodiment, the depth camera and the texture camera, both of which are cameras in the handheld scanner, may be calibrated in advance. Based on the calibrated relative positional relationship between the depth camera and the texture camera, the three-dimensional points are projected onto the texture map acquired by the texture camera, and the first mapping relation between the three-dimensional points and the pixel points is obtained by mapping each three-dimensional point to a pixel point on the texture map one by one. It should be noted that the three-dimensional points and the pixel points may be expressed as coordinates; for example, the mapping relation between a three-dimensional point and a pixel point may be, but is not limited to being, understood as a mapping between three-dimensional coordinates and pixel coordinates.
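For illustration, this projection step can be sketched in a few lines of Python. This is a minimal sketch, not the patent's implementation: the intrinsic matrix K_tex and the depth-to-texture extrinsics (R, t) are assumed inputs obtained from the calibration, and all names are illustrative.

```python
import numpy as np

def project_to_texture(points_3d, R, t, K_tex):
    """Map Nx3 depth-camera points to Nx2 texture-map pixel coordinates."""
    # Transform into the texture camera frame using the second relative
    # position relation (R, t) from the calibration.
    cam = points_3d @ R.T + t              # (N, 3)
    # Perspective division onto the normalized image plane.
    uv = cam[:, :2] / cam[:, 2:3]          # (N, 2)
    # Apply the texture-camera intrinsics: u = fx*x + cx, v = fy*y + cy.
    f = np.array([K_tex[0, 0], K_tex[1, 1]])
    c = np.array([K_tex[0, 2], K_tex[1, 2]])
    return uv * f + c                      # row i pairs 3D point i with a pixel
```

The returned array pairs each three-dimensional point with a pixel coordinate, which is exactly the one-by-one first mapping relation described above.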
In the above alternative embodiment, in the case where the identified point is a corner point, a point-to-point matching is required for the corner points extracted from the different texture maps to determine a first correspondence between the corner points in the different texture maps.
In this embodiment of the present application, each corner in each texture map corresponds to a depth value, where the depth value refers to the z value of the three-dimensional point (x, y, z) measured by the depth camera at that corner.
In the above optional embodiment, since each corner in each texture map corresponds to a depth value, a third mapping relationship between the corner and the three-dimensional point may be determined according to the depth value of each corner and the depth value of the three-dimensional point.
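As a concrete illustration of this step, a corner pixel with a known depth value can be back-projected into a three-dimensional point under the usual pinhole model. This is a hedged sketch: the 3x3 intrinsic matrix K is an assumed input, and the patent does not prescribe this exact formulation.

```python
import numpy as np

def backproject(u, v, depth, K):
    """Recover the 3D point (x, y, z) behind pixel (u, v) from its depth value."""
    x = (u - K[0, 2]) * depth / K[0, 0]   # undo u = fx*x/z + cx with z = depth
    y = (v - K[1, 2]) * depth / K[1, 1]   # undo v = fy*y/z + cy
    return np.array([x, y, depth])
```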
In another alternative embodiment, in the embodiment of the present application, the first correspondence may be obtained by performing point-to-point matching on the identification points in different texture maps through, but not limited to, a RANSAC algorithm.
It should be noted that, as an alternative embodiment, the present application may, but is not limited to, use a RANSAC matching algorithm to accurately match the unordered corner points on two or more different texture maps. The basic idea of the RANSAC matching algorithm is to randomly select a small subset of the data points, fit a model to that subset, check how many of the remaining points are consistent with the fitted model, and iterate this process until the model with the largest consensus set has been found with high probability. The matching operation of the RANSAC algorithm yields the first corresponding relation between corner points in different texture maps; the first relative position relation between the current frame depth map and the other frame depth maps is then obtained based on the first mapping relation and the first corresponding relation, and three-dimensional stitching of the three-dimensional points in the current frame depth map and the other frame depth maps is performed based on the first relative position relation.
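A minimal sketch of such a matching step follows, combining descriptor matching with RANSAC-based geometric verification via OpenCV. The patent does not fix a specific implementation; the use of binary (ORB-style) descriptors and a fundamental-matrix model here is an assumption for illustration.

```python
import cv2
import numpy as np

def match_identification_points(pts1, desc1, pts2, desc2):
    """Return index pairs (i, j) forming the 'first corresponding relation'."""
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(desc1, desc2)
    if len(matches) < 8:                   # the model below needs 8+ points
        return []
    src = np.float32([pts1[m.queryIdx] for m in matches])
    dst = np.float32([pts2[m.trainIdx] for m in matches])
    # RANSAC rejects mismatches inconsistent with a common epipolar geometry.
    _, mask = cv2.findFundamentalMat(src, dst, cv2.FM_RANSAC, 3.0, 0.99)
    if mask is None:
        return []
    return [(m.queryIdx, m.trainIdx)
            for m, keep in zip(matches, mask.ravel()) if keep]
```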
In the embodiment of the present application, when a handheld scanner performs three-dimensional scanning on an object without mark points, the texture information in the texture map is combined with the frame images acquired by the depth camera, so that the texture information of the scanned object assists the three-dimensional stitching. This alleviates the stitching failures easily caused by planar, featureless, or symmetric scanned objects, and can significantly improve the accuracy and fluency of the handheld scanner.
In the embodiment of the invention, a first mapping relation between the three-dimensional points of the current frame depth map and the pixel points of the texture map, and a first corresponding relation between identification points in different texture maps, are determined; a first relative position relation between the current frame depth map and other frame depth maps is determined according to the first mapping relation and the first corresponding relation, where the other frame depth maps are depth maps different from the current frame depth map; and three-dimensional stitching is performed on the three-dimensional points in the current frame depth map and the other frame depth maps based on the first relative position relation. This achieves the purpose of improving the accuracy of three-dimensional stitching, attains the technical effects of reducing the number of three-dimensional stitching failures and improving the smoothness of scanning, and thereby solves the technical problem of the high stitching failure rate of the three-dimensional stitching schemes adopted in the prior art.
In an optional embodiment, in a case where the identification point is the corner point, before determining the second mapping relationship between the identification point and the pixel point, the method further includes: extracting each of the corner points in the texture map by a target extraction algorithm, wherein the target extraction algorithm comprises at least one of the following: harris corner extraction algorithm, SIFT algorithm, SURF feature extraction algorithm, FAST corner detection algorithm, AGAST corner detection algorithm, BRISK feature extraction algorithm, FREAK feature extraction algorithm, ORB feature extraction algorithm.
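Any of the listed detectors can serve as the target extraction algorithm. The sketch below uses ORB, one of the named options, purely as an example; the choice of ORB and its parameter values are assumptions, not a requirement of the method.

```python
import cv2

def extract_corners(texture_map_gray):
    """Extract corner points and descriptors from a grayscale texture map."""
    orb = cv2.ORB_create(nfeatures=1000)   # one of the listed algorithms
    keypoints, descriptors = orb.detectAndCompute(texture_map_gray, None)
    pts = [kp.pt for kp in keypoints]      # (u, v) positions of corner points
    return pts, descriptors
```

The returned positions and descriptors feed directly into the RANSAC matching sketch above.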
In an alternative embodiment, fig. 2 is a flowchart of an alternative three-dimensional stitching method according to an embodiment of the present invention, where determining, as shown in fig. 2, a first mapping relationship between a three-dimensional point of a depth map of a current frame and a pixel point of a texture map includes:
step S202, determining a second relative position relation between the depth camera and the texture camera through calibrating the depth camera and the texture camera;
step S204, performing three-dimensional reconstruction based on the depth camera to obtain three-dimensional points of the current frame depth map;
step S206, based on the second relative positional relationship, obtaining the first mapping relationship by projecting the three-dimensional points onto the texture map acquired by the texture camera.
In the above optional embodiment, the second relative position relationship between the depth camera and the texture camera is determined by calibrating the two cameras; three-dimensional reconstruction is performed based on the depth camera to obtain the three-dimensional points of the current frame depth map; and the first mapping relationship is then obtained by projecting the three-dimensional points onto the texture map collected by the texture camera based on the second relative position relationship.
In an alternative embodiment, fig. 3 is a flowchart of an alternative three-dimensional stitching method according to an embodiment of the present invention, and as shown in fig. 3, determining, according to the first mapping relationship and the first correspondence relationship, a first relative positional relationship between the current frame depth map and other frame depth maps includes:
step S302, determining a second mapping relation between the identification points and the pixel points;
step S304, determining a third mapping relation between the identification point and the three-dimensional point according to the first mapping relation and the second mapping relation;
step S306, determining the first relative position relationship according to the third mapping relationship and the first correspondence relationship.
Optionally, the identification point may be a corner point or a mark point in the texture map, and a mapping relationship in coordinate data exists between the mark point and the pixel points of the texture map. In this embodiment, since a second mapping relationship exists between the mark point and the pixel point, and a first mapping relationship exists between the three-dimensional point and the pixel point, a third mapping relationship between the mark point and the three-dimensional point can be uniquely determined according to the first mapping relationship and the second mapping relationship, and the first relative position relationship between the current frame depth map and the other frame depth maps can then be determined according to the third mapping relationship and the first corresponding relationship.
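The chaining of these relations can be sketched with plain dictionaries. This is an illustrative simplification, not the patent's data layout: keying on rounded pixel coordinates and the container names are assumptions.

```python
def chain_mappings(id_to_pixel, pixel_to_3d, correspondence_12):
    """id_to_pixel[k]: {point_id: (u, v)} for frame k (second mapping);
    pixel_to_3d[k]: {(u, v): (x, y, z)} for frame k (first mapping);
    correspondence_12: (id_in_frame0, id_in_frame1) pairs (first correspondence).
    Returns matched 3D point pairs, i.e. the second corresponding relation."""
    pairs = []
    for i0, i1 in correspondence_12:
        px0 = tuple(round(c) for c in id_to_pixel[0][i0])
        px1 = tuple(round(c) for c in id_to_pixel[1][i1])
        p0 = pixel_to_3d[0].get(px0)       # third mapping, current frame
        p1 = pixel_to_3d[1].get(px1)       # third mapping, other frame
        if p0 is not None and p1 is not None:
            pairs.append((p0, p1))
    return pairs
```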
As an optional embodiment, fig. 4 is a flowchart of an optional three-dimensional stitching method according to an embodiment of the present invention, and as shown in fig. 4, determining the first relative positional relationship according to the third mapping relationship and the first correspondence relationship includes:
step S402, determining a second corresponding relation between the three-dimensional points in the current frame depth map and the other frame depth maps according to the third mapping relation and the first corresponding relation;
step S404, determining the first relative position relationship based on the second corresponding relationship.
In this embodiment of the present application, the second correspondence between the three-dimensional points in the current frame depth map and the other frame depth maps is determined from the third mapping relationship and the first corresponding relationship, and the first relative positional relationship between the current frame depth map and the other frame depth maps may then be obtained by, but not limited to, a quaternion method.
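The patent names a quaternion method for this absolute-orientation problem; the SVD-based Kabsch solution sketched below solves the same least-squares problem from matched 3D point pairs and is shown as an equivalent, easier-to-read alternative, not as the patent's prescribed algorithm.

```python
import numpy as np

def rigid_transform_from_pairs(P, Q):
    """Find R, t minimizing sum ||R @ P[i] + t - Q[i]||^2 for Nx3 arrays P, Q."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t                               # the first relative position relation
```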
In an embodiment of the present application, there is also an optional embodiment, based on the first relative positional relationship, performing three-dimensional stitching on three-dimensional points in the current frame depth map and the other frame depth maps, including:
step S502, performing initial three-dimensional stitching on three-dimensional points in the current frame depth map and the other frame depth maps based on the first relative position relationship;
and step S504, on the basis of the initial three-dimensional stitching, adopting an iterative nearest point algorithm to accurately stitch three-dimensional points in the current frame depth map and the other frame depth maps.
Optionally, the first relative position relationship may be, but not limited to, an initial position relationship, after performing initial three-dimensional stitching on three-dimensional points in the current frame depth map and other frame depth maps based on the initial position relationship, further performing ICP stitching by using a multi-round iterative closest point algorithm on the basis of the initial three-dimensional stitching, so as to perform accurate three-dimensional stitching on the three-dimensional points in the current frame depth map and other frame depth maps.
Alternatively, in this embodiment of the present application, the first relative positional relationship may be, but is not limited to being, understood as a rotation-translation matrix. By using the rotation-translation matrix to transform the rear frame into the coordinate system of the three-dimensional points of the front frame, the coordinates are unified and an initial relative-position result for the three-dimensional stitching is obtained. Since this result may contain error, ICP iterations are then used to perform a more accurate position correction, so that the relative position is determined more precisely.
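A minimal point-to-point ICP refinement sketch follows: starting from the initial pose (R, t) obtained above, it alternates nearest-neighbour matching and pose re-estimation, reusing rigid_transform_from_pairs from the earlier sketch. SciPy's cKDTree handles the nearest-point search; a production scanner would typically add distance gating, outlier rejection, and a point-to-plane error, none of which the patent spells out.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_refine(src, dst, R, t, iterations=20):
    """Refine (R, t) so that src (Nx3) aligns with dst (Mx3)."""
    tree = cKDTree(dst)
    for _ in range(iterations):
        moved = src @ R.T + t                 # apply the current pose estimate
        _, idx = tree.query(moved)            # closest dst point per src point
        R, t = rigid_transform_from_pairs(src, dst[idx])  # re-fit the pose
    return R, t
```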
Example 2
According to an embodiment of the present invention, there is further provided an apparatus embodiment for implementing the three-dimensional stitching method, and fig. 5 is a schematic structural diagram of a three-dimensional stitching apparatus according to an embodiment of the present invention, as shown in fig. 5, where the three-dimensional stitching apparatus includes: a first determination module 50, a second determination module 52, and a stitching module 54, wherein,
a first determining module 50, configured to determine a first mapping relationship between a three-dimensional point of the current frame depth map and a pixel point of the texture map, and a first correspondence relationship between identification points in different texture maps; a second determining module 52, configured to determine a first relative positional relationship between the current frame depth map and another frame depth map according to the first mapping relationship and the first correspondence relationship, where the other frame depth map is a depth map different from the current frame depth map; and a stitching module 54, configured to perform three-dimensional stitching on the three-dimensional points in the current frame depth map and the other frame depth maps based on the first relative positional relationship.
It should be noted that each of the above modules may be implemented by software or by hardware; in the latter case, for example, the above modules may all be located in the same processor, or may be located in different processors in any combination.
Here, it should be noted that the first determining module 50, the second determining module 52, and the stitching module 54 correspond to steps S102 to S106 in embodiment 1; the modules implement the same examples and application scenarios as the corresponding steps, but are not limited to the content disclosed in embodiment 1. It should also be noted that the above modules may run in a computer terminal as part of the apparatus.
It should be noted that, the optional or preferred implementation manner of this embodiment may be referred to the related description in embodiment 1, and will not be repeated here.
In an alternative embodiment, the first determining module includes: the determining unit is used for determining a second relative position relationship between the depth camera and the texture camera through calibrating the depth camera and the texture camera; the acquisition unit is used for carrying out three-dimensional reconstruction based on the depth camera and acquiring three-dimensional points of the current frame depth map; and a mapping unit, configured to obtain the first mapping relationship by projecting the three-dimensional point onto the texture map acquired by the texture camera based on the second relative positional relationship.
The three-dimensional stitching device may further include a processor and a memory, where the first determining module 50, the second determining module 52, the stitching module 54, and the like are stored as program units, and the processor executes the program units stored in the memory to implement corresponding functions.
The processor includes a kernel, and the kernel fetches the corresponding program unit from the memory; one or more kernels may be provided. The memory may include non-persistent memory, random access memory (RAM), and/or nonvolatile memory in computer-readable media, such as read-only memory (ROM) or flash memory (flash RAM); the memory includes at least one memory chip.
According to an embodiment of the present application, there is also provided a storage medium embodiment. Optionally, in this embodiment, the storage medium includes a stored program, and when the program runs, the device in which the storage medium is located is controlled to execute any one of the three-dimensional stitching methods described above.
Alternatively, in this embodiment, the storage medium may be located in any one of the computer terminals in the computer terminal group in the computer network, or in any one of the mobile terminals in the mobile terminal group, and the storage medium includes a stored program.
Optionally, the program controls the device in which the storage medium is located to perform the following functions when running: determining a first mapping relation between three-dimensional points of a depth map of a current frame and pixel points of a texture map and a first corresponding relation between identification points in different texture maps; determining a first relative position relationship between the current frame depth map and other frame depth maps according to the first mapping relationship and the first corresponding relationship, wherein the other frame depth maps are different from the current frame depth map; and based on the first relative position relation, performing three-dimensional stitching on three-dimensional points in the current frame depth map and the other frame depth maps.
Optionally, the program controls the device in which the storage medium is located to perform the following functions when running: determining a second relative positional relationship between the depth camera and the texture camera by calibrating the depth camera and the texture camera; performing three-dimensional reconstruction based on the depth camera to obtain three-dimensional points of the current frame depth map; based on the second relative positional relationship, the first mapping relationship is obtained by projecting the three-dimensional points onto the texture map acquired by the texture camera.
Optionally, the program controls the device in which the storage medium is located to perform the following functions when running: determining a second mapping relation between the identification points and the pixel points; determining a third mapping relation between the identification point and the three-dimensional point according to the first mapping relation and the second mapping relation; and determining the first relative position relationship according to the third mapping relationship and the first corresponding relationship.
Optionally, the program controls the device in which the storage medium is located to perform the following functions when running: determining a second corresponding relation between the three-dimensional points in the current frame depth map and the other frame depth maps according to the third mapping relation and the first corresponding relation; and determining the first relative position relationship based on the second corresponding relationship.
Optionally, the program controls the device in which the storage medium is located to perform the following functions when running: performing initial three-dimensional stitching on three-dimensional points in the current frame depth map and the other frame depth maps based on the first relative position relation; and on the basis of the initial three-dimensional stitching, adopting an iterative nearest point algorithm to accurately stitch three-dimensional points in the current frame depth map and the other frame depth maps.
Optionally, the program controls the device in which the storage medium is located to perform the following functions when running: extracting each of the corner points in the texture map by a target extraction algorithm, wherein the target extraction algorithm comprises at least one of the following: harris corner extraction algorithm, SIFT algorithm, SURF feature extraction algorithm, FAST corner detection algorithm, AGAST corner detection algorithm, BRISK feature extraction algorithm, FREAK feature extraction algorithm, ORB feature extraction algorithm.
According to an embodiment of the present application, there is also provided a processor embodiment. Optionally, in this embodiment, the processor is configured to run a program, where any one of the three-dimensional stitching methods is performed when the program runs.
The embodiment of the application provides equipment, which comprises a processor, a memory and a program stored on the memory and capable of running on the processor, wherein the processor executes the program to realize the following steps: determining a first mapping relation between three-dimensional points of a depth map of a current frame and pixel points of a texture map and a first corresponding relation between identification points in different texture maps; determining a first relative position relationship between the current frame depth map and other frame depth maps according to the first mapping relationship and the first corresponding relationship, wherein the other frame depth maps are different from the current frame depth map; and based on the first relative position relation, performing three-dimensional stitching on three-dimensional points in the current frame depth map and the other frame depth maps.
Optionally, when the processor executes the program, the processor may further determine a second relative positional relationship between the depth camera and the texture camera by calibrating the depth camera and the texture camera; performing three-dimensional reconstruction based on the depth camera to obtain three-dimensional points of the current frame depth map; based on the second relative positional relationship, the first mapping relationship is obtained by projecting the three-dimensional points onto the texture map acquired by the texture camera.
Optionally, when the processor executes the program, a second mapping relationship between the identification point and the pixel point may be determined; a third mapping relationship between the identification point and the three-dimensional point may be determined according to the first mapping relationship and the second mapping relationship; and the first relative position relationship may be determined according to the third mapping relationship and the first corresponding relationship.
Optionally, when the processor executes the program, a second correspondence between three-dimensional points in the current frame depth map and the other frame depth maps may be determined according to the third mapping relationship and the first correspondence; and determining the first relative position relationship based on the second corresponding relationship.
Optionally, when the processor executes the program, initial three-dimensional stitching may be performed on three-dimensional points in the current frame depth map and the other frame depth maps based on the first relative positional relationship; and on the basis of the initial three-dimensional stitching, adopting an iterative nearest point algorithm to accurately stitch three-dimensional points in the current frame depth map and the other frame depth maps.
Optionally, when the processor executes the program, each of the corner points in the texture map may be further extracted by a target extraction algorithm, where the target extraction algorithm includes at least one of: harris corner extraction algorithm, SIFT algorithm, SURF feature extraction algorithm, FAST corner detection algorithm, AGAST corner detection algorithm, BRISK feature extraction algorithm, FREAK feature extraction algorithm, ORB feature extraction algorithm.
The present application also provides a computer program product adapted to perform, when executed on a data processing device, a program initialized with the method steps of: determining a first mapping relation between three-dimensional points of a depth map of a current frame and pixel points of a texture map and a first corresponding relation between identification points in different texture maps; determining a first relative position relationship between the current frame depth map and other frame depth maps according to the first mapping relationship and the first corresponding relationship, wherein the other frame depth maps are different from the current frame depth map; and based on the first relative position relation, performing three-dimensional stitching on three-dimensional points in the current frame depth map and the other frame depth maps.
Optionally, when the computer program product executes the program, the second relative positional relationship between the depth camera and the texture camera may be determined by calibrating the depth camera and the texture camera; performing three-dimensional reconstruction based on the depth camera to obtain three-dimensional points of the current frame depth map; based on the second relative positional relationship, the first mapping relationship is obtained by projecting the three-dimensional points onto the texture map acquired by the texture camera.
Optionally, when the computer program product executes the program, a second mapping relationship between the identification point and the pixel point may be determined; determining a third mapping relation between the identification point and the three-dimensional point according to the first mapping relation and the second mapping relation; and determining the first relative position relationship according to the third mapping relationship and the first corresponding relationship.
Optionally, when the computer program product executes the program, a second correspondence between three-dimensional points in the current frame depth map and the other frame depth maps may be determined according to the third mapping relationship and the first correspondence; and determining the first relative position relationship based on the second corresponding relationship.
Optionally, when the computer program product executes the program, the initial three-dimensional stitching may be performed on the three-dimensional points in the current frame depth map and the other frame depth maps based on the first relative positional relationship; and on the basis of the initial three-dimensional stitching, adopting an iterative nearest point algorithm to accurately stitch three-dimensional points in the current frame depth map and the other frame depth maps.
Optionally, when the computer program product executes the program, each of the corner points in the texture map may be further extracted by a target extraction algorithm, where the target extraction algorithm includes at least one of: Harris corner extraction algorithm, SIFT algorithm, SURF feature extraction algorithm, FAST corner detection algorithm, AGAST corner detection algorithm, BRISK feature extraction algorithm, FREAK feature extraction algorithm, ORB feature extraction algorithm.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
In the foregoing embodiments of the present invention, the descriptions of the embodiments are emphasized, and for a portion of this disclosure that is not described in detail in this embodiment, reference is made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technology content may be implemented in other manners. The above-described embodiments of the apparatus are merely exemplary, and the division of the units, for example, may be a logic function division, and may be implemented in another manner, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some interfaces, units or modules, or may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or various other media capable of storing program code.
The foregoing is merely a preferred embodiment of the present invention. It should be noted that those skilled in the art may make several improvements and modifications without departing from the principles of the present invention, and such improvements and modifications shall also fall within the scope of the present invention.

Claims (9)

1. A three-dimensional stitching method, comprising:
determining a first mapping relation between three-dimensional points of a depth map of a current frame and pixel points of a texture map and a first corresponding relation between identification points in different texture maps;
determining a first relative position relationship between the current frame depth map and other frame depth maps according to the first mapping relationship and the first corresponding relationship, wherein the other frame depth maps are depth maps different from the current frame depth map;
based on the first relative position relation, performing three-dimensional stitching on three-dimensional points in the current frame depth map and the other frame depth maps;
wherein determining the first relative position relationship between the current frame depth map and the other frame depth maps according to the first mapping relationship and the first corresponding relationship comprises: determining a second mapping relationship between the identification point and the pixel point; determining a third mapping relationship between the identification point and the three-dimensional point according to the first mapping relationship and the second mapping relationship; and determining the first relative position relationship according to the third mapping relationship and the first corresponding relationship.
2. The method of claim 1, wherein determining a first mapping relationship between three-dimensional points of the current frame depth map and pixel points of the texture map comprises:
determining a second relative positional relationship between the depth camera and the texture camera by calibrating the depth camera and the texture camera;
performing three-dimensional reconstruction based on the depth camera to obtain three-dimensional points of the current frame depth map;
and based on the second relative position relationship, the first mapping relationship is obtained by projecting the three-dimensional points onto the texture map acquired by the texture camera.
3. The method of claim 1, wherein determining the first relative positional relationship in accordance with the third mapping relationship and the first correspondence relationship comprises:
determining a second corresponding relation between three-dimensional points in the current frame depth map and the other frame depth maps according to the third mapping relation and the first corresponding relation;
and determining the first relative position relationship based on the second corresponding relationship.
4. The method of claim 1, wherein three-dimensionally stitching the three-dimensional points in the current frame depth map and the other frame depth maps based on the first relative positional relationship, comprises:
performing initial three-dimensional stitching on three-dimensional points in the current frame depth map and the other frame depth maps based on the first relative position relation;
and on the basis of the initial three-dimensional stitching, adopting an iterative nearest point algorithm to accurately stitch three-dimensional points in the current frame depth map and the other frame depth maps.
5. The method of claim 1, wherein the identification point comprises at least one of the following: a corner point and a mark point; and wherein, in a case where the identification point is the corner point, before determining the second mapping relationship between the identification point and the pixel point, the method further comprises:
extracting each of the corner points in the texture map by a target extraction algorithm, wherein the target extraction algorithm comprises at least one of the following: harris corner extraction algorithm, SIFT algorithm, SURF feature extraction algorithm, FAST corner detection algorithm, AGAST corner detection algorithm, BRISK feature extraction algorithm, FREAK feature extraction algorithm, ORB feature extraction algorithm.
6. The method according to any one of claims 1 to 5, wherein the first correspondence is obtained by performing point-to-point matching on the identified points in different texture maps by means of a RANSAC algorithm.
7. A three-dimensional splicing device, comprising:
the first determining module is used for determining a first mapping relation between the three-dimensional point of the current frame depth map and the pixel point of the texture map and a first corresponding relation between the identification points in different texture maps;
the second determining module is configured to determine a first relative positional relationship between the current frame depth map and other frame depth maps according to the first mapping relationship and the first correspondence relationship, where the other frame depth maps are depth maps different from the current frame depth map;
the splicing module is used for carrying out three-dimensional splicing on the three-dimensional points in the current frame depth map and the other frame depth maps based on the first relative position relation;
the second determining module is further configured to determine a second mapping relationship between the identification point and the pixel point; determining a third mapping relation between the identification point and the three-dimensional point according to the first mapping relation and the second mapping relation; and determining the first relative position relation according to the third mapping relation and the first corresponding relation.
8. A storage medium comprising a stored program, wherein the program, when run, controls a device in which the storage medium is located to perform the three-dimensional stitching method of any one of claims 1 to 6.
9. A processor, characterized in that the processor is configured to run a program, wherein the program when run performs the three-dimensional stitching method according to any of claims 1-6.
CN201811163271.3A 2018-09-30 2018-09-30 Three-dimensional splicing method and device Active CN109472741B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811163271.3A CN109472741B (en) 2018-09-30 2018-09-30 Three-dimensional splicing method and device

Publications (2)

Publication Number    Publication Date
CN109472741A (en)     2019-03-15
CN109472741B (en)     2023-05-30

Family

ID=65663294

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811163271.3A Active CN109472741B (en) 2018-09-30 2018-09-30 Three-dimensional splicing method and device

Country Status (1)

Country Link
CN (1) CN109472741B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110096922B (en) * 2019-05-08 2022-07-12 深圳市易尚展示股份有限公司 Method and device for processing coding points, computer equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102945565A (en) * 2012-10-18 2013-02-27 深圳大学 Three-dimensional photorealistic reconstruction method and system for objects and electronic device
CN103886593A (en) * 2014-03-07 2014-06-25 华侨大学 Method for detecting hook face circular hole based on three-dimensional point cloud
CN104299211A (en) * 2014-09-25 2015-01-21 周翔 Free-moving type three-dimensional scanning method
CN107680039A (en) * 2017-09-22 2018-02-09 武汉中观自动化科技有限公司 A kind of point cloud method and system based on white light scanning instrument
CN108267097A (en) * 2017-07-17 2018-07-10 杭州先临三维科技股份有限公司 Three-dimensional reconstruction method and device based on binocular three-dimensional scanning system

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000348199A (en) * 1999-06-04 2000-12-15 Minolta Co Ltd Method and device for texture mapping
JP4680104B2 (en) * 2006-03-22 2011-05-11 日本電信電話株式会社 Panorama image creation method
US8463073B2 (en) * 2010-11-29 2013-06-11 Microsoft Corporation Robust recovery of transform invariant low-rank textures
CN103198523B (en) * 2013-04-26 2016-09-21 清华大学 A kind of three-dimensional non-rigid body reconstruction method based on many depth maps and system
CN103761765B (en) * 2013-12-25 2017-12-19 浙江慧谷信息技术有限公司 Three-dimensional object model texture mapping method based on mapped boundaries optimization
CN104008571B (en) * 2014-06-12 2017-01-18 深圳奥比中光科技有限公司 Human body model obtaining method and network virtual fitting system based on depth camera
CN105279508A (en) * 2015-09-08 2016-01-27 哈尔滨工程大学 Medical image classification method based on KAP digraph model
CN106568394A (en) * 2015-10-09 2017-04-19 西安知象光电科技有限公司 Hand-held three-dimensional real-time scanning method
CN106952331B (en) * 2017-02-28 2020-12-08 深圳信息职业技术学院 Texture mapping method and device based on three-dimensional model
CN108694740A (en) * 2017-03-06 2018-10-23 索尼公司 Information processing equipment, information processing method and user equipment
CN107784687A (en) * 2017-09-22 2018-03-09 武汉中观自动化科技有限公司 A kind of three-dimensional rebuilding method and system based on white light scanning instrument
CN108470323B (en) * 2018-03-13 2020-07-31 京东方科技集团股份有限公司 Image splicing method, computer equipment and display device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant