CN109671151B - Three-dimensional data processing method and device, storage medium and processor - Google Patents

Three-dimensional data processing method and device, storage medium and processor

Info

Publication number
CN109671151B
CN109671151B
Authority
CN
China
Prior art keywords
data
dental model
dimensional
dimensional scanner
acquiring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811426997.1A
Other languages
Chinese (zh)
Other versions
CN109671151A (en)
Inventor
马超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shining 3D Technology Co Ltd
Original Assignee
Shining 3D Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shining 3D Technology Co Ltd filed Critical Shining 3D Technology Co Ltd
Priority to CN201811426997.1A priority Critical patent/CN109671151B/en
Publication of CN109671151A publication Critical patent/CN109671151A/en
Application granted granted Critical
Publication of CN109671151B publication Critical patent/CN109671151B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/10 Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/35 Determination of transform parameters for the alignment of images, i.e. image registration using statistical methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30036 Dental; Teeth

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Theoretical Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Graphics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a three-dimensional data processing method and device, a storage medium and a processor. The method includes the following steps: acquiring first data of a dental model, wherein the first data are three-dimensional data of the dental model acquired by a static photographing time-sequence phase-matching three-dimensional scanner; determining data to be filled in the first data, wherein the data to be filled are data corresponding to a region not scanned by the static photographing time-sequence phase-matching three-dimensional scanner; converting the first data into a coordinate system where a dynamic video streaming three-dimensional scanner is located; acquiring second data of the dental model, wherein the second data are the data to be filled in the first data, acquired by the dynamic video streaming three-dimensional scanner; and fusing the second data to the first data to obtain target data of the dental model. The method solves the technical problem in the related art that three-dimensional data of a dental model cannot be obtained both quickly and completely.

Description

Three-dimensional data processing method and device, storage medium and processor
Technical Field
The invention relates to the field of three-dimensional data processing, in particular to a three-dimensional data processing method and device, a storage medium and a processor.
Background
In the prior art, three-dimensional data of a dental model can be obtained with a static photographing time-sequence phase-matching three-dimensional scanner (a dental model scanner), which supports one-click, automatic and fast acquisition. However, the fixed angles between its cameras leave many blind spots on the dental model, and the scan is particularly likely to be incomplete in the case of consecutive tooth preparations or when an impression is scanned directly. Three-dimensional data of a dental model can also be obtained with a dynamic video streaming intraoral three-dimensional scanner; because its probe has few blind spots, it can acquire essentially the full tooth surface from multiple angles, including hard-to-reach regions such as interproximal gaps. Its field of view, however, is much smaller than that of the dental model scanner, so scanning a full-arch dental model requires many passes from multiple angles. In short, the three-dimensional data obtained by the dental model scanner tend to be incomplete, while the intraoral scanner involves complex manual operation and increases the scanning time.
No effective solution has yet been proposed for the technical problem in the related art that three-dimensional data of a dental model cannot be obtained both quickly and completely.
Disclosure of Invention
The embodiments of the invention provide a three-dimensional data processing method and device, a storage medium and a processor, which at least solve the technical problem in the related art that complete three-dimensional data of a dental model cannot be acquired.
According to an aspect of the embodiments of the present invention, there is provided a method for processing three-dimensional data, including: acquiring first data of a dental model, wherein the first data are three-dimensional data of the dental model acquired by a static photographing time-sequence phase-matching three-dimensional scanner; determining data to be filled in the first data, wherein the data to be filled are data corresponding to a region not scanned by the static photographing time-sequence phase-matching three-dimensional scanner; converting the first data into a coordinate system where a dynamic video streaming three-dimensional scanner is located; acquiring second data of the dental model, wherein the second data are the data to be filled in the first data, acquired by the dynamic video streaming three-dimensional scanner; and fusing the first data and the second data to obtain target data of the dental model.
Further, fusing the first data and the second data to obtain the target data of the dental model includes: calculating barycentric coordinates of the first data; converting the barycentric coordinates of the first data to the coordinate origin where the second data are located by using a spatial transformation relation; acquiring the exact center of the calibration space of the dynamic video streaming three-dimensional scanner; and moving the barycentric coordinates of the first data to the exact center of the calibration space, thereby determining the target data of the dental model in the dynamic video streaming three-dimensional scanner.
Further, fusing the first data and the second data to obtain the target data of the dental model includes: calculating geometric key-feature descriptors of the first data and of the second data; matching the features based on the descriptors using a RANSAC method; and acquiring the three-dimensional transformation relation between the first data and the second data to obtain the target data of the dental model.
Further, after the first data and the second data are fused to obtain the target data of the dental model, the method includes: fusing the first data into the second data by a point cloud fusion technique and deleting the overlapped data.
According to another aspect of the embodiments of the present invention, there is provided a method for processing three-dimensional data, including: acquiring different data generated by scanning a dental model with different scanners, wherein the data are three-dimensional data of the dental model; and obtaining target data of the dental model based on the different data.
According to another aspect of the embodiments of the present invention, there is also provided a three-dimensional data processing apparatus, including: a first acquisition unit, used for acquiring first data of the dental model, wherein the first data are three-dimensional data of the dental model acquired by a static photographing time-sequence phase-matching three-dimensional scanner; a determining unit, used for determining data to be filled in the first data, wherein the data to be filled are data corresponding to a region not scanned by the static photographing time-sequence phase-matching three-dimensional scanner; a conversion unit, used for converting the first data into a coordinate system where a dynamic video streaming three-dimensional scanner is located; a second acquisition unit, used for acquiring second data of the dental model, wherein the second data are the data to be filled in the first data, acquired by the dynamic video streaming three-dimensional scanner; and a fusion unit, used for fusing the first data and the second data to obtain target data of the dental model.
Further, the fusion unit includes: a first calculation module, used for calculating barycentric coordinates of the first data; a conversion module, used for converting the barycentric coordinates of the first data to the coordinate origin where the second data are located by using a spatial transformation relation; an acquisition module, used for acquiring the exact center of the calibration space of the dynamic video streaming three-dimensional scanner; and a first determining module, used for moving the barycentric coordinates of the first data to the exact center of the calibration space and determining the target data of the dental model in the dynamic video streaming three-dimensional scanner.
Further, the fusion unit includes: a second calculation module, used for calculating geometric key-feature descriptors of the first data and of the second data; a matching module, used for matching the features based on the descriptors using a RANSAC (Random Sample Consensus) method; and an acquisition module, used for acquiring the three-dimensional transformation relation between the first data and the second data to obtain the target data of the dental model.
Further, the apparatus includes: a deleting unit, used for, after the first data and the second data are fused to obtain the target data of the dental model, fusing the first data into the second data by a point cloud fusion technique and deleting the overlapped data.
According to another aspect of the embodiments of the present invention, there is also provided a three-dimensional data processing apparatus, including: a third acquisition unit, used for acquiring different data generated by scanning a dental model with different scanners, wherein the data are three-dimensional data of the dental model; and a processing unit, used for obtaining target data of the dental model based on the different data.
According to another aspect of the embodiments of the present invention, there is also provided a storage medium including a stored program, wherein the program, when run, performs the method for processing three-dimensional data according to any one of the above.
According to another aspect of the embodiments of the present invention, there is also provided a processor for running a program, wherein the program, when run, performs the method for processing three-dimensional data according to any one of the above.
In the embodiments of the invention, first data of the dental model are acquired by a static photographing time-sequence phase-matching three-dimensional scanner; data to be filled in the first data are determined, wherein the data to be filled are data corresponding to a region not scanned by the static photographing time-sequence phase-matching three-dimensional scanner; the first data are converted into a coordinate system where a dynamic video streaming three-dimensional scanner is located; and second data of the dental model are acquired, wherein the second data are the data to be filled in the first data, acquired by the dynamic video streaming three-dimensional scanner. This achieves the aim of obtaining complete dental model data by fusing data of the dental model acquired by different devices, and thereby solves the technical problem in the related art that three-dimensional data of a dental model cannot be obtained both quickly and completely.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiments of the invention and together with the description serve to explain the invention and do not constitute a limitation on the invention. In the drawings:
FIG. 1 is a flow chart of a method of processing three-dimensional data according to an embodiment of the present invention;
FIG. 2 is a flow chart of another method of processing three-dimensional data according to an embodiment of the invention;
FIG. 3 is a schematic diagram of a method of processing dental model data according to a preferred embodiment of the present invention;
FIG. 4 is a schematic diagram of a processing device for three-dimensional data according to an embodiment of the present invention; and
FIG. 5 is a schematic diagram of another apparatus for processing three-dimensional data according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art may better understand the present invention, the technical solutions in the embodiments of the present invention will be described below clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In accordance with an embodiment of the present invention, an embodiment of a method for processing three-dimensional data is provided. It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system, for example as a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from the one here.
FIG. 1 is a flowchart of a method for processing three-dimensional data according to an embodiment of the present invention. As shown in FIG. 1, the method for processing three-dimensional data includes the following steps:
step S102, first data of the dental model are obtained, wherein the first data are three-dimensional data of the dental model are obtained through a three-dimensional scanner with a static photographing time sequence and a phase matching mode.
Step S104, determining data to be filled in the first data, wherein the data to be filled is data corresponding to an area which is not scanned by the three-dimensional scanner in a static photographing type time sequence phase matching type.
And S106, converting the first data into a coordinate system of the three-dimensional scanner of the dynamic video stream.
Step S108, second data of the dental model are obtained, wherein the second data are data to be filled in corresponding first data obtained through a dynamic video streaming three-dimensional scanner.
It should be noted that the first data in step S102 and the second data in step S108 may serve as the data to be matched with each other.
Step S110, the first data and the second data are fused to obtain target data of the dental model.
Fusing the first data and the second data to obtain the target data of the dental model may include: calculating barycentric coordinates of the first data; converting the barycentric coordinates of the first data to the origin of the camera coordinate system of the intraoral three-dimensional scanner by using a spatial transformation relation; acquiring the exact center of the calibration space of the intraoral three-dimensional scanner; and moving the barycentric coordinates of the first data to the exact center of the calibration space of the intraoral three-dimensional scanner, thereby determining third data of the dental model in the intraoral three-dimensional scanner.
It should be noted that fusing the first data and the second data to obtain the target data of the dental model may further include: calculating three-dimensional geometric feature descriptors of the dental model from the three-dimensional shape features of the point cloud; and matching the correspondences between the descriptors with a preset algorithm to determine the three-dimensional transformation relation for the target data of the dental model.
Through the above steps, first data of the dental model are acquired, wherein the first data are three-dimensional data of the dental model acquired by a static photographing time-sequence phase-matching three-dimensional scanner; data to be filled in the first data are determined, wherein the data to be filled are data corresponding to a region not scanned by the static photographing time-sequence phase-matching three-dimensional scanner; the first data are converted into a coordinate system where a dynamic video streaming three-dimensional scanner is located; second data of the dental model are acquired, wherein the second data are the data to be filled in the first data, acquired by the dynamic video streaming three-dimensional scanner; and the first data and the second data are fused to obtain target data of the dental model. This achieves the aim of obtaining complete dental model data by fusing data of the dental model acquired by different devices, and solves the technical problem in the related art that three-dimensional data of a dental model cannot be obtained both quickly and completely.
As an alternative embodiment, fusing the first data with the second data to obtain the target data of the dental model may include: calculating geometric key-feature descriptors of the first data and of the second data; matching the features based on the descriptors using a RANSAC method; and acquiring the three-dimensional transformation relation between the first data and the second data to obtain the target data of the dental model.
Specifically, the first data and the second data are acquired by different scanning devices, feature factors of the data are calculated, and a three-dimensional transformation between the first data and the second data is obtained by matching the descriptors, yielding the final complete dental model data.
As an alternative embodiment, the method further includes: after the first data and the second data are fused to obtain the target data of the dental model, fusing the first data into the second data by a point cloud fusion technique and deleting the overlapped data.
Through the above alternative embodiments, the repeated data can be deleted to obtain complete dental model data, and complete, hole-free three-dimensional data of the dental model can be obtained by supplementing from multiple angles.
In accordance with an embodiment of the present invention, another embodiment of a method for processing three-dimensional data is provided. It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system, for example as a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from the one here.
FIG. 2 is a flowchart of another method for processing three-dimensional data according to an embodiment of the present invention. As shown in FIG. 2, the method for processing three-dimensional data includes the following steps:
step S202, different data generated by scanning dental models by different scanners are obtained, wherein the data are three-dimensional data of the dental models.
Step S204, target data of the dental model are obtained based on the different data.
Through the above steps, different data generated by scanning the dental model with different scanners are acquired, wherein the data are three-dimensional data of the dental model, and target data of the dental model are obtained based on the different data. This achieves the aim of obtaining complete dental model data by fusing data of the dental model acquired by different devices, and solves the technical problem in the related art that complete three-dimensional data of a dental model cannot be obtained.
The invention also provides a preferred embodiment, which provides a method for processing dental model data. FIG. 3 shows a schematic diagram of this method. The specific method is as follows.
Key points on the edge of the dental model obtained by the dental model three-dimensional scanner are extracted, and local feature descriptions are computed for them. Key points of the dental model data obtained by the intraoral three-dimensional scanner are likewise extracted, and their local feature descriptions are found. Feature matching is then performed; during matching, the matched key points are grouped, which improves matching accuracy. After the data models have been matched, the pose of the current model is verified, and scene segmentation is performed to obtain part of the features in the dental model.
The three-dimensional data of the dental model may be processed as follows.
1. Convert the dental model three-dimensional scanner data into the camera coordinate system of the intraoral three-dimensional scanner.
The three-dimensional data acquired by the dental model three-dimensional scanner are generally expressed in the world coordinate system of the dental model three-dimensional scanner. First, the barycentric coordinate Wz of the three-dimensional data is computed, and Wz is transferred to the origin Ok of the camera coordinate system of the intraoral three-dimensional scanner by using a spatial transformation relation. Then the exact center Wk of the calibration space of the intraoral three-dimensional scanner is acquired, and the barycenter of the three-dimensional data is moved to Wk, which yields the three-dimensional data expressed in the camera coordinate system of the intraoral three-dimensional scanner.
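By way of illustration only, the following sketch shows how this coordinate transfer could be implemented, assuming the spatial transformation relation is available as a 4x4 homogeneous matrix and the calibration-space center Wk as a 3-vector; the names T_model_to_cam and calib_center_wk, and the use of NumPy, are illustrative assumptions rather than details given in the patent.

```python
import numpy as np

def to_intraoral_camera_frame(model_points, T_model_to_cam, calib_center_wk):
    """Express dental model scanner data in the intraoral scanner's camera frame.

    model_points    : (N, 3) points in the model scanner's world coordinate system
    T_model_to_cam  : (4, 4) homogeneous transform (the spatial transformation
                      relation), assumed known from a prior calibration
    calib_center_wk : (3,) exact center Wk of the intraoral calibration space
    """
    # Barycenter Wz of the model scanner data
    wz = model_points.mean(axis=0)

    # Apply the spatial transformation so the data land in the camera frame
    homog = np.hstack([model_points, np.ones((len(model_points), 1))])
    cam_points = (T_model_to_cam @ homog.T).T[:, :3]
    wz_cam = (T_model_to_cam @ np.append(wz, 1.0))[:3]

    # Translate so the barycenter coincides with the calibration-space center Wk
    return cam_points + (calib_center_wk - wz_cam)
```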
2. Perform rapid repositioning of the dental model data using a rapid feature stitching technique.
The rapid feature stitching technique generally uses the three-dimensional shape features of the point clouds to compute three-dimensional geometric feature descriptors of the object, matches the correspondences between the descriptors with an algorithm such as RANSAC, and obtains the three-dimensional transformation relation between the point clouds.
A good feature descriptor is highly discriminative, simple and repeatable to compute, robust to noise and computationally efficient, and computing it on the same or similar features in different regions should give the same or similar results. Because these properties often conflict, the preferred embodiment uses a feature descriptor that is computationally efficient while remaining sufficiently discriminative, so that a single frame of intraoral scan data can be quickly matched to the dental model data and the rapid repositioning completed. A balance with computational efficiency is achieved by controlling the dimensionality of the feature descriptor.
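The sketch below illustrates the RANSAC step on the assumption that tentative keypoint correspondences have already been proposed by comparing descriptors (for example, nearest neighbours in descriptor space); the three-point sampling, the inlier threshold and the iteration count are illustrative choices, not values specified in the patent.

```python
import numpy as np

def rigid_from_pairs(src, dst):
    """Least-squares rigid transform (Kabsch/SVD) mapping src points onto dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def ransac_rigid_align(src_kp, dst_kp, iters=1000, inlier_thresh=0.5, seed=0):
    """RANSAC over descriptor-matched keypoint pairs (src_kp[i] <-> dst_kp[i])."""
    rng = np.random.default_rng(seed)
    best_R, best_t, best_count = np.eye(3), np.zeros(3), -1
    for _ in range(iters):
        idx = rng.choice(len(src_kp), size=3, replace=False)
        R, t = rigid_from_pairs(src_kp[idx], dst_kp[idx])
        residuals = np.linalg.norm(src_kp @ R.T + t - dst_kp, axis=1)
        count = int((residuals < inlier_thresh).sum())
        if count > best_count:
            best_R, best_t, best_count = R, t, count
    return best_R, best_t
```

In practice the winning hypothesis would be re-estimated from all of its inliers before the resulting transformation is used to bring the single frame of intraoral data and the dental model data into one coordinate frame.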
The acquisition of complete dental model data is described in detail below. First, the key points on the edge of the dental model are obtained and the corresponding three-dimensional data are found; these three-dimensional data can be acquired by the dental model three-dimensional scanner. Local feature descriptions are then computed for the key points to obtain feature factors that can be used for data matching. Finally, the different dental model data are matched according to these matching factors.
When performing data matching, group matching may be used in order to save matching time. The pose of the dental model obtained by data matching is adjusted so that the dental model has the correct pose of the normal target state, and scene segmentation is then performed.
3. Perform stitching optimization and data fusion post-processing.
Redundant overlapping regions of the intraoral single-frame data and the dental model data are removed and hole regions are supplemented by a point cloud fusion technique, and complete, hole-free three-dimensional dental model data are acquired by supplementing from multiple angles.
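One simple realisation of the overlap removal is sketched below: intraoral points lying within a small radius of the model scanner data are treated as redundant and discarded, and only the remaining points are kept to fill the holes. The use of a SciPy KD-tree and the overlap_radius threshold are illustrative assumptions; the patent does not prescribe how the overlapping region is detected.

```python
import numpy as np
from scipy.spatial import cKDTree

def fuse_without_overlap(model_points, intraoral_points, overlap_radius=0.2):
    """Merge the two point sets, dropping intraoral points that duplicate
    surface already covered by the model scanner data.

    overlap_radius is in the same units as the point coordinates and would
    be tuned to the resolution of the scanners.
    """
    tree = cKDTree(model_points)
    dist, _ = tree.query(intraoral_points, k=1)   # nearest model point per intraoral point
    hole_filling = intraoral_points[dist > overlap_radius]
    return np.vstack([model_points, hole_filling])
```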
With this preferred embodiment, the three-dimensional dental model data obtained by different scanning devices can be stitched or fused, so that complete dental model data close to reality are obtained; a matching model can then be found for the teeth more accurately, enabling correction of the teeth. By combining the first data, which are acquired automatically and quickly but are incomplete, with the second data, which are acquired manually and slowly but are complete, complete three-dimensional data of the dental model can be obtained, so that the scan is mostly automatic and fast with only a small amount of slower manual work.
According to the embodiment of the present invention, there is further provided an embodiment of an apparatus for processing three-dimensional data, and it should be noted that the apparatus for processing three-dimensional data may be used to execute the method for processing three-dimensional data in the embodiment of the present invention, that is, the method for processing three-dimensional data in the embodiment of the present invention may be executed in the apparatus for processing three-dimensional data.
FIG. 4 is a schematic diagram of a three-dimensional data processing apparatus according to an embodiment of the present invention. As shown in FIG. 4, the three-dimensional data processing apparatus may include: a first acquisition unit 41, a determining unit 43, a conversion unit 45, a second acquisition unit 47 and a fusion unit 49. The specific description is as follows.
The first acquisition unit 41 is used for acquiring first data of the dental model, wherein the first data are three-dimensional data of the dental model acquired by a static photographing time-sequence phase-matching three-dimensional scanner.
The determining unit 43 is used for determining data to be filled in the first data, wherein the data to be filled are data corresponding to a region not scanned by the static photographing time-sequence phase-matching three-dimensional scanner.
The conversion unit 45 is used for converting the first data into a coordinate system where the dynamic video streaming three-dimensional scanner is located.
The second acquisition unit 47 is used for acquiring second data of the dental model, wherein the second data are the data to be filled in the first data, acquired by the dynamic video streaming three-dimensional scanner.
The fusion unit 49 is used for fusing the first data and the second data to obtain target data of the dental model.
The target data of the dental model can be used for correcting the teeth and can also serve as a data basis for other processing of the teeth.
It should be noted that the above fusion unit may include: a first calculation module, used for calculating barycentric coordinates of the first data; a conversion module, used for converting the barycentric coordinates of the first data to the coordinate origin where the second data are located by using a spatial transformation relation; an acquisition module, used for acquiring the exact center of the calibration space of the dynamic video streaming three-dimensional scanner; and a first determining module, used for moving the barycentric coordinates of the first data to the exact center of the calibration space and determining the target data of the dental model in the dynamic video streaming three-dimensional scanner.
The above fusion unit may further include: a processing module, used for calculating three-dimensional geometric feature descriptors of the dental model from the three-dimensional shape features of the point cloud; and a second determining module, used for matching the correspondences between the descriptors with a preset algorithm and determining the three-dimensional transformation relation for the target data of the dental model.
With the above apparatus, the first acquisition unit 41 acquires first data of the dental model, wherein the first data are three-dimensional data of the dental model acquired by a static photographing time-sequence phase-matching three-dimensional scanner; the determining unit 43 determines data to be filled in the first data, wherein the data to be filled are data corresponding to a region not scanned by the static photographing time-sequence phase-matching three-dimensional scanner; the conversion unit 45 converts the first data into a coordinate system where the dynamic video streaming three-dimensional scanner is located; the second acquisition unit 47 acquires second data of the dental model, wherein the second data are the data to be filled in the first data, acquired by the dynamic video streaming three-dimensional scanner; and the fusion unit 49 fuses the first data and the second data to obtain target data of the dental model. This achieves the aim of fusing the data of the dental model obtained by different devices to obtain complete dental model data, and thereby solves the technical problem in the related art that three-dimensional data of a dental model cannot be obtained both quickly and completely.
It should be noted that the first acquisition unit 41 in this embodiment may be used to perform step S102 in the embodiments of the present invention, the determining unit 43 may be used to perform step S104, the conversion unit 45 may be used to perform step S106, the second acquisition unit 47 may be used to perform step S108, and the fusion unit 49 may be used to perform step S110. The above modules implement the same examples and application scenarios as the corresponding steps, but are not limited to what is disclosed in the above embodiments.
As an alternative embodiment, the above fusion unit may further include: a second calculation module, used for calculating geometric key-feature descriptors of the first data and of the second data; a matching module, used for matching the features based on the descriptors using a RANSAC method; and an acquisition module, used for acquiring the three-dimensional transformation relation between the first data and the second data to obtain the target data of the dental model.
As an alternative embodiment, the above apparatus further includes: a deleting unit, used for, after the first data and the second data are fused to obtain the target data of the dental model, fusing the first data into the second data through a point cloud fusion technique and deleting the overlapped data.
According to the embodiment of the present invention, another embodiment of a device for processing three-dimensional data is provided, and it should be noted that the device for processing three-dimensional data may be used to execute the method for processing three-dimensional data in the embodiment of the present invention, that is, the method for processing three-dimensional data in the embodiment of the present invention may be executed in the device for processing three-dimensional data.
Fig. 5 is a schematic diagram of a three-dimensional data processing apparatus according to an embodiment of the present invention, and as shown in fig. 5, the three-dimensional data processing apparatus may include: a third acquisition unit 51 and a processing unit 53. The specific description is as follows.
The third obtaining unit 51 is configured to obtain different data generated by scanning a dental model by different scanners, where the data is three-dimensional data of the dental model.
And a processing unit 53, configured to obtain target data of the dental model based on the different data.
With the above arrangement, the third acquisition unit 51 acquires different data generated by scanning the dental model with different scanners, wherein the data are three-dimensional data of the dental model, and the processing unit 53 obtains target data of the dental model based on the different data. This achieves the aim of fusing the data of the dental model obtained by different devices to obtain complete dental model data, and thereby solves the technical problem in the related art that complete three-dimensional data of a dental model cannot be obtained.
According to another aspect of the embodiments of the present invention, there is provided a storage medium including a stored program, wherein, when the program runs, it controls a device in which the storage medium is located to perform the following operations: acquiring first data of a dental model, wherein the first data are three-dimensional data of the dental model acquired by a static photographing time-sequence phase-matching three-dimensional scanner; determining data to be filled in the first data, wherein the data to be filled are data corresponding to a region not scanned by the static photographing time-sequence phase-matching three-dimensional scanner; converting the first data into a coordinate system where a dynamic video streaming three-dimensional scanner is located; acquiring second data of the dental model, wherein the second data are the data to be filled in the first data, acquired by the dynamic video streaming three-dimensional scanner; and fusing the first data and the second data to obtain target data of the dental model.
Fusing the first data and the second data to obtain the target data of the dental model includes: calculating barycentric coordinates of the first data; converting the barycentric coordinates of the first data to the coordinate origin where the second data are located by using a spatial transformation relation; acquiring the exact center of the calibration space of the dynamic video streaming three-dimensional scanner; and moving the barycentric coordinates of the first data to the exact center of the calibration space, thereby determining the target data of the dental model in the dynamic video streaming three-dimensional scanner.
Fusing the first data and the second data to obtain the target data of the dental model includes: calculating three-dimensional geometric feature descriptors of the dental model from the three-dimensional shape features of the point cloud; and matching the correspondences between the descriptors with a preset algorithm to determine the three-dimensional transformation relation between the point clouds of the dental model.
Fusing the first data and the second data to obtain the target data of the dental model includes: calculating geometric key-feature descriptors of the first data and of the second data; matching the features based on the descriptors using a RANSAC method; and acquiring the three-dimensional transformation relation between the first data and the second data to obtain the target data of the dental model.
After the first data and the second data are fused to obtain the target data of the dental model, the method includes: fusing the first data into the second data by a point cloud fusion technique and deleting the overlapped data.
Different data generated by scanning the dental model with different scanners are acquired, wherein the data are three-dimensional data of the dental model; and target data of the dental model are obtained based on the different data.
According to another aspect of the embodiments of the present invention, there is provided a processor for running a program, wherein, when the program runs, it performs the following operations: acquiring first data of a dental model, wherein the first data are three-dimensional data of the dental model acquired by a static photographing time-sequence phase-matching three-dimensional scanner; determining data to be filled in the first data, wherein the data to be filled are data corresponding to a region not scanned by the static photographing time-sequence phase-matching three-dimensional scanner; converting the first data into a coordinate system where a dynamic video streaming three-dimensional scanner is located; acquiring second data of the dental model, wherein the second data are the data to be filled in the first data, acquired by the dynamic video streaming three-dimensional scanner; and fusing the first data and the second data to obtain target data of the dental model.
Fusing the first data and the second data to obtain the target data of the dental model includes: calculating barycentric coordinates of the first data; converting the barycentric coordinates of the first data to the coordinate origin where the second data are located by using a spatial transformation relation; acquiring the exact center of the calibration space of the dynamic video streaming three-dimensional scanner; and moving the barycentric coordinates of the first data to the exact center of the calibration space, thereby determining the target data of the dental model in the dynamic video streaming three-dimensional scanner.
Fusing the first data and the second data to obtain the target data of the dental model includes: calculating three-dimensional geometric feature descriptors of the dental model from the three-dimensional shape features of the point cloud; and matching the correspondences between the descriptors with a preset algorithm to determine the three-dimensional transformation relation between the point clouds of the dental model.
Fusing the first data and the second data to obtain the target data of the dental model includes: calculating geometric key-feature descriptors of the first data and of the second data; matching the features based on the descriptors using a RANSAC method; and acquiring the three-dimensional transformation relation between the first data and the second data to obtain the target data of the dental model.
After the first data and the second data are fused to obtain the target data of the dental model, the method includes: fusing the first data into the second data by a point cloud fusion technique and deleting the overlapped data.
Different data generated by scanning the dental model with different scanners are acquired, wherein the data are three-dimensional data of the dental model; and target data of the dental model are obtained based on the different data.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
In the foregoing embodiments of the present invention, the description of each embodiment has its own emphasis; for parts not described in detail in one embodiment, reference may be made to the related descriptions of the other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technical content may be implemented in other manners. The apparatus embodiments described above are merely illustrative; for example, the division into units may be merely a division by logical function, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings, direct couplings or communication connections shown or discussed between components may be realized through some interfaces, units or modules, and may be electrical or take other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
If the integrated units are implemented in the form of software functional units and sold or used as stand-alone products, they may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk or an optical disk.
The foregoing is merely a preferred embodiment of the present invention. It should be noted that several modifications and refinements may be made by those skilled in the art without departing from the principle of the present invention, and these modifications and refinements shall also fall within the protection scope of the present invention.

Claims (10)

1. A method for processing three-dimensional data, comprising:
acquiring first data of a dental model, wherein the first data is three-dimensional data of the dental model acquired by a static photographing time-sequence phase-matching three-dimensional scanner;
determining data to be filled in the first data, wherein the data to be filled is data corresponding to an area which is not scanned by the static photographing time-sequence phase-matching three-dimensional scanner;
converting the first data into a coordinate system where a dynamic video streaming three-dimensional scanner is located;
acquiring second data of the dental model, wherein the second data is the data to be filled in the first data, which is acquired through the dynamic video streaming three-dimensional scanner;
and fusing the first data and the second data to obtain target data of the dental model.
2. The method of claim 1, wherein converting the first data into the coordinate system where the dynamic video streaming three-dimensional scanner is located comprises:
calculating barycentric coordinates of the first data;
converting the barycentric coordinates of the first data to the coordinate origin of the second data by utilizing a space conversion relation;
acquiring the exact center of the calibration space of the dynamic video streaming three-dimensional scanner;
and moving the barycentric coordinates of the first data to the exact center of the calibration space, and determining the target data of the dental model in the dynamic video streaming three-dimensional scanner.
3. The method of claim 1, wherein the fusing the first data with the second data to obtain the target data for the dental model comprises:
calculating geometrical key feature descriptors of the first data and the second data respectively;
matching the features based on the descriptors by adopting a RANSAC method;
and acquiring the three-dimensional conversion relation between the first data and the second data to obtain the target data of the dental model.
4. The method of claim 1, wherein after fusing the first data with the second data to obtain the target data for the dental model, the method comprises:
and fusing the second data into the first data through a point cloud fusion technology and removing overlapped data.
5. A three-dimensional data processing apparatus, comprising:
the first acquisition unit is used for acquiring first data of the dental model, wherein the first data is three-dimensional data of the dental model acquired through a static photographing time-sequence phase-matching three-dimensional scanner;
the determining unit is used for determining data to be filled in the first data, wherein the data to be filled is data corresponding to an area which is not scanned by the static photographing time-sequence phase-matching three-dimensional scanner;
the conversion unit is used for converting the first data into a coordinate system where the dynamic video streaming three-dimensional scanner is located;
the second acquisition unit is used for acquiring second data of the dental model, wherein the second data is the data to be filled in the first data, acquired through the dynamic video streaming three-dimensional scanner;
and the fusion unit is used for fusing the first data and the second data to obtain the target data of the dental model.
6. The apparatus of claim 5, wherein the fusion unit comprises:
the first calculation module is used for calculating barycentric coordinates of the first data;
the conversion module is used for converting the barycentric coordinates of the first data to the coordinate origin where the second data are located by utilizing a space conversion relation;
the acquisition module is used for acquiring the exact center of the calibration space of the dynamic video streaming three-dimensional scanner;
and the first determining module is used for moving the barycentric coordinates of the first data to the exact center of the calibration space and determining the target data of the dental model in the dynamic video streaming three-dimensional scanner.
7. The apparatus of claim 5, wherein the fusion unit comprises:
the second calculation module is used for calculating geometrical key feature descriptors of the first data and the second data respectively;
the matching module is used for matching the features based on the descriptors by adopting a RANSAC method;
the acquisition module is used for acquiring the three-dimensional conversion relation between the first data and the second data to obtain the target data of the dental model.
8. The apparatus of claim 5, wherein the apparatus comprises:
and the deleting unit is used for, after the first data and the second data are fused to obtain the target data of the dental model, fusing the first data into the second data by a point cloud fusion technology to delete the overlapped data.
9. A storage medium comprising a stored program, wherein the program performs the method of processing three-dimensional data according to any one of claims 1 to 4.
10. A processor for running a program, wherein the program when run performs the method of processing three-dimensional data according to any one of claims 1 to 4.
CN201811426997.1A 2018-11-27 2018-11-27 Three-dimensional data processing method and device, storage medium and processor Active CN109671151B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811426997.1A CN109671151B (en) 2018-11-27 2018-11-27 Three-dimensional data processing method and device, storage medium and processor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811426997.1A CN109671151B (en) 2018-11-27 2018-11-27 Three-dimensional data processing method and device, storage medium and processor

Publications (2)

Publication Number Publication Date
CN109671151A CN109671151A (en) 2019-04-23
CN109671151B (en) 2023-07-18

Family

ID=66143225

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811426997.1A Active CN109671151B (en) 2018-11-27 2018-11-27 Three-dimensional data processing method and device, storage medium and processor

Country Status (1)

Country Link
CN (1) CN109671151B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111700698B (en) * 2020-05-14 2022-07-08 先临三维科技股份有限公司 Dental scanning method, apparatus, system and computer readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101940503A (en) * 2005-11-30 2011-01-12 3形状股份有限公司 Impression scanning for manufacturing of dental restorations
CN105096372A (en) * 2007-06-29 2015-11-25 3M创新有限公司 Synchronized views of video data and three-dimensional model data
CN106504321A (en) * 2016-11-07 2017-03-15 达理 Method using the method for photo or video reconstruction three-dimensional tooth mould and using RGBD image reconstructions three-dimensional tooth mould
CN108269247A (en) * 2017-08-23 2018-07-10 杭州先临三维科技股份有限公司 3-D scanning method and apparatus in mouthful

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7418574B2 (en) * 2002-10-31 2008-08-26 Lockheed Martin Corporation Configuring a portion of a pipeline accelerator to generate pipeline date without a program instruction
WO2007142643A1 (en) * 2006-06-08 2007-12-13 Thomson Licensing Two pass approach to three dimensional reconstruction
US20180253894A1 (en) * 2015-11-04 2018-09-06 Intel Corporation Hybrid foreground-background technique for 3d model reconstruction of dynamic scenes

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101940503A (en) * 2005-11-30 2011-01-12 3形状股份有限公司 Impression scanning for manufacturing of dental restorations
CN105096372A (en) * 2007-06-29 2015-11-25 3M创新有限公司 Synchronized views of video data and three-dimensional model data
CN106504321A (en) * 2016-11-07 2017-03-15 达理 Method using the method for photo or video reconstruction three-dimensional tooth mould and using RGBD image reconstructions three-dimensional tooth mould
CN108269247A (en) * 2017-08-23 2018-07-10 杭州先临三维科技股份有限公司 3-D scanning method and apparatus in mouthful

Also Published As

Publication number Publication date
CN109671151A (en) 2019-04-23


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant