WO2019085392A1 - Method, device and system for reconstructing three-dimensional tooth data - Google Patents

Method, device and system for reconstructing three-dimensional tooth data Download PDF

Info

Publication number
WO2019085392A1
Authority
WO
WIPO (PCT)
Prior art keywords
light source
point cloud
dimensional
teeth
virtual light
Prior art date
Application number
PCT/CN2018/082235
Other languages
English (en)
Chinese (zh)
Inventor
董子龙
黄磊杰
马超
赵晓波
Original Assignee
先临三维科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 先临三维科技股份有限公司 filed Critical 先临三维科技股份有限公司
Publication of WO2019085392A1

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00Indexing scheme for image generation or computer graphics
    • G06T2210/41Medical
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20Indexing scheme for editing of 3D models
    • G06T2219/2008Assembling, disassembling

Definitions

  • The present invention relates to the field of computers and the Internet, and in particular to a method, device and system for reconstructing three-dimensional tooth data.
  • Three-dimensional data collection of the teeth inside the oral cavity is divided into two approaches: extraoral scanning and intraoral scanning.
  • Extraoral scanning uses a scanning device to scan a plaster model of the patient's dentition to obtain a digital three-dimensional image.
  • Intraoral scanning inserts the scanning device into the patient's mouth to directly scan the teeth and related soft and hard tissues, obtaining digital three-dimensional data in real time.
  • Intraoral scanning is truly model-free, digital, convenient and efficient, and its advantages are obvious.
  • Commonly used intraoral scanning techniques are mainly intraoral reconstruction based on color stripes and reconstruction based on confocal technology.
  • The color stripe technique uses a monocular or binocular camera to acquire raster fringe images projected by a projector during reconstruction; registration of the left and right views is then performed through stripe coding, and finally the three-dimensional information of the teeth is obtained by triangulation.
  • This method performs well in real-time behavior and reconstruction accuracy, but because the hardware composition is relatively complicated, the product is bulky and the overall hardware cost is high.
  • The confocal-based reconstruction technique directly obtains the depth information of the tooth from the known camera focal lengths by acquiring camera images at a plurality of different focal lengths within a very short time; its cost is likewise high due to hardware constraints.
  • Both of the above scanning technologies require the scanning device to carry sophisticated hardware such as cameras and projectors, and the method of dusting the teeth with powder before scanning is also used.
  • Stacking this hardware together makes the scanning device bulky and difficult to operate in the confined intraoral environment, brings an uncomfortable experience to the patient, and causes high equipment cost, which puts great pressure on the promotion of the device to purchasers and users.
  • The embodiments of the invention provide a method, a device and a system for reconstructing three-dimensional tooth data, so as to at least solve the technical problem of high hardware requirements and complicated computation when collecting three-dimensional tooth data in the prior art.
  • A method for reconstructing three-dimensional tooth data includes: acquiring a sparse point cloud set of teeth at different viewing angles using a three-dimensional imaging system, wherein the three-dimensional imaging system includes a main camera and a plurality of light sources; determining, according to the sparse point cloud set, a dense three-dimensional point cloud of the teeth at each viewing angle; and splicing and fusing the dense three-dimensional point clouds at different viewing angles to obtain the three-dimensional data of the teeth.
  • A three-dimensional tooth data reconstruction apparatus includes: a first acquisition module configured to acquire a sparse point cloud set of teeth at different viewing angles using a three-dimensional imaging system, wherein the three-dimensional imaging system includes a main camera and a plurality of light sources; a first determining module configured to determine, according to the sparse point cloud set, a dense three-dimensional point cloud of the teeth at each viewing angle; and a splicing fusion module configured to splice and fuse the dense three-dimensional point clouds at different viewing angles to obtain the three-dimensional data of the teeth.
  • A three-dimensional tooth data reconstruction system comprises the above-described reconstruction apparatus and further comprises a three-dimensional imaging system, wherein the three-dimensional imaging system comprises a main camera, a slave camera and a plurality of light sources.
  • a storage medium comprising a stored program, wherein the device in which the storage medium is located is controlled to execute the above-described three-dimensional data reconstruction method of the tooth when the program is running.
  • a processor configured to execute a program, wherein the program is executed to perform the above-described three-dimensional data reconstruction method of teeth.
  • A terminal includes: a first acquisition module configured to acquire a sparse point cloud set of teeth at different viewing angles using a three-dimensional imaging system, wherein the three-dimensional imaging system includes a main camera and a plurality of light sources; a first determining module configured to determine, according to the sparse point cloud set, a dense three-dimensional point cloud of the teeth at each viewing angle; a splicing fusion module configured to splice and fuse the dense three-dimensional point clouds at different viewing angles to obtain the three-dimensional data of the teeth; and a processor that runs a program, wherein the program, when running, executes the above-described three-dimensional tooth data reconstruction method on the data output from the first acquisition module, the first determining module and the splicing fusion module.
  • Another terminal includes: a first acquisition module configured to acquire a sparse point cloud set of teeth at different viewing angles using a three-dimensional imaging system, wherein the three-dimensional imaging system includes a main camera and a plurality of light sources; a first determining module configured to determine, according to the sparse point cloud set, a dense three-dimensional point cloud of the teeth at each viewing angle; a splicing fusion module configured to splice and fuse the dense three-dimensional point clouds at different viewing angles to obtain the three-dimensional data of the teeth; and a storage medium configured to store a program, wherein the program, when running, executes the above-described three-dimensional tooth data reconstruction method on the data output from the first acquisition module, the first determining module and the splicing fusion module.
  • In the embodiments of the invention, a sparse point cloud set of teeth at different viewing angles is acquired using a three-dimensional imaging system that includes a main camera and a plurality of light sources; a dense three-dimensional point cloud of the teeth at each viewing angle is determined according to the sparse point cloud set; and the dense three-dimensional point clouds at different viewing angles are spliced and fused to obtain the three-dimensional data of the teeth.
  • This achieves the purpose of reconstructing three-dimensional tooth data using only a three-dimensional imaging system comprising a main camera and multiple light sources, so that small and lightweight equipment can quickly and efficiently collect three-dimensional tooth data inside the mouth. Scanning accuracy and efficiency are ensured while steps such as dusting before scanning are avoided, improving the scanning experience; the hardware is small, low-cost, simple to manufacture, convenient to operate and easy to promote on a large scale, which further solves the technical problem of high hardware requirements and complicated computation when collecting three-dimensional tooth data in the prior art.
  • FIG. 1 is a schematic diagram of a method for reconstructing three-dimensional tooth data according to an embodiment of the present invention
  • FIG. 2 is a schematic diagram of an optional three-dimensional reconstruction method of teeth according to an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of an optional three-dimensional reconstruction method of teeth according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of an optional three-dimensional reconstruction method of teeth according to an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of an optional three-dimensional reconstruction method of teeth according to an embodiment of the present invention.
  • FIG. 6 is a schematic diagram of an optional three-dimensional reconstruction method of teeth according to an embodiment of the present invention.
  • FIG. 7 is a schematic diagram of an optional three-dimensional reconstruction method of teeth according to an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of a three-dimensional tooth reconstruction apparatus according to an embodiment of the present invention.
  • According to an embodiment of the present invention, there is provided a method embodiment of a method for reconstructing three-dimensional tooth data. It is noted that the steps illustrated in the flowchart of the accompanying drawings may be performed in a computer system such as a set of computer-executable instructions, and, although logical sequences are shown in the flowcharts, in some cases the steps shown or described may be performed in a different order than the one described herein.
  • FIG. 1 shows a method for reconstructing three-dimensional tooth data according to an embodiment of the present invention. As shown in FIG. 1, the method includes the following steps:
  • Step S102 Acquire a sparse point cloud set of teeth at different viewing angles using a three-dimensional imaging system, wherein the three-dimensional imaging system includes a main camera and a plurality of light sources.
  • The three-dimensional imaging system used in the present invention only needs to include a main camera and a plurality of light sources.
  • The light sources may be uniformly distributed around the main camera, and the number of light sources may be 2m.
  • The three-dimensional imaging system may further include one or more slave cameras; for example, the system may include a main camera, a slave camera, and a plurality of light sources uniformly distributed around the main camera.
  • The structure of the three-dimensional imaging system can be as shown in FIG. 2. In FIG. 2, the main camera 11, the slave camera 12 and the light sources 13 are packaged at the foremost end of the scanning handle 14, and their relative positions are fixed.
  • Step S104 determining a dense three-dimensional point cloud of the teeth in each view according to the sparse point cloud set.
  • Step S106 splicing and fusing the dense three-dimensional point cloud at different viewing angles to obtain three-dimensional data of the teeth.
  • For splicing and fusion, the registration and point cloud fusion of M0 and M1 can be performed first; more dense three-dimensional point clouds Mt can then be collected and spliced and fused with the previously acquired three-dimensional data, finally obtaining the required three-dimensional data of the teeth.
  • In this embodiment, a sparse point cloud set of teeth at different viewing angles is acquired using a three-dimensional imaging system that includes a main camera and a plurality of light sources; a dense three-dimensional point cloud of the teeth at each viewing angle is determined according to the sparse point cloud set; and the dense three-dimensional point clouds at different viewing angles are spliced and fused to obtain the three-dimensional data of the teeth.
  • This achieves the purpose of reconstructing three-dimensional tooth data using only a three-dimensional imaging system comprising a main camera and multiple light sources, so that small and lightweight equipment can quickly and efficiently collect three-dimensional tooth data inside the mouth, ensuring scanning accuracy and efficiency while avoiding steps such as dusting before scanning and improving the scanning experience. The hardware is small, low-cost, simple to manufacture, convenient to operate and easy to promote on a large scale, which further solves the technical problem of high hardware requirements and complicated computation when collecting three-dimensional tooth data in the prior art.
  • Determining the dense three-dimensional point cloud of the teeth at each viewing angle according to the sparse point cloud set in step S104 includes: step S202, determining, by means of photometric stereo three-dimensional reconstruction, a dense three-dimensional point cloud of the teeth at each viewing angle according to the sparse point cloud set.
  • The photometric stereo method reconstructs a dense three-dimensional depth map on the basis of the sparse point cloud set.
  • The sparse point cloud set Ts may be obtained by binocular reconstruction, and the photometric stereo reconstruction is based on the images acquired by the main camera.
  • the method for determining the dense three-dimensional point cloud of the tooth in each view according to the sparse point cloud set by using the photometric stereoscopic three-dimensional reconstruction in step S202 includes:
  • Step S302 acquiring a light source image of the tooth collected by the main camera after sequentially lighting each light source, to obtain a light source image set;
  • Step S304 determining a contour line of each pixel in the light source image
  • Step S306 diffusing the sparse point cloud in the sparse point cloud set according to the contour line to obtain a dense three-dimensional point cloud.
  • determining the contour line of each pixel in the light source image in step S304 comprises:
  • Step S402 determining an azimuth of a position of each pixel in the light source image
  • Step S404 determining a contour line of each pixel in the light source image according to the azimuth angle.
  • The azimuth angle θ of a surface point can be defined as the counterclockwise angle between the projection of the point's surface normal onto the imaging plane of the main camera and the negative direction of the x-axis.
  • The azimuth at each pixel location needs to be calculated independently.
  • Before determining the contour line of each pixel in the light source image in step S304, the method further includes:
  • Step S502: calibrating the light source image set according to whiteboard images pre-acquired by the main camera after sequentially lighting each light source, to obtain a calibrated light source image set;
  • Step S504 using an interpolation algorithm to acquire a plurality of virtual light source images according to the set of calibrated light source images, to obtain a virtual light source image set.
  • The three-dimensional imaging system may be calibrated before acquiring the sparse point cloud set of teeth at different viewing angles in step S102. The calibration is divided into geometric calibration and light source calibration; here the three-dimensional imaging system includes a main camera, a slave camera and light sources, the light sources being evenly distributed around the main camera with a number of 2m.
  • Geometric calibration refers to calculating the relative orientation and imaging parameters between the two cameras of the three-dimensional imaging system.
  • Light source calibration refers to calculating the spatial intensity of the light sources of the three-dimensional imaging system, where each light source may be an LED light.
  • For geometric calibration, first fix the three-dimensional imaging system and the geometric calibration plate as shown in FIG. 4, illuminate all the light sources, and use the two cameras to capture images of the geometric calibration plate at the corresponding viewing angle; then change the spatial position of the geometric calibration plate and capture it again with the two cameras at the corresponding viewing angle. Repeating these steps, each camera finally obtains images of the geometric calibration plate at multiple viewing angles; the center point of each black circle is detected on every image and numbered according to the distribution of the large circles.
  • The geometric calibration algorithm is then executed.
  • The geometric calibration algorithm can be based on the Zhang Zhengyou calibration method, and finally outputs the imaging parameters, distortion coefficients and relative orientation of the two cameras; a code sketch follows.
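A minimal sketch of this geometric calibration using OpenCV's implementation of the Zhang Zhengyou method. The circle-grid layout, the use of cv2.findCirclesGrid for center detection, and the unit grid spacing are assumptions made for illustration; the patent only specifies a plate of black circles whose detected centers are numbered.

```python
# Hedged sketch: stereo geometric calibration with OpenCV (Zhang-style).
# PATTERN and the unit grid spacing are assumed, not taken from the patent.
import cv2
import numpy as np

PATTERN = (7, 7)  # assumed circle-grid layout of the calibration plate
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)  # plate frame

def calibrate_stereo(main_imgs, slave_imgs, image_size):
    obj_pts, main_pts, slave_pts = [], [], []
    for im_m, im_s in zip(main_imgs, slave_imgs):
        ok_m, c_m = cv2.findCirclesGrid(im_m, PATTERN)
        ok_s, c_s = cv2.findCirclesGrid(im_s, PATTERN)
        if ok_m and ok_s:  # keep only plate poses seen by both cameras
            obj_pts.append(objp)
            main_pts.append(c_m)
            slave_pts.append(c_s)
    # per-camera imaging parameters and distortion coefficients
    _, K_m, d_m, _, _ = cv2.calibrateCamera(obj_pts, main_pts, image_size, None, None)
    _, K_s, d_s, _, _ = cv2.calibrateCamera(obj_pts, slave_pts, image_size, None, None)
    # relative orientation (R, T) of the slave camera w.r.t. the main camera
    _, _, _, _, _, R, T, _, _ = cv2.stereoCalibrate(
        obj_pts, main_pts, slave_pts, K_m, d_m, K_s, d_s, image_size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    return K_m, d_m, K_s, d_s, R, T
```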
  • The main camera and a whiteboard can be used for light source calibration.
  • The whiteboard refers to the white balance calibration plate. Since the scanning range of the three-dimensional imaging system during scanning can be 3 mm to 15 mm from the imaging plane of the main camera, and the distance between the teeth and the scanning device during intraoral scanning is relatively fixed, the white balance calibration plate can be fixed at a preset distance from the main camera during light source calibration, where the preset distance may be 9 mm as shown in the corresponding figure. Adjust the three-dimensional imaging system so that the imaging plane of the main camera and the white balance calibration plate are as parallel as possible, fix the system, and sequentially illuminate the 2m light sources.
  • The main camera separately captures an image of the white balance calibration plate under each light source i.
  • The pixel color value of the i-th image Ii of the set I at a position j is Iij, and the color value of the i-th image Wi of the calibration plate image set W at the corresponding pixel j is Wij.
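The correction formula relating Iij and Wij is not spelled out in the text above, so the following is a hedged sketch under the assumption that calibration divides each light source image by the corresponding whiteboard image per pixel and per channel, canceling the spatially varying intensity of each LED.

```python
# Hedged sketch: white-balance calibration of the 2m light source images.
# The per-pixel, per-channel division is an assumed normalization.
import numpy as np

def calibrate_light_source_images(I, W, eps=1e-6):
    """I, W: float arrays of shape (2m, h, w, 3) with I[i] the tooth image and
    W[i] the whiteboard image captured under light source i."""
    I = np.asarray(I, np.float32)
    W = np.asarray(W, np.float32)
    # dividing by the whiteboard response removes the uneven illumination of
    # each LED, leaving an (approximate) reflectance image I_ij / W_ij
    return np.clip(I / (W + eps), 0.0, None)
```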
  • determining the azimuth of the position of each pixel in the light source image in step S402 includes: Step S602, determining an azimuth of a position of each pixel in the light source image according to the virtual light source image set.
  • Using an interpolation algorithm to acquire a plurality of virtual light source images according to the calibrated light source image set in step S504 includes:
  • Step S702 constructing an initial virtual light source, wherein the position of the initial virtual light source is the intersection of the plurality of light sources and the virtual circle, the center of the virtual circle is the geometric center of the plurality of light sources, and the radius of the virtual circle is the average distance from the plurality of light sources to the center of the circle ;
  • Step S704: constructing a derived virtual light source corresponding to each initial virtual light source, wherein the position of the derived virtual light source is the intersection with the virtual circle of the line connecting the initial virtual light source and the circle center after rotation by a preset angle;
  • Step S706 using an interpolation algorithm to calculate a virtual light source image under the illumination of the initial virtual light source and the derived virtual light source according to the set of the calibrated light source image, to obtain a virtual light source image set.
  • The basis of the photometric stereo three-dimensional reconstruction method is the isotropy of the object surface's reflectance.
  • The premise for satisfying this property is a circularly distributed light source; in practice, however, due to factors such as the assembly process and device size limitations, the light sources are not distributed in a perfect circle in space.
  • The photometric stereo method therefore needs to construct virtual light sources in space, and the virtual light source image at each virtual light source position is interpolated from the known light source images.
  • It is also necessary to ensure, during the design and assembly of the hardware, that the plane of the light sources is coplanar with the imaging plane of the camera.
  • The solid-line boxes in FIG. 6 represent the 2m light sources of the three-dimensional imaging system. A virtual circle can be constructed in the imaging plane of the camera, centered at the geometric center c of the 2m real light sources and with radius r equal to the average distance from the real light sources to c.
  • The intersections of the virtual circle with the lines connecting each real light source position to c are the initial virtual light sources to be interpolated, shown as dotted-line boxes in FIG. 6; their position set is denoted L.
  • For an initial virtual light source Li, the line connecting Li and the center c can be rotated counterclockwise around c by a predetermined first angle; after the rotation, the intersection of the line with the virtual circle is S, and the point on the virtual circle symmetric to S about Li is also taken.
  • These points are recorded as derived virtual light sources, as sketched below.
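A small sketch of this construction, assuming the light source positions can be treated as 2D points in the imaging plane of the main camera; the 1.4-degree step and the 128 rotations follow the text below.

```python
# Hedged sketch: initial virtual sources on the virtual circle, plus the
# rotated point S and its mirror about L_i for the derived sources.
import numpy as np

def initial_virtual_sources(real_xy):
    """real_xy: (2m, 2) real light source positions in the imaging plane."""
    c = real_xy.mean(axis=0)                                  # circle center
    v = real_xy - c
    r = np.linalg.norm(v, axis=1).mean()                      # circle radius
    L = c + r * v / np.linalg.norm(v, axis=1, keepdims=True)  # project onto circle
    return L, c, r

def derived_virtual_sources(L, c, r, step_deg=1.4, n_steps=128):
    ang = np.arctan2(L[:, 1] - c[1], L[:, 0] - c[0])          # angles of each L_i
    k = np.arange(1, n_steps + 1) * np.radians(step_deg)      # rotation steps
    s_ang = ang[:, None] + k[None, :]                         # rotated points S
    m_ang = 2 * ang[:, None] - s_ang                          # mirror of S about L_i
    to_xy = lambda a: np.stack([c[0] + r * np.cos(a), c[1] + r * np.sin(a)], axis=-1)
    return to_xy(s_ang), to_xy(m_ang)                         # (2m, 128, 2) each
```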
  • Calculating, by using the interpolation algorithm, the virtual light source images under derived virtual light source illumination according to the calibrated light source image set in step S706 includes:
  • Step S802: triangulating the plurality of light sources on the image plane of the main camera using a triangulation algorithm to obtain split triangles;
  • Step S804: calculating, according to the positional relationship between the initial and derived virtual light sources and the split triangles, the virtual light source images under initial and derived virtual light source illumination from the calibrated light source image set using different formulas.
  • the virtual light source image includes an initial virtual light source image and a derived virtual light source image.
  • To calculate the initial virtual light source images, if the three-dimensional imaging system includes 2m light sources, the 2m real light sources can be triangulated on the main camera imaging plane to obtain the split triangles.
  • The Delaunay triangulation algorithm can be used.
  • An initial virtual light source Li in the position set L has three possible positional relationships with the split triangles: Li is inside a triangle, Li is on a triangle edge, or Li is outside the triangles.
  • If Li is inside a triangle, the virtual light source image Vi corresponding to Li is obtained by weighting the light source images of the triangle's vertices according to the distances from Li to those vertices. If Li is on a triangle edge, the indices in the set I′ of the light source images corresponding to the two vertices of that edge are denoted a and b, the distances from Li to the two vertices are ρa and ρb, and Vi is obtained by the corresponding two-point weighting. If Li is outside the triangles, the triangle nearest to Li is found, the indices of the light source images corresponding to the two vertices of that triangle nearest to Li are denoted a and b, the distances from Li to those vertices are ρa and ρb, and Vi is computed in the same two-point manner.
  • The initial virtual light source image of each initial virtual light source can be obtained by the above method. For the derived virtual light sources, the same image interpolation is repeated to obtain the virtual light source images of the teeth under derived virtual light source illumination. Repeating the rotation a total of 128 times yields a set VLi of 128 interpolated virtual light source images, where each image in VLi corresponds to a rotation angle of 1.4*k degrees. A code sketch of this interpolation follows.
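The weighting formulas themselves are given in the source only as equation images, so the sketch below stands in with inverse-distance weights over the vertices selected by the three-case rule above, using SciPy's Delaunay triangulation; the weights are an assumption with the same structure, not the patent's exact formula.

```python
# Hedged sketch: interpolate the image of one virtual source from the
# calibrated real-source images via Delaunay triangulation.
import numpy as np
from scipy.spatial import Delaunay

def interpolate_virtual_image(p, real_xy, images):
    """p: (2,) virtual source position; real_xy: (2m, 2) real source positions;
    images: (2m, h, w, 3) calibrated light source images."""
    tri = Delaunay(real_xy)
    s = tri.find_simplex(p)
    if s >= 0:
        verts = tri.simplices[s]           # L_i inside (or on an edge of) a triangle
    else:                                  # L_i outside: use the two nearest sources
        verts = np.argsort(np.linalg.norm(real_xy - p, axis=1))[:2]
    d = np.linalg.norm(real_xy[verts] - p, axis=1)
    w = 1.0 / (d + 1e-9)                   # inverse-distance weights (assumed)
    w /= w.sum()
    return np.tensordot(w, images[verts].astype(np.float32), axes=1)
```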
  • The formula used in determining the azimuth of each pixel position in the light source image according to the virtual light source image set in step S602 uses the following notation: i denotes the index of an initial virtual light source; j denotes the pixel position; k denotes the index of a derived virtual light source; 2m denotes the total number of initial virtual light sources; kall denotes the total number of derived virtual light sources corresponding to each initial virtual light source; Vij denotes the color value of the initial virtual light source image i at pixel position j; V′ijk denotes the color value at pixel position j of the k-th derived virtual light source image corresponding to the initial virtual light source i; R, G and B denote the red, green and blue channels; Pj denotes the second intermediate variable set; kj denotes the k at which Pj attains its minimum; α denotes a preset constant; and θj denotes the azimuth at pixel position j.
  • The light source image has w*h pixels in total; the azimuth calculation step is repeated for each pixel, finally obtaining azimuth data of the same size.
  • 2m first intermediate variable sets can be calculated and averaged to obtain a set of 128 average intermediate variables, that is, the second intermediate variable set.
  • The operations on R, G and B act on the color values of the corresponding red, green and blue channels, and α can be 1.4. A code sketch follows.
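Because the formulas appear in the source only as equation images, the following is a heavily hedged sketch of one consistent reading: for each rotation step k, the RGB discrepancy between each initial virtual source image and its k-th derived image is averaged over the 2m sources to form Pj, and the azimuth is α times the minimizing k. The discrepancy measure itself is an assumption.

```python
# Hedged sketch: per-pixel azimuth from the virtual light source image stacks.
import numpy as np

def estimate_azimuth(V, Vd, alpha=1.4):
    """V: (2m, h, w, 3) initial virtual source images;
    Vd: (2m, k_all, h, w, 3) derived virtual source images."""
    # first intermediate variables: per-source RGB discrepancy at each pixel
    Q = np.abs(Vd - V[:, None]).sum(axis=-1)   # (2m, k_all, h, w)
    P = Q.mean(axis=0)                         # second set P_j: (k_all, h, w)
    k_min = P.argmin(axis=0)                   # k_j at which P_j is minimal
    return alpha * k_min                       # theta_j = alpha * k_j, in degrees
```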
  • Each pixel corresponds to two contour lines.
  • Determining the contour line of each pixel in the light source image according to the azimuth angle in step S404 includes:
  • Step S902: for each pixel, expanding a plurality of new pixels in the directions of the two contour lines corresponding to that pixel;
  • Step S904: calculating the positions of the plurality of new pixels according to the azimuth of the pixel using a bilinear interpolation algorithm;
  • Step S906 counting the location set of the new pixels on each contour line.
  • The azimuth angle data set Θ has azimuth θj at pixel position j, and two direction vectors, one for each contour line, are defined from θj.
  • Each pixel j can diffuse up to 2N new pixel positions, yielding the sets of pixels on the two contour lines.
  • The D() operation rounds pixel coordinates to integers. Repeating the above steps for each pixel position yields the contour lines of all pixels; a code sketch follows.
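A sketch of the expansion along one contour direction. The direction vectors derived from θj and the bilinear resampling of the azimuth field while walking are assumptions (the source defines the two vectors only in an equation image); the final rounding plays the role of the D() operation.

```python
# Hedged sketch: walk up to N new pixels along one contour direction.
import numpy as np

def bilinear(theta, x, y):
    """Bilinearly sample the azimuth field theta (h, w) at float coords (x, y)."""
    h, w = theta.shape
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    return ((1 - fx) * (1 - fy) * theta[y0, x0] + fx * (1 - fy) * theta[y0, x1] +
            (1 - fx) * fy * theta[y1, x0] + fx * fy * theta[y1, x1])

def expand_contour(theta, j_xy, N, sign=+1, step=1.0):
    """Return up to N rounded pixel positions reached from j_xy along one of
    the two contour directions (sign = +1 or -1 selects the direction)."""
    out, x, y = [], float(j_xy[0]), float(j_xy[1])
    for _ in range(N):
        a = np.radians(bilinear(theta, x, y))     # local contour direction
        x, y = x + sign * step * np.cos(a), y + sign * step * np.sin(a)
        if not (0 <= x < theta.shape[1] - 1 and 0 <= y < theta.shape[0] - 1):
            break                                 # left the image
        out.append((int(round(x)), int(round(y))))  # the D() rounding
    return out
```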
  • Diffusing the sparse point cloud in the sparse point cloud set according to the contour lines to obtain a dense three-dimensional point cloud in step S306 includes: step S1002, sequentially diffusing the sparse point cloud in the sparse point cloud set according to the position sets to obtain a dense three-dimensional point cloud.
  • The process of computing the dense point cloud of the scanned tooth is depth propagation. For a three-dimensional point q in the tooth sparse point cloud set Ts, let j be the projected pixel position of q on the imaging plane of the main camera, and consider the pixel sets on the two contour lines corresponding to j.
  • Propagation first proceeds from small n to large n: for each new pixel, the curvature value of the new pixel currently being propagated on the contour line is calculated.
  • When the curvature value is less than the set curvature threshold, the new pixel is given the depth value of the point q; if the curvature value is greater than the set curvature threshold, propagation along that contour line stops. After one contour line is finished, propagation continues along the other.
  • If a new pixel reached during propagation already has a depth value, that depth value remains unchanged and the pixel position is skipped as propagation continues.
  • Each three-dimensional point in the tooth sparse point cloud set Ts propagates its depth along its contour lines according to the above method, finally yielding a dense three-dimensional point cloud of the tooth surface. A code sketch of this propagation follows.
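A sketch of propagating one seed point's depth along one of its contour lines. Testing a per-pixel curvature value against a threshold follows the text; how that curvature is computed is not specified above, so it is taken here as a precomputed input.

```python
# Hedged sketch: depth propagation along an ordered contour pixel list.
import numpy as np

def propagate_depth(depth, contour_pixels, q_depth, curv, curv_thresh):
    """depth: (h, w) map, np.nan where unknown; contour_pixels: ordered (x, y)
    positions from the contour expansion; q_depth: depth of the seed point q;
    curv: (h, w) precomputed curvature values."""
    for (x, y) in contour_pixels:          # n from small to large
        if curv[y, x] > curv_thresh:       # curvature too high: stop this contour
            break
        if not np.isnan(depth[y, x]):      # already has a depth value: keep it
            continue                       # and skip this position
        depth[y, x] = q_depth              # give the new pixel q's depth value
    return depth
```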
  • Splicing and fusing the dense three-dimensional point clouds at different viewing angles in step S106 includes: step S1102, splicing the dense three-dimensional point clouds at different viewing angles using an iterative closest point algorithm to obtain a spliced dense three-dimensional point cloud; and/or step S1104, performing point cloud fusion on the spliced dense three-dimensional point cloud using a truncated signed distance function.
  • An Iterative Closest Point (ICP) algorithm may be used for the splicing, and a Truncated Signed Distance Function (TSDF) may be used for the fusion; a minimal ICP sketch follows.
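Since the text names the iterative closest point algorithm for splicing, a minimal point-to-point ICP is sketched below, with a standard SVD-based rigid fit and a fixed iteration count; both are generic choices rather than the patent's parameters, and the TSDF fusion step is not shown.

```python
# Minimal point-to-point ICP sketch for splicing two dense point clouds.
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iters=30):
    """src, dst: (n, 3), (m, 3) point clouds; returns R, t aligning src to dst."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)                 # closest-point correspondences
        matched = dst[idx]
        mu_s, mu_d = cur.mean(axis=0), matched.mean(axis=0)
        H = (cur - mu_s).T @ (matched - mu_d)    # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:            # guard against reflections
            Vt[-1] *= -1
            R_step = Vt.T @ U.T
        t_step = mu_d - R_step @ mu_s
        cur = cur @ R_step.T + t_step            # apply the incremental motion
        R, t = R_step @ R, R_step @ t + t_step   # accumulate the total motion
    return R, t
```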
  • Acquiring the sparse point cloud set of teeth at different viewing angles using the three-dimensional imaging system in step S102 includes: step S1202, acquiring the sparse point cloud set of teeth at different viewing angles by means of binocular reconstruction, wherein the three-dimensional imaging system further includes a slave camera.
  • The sparse three-dimensional point cloud of the teeth can be obtained by binocular reconstruction, which requires at least two cameras; therefore, for binocular reconstruction the three-dimensional imaging system of the present invention includes a slave camera in addition to the main camera.
  • The difference in the X direction between the pixel positions of the projections Pleft and Pright of a three-dimensional point P on the left and right imaging planes is defined as the parallax (disparity), represented by d; the main camera corresponds to the left camera and the slave camera to the right camera.
  • The left camera image in FIG. 7 refers to the main camera image, and the right camera image refers to the slave camera image.
  • The distance between the left camera optical center Oleft and the right camera optical center Oright is called the baseline distance, marked B; the focal length of the left camera is f; the principal point of the left camera is (Cleftx, Clefty); and the coordinates of the projection Pleft of the three-dimensional point P on the left imaging plane are (Xleft, Yleft).
  • With d = Xleft − Xright, the three-dimensional coordinates (x, y, z) of P in the world coordinate system can be obtained by similar triangles as: x = B(Xleft − Cleftx)/d, y = B(Yleft − Clefty)/d, z = Bf/d.
  • The two cameras each capture the teeth to obtain tooth images at different viewing angles; features are extracted from the images captured by the main camera and matched against all images captured by the slave camera, and binocular reconstruction is then performed to obtain the tooth sparse point cloud set Ts corresponding to the image features. A code sketch follows.
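A sketch of the disparity-to-depth triangulation implied by the definitions above. It is the textbook similar-triangles relation for a rectified stereo pair, not a verbatim reproduction of the patent's equation image.

```python
# Sketch: recover (x, y, z) in the left (main) camera frame from disparity.
import numpy as np

def disparity_to_xyz(x_left, y_left, x_right, B, f, c_leftx, c_lefty):
    d = x_left - x_right              # disparity along the X direction
    z = B * f / d                     # depth from similar triangles
    x = (x_left - c_leftx) * B / d    # offsets w.r.t. the principal point
    y = (y_left - c_lefty) * B / d
    return np.stack([x, y, z], axis=-1)
```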
  • FIG. 8 shows a three-dimensional tooth data reconstruction device according to an embodiment of the present invention.
  • The device includes a first acquisition module, a first determining module and a splicing fusion module, wherein the first acquisition module is configured to acquire a sparse point cloud set of teeth at different viewing angles using a three-dimensional imaging system, the three-dimensional imaging system including a main camera and a plurality of light sources; the first determining module is configured to determine, according to the sparse point cloud set, a dense three-dimensional point cloud of the teeth at each viewing angle; and the splicing fusion module is configured to splice and fuse the dense three-dimensional point clouds at different viewing angles to obtain the three-dimensional data of the teeth.
  • The first acquisition module acquires a sparse point cloud set of teeth at different viewing angles using a three-dimensional imaging system that includes a main camera and a plurality of light sources; the first determining module determines, according to the sparse point cloud set, a dense three-dimensional point cloud of the teeth at each viewing angle; and the splicing fusion module splices and fuses the dense point clouds to obtain the three-dimensional data of the teeth.
  • The purpose of reconstructing the three-dimensional data of teeth with small and lightweight equipment is thereby achieved: three-dimensional tooth data can be collected in the mouth quickly and efficiently, scanning accuracy and efficiency are ensured, steps such as dusting before scanning are avoided, and the scanning experience is improved.
  • The hardware is small, low-cost, simple to manufacture, convenient to operate and easy to promote on a large scale, thereby solving the technical problem of high hardware requirements and complicated computation when collecting three-dimensional tooth data in the prior art.
  • the foregoing first obtaining module, the first determining module, and the splicing and merging module correspond to steps S102 to S106 in Embodiment 1, and the foregoing modules are the same as the examples and application scenarios implemented by the corresponding steps, but It is not limited to the contents disclosed in the above embodiment 1. It should be noted that the above modules may be implemented as part of a device in a computer system such as a set of computer executable instructions.
  • the first determining module includes a second determining module configured to determine a dense three-dimensional point cloud of the teeth at each viewing angle according to the sparse point cloud set by means of photometric stereoscopic three-dimensional reconstruction.
  • the foregoing second determining module corresponds to step S202 in Embodiment 1, and the foregoing modules are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the contents disclosed in Embodiment 1 above. It should be noted that the above modules may be implemented as part of a device in a computer system such as a set of computer executable instructions.
  • the second determining module includes a second obtaining module, a third determining module, and a first diffusion module, wherein the second acquiring module is configured to acquire the main camera after sequentially lighting each light source Collecting a light source image of the tooth to obtain a light source image set; a third determining module configured to determine a contour line of each pixel in the light source image; and a first diffusion module configured to sparse the sparse point cloud set according to the contour line The point cloud spreads to obtain a dense three-dimensional point cloud.
  • the foregoing second obtaining module, the third determining module, and the first diffusion module correspond to step S302 to step S306 in Embodiment 1, and the foregoing modules are the same as the examples and application scenarios implemented by the corresponding steps. However, it is not limited to the contents disclosed in the above embodiment 1. It should be noted that the above modules may be implemented as part of a device in a computer system such as a set of computer executable instructions.
  • The third determining module includes a fourth determining module and a fifth determining module, wherein the fourth determining module is configured to determine the azimuth of each pixel position in the light source image, and the fifth determining module is configured to determine the contour line of each pixel in the light source image according to the azimuth.
  • The foregoing fourth determining module and fifth determining module correspond to steps S402 to S404 in Embodiment 1, and the foregoing modules are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the contents disclosed in Embodiment 1 above. It should be noted that the above modules may be implemented as part of a device in a computer system such as a set of computer-executable instructions.
  • The apparatus further includes a calibration module and a third acquisition module, wherein the calibration module is configured to calibrate, before the third determining module determines the contour line of each pixel in the light source image, the light source image set according to whiteboard images pre-acquired by the main camera after sequentially lighting each light source, to obtain a calibrated light source image set; and the third acquisition module is configured to acquire a plurality of virtual light source images according to the calibrated light source image set using an interpolation algorithm, to obtain a virtual light source image set.
  • The foregoing calibration module and third acquisition module correspond to steps S502 to S504 in Embodiment 1, and the foregoing modules are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the contents disclosed in Embodiment 1 above. It should be noted that the above modules may be implemented as part of a device in a computer system such as a set of computer-executable instructions.
  • the fourth determining module includes a sixth determining module configured to determine an azimuth of a location of each pixel in the source image based on the set of virtual light source images.
  • the foregoing sixth determining module corresponds to step S602 in Embodiment 1, and the foregoing modules are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the content disclosed in Embodiment 1 above. It should be noted that the above modules may be implemented as part of a device in a computer system such as a set of computer executable instructions.
  • The third acquisition module comprises a first building module, a second building module and a first calculation module, wherein the first building module is configured to construct initial virtual light sources, the position of an initial virtual light source being the intersection of the virtual circle with the lines connecting the plurality of light sources to the circle center, the center of the virtual circle being the geometric center of the plurality of light sources and the radius of the virtual circle being the average distance from the plurality of light sources to the center; the second building module is configured to construct a derived virtual light source corresponding to each initial virtual light source, the position of the derived virtual light source being the intersection with the virtual circle of the line connecting the initial virtual light source and the circle center after rotation by a preset angle; and the first calculation module is configured to calculate, using an interpolation algorithm and according to the calibrated light source image set, the virtual light source images under initial and derived virtual light source illumination, to obtain a virtual light source image set.
  • the foregoing first building module, the second building module, and the first calculating module correspond to step S702 to step S706 in Embodiment 1, and the foregoing modules are the same as the examples and application scenarios implemented by the corresponding steps. However, it is not limited to the contents disclosed in the above embodiment 1. It should be noted that the above modules may be implemented as part of a device in a computer system such as a set of computer executable instructions.
  • The first calculation module comprises a triangulation module and a second calculation module, wherein the triangulation module is configured to triangulate the plurality of light sources on the image plane of the main camera using a triangulation algorithm to obtain split triangles, and the second calculation module is configured to calculate, according to the positional relationship between the initial and derived virtual light sources and the split triangles, the virtual light source images under initial and derived virtual light source illumination from the calibrated light source image set using different formulas.
  • The above triangulation module and second calculation module correspond to steps S802 to S804 in Embodiment 1, and the modules are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the contents disclosed in Embodiment 1 above. It should be noted that the above modules may be implemented as part of a device in a computer system such as a set of computer-executable instructions.
  • The formula used in the sixth determining module uses the following notation: i denotes the index of an initial virtual light source; j denotes the pixel position; k denotes the index of a derived virtual light source; 2m denotes the total number of initial virtual light sources; kall denotes the total number of derived virtual light sources corresponding to each initial virtual light source; Vij denotes the color value of the initial virtual light source image i at pixel position j; V′ijk denotes the color value at pixel position j of the k-th derived virtual light source image corresponding to the initial virtual light source i; R, G and B denote the red, green and blue channels; Pj denotes the second intermediate variable set; kj denotes the k at which Pj attains its minimum; α denotes a preset constant; and θj denotes the azimuth at pixel position j.
  • Each pixel corresponds to two contour lines, and the fifth determining module includes an expansion module, a third calculation module and a statistics module, wherein the expansion module is configured to expand, for each pixel, a plurality of new pixels in the directions of the two contour lines corresponding to that pixel; the third calculation module is configured to calculate the positions of the plurality of new pixels according to the azimuth of the pixel using a bilinear interpolation algorithm; and the statistics module is configured to count the set of positions of the new pixels on each contour line.
  • The foregoing expansion module, third calculation module and statistics module correspond to steps S902 to S906 in Embodiment 1, and the foregoing modules are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the contents disclosed in Embodiment 1 above. It should be noted that the above modules may be implemented as part of a device in a computer system such as a set of computer-executable instructions.
  • the first diffusion module includes a second diffusion module configured to sequentially spread the sparse point cloud in the sparse point cloud set according to the position set to obtain a dense three-dimensional point cloud.
  • the foregoing second diffusion module corresponds to step S1002 in Embodiment 1, and the foregoing modules are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the contents disclosed in Embodiment 1 above. It should be noted that the above modules may be implemented as part of a device in a computer system such as a set of computer executable instructions.
  • The splicing fusion module includes a splicing module and/or a fusion module, wherein the splicing module is configured to splice the dense three-dimensional point clouds at different viewing angles using an iterative closest point algorithm to obtain the spliced dense three-dimensional point cloud, and the fusion module is configured to perform point cloud fusion on the spliced dense three-dimensional point cloud using a truncated signed distance function.
  • the foregoing splicing module and the merging module correspond to step S1102 to step S1104 in Embodiment 1, and the foregoing modules are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the above-mentioned Embodiment 1 Content. It should be noted that the above modules may be implemented as part of a device in a computer system such as a set of computer executable instructions.
  • the first obtaining module includes a fourth acquiring module configured to acquire a sparse point cloud set of teeth at different viewing angles by using a three-dimensional imaging system in a binocular reconstruction manner, wherein the three-dimensional imaging system further includes From the camera.
  • the foregoing fourth obtaining module corresponds to step S1202 in Embodiment 1, and the foregoing modules are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the contents disclosed in Embodiment 1 above. It should be noted that the above modules may be implemented as part of a device in a computer system such as a set of computer executable instructions.
  • a product embodiment of a three-dimensional data reconstruction system for teeth comprising the above-described three-dimensional data reconstruction device for teeth, further comprising a three-dimensional imaging system; the three-dimensional imaging system comprising a main camera, a slave camera, and Multiple light sources.
  • a product embodiment of a storage medium comprising a stored program, wherein the device in which the storage medium is located is controlled to execute the above-described three-dimensional data reconstruction method of the tooth when the program is running.
  • A product embodiment of a processor is provided, the processor being configured to execute a program, wherein the above-described three-dimensional tooth data reconstruction method is performed while the program is running.
  • A product embodiment of a terminal includes a first acquisition module, a first determining module, a splicing fusion module and a processor, wherein the first acquisition module is configured to acquire, using a three-dimensional imaging system, a sparse point cloud set of teeth at different viewing angles.
  • The three-dimensional imaging system comprises a main camera and a plurality of light sources.
  • The first determining module is configured to determine, according to the sparse point cloud set, a dense three-dimensional point cloud of the teeth at each viewing angle.
  • The splicing fusion module is configured to splice and fuse the dense three-dimensional point clouds at different viewing angles to obtain the three-dimensional data of the teeth.
  • The processor runs a program, wherein the program, when running, performs the above-described three-dimensional tooth data reconstruction method on the data output from the first acquisition module, the first determining module and the splicing fusion module.
  • A product embodiment of a terminal includes a first acquisition module, a first determining module, a splicing fusion module and a storage medium, wherein the first acquisition module is configured to acquire, using a three-dimensional imaging system, a sparse point cloud set of teeth at different viewing angles.
  • The three-dimensional imaging system comprises a main camera and a plurality of light sources.
  • The first determining module is configured to determine, according to the sparse point cloud set, a dense three-dimensional point cloud of the teeth at each viewing angle.
  • The splicing fusion module is configured to splice and fuse the dense three-dimensional point clouds at different viewing angles to obtain the three-dimensional data of the teeth.
  • The storage medium is configured to store a program, wherein the program, when running, performs the above-described three-dimensional tooth data reconstruction method on the data output from the first acquisition module, the first determining module and the splicing fusion module.
  • the disclosed technical contents may be implemented in other manners.
  • the device embodiments described above are only schematic.
  • The division of the units may be a logical function division; in actual implementation, there may be other division manners. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, unit or module, and may be electrical or otherwise.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium.
  • The technical solution of the present invention, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, which includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the various embodiments of the present invention.
  • The foregoing storage medium includes media that can store program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Dental Tools And Instruments Or Auxiliary Dental Instruments (AREA)

Abstract

The present invention relates to a method, device and system for reconstructing three-dimensional tooth data. The method comprises the steps of: acquiring a sparse point cloud set of teeth at different viewing angles using a three-dimensional imaging system, the three-dimensional imaging system comprising a main camera and a plurality of light sources; determining a dense three-dimensional point cloud of the teeth at each viewing angle according to the sparse point cloud set; and splicing and fusing the dense three-dimensional point clouds at different viewing angles to obtain the three-dimensional tooth data. The prior-art technical problem of high hardware requirements and computational complexity when collecting three-dimensional tooth data is thereby solved.
PCT/CN2018/082235 2017-10-31 2018-04-09 Method, device and system for reconstructing three-dimensional tooth data WO2019085392A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711052902.XA CN108269300B (zh) 2017-10-31 2017-10-31 Method, device and system for reconstructing three-dimensional tooth data
CN201711052902.X 2017-10-31

Publications (1)

Publication Number Publication Date
WO2019085392A1 true WO2019085392A1 (fr) 2019-05-09

Family

ID=62771692

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/082235 WO2019085392A1 (fr) Method, device and system for reconstructing three-dimensional tooth data

Country Status (2)

Country Link
CN (1) CN108269300B (fr)
WO (1) WO2019085392A1 (fr)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109410318B (zh) 2018-09-30 2020-09-08 先临三维科技股份有限公司 Three-dimensional model generation method, device, equipment and storage medium
CN109489583B (zh) * 2018-11-19 2021-09-17 先临三维科技股份有限公司 Projection device, acquisition device, and three-dimensional scanning system having the same
CN110276758B (zh) * 2019-06-28 2021-05-04 电子科技大学 Dental occlusion analysis system based on point cloud spatial features
CN112146564B (zh) * 2019-06-28 2022-04-15 先临三维科技股份有限公司 Three-dimensional scanning method, device, computer equipment and computer-readable storage medium
CN110864613B (zh) * 2019-11-05 2021-05-04 北京航空航天大学 Food volume measurement method based on an electric field force model
CN110998671B (zh) * 2019-11-22 2024-04-02 驭势科技(浙江)有限公司 Three-dimensional reconstruction method, device, system and storage medium
CN111710426A (zh) * 2020-05-14 2020-09-25 先临三维科技股份有限公司 Tooth model hole-filling method, device, system and computer-readable storage medium
CN111798571A (zh) * 2020-05-29 2020-10-20 先临三维科技股份有限公司 Tooth scanning method, device, system and computer-readable storage medium
CN111784754B (zh) * 2020-07-06 2024-01-12 浙江得图网络有限公司 Computer-vision-based orthodontics method, device, equipment and storage medium
CN113610172B (zh) * 2021-08-13 2023-08-18 北京地平线信息技术有限公司 Neural network model training method and device, and sensor data fusion method and device
CN113658329A (zh) * 2021-08-17 2021-11-16 南方电网调峰调频发电有限公司 Fine three-dimensional modeling method and system for building frame models

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050018901A1 (en) * 2003-07-23 2005-01-27 Orametrix, Inc. Method for creating single 3D surface model from a point cloud
CN106504321A * 2016-11-07 2017-03-15 达理 Method for reconstructing a three-dimensional dental model from photos or videos, and method for reconstructing a three-dimensional dental model from RGBD images
CN106537225A * 2014-05-27 2017-03-22 F·迪莱特 Device for visualizing the inside of a patient's oral cavity
CN106600675A * 2016-12-07 2017-04-26 西安蒜泥电子科技有限责任公司 Point cloud synthesis method based on depth map constraints
CN106600531A * 2016-12-01 2017-04-26 深圳市维新登拓医疗科技有限公司 Handheld scanner, and handheld scanner point cloud stitching method and device
CN106875472A * 2017-01-16 2017-06-20 成都信息工程大学 3D tooth imaging and modeling method
CN107220928A * 2017-05-31 2017-09-29 中国工程物理研究院应用电子学研究所 Method for converting tooth CT image pixel data into 3D printing data

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010099036A1 * 2009-02-25 2010-09-02 Dimensional Photonics International, Inc. Intensity and color display for a three-dimensional metrology system
EP3091508B1 * 2010-09-03 2018-12-26 California Institute of Technology Three-dimensional imaging system
CN102496183B * 2011-11-03 2013-12-25 北京航空航天大学 Multi-view stereo reconstruction method based on Internet photo collections
CN104346608B * 2013-07-26 2017-09-08 株式会社理光 Sparse depth map densification method and device
TWI556798B * 2014-05-27 2016-11-11 Metal Ind Res & Dev Ct Method of establishing a three-dimensional image of teeth
EP3178067A4 * 2014-08-08 2018-12-05 Carestream Dental Technology Topco Limited Facial texture mapping to a volume image
CN104867183A * 2015-06-11 2015-08-26 华中科技大学 Three-dimensional point cloud reconstruction method based on region growing
CN106802138B * 2017-02-24 2019-09-24 先临三维科技股份有限公司 Three-dimensional scanning system and scanning method thereof


Also Published As

Publication number Publication date
CN108269300A (zh) 2018-07-10
CN108269300B (zh) 2019-07-09

Similar Documents

Publication Publication Date Title
WO2019085392A1 (fr) Method, device and system for reconstructing three-dimensional tooth data
WO2021121320A1 (fr) Multi-mode three-dimensional scanning method and system
JP6564537B1 (ja) Three-dimensional reconstruction method and device using a monocular three-dimensional scanning system
KR102248944B1 (ko) Three-dimensional scanning system and scanning method thereof
CN108876926B (zh) Navigation method and system in a panoramic scene, and AR/VR client device
WO2020038277A1 (fr) Image acquisition and processing methods and apparatuses for three-dimensional scanning, and three-dimensional scanning device
WO2021203883A1 (fr) Three-dimensional scanning method, three-dimensional scanning system and computer-readable storage medium
WO2017008226A1 (fr) Three-dimensional facial reconstruction method and system
WO2019007180A1 (fr) Handheld large-scale three-dimensional measurement scanner system having both three-dimensional scanning and photogrammetry functions
US20120176478A1 (en) Forming range maps using periodic illumination patterns
KR20180003535A (ko) Lidar-stereo fusion photorealistic 3D model virtual reality video
US20120176380A1 (en) Forming 3d models using periodic illumination patterns
TW201922163A (zh) System and method for analyzing skin conditions
CN109242898B (zh) Three-dimensional modeling method and system based on image sequences
JP2012527787A (ja) Method for fast stereo reconstruction from images
Reichinger et al. Evaluation of methods for optical 3-D scanning of human pinnas
TWI581051B (zh) Three-dimensional panoramic image generation method
JP2016537901A (ja) Light field processing method
CN106408664A (zh) Three-dimensional model surface reconstruction method based on a three-dimensional scanning device
CN112233165B (zh) Baseline extension implementation method based on learned multi-plane-image view synthesis
WO2018032841A1 (fr) Three-dimensional image rendering method, device and system
KR20120018915A (ko) Method and device for generating a depth image having the same viewpoint and resolution as a color image
WO2019085402A1 (fr) Intraoral 3D scanning device and method
WO2018056802A1 (fr) Method for estimating three-dimensional depth values from two-dimensional images
JP4193292B2 (ja) Multi-view data input device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18873739

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18873739

Country of ref document: EP

Kind code of ref document: A1