CN112017280A - Method for generating digital tooth model with color texture information - Google Patents


Publication number
CN112017280A
Authority
CN
China
Prior art keywords
point
tooth model
cusp
points
value
Prior art date
Legal status
Granted
Application number
CN202010981631.1A
Other languages
Chinese (zh)
Other versions
CN112017280B (en)
Inventor
唐会
熊体超
凌永权
吴志杰
庞康高
李康荣
李观华
林宇恒
郑而容
鮑浩能
Current Assignee
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date
Filing date
Publication date
Application filed by Guangdong University of Technology
Priority to CN202010981631.1A
Publication of CN112017280A
Application granted
Publication of CN112017280B
Legal status: Active
Anticipated expiration

Classifications

    • G06T 17/00 — Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06F 18/23 — Pattern recognition; analysing; clustering techniques
    • G06T 19/20 — Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T 7/12 — Image analysis; segmentation; edge-based segmentation
    • G06T 7/344 — Image registration using feature-based methods involving models
    • G06T 2207/30036 — Biomedical image processing; dental; teeth
    • G06T 2219/2012 — Colour editing, changing, or manipulating; use of colour codes


Abstract

The invention provides a method for generating a digital tooth model with color texture information. The method removes the tedious manual selection of corresponding points and the subjective error that manual selection introduces; cusp points are clearer and more reliable than other correspondences from the viewpoints of both the two-dimensional image and the three-dimensional model, so the accuracy is high and the registration result is good. The method overcomes the main shortcomings of two-dimensional digital smile design and thereby improves digital smile design.

Description

Method for generating digital tooth model with color texture information
Technical Field
The present invention relates to the field of dental aesthetics, and more particularly, to a method of generating a digitized tooth model with color texture information.
Background
In recent years, a dental aesthetic design approach called Digital Smile Design (DSD) has swept the field of dentistry. A DSD treatment plan takes into account the relationships between lips, gums, teeth and facial features, and is based on a comprehensive analysis of the patient's dental and facial proportions. DSD can also show the postoperative result before the operation, so the patient can preview the outcome, better share goals with the medical team, express wishes and expectations, and take part in the treatment as a co-designer of the plan. DSD integrates oral aesthetic concepts into oral treatment and uses a computer to design and plan the length, corners, surface texture, color, contour and arrangement regularity of the teeth exposed when smiling, achieving the dual effect of tooth restoration and an attractive smile.
At present, most traditional DSD software is two-dimensional: oral pictures are imported into the software for analysis and design. Oral pictures, however, offer only a limited viewing angle, allowing the doctor to assess the patient's smile from a single viewpoint; when the doctor's clinical experience is insufficient, the designed treatment plan may be over-idealized and fail to reach the expected goal. Oral pictures are also sensitive to shooting angle and similar problems, and picture distortion makes the design inaccurate, so the pictures cannot adapt well to a three-dimensional environment. Smile design based on oral pictures is therefore clinically limited.
Recently, three-dimensional facial photographs have been proposed for smile design, and instruments for generating three-dimensional facial photographs have become mature, but it is not easy to acquire digitized dentition shapes with color texture information. When a rubber impression is generated by impressing the teeth of a patient, color texture information is lost; the uncertainty of tooth reflection characteristics and the difficulty in controlling and standardizing background illumination in the process of using the oral cavity scanner limit the clinical application of the oral cavity scanner in DSD.
With the development of image-to-geometric-model registration techniques, it has become possible to combine the advantages of two-dimensional images with three-dimensional models. Hsung et al. (2018) proposed a method for generating a tooth model with color texture information based on an image-to-geometric-model registration algorithm, in which the corresponding points for registering the oral cavity image to the three-dimensional tooth model are gingival papillae (the sharp points where the gingiva borders two adjacent teeth). The edge between gingiva and tooth, however, is usually weak, so when a plaster model is made from a patient impression the gingiva-tooth edge is usually blurred; as a result, the extracted 3D corresponding points typically deviate by 2-3 mm from the corresponding points on the 2D photo, degrading the registration.
Chinese patent CN 107252356 B, published on 24 July 2020, provides a digital oral aesthetic restoration method comprising the following steps: acquire a digital image of the patient's smiling face and a three-dimensional digital model of the intraoral dental crowns; obtain a two-dimensional digital aesthetic design sheet from the facial smile image; superimpose the two-dimensional design sheet and the intraoral crown model using a three-dimensional point-cloud digital stitching technique; and add a standard three-dimensional crown-veneer digital model at the corresponding position in the intraoral crown model to obtain a crown-veneer model designed for the patient. That method superimposes the design sheet and the crown model using non-collinear feature mark points, but the standard and method for selecting the mark points are unclear, the registration quality cannot be guaranteed, and the resulting three-dimensional digital dental model differs considerably from the real teeth, with low accuracy.
Disclosure of Invention
The invention provides a method for generating a digital tooth model with color texture information, aiming to overcome the difficulty of giving existing tooth models accurate color texture information.
The technical scheme of the invention is as follows:
the invention provides a method for generating a digital tooth model with color texture information, which comprises the following steps:
s1: acquiring an oral cavity picture and an initial tooth model; the oral cavity picture has accurate tooth color texture information, and the initial tooth model does not have tooth color texture information;
s2: extracting a cusp point on the oral cavity picture;
s3: extracting cusp points on the initial tooth model;
s4: complete the registration and mapping from the oral cavity picture to the initial tooth model, using the cusp points obtained in S2 and S3 as corresponding points, to obtain the digital tooth model with accurate color texture information.
Preferably, the specific operation of S1 is: the oral cavity picture is obtained by oral photography, and the initial tooth model is obtained by a digital model-making (impression) method.
Preferably, the specific steps of S2 are:
s2.1: selecting a region of interest: in the oral cavity picture I_A, select a region of interest, denoted I_R(x, y), of size m × n; x and y are the horizontal and vertical coordinates of a planar rectangular coordinate system, m is the length of I_R(x, y) along the horizontal axis, and n is its length along the vertical axis; the region of interest contains the lower edge contour of the teeth on which the cusp points lie;
s2.2: extract the cusp-point edge contours in I_R(x, y);
s2.3: extract the cusp points in I_R(x, y);
s2.4: combine the cusp points of every region of interest to obtain all cusp points in I_A.
Preferably, the specific steps of S2.2 are:
s2.2.1: convert I_R(x, y) to CIELAB to obtain the three components of I_R(x, y): lightness L(x, y) and the color-opponent dimensions a(x, y) and b(x, y);
compute the hue angle θ_ab(x, y) of each pixel of I_R(x, y) by the formula

θ_ab(x, y) = arctan( b(x, y) / a(x, y) ),

and multiply the lightness and hue angle of each pixel to obtain the product L_θ(x, y) = L(x, y) · θ_ab(x, y);
s2.2.2: cluster the pixels of I_R(x, y) according to the differences in their L_θ(x, y) values into three classes: C1, the buccal surface of the teeth; C2, the non-buccal tooth surface; C3, the non-tooth region;
s2.2.3: compute, for each column of pixels, the distance from the lower boundary of the C1 region to the upper boundary of I_R(x, y), giving a one-dimensional sequence for the C1 region in I_R(x, y), i.e. the lower boundary contour line of the buccal tooth surface, denoted Line1;
s2.2.4: merge the C1 and C2 regions into the complete tooth region C12; compute, for each column of pixels, the distance from the lower boundary of C12 to the upper boundary of I_R(x, y), giving the one-dimensional sequence of the C12 region, i.e. the lower boundary contour of the tooth region, denoted Line2.
Preferably, the specific steps of S2.3 are:
s2.3.1: subtract Line2 from Line1; if there is a segment of more than m × 5% consecutive pixels whose difference value exceeds n × 10%, Line2 has cusp points on that segment; denote these segments A;
s2.3.2: optimize Line2 over the segments A, column by column (i denotes the i-th column of I_R(x, y)), to obtain Line3; the exact optimization formula is given in the original only as an image;
s2.3.3: search Line1 and Line3 for peak points to obtain cusp points, then take the logical AND of the cusp points extracted from Line1 and Line3 to obtain all cusp points in I_A, with coordinates denoted (x_i, y_i).
Preferably, the Fuzzy C-Means algorithm is used when clustering the pixels in S2.2.2.
Preferably, in S2.3.3 the peak-point search is performed with an autofindpeak function.
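The peak-point search of S2.3.3 can be approximated with SciPy's find_peaks as a stand-in (a sketch only; the patent names an autofindpeak function, and the profile below is synthetic):

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic Line1-style profile: per-column distance from the lower
# tooth contour to the ROI's upper edge; cusp tips appear as local maxima.
line1 = np.array([3, 4, 6, 9, 6, 4, 5, 8, 11, 8, 5, 4], dtype=float)

peaks, _ = find_peaks(line1)   # candidate cusp columns
```

On real contours, passing a minimum prominence or distance argument to find_peaks helps suppress spurious noise peaks.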
Preferably, the specific steps of S3 are:
s3.1: convert the initial tooth model into a depth image R of size M × N, where M is the length of the depth image R along the horizontal axis and N is its length along the vertical axis;
straighten the initial tooth model so that it is parallel to the occlusal plane, then map it onto the occlusal plane according to the following rule to form the depth image R:
let the coordinates of any point on the initial tooth model be (x_v, y_v, z_v); the value of each pixel of the depth image R is the distance to the occlusal plane of the point closest to the occlusal plane within a square sampling window of side length c on the initial tooth model, so the value at point (x, y) of the depth image is

R(x, y) = min{ z_v : x · P_x − S_x ≤ x_v ≤ x · P_x + S_x; y · P_y − S_y ≤ y_v ≤ y · P_y + S_y },

and the relationship between a coordinate (x, y) in the depth image and the corresponding point coordinate (x_v, y_v, z_v) on the initial tooth model is x = x_v / P_x, y = y_v / P_y,
where P_x and P_y are the pixel sizes of the depth image and S_x and S_y are half the side length c of the sampling window;
s3.2: segment the teeth from the gums:
split the pixels of each row of the depth image R(x, y) into two halves; within each half, any pixel whose value exceeds the minimum of that half by more than a critical value is regarded as gum;
s3.3: extract the cusp point coordinates:
place four columns of seed points at equal intervals in the depth image, with no fewer seed points per column than three times the number of teeth on one side; the seeds in columns 1 and 3 use a gradient descent with direction priority 1 to find local minima to the right of the seed point in the depth image, and the seeds in columns 2 and 4 use a gradient descent with direction priority 2 to find local minima to the left; the priority orders are: 1. right > left > down > up; 2. left > right > down > up;
take each local minimum found as a seed point for a region-growing algorithm that finds the equal-valued points in its neighbourhood; from the centre coordinates (x, y) of each local-minimum region and its value, the corresponding cusp point on the initial tooth model can be back-calculated as

(x_pi, y_pi, z_pi) = (x · P_x, y · P_y, R(x, y)),

the inverse of the mapping in S3.1.
Preferably, the step count of each move in the gradient descent of S3.3 is computed as follows, taking the rightward direction as an example: with the forward difference ΔR(x, y) = R(x + 1, y) − R(x, y),

step = n + 1, if ΔR(x + i_R, y) = 0 for i_R = 0, …, n − 1 and ΔR(x + n, y) < 0; step = 0, otherwise,

where step is the number of steps moved; n has the range 0, …, N − 1 with x + n < N, and i_R = 0, …, n − 1 indexes the columns of the depth image R;
ΔR(x + i_R, y) = 0 means the pixel values in R from (x, y) to (x + n, y) are equal, with no gradient descent; ΔR(x + n, y) < 0 means a gradient descent exists between pixels (x + n + 1, y) and (x + n, y) in R; together: the pixel values are equal from (x, y) to (x + n, y) with no descent, a descent exists between (x + n, y) and (x + n + 1, y), and the point (x, y) therefore moves n + 1 steps;
if the computed step count is 0, the move count of the direction one priority lower is computed: if the rightward step count in direction priority 1 is 0, the leftward move count is computed; if both the rightward and leftward counts are 0, the downward direction is tried, and finally the upward direction;
to compute the step count in the leftward direction, the forward difference in the two formulas above is replaced by the backward difference ΔR(x, y) = R(x − 1, y) − R(x, y); for the downward and upward directions, the forward and backward differences are taken along the y dimension instead of x: ΔR(x, y) = R(x, y ± 1) − R(x, y), where y + 1 is the forward difference (downward direction) and y − 1 the backward difference (upward direction).
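The rightward step-count rule can be sketched directly (a hypothetical helper; row-major R[y, x] indexing is assumed):

```python
import numpy as np

def step_right(R, x, y):
    """Step count for a rightward move per S3.3: skip the flat run of
    equal values starting at (x, y); if the first non-zero forward
    difference is a descent, move n + 1 steps, otherwise 0."""
    N = R.shape[1]
    n = 0
    while x + n + 1 < N and R[y, x + n + 1] == R[y, x + n]:
        n += 1                      # Delta R == 0: still on the plateau
    if x + n + 1 < N and R[y, x + n + 1] < R[y, x + n]:
        return n + 1                # Delta R < 0: descend past the plateau
    return 0                        # no descent to the right

R = np.array([[5.0, 5.0, 5.0, 4.0, 3.0]])
```

The left, down, and up variants only change the walking direction, mirroring the backward and y-dimension differences described above.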
Preferably, the specific steps of S4 are:
s4.1: registration: convert the initial tooth model into a two-dimensional photograph I_B(C) through a mapping matrix with camera parameters C, and convert the cusp points (x_pi, y_pi, z_pi) of the initial tooth model into coordinates (x_pi(C), y_pi(C)) in the two-dimensional photograph I_B(C); the camera parameters C are computed by minimising

F(C) = k · E(Corr, C) − (1 − k) · MI(I_A, I_B(C)),

where k is a weight parameter; MI(I_A, I_B(C)) is the mutual information between the oral cavity picture I_A and the two-dimensional photograph I_B(C),

MI(I_A, I_B(C)) = Σ_{a,b} p(a, b) · log( p(a, b) / (p(a) · p(b)) ),

where p(a) is the probability that a pixel of I_A has grey value a, p(b) is the probability that a pixel of I_B(C) has grey value b, and p(a, b) is the joint probability that a pixel has value a in I_A and value b in I_B(C); E(Corr, C) is the average Euclidean distance between the cusp coordinates (x_i, y_i) in the oral cavity picture I_A and the corresponding cusp coordinates (x_pi(C), y_pi(C)) in the two-dimensional photograph I_B(C),

E(Corr, C) = (1 / N_p) · Σ_i sqrt( (x_i − x_pi(C))² + (y_i − y_pi(C))² ),

where N_p is the number of cusp correspondences; the camera parameters C computed when F(C) takes its minimum give the correspondence between the initial tooth model and the oral cavity picture, C = (θ, φ, ψ, t_x, t_y, t_z, f), where θ, φ, ψ are the Euler angles between the three axes of the camera coordinate system and the three axes of the spatial coordinate system of the initial tooth model, t_x, t_y, t_z are the translations of the camera coordinate system along those three axes, and f is the focal length of the camera;
s4.2: mapping: using the camera parameters C, assign the color texture information of each pixel of the oral cavity picture to the corresponding point on the initial tooth model, obtaining the digital tooth model with color texture information.
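The two ingredients of the S4.1 objective can be sketched as follows. The combination k·E − (1 − k)·MI is a hedged reading of the patent's formula (mutual information maximised, cusp distance minimised), and the histogram binning is an illustrative choice:

```python
import numpy as np

def mutual_information(A, B, bins=8):
    """MI(A, B) = sum_ab p(a, b) * log(p(a, b) / (p(a) p(b))),
    estimated from a joint grey-value histogram (sketch of S4.1)."""
    joint, _, _ = np.histogram2d(A.ravel(), B.ravel(), bins=bins)
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)   # marginal over B
    p_b = p_ab.sum(axis=0, keepdims=True)   # marginal over A
    nz = p_ab > 0
    return float((p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])).sum())

def registration_cost(mi, mean_cusp_dist, k=0.5):
    """Combined objective to minimise; k = 0.5 follows the embodiment."""
    return k * mean_cusp_dist - (1.0 - k) * mi

A = np.arange(16, dtype=float).reshape(4, 4)
mi_self = mutual_information(A, A)                 # identical images
mi_flat = mutual_information(A, np.zeros((4, 4)))  # independent images
```

Identical images give MI equal to the image entropy, while a constant image shares no information with A, so the cost correctly prefers the aligned pair.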
The technical scheme of the invention has the beneficial effects that:
the invention provides a method for generating a digital tooth model with color texture information, which can obtain the digital tooth model with vivid color texture information. The method automatically extracts the corresponding points between the oral cavity picture and the initial tooth model, reduces the complicated steps of manually selecting the corresponding points, and avoids the subjective error caused by manually selecting the corresponding points; in addition, the corresponding points selected by the invention are cusp points, and compared with other corresponding points, the cusp points are clearer and more reliable in the visual angles of the two-dimensional image and the three-dimensional model, so that the accuracy is higher, and the final registration effect is more ideal.
Drawings
FIG. 1 is a flowchart of a method for generating a digital dental model with color texture information as described in example 1.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the patent;
for the purpose of better illustrating the embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product;
it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
The technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
Example 1
As shown in fig. 1, the present embodiment provides a method for generating a digital tooth model with color texture information, comprising the steps of:
s1: acquiring an oral cavity picture and an initial tooth model; the oral cavity picture has accurate tooth color texture information, and the initial tooth model does not have tooth color texture information;
s2: extracting a cusp point on the oral cavity picture;
s3: extracting cusp points on the initial tooth model;
s4: complete the registration and mapping from the oral cavity picture to the initial tooth model, using the cusp points obtained in S2 and S3 as corresponding points, to obtain the digital tooth model with accurate color texture information.
The specific operation of S1 is as follows: the oral cavity picture is obtained by oral photography, and the initial tooth model is obtained by a digital model-making (impression) method.
The specific steps of S2 are as follows:
s2.1: selecting a region of interest: in the oral cavity picture I_A, select a region of interest, denoted I_R(x, y), of size m × n; x and y are the horizontal and vertical coordinates of a planar rectangular coordinate system, m is the length of I_R(x, y) along the horizontal axis, and n is its length along the vertical axis; the region of interest contains the lower edge contour of the teeth on which the cusp points lie; in this embodiment the region of interest is 2000 × 200, i.e. m = 2000 and n = 200;
s2.2: extract the cusp-point edge contours in I_R(x, y);
s2.3: extract the cusp points in I_R(x, y);
s2.4: combine the cusp points of every region of interest to obtain all cusp points in I_A.
The specific steps of S2.2 are as follows:
s2.2.1: convert I_R(x, y) to CIELAB to obtain the three components of I_R(x, y): lightness L(x, y) and the color-opponent dimensions a(x, y) and b(x, y);
compute the hue angle θ_ab(x, y) of each pixel of I_R(x, y) by the formula

θ_ab(x, y) = arctan( b(x, y) / a(x, y) ),

and multiply the lightness and hue angle of each pixel to obtain the product L_θ(x, y) = L(x, y) · θ_ab(x, y);
s2.2.2: cluster the pixels of I_R(x, y) according to the differences in their L_θ(x, y) values into three classes: C1, the buccal surface of the teeth; C2, the non-buccal tooth surface; C3, the non-tooth region;
s2.2.3: compute, for each column of pixels, the distance from the lower boundary of the C1 region to the upper boundary of I_R(x, y), giving a one-dimensional sequence for the C1 region in I_R(x, y), i.e. the lower boundary contour line of the buccal tooth surface, denoted Line1;
s2.2.4: merge the C1 and C2 regions into the complete tooth region C12; compute, for each column of pixels, the distance from the lower boundary of C12 to the upper boundary of I_R(x, y), giving the one-dimensional sequence of the C12 region, i.e. the lower boundary contour of the tooth region, denoted Line2.
The specific steps of S2.3 are as follows:
s2.3.1: subtract Line2 from Line1; if there is a segment of more than m × 5% consecutive pixels whose difference value exceeds n × 10%, Line2 has cusp points on that segment; denote these segments A; in this embodiment, with m = 2000 and n = 200, a segment of more than 100 consecutive pixels whose difference exceeds 20 indicates that Line2 has cusp points on that segment;
s2.3.2: optimize Line2 over the segments A, column by column (i denotes the i-th column of I_R(x, y)), to obtain Line3; the exact optimization formula is given in the original only as an image;
s2.3.3: search Line1 and Line3 for peak points to obtain cusp points, then take the logical AND of the cusp points extracted from Line1 and Line3 to obtain all cusp points in I_A, with coordinates denoted (x_i, y_i).
The Fuzzy C-Means algorithm is used when clustering the pixels in S2.2.2.
In S2.3.3, the peak-point search is performed with an autofindpeak function.
The specific steps of S3 are as follows:
s3.1: convert the initial tooth model into a depth image R of size M × N, where M is the length of the depth image R along the horizontal axis and N is its length along the vertical axis;
straighten the initial tooth model so that it is parallel to the occlusal plane, then map it onto the occlusal plane according to the following rule to form the depth image R:
let the coordinates of any point on the initial tooth model be (x_v, y_v, z_v); the value of each pixel of R is the distance to the occlusal plane of the point closest to the occlusal plane within a square sampling window of side length c on the initial tooth model; in this embodiment the side length c is 0.5, so the value at point (x, y) of the depth image R is

R(x, y) = min{ z_v : x · P_x − S_x ≤ x_v ≤ x · P_x + S_x; y · P_y − S_y ≤ y_v ≤ y · P_y + S_y },

and the relationship between a coordinate (x, y) in the depth image and the corresponding point coordinate (x_v, y_v, z_v) on the initial tooth model is x = x_v / P_x, y = y_v / P_y, where P_x and P_y are the pixel sizes of the depth image and S_x and S_y are half the side length c of the sampling window; in this embodiment the pixel size is 0.1, so P_x and P_y are 0.1 and S_x and S_y are 0.25;
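The projection of S3.1 can be sketched as a brute-force pass over a point cloud (assumed already straightened parallel to the occlusal plane; the default parameters follow the embodiment):

```python
import numpy as np

def depth_image(points, M, N, Px=0.1, Py=0.1, Sx=0.25, Sy=0.25):
    """Sketch of S3.1: project a straightened point cloud onto the
    occlusal plane. Each pixel keeps the minimum z inside a square
    window of half-side Sx, Sy around the pixel centre; pixels whose
    window is empty stay at +inf. Defaults follow the embodiment
    (pixel size 0.1, window half-side 0.25). Brute force for clarity."""
    R = np.full((N, M), np.inf)
    for x in range(M):
        for y in range(N):
            cx, cy = x * Px, y * Py        # pixel centre in model coords
            mask = ((np.abs(points[:, 0] - cx) <= Sx) &
                    (np.abs(points[:, 1] - cy) <= Sy))
            if mask.any():
                R[y, x] = points[mask, 2].min()
    return R

pts = np.array([[0.0, 0.0, 2.0],   # sample (x_v, y_v, z_v) points
                [0.9, 0.0, 1.0]])
R = depth_image(pts, M=10, N=1)
```

A production version would bin the points into a grid first instead of scanning the whole cloud per pixel, but the window-minimum rule is the same.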
s3.2: segment the teeth from the gums:
split the pixels of each row of the depth image R(x, y) into two halves; within each half, any pixel whose value exceeds the minimum of that half by more than a critical value is regarded as gum; in this embodiment the critical value is 4;
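The tooth/gum split of S3.2 can be sketched with a hypothetical helper (each row is halved and a pixel counts as gum when it sits more than the critical value above its half's minimum; the default uses the embodiment's value 4):

```python
import numpy as np

def gum_mask(R, critical=4.0):
    """Sketch of S3.2: halve each row of the depth image; flag as gum
    any pixel whose value exceeds the minimum of its half-row by more
    than `critical` (4 in the embodiment)."""
    M = R.shape[1]
    mask = np.zeros(R.shape, dtype=bool)
    for half in (slice(0, M // 2), slice(M // 2, M)):
        seg = R[:, half]
        mask[:, half] = (seg - seg.min(axis=1, keepdims=True)) > critical
    return mask

R = np.array([[1.0, 2.0, 10.0, 1.0, 1.0, 9.0]])
mask = gum_mask(R)
```

Halving each row lets the two sides of the arch use independent depth baselines, so a tilt of the model does not push one whole side over the threshold.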
s3.3: extract the cusp point coordinates:
place four columns of seed points at equal intervals in the depth image, with no fewer seed points per column than three times the number of teeth on one side; in this embodiment the number of seed points is exactly three times the number of teeth on one side; the seeds in columns 1 and 3 use a gradient descent with direction priority 1 to find local minima to the right of the seed point in the depth image, and the seeds in columns 2 and 4 use a gradient descent with direction priority 2 to find local minima to the left; the priority orders are: 1. right > left > down > up; 2. left > right > down > up;
take each local minimum found as a seed point for a region-growing algorithm that finds the equal-valued points in its neighbourhood; from the centre coordinates (x, y) of each local-minimum region and its value, the corresponding cusp point on the initial tooth model can be back-calculated as (x_pi, y_pi, z_pi) = (x · P_x, y · P_y, R(x, y)), the inverse of the mapping in S3.1;
The step count of each move in the gradient descent of S3.3 is computed as follows, taking the rightward direction as an example: with the forward difference ΔR(x, y) = R(x + 1, y) − R(x, y),

step = n + 1, if ΔR(x + i_R, y) = 0 for i_R = 0, …, n − 1 and ΔR(x + n, y) < 0; step = 0, otherwise,

where step is the number of steps moved; n has the range 0, 1, …, N − 1 with x + n < N, and i_R = 0, …, n − 1 indexes the columns of the depth image R;
ΔR(x + i_R, y) = 0 means the pixel values in R from (x, y) to (x + n, y) are equal, with no gradient descent; ΔR(x + n, y) < 0 means a gradient descent exists between pixels (x + n + 1, y) and (x + n, y) in R; together: the pixel values are equal from (x, y) to (x + n, y) with no descent, a descent exists between (x + n, y) and (x + n + 1, y), and the point (x, y) therefore moves n + 1 steps;
if the computed step count is 0, the move count of the direction one priority lower is computed: if the rightward step count in direction priority 1 is 0, the leftward move count is computed; if both the rightward and leftward counts are 0, the downward direction is tried, and finally the upward direction;
to compute the step count in the leftward direction, the forward difference in the two formulas above is replaced by the backward difference ΔR(x, y) = R(x − 1, y) − R(x, y); for the downward and upward directions, the forward and backward differences are taken along the y dimension instead of x: ΔR(x, y) = R(x, y ± 1) − R(x, y), where y + 1 is the forward difference (downward direction) and y − 1 the backward difference (upward direction).
The specific steps of S4 are as follows:
S4.1: registration: the initial tooth model is converted into a two-dimensional photograph I_B(C) through a mapping matrix with camera parameters C, and the cusp points (x_pi, y_pi, z_pi) of the initial tooth model are converted into coordinates (x_pi(C), y_pi(C)) on the two-dimensional photograph I_B(C); the camera parameters C are calculated with a registration algorithm for the two-dimensional image and the three-dimensional model based on mutual information and the corresponding points, i.e. by minimizing

C* = arg min over C of [ k · E(Corr, C) − (1 − k) · MI(I_A, I_B(C)) ]

where k is a weight parameter (k = 0.5 in this embodiment); MI(I_A, I_B(C)) is the mutual information between the two-dimensional oral cavity picture I_A and the two-dimensional photograph I_B(C),

MI(I_A, I_B(C)) = Σ over a, b of p(a, b) · log( p(a, b) / ( p(a) · p(b) ) )

where p(a) denotes the probability that a pixel point in I_A has grey value a, p(b) the probability that a pixel point in I_B(C) has grey value b, and p(a, b) the joint probability that a pixel point has value a in I_A and value b in I_B(C); E(Corr, C) is the average of the Euclidean distances between the cusp coordinates (x_i, y_i) in the oral cavity picture I_A and the corresponding cusp coordinates (x_pi(C), y_pi(C)) in the two-dimensional photograph I_B(C),

E(Corr, C) = (1 / N_c) · Σ over i of sqrt( (x_i − x_pi(C))² + (y_i − y_pi(C))² )

where N_c is the number of cusp-point correspondences;
When k · E(Corr, C) − (1 − k) · MI(I_A, I_B(C)) takes its minimum value, the calculated camera parameters C give the correspondence between the initial tooth model and the oral cavity picture, C = (θ, φ, ψ, t_x, t_y, t_z, f), where θ, φ, ψ are the Euler angles between the three axes of the camera coordinate system and the three axes of the spatial coordinate system in which the initial tooth model lies, t_x, t_y, t_z are the translations of the camera coordinate system along the three axes of that spatial coordinate system, and f is the focal length of the camera;
S4.2: mapping: using the camera parameters C, the color texture information of each pixel point in the oral cavity picture is assigned to the corresponding point on the initial tooth model, thereby obtaining the digital tooth model with color texture information.
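The S4.1 registration objective can be sketched as follows (Python; `mutual_information` and `registration_cost` are hypothetical names, the weighted combination of mutual information and mean cusp distance is reconstructed from the text, and the optimization over camera parameters C itself is elided):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """MI(I_A, I_B(C)) from the joint grey-value histogram:
    sum over a, b of p(a,b) * log(p(a,b) / (p(a) * p(b)))."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)   # marginal over b
    p_b = p_ab.sum(axis=0, keepdims=True)   # marginal over a
    nz = p_ab > 0                           # avoid log(0)
    return float((p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])).sum())

def registration_cost(img_a, img_b, cusps_a, cusps_b, k=0.5):
    """k * E(Corr, C) - (1 - k) * MI: mean Euclidean distance between
    corresponding cusp points, traded off against mutual information."""
    e = float(np.mean(np.linalg.norm(np.asarray(cusps_a, float)
                                     - np.asarray(cusps_b, float), axis=1)))
    return k * e - (1.0 - k) * mutual_information(img_a, img_b)
```

With k = 1 the cost reduces to the mean cusp distance alone; with k = 0 it rewards only grey-value agreement between the oral photograph and the rendered model.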
The same or similar reference numerals correspond to the same or similar parts;
the terms describing positional relationships in the drawings are for illustrative purposes only and are not to be construed as limiting the patent;
It should be understood that the above-described embodiments of the present invention are merely examples for clearly illustrating the present invention and are not intended to limit its embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to enumerate all embodiments exhaustively here. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the claims of the present invention.

Claims (10)

1. A method of generating a digitized tooth model having color texture information, comprising the steps of:
s1: acquiring an oral cavity picture and an initial tooth model; the oral cavity picture has accurate tooth color texture information;
s2: extracting a cusp point on the oral cavity picture;
s3: extracting cusp points on the initial tooth model;
S4: taking the cusp points obtained in S2 and S3 as corresponding points, completing the registration and mapping from the oral cavity picture to the initial tooth model to obtain the digitized tooth model with accurate color texture information.
2. The method for generating a digital tooth model with color texture information as claimed in claim 1, wherein the specific operation method of S1 is: the oral cavity picture is obtained by an oral photography method, and the initial tooth model is obtained by a digital film printing method.
3. The method for generating a digital dental model with color texture information as claimed in claim 2, wherein the specific steps of S2 are:
S2.1: selecting a region of interest: in the oral cavity picture I_A, a region of interest is selected and denoted I_R(x, y), of size m × n; x and y are the horizontal and vertical coordinates in a rectangular plane coordinate system, m is the length of I_R(x, y) on the horizontal axis and n is its length on the vertical axis; the region of interest contains the lower tooth edge contour on which the cusp points lie;
S2.2: extracting the cusp-point edge contour in I_R(x, y);
S2.3: extracting the cusp points in I_R(x, y);
S2.4: performing a logical conjunction operation on the cusp points of each region of interest to obtain all cusp points of I_A.
4. The method of claim 3, wherein the specific steps of S2.2 are:
S2.2.1: converting I_R(x, y) to CIELAB to obtain the three components of I_R(x, y): the luminance L(x, y) and the color-opponent dimensions a(x, y) and b(x, y);
By the formula

θ_ab(x, y) = arctan( b(x, y) / a(x, y) )

the chroma θ_ab(x, y) of each pixel point in I_R(x, y) is calculated, and the luminance L(x, y) and chroma θ_ab(x, y) of each pixel point are multiplied to obtain the product L_θ(x, y) = L(x, y) · θ_ab(x, y);
S2.2.2: according to L between pixel pointsθThe difference in (x, y) value will be IRAnd (x, y) clustering the pixels to obtain three types of pixels: c1 buccal, C2 non-buccal, C3 non-dental area of the tooth;
S2.2.3: calculating, for each column of pixels, the distance from the lower boundary of the C1 region in I_R(x, y) to the upper boundary of I_R(x, y) to obtain the one-dimensional sequence of the C1 region in I_R(x, y), i.e. the one-dimensional sequence of the lower boundary contour line of the tooth buccal side, denoted Line1;
S2.2.4: merging the C1 and C2 regions into the complete tooth region C12, and calculating, for each column of pixels, the distance from the lower boundary of the C12 region in I_R(x, y) to the upper boundary of I_R(x, y) to obtain the one-dimensional sequence of the C12 region, i.e. the one-dimensional sequence of the lower boundary contour of the tooth region, denoted Line2.
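Steps S2.2.3/S2.2.4 reduce a region mask to a one-dimensional lower-boundary sequence. A minimal sketch (Python; `lower_boundary_line` is a hypothetical helper, not named in the patent) takes, per column, the row index of the lowest region pixel, i.e. its distance from the upper boundary of I_R:

```python
import numpy as np

def lower_boundary_line(mask):
    """Per-column distance from the region's lower boundary to the image's
    upper edge: the row index of the lowest True pixel, or 0 if the
    column contains no region pixel."""
    line = np.zeros(mask.shape[1], dtype=int)
    for col in range(mask.shape[1]):
        rows = np.flatnonzero(mask[:, col])
        if rows.size:
            line[col] = rows.max()
    return line
```

Under this reading, Line1 would be `lower_boundary_line(mask_C1)` and Line2 would be `lower_boundary_line(mask_C1 | mask_C2)`.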
5. The method of claim 4, wherein the specific steps of S2.3 are:
S2.3.1: subtracting Line2 from Line1; if there is a segment of more than m × 5% consecutive pixels whose difference values all exceed n × 10%, Line2 contains tooth cusp points on that segment, and such segments are denoted A;
S2.3.2: optimizing Line2 to obtain Line3, where the optimization formula (given in the original only as an image) is applied column-wise and i denotes the i-th column in I_R(x, y);
S2.3.3: performing a peak-point search on Line1 and Line3 to obtain cusp points, and performing a logical conjunction operation on the cusp points extracted from Line1 and Line3 to obtain all cusp points in I_A, with coordinates denoted (x_i, y_i).
6. The method of claim 5, wherein the fuzzy c-means algorithm is used to cluster the pixels in S2.2.2.
7. The method of generating a digitized tooth model with color texture information as claimed in claim 6, wherein in S2.3.3 the peak-point search is performed using an autofindpeak function.
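The autofindpeak search named in claim 7 can be approximated by a strict local-maximum scan (a simplified stand-in under assumed behaviour, not the actual named function):

```python
def find_peaks_simple(seq):
    """Indices of strict local maxima: points higher than both neighbours."""
    return [i for i in range(1, len(seq) - 1)
            if seq[i] > seq[i - 1] and seq[i] > seq[i + 1]]
```

Applied to Line1 or Line3, each returned index is a candidate cusp-point column.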
8. The method of claim 7, wherein the step of S3 is as follows:
S3.1: converting the initial tooth model into a depth image R of size M × N, where M is the length of the depth image R on the horizontal axis and N is its length on the vertical axis;
The initial tooth model is straightened so as to be parallel to the occlusal surface and is then mapped onto the occlusal surface according to the following rules to form the depth image R:
Let the coordinates of any point on the initial tooth model be (x_v, y_v, z_v). The value of each pixel point of the depth image R represents the distance from the occlusal surface of the point closest to the occlusal surface within a square sampling window of side length c on the initial tooth model, so the value of a point of the depth image is:

R(x, y) = min{ z_v | x · P_x − S_x ≤ x_v ≤ x · P_x + S_x ; y · P_y − S_y ≤ y_v ≤ y · P_y + S_y }

and the relationship between a coordinate (x, y) in the depth image and the corresponding point coordinates (x_v, y_v, z_v) on the initial tooth model is x = x_v / P_x, y = y_v / P_y, where P_x and P_y are the pixel sizes of the depth image and S_x and S_y are half the side length c of the sampling window;
S3.2: tooth and gum segmentation;
The pixel points of each row of the depth image R(x, y) are divided into two halves, and a pixel point whose value exceeds the minimum value of its half by more than a critical value is regarded as gum;
S3.3: extracting the cusp-point coordinates;
In the depth image, four columns of seed points are placed at equal intervals, with no fewer seed points per column than three times the number of teeth on one side; the seeds in columns 1 and 3 use a gradient descent method with direction priority 1 to search for a local minimum to the right of the seed point in the depth image, and the seeds in columns 2 and 4 use a gradient descent method with direction priority 2 to search for a local minimum to the left of the seed point; the priority orders of direction priorities 1 and 2 are: 1. right > left > down > up; 2. left > right > down > up;
The local minimum points found are used as seed points of a region-growing algorithm to find the points of equal value in their neighbourhood; from the centre-point coordinates (x, y) of each local minimum region and its value R(x, y), the corresponding cusp point on the initial tooth model is calculated back through the inverse of the depth-image mapping, x_v = x · P_x, y_v = y · P_y, z_v = R(x, y), and is denoted (x_pi, y_pi, z_pi).
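The S3.1 projection onto the occlusal plane can be sketched as below (Python; the pixel-to-model mapping x · P_x is an assumption, since the original gives the coordinate relation only as an image, and `depth_image` is a hypothetical name):

```python
import numpy as np

def depth_image(points, M, N, Px, Py, Sx, Sy):
    """Build an M x N depth image R: each pixel holds the smallest z_v
    (the model point closest to the occlusal plane) among points inside
    a square sampling window centred on the pixel; empty pixels stay inf."""
    pts = np.asarray(points, dtype=float)
    R = np.full((N, M), np.inf)
    for x in range(M):
        for y in range(N):
            cx, cy = x * Px, y * Py            # assumed pixel -> model mapping
            sel = ((np.abs(pts[:, 0] - cx) <= Sx) &
                   (np.abs(pts[:, 1] - cy) <= Sy))
            if sel.any():
                R[y, x] = pts[sel, 2].min()
    return R
```

Here Sx and Sy correspond to half the sampling-window side length c, and Px, Py to the pixel sizes of the depth image.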
9. The method of claim 8, wherein in S3.3 the gradient descent method calculates the number of steps of each movement by the following formulas, taking the right direction as an example: ΔR(x, y) = R(x + 1, y) − R(x, y), and

step = n + 1, if ΔR(x + i_R, y) = 0 for i_R = 0, ..., n − 1 and ΔR(x + n, y) < 0,

where ΔR(x, y) is the forward difference and step is the number of moving steps; n is a constant with value range 0, 1, ..., N − 1 and x + n < N, and i_R indexes the i_R-th column of the depth image R, i_R = 0, ..., n − 1.
10. The method of claim 9, wherein the step of S4 is as follows:
S4.1: registration: converting the initial tooth model into a two-dimensional photograph I_B(C) through a mapping matrix with camera parameters C, and converting the cusp points (x_pi, y_pi, z_pi) of the initial tooth model into coordinates (x_pi(C), y_pi(C)) on the two-dimensional photograph I_B(C); the camera parameters C are calculated by minimizing

C* = arg min over C of [ k · E(Corr, C) − (1 − k) · MI(I_A, I_B(C)) ]

where k is a weight parameter; MI(I_A, I_B(C)) is the mutual information between the oral cavity picture I_A and the two-dimensional photograph I_B(C),

MI(I_A, I_B(C)) = Σ over a, b of p(a, b) · log( p(a, b) / ( p(a) · p(b) ) )

where p(a) denotes the probability that a pixel point in I_A has grey value a, p(b) the probability that a pixel point in I_B(C) has grey value b, and p(a, b) the joint probability that a pixel point has value a in I_A and value b in I_B(C); E(Corr, C) is the average of the Euclidean distances between the cusp coordinates (x_i, y_i) in the oral cavity picture I_A and the corresponding cusp coordinates (x_pi(C), y_pi(C)) in the two-dimensional photograph I_B(C),

E(Corr, C) = (1 / N_c) · Σ over i of sqrt( (x_i − x_pi(C))² + (y_i − y_pi(C))² )

where N_c is the number of cusp-point correspondences;
When k · E(Corr, C) − (1 − k) · MI(I_A, I_B(C)) takes its minimum value, the calculated camera parameters C give the correspondence between the initial tooth model and the oral cavity picture, C = (θ, φ, ψ, t_x, t_y, t_z, f), where θ, φ, ψ are the Euler angles between the three axes of the camera coordinate system and the three axes of the spatial coordinate system in which the initial tooth model lies, t_x, t_y, t_z are the translations of the camera coordinate system along the three axes of that spatial coordinate system, and f is the focal length of the camera;
S4.2: mapping: using the camera parameters C, the color texture information of each pixel point in the oral cavity picture is assigned to the corresponding point on the initial tooth model, thereby obtaining the digital tooth model with color texture information.
CN202010981631.1A 2020-09-17 2020-09-17 Method for generating digital tooth model with color texture information Active CN112017280B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010981631.1A CN112017280B (en) 2020-09-17 2020-09-17 Method for generating digital tooth model with color texture information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010981631.1A CN112017280B (en) 2020-09-17 2020-09-17 Method for generating digital tooth model with color texture information

Publications (2)

Publication Number Publication Date
CN112017280A true CN112017280A (en) 2020-12-01
CN112017280B CN112017280B (en) 2023-09-26

Family

ID=73522641

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010981631.1A Active CN112017280B (en) 2020-09-17 2020-09-17 Method for generating digital tooth model with color texture information

Country Status (1)

Country Link
CN (1) CN112017280B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115017571A (en) * 2022-04-27 2022-09-06 阿里巴巴(中国)有限公司 Information providing method for space structure and electronic equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040197727A1 (en) * 2001-04-13 2004-10-07 Orametrix, Inc. Method and system for comprehensive evaluation of orthodontic treatment using unified workstation
WO2006065955A2 (en) * 2004-12-14 2006-06-22 Orthoclear Holdings, Inc. Image based orthodontic treatment methods
CN111415419A (en) * 2020-03-19 2020-07-14 西安知北信息技术有限公司 Method and system for making tooth restoration model based on multi-source image

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040197727A1 (en) * 2001-04-13 2004-10-07 Orametrix, Inc. Method and system for comprehensive evaluation of orthodontic treatment using unified workstation
WO2006065955A2 (en) * 2004-12-14 2006-06-22 Orthoclear Holdings, Inc. Image based orthodontic treatment methods
CN111415419A (en) * 2020-03-19 2020-07-14 西安知北信息技术有限公司 Method and system for making tooth restoration model based on multi-source image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YONGZHEN KE ET AL.: "A user-friendly method for constructing realistic dental model based on two-dimensional/three-dimensional registration", ENGINEERING COMPUTATIONS, pages 4 - 12 *


Also Published As

Publication number Publication date
CN112017280B (en) 2023-09-26

Similar Documents

Publication Publication Date Title
US12090021B2 (en) Smile prediction
US11151753B2 (en) Generic framework for blurring of colors for teeth in generated images using height map
US10980621B2 (en) Dental design transfer
CN112087985B (en) Simulated orthodontic treatment via real-time enhanced visualization
CN113509281B (en) Historical scan reference for intraoral scan
KR101799878B1 (en) 2d image arrangement
US8731280B2 (en) Virtual cephalometric imaging
JP7224361B2 (en) A method for matching a three-dimensional model of a patient&#39;s dentition to an image of the patient&#39;s face recorded by a camera
AU4076999A (en) Method and apparatus for generating 3d models from medical images
JP2010524529A (en) Computer-aided creation of custom tooth setup using facial analysis
WO2009062020A2 (en) Lighting compensated dynamic texture mapping of 3-d models
CN112807108B (en) Method for detecting tooth correction state in orthodontic correction process
US20230206451A1 (en) Method for automatic segmentation of a dental arch
CN115457198A (en) Tooth model generation method and device, electronic equipment and storage medium
EP3629336A1 (en) Dental design transfer
CN112017280B (en) Method for generating digital tooth model with color texture information
US20240024076A1 (en) Combined face scanning and intraoral scanning
CN117281637A (en) Orthodontic bracket direct bonding method based on full-color perspective technology
CN118055728A (en) Construction of textured 3D models of dental structures
KR102670837B1 (en) Method for creating crown occlusal 3d mesh using deep learning and device using the same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant