Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with one or more embodiments of the present specification. Rather, they are merely examples of apparatus and methods consistent with aspects of one or more embodiments of the present description as detailed in the accompanying claims.
It should be noted that: in other embodiments, the steps of the corresponding method are not necessarily performed in the order shown and described in this specification. In some other embodiments, the method may include more or fewer steps than described in this specification. Furthermore, individual steps described in this specification, in other embodiments, may be described as being split into multiple steps; while various steps described in this specification may be combined into a single step in other embodiments.
To facilitate understanding of the present solution, terms appearing in the present solution are explained herein:
CBCT data: CBCT is short for Cone Beam CT, a cone-beam-projection computed reconstruction tomographic imaging apparatus. Its principle is that an X-ray generator performs a circular DR (digital radiography) acquisition around the projection object at a relatively low radiation dose (typically a tube current of about 10 milliamperes); the data obtained from the repeated digital projections around the object (typically 180-360 projections) are then "recombined" in a computer to reconstruct a three-dimensional image.
Oral scan model: an intraoral digital scanner applies its optical scanning head directly inside the patient's oral cavity, acquiring the three-dimensional morphology and color-texture information of the surfaces of the soft and hard tissues in the mouth (teeth, gingiva, mucosa, etc.). The acquired images are processed and reconstructed by computer through an optical imaging system into three-dimensional data for the current viewing angle, and a complete three-dimensional digital impression is obtained through matching and stitching. The technique is widely applied in oral restoration, orthodontics, periodontics, maxillofacial surgery, and other fields.
Example 1
The embodiment of the application provides a multi-modal rendering method based on three-dimensional tooth CBCT data and an oral cavity scanning model, and specifically referring to FIG. 1, the method comprises the following steps:
acquiring CBCT data, inputting the CBCT data into a trained tooth segmentation model for segmentation to obtain three-dimensional tooth volume data, extracting the tooth surface of the three-dimensional tooth volume data to obtain tooth surface data, and removing the tooth root part of each tooth in the tooth surface data to obtain CBCT crown surface data, wherein the CBCT crown surface data is stored in a triangular mesh form;
acquiring an oral cavity scanning model corresponding to CBCT data, and removing gingiva and other soft tissue areas in the oral cavity scanning model to obtain dental crown surface data of the oral cavity scanning model, wherein the dental crown surface data of the oral cavity scanning model is stored in a triangular mesh form;
acquiring triangular mesh vertexes of each triangular mesh which is not at a boundary in the CBCT dental crown surface data and the oral cavity scanning model dental crown surface data, calculating a normal vector of each triangular mesh vertex based on a face averaging algorithm, acquiring an area sum of adjacent triangular meshes of each triangular mesh vertex and an angle cosine square sum of the adjacent triangular mesh and the normal vector of the corresponding triangular mesh vertex, calculating the curvature of each triangular mesh vertex according to the angle cosine square sum and the area sum of the adjacent triangular mesh of each triangular mesh vertex, setting a curvature threshold value, and screening the curvature of each triangular mesh vertex to obtain a CBCT high curvature point set and an oral cavity scanning model high curvature point set respectively;
calculating a feature column vector of each high-curvature triangular mesh vertex in the CBCT high curvature point set and the oral cavity scanning model high curvature point set, splicing the feature column vectors of the high-curvature triangular mesh vertices to respectively obtain a CBCT feature description matrix and an oral cavity scanning model feature description matrix, constructing a vector similarity matrix according to the CBCT feature description matrix and the oral cavity scanning model feature description matrix, and screening similar points in the vector similarity matrix to obtain a corresponding point set;
and calculating by using the corresponding point set to obtain a rigid transformation matrix, transforming the three-dimensional tooth volume data by using the rigid transformation matrix to obtain rigid transformation volume data, and rendering the rigid transformation volume data and the dental crown surface data of the oral scanning model.
In some embodiments, in the step of inputting the CBCT data into a trained tooth segmentation model to perform segmentation to obtain three-dimensional tooth volume data, the tooth segmentation model processes the CBCT data to obtain a segmentation result, the segmentation result is a tooth probability of each pixel point, a tooth threshold is set, the segmentation result is subjected to threshold segmentation by using the tooth threshold to obtain a tooth binary image, and the tooth binary image is output to obtain the three-dimensional tooth volume data.
Specifically, the tooth segmentation model may be any trained network model, which is not limited in this scheme.
In some embodiments, the tooth segmentation model used in the present solution includes an encoding module and a decoding module, where the encoding module includes a first encoder, a second encoder, and a third encoder that are sequentially connected in series, the decoding module includes a first decoder, a second decoder, and a third decoder that are sequentially connected in series, the first encoder extracts tooth edge information in the CBCT data, the second encoder extracts texture and shape information of a tooth according to the tooth edge information, the third encoder obtains tooth contour features according to the texture and shape information of the tooth, the first decoder decodes the tooth contour information to obtain a first decoding result, the second decoder upsamples the first decoding result and then splices the first decoding result with the texture and shape information of the tooth to obtain a second decoding result, the third decoder upsamples the second decoding result and then splices the second decoding result with the tooth edge information to obtain a third decoding result, and the third decoding result obtains the tooth probability of each pixel through an activation function.
Specifically, the input of the tooth segmentation model is a two-dimensional slice sequence of the CT data. The encoding module of the tooth segmentation model extracts feature information from the CT data through convolution and pooling operations, and the decoding module restores the feature information to the same size as the input data through operations such as deconvolution and upsampling. The first encoder acquires tooth edge information by extracting low-level features, the second encoder acquires texture and shape features of the teeth, and the third encoder extracts high-level tooth contour features. The first decoder decodes the high-level tooth contour features extracted by the third encoder to obtain the first decoding result; the second decoder upsamples the first decoding result and splices it with the texture and shape features extracted by the second encoder to obtain the second decoding result; the third decoder upsamples the second decoding result and splices it with the tooth edge information extracted by the first encoder, then performs feature extraction and upsampling through a series of convolution layers to obtain an intermediate result image with the same size as the input image, which is the third decoding result. The tooth probability of each pixel is then obtained from the third decoding result through an activation function.
Specifically, setting a tooth threshold t, setting a pixel point with the tooth probability larger than t as 1, setting the rest pixel points as 0 to obtain a tooth binary image, and outputting the tooth binary image to obtain three-dimensional tooth volume data.
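The thresholding step above can be sketched as follows; the probability values and the threshold t = 0.5 are illustrative assumptions, not values fixed by the text:

```python
import numpy as np

def threshold_probability_map(prob_volume, t=0.5):
    """Binarize a voxel-wise tooth-probability volume: voxels with
    probability greater than t become 1 (tooth), the rest become 0."""
    return (prob_volume > t).astype(np.uint8)

# Tiny 2x2x2 probability volume with made-up values.
probs = np.array([[[0.9, 0.2], [0.6, 0.4]],
                  [[0.1, 0.8], [0.55, 0.3]]])
binary = threshold_probability_map(probs, t=0.5)
```

Stacking such binary slices over the whole sequence yields the three-dimensional tooth volume data.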
Specifically, the tooth segmentation model continuously updates the weight parameters of the model through back propagation, so that the loss function is minimized.
Specifically, the method evaluates the performance of the model by using the verification set, and judges whether the model has the fitting problem or not by monitoring the change condition of the loss function and the evaluation index.
Specifically, the tooth segmentation model is tested by using the test set to obtain performance indexes such as segmentation accuracy of the model so as to evaluate the generalization capability of the model.
In particular, the tooth segmentation model is trained, validated and tested using the publicly available CT dataset CQ500.
In some embodiments, in the step of removing the root portion of each tooth in the tooth surface data to obtain CBCT crown surface data, the tooth surface data is composed of a plurality of triangular meshes, a normal vector of each triangular mesh in the tooth surface data is calculated, a normal vector included angle of each triangular mesh and all neighboring triangular meshes is calculated, an included angle threshold is set, and if at least one normal vector included angle is greater than the included angle threshold, the triangular mesh and all neighboring triangular meshes are subdivided until all normal vector included angles are less than the included angle threshold to obtain CBCT crown surface data.
Specifically, considering that tooth surfaces are uneven and carry much detail information (due to tooth wear, tooth decay and the like), generating the crown surface with the traditional Marching Cubes algorithm (a conventional reconstruction algorithm) can leave the tooth surface unsmooth or produce unnecessary triangular meshes, which hinders subsequent operations. The present solution therefore identifies lower-precision triangular meshes by calculating the angle between each triangular mesh's normal vector and those of its adjacent triangular meshes, and subdivides them with the Loop subdivision iteration method to obtain a higher-precision crown surface.
Specifically, the normal vector of each triangular mesh is calculated from the vector cross product of two sides of the triangle. Assuming the three vertices of a triangular mesh are denoted A(x_A, y_A, z_A), B(x_B, y_B, z_B) and C(x_C, y_C, z_C), the normal vector of the triangular mesh can be expressed as:

n = AB × AC

where n is the normal vector of the corresponding triangular mesh, AB is the vector representation of edge AB, and AC is the vector representation of edge AC.
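The cross-product construction can be sketched as follows; the unit normalization is an assumption added for convenience (the text only specifies the cross product):

```python
import numpy as np

def triangle_normal(A, B, C):
    """Normal of a triangular mesh face: n = AB x AC, unit-normalized
    (the normalization is an assumption, not stated in the text)."""
    A, B, C = (np.asarray(v, dtype=float) for v in (A, B, C))
    n = np.cross(B - A, C - A)
    norm = np.linalg.norm(n)
    return n / norm if norm > 0 else n

# Unit right triangle in the xy-plane; its normal points along +z.
n = triangle_normal([0, 0, 0], [1, 0, 0], [0, 1, 0])
```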
Specifically, the Loop subdivision iteration method is used for refining the triangular mesh into a triangular mesh with higher resolution.
In the step of calculating the normal vector angle between each triangular mesh and all adjacent triangular meshes: suppose a triangular mesh a has k adjacent triangular meshes. The normal vector angle b between triangular mesh a and each of the k adjacent meshes is calculated, and an angle threshold t is set. If any normal vector angle b is larger than the angle threshold t, the k triangular meshes and triangular mesh a are subdivided using the Loop subdivision iteration method to obtain triangular meshes of higher precision and resolution.
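The angle test that triggers subdivision can be sketched as below; the actual Loop subdivision step is omitted, and the 30-degree threshold is a hypothetical value:

```python
import numpy as np

def normal_angle(n1, n2):
    """Angle in degrees between two unit normal vectors."""
    cosang = np.clip(np.dot(n1, n2), -1.0, 1.0)
    return float(np.degrees(np.arccos(cosang)))

def needs_subdivision(face_normal, neighbor_normals, angle_threshold_deg):
    """True if any adjacent face's normal deviates by more than the
    threshold, in which case the face and its neighbors would be
    Loop-subdivided (subdivision itself not shown here)."""
    return any(normal_angle(face_normal, n) > angle_threshold_deg
               for n in neighbor_normals)

# One neighbor is perpendicular, so the 30-degree threshold is exceeded.
flag = needs_subdivision([0, 0, 1], [[0, 0, 1], [0, 1, 0]], 30.0)
```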
The CBCT crown surface data are obtained by performing the above operation on each triangular mesh, and the tooth root part of each tooth in the CBCT crown surface data can be further refined and removed by manual point selection.
In some embodiments, in the step of removing gums and other soft tissue regions in the oral cavity scan model to obtain dental crown surface data of the oral cavity scan model, a first number of triangular mesh vertices are randomly selected as seed points on the dental crown surface of each tooth in the oral cavity scan model, adjacent triangular meshes of each seed point are obtained, a similarity included angle is obtained by calculating a normal vector of each seed point and normal vector included angles of all adjacent meshes of each seed point, a growth threshold is set, and if the similarity included angle of any two seed points is larger than the growth threshold, the two seed points are combined until no seed points which can be combined exist in the oral cavity scan model or the designated combining times are reached, and dental crown surface data of the oral cavity scan model is obtained according to a seed point combining path.
In some embodiments, as shown in fig. 2, at least two triangular mesh vertices are selected as seed points on the crown surface of each tooth in the oral scan model.
Specifically, since the oral cavity scanning model is stored as an STL file, which contains no color, texture or similar information, merging is performed by calculating the similarity angle of the seed points, and the growth threshold is set to avoid overgrowth of the region.
Specifically, when no seed points which can be combined exist in the oral cavity scanning model or the specified combination times are reached, the seed points are uniformly distributed in the crown area of each tooth, gums and other soft tissue areas are removed, and finally the seed points are connected along the seed point combination path to obtain the dental crown surface data of the oral cavity scanning model.
In some embodiments, in the "calculate normal vector of each triangle mesh vertex" step, all neighboring triangle meshes of each triangle mesh vertex are obtained as vertex neighboring triangle meshes, the normal vector of each vertex neighboring triangle mesh is calculated, and the normal vectors of all vertex neighboring triangle meshes are summed and averaged to obtain the normal vector of the triangle mesh vertex.
In some embodiments, as shown in FIG. 3, the normal vector for each triangle mesh vertex is calculated based on a face averaging algorithm.
For example, if triangular mesh vertex P has m vertex-adjacent triangular meshes, the normal vectors of these m adjacent meshes are calculated, summed and averaged. If the normal vector of the i-th of the m vertex-adjacent triangular meshes is denoted n_i, the normal vector of vertex P can be expressed as:

n_P = (1/m) · Σ_{i=1}^{m} n_i
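The face-averaging computation can be sketched as follows; re-normalizing the averaged vector to unit length is an assumption added here:

```python
import numpy as np

def vertex_normal(adjacent_face_normals):
    """Face-averaging algorithm: the vertex normal is the average of the
    normals of the m incident faces (unit-normalized, an assumption)."""
    n = np.mean(np.asarray(adjacent_face_normals, dtype=float), axis=0)
    norm = np.linalg.norm(n)
    return n / norm if norm > 0 else n

# Three incident faces all facing +z give a +z vertex normal.
nP = vertex_normal([[0, 0, 1], [0, 0, 1], [0, 0, 1]])
```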
in some embodiments, the sum of areas of adjacent triangular meshes of each triangular mesh vertex is calculated, the sum of squares of the cosine values of the included angles of the normal vector of each triangular mesh vertex and the normal vector of the adjacent triangular mesh of the corresponding triangular mesh vertex is calculated, and the sum of areas and squares are used to obtain the curvature of the triangular mesh vertex.
Specifically, for vertex P, the area sum area_sum of its m adjacent triangular meshes is calculated. Using area_i to denote the area of the i-th of the m adjacent triangular meshes, the area sum for vertex P is expressed as:

area_sum = Σ_{i=1}^{m} area_i
specifically, as shown in fig. 4, for the triangle mesh vertex P, the sum of squares cos_sum of angle cosine values of normal vectors of m triangle meshes adjacent to the normal vector of the triangle mesh vertex P is calculated, and the larger the sum of squares is, the more the normal vector is changed, that is, the greater the degree of curvature of the curved surface is, θ is used i Representing the angle between the normal vector of the ith triangular mesh in m adjacent triangular meshes and the normal vector of the triangular mesh vertex P, cos_sum can be expressed as follows:
specifically, the curvature of the triangle mesh vertex P is:
wherein k is p Representing the curvature of the p-point.
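A minimal sketch of the curvature computation, assuming the curvature is the ratio cos_sum / area_sum of the two quantities the text says it is computed from (the exact combination is not spelled out in the original):

```python
import numpy as np

def vertex_curvature(vertex_n, face_normals, face_areas):
    """Curvature k_P of a mesh vertex, assumed here to be
    cos_sum / area_sum over the vertex's adjacent faces."""
    area_sum = float(np.sum(face_areas))                      # sum of face areas
    cosines = np.clip(np.asarray(face_normals, dtype=float)
                      @ np.asarray(vertex_n, dtype=float), -1.0, 1.0)
    cos_sum = float(np.sum(cosines ** 2))                     # sum of cos^2(theta_i)
    return cos_sum / area_sum

# Flat patch: all four face normals agree with the vertex normal.
k = vertex_curvature([0, 0, 1], [[0, 0, 1]] * 4, [0.5] * 4)
```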
In some embodiments, in the step of screening the curvature of each triangular mesh vertex to obtain a CBCT high curvature point set and an oral scan model high curvature point set respectively by setting a curvature threshold, triangular mesh vertices with the triangular mesh vertex curvature greater than the curvature threshold in CBCT crown surface data are selected as the CBCT high curvature point set, and triangular mesh vertices with the triangular mesh vertex curvature greater than the curvature threshold in the oral scan model crown surface data are selected as the oral scan model high curvature point set.
In some embodiments, in the step of calculating the feature column vector of each high-curvature triangular mesh vertex in the CBCT high curvature point set and the oral scan model high curvature point set: the normal vectors of all triangular meshes in the two-ring neighborhood of each high-curvature triangular mesh vertex are obtained, along with the normal vector of the vertex itself; the angles between the vertex's normal vector and all triangular mesh normal vectors in the corresponding two-ring neighborhood are calculated, where the two-ring neighborhood comprises the triangular meshes adjacent to the vertex together with the meshes adjacent to those; and the feature column vector of each high-curvature triangular mesh vertex is generated from the angles between its normal vector and all triangular mesh normal vectors in its two-ring neighborhood.
Specifically, as shown in FIG. 5, for a high-curvature triangular mesh vertex p_1 in the CBCT high curvature point set N_1, the angles between the normal vector of p_1 and the normal vectors of all triangular meshes in its two-ring neighborhood are calculated; if there are s triangular meshes in the two-ring neighborhood, the angle between each of the s mesh normal vectors and the normal vector of p_1 is calculated.
Further, the angle between the normal vector of each high-curvature triangular mesh vertex and each triangular mesh normal vector in its two-ring neighborhood is rounded, the number of occurrences of each distinct angle is counted, and the feature column vector of the vertex is generated from the angles and their occurrence counts.
The method for calculating the characteristic column vector of each high curvature triangular mesh vertex in the high curvature point set of the oral cavity scanning model is the same as that described above, and the scheme is not repeated here.
Specifically, the feature column vector has dimension a×1 and serves as the high-dimensional geometric feature of the corresponding high-curvature triangular mesh vertex.
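The rounding-and-counting construction can be sketched as an angle histogram; the bin count of 181 (one bin per integer degree from 0 to 180) and the unit normalization are assumptions, since the text fixes only the counting scheme:

```python
import numpy as np

def angle_histogram_descriptor(vertex_n, ring2_normals, n_bins=181):
    """Feature column vector: histogram over rounded angles (degrees,
    0..180) between the vertex normal and every face normal in the
    vertex's two-ring neighborhood, normalized to unit length."""
    hist = np.zeros(n_bins, dtype=float)
    for n in ring2_normals:
        cosang = np.clip(np.dot(vertex_n, n), -1.0, 1.0)
        ang = int(round(float(np.degrees(np.arccos(cosang)))))
        hist[ang] += 1.0                       # count this rounded angle
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

# One aligned and two perpendicular neighborhood normals (toy data).
f = angle_histogram_descriptor([0, 0, 1], [[0, 0, 1], [0, 1, 0], [0, 1, 0]])
```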
In some embodiments, feature column vectors of each high-curvature triangular mesh vertex in the CBCT high-curvature point set are calculated and spliced to obtain a CBCT feature description matrix.
In some specific embodiments, the characteristic column vector of each high curvature triangle mesh vertex in the high curvature point set of the oral scan model is calculated, and is spliced to obtain the characteristic description matrix of the oral scan model.
Specifically, the CBCT feature description matrix and the oral scan model feature description matrix are geometry description matrices.
In some embodiments, in the step of constructing a vector similarity matrix according to the CBCT feature description matrix and the oral cavity scan model feature description matrix and screening similar points in the vector similarity matrix to obtain a corresponding point set, calculating vector similarity of the CBCT feature description matrix and the oral cavity scan model feature description matrix to obtain a vector similarity matrix, where the vector similarity matrix includes similarity of corresponding points in the CBCT feature description matrix and the oral cavity scan model feature description matrix, setting a similarity threshold, screening the vector similarity matrix by using the similarity threshold to obtain point pair information, and removing erroneous point pairs in the point pair information to obtain the corresponding point set.
Specifically, if the CBCT high curvature point set N_1 contains n_1 points in total, the CBCT feature description matrix D_1 is an a×n_1 matrix; if the oral scan model high curvature point set N_2 contains n_2 points in total, the oral scan model feature description matrix D_2 is an a×n_2 matrix. The vector similarity matrix S can be expressed as:

S = D_1^T · D_2

where S is an n_1×n_2 matrix whose element s_ij represents the feature vector similarity between point i in point set N_1 and point j in point set N_2.
Elements larger than the similarity threshold are screened from the vector similarity matrix S to obtain r pairs of corresponding points, where r ≤ n_1 and r ≤ n_2; these r corresponding point pairs form the corresponding point set corr.
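The similarity matrix and threshold screening can be sketched as follows; the feature columns are assumed to be unit-normalized so that the dot product behaves as a cosine similarity (an assumption consistent with, but not stated by, the text):

```python
import numpy as np

def match_by_similarity(D1, D2, sim_threshold):
    """Build S = D1^T . D2 and keep index pairs (i, j) whose
    similarity exceeds the threshold."""
    S = D1.T @ D2                          # n1 x n2 similarity matrix
    idx = np.argwhere(S > sim_threshold)   # positions of similar points
    return S, [tuple(p) for p in idx]

# Two unit feature columns for CBCT, one for the oral scan (toy data).
D1 = np.array([[1.0, 0.0],
               [0.0, 1.0]])               # a=2, n1=2
D2 = np.array([[1.0],
               [0.0]])                    # a=2, n2=1
S, pairs = match_by_similarity(D1, D2, 0.5)
```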
Further, any two point pairs are selected from the point pair information to be a first point pair and a second point pair respectively, a first distance absolute value between the first point pair and the second point pair is calculated, a rejection threshold is set, if the first distance absolute value is larger than the rejection threshold, any one point pair except the first point pair and the second point pair is selected from the point pair information to be used as a third point pair, a second distance absolute value between the first point pair and the third point pair is calculated, a third distance absolute value between the second point pair and the third point pair is calculated, if the second distance absolute value is larger than the third distance absolute value, the first point pair is rejected, and if the third distance absolute value is larger than the second distance absolute value, the second point pair is rejected, and each point pair in the point pair information is traversed to obtain a corresponding point set.
Specifically, under normal conditions the second distance absolute value and the third distance absolute value are not equal; if they are equal, a calculation error is indicated and the parameters of the scheme need to be readjusted.
Specifically, let there be a first point pair (p_i, q_i) and a second point pair (p_j, q_j), where p_i and p_j are points from the CBCT feature description matrix and q_i and q_j are points from the oral scan model feature description matrix. The first distance absolute value of the first and second point pairs can be expressed as:

d_ij = | ||p_i − p_j|| − ||q_i − q_j|| |

where d_ij represents the first distance absolute value of the first point pair and the second point pair.
Specifically, as shown in FIG. 6, a rejection threshold is set based on the distance-preserving property of rigid transformations. If the first distance absolute value is greater than the rejection threshold, a third point pair (p_k, q_k) is selected, and the second and third distance absolute values d_ik and d_jk are calculated; if d_ik > d_jk, the first point pair is rejected, and if d_ik < d_jk, the second point pair is rejected.
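A minimal sketch of this distance-consistency pruning; the traversal order and the choice of the third (arbitration) pair are implementation choices not fixed by the text:

```python
import numpy as np

def prune_pairs(P, Q, reject_threshold):
    """Reject point pairs that violate the distance-preserving property
    of a rigid transform. P[i] / Q[i] hold the CBCT and oral-scan points
    of pair i; arbitration follows the d_ik vs d_jk rule above."""
    P, Q = np.asarray(P, dtype=float), np.asarray(Q, dtype=float)

    def d(a, b):  # | ||P_a - P_b|| - ||Q_a - Q_b|| |
        return abs(np.linalg.norm(P[a] - P[b]) - np.linalg.norm(Q[a] - Q[b]))

    keep = list(range(len(P)))
    changed = True
    while changed and len(keep) > 2:
        changed = False
        for a in range(len(keep) - 1):
            i, j = keep[a], keep[a + 1]
            if d(i, j) > reject_threshold:
                k = next(x for x in keep if x not in (i, j))  # third pair
                keep.remove(i if d(i, k) > d(j, k) else j)
                changed = True
                break
    return keep

# Pair 2 moved by 3 units on the oral-scan side, breaking rigidity.
P = [[0, 0, 0], [1, 0, 0], [2, 0, 0]]
Q = [[0, 0, 0], [1, 0, 0], [5, 0, 0]]
kept = prune_pairs(P, Q, 0.5)
```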
In some embodiments, in the step of calculating the rigid body transformation matrix by using the corresponding point set, the corresponding point set is divided into CBCT point information and oral scan model point information, CBCT data centroid coordinates are calculated according to the CBCT point information, oral scan model centroid coordinates are calculated according to the oral scan model point information, a CBCT coordinate matrix is obtained by obtaining coordinate information of each point bit in the CBCT point information relative to the CBCT data centroid coordinates, an oral scan model coordinate matrix is obtained by obtaining coordinate information of each point bit in the oral scan model point information relative to the oral scan model centroid coordinates, the CBCT coordinate matrix and the oral scan model coordinate matrix are integrated into a singular matrix, singular value decomposition is performed on the singular matrix to obtain a left singular matrix, a singular value matrix and a right singular matrix, a rotation matrix is obtained according to the left singular matrix and the right singular matrix, and a translation vector is calculated according to the CBCT data centroid coordinates, the oral scan model centroid coordinates and the rotation matrix, and the translation vector is calculated to obtain the rigid body transformation matrix.
Specifically, the CBCT data centroid coordinate is obtained by averaging the coordinates of all points in the CBCT point information, with the specific formula:

p̄ = (1/n) · Σ_{i=1}^{n} p_i

where p̄ represents the CBCT data centroid coordinate, p_i represents a point in the CBCT point information, and n represents the total number of points in the CBCT point information.
Specifically, the oral scan model centroid coordinate is obtained by averaging the coordinates of all points in the oral scan model point information, with the specific formula:

q̄ = (1/n) · Σ_{i=1}^{n} q_i

where q̄ represents the oral scan model centroid coordinate, q_i represents a point in the oral scan model point information, and n represents the total number of points in the oral scan model point information.
Specifically, the CBCT data centroid coordinate is subtracted from each point in the CBCT point information to obtain that point's coordinates relative to the centroid, with the specific formula:

p_i' = p_i − p̄

where p_i' represents the coordinates of any point in the CBCT point information relative to the CBCT data centroid coordinate.
Specifically, the oral scan model centroid coordinate is subtracted from each point in the oral scan model point information to obtain that point's coordinates relative to the centroid, with the specific formula:

q_i' = q_i − q̄

where q_i' represents the coordinates of any point in the oral scan model point information relative to the oral scan model centroid coordinate.
Thereafter, all the p_i' and all the q_i' are assembled column-wise into matrices:

A = [p_1', p_2', …, p_s'], B = [q_1', q_2', …, q_s']

where A is the CBCT coordinate matrix, B is the oral scan model coordinate matrix, A and B are both 3×s matrices, and s is the total number of points.
Specifically, the CBCT coordinate matrix and the transpose matrix of the oral scanning model coordinate matrix are integrated to obtain a singular matrix H, and the specific formula is as follows:
H=AB T
wherein the singular matrix H is a 3×3 matrix.
Specifically, the singular matrix H is formulated as:
H=U∑V T
where U is a 3×3 matrix whose column vectors are the left singular vectors and are mutually orthogonal; Σ is a 3×3 matrix whose diagonal entries are the singular values and whose remaining elements are 0; and V^T is a 3×3 matrix whose vectors are the right singular vectors and are mutually orthogonal.
Specifically, multiplying the left singular matrix by the right singular matrix to obtain a rotation matrix R, where the formula is expressed as:
R=UV T
specifically, the product of the rotation matrix and the CBCT data centroid is subtracted from the centroid coordinate of the oral scanning model to obtain a translation vector t, and the formula is as follows:
Specifically, the rigid transformation matrix T is obtained from the rotation matrix R and the translation vector t, with the formula:

T = [ R t ; 0 1 ]

where 0 is a 1×3 zero vector and 1 is a real number.
In some embodiments, in the step of transforming the three-dimensional tooth volume data using the rigid transformation matrix to obtain rigid transformation volume data and rendering the rigid transformation volume data together with the oral scan model crown surface data: the three-dimensional tooth volume data is converted to homogeneous coordinates to obtain a CBCT homogeneous coordinate matrix; the rigid transformation matrix is applied to the CBCT homogeneous coordinate matrix and the result is converted back to Cartesian coordinates to obtain the rigid transformation volume data; and the oral scan model crown surface data and the rigid transformation volume data are rendered into a world coordinate system.
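Applying the 4×4 matrix through homogeneous coordinates can be sketched as:

```python
import numpy as np

def apply_rigid_transform(T, points):
    """Convert (n, 3) Cartesian points to homogeneous coordinates, apply
    the 4x4 rigid transformation matrix, and convert back to Cartesian."""
    pts = np.asarray(points, dtype=float)
    homo = np.hstack([pts, np.ones((len(pts), 1))])  # (n, 4) homogeneous
    out = (T @ homo.T).T
    return out[:, :3] / out[:, 3:4]                  # back to Cartesian

T = np.eye(4)
T[:3, 3] = [1, 2, 3]                                 # pure translation
moved = apply_rigid_transform(T, [[0, 0, 0], [1, 1, 1]])
```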
Example two
Based on the same conception, referring to fig. 7, the present application also proposes a multi-modal rendering device based on three-dimensional tooth CBCT data and an oral scan model, comprising:
a first acquisition module, configured to: acquire CBCT data, input the CBCT data into a trained tooth segmentation model for segmentation to obtain three-dimensional tooth volume data, extract the tooth surfaces of the three-dimensional tooth volume data to obtain tooth surface data, and remove the root portion of each tooth in the tooth surface data to obtain CBCT crown surface data, wherein the CBCT crown surface data is stored in triangular mesh form;
a second acquisition module, configured to: acquire an oral scan model corresponding to the CBCT data, and remove the gingiva and other soft-tissue regions in the oral scan model to obtain oral scan model crown surface data, wherein the oral scan model crown surface data is stored in triangular mesh form;
a curvature calculation module, configured to: acquire the triangular mesh vertices of each triangular mesh not on a boundary in the CBCT crown surface data and the oral scan model crown surface data, calculate a normal vector for each triangular mesh vertex based on a face-averaging algorithm, acquire, for each triangular mesh vertex, the sum of the areas of its adjacent triangular meshes and the sum of the squared cosines of the angles between those adjacent triangular meshes and the normal vector of that vertex, calculate the curvature of each triangular mesh vertex from the sum of squared cosines and the sum of areas, set a curvature threshold, and screen the curvatures of the triangular mesh vertices to obtain a CBCT high-curvature point set and an oral scan model high-curvature point set, respectively;
a point set acquisition module, configured to: calculate a feature column vector for each high-curvature triangular mesh vertex in the CBCT high-curvature point set and the oral scan model high-curvature point set, concatenate the feature column vectors to obtain a CBCT feature description matrix and an oral scan model feature description matrix, respectively, construct a vector similarity matrix from the two feature description matrices, and screen similar points in the vector similarity matrix to obtain a corresponding point set; and
a rendering module, configured to: calculate a rigid transformation matrix using the corresponding point set, transform the three-dimensional tooth volume data using the rigid transformation matrix to obtain rigid transformation volume data, and render the rigid transformation volume data and the oral scan model crown surface data.
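By way of non-limiting illustration, the surface extraction performed by the first acquisition module can be sketched as follows. The function name and the six-neighborhood surface criterion are illustrative assumptions; in practice an isosurface algorithm (e.g. marching cubes) would be used to produce the triangular mesh described above:

```python
import numpy as np

def extract_surface_voxels(mask):
    """Return coordinates of foreground voxels that touch the background in
    any of the six axis directions -- a simple surface proxy for a binary
    tooth mask produced by the segmentation model.  (A real pipeline would
    run an isosurface algorithm such as marching cubes to obtain the
    triangular mesh.)"""
    padded = np.pad(mask.astype(bool), 1)
    interior = np.ones(mask.shape, dtype=bool)
    for axis in range(3):
        for shift in (-1, 1):
            # Neighbor occupancy along one axis direction, with zero padding.
            neighbor = np.roll(padded, shift, axis=axis)[1:-1, 1:-1, 1:-1]
            interior &= neighbor
    surface = mask.astype(bool) & ~interior
    return np.argwhere(surface)

# A solid 3x3x3 cube: every voxel except the center lies on the surface.
print(len(extract_surface_voxels(np.ones((3, 3, 3), dtype=np.uint8))))  # 26
```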
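A minimal sketch of the computation described for the curvature calculation module follows. It assumes the curvature is taken as the ratio of the sum of squared cosines to the sum of adjacent-face areas; the specification does not fix the exact combination of the two sums, so this ratio is an illustrative reading:

```python
import numpy as np

def vertex_curvatures(vertices, faces):
    """Per-vertex curvature for a triangular mesh, following the scheme
    above: the vertex normal is the normalised sum of adjacent face normals
    (face averaging), and the curvature combines the sum of squared cosines
    of the angles between adjacent face normals and the vertex normal with
    the sum of adjacent face areas (ratio is an illustrative assumption)."""
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    cross = np.cross(v1 - v0, v2 - v0)
    areas = 0.5 * np.linalg.norm(cross, axis=1)              # face areas
    normals = cross / np.linalg.norm(cross, axis=1, keepdims=True)

    curvatures = np.zeros(len(vertices))
    for v in range(len(vertices)):
        adj = np.any(faces == v, axis=1)      # faces adjacent to vertex v
        if not adj.any():
            continue
        n = normals[adj].sum(axis=0)          # face-averaged vertex normal
        n /= np.linalg.norm(n)
        cos2 = (normals[adj] @ n) ** 2        # squared cosines of angles
        curvatures[v] = cos2.sum() / areas[adj].sum()
    return curvatures

# Two coplanar triangles forming a flat unit square.
verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
faces = np.array([[0, 1, 2], [0, 2, 3]])
print(vertex_curvatures(verts, faces))
```

High-curvature point sets are then obtained by screening against the chosen threshold, e.g. `np.flatnonzero(vertex_curvatures(verts, faces) > threshold)`.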
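The similarity-matrix construction and screening performed by the point set acquisition module might be sketched as follows. Cosine similarity between feature column vectors and mutual-best-match screening are illustrative choices, not mandated by the specification:

```python
import numpy as np

def cosine_similarity_matrix(A, B):
    """Vector similarity matrix between the columns of two feature
    description matrices: A is (d, m) holding m CBCT feature column
    vectors, B is (d, n) holding n oral-scan feature column vectors;
    returns an (m, n) cosine-similarity matrix."""
    An = A / np.linalg.norm(A, axis=0, keepdims=True)
    Bn = B / np.linalg.norm(B, axis=0, keepdims=True)
    return An.T @ Bn

def mutual_best_matches(S, threshold=0.9):
    """Screen the similarity matrix: keep pairs (i, j) that are each
    other's best match and exceed a similarity threshold."""
    best_j = S.argmax(axis=1)
    best_i = S.argmax(axis=0)
    return [(i, int(j)) for i, j in enumerate(best_j)
            if best_i[j] == i and S[i, j] > threshold]

A = np.array([[1.0, 0.0], [0.0, 1.0]])  # two CBCT feature column vectors
B = np.array([[1.0, 0.1], [0.0, 1.0]])  # two oral-scan feature column vectors
print(mutual_best_matches(cosine_similarity_matrix(A, B)))  # [(0, 0), (1, 1)]
```

The surviving index pairs constitute the corresponding point set passed to the rigid-registration step.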
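One standard way to compute a rigid transformation matrix from a corresponding point set, as required by the rendering module, is the SVD-based Kabsch method sketched below; the specification does not name a particular algorithm, so this is an illustrative choice:

```python
import numpy as np

def rigid_transform_from_correspondences(P, Q):
    """Least-squares rigid transform (Kabsch method) mapping point set P
    onto point set Q, both (N, 3) arrays of corresponding points, returned
    as a 4x4 homogeneous matrix."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Recover a known 90-degree rotation about z plus a translation.
P = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
R_true = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
t_true = np.array([1., 2., 3.])
Q = P @ R_true.T + t_true
T = rigid_transform_from_correspondences(P, Q)
```

The returned 4x4 matrix can then be applied to the three-dimensional tooth volume data via homogeneous coordinates, as described in the method.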
Example Three
This embodiment further provides an electronic device. Referring to fig. 8, the electronic device comprises a memory 404 and a processor 402; the memory 404 stores a computer program, and the processor 402 is arranged to run the computer program so as to perform the steps of any of the method embodiments described above.
In particular, the processor 402 may include a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present application.
The memory 404 may include mass storage for data or instructions. By way of example, and not limitation, the memory 404 may comprise a hard disk drive (HDD), a floppy disk drive, a solid-state drive (SSD), flash memory, an optical disk, a magneto-optical disk, magnetic tape, or a Universal Serial Bus (USB) drive, or a combination of two or more of these. The memory 404 may include removable or non-removable (or fixed) media, where appropriate. The memory 404 may be internal or external to the data processing apparatus, where appropriate. In a particular embodiment, the memory 404 is a non-volatile memory. In particular embodiments, the memory 404 includes read-only memory (ROM) and random access memory (RAM). Where appropriate, the ROM may be a mask-programmed ROM, a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), an electrically alterable ROM (EAROM), or flash memory, or a combination of two or more of these. Where appropriate, the RAM may be static random access memory (SRAM) or dynamic random access memory (DRAM), and the DRAM may be fast page mode DRAM (FPM DRAM), extended data output DRAM (EDO DRAM), synchronous DRAM (SDRAM), or the like.
Memory 404 may be used to store or cache various data files that need to be processed and/or used for communication, as well as possible computer program instructions for execution by processor 402.
The processor 402 reads and executes the computer program instructions stored in the memory 404 to implement any of the multi-modal rendering methods of the above embodiments based on three-dimensional tooth CBCT data and an oral scan model.
Optionally, the electronic device may further include a transmission device 406 and an input/output device 408, wherein the transmission device 406 is connected to the processor 402 and the input/output device 408 is connected to the processor 402.
The transmission device 406 may be used to receive or transmit data via a network. Specific examples of the network may include a wired or wireless network provided by a communication provider of the electronic device. In one example, the transmission device includes a network interface controller (NIC) that can connect to other network devices through a base station so as to communicate with the internet. In one example, the transmission device 406 may be a radio frequency (RF) module configured to communicate with the internet wirelessly.
The input-output device 408 is used to input or output information. In this embodiment, the input information may be CBCT data, an oral scan model, and the like, and the output information may be CBCT rendering results, oral scan model rendering results, and the like.
Alternatively, in this embodiment, the above-mentioned processor 402 may be configured to execute the following steps by means of a computer program:
S101, acquiring CBCT data, inputting the CBCT data into a trained tooth segmentation model for segmentation to obtain three-dimensional tooth volume data, extracting the tooth surfaces of the three-dimensional tooth volume data to obtain tooth surface data, and removing the root portion of each tooth in the tooth surface data to obtain CBCT crown surface data, wherein the CBCT crown surface data is stored in triangular mesh form;
S102, acquiring an oral scan model corresponding to the CBCT data, and removing the gingiva and other soft-tissue regions in the oral scan model to obtain oral scan model crown surface data, wherein the oral scan model crown surface data is stored in triangular mesh form;
S103, acquiring the triangular mesh vertices of each triangular mesh not on a boundary in the CBCT crown surface data and the oral scan model crown surface data, calculating a normal vector for each triangular mesh vertex based on a face-averaging algorithm, acquiring, for each triangular mesh vertex, the sum of the areas of its adjacent triangular meshes and the sum of the squared cosines of the angles between those adjacent triangular meshes and the normal vector of that vertex, calculating the curvature of each triangular mesh vertex from the sum of squared cosines and the sum of areas, setting a curvature threshold, and screening the curvatures of the triangular mesh vertices to obtain a CBCT high-curvature point set and an oral scan model high-curvature point set, respectively;
S104, calculating a feature column vector for each high-curvature triangular mesh vertex in the CBCT high-curvature point set and the oral scan model high-curvature point set, concatenating the feature column vectors to obtain a CBCT feature description matrix and an oral scan model feature description matrix, respectively, constructing a vector similarity matrix from the two feature description matrices, and screening similar points in the vector similarity matrix to obtain a corresponding point set; and
S105, calculating a rigid transformation matrix using the corresponding point set, transforming the three-dimensional tooth volume data using the rigid transformation matrix to obtain rigid transformation volume data, and rendering the rigid transformation volume data and the oral scan model crown surface data.
It should be noted that specific examples in this embodiment may refer to the examples described in the foregoing embodiments and alternative implementations, which are not repeated herein.
In general, the various embodiments may be implemented in hardware or special-purpose circuits, software, logic, or any combination thereof. Some aspects of the invention may be implemented in hardware, while other aspects may be implemented in firmware or software executable by a controller, microprocessor, or other computing device, although the invention is not limited thereto. While various aspects of the invention may be illustrated and described as block diagrams, flow charts, or some other pictorial representation, it should be understood that the blocks, apparatus, systems, techniques, or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special-purpose circuits or logic, general-purpose hardware, controllers or other computing devices, or some combination thereof.
Embodiments of the invention may be implemented by computer software executable by a data processor of a mobile device, such as in a processor entity, or by hardware, or by a combination of software and hardware. Computer software or programs (also referred to as program products) including software routines, applets, and/or macros can be stored in any apparatus-readable data storage medium and they include program instructions for performing particular tasks. The computer program product may include one or more computer-executable components configured to perform embodiments when the program is run. The one or more computer-executable components may be at least one software code or a portion thereof. In this regard, it should also be noted that any block of the logic flow as in fig. 8 may represent a program step, or interconnected logic circuits, blocks and functions, or a combination of program steps and logic circuits, blocks and functions. The software may be stored on a physical medium such as a memory chip or memory block implemented within a processor, a magnetic medium such as a hard disk or floppy disk, and an optical medium such as, for example, a DVD and its data variants, a CD, etc. The physical medium is a non-transitory medium.
Those skilled in the art should understand that the technical features of the above embodiments may be combined in any manner. For brevity, not all possible combinations of these technical features are described; nevertheless, as long as a combination of technical features involves no contradiction, it should be considered to fall within the scope of this description.
The foregoing examples merely represent several embodiments of the present application; their description is relatively specific and detailed, but should not therefore be construed as limiting the scope of the present application. It should be noted that those skilled in the art could make various modifications and improvements without departing from the spirit of the present application, all of which fall within the scope of protection of the present application. Accordingly, the scope of protection of the present application shall be subject to the appended claims.