CN109949319B - Deep learning-based panoramic photo permanent tooth identification method and device - Google Patents

Deep learning-based panoramic photo permanent tooth identification method and device

Info

Publication number
CN109949319B
CN109949319B CN201910184807.8A
Authority
CN
China
Prior art keywords
rectangular frame
model
tooth
permanent tooth
deep learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910184807.8A
Other languages
Chinese (zh)
Other versions
CN109949319A (en)
Inventor
徐子能
白海龙
丁鹏
汪子晨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Deepcare Information Technology Co ltd
Original Assignee
Beijing Deepcare Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Deepcare Information Technology Co ltd filed Critical Beijing Deepcare Information Technology Co ltd
Priority to CN201910184807.8A priority Critical patent/CN109949319B/en
Publication of CN109949319A publication Critical patent/CN109949319A/en
Application granted granted Critical
Publication of CN109949319B publication Critical patent/CN109949319B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention provides a deep learning-based method and device for identifying permanent teeth in panoramic radiographs. The method comprises the following steps: inputting the original panoramic image into a deep learning-based alveolar bone line segmentation model to obtain an alveolar bone line segmentation result; generating a fitted rectangular frame from the alveolar bone line segmentation result, generating an expanded rectangular frame from the fitted rectangular frame, and cropping the periodontal-region image block from the original panoramic image according to the expanded rectangular frame; inputting the periodontal-region image block into a deep learning-based permanent tooth segmentation model to obtain a permanent tooth segmentation result; and labeling the tooth number of each permanent tooth in the permanent tooth segmentation result.

Description

Panoramic photo permanent tooth identification method and device based on deep learning
Technical Field
The invention relates to the technical field of computer image processing, and in particular to a deep learning-based method and device for identifying permanent teeth in panoramic radiographs.
Background
The oral panoramic radiograph is a primary basis for oral diagnosis. It clearly and completely displays the maxilla and mandible, the upper and lower dentitions, the alveolar bone, the maxillary sinus cavity, sinus walls and sinus floor, and the temporomandibular joints, providing accurate and effective support for diagnosing diseases of the jaws and surrounding structures.
The segmentation of permanent teeth and the identification of their tooth numbers are essential for the subsequent assessment of tooth health and diagnosis of disease. Figure 1 shows the standard tooth position numbering.
Manual identification of tooth positions is difficult in cases of tooth loss, particularly when teeth have been missing for a long time. Moreover, medical resources in China are scarce and medical experts are in short supply; if the work relies entirely on manual operation, visual fatigue sets in and the accuracy and credibility of the results decline. Under these circumstances, a method and apparatus for automatically identifying permanent teeth in panoramic radiographs is urgently needed.
Disclosure of Invention
In view of the above, the present invention provides a deep learning-based method and apparatus for identifying permanent teeth in panoramic radiographs, so as to overcome the above disadvantages of the prior art.
The deep learning-based panoramic radiograph permanent tooth identification method according to an embodiment of the invention comprises the following steps: inputting the original panoramic image into a deep learning-based alveolar bone line segmentation model to obtain an alveolar bone line segmentation result; generating a fitted rectangular frame from the alveolar bone line segmentation result, generating an expanded rectangular frame from the fitted rectangular frame, and cropping the periodontal-region image block from the original panoramic image according to the expanded rectangular frame; inputting the periodontal-region image block into a deep learning-based permanent tooth segmentation model to obtain a permanent tooth segmentation result; and labeling the tooth number of each permanent tooth in the permanent tooth segmentation result.
Optionally, the deep learning-based alveolar bone line segmentation model is obtained as follows: (1) model design stage: the network comprises an encoder part and a decoder part; the encoder adopts an Xception network structure, the decoder adopts an FPN multi-scale fusion structure, and five convolution layers of different depths are extracted from the encoder as its input. (2) model training stage: size normalization, gray-level normalization, data enhancement and data balancing are applied to the alveolar bone line segmentation training data; the encoder parameters are pre-trained on the large-scale public image dataset ImageNet while the decoder parameters are randomly initialized; the model is iteratively trained with a gradient descent algorithm to reach an optimal network solution, and the optimal model parameters are selected according to the Dice value on the validation set.
Optionally, the step of generating an expanded rectangular frame from the fitted rectangular frame comprises: moving the left boundary of the fitted rectangular frame leftwards by a preset distance to obtain the left boundary of the expanded rectangular frame; moving the right boundary rightwards by the same preset distance to obtain the right boundary of the expanded rectangular frame; moving the upper boundary upwards by 0.5 times the height of the fitted rectangular frame to obtain the upper boundary of the expanded rectangular frame; and moving the lower boundary downwards by 0.8 times the height of the fitted rectangular frame to obtain the lower boundary of the expanded rectangular frame.
Optionally, the deep learning-based permanent tooth segmentation model is obtained as follows: (1) model design stage: the network comprises an encoder part and a decoder part; the encoder adopts an Xception network structure, the decoder adopts an FPN multi-scale fusion structure, five convolution layers of different depths are extracted from the encoder as its input, and an scSE structure is added to the upsampling layers of the decoder. (2) model training stage: size normalization, gray-level normalization, data enhancement and data balancing are applied to the permanent tooth segmentation training data; the encoder parameters are pre-trained on the large-scale public image dataset ImageNet while the decoder parameters are randomly initialized; the model is iteratively trained with a gradient descent algorithm to reach an optimal network solution, and the optimal model parameters are selected according to the Dice value on the validation set.
Optionally, the random initialization method includes: he_normal, lecun_uniform, glorot_normal, glorot_uniform, or lecun_normal.
Optionally, the gradient descent algorithm comprises: Adam, SGD, RMSprop or Adadelta.
Optionally, before the step of labeling the tooth number of each permanent tooth according to the permanent tooth segmentation result, the method further includes: and performing morphological opening operation processing on the permanent tooth segmentation result.
The panoramic picture permanent tooth recognition device based on deep learning of the embodiment of the invention comprises: one or more processors; a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement any of the methods of the present invention.
According to the technical scheme of the invention, based on artificial intelligence technology, a deep learning algorithm realizes 32-class segmentation and identification of the 32 permanent teeth, with each class corresponding to one tooth number. The originally manual operation can thus be completed automatically, with the advantages of objectivity, speed and good repeatability.
Drawings
The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting the invention. Wherein:
FIG. 1 is a schematic illustration of standard tooth position numbering;
FIG. 2 is a flow chart of a method for panoramic photo permanent tooth identification based on deep learning according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of generating a fitted rectangular frame and an expanded rectangular frame according to an alveolar bone segmentation result according to an embodiment of the present invention;
fig. 4 is a diagram illustrating the result of permanent tooth segmentation according to an embodiment of the present invention.
Detailed Description
Fig. 2 is a schematic flow chart of the deep learning-based panoramic radiograph permanent tooth identification method according to an embodiment of the present invention, which includes the following steps A to D.
Step A: and inputting the original panoramic picture into a deep learning-based alveolar bone line segmentation model to obtain an alveolar bone line segmentation result.
Optionally, the deep learning-based alveolar bone line segmentation model in step A is obtained as follows: (1) model design stage: the network comprises an encoder part and a decoder part; the encoder adopts an Xception network structure, the decoder adopts an FPN multi-scale fusion structure, and five convolution layers of different depths are extracted from the encoder as its input. (2) model training stage: size normalization, gray-level normalization, data enhancement and data balancing are applied to the alveolar bone line segmentation training data; the encoder parameters are pre-trained on the large-scale public image dataset ImageNet while the decoder parameters are randomly initialized; the model is iteratively trained with a gradient descent algorithm to reach an optimal network solution, and the optimal model parameters are selected according to the Dice value on the validation set.
Step B: generating a fitted rectangular frame from the alveolar bone line segmentation result, generating an expanded rectangular frame from the fitted rectangular frame, and cropping the periodontal-region image block from the original panoramic image according to the expanded rectangular frame. Note that both the subsequent model training and model application operate on this periodontal-region image block: compared with the complete panoramic image, its information is more concentrated and effective; compared with the region inside the alveolar bone alone, its coverage is more complete. This makes the approach more scientific and reasonable.
Optionally, the step of "generating an expanded rectangular frame from the fitted rectangular frame" in step B specifically comprises: moving the left boundary of the fitted rectangular frame leftwards by a preset distance to obtain the left boundary of the expanded rectangular frame; moving the right boundary rightwards by the same preset distance to obtain the right boundary of the expanded rectangular frame; moving the upper boundary upwards by 0.5 times the height of the fitted rectangular frame to obtain the upper boundary of the expanded rectangular frame; and moving the lower boundary downwards by 0.8 times the height of the fitted rectangular frame to obtain the lower boundary of the expanded rectangular frame.
Step C: inputting the periodontal-region image block into the deep learning-based permanent tooth segmentation model to obtain a permanent tooth segmentation result.
Optionally, the deep learning-based permanent tooth segmentation model in step C is obtained by: (1) a model design stage: the network structure comprises an encoder part and a decoder part, wherein the encoder part adopts an Xception network structure, the decoder part adopts an FPN multi-scale fusion structure, convolution layers with five different depths are extracted from the encoder part as input, and a scSE structure is added into an upsampling layer of the decoder part; (2) a model training stage: the method comprises the steps of carrying out size normalization, gray normalization, data enhancement and data balancing processing on permanent tooth segmentation training data, wherein parameters of an encoder part adopt pre-trained parameters on a large-scale public image data set ImageNet, parameters of a decoder part adopt random initialization, iterative training is carried out on a model by adopting a gradient descent algorithm to obtain a network optimal solution, and the optimal parameters of the model are determined according to a Dice value on a verification set.
Step D: and marking the tooth number of each permanent tooth in the permanent tooth segmentation result.
The permanent tooth segmentation model is essentially a 32-class segmenter: at prediction time the trained model produces 32 segmentation maps, one for each of the 32 permanent teeth, so each map corresponds directly to a tooth number. Finally, the 32 segmentation results are overlaid as 32 layers and the complete permanent tooth identification image is output.
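The correspondence between the 32 output maps and tooth numbers can be sketched as follows. This is a minimal numpy illustration: the FDI channel ordering and the 0.5 threshold are assumptions for the example only — the patent states just that each of the 32 classes corresponds to one tooth number.

```python
import numpy as np

# FDI two-digit tooth numbers for the 32 permanent teeth; the channel order
# (quadrants 1-4, positions 1-8) is an assumption for this sketch.
FDI_NUMBERS = [q * 10 + i for q in (1, 2, 3, 4) for i in range(1, 9)]

def label_teeth(prob_maps, threshold=0.5):
    """prob_maps: (32, H, W) per-tooth segmentation maps.  Returns an (H, W)
    array of FDI numbers (0 = background), overlaying the 32 channels so that
    each pixel takes the number of its highest-probability tooth."""
    best = prob_maps.argmax(axis=0)                 # winning channel per pixel
    confident = prob_maps.max(axis=0) >= threshold  # suppress weak pixels
    fdi = np.asarray(FDI_NUMBERS)[best]
    return np.where(confident, fdi, 0)

maps = np.zeros((32, 2, 2))
maps[0, 0, 0] = 0.9   # channel 0  -> FDI 11 at pixel (0, 0)
maps[17, 1, 1] = 0.8  # channel 17 -> FDI 32 at pixel (1, 1)
out = label_teeth(maps)
print(out)  # [[11  0], [ 0 32]]
```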
It should be noted that the random initialization method in the model training stage above may be: he_normal, lecun_uniform, glorot_normal, glorot_uniform or lecun_normal, most preferably he_normal. The gradient descent algorithm in the model training stage above may be: Adam, SGD, RMSprop or Adadelta, most preferably Adam.
Optionally, before step D, the method further comprises: performing a morphological opening operation on the permanent tooth segmentation result. The opening operation (erosion followed by dilation) effectively removes false-positive segmentation results.
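A dependency-free sketch of the opening operation (erosion, then dilation, with a 3×3 structuring element). In practice a library routine such as OpenCV's cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel) does the same thing.

```python
import numpy as np

def _erode(mask):
    """3x3 binary erosion: keep a pixel only if its full 3x3 neighborhood is set."""
    p = np.pad(mask, 1, constant_values=0)
    out = np.ones(mask.shape, dtype=bool)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out &= p[dy:dy + mask.shape[0], dx:dx + mask.shape[1]].astype(bool)
    return out

def _dilate(mask):
    """3x3 binary dilation: set a pixel if any of its 3x3 neighbors is set."""
    p = np.pad(mask, 1, constant_values=0)
    out = np.zeros(mask.shape, dtype=bool)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out |= p[dy:dy + mask.shape[0], dx:dx + mask.shape[1]].astype(bool)
    return out

def opening(mask):
    """Morphological opening (erosion then dilation): removes specks smaller
    than the structuring element while keeping larger tooth regions."""
    return _dilate(_erode(mask)).astype(mask.dtype)

m = np.zeros((7, 7), dtype=int)
m[1:5, 1:5] = 1   # a 4x4 tooth-like blob
m[6, 6] = 1       # an isolated false-positive speck
print(opening(m)[6, 6], opening(m)[2, 2])  # 0 1
```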
The panoramic picture permanent tooth recognition device based on deep learning of the embodiment of the invention comprises: one or more processors; a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of any of the embodiments of the present invention.
For the better understanding of those skilled in the art, the deep learning-based panoramic radiograph permanent tooth identification method according to an embodiment of the present invention is described below with reference to a specific embodiment.
(I) Alveolar bone line segmentation based on deep learning:
the designed alveolar bone line segmentation network model structure comprises an Encoder part and a Decode part. The Encoder part uses an Xception network structure, the Decoder part uses an FPN multi-scale fusion structure, and 5 convolution layers with different depths are extracted from the Encoder part as input; the model was noted as PanoNet-S1.
Size normalization and gray-level normalization are applied to the alveolar bone line segmentation training data. Specifically, the original panoramic image (around 1200 × 2700) is first down-sampled to 448 × 896; then the gray values of the sampled image are normalized according to the following formula (where x denotes the original gray value and y the normalized gray value):
[Formula image BDA0001992502600000061: the gray-value normalization formula is published as an image and is not reproduced here.]
Training data enhancement is then performed: the images in the original data set are augmented by rotation (−10° to 10°) and left-right mirroring to meet the deep network's demand for data volume.
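The preprocessing can be sketched as follows. Because the gray-normalization formula is published only as an image, min-max scaling to [0, 1] is assumed here, and nearest-neighbor down-sampling stands in for whatever interpolation the authors used — both are assumptions for illustration.

```python
import numpy as np

def preprocess(img, target=(448, 896)):
    """Down-sample a panoramic image (~1200 x 2700) to 448 x 896 and
    normalize gray values.  Min-max scaling to [0, 1] is assumed; the
    patent's exact formula is published only as an image."""
    h, w = target
    ys = np.linspace(0, img.shape[0] - 1, h).astype(int)
    xs = np.linspace(0, img.shape[1] - 1, w).astype(int)
    small = img[np.ix_(ys, xs)].astype(np.float32)  # nearest-neighbor resize
    lo, hi = small.min(), small.max()
    return (small - lo) / (hi - lo + 1e-8)

def augment_mirror(img):
    """Left-right mirroring; the patent also rotates by -10..10 degrees
    (e.g. scipy.ndimage.rotate), omitted here to stay dependency-free."""
    return img[:, ::-1]

img = (np.random.rand(1200, 2700) * 255).astype(np.uint8)
print(preprocess(img).shape)  # (448, 896)
```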
Model training is then performed: the alveolar bone line segmentation training data (original plus augmented) are input into PanoNet-S1. The parameters of the Encoder part of PanoNet-S1 are pre-trained on the large-scale public image dataset ImageNet, and the parameters of the Decoder part are randomly initialized. The model is iteratively trained with the gradient descent algorithm Adam until it converges to an optimal network solution, and the optimal model parameters are selected according to the Dice value on the validation set.
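The Dice value used for model selection can be computed for a pair of binary masks as 2|A∩B| / (|A| + |B|); a minimal sketch:

```python
import numpy as np

def dice(pred, target, eps=1e-8):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|).
    eps guards against division by zero when both masks are empty."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

a = np.zeros((4, 4), dtype=int); a[:2] = 1   # top two rows
b = np.zeros((4, 4), dtype=int); b[1:3] = 1  # middle two rows
print(round(dice(a, b), 3))  # 0.5
```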
The test panoramic image is input into the trained PanoNet-S1 to obtain the alveolar bone line segmentation result mask1.
(II) extracting image blocks of periodontal area
As shown in fig. 3, a fitted rectangular frame rect is generated from the alveolar bone line segmentation result mask1 and expanded into the rectangular frame rect′ according to the following formulas:
w′=w+2*sw
h′=h+sh1+sh2
sh1=0.5*h
sh2=0.8*h
wherein sw can take the value of 250. The periodontal-region image block patch1 is then cropped from the original image according to the expanded rectangular frame rect′.
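The expansion formulas above can be sketched as follows, using (x, y, w, h) boxes with y increasing downward and the suggested sw = 250:

```python
def expand_rect(x, y, w, h, sw=250):
    """Expand a fitted alveolar-bone rectangle (x, y, w, h) into the
    periodontal-region crop box per the formulas above:
    w' = w + 2*sw,  h' = h + sh1 + sh2,  sh1 = 0.5*h,  sh2 = 0.8*h."""
    sh1 = 0.5 * h  # extend upward by half the fitted height
    sh2 = 0.8 * h  # extend downward by 0.8x the fitted height
    return (x - sw, y - sh1, w + 2 * sw, h + sh1 + sh2)

# Example: a fitted box at (300, 400) sized 2000 x 600
x2, y2, w2, h2 = expand_rect(300, 400, 2000, 600)
print(x2, y2, w2, h2)  # 50 100.0 2500 1380.0
```

In an implementation, the expanded box would also be clipped to the image bounds before cropping.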
(III) obtaining the segmentation result of the permanent teeth
The designed permanent tooth segmentation network comprises an Encoder part and a Decoder part. The Encoder uses an Xception network structure, the Decoder uses an FPN multi-scale fusion structure, 5 convolution layers of different depths are extracted from the Encoder as input, and an scSE structure is added to the upsampling layers of the Decoder; the model is denoted PanoNet-S2.
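A minimal numpy sketch of the scSE idea on a (C, H, W) feature map. The real scSE block uses a small two-layer bottleneck for the channel gate and a learned 1×1 convolution for the spatial gate; both are simplified here to single linear gates, and w_channel / w_spatial are hypothetical weight placeholders.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def scse(x, w_channel, w_spatial):
    """Concurrent spatial & channel squeeze-and-excitation (scSE), simplified:
    cSE reweights channels from global-average statistics, sSE reweights
    pixels from a channel mix, and the two excitations are combined by max."""
    # channel SE: global average pool -> gate per channel
    z = x.mean(axis=(1, 2))                                    # (C,)
    cse = x * sigmoid(w_channel @ z)[:, None, None]
    # spatial SE: per-pixel projection across channels -> gate per pixel
    s_gate = sigmoid(np.tensordot(w_spatial, x, axes=(0, 0)))  # (H, W)
    sse = x * s_gate[None, :, :]
    return np.maximum(cse, sse)

y = scse(np.ones((4, 3, 3)), np.zeros((4, 4)), np.zeros(4))
print(y.shape)  # (4, 3, 3)
```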
Size normalization and gray-level normalization are applied to the permanent tooth segmentation training data. Specifically, the original panoramic image is first down-sampled to 512 × 1024; then the gray values of the sampled image are normalized (by the same method as above). Training data enhancement is then performed: the images in the original data set are augmented by rotation (−10° to 10°) and left-right mirroring to meet the deep network's demand for data volume.
Model training is then performed: the permanent tooth segmentation training data (original plus augmented) are input into PanoNet-S2. The parameters of the Encoder part of PanoNet-S2 are pre-trained on the large-scale public image dataset ImageNet, and the parameters of the Decoder part are randomly initialized with he_normal. The model is iteratively trained with the gradient descent algorithm Adam until it converges to an optimal network solution, and the optimal model parameters are selected according to the Dice value on the validation set.
The periodontal-region image block patch1 is input into the trained permanent tooth segmentation model PanoNet-S2 to obtain the permanent tooth segmentation result mask2. Preferably, a morphological opening operation is applied to mask2, mainly to smooth the contours of the tooth segmentation results and to remove small false-positive non-tooth structures.
(IV) permanent tooth numbering
And finally, marking the tooth number of each permanent tooth in the permanent tooth segmentation result mask2, as shown in fig. 4.
According to the technical scheme of the invention, based on artificial intelligence technology, a deep learning algorithm realizes 32-class segmentation and identification of the 32 permanent teeth; the originally manual operation can be completed automatically, with the advantages of objectivity, speed and good repeatability.
The above-described embodiments should not be construed as limiting the scope of the invention. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may occur depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (6)

1. A panoramic picture permanent tooth identification method based on deep learning is characterized by comprising the following steps:
inputting the original panoramic picture into a deep learning-based alveolar bone line segmentation model to obtain an alveolar bone line segmentation result;
generating a fitted rectangular frame according to the alveolar bone line segmentation result, then generating an expanded rectangular frame according to the fitted rectangular frame, and then cropping the periodontal-region image block from the original panoramic picture according to the expanded rectangular frame;
inputting the periodontal-region image block into a deep learning-based permanent tooth segmentation model to obtain a permanent tooth segmentation result;
marking the tooth number of each permanent tooth in the permanent tooth segmentation result;
the alveolar bone line segmentation model based on deep learning is obtained by the following steps:
(1) a model design stage: the network structure comprises an encoder part and a decoder part, wherein the encoder part adopts an Xception network structure, the decoder part adopts an FPN multi-scale fusion structure, and five convolution layers of different depths are extracted from the encoder part as input;
(2) a model training stage: carrying out size normalization, gray level normalization, data enhancement and data balancing processing on alveolar bone line segmentation training data, wherein parameters of an encoder part adopt pre-trained parameters on a large-scale public image data set ImageNet, parameters of a decoder part adopt random initialization, a model is subjected to iterative training by adopting a gradient descent algorithm to obtain a network optimal solution, and optimal parameters of the model are determined according to a Dice value on a verification set;
the permanent tooth segmentation model based on deep learning is obtained by the following method:
(1) a model design stage: the network structure comprises an encoder part and a decoder part, wherein the encoder part adopts an Xception network structure, the decoder part adopts an FPN multi-scale fusion structure, five convolution layers of different depths are extracted from the encoder part as input, and an scSE structure is added to the upsampling layers of the decoder part;
(2) a model training stage: the method comprises the steps of carrying out size normalization, gray normalization, data enhancement and data balancing processing on permanent tooth segmentation training data, wherein parameters of an encoder part adopt pre-trained parameters on a large-scale public image data set ImageNet, parameters of a decoder part adopt random initialization, iterative training is carried out on a model by adopting a gradient descent algorithm to obtain a network optimal solution, and the optimal parameters of the model are determined according to a Dice value on a verification set.
2. The method of claim 1, wherein the step of generating an expanded rectangular frame from the fitted rectangular frame comprises:
moving the left boundary of the fit rectangular frame leftwards by a preset distance to obtain the left boundary of the extended rectangular frame;
the right boundary of the fit rectangular frame moves rightwards by the preset distance to obtain the right boundary of the extended rectangular frame;
moving the upper boundary of the fitting rectangular frame upwards by 0.5 times of the height value of the fitting rectangular frame to obtain the upper boundary of the expanded rectangular frame;
and the lower boundary of the fitting rectangular frame moves downwards by 0.8 times of the height value of the fitting rectangular frame so as to obtain the lower boundary of the expanded rectangular frame.
3. The method of claim 1, wherein the random initialization method is: he_normal, lecun_uniform, glorot_normal, glorot_uniform, or lecun_normal.
4. The method of claim 1, wherein the gradient descent algorithm is: Adam, SGD, RMSprop or Adadelta.
5. The method according to claim 1, wherein before the step of labeling the tooth number of each permanent tooth according to the permanent tooth segmentation result, the method further comprises: and performing morphological opening operation processing on the permanent tooth segmentation result.
6. A deep learning-based panoramic radiograph permanent tooth identification device, characterized by comprising: one or more processors; and a storage device storing one or more programs which, when executed by the one or more processors, cause the one or more processors to carry out the method according to any one of claims 1 to 5.
CN201910184807.8A 2019-03-12 2019-03-12 Depth learning-based panoramic photo permanent tooth identification method and device Active CN109949319B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910184807.8A CN109949319B (en) 2019-03-12 2019-03-12 Depth learning-based panoramic photo permanent tooth identification method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910184807.8A CN109949319B (en) 2019-03-12 2019-03-12 Depth learning-based panoramic photo permanent tooth identification method and device

Publications (2)

Publication Number Publication Date
CN109949319A CN109949319A (en) 2019-06-28
CN109949319B true CN109949319B (en) 2022-05-20

Family

ID=67009750

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910184807.8A Active CN109949319B (en) 2019-03-12 2019-03-12 Depth learning-based panoramic photo permanent tooth identification method and device

Country Status (1)

Country Link
CN (1) CN109949319B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111292313B (en) * 2020-02-28 2023-04-28 恩施京植咿呀雅口腔医院有限公司 Dental filling quality evaluation method and device
CN112308867B (en) * 2020-11-10 2022-07-22 上海商汤智能科技有限公司 Tooth image processing method and device, electronic equipment and storage medium
CN113326871A (en) * 2021-05-19 2021-08-31 天津理工大学 Cloud edge cooperative meniscus detection method and system
CN117788355A (en) * 2022-06-20 2024-03-29 杭州朝厚信息科技有限公司 Method for dividing constant teeth and deciduous teeth in full-view oral cavity film and determining tooth number

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
US9626462B2 (en) * 2014-07-01 2017-04-18 3M Innovative Properties Company Detecting tooth wear using intra-oral 3D scans
CN108648283B (en) * 2018-04-02 2022-07-05 北京正齐口腔医疗技术有限公司 Tooth segmentation method and device
CN108389207A (en) * 2018-04-28 2018-08-10 上海视可电子科技有限公司 A kind of the tooth disease diagnosing method, diagnostic device and intelligent image harvester
CN109146897B (en) * 2018-08-22 2021-08-03 北京羽医甘蓝信息技术有限公司 Oral cavity radiation image quality control method and device
CN109146867B (en) * 2018-08-24 2021-11-19 四川智动木牛智能科技有限公司 Oral cavity curved surface CT image biological feature extraction and matching method and device
CN109360196B (en) * 2018-09-30 2021-09-28 北京羽医甘蓝信息技术有限公司 Method and device for processing oral cavity radiation image based on deep learning

Also Published As

Publication number Publication date
CN109949319A (en) 2019-06-28

Similar Documents

Publication Publication Date Title
CN109949319B (en) Depth learning-based panoramic photo permanent tooth identification method and device
CN109978841B (en) Method and device for recognizing impacted tooth of panoramic picture based on deep learning
US11020206B2 (en) Tooth segmentation based on anatomical edge information
CN109360196B (en) Method and device for processing oral cavity radiation image based on deep learning
BR112020012292A2 (en) automated prediction of 3d root format using deep learning methods
JP2021535493A (en) Automated orthodontic treatment planning using deep learning
CN111784754B (en) Tooth orthodontic method, device, equipment and storage medium based on computer vision
JP2019524367A5 (en)
WO2017205484A1 (en) Virtual modeling of gingiva adaptations to progressive orthodontic correction & associated methodology of appliance manufacture
WO2016197370A1 (en) Segmentation and reconstruction method and device for teeth and alveolar bone
CN109961427A (en) The method and apparatus of whole scenery piece periapical inflammation identification based on deep learning
CN109766877B (en) Method and device for recognizing artificial teeth of panoramic film based on deep learning
US20160256246A1 (en) Negative and positive merge modelling
KR20220103902A (en) Automated method for aligning 3d dental data and computer readable medium having program for performing the method
CN113223140A (en) Method for generating image of orthodontic treatment effect by using artificial neural network
CN107622529B (en) Three-dimensional tooth model automatic segmentation method based on morphology
US20220378548A1 (en) Method for generating a dental image
CN109544530B (en) Method and system for automatically positioning structural feature points of X-ray head radiography measurement image
CN109754870B (en) Depth learning-based panoramic alveolar bone resorption grading method and device
WO2022174747A1 (en) Method for segmenting computed tomography image of teeth
CN115953359A (en) Digital oral cavity model mark point identification method and device and electronic equipment
CN115731169A (en) Method and system for automatically determining abutment based on deep learning and electronic equipment
JP7269587B2 (en) segmentation device
KR102473139B1 (en) System for recognizing implant based on artificial intelligence and method thereof
CN109948619A (en) The method and apparatus of whole scenery piece dental caries identification based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant