CN112418170A - Oral examination and identification method based on 3D scanning - Google Patents
- Publication number
- CN112418170A (application CN202011436558.6A)
- Authority
- CN
- China
- Prior art keywords
- scanning
- gth
- loss function
- method based
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
- G06N20/20—Ensemble learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Software Systems (AREA)
- Probability & Statistics with Applications (AREA)
- Multimedia (AREA)
- Medical Informatics (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to the field of oral examination, and in particular to an oral examination and identification method based on 3D scanning, comprising the following steps: S1, acquiring a dental image through 3D scanning; S2, preprocessing the dental image, wherein the preprocessing comprises labeling the data and sample augmentation; S3, performing deep learning training on the preprocessed dental images and establishing an algorithm model, wherein the algorithm model is a multi-scale model built with a residual network as the backbone combined with a target detection algorithm; S4, performing tooth recognition on the oral scan image to be recognized through the multi-scale algorithm model. Because the multi-scale model performs two-stage detection through a two-stage loss function, the network gains the ability to go from coarse detection to fine detection.
Description
Technical Field
The invention belongs to the field of oral examination, and particularly relates to an oral examination identification method based on 3D scanning.
Background
Oral health and disease prevention receive ever more attention, and intraoral examination has become part of everyday life. In the past, most intraoral examinations relied on an experienced doctor manually observing the mouth with a mirror and a flashlight to assess oral health. Teeth, however, are small targets, damage to a tooth is sometimes not visually obvious, and visual inspection is prone to error, which hinders diagnosis.
Disclosure of Invention
The invention provides an oral examination identification method based on 3D scanning, which aims to solve the problems and comprises the following steps:
S1, acquiring a dental image through 3D scanning;
S2, preprocessing the dental image, wherein the preprocessing comprises labeling the data and sample augmentation;
S3, performing deep learning training on the preprocessed dental images and establishing an algorithm model, wherein the algorithm model is a multi-scale model built with a residual network as the backbone combined with a target detection algorithm;
S4, performing tooth recognition on the oral scan image to be recognized through the multi-scale algorithm model.
Preferably, the annotation data includes a type of the dental disease and a degree of risk.
Preferably, the sample augmentation comprises copying the diseased-tooth region of the image, pasting the copy at a random location elsewhere in the sample, and labeling the copied region.
Preferably, in step S2 the data are divided by five-fold cross-validation: 80% of the data are randomly extracted for model training and the remaining 20% are used for validation; the data are divided into 5 parts for the five-fold cross-validation, and each training set is trained for 40 epochs.
Preferably, the learning rate of model training is adjusted at equal intervals, being multiplied by 0.8 every 10 epochs.
Preferably, the loss function loss of the multi-scale algorithm model comprises a cross-entropy loss function for the classification task and a loss function for the detection task, characterized by:
loss = CE(x, gth_c) + DL(x, gth_d)    (1)
where CE is the cross-entropy loss function of the classification task, gth_c is the ground-truth class label, DL is the loss function of the detection task, and gth_d is the ground-truth label of the detection box.
Preferably, the loss function of the detection task is characterized by:
DL(x, gth_d) = (1/N)(L_c(x, c) + a·L_loc(l, gth_d))    (2)
where L_loc(l, gth_d) is a smooth L1-norm loss function between the predicted and ground-truth boxes, and a is a coefficient.
Preferably, L_loc(l, gth_d) is characterized by:
L_loc(l, gth_d) = ∑ smooth_L1(l(i) − gth_d(i))    (3).
Preferably, L_c(x, c) is the confidence loss function between the detection outputs and the classes, i.e. a cross-entropy loss function, characterized by:
L_c(x, c) = −∑ x_p(i) log(c_p(i)) − ∑ x_n(i) log(c_n(i))    (4).
Preferably, the model is quantized after training is complete.
The invention has the following beneficial effects. An oral examination and identification method based on 3D scanning is provided in which a multi-scale algorithm model is built with a residual network as the backbone combined with a target detection algorithm. Two-stage detection through a two-stage loss function gives the network the ability to go from coarse to fine detection as well as a multi-scale learning capability: the network first down-samples to extract high-level semantic information, then fuses and concatenates the high-level and low-level features into new features for target detection, a strategy that effectively improves the detection of small targets.
Drawings
FIG. 1 is a block diagram illustrating steps performed by an embodiment of the present invention;
FIG. 2 is a schematic diagram of a multi-scale algorithm model according to an embodiment of the present invention;
FIG. 3 is a graph comparing the convergence rate of the improved method of the present invention with that of the classical method.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. Unless otherwise specified, the technical means used in the examples are conventional means well known to those skilled in the art.
In the description of the present invention, it is to be understood that the terms "longitudinal", "lateral", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like, indicate orientations or positional relationships based on those shown in the drawings, are merely for convenience of description of the present invention, and do not indicate or imply that the referenced devices or elements must have a particular orientation, be constructed and operated in a particular orientation, and thus, are not to be construed as limiting the present invention.
As shown in fig. 1-3, the oral examination identification method based on 3D scanning includes the following steps:
S1, acquiring a dental image through 3D scanning;
S2, preprocessing the dental image, wherein the preprocessing comprises labeling the data and sample augmentation;
S3, performing deep learning training on the preprocessed dental images and establishing an algorithm model, wherein the algorithm model is a multi-scale model built with a residual network as the backbone combined with a target detection algorithm;
S4, performing tooth recognition on the oral scan image to be recognized through the multi-scale algorithm model.
Using a digital intraoral scanner with a staring-mode structured-light three-dimensional measurement technique, a plurality of three-dimensional images are generated in real time from different viewing angles, and single-tooth or full-arch three-dimensional data are created with a fast stitching algorithm.
The images are RGB images at 224×224 resolution, and the same specification is used in both the training and prediction stages. The dental images are first used to train a detection algorithm model; the trained model is then used to detect and identify diseased teeth in newly acquired tooth images.
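As a rough illustration of the fixed 224×224 input specification, here is a minimal nearest-neighbour resize sketch in Python. The function name and the list-of-tuples image representation are illustrative assumptions; the patent does not specify which resampling method is used:

```python
def resize_nearest(img, out_w=224, out_h=224):
    """Nearest-neighbour resize of an RGB image stored as a list of rows of
    (r, g, b) pixel tuples. Illustrative only: the patent fixes the 224x224
    target size but not the resampling algorithm."""
    in_h, in_w = len(img), len(img[0])
    return [[img[y * in_h // out_h][x * in_w // out_w] for x in range(out_w)]
            for y in range(out_h)]
```

Any scan, whatever its native resolution, would pass through such a step so that training and prediction see the same input shape.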
Preferably, the annotation data include the tooth category and a risk level. Category labels cover conditions such as dental caries; the risk level mainly represents the degree of damage to the tooth, i.e. the severity of the disease, and these finer-grained levels provide a basis for later identification.
Preferably, the sample augmentation comprises copying the diseased-tooth region of the image, pasting the copy at a random location elsewhere in the sample, and labeling the copied region.
After the tooth scan image is obtained, the diseased-tooth region in the image is labeled, and the pixel coordinates (x1, y1) and (x2, y2) of diagonally opposite corners of the labeling box, measured from the image origin, are recorded. These uniquely describe a rectangular labeling box whose four vertices are (x1, y1), (x1, y2), (x2, y1) and (x2, y2).
The labeling box occupies a very low proportion of the whole image area, so, to increase the number of diseased-tooth samples, the image inside the labeling box is randomly copied to other locations of the image. First, the top-left vertex (x_n, y_n) of a new labeling box is randomly generated within the image region [1280, 760]; from the height H and width W of the labeling box, the bottom-right vertex of the new box follows as (x_n + W, y_n − H). If the new box does not exceed the image area and does not overlap the original labeling box, it is kept, and the pixel values of the original labeling box region are assigned to the pixels of the new labeling box.
Repeating this process yields copies of several diseased-tooth regions and the positions of the corresponding labeling boxes in a single image.
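The paste-location search described above can be sketched as follows. `paste_position` and `overlaps` are hypothetical helper names, and the retry count is an assumption; the patent only requires that the new box stay inside the image and not overlap the existing annotation:

```python
import random

def overlaps(a, b):
    """Axis-aligned boxes (x1, y1, x2, y2); True if they share any area."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def paste_position(img_w, img_h, box_w, box_h, existing, tries=100, seed=0):
    """Randomly pick a spot for the copied lesion patch that stays inside the
    image and avoids every existing annotation box; None if no spot is found."""
    rng = random.Random(seed)
    for _ in range(tries):
        x = rng.randint(0, img_w - box_w)
        y = rng.randint(0, img_h - box_h)
        cand = (x, y, x + box_w, y + box_h)
        if not any(overlaps(cand, b) for b in existing):
            return cand
    return None
```

Each accepted position would then receive the copied pixel values, and its box coordinates and category would be appended to the annotation data.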
Within the entire dental image the diseased tooth is relatively small, and the amount of diseased-tooth data is far smaller than the number of samples from other regions. In image detection terms this is both a small-target problem and a data-imbalance problem, both of which strongly degrade algorithm performance, so a method of increasing the diseased-tooth samples is used to improve the recognition rate: the diseased-tooth region is copied several times and randomly pasted to other areas of the image, and the data and categories of the corresponding labeling boxes are added to the annotation data. All labeled data are preprocessed in the same way. This significantly enlarges the diseased-tooth data and effectively improves the recall and precision of the algorithm.
Preferably, in step S2 the data are divided by five-fold cross-validation: 80% of the data are randomly extracted for model training and the remaining 20% are used for validation; the data are divided into 5 parts for the five-fold cross-validation, and each training set is trained for 40 epochs.
Image samples of the less common dental diseases account for a low proportion of all samples and are few compared with other disease samples; to improve recall, oversampling is used, training these samples multiple times to improve model performance. Because a tooth image contains large blank areas that can easily be identified from their pixel values, several copies of the teeth to be detected can be pasted into those areas without affecting other tooth regions, and this preprocessing greatly raises the probability of recognizing the targets.
Preferably, the learning rate of model training is adjusted at equal intervals, being multiplied by 0.8 every 10 epochs.
Data are divided by five-fold cross-validation: 80% of the data are randomly extracted for model training and the remaining 20% are used for validation. The main parameters of model training are as follows. Epoch (one complete pass over all samples of the training set): each training set is trained for 40 epochs in turn; with 5 training sets for the five-fold cross-validation, training runs for 200 epochs in total, i.e. Epoch = 200. Learning-rate strategy: the initial learning rate is lr = 0.001, and the learning rate is adjusted to 0.8 times its previous value every 10 epochs. Optimizer: Adam with β1 = 0.001, β2 = 0.999, ε = 1.0e−8. ResNet pre-training weights: initialized from weights pre-trained on the ImageNet dataset.
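The fold split and the step-decay learning-rate schedule above can be sketched as below; the even index partition and the function names are illustrative assumptions, not the patent's code:

```python
def five_fold_splits(n_samples, k=5):
    """Partition sample indices into k folds; each round holds out one fold
    (20%) for validation and trains on the rest (80%)."""
    folds = [list(range(i, n_samples, k)) for i in range(k)]
    for held_out in range(k):
        val = folds[held_out]
        train = [i for f, fold in enumerate(folds) if f != held_out for i in fold]
        yield train, val

def learning_rate(epoch, lr0=0.001, gamma=0.8, step=10):
    """Step decay: multiply the initial rate by gamma every `step` epochs,
    matching 'adjusted to 0.8 times its previous value every 10 epochs'."""
    return lr0 * gamma ** (epoch // step)
```

Running 40 epochs per fold over 5 folds gives the 200 total epochs stated above, with the rate decayed three times within each 40-epoch run.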
For the deep learning algorithm model, a MobileNet-SSD model is selected after comprehensively weighing computation load, speed and accuracy, and during training the model hyper-parameters are fine-tuned according to the mAP performance index. mAP is defined as:
mAP = (1/k) ∑ᵢ APᵢ
where k is the number of categories and APᵢ is the average precision of the i-th category, which can be obtained from the precision-recall curve.
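A minimal sketch of the mAP calculation, assuming AP is accumulated as the area under the precision-recall curve without interpolation (the patent does not fix the interpolation scheme, so this is one common choice):

```python
def average_precision(detections, num_gt):
    """AP as the area under the precision-recall curve, accumulated over
    detections sorted by descending confidence. `detections` is a list of
    (score, is_true_positive); `num_gt` is the ground-truth object count."""
    tp = fp = 0
    ap = prev_recall = 0.0
    for _, is_tp in sorted(detections, key=lambda d: -d[0]):
        if is_tp:
            tp += 1
        else:
            fp += 1
        recall = tp / num_gt
        precision = tp / (tp + fp)
        ap += precision * (recall - prev_recall)  # rectangle under the PR curve
        prev_recall = recall
    return ap

def mean_average_precision(ap_per_class):
    """mAP = (1/k) * sum of AP_i over the k categories."""
    return sum(ap_per_class) / len(ap_per_class)
```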
The network model used is a multi-task hybrid model. Part of the model structure serves classification; the loss function of the classification task is typically cross-entropy, because cross-entropy describes well the degree of difference between two probability distributions of the same random variable.
The other part of the model structure serves detection, whose task is mainly to regress the vertex coordinates of the labeling boxes, typically with an L1 or L2 loss function. The L1 loss is more robust than the L2 loss and is not especially sensitive to outliers in the training data. The L2 loss is strongly influenced by outliers, so during training much of the normal training data can easily be drowned out by that influence, making convergence too slow or preventing convergence to the optimal solution. In the diseased-tooth detection scenario there are many preprocessing steps and the labeling procedure is relatively involved, so training-data noise from human error is hard to avoid; the L1 loss function is therefore chosen here.
The joint application of the multiple loss functions effectively improves the convergence speed of the algorithm and improves the detection precision.
As a preferred scheme, the loss function loss of the multi-scale algorithm model comprises a cross-entropy loss function for the classification task and a loss function for the detection task, characterized by:
loss = CE(x, gth_c) + DL(x, gth_d)    (1)
where CE is the cross-entropy loss function of the classification task, gth_c is the ground-truth class label, DL is the loss function of the detection task, and gth_d is the ground-truth label of the detection box.
As a preferred solution, the loss function of the detection task is characterized by:
DL(x, gth_d) = (1/N)(L_c(x, c) + a·L_loc(l, gth_d))    (2)
where L_loc(l, gth_d) is a smooth L1-norm loss function between the predicted and ground-truth boxes, and a is a coefficient.
Preferably, L_loc(l, gth_d) is characterized by:
L_loc(l, gth_d) = ∑ smooth_L1(l(i) − gth_d(i))    (3).
Preferably, L_c(x, c) is the confidence loss function between the detection outputs and the classes, i.e. a cross-entropy loss function, characterized by:
L_c(x, c) = −∑ x_p(i) log(c_p(i)) − ∑ x_n(i) log(c_n(i))    (4).
Preferably, the model is quantized after training is finished: to further reduce the amount of computation, save calculation time and compress the size of the algorithm model, quantization is performed after training to improve efficiency in practical applications.
Since a feature pyramid network (FPN) can extract both high-level and low-level semantic information, learns at multiple scales and resolutions, and has great advantages for small-target detection, a multi-scale algorithm model is proposed that uses ResNet-50 as the backbone combined with SSD. The model is characterized by a two-stage loss function: one stage is a softmax loss after down-sampling, the other is the SSD loss of the whole network. The two stages give the network the ability to go from coarse detection to fine detection, and this two-stage detection strategy improves algorithm efficiency. The framework is a multi-task learning framework: one task is classification, performed through softmax after down-sampling; the other is the SSD detection task. In the tooth detection service the classification task plays an auxiliary role, helping to accelerate convergence and training. The model has multi-scale learning capability: the network first down-samples to extract high-level semantic information, then fuses and concatenates the high-level and low-level features to synthesize new features for target detection, a strategy that effectively improves small-target detection.
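The fuse-then-detect feature strategy can be illustrated with a toy nearest-neighbour upsample and elementwise merge. This is a sketch of the idea only, under the assumption of 2-D feature maps as nested lists; the actual model combines ResNet-50 feature maps inside the FPN/SSD heads:

```python
def upsample2x(feat):
    """Nearest-neighbour 2x upsampling of a 2-D feature map (list of lists),
    standing in for the upsampling of a coarse high-level feature map."""
    out = []
    for row in feat:
        wide = [v for v in row for _ in (0, 1)]  # duplicate each column
        out.append(wide)
        out.append(list(wide))                   # duplicate each row
    return out

def fuse(high, low):
    """Upsample the coarse high-level map to the low-level map's size and
    merge elementwise, yielding the 'new feature' used for detection."""
    up = upsample2x(high)
    return [[a + b for a, b in zip(r_up, r_low)] for r_up, r_low in zip(up, low)]
```

The elementwise sum is one common merge choice; concatenation along the channel axis, as the text mentions, is another.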
The above embodiments merely illustrate the preferred embodiments of the present invention and do not limit its scope; various changes, modifications, alterations and substitutions made by those skilled in the art to the technical solution of the present invention without departing from its spirit fall within the protection scope defined by the claims of the present invention.
Claims (10)
1. An oral examination recognition method based on 3D scanning, characterized in that the method comprises the following steps:
S1, acquiring a dental image through 3D scanning;
S2, preprocessing the dental image, wherein the preprocessing comprises labeling the data and sample augmentation;
S3, performing deep learning training on the preprocessed dental images and establishing an algorithm model, wherein the algorithm model is a multi-scale model built with a residual network as the backbone combined with a target detection algorithm;
S4, performing tooth recognition on the oral scan image to be recognized through the multi-scale algorithm model.
2. The oral examination recognition method based on 3D scanning as claimed in claim 1, wherein: the annotation data includes a dental category and a risk level.
3. The oral examination recognition method based on 3D scanning as claimed in claim 2, wherein: the sample augmentation comprises copying the diseased-tooth region of the image, pasting the copy at a random location elsewhere in the sample, and labeling the copied region.
4. The oral examination recognition method based on 3D scanning as claimed in claim 1, wherein: in step S2 the data are divided by five-fold cross-validation: 80% of the data are randomly extracted for model training and the remaining 20% are used for validation; the data are divided into 5 parts for the five-fold cross-validation, and each training set is trained for 40 epochs in turn.
5. The oral examination recognition method based on 3D scanning as claimed in claim 4, wherein: the learning rate of model training is adjusted at equal intervals, being multiplied by 0.8 every 10 epochs.
6. The oral examination recognition method based on 3D scanning as claimed in claim 1, wherein: the loss function loss of the multi-scale algorithm model comprises a cross-entropy loss function for the classification task and a loss function for the detection task, characterized in that:
loss = CE(x, gth_c) + DL(x, gth_d)    (1)
where CE is the cross-entropy loss function of the classification task, gth_c is the ground-truth class label, DL is the loss function of the detection task, and gth_d is the ground-truth label of the detection box.
7. The oral examination recognition method based on 3D scanning as claimed in claim 6, wherein: the loss function of the detection task is characterized in that:
DL(x, gth_d) = (1/N)(L_c(x, c) + a·L_loc(l, gth_d))    (2)
where L_loc(l, gth_d) is a smooth L1-norm loss function between the predicted and ground-truth boxes, and a is a coefficient.
8. The oral examination recognition method based on 3D scanning as claimed in claim 7, wherein: L_loc(l, gth_d) is characterized in that:
L_loc(l, gth_d) = ∑ smooth_L1(l(i) − gth_d(i))    (3).
9. The oral examination recognition method based on 3D scanning as claimed in claim 8, wherein: L_c(x, c) is the confidence loss function between the detection outputs and the classes, i.e. a cross-entropy loss function, characterized in that:
L_c(x, c) = −∑ x_p(i) log(c_p(i)) − ∑ x_n(i) log(c_n(i))    (4).
10. The oral examination recognition method based on 3D scanning as claimed in claim 1, wherein: the model is quantized after training is finished.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011436558.6A CN112418170B (en) | 2020-12-11 | 2020-12-11 | 3D scanning-based oral examination and identification method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011436558.6A CN112418170B (en) | 2020-12-11 | 2020-12-11 | 3D scanning-based oral examination and identification method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112418170A true CN112418170A (en) | 2021-02-26 |
CN112418170B CN112418170B (en) | 2024-03-01 |
Family
ID=74776267
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011436558.6A Active CN112418170B (en) | 2020-12-11 | 2020-12-11 | 3D scanning-based oral examination and identification method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112418170B (en) |
Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109063710A (en) * | 2018-08-09 | 2018-12-21 | 成都信息工程大学 | Based on the pyramidal 3D CNN nasopharyngeal carcinoma dividing method of Analysis On Multi-scale Features |
CN109117876A (en) * | 2018-07-26 | 2019-01-01 | 成都快眼科技有限公司 | A kind of dense small target deteection model building method, model and detection method |
CN109584246A (en) * | 2018-11-16 | 2019-04-05 | 成都信息工程大学 | Based on the pyramidal DCM cardiac muscle diagnosis and treatment irradiation image dividing method of Analysis On Multi-scale Features |
CN110009052A (en) * | 2019-04-11 | 2019-07-12 | 腾讯科技(深圳)有限公司 | A kind of method of image recognition, the method and device of image recognition model training |
CN110111313A (en) * | 2019-04-22 | 2019-08-09 | 腾讯科技(深圳)有限公司 | Medical image detection method and relevant device based on deep learning |
CN110120036A (en) * | 2019-04-17 | 2019-08-13 | 杭州数据点金科技有限公司 | A kind of multiple dimensioned tire X-ray defect detection method |
CN110223352A (en) * | 2019-06-14 | 2019-09-10 | 浙江明峰智能医疗科技有限公司 | A kind of medical image scanning automatic positioning method based on deep learning |
CN110660049A (en) * | 2019-09-16 | 2020-01-07 | 青岛科技大学 | Tire defect detection method based on deep learning |
CN110674866A (en) * | 2019-09-23 | 2020-01-10 | 兰州理工大学 | Method for detecting X-ray breast lesion images by using transfer learning characteristic pyramid network |
CN111027547A (en) * | 2019-12-06 | 2020-04-17 | 南京大学 | Automatic detection method for multi-scale polymorphic target in two-dimensional image |
CN111311506A (en) * | 2020-01-21 | 2020-06-19 | 辽宁师范大学 | Low-dose CT tooth image denoising method based on double residual error networks |
CN111784671A (en) * | 2020-06-30 | 2020-10-16 | 天津大学 | Pathological image focus region detection method based on multi-scale deep learning |
KR20200120035A (en) * | 2019-04-11 | 2020-10-21 | 주식회사 디오 | Method and apparatus for detecting tooth object in oral scan image |
US20200364860A1 (en) * | 2019-05-16 | 2020-11-19 | Retrace Labs | Artificial Intelligence Architecture For Identification Of Periodontal Features |
CN111973158A (en) * | 2020-08-31 | 2020-11-24 | 河北工业大学 | Intelligent oral cavity detection system and oral cavity image detection method |
- 2020-12-11: CN application CN202011436558.6A granted as patent CN112418170B (status: Active)
Patent Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109117876A (en) * | 2018-07-26 | 2019-01-01 | 成都快眼科技有限公司 | A kind of dense small target deteection model building method, model and detection method |
CN109063710A (en) * | 2018-08-09 | 2018-12-21 | 成都信息工程大学 | Based on the pyramidal 3D CNN nasopharyngeal carcinoma dividing method of Analysis On Multi-scale Features |
CN109584246A (en) * | 2018-11-16 | 2019-04-05 | 成都信息工程大学 | Based on the pyramidal DCM cardiac muscle diagnosis and treatment irradiation image dividing method of Analysis On Multi-scale Features |
KR20200120035A (en) * | 2019-04-11 | 2020-10-21 | 주식회사 디오 | Method and apparatus for detecting tooth object in oral scan image |
CN110009052A (en) * | 2019-04-11 | 2019-07-12 | 腾讯科技(深圳)有限公司 | A kind of method of image recognition, the method and device of image recognition model training |
CN110120036A (en) * | 2019-04-17 | 2019-08-13 | 杭州数据点金科技有限公司 | A kind of multiple dimensioned tire X-ray defect detection method |
CN110111313A (en) * | 2019-04-22 | 2019-08-09 | 腾讯科技(深圳)有限公司 | Medical image detection method and relevant device based on deep learning |
US20200364860A1 (en) * | 2019-05-16 | 2020-11-19 | Retrace Labs | Artificial Intelligence Architecture For Identification Of Periodontal Features |
CN110223352A (en) * | 2019-06-14 | 2019-09-10 | 浙江明峰智能医疗科技有限公司 | A kind of medical image scanning automatic positioning method based on deep learning |
CN110660049A (en) * | 2019-09-16 | 2020-01-07 | 青岛科技大学 | Tire defect detection method based on deep learning |
CN110674866A (en) * | 2019-09-23 | 2020-01-10 | 兰州理工大学 | Method for detecting X-ray breast lesion images by using transfer learning characteristic pyramid network |
CN111027547A (en) * | 2019-12-06 | 2020-04-17 | 南京大学 | Automatic detection method for multi-scale polymorphic target in two-dimensional image |
CN111311506A (en) * | 2020-01-21 | 2020-06-19 | 辽宁师范大学 | Low-dose CT tooth image denoising method based on double residual error networks |
CN111784671A (en) * | 2020-06-30 | 2020-10-16 | 天津大学 | Pathological image focus region detection method based on multi-scale deep learning |
CN111973158A (en) * | 2020-08-31 | 2020-11-24 | 河北工业大学 | Intelligent oral cavity detection system and oral cavity image detection method |
Non-Patent Citations (2)
Title |
---|
文怀兴; 王俊杰; 韩?: "Research on defect detection and classification of red jujube based on an improved residual network" (基于改进残差网络的红枣缺陷检测分类方法研究), 食品与机械 (Food & Machinery), no. 01 *
黄盛; 李菲菲; 陈虬: "Computed tomography image classification algorithm based on an improved deep residual network" (基于改进深度残差网络的计算断层扫描图像分类算法), 光学学报 (Acta Optica Sinica), no. 03 *
Also Published As
Publication number | Publication date |
---|---|
CN112418170B (en) | 2024-03-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111985536B (en) | Based on weak supervised learning gastroscopic pathology image Classification method | |
CN112200843B (en) | Super-voxel-based CBCT and laser scanning point cloud data tooth registration method | |
CN107247971B (en) | Intelligent analysis method and system for ultrasonic thyroid nodule risk index | |
CN102096917B (en) | Automatic eliminating method for redundant image data of capsule endoscope | |
US10991091B2 (en) | System and method for an automated parsing pipeline for anatomical localization and condition classification | |
CN112365464B (en) | GAN-based medical image lesion area weak supervision positioning method | |
CN111340130A (en) | Urinary calculus detection and classification method based on deep learning and imaging omics | |
Kong et al. | Automated maxillofacial segmentation in panoramic dental x-ray images using an efficient encoder-decoder network | |
Tian et al. | Efficient computer-aided design of dental inlay restoration: a deep adversarial framework | |
CN115205469A (en) | Tooth and alveolar bone reconstruction method, equipment and medium based on CBCT | |
CN111369501B (en) | Deep learning method for identifying oral squamous cell carcinoma based on visual features | |
CN111784639A (en) | Oral panoramic film dental caries depth identification method based on deep learning | |
CN114638852A (en) | Jaw bone and soft tissue identification and reconstruction method, device and medium based on CBCT image | |
CN111798445A (en) | Tooth image caries identification method and system based on convolutional neural network | |
KR20200058316A (en) | Automatic tracking method of cephalometric point of dental head using dental artificial intelligence technology and service system | |
CN112418170B (en) | 3D scanning-based oral examination and identification method | |
CN114862771B (en) | Wisdom tooth identification and classification method based on deep learning network | |
CN116205925A (en) | Tooth occlusion wing tooth caries segmentation method based on improved U-Net network | |
CN115578373A (en) | Bone age assessment method, device, equipment and medium based on global and local feature cooperation | |
Brahmi et al. | Exploring the role of Convolutional Neural Networks (CNN) in dental radiography segmentation: A comprehensive Systematic Literature Review | |
CN115439409A (en) | Tooth type identification method and device | |
CN112150422A (en) | Modeling method of oral health self-detection model based on multitask learning | |
CN112420171A (en) | Maxillary sinus floor bone classification method and system based on artificial intelligence | |
Broll et al. | Generative deep learning approaches for the design of dental restorations: A narrative review | |
CN117152507B (en) | Tooth health state detection method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||