CN114066804A - Curved surface fault layer tooth position identification method based on deep learning - Google Patents

Curved surface fault layer tooth position identification method based on deep learning Download PDF

Info

Publication number
CN114066804A
Authority
CN
China
Prior art keywords
convolutional neural
neural network
network model
curved surface
tooth position
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111123166.9A
Other languages
Chinese (zh)
Inventor
赵小艇
梁生
朱宇昂
李琳
曹寅
许桐楷
刘峰
彭俐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jiaotong University
Original Assignee
Beijing Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jiaotong University filed Critical Beijing Jiaotong University
Priority to CN202111123166.9A priority Critical patent/CN114066804A/en
Publication of CN114066804A publication Critical patent/CN114066804A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/047Probabilistic or stochastic networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10116X-ray image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30036Dental; Teeth

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Evolutionary Biology (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a deep-learning-based method for identifying tooth positions in curved-surface tomograms (dental panoramic radiographs). The method comprises the following steps: constructing a convolutional neural network model for identifying tooth positions in curved-surface tomograms, the model comprising a feature pyramid network (FPN) and a region proposal network (RPN); training the convolutional neural network model with a training set and a validation set to obtain a trained convolutional neural network model; and inputting a curved-surface tomogram to be identified into the trained model, which outputs the tooth-position information contained in the image. The method simplifies artificial-intelligence-assisted tooth-position identification on oral curved-surface tomograms and avoids the additional error source introduced by a separate tooth-segmentation step; it enriches the techniques available for AI-assisted tooth-position identification on such images; and it provides a technique that realizes oral tomogram tooth-position identification through combined object detection and semantic segmentation, i.e. instance segmentation.

Description

Curved surface fault layer tooth position identification method based on deep learning
Technical Field
The invention relates to the technical field of tooth-position identification, and in particular to a deep-learning-based method for identifying tooth positions in curved-surface tomograms (dental panoramic radiographs).
Background
Artificial-intelligence-assisted diagnosis from X-ray images is a research hotspot in clinical medicine. It has significant academic, economic and social value for overcoming the subjectivity bottleneck of physician diagnosis, developing remote intelligent healthcare, and raising the standard of medical care in remote and underserved areas.
AI-assisted diagnosis of oral X-ray images is still at an early stage, and the identification of tooth positions is both its foundation and its most important task. At present there is no complete scheme for AI-assisted identification of tooth positions performed directly on oral tomograms. In the prior art, methods for identifying tooth positions in oral tomographic image data first segment each individual tooth and only then identify the tooth position by deep learning.
The above prior-art approach to tooth-position identification in oral tomographic image data has the following disadvantages: whether identifying tooth positions or dental conditions such as caries, periodontitis and wisdom teeth, the sample preprocessing must first segment the teeth before the corresponding task can be processed, so the tooth positions in a tomogram cannot be identified directly; moreover, the added tooth-segmentation step introduces an extra source of error, which may increase the final error and adversely affect the tooth-position identification.
Disclosure of Invention
The embodiment of the invention provides a deep-learning-based method for identifying tooth positions in curved-surface tomograms, so as to identify oral tomogram tooth positions effectively.
To achieve this purpose, the invention adopts the following technical solution.
A deep-learning-based method for identifying tooth positions in curved-surface tomograms comprises the following steps:
constructing a convolutional neural network model for identifying tooth positions in curved-surface tomograms;
training the convolutional neural network model with a training set and a validation set to obtain a trained convolutional neural network model;
and inputting a curved-surface tomogram to be identified into the trained convolutional neural network model, the convolutional neural network model outputting the tooth-position information in the curved-surface tomogram to be identified.
Preferably, the method further comprises:
producing a sample set of curved-surface tomogram images annotated with tooth positions using the labelme annotation tool, converting the sample-set images to grayscale, setting 90% of the images in the sample set as a training set, and setting 10% of the images as a validation set.
Preferably, the convolutional neural network model comprises a 101-layer deep residual network (ResNet-101), a feature pyramid network (FPN) and a region proposal network (RPN). ResNet-101 serves as the backbone feature-extraction network and is built from Conv Block and Identity Block modules: the input and output dimensions of a Conv Block differ, so Conv Blocks cannot be stacked consecutively and are used to change the dimensionality of the network, whereas the input and output dimensions of an Identity Block are the same, so Identity Blocks can be stacked in series.
Preferably, the convolutional neural network model adopts a network architecture such as Mask R-CNN, Fast R-CNN or PANet.
Preferably, training the convolutional neural network model with the training set and the validation set to obtain the trained convolutional neural network model comprises:
inputting the training set into the convolutional neural network model, obtaining effective feature layers and region proposals through the ResNet-101 network and the feature pyramid network (FPN), and extracting features from the effective feature layers within the region proposals to obtain local feature layers;
screening, by means of a fully connected network operating on the local feature layers, whether each region proposal contains an object, adjusting the retained region proposals to obtain prediction boxes, extracting features from the effective feature layers within the prediction boxes, and feeding the extracted features into a convolutional network to obtain mask annotation information, i.e. the tooth-position information;
and verifying the tooth-position information produced by the convolutional neural network model against the validation set, adjusting the parameters of the convolutional neural network model according to the verification result, iterating this process to determine the optimal parameters of the convolutional neural network model, and thereby obtaining the trained convolutional neural network model.
According to the technical solution provided by the embodiment of the invention, the embodiment simplifies AI-assisted tooth-position identification on oral curved-surface tomograms and avoids the potential error introduced by an added segmentation step; it enriches the techniques for AI-assisted identification of tooth positions on such images; and it provides a technique that realizes oral tomogram tooth-position identification through combined object detection and semantic segmentation, i.e. instance segmentation.
Additional aspects and advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings required for describing the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present invention; other drawings can be derived from them by those skilled in the art without creative effort.
Fig. 1 is a schematic diagram of the implementation of the deep-learning-based curved-surface tomogram tooth-position identification method according to an embodiment of the present invention;
Fig. 2 is a processing flow chart of the deep-learning-based curved-surface tomogram tooth-position identification method according to an embodiment of the present invention;
fig. 3 is a structural diagram of a Conv Block module according to an embodiment of the present invention;
fig. 4 is a structural diagram of an Identity Block module according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or coupled. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
To facilitate understanding of the embodiments of the present invention, several specific embodiments are further explained below as examples in conjunction with the drawings; these embodiments are not to be construed as limiting the present invention.
The embodiment of the invention is mainly concerned with identifying tooth positions. Clinical oral curved-surface tomogram image data are collected, and the tooth positions in the tomograms are manually outlined and labelled with labelme, so as to establish the sample set required for machine learning.
In the embodiment of the invention, tooth-position identification is performed on oral curved-surface tomogram image data: the 28 normal teeth (wisdom teeth excluded) are extracted by manual annotation, the samples are used for deep learning to obtain a training model of relatively high accuracy, and the model can then label tooth positions directly on newly input oral curved-surface tomograms.
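The patent does not state how the 28 tooth positions are enumerated; the sketch below assumes the common FDI two-digit notation (quadrants 1-4, positions 1-7, third molars excluded) purely to illustrate the label set.

    # A minimal sketch of the 28-class tooth-position label set used for annotation.
    # FDI two-digit notation is assumed here for illustration only; the patent does
    # not name a numbering scheme.
    FDI_QUADRANTS = (1, 2, 3, 4)    # upper right, upper left, lower left, lower right
    FDI_POSITIONS = range(1, 8)     # central incisor (1) ... second molar (7)

    TOOTH_CLASSES = [f"{q}{p}" for q in FDI_QUADRANTS for p in FDI_POSITIONS]
    assert len(TOOTH_CLASSES) == 28  # 28 tooth positions, wisdom teeth excluded

    # Map each tooth-position label to an integer class id (0 is reserved for background).
    CLASS_TO_ID = {name: i + 1 for i, name in enumerate(TOOTH_CLASSES)}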
The embodiment of the invention provides a deep-learning method for identifying oral tomogram tooth positions directly: features are extracted from the tomographic image data with Mask R-CNN, region proposals are screened to obtain samples, and the samples are fed into the neural network for training and optimization, so that teeth are detected, tooth-position information is annotated, and tooth positions are recognized.
AI-assisted identification of oral tomogram tooth positions: the method fills the gap in AI-assisted identification of tooth positions on oral tomograms. It uses a Mask R-CNN network architecture, including an FPN (feature pyramid network) and an RPN (region proposal network), which improves the predictive capability of the model so that it can output accurate predictions.
The implementation principle of the deep-learning-based curved-surface tomogram tooth-position identification method provided by the embodiment of the invention is shown in Fig. 1, and its specific processing flow is shown in Fig. 2; the method comprises the following processing steps:
and step S10, making a sample set image of the tooth positions of the curved surface fault layer sheet by using a labelme marking tool, and simultaneously carrying out gray processing on the sample set image in order to improve the training precision of the training model, wherein 90% of images are set as a training set, and 10% of images are set as a verification set.
Step S20: construct a convolutional neural network model for identifying tooth positions in curved-surface tomograms, the model comprising ResNet-101 (a 101-layer deep residual network), an FPN (feature pyramid network) and an RPN (region proposal network). The convolutional neural network model may use a network architecture such as Mask R-CNN, Fast R-CNN or PANet. The invention uses ResNet-101 as the backbone feature-extraction network, which has two basic modules, named Conv Block and Identity Block. The structure of the Conv Block is shown in Fig. 3: its input and output dimensions differ, so Conv Blocks cannot be stacked consecutively, and their role is to change the dimensionality of the network. The structure of the Identity Block is shown in Fig. 4: its input and output dimensions are the same, so Identity Blocks can be stacked in series to deepen the network.
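The patent does not disclose a concrete implementation of this model. As one possible realization, and purely as an illustrative assumption, the backbone, FPN, RPN and mask branch described in this step could be assembled with torchvision as sketched below (28 tooth-position classes plus background); the Conv Block, Identity Block and FPN modules described next are the building blocks such a backbone is made of.

    from torchvision.models.detection import MaskRCNN
    from torchvision.models.detection.backbone_utils import resnet_fpn_backbone

    # ResNet-101 backbone topped with an FPN, mirroring the backbone described in step S20.
    # (The keyword arguments of resnet_fpn_backbone differ slightly between torchvision versions.)
    backbone = resnet_fpn_backbone(backbone_name="resnet101", weights=None)

    # MaskRCNN wires the RPN (region proposal network), the box classifier head and the
    # mask branch onto the FPN feature layers; 28 tooth-position classes plus background.
    model = MaskRCNN(backbone, num_classes=29)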
The main branch of the Conv Block module consists of three convolutional layers, placed in parallel with a single convolutional layer on the shortcut branch; the Identity Block module connects three convolutional layers in series on its main branch.
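For illustration, the two modules could be sketched in PyTorch as below, assuming the standard ResNet bottleneck layout (1x1, 3x3, 1x1 convolutions with batch normalization); the channel widths and strides are illustrative, not taken from the patent.

    import torch.nn as nn

    class ConvBlock(nn.Module):
        """Bottleneck with a convolutional shortcut: input and output dimensions differ,
        so this block changes the dimensionality (and optionally the stride) of the network."""
        def __init__(self, in_ch, mid_ch, out_ch, stride=2):
            super().__init__()
            self.main = nn.Sequential(
                nn.Conv2d(in_ch, mid_ch, 1, stride=stride, bias=False), nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
                nn.Conv2d(mid_ch, mid_ch, 3, padding=1, bias=False), nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
                nn.Conv2d(mid_ch, out_ch, 1, bias=False), nn.BatchNorm2d(out_ch),
            )
            # Parallel shortcut convolution that matches the new output dimension.
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False), nn.BatchNorm2d(out_ch))
            self.relu = nn.ReLU(inplace=True)

        def forward(self, x):
            return self.relu(self.main(x) + self.shortcut(x))

    class IdentityBlock(nn.Module):
        """Bottleneck with an identity shortcut: input and output dimensions are equal,
        so these blocks can be stacked in series to deepen the network."""
        def __init__(self, channels, mid_ch):
            super().__init__()
            self.main = nn.Sequential(
                nn.Conv2d(channels, mid_ch, 1, bias=False), nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
                nn.Conv2d(mid_ch, mid_ch, 3, padding=1, bias=False), nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
                nn.Conv2d(mid_ch, channels, 1, bias=False), nn.BatchNorm2d(channels),
            )
            self.relu = nn.ReLU(inplace=True)

        def forward(self, x):
            return self.relu(self.main(x) + x)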
The feature pyramid network (FPN) is built from the backbone feature maps C2, C3, C4 and C5, each of which halves the height and width of the previous one. The layers P2, P3, P4, P5 and P6 produced by the FPN serve as the effective feature layers of the RPN; the RPN operates on them and decodes the prior (anchor) boxes to obtain the region proposals. The layers P2, P3, P4 and P5 also serve as the effective feature layers of the Classifier and Mask networks: the Classifier prediction-box network operates on them and decodes the region proposals to obtain the final prediction boxes, and the Mask semantic-segmentation network operates on them to obtain the semantic-segmentation result inside each prediction box.
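The top-down FPN construction described above could be sketched as follows; the 256-channel width and the strided pooling used to derive P6 follow the common Mask R-CNN configuration and are assumptions where the text is silent.

    import torch.nn as nn
    import torch.nn.functional as F

    class SimpleFPN(nn.Module):
        """Builds P2-P6 from backbone feature maps C2-C5 (each half the size of the previous)."""
        def __init__(self, c_channels=(256, 512, 1024, 2048), out_ch=256):
            super().__init__()
            self.lateral = nn.ModuleList(nn.Conv2d(c, out_ch, 1) for c in c_channels)
            self.smooth = nn.ModuleList(nn.Conv2d(out_ch, out_ch, 3, padding=1) for _ in c_channels)

        def forward(self, c2, c3, c4, c5):
            # Top-down pathway: upsample the coarser map and add the lateral connection.
            p5 = self.lateral[3](c5)
            p4 = self.lateral[2](c4) + F.interpolate(p5, scale_factor=2, mode="nearest")
            p3 = self.lateral[1](c3) + F.interpolate(p4, scale_factor=2, mode="nearest")
            p2 = self.lateral[0](c2) + F.interpolate(p3, scale_factor=2, mode="nearest")
            p2, p3, p4, p5 = (s(p) for s, p in zip(self.smooth, (p2, p3, p4, p5)))
            p6 = F.max_pool2d(p5, kernel_size=1, stride=2)  # extra level used only by the RPN
            return p2, p3, p4, p5, p6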
Step S30: input the training set into the convolutional neural network model and train the model. The training result of the convolutional neural network model is verified with the validation set to obtain the trained convolutional neural network model.
The training set is input into the convolutional neural network model; effective feature layers and region proposals are obtained through the ResNet-101 network and the FPN (feature pyramid network), and features are extracted from the effective feature layers within each region proposal to obtain local feature layers.
Using the local feature layers, a fully connected network screens whether each region proposal contains an object, and the retained proposals are adjusted to obtain prediction boxes. Features are then extracted from the effective feature layers within the prediction boxes and fed into a convolutional network to obtain mask annotation information, i.e. the tooth-position information. The classifier used by the fully connected network may be a Softmax classifier or an SVM.
The tooth-position information produced by the convolutional neural network model is verified against the validation set, and the model parameters are adjusted according to the verification result. This process is repeated iteratively to determine the optimal parameters of the convolutional neural network model and obtain the trained model.
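A condensed sketch of this training-and-validation cycle is given below. It assumes a torchvision-style Mask R-CNN (which returns a dictionary of RPN, box and mask losses when called with images and targets) and data loaders yielding (image, target) pairs in that format; the optimizer settings and the use of the total validation loss for parameter selection are illustrative assumptions rather than details taken from the patent.

    import torch

    def train_and_validate(model, train_loader, val_loader, epochs=30, lr=1e-3, device="cuda"):
        """Illustrative training loop for a torchvision-style Mask R-CNN."""
        model.to(device)
        optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9, weight_decay=1e-4)
        best_val, best_state = float("inf"), None

        for epoch in range(epochs):
            model.train()
            for images, targets in train_loader:
                images = [img.to(device) for img in images]
                targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
                loss = sum(model(images, targets).values())  # sum of RPN, box and mask losses
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()

            # Validation: keep the parameters that give the lowest total validation loss.
            with torch.no_grad():
                val_loss = 0.0
                for images, targets in val_loader:
                    images = [img.to(device) for img in images]
                    targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
                    val_loss += sum(model(images, targets).values()).item()
            if val_loss < best_val:
                best_val = val_loss
                best_state = {k: v.clone() for k, v in model.state_dict().items()}

        model.load_state_dict(best_state)
        return model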
Step S40: input the curved-surface tomogram to be identified into the trained convolutional neural network model and load the weights obtained from training and optimization; the convolutional neural network model then automatically outputs and labels the tooth-position information in the input tomogram.
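Step S40 could then be sketched as follows, again assuming a torchvision-style Mask R-CNN; the checkpoint path, the score threshold and the id-to-label mapping (for example the inverse of CLASS_TO_ID from the label-set sketch above) are illustrative.

    import torch
    import torchvision.transforms.functional as TF
    from PIL import Image

    def identify_tooth_positions(model, image_path, weights_path, id_to_class,
                                 score_thresh=0.7, device="cuda"):
        """Load trained weights and output the tooth positions detected in one tomogram (sketch).
        id_to_class maps integer class ids to tooth-position labels (assumed mapping)."""
        model.load_state_dict(torch.load(weights_path, map_location=device))
        model.to(device).eval()

        # Grayscale content replicated to 3 channels to match the standard backbone input.
        image = TF.to_tensor(Image.open(image_path).convert("L").convert("RGB"))
        with torch.no_grad():
            prediction = model([image.to(device)])[0]  # dict with boxes, labels, scores, masks

        results = []
        for box, label, score in zip(prediction["boxes"], prediction["labels"], prediction["scores"]):
            if score >= score_thresh:
                results.append((id_to_class.get(int(label), "unknown"), box.tolist(), float(score)))
        return results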
In conclusion, the embodiment of the invention simplifies AI-assisted tooth-position identification on oral curved-surface tomograms and avoids the potential error introduced by an added segmentation step; it enriches the techniques for AI-assisted identification of tooth positions on such images; and it provides a technique that realizes oral tomogram tooth-position identification through combined object detection and semantic segmentation, i.e. instance segmentation.
In the embodiment of the invention, labelme annotation simplifies the processing of the images; the ResNet-101 network together with the FPN (feature pyramid network) improves the extraction of image features, making the extracted features more effective; and the added Mask branch enables automatic labelling of the oral tooth positions, making tooth-position identification more intelligent.
Those of ordinary skill in the art will understand that the figures are merely schematic diagrams of one embodiment, and that the blocks or flows in the figures are not necessarily required for practising the present invention.
From the above description of the embodiments, it is clear to those skilled in the art that the present invention can be implemented by software plus a necessary general-purpose hardware platform. Based on this understanding, the technical solutions of the present invention may be embodied in the form of a software product, which may be stored in a storage medium such as ROM/RAM, magnetic disk or optical disk, and which includes instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute the method according to the embodiments or parts thereof.
The embodiments in the present specification are described in a progressive manner; the same and similar parts among the embodiments may be referred to from one another, and each embodiment focuses on its differences from the others. In particular, the apparatus and system embodiments are described relatively briefly because they are substantially similar to the method embodiments; for relevant details, reference may be made to the corresponding description of the method embodiments. The above-described apparatus and system embodiments are merely illustrative: the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement this without inventive effort.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (5)

1. A deep-learning-based method for identifying tooth positions in curved-surface tomograms, characterized by comprising the following steps:
constructing a convolutional neural network model for identifying tooth positions in curved-surface tomograms;
training the convolutional neural network model with a training set and a validation set to obtain a trained convolutional neural network model;
and inputting a curved-surface tomogram to be identified into the trained convolutional neural network model, the convolutional neural network model outputting the tooth-position information in the curved-surface tomogram to be identified.
2. The method of claim 1, further comprising:
producing a sample set of curved-surface tomogram images annotated with tooth positions using the labelme annotation tool, converting the sample-set images to grayscale, setting 90% of the images in the sample set as a training set, and setting 10% of the images as a validation set.
3. The method according to claim 1, wherein the convolutional neural network model comprises a 101-layer deep residual network ResNet-101, a feature pyramid network FPN and a region proposal network RPN, wherein ResNet-101 serves as the backbone feature-extraction network and comprises a Conv Block module and an Identity Block module, the input and output dimensions of the Conv Block module being different so that Conv Blocks cannot be stacked consecutively and are used to change the dimensionality of the network, and the input and output dimensions of the Identity Block module being the same so that Identity Blocks can be stacked in series.
4. The method of claim 3, wherein the convolutional neural network model adopts a network architecture such as Mask R-CNN, Fast R-CNN or PANet.
5. The method according to claim 3 or 4, wherein training the convolutional neural network model with the training set and the validation set to obtain the trained convolutional neural network model comprises:
inputting the training set into the convolutional neural network model, obtaining effective feature layers and region proposals through the ResNet-101 network and the feature pyramid network FPN, and extracting features from the effective feature layers within the region proposals to obtain local feature layers;
screening, by means of a fully connected network operating on the local feature layers, whether each region proposal contains an object, adjusting the retained region proposals to obtain prediction boxes, extracting features from the effective feature layers within the prediction boxes, and feeding the extracted features into a convolutional network to obtain mask annotation information, i.e. the tooth-position information;
and verifying the tooth-position information produced by the convolutional neural network model against the validation set, adjusting the parameters of the convolutional neural network model according to the verification result, iterating this process to determine the optimal parameters of the convolutional neural network model, and obtaining the trained convolutional neural network model.
CN202111123166.9A 2021-09-24 2021-09-24 Curved surface fault layer tooth position identification method based on deep learning Pending CN114066804A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111123166.9A CN114066804A (en) 2021-09-24 2021-09-24 Curved surface fault layer tooth position identification method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111123166.9A CN114066804A (en) 2021-09-24 2021-09-24 Curved surface fault layer tooth position identification method based on deep learning

Publications (1)

Publication Number Publication Date
CN114066804A true CN114066804A (en) 2022-02-18

Family

ID=80233945

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111123166.9A Pending CN114066804A (en) 2021-09-24 2021-09-24 Curved surface fault layer tooth position identification method based on deep learning

Country Status (1)

Country Link
CN (1) CN114066804A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116596861A (en) * 2023-04-28 2023-08-15 中山大学 Dental lesion recognition method, system, equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110974288A (en) * 2019-12-26 2020-04-10 北京大学口腔医学院 Periodontal disease CBCT longitudinal data recording and analyzing method
CN113222994A (en) * 2021-07-08 2021-08-06 北京朗视仪器股份有限公司 Three-dimensional oral cavity model Ann's classification method based on multi-view convolutional neural network
US20210279950A1 (en) * 2020-03-04 2021-09-09 Magic Leap, Inc. Systems and methods for efficient floorplan generation from 3d scans of indoor scenes

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110974288A (en) * 2019-12-26 2020-04-10 北京大学口腔医学院 Periodontal disease CBCT longitudinal data recording and analyzing method
US20210279950A1 (en) * 2020-03-04 2021-09-09 Magic Leap, Inc. Systems and methods for efficient floorplan generation from 3d scans of indoor scenes
CN113222994A (en) * 2021-07-08 2021-08-06 北京朗视仪器股份有限公司 Three-dimensional oral cavity model Ann's classification method based on multi-view convolutional neural network

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ANANTHARAMAN R. et al.: "Utilizing mask R-CNN for detection and segmentation of oral diseases", 2018 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), 31 December 2018, pages 2197-2204, XP033507331, DOI: 10.1109/BIBM.2018.8621112 *
LING Chen et al.: "Remote-sensing image processing technology based on the Mask R-CNN algorithm and its applications" (in Chinese), Computer Science (《计算机科学》), no. 10, 15 October 2020, pages 159-168 *
LIU Changrong: "Research on the construction of panoramic dental pathology data" (in Chinese), China Master's Theses Full-text Database (Medicine and Health Sciences), no. 1, 15 January 2021, page 31 *
NIE Zhengang et al.: "Ship traffic detection with Mask RCNN against a hazy background" (in Chinese), Transactions of Beijing Institute of Technology (《北京理工大学学报》), vol. 40, no. 11, 15 November 2020, pages 1224-1226 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116596861A (en) * 2023-04-28 2023-08-15 中山大学 Dental lesion recognition method, system, equipment and storage medium
CN116596861B (en) * 2023-04-28 2024-02-23 中山大学 Dental lesion recognition method, system, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN109886273B (en) CMR image segmentation and classification system
CN111784671B (en) Pathological image focus region detection method based on multi-scale deep learning
CN110298844B (en) X-ray radiography image blood vessel segmentation and identification method and device
CN110245657B (en) Pathological image similarity detection method and detection device
CN111882560B (en) Lung parenchyma CT image segmentation method based on weighted full convolution neural network
CN108629768B (en) Method for segmenting epithelial tissue in esophageal pathology image
CN110415230B (en) CT slice image semantic segmentation system and method based on deep learning
CN111488921A (en) Panoramic digital pathological image intelligent analysis system and method
CN110110723B (en) Method and device for automatically extracting target area in image
CN111598875A (en) Method, system and device for building thyroid nodule automatic detection model
CN110543912B (en) Method for automatically acquiring cardiac cycle video in fetal key section ultrasonic video
CN111539480A (en) Multi-class medical image identification method and equipment
CN110246579B (en) Pathological diagnosis method and device
CN112862830A (en) Multi-modal image segmentation method, system, terminal and readable storage medium
CN114332577A (en) Colorectal cancer image classification method and system combining deep learning and image omics
CN112579808A (en) Data annotation processing method, device and system
CN114066804A (en) Curved surface fault layer tooth position identification method based on deep learning
CN112950552B (en) Rib segmentation marking method and system based on convolutional neural network
CN112200810B (en) Multi-modal automated ventricle segmentation system and method of use thereof
CN114881105A (en) Sleep staging method and system based on transformer model and contrast learning
CN116228731A (en) Multi-contrast learning coronary artery high-risk plaque detection method, system and terminal
CN113409293A (en) Pathology image automatic segmentation system based on deep learning
CN114240846A (en) System and method for reducing false positive rate of medical image focus segmentation result
KR20180006120A (en) Optimized Image Segmentation Methods and System with DL and PDE
CN113592766B (en) Coronary angiography image segmentation method based on depth sequence information fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination