CN113436172A - Superpoint-based medical image processing method - Google Patents


Info

Publication number
CN113436172A
CN113436172A
Authority
CN
China
Prior art keywords
network
image
feature
points
steps
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110725512.4A
Other languages
Chinese (zh)
Inventor
闫哲
张陶
李敬远
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin University of Science and Technology
Original Assignee
Harbin University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin University of Science and Technology
Priority to CN202110725512.4A
Publication of CN113436172A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/0002: Inspection of images, e.g. flaw detection
    • G06T7/0012: Biomedical image inspection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/23: Clustering techniques
    • G06F18/232: Non-hierarchical techniques
    • G06F18/2321: Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213: Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24147: Distances to closest patterns, e.g. nearest neighbour classification
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/13: Edge detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10068: Endoscopic image
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20081: Training; Learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20084: Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a SuperPoint-based medical image processing method, belonging to the technical field of image processing. The method mainly comprises the following steps: S1: acquiring an image and initializing the network, namely obtaining the inner-cavity image captured by the endoscope and initializing the deep learning network; S2: inputting the acquired image into the network for feature extraction and simultaneously obtaining descriptors of the feature points, covering the encoding network, the decoding network, descriptor detection and loss function construction; S3: carrying out feature matching using the KNN nearest-neighbour matching method; S4: performing hierarchical clustering on the feature set using the K-Means clustering method. The method effectively overcomes the instability of conventional feature extraction and its sensitivity to illumination, and improves the accuracy of feature matching compared with traditional methods.

Description

Superpoint-based medical image processing method
Technical Field
The invention relates to the field of processing medical images, in particular to a medical image processing method based on Superpoint.
Background
In the past decades, advances in medical technology and innovative technical approaches have played an important role in surgical practice. The overall progress of medical models and the move toward holistic treatment gave rise to the minimally invasive concept, and solid-state cameras and optical-fibre equipment made minimally invasive surgery possible. During minimally invasive surgery, elongated instruments, illumination and other equipment are used, and electronic endoscopic imaging replaces traditional direct vision, so that the incision and the tissue damage are kept to a minimum while the lesion is observed, diagnosed and treated inside the body. In modern minimally invasive technique, an endoscope and specialized instruments are inserted into the body cavity through a small incision in the skin at the surgical site; the pictures captured by the endoscope are displayed on a screen in real time, and the surgeon controls the proximal end of the endoscope to adjust its direction and angle, judging the state of the operation from the transmitted images.
Compared with traditional open surgery, minimally invasive surgery offers smaller wounds, less pain and a lower risk of infection, greatly reducing trauma, the postoperative recovery period and the risk of complications for patients, and it has become the trend of modern surgical treatment. However, as it expands into many medical fields, the traditional minimally invasive technique has met bottlenecks in clinical practice: the acquired image information may be missing or blurred, and soft-tissue bleeding during the operation can make the images unclear. The procedure therefore depends heavily on the surgeon's clinical experience; inexperienced clinicians are prone to judgment errors, which may cause bleeding of the surrounding soft tissue and increase the difficulty of the operation.
As the trend of modern surgical treatment, minimally invasive surgery has gradually gained ground in many departments, and in modern minimally invasive technique it is essential to process the image information captured by the endoscope accurately; such processing is also the basis for advanced follow-up processing of the images. Processing these images currently faces several limitations: (1) illumination in the minimally invasive environment is uneven, so the acquired images are easily blurred; (2) problems such as soft-tissue bleeding during the operation make image feature extraction and matching difficult; (3) cauterizing soft tissue produces smoke, which makes it hard to extract useful feature points from the images.
Disclosure of Invention
In view of the above problems, the invention aims to address the difficulty of extracting features from the images acquired by an endoscope during minimally invasive surgery by providing a feature extraction method based on a deep learning neural network, achieving relatively accurate feature extraction from medical images.
In order to solve the technical problems, the technical scheme adopted by the invention is as follows:
a Superpoint-based medical image processing method comprises the following steps:
s1: acquiring images and initializing a network, mainly comprising the following parts: acquiring an inner cavity image acquired by the endoscope and initializing a deep learning network.
Further, the specific step of S1 is:
s1.1: acquiring a target image which is acquired by an endoscope and needs to be processed in a minimally invasive surgery process;
s1.2: Initialize the deep learning network parameters. Virtual three-dimensional graphics are used as the data set for network training, so that the trained network is able to extract corner points.
And acquiring an image and initializing the network through the steps.
S2: inputting the acquired image into a network for feature extraction, and simultaneously obtaining descriptors of feature points, wherein the descriptors mainly comprise the following parts: coding network, decoding network, descriptor detection and loss function construction.
Further, the specific step of S2 is:
s2.1: inputting an image to be processed into a network, and reducing the dimension of the image through a shared coding network:
Hc = H/8
where H is the original size of the image and Hc is the size after dimension reduction.
S2.2: and (4) self-labeling the interest points. And decoding the characteristic points, wherein the output of the decoder is the probability value that the pixel points are the characteristic points. The adopted method is sub-pixel convolution, and a characteristic point detection head with a specific decoder is utilized to acquire an up-sampled image; of the input tensorDimension is
R^(Hc×Wc×65)
and the dimension of the output is R^(H×W).
S2.3: descriptor detection. Obtaining a semi-dense descriptor by using a network similar to UCN, obtaining a complete descriptor by carrying out bilinear interpolation, and finally obtaining a description of unit length by using L2 standardization, wherein the characteristic dimension is as follows:
D ∈ R^(Hc×Wc×D)
to
D ∈ R^(H×W×D)
s2.4: Constructing the loss function. The network has two branch networks, so the loss function is also divided into two parts; the sum of the two parts is the final loss function of the network:
L(X,X',D,D';Y,Y',S)=Lp(X,Y)+Lp(X',Y')+λLd(D,D',S)
where Lp is the loss function of the feature points, Ld is the descriptor loss function, and λ is a coefficient that balances the negative correspondences against the positive ones.
And obtaining the characteristic points and the descriptors through the steps.
S3: and carrying out feature matching by using a KNN method.
Further, the specific step of S3 is:
s3.1: and selecting a proper k value.
S3.2: and calculating the distances between the points in the known category data set and the current point, sequencing the distances according to an increasing sequence, selecting the k points with the minimum distance from the current point, and arranging the adjacent tuples of the test tuples according to the distance to serve as priority queues. Traversing the tuple, calculating the distance L between the tuple and the test tuple:
L = sqrt( Σ_i (x_i - y_i)^2 )
Compare L with the maximum distance Lmax in the priority queue: if Lmax ≤ L, the tuple is discarded; otherwise L is assigned to Lmax.
S3.3: and after traversing, calculating a plurality of classes of the k tuples in the priority queue, and using the classes as the classes of the test tuples. An error rate is then calculated. Setting different k values to train again, and finally taking the k value with the minimum error rate.
And matching the characteristic points through the steps.
S4: and performing hierarchical clustering on the feature set by adopting a K-Means clustering method.
Further, the specific step of S4 is:
s4.1: and selecting k points as initial centroids.
S4.2: assigning each point to the nearest centroid, forming K clusters, recalculating the centroid of each cluster from the sum of squared errors SSE:
SSE = Σ_{i=1..K} Σ_{x∈C_i} ||x - c_i||^2
where C_i is the i-th cluster and c_i its centroid.
the computation is stopped when the cluster does not change or the maximum number of iterations is reached.
S4.3: and taking each frame of image in the sequence as a query object again to construct a feature set of the image.
S4.4: and establishing KD-Tree for the characteristics of the image, and calculating and matching the current image with all images in the matching set by using a KNN method.
Compared with the prior art, the invention has the beneficial effects that: the method effectively overcomes the defects of insufficient stability and easy influence of illumination in the feature extraction process, and improves the accuracy of feature matching compared with the traditional method.
Drawings
FIG. 1 is a flow chart of the present invention;
fig. 2 is a schematic diagram of a network according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments are provided, and the present invention is further described in detail.
As shown in fig. 1 and fig. 2, a method for processing medical images based on Superpoint includes the following specific steps:
s1: acquiring images and initializing a network, mainly comprising the following parts: acquiring an inner cavity image acquired by the endoscope and initializing a deep learning network.
Further, the specific step of S1 is:
s1.1: acquiring a target image which is acquired by an endoscope and needs to be processed in a minimally invasive surgery process;
s1.2: Initialize the deep learning network parameters. Virtual three-dimensional graphics are used as the data set for network training, so that the trained network is able to extract corner points.
And acquiring an image and initializing the network through the steps.
S2: inputting the acquired image into a network for feature extraction, and simultaneously obtaining descriptors of feature points, wherein the descriptors mainly comprise the following parts: coding network, decoding network, descriptor detection and loss function construction.
Further, the specific step of S2 is:
s2.1: inputting an image to be processed into a network, and reducing the dimension of the image through a shared coding network:
Hc = H/8
where H is the original size of the image and Hc is the size after dimension reduction.
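The dimension reduction above can be checked with a short sketch (the three 2x2 pooling stages are an assumption carried over from the standard SuperPoint encoder; the patent itself only states the factor of 8):

```python
# The shared encoding network reduces spatial resolution by a factor of 8,
# i.e. three successive 2x2 pooling stages (2**3 = 8).
def encoded_size(h, w, pool_stages=3):
    """Return (Hc, Wc) after `pool_stages` halvings of the input size."""
    factor = 2 ** pool_stages
    return h // factor, w // factor

# Example: a 480x640 endoscope frame maps to a 60x80 encoded grid.
hc, wc = encoded_size(480, 640)
```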
S2.2: and (4) self-labeling the interest points. And decoding the characteristic points, wherein the output of the decoder is the probability value that the pixel points are the characteristic points. The adopted method is sub-pixel convolution, and a characteristic point detection head with a specific decoder is utilized to acquire an up-sampled image; the dimension of the input tensor is
R^(Hc×Wc×65)
and the dimension of the output is R^(H×W).
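A minimal sketch of this decoding step, assuming the 65-channel cell layout of the original SuperPoint detector head (64 positions per 8x8 cell plus one "dustbin" no-keypoint channel; the function name and the random input are illustrative only):

```python
import numpy as np

def decode_keypoint_heatmap(logits):
    """Turn an (Hc, Wc, 65) detector output into an (H, W) probability map.

    The 65 channels are a softmax over an 8x8 cell plus one 'dustbin'
    (no-keypoint) channel; dropping the dustbin and pixel-shuffling the
    remaining 64 channels restores full resolution (H = 8*Hc, W = 8*Wc).
    """
    hc, wc, c = logits.shape
    assert c == 65
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))  # stable softmax
    probs = e / e.sum(axis=-1, keepdims=True)
    probs = probs[:, :, :64]                                 # drop the dustbin
    probs = probs.reshape(hc, wc, 8, 8)                      # 8x8 sub-cells
    probs = probs.transpose(0, 2, 1, 3).reshape(hc * 8, wc * 8)
    return probs

# Illustrative input: a 60x80 encoded grid yields a 480x640 heatmap.
heatmap = decode_keypoint_heatmap(np.random.randn(60, 80, 65))
```

Keypoints would then be selected by thresholding this heatmap (and typically non-maximum suppression, which the patent does not detail).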
S2.3: descriptor detection. Obtaining a semi-dense descriptor by using a network similar to UCN, obtaining a complete descriptor by carrying out bilinear interpolation, and finally obtaining a description of unit length by using L2 standardization, wherein the characteristic dimension is as follows:
D ∈ R^(Hc×Wc×D)
to
D ∈ R^(H×W×D)
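The bilinear up-sampling and L2 normalization can be sketched in plain NumPy as follows (a sketch only; the 60x80 coarse grid and 256-dimensional descriptors are illustrative assumptions):

```python
import numpy as np

def upsample_and_normalize(desc, scale=8):
    """Bilinearly upsample an (Hc, Wc, D) semi-dense descriptor map to
    (H, W, D) and L2-normalize each pixel's descriptor to unit length."""
    hc, wc, d = desc.shape
    h, w = hc * scale, wc * scale
    # Sampling positions in the coarse grid for every fine pixel.
    ys = np.linspace(0, hc - 1, h)
    xs = np.linspace(0, wc - 1, w)
    y0 = np.floor(ys).astype(int)
    y1 = np.minimum(y0 + 1, hc - 1)
    x0 = np.floor(xs).astype(int)
    x1 = np.minimum(x0 + 1, wc - 1)
    wy = (ys - y0)[:, None, None]
    wx = (xs - x0)[None, :, None]
    # Interpolate along x on the two neighbouring rows, then along y.
    top = desc[y0][:, x0] * (1 - wx) + desc[y0][:, x1] * wx
    bot = desc[y1][:, x0] * (1 - wx) + desc[y1][:, x1] * wx
    full = top * (1 - wy) + bot * wy
    # L2-normalize so every descriptor has unit length.
    norm = np.linalg.norm(full, axis=-1, keepdims=True)
    return full / np.maximum(norm, 1e-12)

dense = upsample_and_normalize(np.random.randn(60, 80, 256))
```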
s2.4: Constructing the loss function. The network has two branch networks, so the loss function is also divided into two parts; the sum of the two parts is the final loss function of the network:
L(X,X',D,D';Y,Y',S)=Lp(X,Y)+Lp(X',Y')+λLd(D,D',S)
where Lp is the loss function of the feature points, Ld is the descriptor loss function, and λ is a coefficient that balances the negative correspondences against the positive ones.
And obtaining the characteristic points and the descriptors through the steps.
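The patent does not spell out the concrete forms of Lp and Ld. In the original SuperPoint paper, Lp is a cross-entropy over the 65-way cell softmax and Ld is a hinge loss over descriptor pairs; a sketch of that descriptor hinge term, with margins and weighting taken from the paper as assumptions, is:

```python
import numpy as np

def descriptor_hinge_loss(d, d_prime, s, pos_margin=1.0, neg_margin=0.2,
                          lambda_d=250.0):
    """SuperPoint-style hinge loss between descriptor sets of two views.

    d, d_prime: (N, D) unit descriptors from the two images.
    s: (N, N) 0/1 matrix, s[i, j] = 1 if cell i corresponds to cell j.
    Positive pairs are pushed toward similarity 1, negatives below 0.2;
    lambda_d rebalances the scarce positive correspondences.
    """
    dot = d @ d_prime.T                                  # (N, N) similarities
    pos = lambda_d * s * np.maximum(0.0, pos_margin - dot)
    neg = (1.0 - s) * np.maximum(0.0, dot - neg_margin)
    return (pos + neg).mean()

# Identical, perfectly matched descriptors incur zero loss:
d = np.eye(4)
loss_matched = descriptor_hinge_loss(d, d, np.eye(4))
```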
S3: and carrying out feature matching by using a KNN method.
Further, the specific step of S3 is:
s3.1: and selecting a proper k value.
S3.2: and calculating the distances between the points in the known category data set and the current point, sequencing the distances according to an increasing sequence, selecting the k points with the minimum distance from the current point, and arranging the adjacent tuples of the test tuples according to the distance to serve as priority queues. Traversing the tuple, calculating the distance L between the tuple and the test tuple:
L = sqrt( Σ_i (x_i - y_i)^2 )
Compare L with the maximum distance Lmax in the priority queue: if Lmax ≤ L, the tuple is discarded; otherwise L is assigned to Lmax.
S3.3: and after traversing, calculating a plurality of classes of the k tuples in the priority queue, and using the classes as the classes of the test tuples. An error rate is then calculated. Setting different k values to train again, and finally taking the k value with the minimum error rate.
And matching the characteristic points through the steps.
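A minimal sketch of the brute-force k-nearest-neighbour search described above, in plain NumPy (the priority queue is replaced by a full sort for clarity; the sample descriptors are illustrative):

```python
import numpy as np

def knn_match(query_desc, ref_desc, k=2):
    """For each query descriptor, return the indices and Euclidean distances
    of its k nearest reference descriptors, in ascending distance order."""
    # Pairwise Euclidean distances between the two descriptor sets.
    diff = query_desc[:, None, :] - ref_desc[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    idx = np.argsort(dist, axis=1)[:, :k]        # k smallest per query row
    return idx, np.take_along_axis(dist, idx, axis=1)

q = np.array([[0.0, 0.0], [1.0, 1.0]])
r = np.array([[0.0, 0.1], [2.0, 2.0], [1.0, 1.0]])
idx, dist = knn_match(q, r, k=2)
```

With k=2 the two returned distances per query also allow the usual nearest-to-second-nearest ratio test to reject ambiguous matches.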
S4: and performing hierarchical clustering on the feature set by adopting a K-Means clustering method.
Further, the specific step of S4 is:
s4.1: and selecting k points as initial centroids.
S4.2: assigning each point to the nearest centroid, forming K clusters, recalculating the centroid of each cluster from the sum of squared errors SSE:
SSE = Σ_{i=1..K} Σ_{x∈C_i} ||x - c_i||^2
where C_i is the i-th cluster and c_i its centroid.
the computation is stopped when the cluster does not change or the maximum number of iterations is reached.
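The K-Means loop of steps S4.1 and S4.2 can be sketched as follows (a plain NumPy sketch; initialization by random sampling and the fixed seed are assumptions for reproducibility):

```python
import numpy as np

def kmeans(points, k, max_iter=100, seed=0):
    """Plain K-Means: assign each point to the nearest centroid, recompute
    centroids, and stop when assignments no longer change or max_iter is
    reached. Returns (labels, centroids, sse)."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), k, replace=False)]
    labels = None
    for _ in range(max_iter):
        # Distance of every point to every centroid, then nearest assignment.
        dist = np.linalg.norm(points[:, None, :] - centroids[None, :, :],
                              axis=-1)
        new_labels = dist.argmin(axis=1)
        if labels is not None and np.array_equal(new_labels, labels):
            break                                  # clusters unchanged: stop
        labels = new_labels
        for j in range(k):
            mask = labels == j
            if mask.any():
                centroids[j] = points[mask].mean(axis=0)
    sse = ((points - centroids[labels]) ** 2).sum()  # sum of squared errors
    return labels, centroids, sse

# Two well-separated clusters of two points each:
pts = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
labels, centroids, sse = kmeans(pts, k=2)
```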
S4.3: and taking each frame of image in the sequence as a query object again to construct a feature set of the image.
S4.4: and establishing KD-Tree for the characteristics of the image, and calculating and matching the current image with all images in the matching set by using a KNN method.
Clustering is performed through the above steps.
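Steps S4.3 and S4.4 can be sketched as follows, assuming SciPy's cKDTree as the KD-Tree implementation (the scoring by mean nearest-neighbour distance is an illustrative choice; the patent only specifies KD-Tree plus KNN matching):

```python
import numpy as np
from scipy.spatial import cKDTree

def retrieve_best_image(query_desc, image_descs):
    """Match a query image's descriptors against each candidate image in the
    matching set via a KD-Tree, returning the index of the candidate with
    the smallest mean nearest-neighbour distance."""
    scores = []
    for desc in image_descs:
        tree = cKDTree(desc)             # KD-Tree over one image's features
        dist, _ = tree.query(query_desc, k=1)
        scores.append(dist.mean())
    return int(np.argmin(scores))

q = np.array([[0.0, 0.0], [1.0, 0.0]])
candidates = [np.array([[5.0, 5.0], [6.0, 5.0]]),    # far from the query
              np.array([[0.0, 0.1], [1.0, 0.1]])]    # near the query
best = retrieve_best_image(q, candidates)
```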
In summary, the SuperPoint-based medical image processing method provided by the invention can supply good feature information to surgeons and facilitates preliminary processing of the images. The extracted feature points are uniformly distributed and sufficient in number, without excessive edge information, which effectively improves the efficiency and quality of image feature extraction and matching and provides a good feature basis for advanced processing of subsequent images. SuperPoint has found certain applications in other scenarios, but its use for processing medical images has so far remained unexplored.
The above description is only one embodiment of the present invention, and is not intended to limit the present invention, and it is apparent to those skilled in the art that various modifications and variations can be made in the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (5)

1. A Superpoint-based medical image processing method, characterized by comprising the following steps:
s1: acquiring an image and initializing a network;
s2: inputting the obtained image into a network for feature extraction, and simultaneously obtaining descriptors of feature points;
s3: carrying out feature matching by a KNN method;
s4: and performing hierarchical clustering on the feature set by adopting a K-Means clustering method.
2. A Superpoint based medical image processing method according to claim 1, characterized by: the specific steps of step S1 are:
s1.1: acquiring a target image which is acquired by an endoscope and needs to be processed in a minimally invasive surgery process;
s1.2: initializing the deep learning network parameters, with virtual three-dimensional graphics used as the data set for network training so that the trained network is able to extract corner points.
3. A Superpoint based medical image processing method according to claim 1, characterized by: the specific steps of step S2 are:
s2.1: inputting an image to be processed into a network, and reducing the dimension of the image through a shared coding network:
Hc = H/8
where H is the original size of the image and Hc is the size after dimension reduction;
s2.2: self-labelling the interest points: the feature points are decoded, the output of the decoder being the probability that each pixel is a feature point; the method used is sub-pixel convolution, and a feature-point detection head with a dedicated decoder is used to obtain the up-sampled image; the dimension of the input tensor is
R^(Hc×Wc×65)
and the dimension of the output is R^(H×W);
S2.3: descriptor detection, namely obtaining a semi-dense descriptor by using a network similar to UCN, obtaining a complete descriptor by carrying out bilinear interpolation, and finally obtaining a description of unit length by using L2 standardization, wherein the characteristic dimension is as follows:
D ∈ R^(Hc×Wc×D)
to
D ∈ R^(H×W×D);
s2.4: constructing the loss function: the network has two branch networks, so the loss function is also divided into two parts, the sum of the two parts being the final loss function of the network:
L(X,X',D,D';Y,Y',S)=Lp(X,Y)+Lp(X',Y')+λLd(D,D',S)
where Lp is the loss function of the feature points, Ld is the descriptor loss function, and λ is a coefficient that balances the negative correspondences against the positive ones.
4. A Superpoint based medical image processing method according to claim 1, characterized by: the specific steps of step S3 are:
s3.1: selecting a proper k value;
s3.2: calculating the distances between the points in the known-category data set and the current point, sorting them in increasing order, selecting the k points closest to the current point, arranging the neighbouring tuples of the test tuple by distance as a priority queue, traversing the tuples, and computing the distance L between each tuple and the test tuple:
L = sqrt( Σ_i (x_i - y_i)^2 )
which is compared with the maximum distance Lmax in the priority queue: if Lmax ≤ L, the tuple is discarded; otherwise L is assigned to Lmax;
S3.3: and after traversing, calculating a plurality of classes of k tuples in the priority queue, taking the classes as classes of test tuples, then calculating an error rate, setting different k values for retraining, and finally taking the k value with the minimum error rate.
5. A method for processing medical images based on Superpoint according to claim 1, wherein said step S4 comprises the following steps:
s4.1: selecting k points as an initial centroid;
s4.2: assigning each point to the nearest centroid to form K clusters and recomputing the centroid of each cluster, with the clustering quality measured by the sum of squared errors (SSE):
SSE = Σ_{i=1..K} Σ_{x∈C_i} ||x - c_i||^2
stopping the computation when the clusters no longer change or the maximum number of iterations is reached;
s4.3: taking each frame of the sequence in turn as a query object and constructing its feature set;
s4.4: establishing a KD-Tree over the image features and matching the current image against all images in the matching set using the KNN method.
CN202110725512.4A 2021-06-29 2021-06-29 Superpoint-based medical image processing method Pending CN113436172A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110725512.4A CN113436172A (en) 2021-06-29 2021-06-29 Superpoint-based medical image processing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110725512.4A CN113436172A (en) 2021-06-29 2021-06-29 Superpoint-based medical image processing method

Publications (1)

Publication Number Publication Date
CN113436172A true CN113436172A (en) 2021-09-24

Family

ID=77757688

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110725512.4A Pending CN113436172A (en) 2021-06-29 2021-06-29 Superpoint-based medical image processing method

Country Status (1)

Country Link
CN (1) CN113436172A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103678504A (en) * 2013-11-19 2014-03-26 西安华海盈泰医疗信息技术有限公司 Similarity-based breast image matching image searching method and system
CN109919927A (en) * 2019-03-06 2019-06-21 辽宁师范大学 Based on the multipair as altering detecting method of the extremely humorous transformation of quick quaternary number
CN110046555A (en) * 2019-03-26 2019-07-23 合肥工业大学 Endoscopic system video image stabilization method and device
CN110728685A (en) * 2019-09-20 2020-01-24 东南大学 Brain tissue segmentation method based on diagonal voxel local binary pattern texture operator
CN112347842A (en) * 2020-09-11 2021-02-09 博云视觉(北京)科技有限公司 Off-line face clustering method based on association graph
CN112906753A (en) * 2021-01-26 2021-06-04 昆山华颐生物科技有限公司 Matching technology based on HE staining image and MBT staining image

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103678504A (en) * 2013-11-19 2014-03-26 西安华海盈泰医疗信息技术有限公司 Similarity-based breast image matching image searching method and system
CN109919927A (en) * 2019-03-06 2019-06-21 辽宁师范大学 Based on the multipair as altering detecting method of the extremely humorous transformation of quick quaternary number
CN110046555A (en) * 2019-03-26 2019-07-23 合肥工业大学 Endoscopic system video image stabilization method and device
CN110728685A (en) * 2019-09-20 2020-01-24 东南大学 Brain tissue segmentation method based on diagonal voxel local binary pattern texture operator
CN112347842A (en) * 2020-09-11 2021-02-09 博云视觉(北京)科技有限公司 Off-line face clustering method based on association graph
CN112906753A (en) * 2021-01-26 2021-06-04 昆山华颐生物科技有限公司 Matching technology based on HE staining image and MBT staining image

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
D. DETONE, T. MALISIEWICZ AND A. RABINOVICH: "《2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)》", 17 December 2018 *
ZHWA: "《博客园,https://www.cnblogs.com/zhwa1314/p/12054699.html》", 17 December 2019 *
小柠: "《博客园,https://www.cnblogs.com/txx120/p/11487674.html》", 8 September 2019 *

Similar Documents

Publication Publication Date Title
CN110232383B (en) Focus image recognition method and focus image recognition system based on deep learning model
EP3876190B1 (en) Endoscopic image processing method and system and computer device
KR102013806B1 (en) Method and apparatus for generating artificial data
CN108765392B (en) Digestive tract endoscope lesion detection and identification method based on sliding window
CN109767841B (en) Similar model retrieval method and device based on craniomaxillofacial three-dimensional morphological database
US20220172828A1 (en) Endoscopic image display method, apparatus, computer device, and storage medium
CN110265142B (en) Auxiliary diagnosis system for restoration image of lesion area
CN110473619B (en) Bronchofiberscope intubation assistant decision-making system based on deep learning
Kong et al. Automated maxillofacial segmentation in panoramic dental x-ray images using an efficient encoder-decoder network
CN108765483B (en) Method and system for determining mid-sagittal plane from brain CT image
CN111430025B (en) Disease diagnosis model training method based on medical image data augmentation
CN113052902A (en) Dental treatment monitoring method
Chen et al. Missing teeth and restoration detection using dental panoramic radiography based on transfer learning with CNNs
CN116077024B (en) Data processing method based on head infrared image, electronic equipment and storage medium
CN115564712B (en) Capsule endoscope video image redundant frame removing method based on twin network
CN111080676B (en) Method for tracking endoscope image sequence feature points through online classification
Wu et al. Reconstructing 3D lung shape from a single 2D image during the deaeration deformation process using model-based data augmentation
CN117012344A (en) Image analysis method for 4CMOS camera acquisition
CN113436172A (en) Superpoint-based medical image processing method
Chen et al. Detection of various dental conditions on dental panoramic radiography using Faster R-CNN
Li et al. Computer-aided disease diagnosis system in TCM based on facial image analysis
CN114972881A (en) Image segmentation data labeling method and device
CN113313722A (en) Tooth root image interactive annotation method
Cao et al. Transformer for computer-aided diagnosis of laryngeal carcinoma in pcle images
Guo et al. A Low-Dose CT Image Denoising Method Combining Multistage Network and Edge Protection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210924