CN108681731A - Automatic thyroid cancer ultrasonic picture labeling method and system

Automatic thyroid cancer ultrasonic picture labeling method and system

Info

Publication number
CN108681731A
Authority
CN
China
Prior art keywords
cancer
picture
clustering
feature
cluster
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810298494.4A
Other languages
Chinese (zh)
Inventor
詹宜巨
李海良
蔡庆玲
毛宜军
王永华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat Sen University
Original Assignee
Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Yat Sen University filed Critical Sun Yat Sen University
Priority to CN201810298494.4A
Publication of CN108681731A
Legal status: Pending


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/20 - Image preprocessing
    • G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/23 - Clustering techniques
    • G06F 18/232 - Non-hierarchical techniques
    • G06F 18/2321 - Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F 18/23213 - Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/40 - Extraction of image or video features
    • G06V 10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462 - Salient features, e.g. scale invariant feature transforms [SIFT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Processing (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an automatic thyroid cancer ultrasonic picture labeling method and system. A cancer picture data set to be processed is preprocessed to extract the ROI subgraph data set of each cancer picture; a VGG16 deep learning network model then performs feature extraction on the ROI subgraph data sets; the extracted features are clustered with the K-means++ algorithm; the clustering result is compared with a preset reference clustering result obtained from cancer-free pictures so that the cancer feature clusters of each cancer picture can be extracted; and finally the extracted cancer feature clusters are marked at the corresponding positions on the original cancer picture. The invention works efficiently and with high accuracy, saves substantial financial and material resources, has a low application cost, and can be widely applied in the field of medical image processing.

Description

Automatic thyroid cancer ultrasonic picture labeling method and system
Technical Field
The invention relates to the field of medical image data processing, in particular to an automatic thyroid cancer ultrasonic picture labeling method and system.
Background
With the rapid development and maturing of computer storage and computing capacity, technologies related to artificial intelligence, especially computer vision and natural language processing, have advanced greatly. Meanwhile, artificial intelligence keeps entering different fields and improving the production and working efficiency of the related industries; the combination of medicine and artificial intelligence is one such case.
At present, the combination of artificial intelligence and medicine is mainly embodied in machines assisting doctors in diagnosis. Using computer vision technology and deep learning, a machine can assist in judging diseases from medical images, such as identifying thyroid cancer. However, owing to technical limitations, training an artificial intelligence to recognize thyroid cancer ultrasound images currently relies mainly on selecting a large number of cancer images as the training set, which raises the following problems: 1) the training model needs a large number of thyroid cancer ultrasound images as training samples; 2) the training set requires a doctor to manually mark the cancer region on every picture. Having doctors complete such a large amount of marking work is severely time-consuming, ties up a great deal of doctors' time, and wastes hospital resources. Moreover, under this approach, whenever the judgment accuracy of the model needs further optimization, the number of pictures to be marked keeps growing; the working efficiency is low, ever more manpower and material resources are consumed, and the practical value is limited.
Noun interpretation
ROI: region of interest; in machine vision and image processing, a region to be processed that is outlined on the processed image with a box, circle, ellipse, irregular polygon, or similar shape;
K-means++ algorithm: a cluster analysis algorithm that improves the K-means algorithm by optimizing the selection of the initial random points. K-means is a hard clustering algorithm and a typical prototype-based objective-function clustering method: it takes some distance from the data points to the prototypes as the objective function to be optimized, and the adjustment rules for the iterative computation are obtained by seeking the extrema of that function.
Selective Search algorithm: an image search algorithm; for a given picture, the selective search algorithm finds the ROIs on the picture.
Disclosure of Invention
In order to solve the above technical problems, the present invention provides an automatic method and system for labeling an ultrasound image of thyroid cancer.
The technical scheme adopted by the invention for solving the technical problems is as follows:
an automatic thyroid cancer ultrasonic picture labeling method comprises the following steps:
S1, preprocessing a cancer picture data set to be processed, and extracting an ROI (region of interest) subgraph data set of each cancer picture;
S2, performing feature extraction on the ROI subgraph data set by adopting a VGG16 deep learning network model;
S3, clustering the extracted features by adopting a K-means++ algorithm;
S4, comparing the result obtained by clustering with a preset reference clustering result of the cancer-free pictures, and extracting the cancer feature clusters of the cancer picture;
S5, correspondingly marking the extracted cancer feature clusters on the original image of the cancer picture.
Further, in step S4, the reference clustering result of the preset cancer-free pictures is obtained by:
S01, preprocessing the cancer-free picture data set, and extracting an ROI (region of interest) subgraph data set of each cancer-free picture;
S02, performing feature extraction on the ROI subgraph data set of the cancer-free pictures by adopting a VGG16 deep learning network model to obtain reference features;
S03, clustering the extracted reference features by adopting a K-means++ algorithm.
Further, in step S1, it specifically includes:
After the contrast, brightness and definition of each cancer picture in the cancer picture data set to be processed are adjusted, the adjusted cancer picture is segmented with a selective search algorithm to obtain a plurality of ROI subgraphs, and the coordinates of each ROI subgraph's position in the original cancer picture are recorded, yielding the corresponding ROI subgraph data set.
Further, in step S2, it specifically includes:
The ROI subgraph data set is fed as input data into a pre-trained VGG16 deep learning network model for calculation, and the features produced at the 5th convolutional layer of the VGG16 deep learning network model during this process are extracted.
Further, in step S3, it specifically includes:
After the number k of clusters to be obtained by clustering is specified, the extracted features are clustered with the K-means++ algorithm to obtain a feature cluster set.
Further, in step S4, it specifically includes:
the Euclidean distance between each cluster in the feature cluster set obtained by clustering and the cluster centers of the feature cluster set of the preset reference clustering result of the cancer-free pictures is calculated, and the first n clusters with the largest distance are selected as the cancer feature clusters of the cancer picture;
wherein n is a preset constant.
Further, the step S3 specifically includes the following steps:
S31, specifying the number k of clusters to be obtained by clustering;
S32, randomly selecting one of the extracted features as a clustering center, traversing the extracted features, calculating the distances between the remaining features and the selected clustering centers, and selecting the feature with the largest distance as the newest clustering center;
S33, judging whether the total number of clustering centers has reached k; if so, executing step S34, otherwise returning to step S32;
S34, assigning all features that are not clustering centers to the k clustering centers so that the within-cluster sum of squares of the feature cluster set corresponding to each cluster is minimized;
S35, after recalculating the clustering center of each feature cluster set, returning to step S34 until all clustering centers remain unchanged.
Further, in step S5, it specifically includes:
Each feature in the extracted cancer feature clusters is marked, in a preset marking format, at the corresponding position on the original image of the cancer picture according to that feature's coordinate information.
The other technical scheme adopted by the invention for solving the technical problem is as follows:
an automatic thyroid cancer ultrasonic picture labeling system, comprising:
at least one processor;
at least one memory for storing at least one program;
when the at least one program is executed by the at least one processor, the at least one processor is enabled to implement the method for automatically labeling the thyroid cancer ultrasound image.
The invention has the beneficial effects that: after the ROI subgraph data set of each cancer picture is extracted, feature extraction is performed on the ROI subgraph data set through a VGG16 deep learning network model; the extracted features are then clustered through the K-means++ algorithm, and the clustering result is compared with a reference clustering result so that the cancer feature clusters of the cancer picture can be extracted automatically; finally, the extracted cancer feature clusters are automatically marked on the original image of the cancer picture.
Drawings
FIG. 1 is a flow chart of an automatic ultrasonic image annotation method for thyroid cancer according to the present invention;
fig. 2 is a block diagram of an automatic ultrasonic image annotation system for thyroid cancer according to the present invention.
Detailed Description
Method embodiment
Referring to fig. 1, the present embodiment provides an automatic thyroid cancer ultrasound image labeling method, including the following steps:
S1, preprocessing a cancer picture data set to be processed, and extracting an ROI (region of interest) subgraph data set of each cancer picture;
S2, performing feature extraction on the ROI subgraph data set by adopting a VGG16 deep learning network model;
S3, clustering the extracted features by adopting a K-means++ algorithm;
S4, comparing the result obtained by clustering with a preset reference clustering result of the cancer-free pictures, and extracting the cancer feature clusters of the cancer picture;
S5, correspondingly marking the extracted cancer feature clusters on the original image of the cancer picture.
According to this scheme, after the ROI subgraph data set of each cancer picture is extracted, feature extraction is performed on the ROI subgraph data set through a VGG16 deep learning network model; the extracted features are then clustered through the K-means++ algorithm, and the clustering result is compared with a reference clustering result so that the cancer feature clusters of the cancer picture can be extracted automatically; finally, the extracted cancer feature clusters are automatically marked on the original image of the cancer picture.
Further preferably, in step S4, the reference clustering result of the preset cancer-free pictures is obtained by:
S01, preprocessing the cancer-free picture data set, and extracting an ROI (region of interest) subgraph data set of each cancer-free picture;
S02, performing feature extraction on the ROI subgraph data set of the cancer-free pictures by adopting a VGG16 deep learning network model to obtain reference features;
S03, clustering the extracted reference features by adopting a K-means++ algorithm.
Specifically, the processing procedures of steps S01 to S03 are identical to those of steps S1 to S3; only the objects being processed differ. That is, the cancer-free picture data set is processed in advance through steps S01 to S03 to establish the reference feature clustering result corresponding to that data set; when the method is subsequently carried out, the cancer picture data set to be processed is clustered through the identical steps S1 to S3, which facilitates the extraction of the cancer features and finally achieves the labeling purpose of the method.
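To make the shared pipeline concrete, the following minimal Python sketch shows one way steps S1-S3 / S01-S03 might be factored so that the same code path serves both picture sets. The helper names (extract_roi_subgraphs, extract_vgg16_features, cluster_features) and the path variables are hypothetical; the helpers correspond to the per-step sketches given below, and the cluster count k=10 is an arbitrary illustrative choice.

```python
# Hypothetical factoring of the shared S1-S3 / S01-S03 pipeline: the same three
# stages run over both picture sets; only the input pictures differ.
def extract_feature_clusters(image_paths, k):
    """Preprocess a picture set, extract deep features, and cluster them."""
    rois = [roi for p in image_paths for roi in extract_roi_subgraphs(p)]   # S1 / S01
    features = extract_vgg16_features(rois)                                 # S2 / S02
    return cluster_features(features, k)                                    # S3 / S03

# Built once, in advance, from the cancer-free picture data set (S01-S03) ...
reference = extract_feature_clusters(cancer_free_picture_paths, k=10)
# ... then the identical stages run on the cancer picture data set (S1-S3),
# and the two clustering results are compared in S4.
candidates = extract_feature_clusters(cancer_picture_paths, k=10)
```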
Compared with blindly cropping subgraphs, cutting the ROI subgraphs out of the cancer picture with the selective search algorithm yields better subgraphs at a smaller picture scale, which improves both the efficiency and the accuracy of the subsequent operations.
Further, as a preferred embodiment, in step S1, it is specifically:
After the contrast, brightness and definition of each cancer picture in the cancer picture data set to be processed are adjusted, the adjusted cancer picture is segmented with a selective search algorithm to obtain a plurality of ROI subgraphs, and the coordinates of each ROI subgraph's position in the original cancer picture are recorded, yielding the corresponding ROI subgraph data set. When the ROI subgraph coordinates are recorded, the origin of the coordinate system is set at the upper left corner of the original image.
Because ultrasound images suffer from monotonous color, unclear texture and low pixel values, this step adjusts the contrast, brightness and definition of the cancer picture so that it is better suited to the selective search algorithm, which can then segment out more ROI subgraphs that meet the requirements.
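As an illustration of this step, here is a minimal sketch using OpenCV; it assumes the opencv-contrib-python package, which provides the selective search implementation. The adjustment values alpha=1.5 and beta=20, the sharpening kernel, and max_rois are illustrative choices, not values taken from the patent.

```python
import cv2
import numpy as np

def extract_roi_subgraphs(image_path, max_rois=50):
    """S1 sketch: adjust the cancer picture, then cut out ROI subgraphs."""
    img = cv2.imread(image_path)
    # Raise contrast (alpha) and brightness (beta); illustrative values.
    img = cv2.convertScaleAbs(img, alpha=1.5, beta=20)
    # A simple sharpening kernel to improve definition of the flat ultrasound texture.
    kernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], dtype=np.float32)
    img = cv2.filter2D(img, -1, kernel)
    # Selective search proposes candidate ROIs on the adjusted picture.
    ss = cv2.ximgproc.segmentation.createSelectiveSearchSegmentation()
    ss.setBaseImage(img)
    ss.switchToSelectiveSearchFast()
    rects = ss.process()[:max_rois]
    # Each rect is (x, y, w, h); the origin is the upper left corner of the original image.
    return [(img[y:y + h, x:x + w], (x, y, w, h)) for (x, y, w, h) in rects]
```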
Further, as a preferred embodiment, in step S2, it is specifically:
The ROI subgraph data set is fed as input data into a VGG16 deep learning network model trained in advance on a public image data set, and the features produced at the 5th convolutional layer of the VGG16 deep learning network model during this calculation are extracted.
The VGG16 deep learning network model is a convolutional neural network (CNN); its deep hierarchy and proven effectiveness give it strong image processing performance.
In this step, in the process of calculating the VGG16 deep learning network model, the convolution calculation is performed on the input data, i.e., the ROI subgraph data set, so as to extract the features of the 5 th convolutional layer as the feature extraction result.
Compared with a traditional convolutional neural network, the features in VGG16's lower convolutional layers are simpler; as the hierarchy grows deeper, VGG16 extracts higher-order features that are more abstract and more semantic. Adopting VGG16 in this step therefore extracts the image features better. This step uses the VGG16 deep learning network model as a feature extractor and selects the output of the deeper 5th convolutional layer as the features, so that, compared with traditional feature extraction methods, more abstract and higher-order features are obtained and the subsequent operations can be completed better.
In the present invention, the VGG16 deep learning network model is trained for feature extraction rather than for recognizing cancer pictures in particular, so the training objective can be achieved by training on a public image data set, i.e. a picture data set shared for deep learning training that can simply be downloaded.
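A minimal Keras sketch of this step follows. It assumes ImageNet as the public training data set and reads "the 5th convolutional layer" as the last convolutional layer of VGG16's fifth block (block5_conv3); both readings are our assumptions rather than details fixed by the patent, and color-channel ordering details are glossed over.

```python
import numpy as np
import tensorflow as tf

def extract_vgg16_features(roi_subgraphs):
    """S2 sketch: one deep-convolutional feature vector per ROI subgraph."""
    # VGG16 pre-trained on a public image data set (ImageNet assumed here).
    vgg = tf.keras.applications.VGG16(weights='imagenet', include_top=False)
    # Truncate the network at block5_conv3, our reading of "the 5th convolutional layer".
    extractor = tf.keras.Model(inputs=vgg.input,
                               outputs=vgg.get_layer('block5_conv3').output)
    batch = np.stack([
        tf.keras.applications.vgg16.preprocess_input(
            tf.image.resize(sub, (224, 224)).numpy())
        for sub, _coords in roi_subgraphs
    ])
    feats = extractor.predict(batch)      # shape (N, 14, 14, 512) for 224x224 inputs
    return feats.reshape(len(feats), -1)  # flatten to one feature vector per subgraph
```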
Further, as a preferred embodiment, in step S3, it is specifically:
After the number k of clusters to be obtained by clustering is specified, the extracted features are clustered with the K-means++ algorithm to obtain a feature cluster set.
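For illustration, scikit-learn's KMeans already ships with k-means++ seeding, so a sketch of this step can be short (note that the library's seeding samples new centers in proportion to squared distance, whereas steps S31-S33 below describe a deterministic farthest-point variant):

```python
from sklearn.cluster import KMeans

def cluster_features(features, k):
    """S3 sketch: cluster the extracted feature vectors with k-means++ seeding."""
    km = KMeans(n_clusters=k, init='k-means++', n_init=10, random_state=0).fit(features)
    # km.labels_ assigns each feature to a cluster; km.cluster_centers_ holds the k centers.
    return km
```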
Further, as a preferred embodiment, in step S4, it is specifically:
the Euclidean distance between each cluster in the feature cluster set obtained by clustering and the cluster centers of the feature cluster set of the preset reference clustering result of the cancer-free pictures is calculated, and the first n clusters with the largest distance are selected as the cancer feature clusters of the cancer picture;
wherein n is a preset constant. Preferably, in this embodiment, the value of n is 3.
The larger the Euclidean distance between two clusters, the smaller their similarity; the smaller the distance, the larger their similarity. The Euclidean distance is calculated as follows:

$$d(x, y) = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2}$$

where $d(x, y)$ denotes the Euclidean distance between the two clusters $x$ and $y$, $x_i$ and $y_i$ denote the elements of clusters $x$ and $y$ respectively, and $n$ denotes the total number of features in a cluster.
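A numpy sketch of this comparison follows. The patent does not specify how each candidate cluster's single distance value is derived from the several reference cluster centers, so taking the distance to the nearest reference center is our assumption:

```python
import numpy as np

def select_cancer_clusters(candidate_centers, reference_centers, n=3):
    """S4 sketch: pick the n clusters least similar to the cancer-free reference."""
    # Pairwise Euclidean distances between candidate and reference cluster centers.
    d = np.linalg.norm(candidate_centers[:, None, :] - reference_centers[None, :, :],
                       axis=2)
    # Dissimilarity of a candidate cluster = distance to its closest reference center
    # (an assumption; the patent only says the largest distances are kept).
    dissimilarity = d.min(axis=1)
    # Indices of the n candidate clusters with the largest distance.
    return np.argsort(dissimilarity)[::-1][:n]
```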
Further, as a preferred embodiment, the step S3 specifically includes the following steps:
S31, specifying the number k of clusters to be obtained by clustering;
S32, randomly selecting one of the extracted features as a clustering center, traversing the extracted features, calculating the distances between the remaining features and the selected clustering centers, and selecting the feature with the largest distance as the newest clustering center; here, the remaining features are the features other than the clustering centers;
S33, judging whether the total number of clustering centers has reached k; if so, executing step S34, otherwise returning to step S32;
S34, assigning all features that are not clustering centers to the k clustering centers so that the within-cluster sum of squares of the feature cluster set corresponding to each cluster is minimized;
the specific allocation process adopts the following formula:
wherein,represents the ith cluster, k represents the number of clusters finally obtained by the clustering algorithm, j represents any positive integer between 1 and k, and xpThe characteristics are represented by a plurality of symbols,the i-th cluster center is represented,representing the jth clustering center, dist representing the distance between two features, and t representing the iteration times of the algorithm;
the formula reflects the basis of a K-means + + algorithm in the clustering process, namely, a plurality of characteristics are distributed, so that the square sum in the cluster is minimum.
S35, after recalculating the clustering center of each feature cluster set, returning to step S34 until all clustering centers remain unchanged.
Specifically, the clustering center of each feature cluster set is recalculated by the following formula:

$$m_i^{(t+1)} = \frac{1}{\big|S_i^{(t)}\big|} \sum_{x_j \in S_i^{(t)}} x_j$$

where $m_i^{(t+1)}$ represents the clustering center obtained after recalculation over the feature cluster set of the $i$th clustering center, $S_i^{(t)}$ denotes the $i$th cluster, $x_j$ denotes a feature belonging to $S_i^{(t)}$, $j$ represents any positive integer between 1 and $k$, and $t$ represents the iteration number of the algorithm.
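The following self-contained numpy sketch implements steps S31-S35 as written, including the deterministic farthest-feature seeding described in S32 (classical k-means++ instead samples each new center with probability proportional to squared distance); the random seed and the max_iter cap are illustrative, and features is assumed to be an (N, D) array.

```python
import numpy as np

def kmeans_pp(features, k, max_iter=100):
    """Sketch of steps S31-S35: seed k centers, then iterate assignment and update."""
    rng = np.random.default_rng(0)
    # S32-S33: start from one random feature, then repeatedly add the feature
    # farthest from all chosen centers until k centers exist.
    centers = [features[rng.integers(len(features))]]
    while len(centers) < k:
        d = np.min([np.linalg.norm(features - c, axis=1) for c in centers], axis=0)
        centers.append(features[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(max_iter):
        # S34: assign every non-center feature to its nearest center, which
        # minimises the within-cluster sum of squares for the current centers.
        labels = np.argmin(
            np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2), axis=1)
        # S35: recompute each center as the mean of its cluster (an empty cluster
        # keeps its old center) and stop once no center changes.
        new_centers = np.array([
            features[labels == i].mean(axis=0) if np.any(labels == i) else centers[i]
            for i in range(k)
        ])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers
```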
In the invention, a cluster refers to a certain group obtained in the clustering operation process, and a clustering center refers to a central point of the group obtained in the clustering operation, namely the central point of the cluster.
Further, as a preferred embodiment, in step S5, it is specifically:
Each feature in the extracted cancer feature clusters is marked, in a preset marking format, at the corresponding position on the original image of the cancer picture according to that feature's coordinate information.
The specific labeling process is as follows: the ROI subgraph corresponding to each feature is found according to the feature's name, and the subgraph's position on the original image is obtained from its length and width together with the coordinate information stored in step S1. The position information of the ROI subgraph on the original image is then recorded into an xml file by a program, so that the marking is completed automatically in the preset marking format.
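As an illustration, a minimal sketch of the xml recording might look as follows; the VOC-style tag names (filename, object, bndbox, xmin, ...) are an assumed marking format, since the patent only requires that the position information be written into an xml file:

```python
import xml.etree.ElementTree as ET

def write_annotation(xml_path, image_name, boxes):
    """S5 sketch: record each marked ROI subgraph's box in an xml annotation file."""
    root = ET.Element('annotation')
    ET.SubElement(root, 'filename').text = image_name
    for (x, y, w, h) in boxes:  # coordinates recorded in S1, origin at the upper left
        obj = ET.SubElement(root, 'object')
        ET.SubElement(obj, 'name').text = 'cancer'
        box = ET.SubElement(obj, 'bndbox')
        ET.SubElement(box, 'xmin').text = str(x)
        ET.SubElement(box, 'ymin').text = str(y)
        ET.SubElement(box, 'xmax').text = str(x + w)
        ET.SubElement(box, 'ymax').text = str(y + h)
    ET.ElementTree(root).write(xml_path)
```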
System embodiment
Referring to fig. 2, the present embodiment provides an automatic ultrasound image annotation system for thyroid cancer, including:
at least one processor 100;
at least one memory 200 for storing at least one program;
when the at least one program is executed by the at least one processor 100, the at least one processor 100 implements the method for automatically labeling thyroid cancer ultrasound images.
The thyroid cancer ultrasonic image automatic labeling system of the embodiment can execute the thyroid cancer ultrasonic image automatic labeling method provided by the method embodiment of the invention, can execute any combination of the implementation steps of the method embodiment, and has corresponding functions and beneficial effects of the method.
While the invention has been described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (9)

1. An automatic thyroid cancer ultrasonic picture labeling method is characterized by comprising the following steps:
S1, preprocessing a cancer picture data set to be processed, and extracting an ROI (region of interest) subgraph data set of each cancer picture;
S2, performing feature extraction on the ROI subgraph data set by adopting a VGG16 deep learning network model;
S3, clustering the extracted features by adopting a K-means++ algorithm;
S4, comparing the result obtained by clustering with a preset reference clustering result of the cancer-free pictures, and extracting the cancer feature clusters of the cancer picture;
S5, correspondingly marking the extracted cancer feature clusters on the original image of the cancer picture.
2. The method for automatically labeling thyroid cancer ultrasonic pictures according to claim 1, wherein in step S4, the reference clustering result of the preset cancer-free pictures is obtained by the following steps:
S01, preprocessing the cancer-free picture data set, and extracting an ROI (region of interest) subgraph data set of each cancer-free picture;
S02, performing feature extraction on the ROI subgraph data set of the cancer-free pictures by adopting a VGG16 deep learning network model to obtain reference features;
S03, clustering the extracted reference features by adopting a K-means++ algorithm.
3. The method for automatically labeling thyroid cancer ultrasonic pictures according to claim 1, wherein the step S1 specifically comprises:
After the contrast, brightness and definition of each cancer picture in the cancer picture data set to be processed are adjusted, the adjusted cancer picture is segmented with a selective search algorithm to obtain a plurality of ROI subgraphs, and the coordinates of each ROI subgraph's position in the original cancer picture are recorded, yielding the corresponding ROI subgraph data set.
4. The method for automatically labeling thyroid cancer ultrasonic pictures according to claim 1, wherein the step S2 specifically comprises:
The ROI subgraph data set is fed as input data into a pre-trained VGG16 deep learning network model for calculation, and the features produced at the 5th convolutional layer of the VGG16 deep learning network model during this process are extracted.
5. The method for automatically labeling thyroid cancer ultrasonic pictures according to claim 1, wherein the step S3 specifically comprises:
After the number k of clusters to be obtained by clustering is specified, the extracted features are clustered with the K-means++ algorithm to obtain a feature cluster set.
6. The method for automatically labeling thyroid cancer ultrasonic pictures according to claim 5, wherein the step S4 specifically comprises:
the Euclidean distance between each cluster in the feature cluster set obtained by clustering and the cluster centers of the feature cluster set of the preset reference clustering result of the cancer-free pictures is calculated, and the first n clusters with the largest distance are selected as the cancer feature clusters of the cancer picture;
wherein n is a preset constant.
7. The method for automatically labeling the thyroid cancer ultrasonic picture according to claim 5, wherein the step S3 specifically comprises the following steps:
S31, specifying the number k of clusters to be obtained by clustering;
S32, randomly selecting one of the extracted features as a clustering center, traversing the extracted features, calculating the distances between the remaining features and the selected clustering centers, and selecting the feature with the largest distance as the newest clustering center;
S33, judging whether the total number of clustering centers has reached k; if so, executing step S34, otherwise returning to step S32;
S34, assigning all features that are not clustering centers to the k clustering centers so that the within-cluster sum of squares of the feature cluster set corresponding to each cluster is minimized;
S35, after recalculating the clustering center of each feature cluster set, returning to step S34 until all clustering centers remain unchanged.
8. The method for automatically labeling thyroid cancer ultrasonic pictures according to claim 1, wherein the step S5 specifically comprises:
Each feature in the extracted cancer feature clusters is marked, in a preset marking format, at the corresponding position on the original image of the cancer picture according to that feature's coordinate information.
9. An automatic thyroid cancer ultrasonic picture labeling system is characterized by comprising:
at least one processor;
at least one memory for storing at least one program;
when the at least one program is executed by the at least one processor, the at least one processor is enabled to implement the method for automatically labeling the thyroid cancer ultrasound picture as claimed in any one of claims 1 to 8.
CN201810298494.4A 2018-04-03 2018-04-03 Automatic thyroid cancer ultrasonic picture labeling method and system Pending CN108681731A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810298494.4A CN108681731A (en) 2018-04-03 2018-04-03 Automatic thyroid cancer ultrasonic picture labeling method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810298494.4A CN108681731A (en) 2018-04-03 2018-04-03 Automatic thyroid cancer ultrasonic picture labeling method and system

Publications (1)

Publication Number Publication Date
CN108681731A true CN108681731A (en) 2018-10-19

Family

ID=63800797

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810298494.4A Pending CN108681731A (en) 2018-04-03 2018-04-03 Automatic thyroid cancer ultrasonic picture labeling method and system

Country Status (1)

Country Link
CN (1) CN108681731A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111126470A (en) * 2019-12-18 2020-05-08 创新奇智(青岛)科技有限公司 Image data iterative clustering analysis method based on depth metric learning
CN111897984A (en) * 2020-05-28 2020-11-06 广州市玄武无线科技股份有限公司 Picture labeling method and device, terminal equipment and storage medium
WO2021189900A1 (en) * 2020-10-14 2021-09-30 平安科技(深圳)有限公司 Medical image analysis method and apparatus, and electronic device and readable storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120008839A1 (en) * 2010-07-07 2012-01-12 Olympus Corporation Image processing apparatus, method of processing image, and computer-readable recording medium
CN103324853A (en) * 2013-06-25 2013-09-25 上海交通大学 Similarity calculation system and method based on medical image features
CN105139390A (en) * 2015-08-14 2015-12-09 四川大学 Image processing method for detecting pulmonary tuberculosis focus in chest X-ray DR film
CN105427296A (en) * 2015-11-11 2016-03-23 北京航空航天大学 Ultrasonic image low-rank analysis based thyroid lesion image identification method
CN105654490A (en) * 2015-12-31 2016-06-08 中国科学院深圳先进技术研究院 Lesion region extraction method and device based on ultrasonic elastic image
CN106023239A (en) * 2016-07-05 2016-10-12 东北大学 Breast lump segmentation system and method based on mammary gland subarea density clustering

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120008839A1 (en) * 2010-07-07 2012-01-12 Olympus Corporation Image processing apparatus, method of processing image, and computer-readable recording medium
CN103324853A (en) * 2013-06-25 2013-09-25 上海交通大学 Similarity calculation system and method based on medical image features
CN105139390A (en) * 2015-08-14 2015-12-09 四川大学 Image processing method for detecting pulmonary tuberculosis focus in chest X-ray DR film
CN105427296A (en) * 2015-11-11 2016-03-23 北京航空航天大学 Ultrasonic image low-rank analysis based thyroid lesion image identification method
CN105654490A (en) * 2015-12-31 2016-06-08 中国科学院深圳先进技术研究院 Lesion region extraction method and device based on ultrasonic elastic image
CN106023239A (en) * 2016-07-05 2016-10-12 东北大学 Breast lump segmentation system and method based on mammary gland subarea density clustering

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
David Arthur, et al., "k-means++: The advantages of careful seeding," SODA '07: Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms. *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111126470A (en) * 2019-12-18 2020-05-08 创新奇智(青岛)科技有限公司 Image data iterative clustering analysis method based on depth metric learning
CN111897984A (en) * 2020-05-28 2020-11-06 广州市玄武无线科技股份有限公司 Picture labeling method and device, terminal equipment and storage medium
WO2021189900A1 (en) * 2020-10-14 2021-09-30 平安科技(深圳)有限公司 Medical image analysis method and apparatus, and electronic device and readable storage medium

Similar Documents

Publication Publication Date Title
CN111476292B (en) Small sample element learning training method for medical image classification processing artificial intelligence
US11842487B2 (en) Detection model training method and apparatus, computer device and storage medium
CN110503654B (en) Medical image segmentation method and system based on generation countermeasure network and electronic equipment
CN107895367B (en) Bone age identification method and system and electronic equipment
CN110232383B (en) Focus image recognition method and focus image recognition system based on deep learning model
CN108734120B (en) Method, device and equipment for labeling image and computer readable storage medium
CN112950651B (en) Automatic delineation method of mediastinal lymph drainage area based on deep learning network
WO2022001623A1 (en) Image processing method and apparatus based on artificial intelligence, and device and storage medium
CN110263659B (en) Finger vein recognition method and system based on triplet loss and lightweight network
CN105493078B (en) Colored sketches picture search
Tian et al. Learning complementary saliency priors for foreground object segmentation in complex scenes
WO2018107371A1 (en) Image searching system and method
CN110796135B (en) Target positioning method and device, computer equipment and computer storage medium
CN110472737A (en) Training method, device and the magic magiscan of neural network model
CN108764242A (en) Off-line Chinese Character discrimination body recognition methods based on deep layer convolutional neural networks
CN108681731A (en) A kind of thyroid cancer ultrasound picture automatic marking method and system
Mei et al. A curve evolution approach for unsupervised segmentation of images with low depth of field
CN114519401A (en) Image classification method and device, electronic equipment and storage medium
CN108597589B (en) Model generation method, target detection method and medical imaging system
JP2021043881A (en) Information processing apparatus, information processing method, and information processing program
CN113222051A (en) Image labeling method based on small intestine focus characteristics
CN112001877A (en) Thyroid malignant nodule detection method based on deep learning
CN116993947A (en) Visual display method and system for three-dimensional scene
CN117197864A (en) Certificate classification recognition and crown-free detection method and system based on deep learning
Wang et al. Optic disc detection based on fully convolutional neural network and structured matrix decomposition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20181019