CN108681731A - Automatic annotation method and system for thyroid cancer ultrasound images - Google Patents

Automatic annotation method and system for thyroid cancer ultrasound images

Info

Publication number
CN108681731A
CN108681731A (application CN201810298494.4A)
Authority
CN
China
Prior art keywords
cancer
cluster
picture
feature
extraction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810298494.4A
Other languages
Chinese (zh)
Inventor
詹宜巨
李海良
蔡庆玲
毛宜军
王永华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat Sen University
National Sun Yat Sen University
Original Assignee
National Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National Sun Yat Sen University filed Critical National Sun Yat Sen University
Priority to CN201810298494.4A priority Critical patent/CN108681731A/en
Publication of CN108681731A publication Critical patent/CN108681731A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Probability & Statistics with Applications (AREA)
  • Ultra Sonic Diagnosis Equipment (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an automatic annotation method and system for thyroid cancer ultrasound images. The cancer image data set to be processed is pre-processed to extract an ROI sub-image data set for each cancer image; features are then extracted from the ROI sub-image data set with a VGG16 deep learning network model and clustered with the K-means++ algorithm. The clustering result is compared with a preset benchmark clustering result obtained from cancer-free images, the cancer feature clusters of the cancer image are extracted, and these clusters are finally marked at the corresponding positions in the original cancer image. The invention works efficiently and accurately, saves considerable human and material resources, has a low application cost, and can be widely applied in the field of medical image processing.

Description

Automatic annotation method and system for thyroid cancer ultrasound images
Technical field
The present invention relates to the field of medical image processing, and more particularly to an automatic annotation method and system for thyroid cancer ultrasound images.
Background technology
With the rapid development and maturation of computer storage and computing capability, technologies related to artificial intelligence have advanced greatly, especially computer vision and natural language processing. At the same time, artificial intelligence is being applied in more and more fields, improving the production and working efficiency of the related industries; the combination of medicine and artificial intelligence is one example.
At present, the combination of artificial intelligence and medicine is mainly reflected in machine assistance during diagnosis. Using computer vision and deep learning, a machine can help complete disease diagnosis from medical images, for example the identification of thyroid cancer. However, owing to technical limitations, when an artificial intelligence model is trained to recognize thyroid cancer ultrasound images, a large number of cancer images are usually selected as the training set, which has the following problems: 1) training the model requires a large number of thyroid cancer ultrasound images as training samples; 2) the training set requires a doctor to manually mark the cancerous area on every image. Having doctors complete such a large amount of image annotation inevitably consumes a great deal of time, occupies doctors' working hours and wastes hospital resources. Moreover, when the judgement accuracy of the model needs to be continuously improved, the number of images to be annotated keeps rising. This approach is inefficient, consumes considerable manpower and material resources, and is of little practical value.
Explanation of terms
ROI: region of interest; in machine vision and image processing, a region to be processed that is outlined on the image to be processed with a box, circle, ellipse, irregular polygon, etc.
K-means++ algorithm: a clustering algorithm that improves the choice of the initial random seed points on the basis of the K-means algorithm. K-means itself is a hard clustering algorithm and a typical representative of prototype-based objective-function clustering methods: it takes a distance from the data points to the prototypes as the objective function to be optimized, and the update rules of the iterative computation are obtained by seeking the extremum of that function.
Selective Search algorithm: a selective search algorithm, a kind of image search algorithm; for a given picture, the selective search algorithm finds the ROI regions on the picture.
Summary of the invention
In order to solve the above technical problems, the object of the present invention is to provide an automatic annotation method and system for thyroid cancer ultrasound images.
The technical solution adopted by the present invention to solve this problem is as follows:
An automatic annotation method for thyroid cancer ultrasound images, comprising the following steps:
S1. pre-processing a cancer image data set to be processed, and extracting an ROI sub-image data set for each cancer image;
S2. performing feature extraction on the ROI sub-image data set with a VGG16 deep learning network model;
S3. clustering the extracted features with the K-means++ algorithm;
S4. comparing the clustering result with a preset benchmark clustering result obtained from cancer-free images, and extracting the cancer feature clusters of the cancer image;
S5. marking the extracted cancer feature clusters at the corresponding positions in the original cancer image.
Further, in step S4, the benchmark clustering result of the preset cancer-free images is obtained by the following steps:
S01. pre-processing a cancer-free image data set, and extracting an ROI sub-image data set for each cancer-free image;
S02. performing feature extraction on the ROI sub-image data sets of the cancer-free images with the VGG16 deep learning network model to obtain benchmark features;
S03. clustering the extracted benchmark features with the K-means++ algorithm.
Further, step S1 is specifically:
for each cancer image in the cancer image data set to be processed, adjusting its contrast, brightness and sharpness, then cutting the adjusted cancer image with the selective search algorithm to obtain multiple ROI sub-images, and recording the coordinates of the position of each ROI sub-image in the original cancer image to obtain the corresponding ROI sub-image data set.
Further, step S2 is specifically:
taking the ROI sub-image data set as input data, feeding it into a pre-trained VGG16 deep learning network model for computation, and extracting the features of the fifth convolutional block obtained by the VGG16 deep learning network model during the computation.
Further, step S3 is specifically:
after specifying the number k of clusters to be obtained, clustering the extracted features with the K-means++ algorithm to obtain a set of feature clusters.
Further, step S4 is specifically:
computing the Euclidean distance between each cluster in the feature cluster set obtained by clustering and the cluster centers of the feature cluster set in the benchmark clustering result of the preset cancer-free images, and selecting the n clusters with the largest distances as the cancer feature clusters of the cancer image;
wherein n is a preset constant.
Further, step S3 specifically comprises the following steps:
S31. specifying the number k of clusters to be obtained;
S32. randomly selecting one of the extracted features as a cluster center, then traversing the extracted features, computing the distance between each remaining feature and the selected cluster centers, and choosing the feature with the largest distance as the newest cluster center;
S33. judging whether the total number of cluster centers has reached k; if so, executing step S34, otherwise returning to step S32;
S34. assigning every feature that is not a cluster center to the cluster of one of the k cluster centers, so that the within-cluster sum of squares of the feature cluster corresponding to each cluster is minimal;
S35. recalculating the cluster center of each feature cluster and returning to step S34 until none of the cluster centers changes any more.
Further, step S5 is specifically:
for each feature in the extracted cancer feature clusters, marking the feature at the corresponding position in the original cancer image according to its coordinate information, using a preset tag format.
Another technical solution adopted by the present invention to solve its technical problem is:
An automatic annotation system for thyroid cancer ultrasound images, comprising:
at least one processor;
at least one memory for storing at least one program;
wherein, when the at least one program is executed by the at least one processor, the at least one processor implements the above automatic annotation method for thyroid cancer ultrasound images.
The beneficial effects of the invention are as follows: after extracting the ROI sub-image data set of each cancer image, the invention performs feature extraction on the ROI sub-image data set with a VGG16 deep learning network model, clusters the extracted features with the K-means++ algorithm, and compares the clustering result with the benchmark clustering result, so that the cancer feature clusters of the cancer image can be extracted automatically and finally marked automatically at the corresponding positions in the original cancer image. The whole annotation process is automated and needs no manual intervention, which saves considerable human and material resources; the application cost is low, the automated work is efficient and accurate, and the obtained annotations help doctors quickly and intuitively locate regions where canceration may exist, providing a useful auxiliary reference.
Description of the drawings
Fig. 1 is a flow chart of the automatic annotation method for thyroid cancer ultrasound images of the present invention;
Fig. 2 is a structural block diagram of the automatic annotation system for thyroid cancer ultrasound images of the present invention.
Specific embodiments
Method embodiment
Referring to Fig. 1, this embodiment provides an automatic annotation method for thyroid cancer ultrasound images, comprising the following steps:
S1. pre-processing a cancer image data set to be processed, and extracting an ROI sub-image data set for each cancer image;
S2. performing feature extraction on the ROI sub-image data set with a VGG16 deep learning network model;
S3. clustering the extracted features with the K-means++ algorithm;
S4. comparing the clustering result with a preset benchmark clustering result obtained from cancer-free images, and extracting the cancer feature clusters of the cancer image;
S5. marking the extracted cancer feature clusters at the corresponding positions in the original cancer image.
In this scheme, after the ROI sub-image data set of each cancer image is extracted, feature extraction is performed on the ROI sub-image data set with the VGG16 deep learning network model, the extracted features are clustered with the K-means++ algorithm, and the clustering result is compared with the benchmark clustering result, so that the cancer feature clusters of the cancer image can be extracted automatically and then marked automatically at the corresponding positions in the original image. The whole annotation process is automated and needs no manual intervention, saving considerable human and material resources; the automated work is efficient and accurate, and the obtained annotations help doctors quickly and intuitively locate regions where canceration may exist.
As a further preferred embodiment, in step S4, the benchmark clustering result of the preset cancer-free images is obtained by the following steps:
S01. pre-processing a cancer-free image data set, and extracting an ROI sub-image data set for each cancer-free image;
S02. performing feature extraction on the ROI sub-image data sets of the cancer-free images with the VGG16 deep learning network model to obtain benchmark features;
S03. clustering the extracted benchmark features with the K-means++ algorithm.
Specifically, the processing of steps S01-S03 is identical to that of steps S1-S3; only the objects being processed differ. That is, steps S01-S03 are executed on the cancer-free image data set in advance to establish the benchmark clustering result corresponding to the cancer-free image data set. When the method is subsequently executed, the cancer image data set to be processed is clustered by steps S1-S3 following the same procedure, which facilitates the extraction of the cancer features and finally achieves the cancer-feature annotation purpose of this scheme.
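As a concrete illustration of steps S01-S03, the following Python sketch builds the benchmark cluster centers from a set of cancer-free images by composing the per-step helpers sketched in the sections below (extract_roi_patches, extract_block5_features and cluster_features are hypothetical names introduced in those later sketches, not functions from the patent); treat it as a minimal outline under those assumptions rather than the invention's actual implementation.

```python
import numpy as np

def build_baseline_clusters(no_cancer_image_paths, k):
    """Steps S01-S03: run the same ROI extraction, VGG16 feature extraction
    and K-means++ clustering on cancer-free images, and keep the resulting
    cluster centers as the benchmark clustering result."""
    all_features = []
    for path in no_cancer_image_paths:
        patches = extract_roi_patches(path)                     # step S01
        all_features.append(extract_block5_features(patches))   # step S02
    features = np.vstack(all_features)
    km, _ = cluster_features(features, k)                       # step S03
    np.save("baseline_cluster_centers.npy", km.cluster_centers_)
    return km.cluster_centers_
```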
By cutting the ROI sub-images of the cancer image with the selective search algorithm, instead of cutting sub-images blindly, this method obtains better sub-images at a smaller image scale, which improves the efficiency and accuracy of the subsequent operations.
As a further preferred embodiment, step S1 is specifically:
for each cancer image in the cancer image data set to be processed, adjusting its contrast, brightness and sharpness, then cutting the adjusted cancer image with the selective search algorithm to obtain multiple ROI sub-images, and recording the coordinates of the position of each ROI sub-image in the original cancer image to obtain the corresponding ROI sub-image data set. When recording the coordinates of an ROI sub-image, the coordinate origin is set at the upper-left corner of the original image.
Because ultrasound images themselves have dull colors, unclear textures and relatively low pixel values, adjusting the contrast, brightness and sharpness of the cancer image in this step makes it better suited to the selective search algorithm, so that more satisfactory ROI sub-images can be obtained by cutting with the selective search algorithm.
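A minimal Python sketch of this pre-processing step is given below, using OpenCV's selective search implementation; it assumes opencv-contrib-python is installed, and the contrast/brightness and sharpening parameters are illustrative guesses, not values specified in the patent.

```python
import cv2
import numpy as np

def extract_roi_patches(image_path, max_regions=50):
    """Adjust contrast, brightness and sharpness, then cut ROI sub-images
    with selective search, recording each patch's position in the original
    image (coordinate origin at the upper-left corner, as in step S1)."""
    img = cv2.imread(image_path)

    # Contrast/brightness adjustment (alpha and beta are illustrative values).
    adjusted = cv2.convertScaleAbs(img, alpha=1.3, beta=15)

    # Simple sharpening to compensate for the low clarity of ultrasound images.
    kernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], dtype=np.float32)
    adjusted = cv2.filter2D(adjusted, -1, kernel)

    # Selective search from opencv-contrib-python.
    ss = cv2.ximgproc.segmentation.createSelectiveSearchSegmentation()
    ss.setBaseImage(adjusted)
    ss.switchToSelectiveSearchFast()
    rects = ss.process()  # each rect is (x, y, w, h) in original coordinates

    patches = []
    for (x, y, w, h) in rects[:max_regions]:
        patches.append({"patch": adjusted[y:y + h, x:x + w],
                        "coord": (int(x), int(y), int(w), int(h))})
    return patches
```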
As a further preferred embodiment, step S2 is specifically:
taking the ROI sub-image data set as input data, feeding it into a VGG16 deep learning network model pre-trained on a public image data set for computation, and extracting the features of the fifth convolutional block obtained by the VGG16 deep learning network model during the computation.
The VGG16 deep learning network model is a convolutional neural network (Convolutional Neural Network, CNN); it is deep in layers, performs well, and has a good image-processing effect.
In this step, the computation performed by the VGG16 deep learning network model amounts to convolutional calculations on the input data, i.e. the ROI sub-image data set, and the features of the fifth convolutional block are extracted as the feature-extraction result.
Compared with traditional convolutional neural networks, the features of VGG16 in its lower convolutional layers are fairly simple. As the layers deepen, VGG16 extracts higher-order features that are more abstract and carry more semantic information. Therefore, this step can extract image features better by using VGG16. This step uses the VGG16 deep learning network model as a feature extractor and selects the output of the deeper fifth convolutional block as the features; compared with traditional feature-extraction methods, more abstract and higher-order features can be extracted, which benefits the subsequent operations.
In the present invention, the training goal of the VGG16 deep learning network model is feature extraction and is not limited to specific cancer images; therefore, the training goal can be achieved by training on a public image data set. A public image data set is a shared image data set for deep-learning training, which can generally be downloaded and used for training.
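The following sketch shows one way to realize this step with the Keras VGG16 pretrained on ImageNet as the public image data set; taking the block5_conv3 output as the fifth-block feature and average-pooling it into a vector are assumptions of this sketch, since the patent does not fix those details.

```python
import cv2
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.models import Model

# Truncate VGG16 (pretrained on ImageNet) at the last convolution of block 5.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
feature_extractor = Model(inputs=base.input,
                          outputs=base.get_layer("block5_conv3").output)

def extract_block5_features(patches):
    """Map each ROI patch to a fixed-length vector taken from the fifth
    convolutional block of VGG16 (step S2)."""
    features = []
    for item in patches:
        rgb = cv2.cvtColor(item["patch"], cv2.COLOR_BGR2RGB)
        x = cv2.resize(rgb, (224, 224)).astype(np.float32)[np.newaxis, ...]
        fmap = feature_extractor.predict(preprocess_input(x), verbose=0)
        features.append(fmap.mean(axis=(1, 2)).ravel())  # (512,) per patch
    return np.array(features)
```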
As a further preferred embodiment, step S3 is specifically:
after specifying the number k of clusters to be obtained, clustering the extracted features with the K-means++ algorithm to obtain a set of feature clusters.
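A minimal sketch of this step with scikit-learn's KMeans (which uses k-means++ initialization by default) might look as follows; k is the user-specified constant from the text, and the random seed is an assumption added for reproducibility.

```python
from sklearn.cluster import KMeans

def cluster_features(features, k):
    """Step S3: cluster the extracted feature vectors with K-means++
    initialization; the fitted model's cluster_centers_ are the cluster
    centers compared against the benchmark result in step S4."""
    km = KMeans(n_clusters=k, init="k-means++", n_init=10, random_state=0)
    labels = km.fit_predict(features)
    return km, labels
```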
As a further preferred embodiment, step S4 is specifically:
computing the Euclidean distance between each cluster in the feature cluster set obtained by clustering and the cluster centers of the feature cluster set in the benchmark clustering result of the preset cancer-free images, and selecting the n clusters with the largest distances as the cancer feature clusters of the cancer image;
wherein n is a preset constant. Preferably, in this embodiment, the value of n is 3.
A larger Euclidean distance indicates a smaller similarity between clusters, and a smaller Euclidean distance indicates a greater similarity between clusters. The Euclidean distance is calculated as

d(x, y) = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2}

where d(x, y) denotes the Euclidean distance between the two clusters x and y, x_i and y_i denote the elements of clusters x and y respectively, and n denotes the total number of features in a cluster.
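The comparison in step S4 can be sketched as follows; the patent only states that Euclidean distances to the benchmark cluster centers are computed and the n most distant clusters are kept, so aggregating by the minimum distance to any benchmark center is an assumption of this sketch.

```python
import numpy as np

def select_cancer_clusters(cancer_centers, baseline_centers, n=3):
    """Step S4: measure how far each cluster center of the cancer image lies
    from the benchmark (cancer-free) cluster centers and keep the n clusters
    with the largest distances."""
    # Pairwise Euclidean distances: rows = cancer clusters, cols = baseline clusters.
    diff = cancer_centers[:, None, :] - baseline_centers[None, :, :]
    dists = np.sqrt((diff ** 2).sum(axis=-1))
    # Aggregate by the distance to the closest baseline center (an assumption).
    min_dists = dists.min(axis=1)
    # Indices of the n most dissimilar clusters, largest distance first.
    return np.argsort(min_dists)[::-1][:n]
```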
As a further preferred embodiment, step S3 specifically comprises the following steps:
S31. specifying the number k of clusters to be obtained;
S32. randomly selecting one of the extracted features as a cluster center, then traversing the extracted features, computing the distance between each remaining feature and the selected cluster centers, and choosing the feature with the largest distance as the newest cluster center; here, the remaining features are the features other than the cluster centers;
S33. judging whether the total number of cluster centers has reached k; if so, executing step S34, otherwise returning to step S32;
S34. assigning every feature that is not a cluster center to the cluster of one of the k cluster centers, so that the within-cluster sum of squares of the feature cluster corresponding to each cluster is minimal;
The specific assignment uses the following formula:

S_i^{(t)} = \{ x_p : \mathrm{dist}(x_p, m_i^{(t)}) \le \mathrm{dist}(x_p, m_j^{(t)}),\ \forall j,\ 1 \le j \le k \}

where S_i^{(t)} denotes the i-th cluster, k denotes the number of clusters finally obtained by the clustering algorithm, j is any positive integer between 1 and k, x_p denotes a feature, m_i^{(t)} denotes the i-th cluster center, m_j^{(t)} denotes the j-th cluster center, dist denotes the distance between two features, and t denotes the iteration number of the algorithm.

This formula reflects the criterion used by the K-means++ algorithm during clustering, namely that the features are assigned so that the within-cluster sum of squares is minimized.
S35. recalculating the cluster center of each feature cluster and returning to step S34 until none of the cluster centers changes any more.
The cluster center of each feature cluster is recalculated by the following formula:

m_i^{(t+1)} = \frac{1}{|S_i^{(t)}|} \sum_{x_j \in S_i^{(t)}} x_j

where m_i^{(t+1)} denotes the cluster center obtained after the feature cluster of the i-th cluster center is recalculated, S_i^{(t)} denotes the i-th cluster, and x_j denotes the features belonging to S_i^{(t)}, where j is any positive integer between 1 and k and t denotes the iteration number of the algorithm.
In the present invention, a cluster refers to a grouping obtained in the clustering computation, and a cluster center refers to the central point of a grouping obtained by the clustering operation, i.e. the central point of the cluster.
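For reference, the following sketch mirrors steps S31-S35 literally: a farthest-point seeding of the k centers followed by alternating assignment (S34) and center update (S35). Note that the seeding described in S32 is a deterministic simplification of the probabilistic D² sampling used in canonical k-means++; the function name and the max_iter safeguard are additions of this sketch.

```python
import numpy as np

def kmeans_as_described(features, k, max_iter=100):
    """Mirrors steps S31-S35: farthest-point seeding of k centers, then
    alternating assignment (S34) and center update (S35) until the centers
    stop changing."""
    rng = np.random.default_rng(0)

    # S31/S32: pick one center at random, then repeatedly add the feature
    # farthest from the already chosen centers until k centers exist (S33).
    centers = [features[rng.integers(len(features))]]
    while len(centers) < k:
        d = np.min([np.linalg.norm(features - c, axis=1) for c in centers], axis=0)
        centers.append(features[np.argmax(d)])
    centers = np.array(centers)

    labels = np.zeros(len(features), dtype=int)
    for _ in range(max_iter):
        # S34: assign every feature to its nearest center, which minimizes
        # the within-cluster sum of squares for the fixed centers.
        dists = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        # S35: recompute each center as the mean of its assigned features.
        new_centers = np.array([features[labels == i].mean(axis=0)
                                if np.any(labels == i) else centers[i]
                                for i in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels
```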
As a further preferred embodiment, step S5 is specifically:
for each feature in the extracted cancer feature clusters, marking the feature at the corresponding position in the original cancer image according to its coordinate information, using a preset tag format.
The specific annotation process is as follows: the corresponding ROI sub-image is found through the name of the feature, and its position in the original image is obtained from the coordinate information saved in step S1 combined with the length and width of the ROI sub-image. The program records the position information of the ROI sub-image in the original image in an xml file, so that automatic annotation can be performed according to the preset tag format.
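A minimal sketch of this annotation step is shown below; the Pascal-VOC-style XML layout and the label name are assumptions of the sketch, since the patent only states that the ROI positions are recorded in an xml file and marked according to a preset tag format.

```python
import cv2
import xml.etree.ElementTree as ET

def annotate_original(image_path, cancer_coords, xml_path, out_path):
    """Step S5: record the positions of the cancer-feature ROI patches in an
    XML file and draw the corresponding boxes on the original image.
    cancer_coords is a list of (x, y, w, h) tuples in original-image
    coordinates, origin at the upper-left corner."""
    img = cv2.imread(image_path)

    root = ET.Element("annotation")
    ET.SubElement(root, "filename").text = image_path
    for (x, y, w, h) in cancer_coords:
        x, y, w, h = int(x), int(y), int(w), int(h)
        obj = ET.SubElement(root, "object")
        ET.SubElement(obj, "name").text = "suspected_cancer_region"
        box = ET.SubElement(obj, "bndbox")
        ET.SubElement(box, "xmin").text = str(x)
        ET.SubElement(box, "ymin").text = str(y)
        ET.SubElement(box, "xmax").text = str(x + w)
        ET.SubElement(box, "ymax").text = str(y + h)
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)

    ET.ElementTree(root).write(xml_path)
    cv2.imwrite(out_path, img)
```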
System embodiment
Referring to Fig. 2, this embodiment provides an automatic annotation system for thyroid cancer ultrasound images, comprising:
at least one processor 100;
at least one memory 200 for storing at least one program;
wherein, when the at least one program is executed by the at least one processor 100, the at least one processor 100 implements the above automatic annotation method for thyroid cancer ultrasound images.
The thyroid cancer ultrasound image automatic annotation system of this embodiment can execute the automatic annotation method provided by the method embodiment of the present invention, can execute any combination of the implementation steps of the method embodiment, and has the corresponding functions and beneficial effects of the method.
The above is a description of the preferred implementation of the present invention, but the invention is not limited to the above embodiments. Those skilled in the art can make various equivalent variations or replacements without departing from the spirit of the invention, and such equivalent modifications or replacements are all included within the scope defined by the claims of this application.

Claims (9)

1. An automatic annotation method for thyroid cancer ultrasound images, characterized in that it comprises the following steps:
S1. pre-processing a cancer image data set to be processed, and extracting an ROI sub-image data set for each cancer image;
S2. performing feature extraction on the ROI sub-image data set with a VGG16 deep learning network model;
S3. clustering the extracted features with the K-means++ algorithm;
S4. comparing the clustering result with a preset benchmark clustering result obtained from cancer-free images, and extracting the cancer feature clusters of the cancer image;
S5. marking the extracted cancer feature clusters at the corresponding positions in the original cancer image.
2. The automatic annotation method for thyroid cancer ultrasound images according to claim 1, characterized in that, in step S4, the benchmark clustering result of the preset cancer-free images is obtained by the following steps:
S01. pre-processing a cancer-free image data set, and extracting an ROI sub-image data set for each cancer-free image;
S02. performing feature extraction on the ROI sub-image data sets of the cancer-free images with the VGG16 deep learning network model to obtain benchmark features;
S03. clustering the extracted benchmark features with the K-means++ algorithm.
3. The automatic annotation method for thyroid cancer ultrasound images according to claim 1, characterized in that step S1 is specifically:
for each cancer image in the cancer image data set to be processed, adjusting its contrast, brightness and sharpness, then cutting the adjusted cancer image with the selective search algorithm to obtain multiple ROI sub-images, and recording the coordinates of the position of each ROI sub-image in the original cancer image to obtain the corresponding ROI sub-image data set.
4. The automatic annotation method for thyroid cancer ultrasound images according to claim 1, characterized in that step S2 is specifically:
taking the ROI sub-image data set as input data, feeding it into a pre-trained VGG16 deep learning network model for computation, and extracting the features of the fifth convolutional block obtained by the VGG16 deep learning network model during the computation.
5. The automatic annotation method for thyroid cancer ultrasound images according to claim 1, characterized in that step S3 is specifically:
after specifying the number k of clusters to be obtained, clustering the extracted features with the K-means++ algorithm to obtain a set of feature clusters.
6. The automatic annotation method for thyroid cancer ultrasound images according to claim 5, characterized in that step S4 is specifically:
computing the Euclidean distance between each cluster in the feature cluster set obtained by clustering and the cluster centers of the feature cluster set in the benchmark clustering result of the preset cancer-free images, and selecting the n clusters with the largest distances as the cancer feature clusters of the cancer image;
wherein n is a preset constant.
7. The automatic annotation method for thyroid cancer ultrasound images according to claim 5, characterized in that step S3 specifically comprises the following steps:
S31. specifying the number k of clusters to be obtained;
S32. randomly selecting one of the extracted features as a cluster center, then traversing the extracted features, computing the distance between each remaining feature and the selected cluster centers, and choosing the feature with the largest distance as the newest cluster center;
S33. judging whether the total number of cluster centers has reached k; if so, executing step S34, otherwise returning to step S32;
S34. assigning every feature that is not a cluster center to the cluster of one of the k cluster centers, so that the within-cluster sum of squares of the feature cluster corresponding to each cluster is minimal;
S35. recalculating the cluster center of each feature cluster and returning to step S34 until none of the cluster centers changes any more.
8. The automatic annotation method for thyroid cancer ultrasound images according to claim 1, characterized in that step S5 is specifically:
for each feature in the extracted cancer feature clusters, marking the feature at the corresponding position in the original cancer image according to its coordinate information, using a preset tag format.
9. An automatic annotation system for thyroid cancer ultrasound images, characterized in that it comprises:
at least one processor;
at least one memory for storing at least one program;
wherein, when the at least one program is executed by the at least one processor, the at least one processor implements the automatic annotation method for thyroid cancer ultrasound images according to any one of claims 1-8.
CN201810298494.4A 2018-04-03 2018-04-03 Automatic annotation method and system for thyroid cancer ultrasound images Pending CN108681731A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810298494.4A CN108681731A (en) 2018-04-03 2018-04-03 Automatic annotation method and system for thyroid cancer ultrasound images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810298494.4A CN108681731A (en) 2018-04-03 2018-04-03 Automatic annotation method and system for thyroid cancer ultrasound images

Publications (1)

Publication Number Publication Date
CN108681731A true CN108681731A (en) 2018-10-19

Family

ID=63800797

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810298494.4A Pending CN108681731A (en) 2018-04-03 2018-04-03 Automatic annotation method and system for thyroid cancer ultrasound images

Country Status (1)

Country Link
CN (1) CN108681731A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111126470A (en) * 2019-12-18 2020-05-08 创新奇智(青岛)科技有限公司 Image data iterative clustering analysis method based on depth metric learning
CN111897984A (en) * 2020-05-28 2020-11-06 广州市玄武无线科技股份有限公司 Picture labeling method and device, terminal equipment and storage medium
WO2021189900A1 (en) * 2020-10-14 2021-09-30 平安科技(深圳)有限公司 Medical image analysis method and apparatus, and electronic device and readable storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120008839A1 (en) * 2010-07-07 2012-01-12 Olympus Corporation Image processing apparatus, method of processing image, and computer-readable recording medium
CN103324853A (en) * 2013-06-25 2013-09-25 上海交通大学 Similarity calculation system and method based on medical image features
CN105139390A (en) * 2015-08-14 2015-12-09 四川大学 Image processing method for detecting pulmonary tuberculosis focus in chest X-ray DR film
CN105427296A (en) * 2015-11-11 2016-03-23 北京航空航天大学 Ultrasonic image low-rank analysis based thyroid lesion image identification method
CN105654490A (en) * 2015-12-31 2016-06-08 中国科学院深圳先进技术研究院 Lesion region extraction method and device based on ultrasonic elastic image
CN106023239A (en) * 2016-07-05 2016-10-12 东北大学 Breast lump segmentation system and method based on mammary gland subarea density clustering

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120008839A1 (en) * 2010-07-07 2012-01-12 Olympus Corporation Image processing apparatus, method of processing image, and computer-readable recording medium
CN103324853A (en) * 2013-06-25 2013-09-25 上海交通大学 Similarity calculation system and method based on medical image features
CN105139390A (en) * 2015-08-14 2015-12-09 四川大学 Image processing method for detecting pulmonary tuberculosis focus in chest X-ray DR film
CN105427296A (en) * 2015-11-11 2016-03-23 北京航空航天大学 Ultrasonic image low-rank analysis based thyroid lesion image identification method
CN105654490A (en) * 2015-12-31 2016-06-08 中国科学院深圳先进技术研究院 Lesion region extraction method and device based on ultrasonic elastic image
CN106023239A (en) * 2016-07-05 2016-10-12 东北大学 Breast lump segmentation system and method based on mammary gland subarea density clustering

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
David Arthur et al.: "k-means++: The advantages of careful seeding", SODA '07: Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111126470A (en) * 2019-12-18 2020-05-08 创新奇智(青岛)科技有限公司 Image data iterative clustering analysis method based on depth metric learning
CN111897984A (en) * 2020-05-28 2020-11-06 广州市玄武无线科技股份有限公司 Picture labeling method and device, terminal equipment and storage medium
WO2021189900A1 (en) * 2020-10-14 2021-09-30 平安科技(深圳)有限公司 Medical image analysis method and apparatus, and electronic device and readable storage medium

Similar Documents

Publication Publication Date Title
CN111476292B (en) Small sample element learning training method for medical image classification processing artificial intelligence
CN110503654A (en) A kind of medical image cutting method, system and electronic equipment based on generation confrontation network
Yang et al. Deep plastic surgery: Robust and controllable image editing with human-drawn sketches
CN104850633B (en) A kind of three-dimensional model searching system and method based on the segmentation of cartographical sketching component
US10346728B2 (en) Nodule detection with false positive reduction
Saha et al. Skeletonization: Theory, methods and applications
WO2022001623A1 (en) Image processing method and apparatus based on artificial intelligence, and device and storage medium
CN109740652A (en) A kind of pathological image classification method and computer equipment
Schultz et al. Open-box spectral clustering: applications to medical image analysis
CN102651128B (en) Image set partitioning method based on sampling
CN108416776A (en) Image-recognizing method, pattern recognition device, computer product and readable storage medium storing program for executing
CN110147483A (en) A kind of title method for reconstructing and device
CN108681731A (en) A kind of thyroid cancer ultrasound picture automatic marking method and system
CN106874489A (en) A kind of Lung neoplasm image block search method and device based on convolutional neural networks
Casaca et al. Laplacian coordinates: Theory and methods for seeded image segmentation
CN110796135A (en) Target positioning method and device, computer equipment and computer storage medium
Seidl et al. Automated classification of petroglyphs
Zhao et al. Fine-grained diabetic wound depth and granulation tissue amount assessment using bilinear convolutional neural network
CN110334566A (en) Fingerprint extraction method inside and outside a kind of OCT based on three-dimensional full convolutional neural networks
Wang et al. Segmenting crop disease leaf image by modified fully-convolutional networks
Kang et al. Image-based modeling of plants and trees
CN114419087A (en) Focus image generation method and device, electronic equipment and storage medium
CN110363103A (en) Identifying pest method, apparatus, computer equipment and storage medium
Zhang et al. Classification of benign and malignant pulmonary nodules based on deep learning
Lengauer et al. A sketch-aided retrieval approach for incomplete 3D objects

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20181019