CN111340807A - Nidus positioning core data extraction method, system, electronic equipment and storage medium - Google Patents
- Publication number: CN111340807A (application CN202010413451.3A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/40—ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
Abstract
The invention discloses a lesion localization core data extraction method, system, electronic device and storage medium. For any image in a medical image data set, the information entropy, the contrast value and the Inception Score of the image are calculated and fused to obtain the core degree of the image. All images in the medical image data set are then sorted in descending order of core degree, and the top-k images are extracted as core data. The information entropy is optimized using the previous batch of core data together with pathology-free medical data, and the process is repeated until a suitable amount of core data has been extracted. Because the invention continuously optimizes the extraction mechanism while extracting core data, its extraction performance keeps improving. Experiments prove that the method is highly practical: it greatly reduces the data labeling burden, trains an excellent lesion localization model, effectively assists doctors in diagnosis, and reduces the misdiagnosis rate.
Description
Technical Field
The invention relates to the field of intelligent medical treatment, and in particular to an active-learning-based lesion localization core data extraction method, system, electronic device and storage medium.
Background
In recent years, artificial intelligence has matured in both theory and technology and brought great convenience to daily life, and intelligent medical treatment in particular has developed rapidly. Deep-learning-based algorithms, such as the one proposed by Google, can identify signs of diabetic retinopathy, and Ni et al. achieved high accuracy in abdominal organ segmentation with deep learning. Such techniques can help doctors judge diseases, greatly reduce their workload, and support more accurate diagnoses. These studies demonstrate the effectiveness of deep learning for medical image analysis, but most current research on intelligent diagnosis focuses on disease identification. In clinical practice, lesion localization information helps doctors make better diagnoses, and for the treatment of most diseases, knowing the position of the lesion is indispensable. At present, lesion localization mainly depends on the doctor's judgment, which not only greatly increases the doctor's workload but also makes errors in judging the lesion position likely under fatigue and similar conditions, thereby delaying treatment. Localizing lesions with deep-learning-based target detection therefore makes intelligent medical treatment more accurate and comprehensive and better assists doctors.
The essence of deep learning is that deep networks automatically extract features from large amounts of data; performance is driven by what the network learns from that data, so both data quality and quantity affect the network. Image recognition and target detection in the medical field depend to a great extent on fully supervised learning, which requires a large amount of strongly labeled data. In the era of medical big data there is no shortage of medical images such as X-ray films and CT scans, but these data include inferior images with low resolution and heavy noise. Beyond quality, the more important problem is that most of these images are unlabeled: although many medical images carry a disease-type label, annotations of lesion position are almost entirely absent, which limits the development of medical-image target detection in deep learning. Target detection is an indispensable part of intelligent medical assistance, so overcoming this obstacle is critical. The most direct remedy for the two problems above, labeling all unlabeled data and then training the network, is hard to carry out in practice: lesion annotation requires professional medical knowledge and skill, is time-consuming, labor-intensive and expensive, and poor-quality images would still degrade training. In short, lesion localization with deep learning needs a large amount of labeled data to train a target detection model, yet existing data are of uneven quality and mostly unlabeled, and having professionals select and annotate the data increases doctors' workload at great cost.
For situations where strong labels are too expensive to obtain, deep learning offers two main weakly supervised approaches: semi-supervised learning and active learning. Semi-supervised learning focuses on learning from easily obtained annotations, produced automatically or semi-automatically by computer without human experts. Although this reduces labeling cost, the labels depend heavily on a model trained from the initially labeled portion of the data, so their accuracy cannot be guaranteed. Active learning instead focuses on reducing the number of samples that need labeling: a query function selects the most valuable unlabeled data for expert annotation, and the target model is trained on a small labeled core data set. Because experts participate in the labeling, over-reliance on a reference model is avoided, the effect is more stable, and the approach suits the medical field better. However, most active learning research targets image classification; the few studies on target detection require the initial data set to contain partially labeled data, and the approach has not been applied to the medical field. In short, existing active learning methods for target detection all require some labeled data in the initial set and many rounds of expert interaction, which does not match the actual situation in the medical field. There is currently no suitable medical-domain method that reduces training data while preserving model accuracy, localizes lesions, and assists doctors' diagnosis.
Disclosure of Invention
The technical problem to be solved by the present invention is, in view of the defects of the prior art, to provide a lesion localization core data extraction method, system, electronic device and storage medium that extract core data from medical image data carrying no lesion annotation at all, solving the difficulty of lesion localization caused by the large amount of unlabeled data of uneven quality in intelligent medical treatment.
In order to solve the above technical problems, the invention adopts the following technical scheme: a lesion localization core data extraction method comprising the following steps:
S1, for any image in the medical image data set, calculating the information entropy, the contrast value and the Inception Score of the image, and fusing them to obtain the core degree of the image;
S2, sorting all images in the medical image data set in descending order of core degree, and extracting the top-k images as a batch of core data;
S3, optimizing the information entropy with the previous batch of core data and pathology-free medical data;
S4, repeating steps S1-S3 until a suitable amount of core data has been extracted.
The method of the invention does not need a large amount of lesion-annotated data; it solves the prior-art problems of uneven data quality, scarce lesion annotations, and troublesome, expensive labeling, since core data can be extracted from medical image data that initially carries no lesion annotation at all. To account for the importance of an image under the different evaluation indexes while preserving the character of the raw values, the invention designs a mean-value fusion algorithm: the mean of each evaluation index is calculated, each raw value is normalized by that mean, and the core degree of image i in the medical image data set is calculated with the following formula:

Core_i = \frac{H_i}{\frac{1}{r}\sum_{j=1}^{r} H_j} + \frac{Con_i}{\frac{1}{r}\sum_{j=1}^{r} Con_j} + \frac{IS_i}{\frac{1}{r}\sum_{j=1}^{r} IS_j}

wherein r represents the number of images under the same evaluation index; H_i, Con_i and IS_i respectively represent the information entropy, contrast value and Inception Score of image i, with i ranging over 1, 2, ..., r; H_j, Con_j and IS_j respectively represent the information entropy, contrast value and Inception Score of image j, with j ranging over 1, 2, ..., r.
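The mean-value fusion and top-k selection described above can be sketched in NumPy. The function names are illustrative, and summing the three mean-normalized indexes is an assumption based on the textual description of the fusion:

```python
import numpy as np

def core_degree(entropy, contrast, inception_score):
    """Mean-value fusion of the three evaluation indexes.

    Each index is divided by its mean over the batch so that the three
    scales become comparable, then the normalized values are summed.
    """
    h = np.asarray(entropy, dtype=float)
    c = np.asarray(contrast, dtype=float)
    s = np.asarray(inception_score, dtype=float)
    return h / h.mean() + c / c.mean() + s / s.mean()

def select_core(core, k):
    """Indices of the top-k images in descending order of core degree."""
    return np.argsort(core)[::-1][:k]
```

The top-k indices returned by `select_core` correspond to one batch of core data handed to the expert annotators.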
The information entropy optimization process of the invention is as follows: fine-tune the classification model with the previous batch of core data and the pathology-free medical data; that is, fix the parameter weights of the front layers of the classification model, train and adjust only the weights of its last layers with the extracted previous batch of core data and the pathology-free medical data, and replace the original classification model with the fine-tuned one, thereby optimizing the information entropy.
In step S1, the information entropy of image i is calculated as follows: image i is learned and calculated by a classification model with pre-trained weights to obtain its information entropy H_i, with the formula:

H_i = -\sum_{c=1}^{n} p_c^{(i)} \log p_c^{(i)}

where n is the number of output categories of the classification model, i denotes the i-th image of the positive sample set in the unlabeled data set, and p_c^{(i)} is the confidence with which the classification model predicts that image i belongs to class c.
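Given one softmax prediction vector from the classifier, the entropy above can be computed directly; the function name and the small numerical guards are illustrative:

```python
import numpy as np

def prediction_entropy(probs):
    """Information entropy of one softmax prediction vector.

    `probs` holds the classifier's confidence p_c for each of the n
    output classes; higher entropy means the model is less certain
    about the image, so the image is more informative for training.
    """
    p = np.asarray(probs, dtype=float)
    p = p / p.sum()              # guard against unnormalized input
    p = np.clip(p, 1e-12, 1.0)   # avoid log(0)
    return float(-(p * np.log(p)).sum())
```

A uniform prediction over n classes gives the maximum entropy log(n); a one-hot prediction gives (numerically) zero.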
In step S1, before the contrast value of image i is calculated, image i is converted into a gray level co-occurrence matrix (GLCM). Contrast based on the gray level co-occurrence matrix represents the texture features of the image more accurately: the deeper the texture grooves, the greater the contrast, the clearer the image, and the more useful the image is for the model to learn features.
In step S1, the Inception Score of image i is calculated as follows: image i is cut into n × n sub-blocks, the Inception Score contribution of each sub-block is calculated, and the contributions of all sub-blocks of image i are combined to obtain the Inception Score of image i, which is thus calculated accurately.
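Given the classifier's softmax outputs for the n × n sub-blocks, the standard Inception Score formula IS = exp(E_x[KL(p(y|x) || p(y))]) can be applied over the sub-blocks. Obtaining the predictions from an Inception network is assumed here; the sketch takes the prediction matrix as input:

```python
import numpy as np

def inception_score(block_probs, eps=1e-12):
    """Inception Score from the class predictions of an image's sub-blocks.

    `block_probs` is an (m, n_classes) matrix: one softmax vector per
    sub-block (m = n*n blocks).  IS = exp( mean_x KL(p(y|x) || p(y)) ),
    where p(y) is the marginal class distribution over all sub-blocks.
    """
    p_yx = np.clip(np.asarray(block_probs, dtype=float), eps, 1.0)
    p_yx = p_yx / p_yx.sum(axis=1, keepdims=True)
    p_y = p_yx.mean(axis=0, keepdims=True)   # marginal over sub-blocks
    kl = (p_yx * (np.log(p_yx) - np.log(p_y))).sum(axis=1)
    return float(np.exp(kl.mean()))
```

Identical, uncertain predictions give IS = 1 (the minimum); confident and diverse predictions push IS toward the number of classes.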
Corresponding to the method, the invention further provides an active-learning-based lesion localization core data extraction system, comprising:
an information entropy calculation module for calculating the information entropy of all images in the medical image data set;
a contrast value calculation module for calculating the contrast values of all images in the medical image data set;
an Inception Score calculation module for calculating the Inception Scores of all images in the medical image data set;
a fusion module for calculating the core degree of each image from its information entropy, contrast value and Inception Score;
a sorting module for sorting all images in the medical image data set in descending order of core degree and extracting the top-k images as a batch of core data;
and an optimization loop module for optimizing the information entropy calculation module and cyclically extracting core data until a suitable amount of core data has been extracted.
The information entropy calculation module learns and calculates image i with a classification model having pre-trained weights to obtain the information entropy of image i. The contrast value calculation module of the invention comprises:
a conversion unit for performing GLCM conversion on the images in the medical image data set;
and a calculation unit for calculating the contrast value of the converted images.
The Inception Score calculation module of the invention comprises:
a cutting unit for cutting each image in the medical image data set into n × n sub-blocks;
an Inception Score calculation unit for calculating the Inception Score of each sub-block;
and a synthesis unit for combining the Inception Scores of the n × n sub-blocks to obtain the Inception Score of the image.
The optimization loop module of the invention comprises:
an optimization unit for fine-tuning the classification model in the information entropy calculation module with transfer learning, i.e., fixing the parameter weights of the front layers of the network, training only the weights of the last layers with the extracted data, and replacing the original model in the selection module with the fine-tuned model;
and a loop unit for cyclically executing the operations of the information entropy calculation module, the contrast value calculation module, the Inception Score calculation module, the fusion module, the sorting module and the optimization unit until a suitable amount of core data has been extracted.
As an inventive concept, the present invention also provides an electronic device for extracting lesion localization core data, comprising a processor; the processor is configured to perform the above method.
Preferably, in order to facilitate data acquisition, the electronic device of the invention further comprises a data acquisition module for acquiring medical images and transmitting them to the processor.
As an inventive concept, the present invention also provides a computer storage medium storing a program; the program is for executing the above method.
Compared with the prior art, the invention has the beneficial effects that:
1. The invention solves the current problems that training a lesion localization model in intelligent medical treatment depends on a large amount of lesion-annotated data, that data quality is uneven, that lesion annotations are scarce, and that labeling is troublesome and expensive. Core data can be extracted from medical image data that initially carries no lesion annotation at all, and the extracted core data can be used to train a target detection model that localizes lesions effectively while reducing doctors' labeling burden and the number of interaction rounds, thereby assisting diagnosis, reducing doctors' working pressure and promoting the development of target detection in intelligent medical treatment;
2. The invention continuously optimizes the extraction mechanism while it extracts core data, so its extraction performance keeps improving. Experiments prove that the method is highly practical, greatly reduces the data annotation burden, and effectively assists doctors' diagnosis.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a block diagram of the architecture of the present invention;
FIG. 3 is a schematic diagram of the Inception v3 network structure in the selection module of the present invention, wherein (a) is module one in Inception v3; (b) is module two in Inception v3; (c) is module three in Inception v3; and (d) is the network structure trained by the Inception v3 model;
FIG. 4 is a block diagram of a module for calculating a contrast value according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of an acceptance score value calculation module according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of an optimized loop module according to an embodiment of the present invention.
Detailed Description
The invention adopts the idea of active learning and designs a core data extraction method that can extract core data from medical image data carrying no lesion annotation at all. The extracted data are used to train a lesion localization target detection model, overcoming the obstacle to lesion localization posed by the large amount of unlabeled data of uneven quality in intelligent medical treatment.
The development of lesion localization in intelligent medical treatment relies on deep-learning-based target detection, which requires a large amount of labeled data. In medical big data, however, image quality is uneven, with some images suffering from noise and other quality problems. Moreover, most image data carry no lesion position labels, and such labeling requires medical knowledge, making it time-consuming, labor-intensive and expensive to obtain. The invention extracts core data from data without lesion annotation through a designed selection module (i.e., the extraction system of the invention). To extract core data accurately, the selection module evaluates each image with three evaluation indexes. Considering image quality, an image-quality index, namely contrast based on the gray level co-occurrence matrix (GLCM), is calculated. Considering model training, the image information entropy is calculated. Considering the training of the target detection model together with the image data set as a whole, the Inception Score is introduced; it is an index commonly used to evaluate the clarity and diversity of GAN-generated images: for a set of images, the larger the score, the sharper and more diverse the images. To fuse these indexes into an image core degree while accounting for the importance of the image under the different evaluation indexes and preserving the character of the raw values, the invention designs a mean-value fusion algorithm: the mean of each evaluation index is computed, the raw values are normalized by these means, and the normalized values are fused to obtain the image core degree, with the formula as follows:
Core_i = \frac{H_i}{\frac{1}{r}\sum_{j=1}^{r} H_j} + \frac{Con_i}{\frac{1}{r}\sum_{j=1}^{r} Con_j} + \frac{IS_i}{\frac{1}{r}\sum_{j=1}^{r} IS_j}

where r represents the number of images under the same evaluation index; H_i, Con_i and IS_i respectively represent the information entropy, contrast value and Inception Score of image i, with i ranging over 1, 2, ..., r; H_j, Con_j and IS_j respectively represent the information entropy, contrast value and Inception Score of image j, with j ranging over 1, 2, ..., r.
The method of the invention mainly comprises three stages: in the first stage, core data are extracted from the unlabeled data set through the selection module; in the second stage, the selection module is optimized with the extracted core data; in the third stage, the two steps are repeated until a suitable amount of core data has been obtained, and the core data are finally given to human experts to annotate the specific lesion positions.
Referring to fig. 1, the present invention extracts core data according to the core degree, and the above process includes the following specific steps:
the first step is as follows: first, the image is GLCM transformed and its contrast value is calculated. The value reflects the degree of sharpness of the image and the depth of the texture grooves. The deeper the texture grooves, the greater its contrast and the sharper the image.
The second step is that: the information entropy of the image is obtained by learning and calculating the image by using a classification model with pre-training weights, such as ImageNet pre-training weight inclusion v 3.
And thirdly, performing cutting processing on the image to obtain n × n blocks (for example, n = 5), calculating an acceptance score for each block of the image, and finally synthesizing the acceptance scores of all the blocks of the image to obtain the image.
The fourth step: and fusing the values of the three groups of image evaluation indexes by using a designed fusion algorithm to obtain a core degree with comprehensive evaluation representativeness of the image.
The fifth step: and performing descending order on the core degrees of the images, and extracting the images with the core degrees ranked k before as a batch of core data.
In order to extract core data more accurately, the invention extracts in batches and continuously optimizes the selection module with the extracted core data (i.e., optimizes the information entropy calculated in the second step, that is, the information entropy calculation module), as follows:
the first step is as follows: the inclusion v3 with pre-weight in the selection module is fine-tuned (for example, 1000 iterations) by using transfer learning to match the previous batch of core data with the normal image without pathology in the unmarked pool, i.e., the initial weight of the front layer of the network is maintained unchanged, and the weight of the last layer is adjusted only by training with the data.
The second step is that: and replacing the original model in the selection module by the fine-tuned model.
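The freeze-the-front-layers idea can be sketched with a tiny two-layer network in plain NumPy. The real system fine-tunes Inception v3; the toy model, the cross-entropy loss and the learning rate here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "classifier": frozen front layer W1, trainable last layer W2.
W1 = rng.normal(size=(4, 8))   # pre-trained front layer: kept fixed
W2 = rng.normal(size=(8, 2))   # last layer: the only weights updated

def forward(x):
    h = np.maximum(x @ W1, 0.0)          # ReLU hidden features
    return h, h @ W2                     # features, logits

def xent(x, y_onehot):
    """Mean softmax cross-entropy loss of the current model."""
    _, logits = forward(x)
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    return float(-(y_onehot * np.log(p + 1e-12)).sum(axis=1).mean())

def finetune_step(x, y_onehot, lr=0.05):
    """One gradient step on W2 only; W1 is never touched."""
    global W2
    h, logits = forward(x)
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    grad_W2 = h.T @ (p - y_onehot) / len(x)   # gradient w.r.t. last layer
    W2 -= lr * grad_W2
```

After some steps on a batch, the loss drops while the frozen front layer is bit-for-bit unchanged, which is exactly the fine-tuning regime described above.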
The two stages are repeated until a suitable amount of core data has been selected. The images are finally submitted to physician experts, who annotate the specific lesion position information on the core medical images. The core data can then be used to train a target detection model for lesion localization, yielding an excellent lesion localization model.
The architecture of the invention, shown in fig. 2, mainly comprises four parts: (1) the unlabeled data set pool: the unlabeled data set contains a large number of cheap, easily obtained pathological images without lesion position annotations, together with a small number of pathology-free normal medical images; (2) the selection module: comprising the information entropy calculation module, the contrast value calculation module, the Inception Score calculation module, the fusion module and the sorting module; (3) expert annotation: physician experts annotate the lesion positions in the finally selected core data set; (4) the iteratively updated (labeled) core data set pool: the labeled core data set obtained once the whole core data set has been annotated by the experts.
As can be seen from fig. 2, the extraction system of the embodiment of the present invention includes the following modules:
the information entropy calculation module is used for calculating the information entropy of all images in the medical image data set;
the contrast value calculation module is used for calculating the contrast values of all images in the medical image data set based on the gray level co-occurrence matrix;
an Inception Score calculation module for calculating the Inception Scores of all images in the medical image data set;
a fusion module for calculating the core degree of each image from its information entropy, contrast value and Inception Score;
a sorting module for sorting all images in the medical image data set in descending order of core degree and extracting the top-k images as a batch of core data;
and an optimization loop module for optimizing the information entropy calculation module and cyclically extracting batches of core data until a suitable amount of core data has been extracted.
the information entropy calculation module learns and calculates the image i by using a classification model with pre-training weight to obtain the information entropy of the image i.
As shown in FIG. 3, the classification model in the information entropy calculation module is very important, and Inception v3 performs well in the classification field, so the invention uses Inception v3 to learn the images and calculate the information entropy. The Inception v3 model has 46 layers; an input image passes through convolution layers (Conv), pooling layers (Pool) and a fully connected layer (FC) to obtain the image classification confidences. The Inception v3 model contains three types of Inception modules: a module with two consecutive 3 × 3 convolution kernels, a module factorizing an n × n convolution into consecutive n × 1 and 1 × n convolutions, and a module factorizing an n × n convolution into parallel n × 1 and 1 × n convolutions.
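The saving from factorizing an n × n convolution into n × 1 and 1 × n kernels can be checked with a quick parameter count; the channel counts below are illustrative, not taken from the patent:

```python
def conv_params(kh, kw, c_in, c_out):
    """Weight count of a single convolution layer (bias ignored)."""
    return kh * kw * c_in * c_out

c_in, c_out, n = 192, 192, 7
full = conv_params(n, n, c_in, c_out)              # one n x n conv
factored = (conv_params(n, 1, c_in, c_out)
            + conv_params(1, n, c_out, c_out))     # n x 1 then 1 x n

print(full, factored, factored / full)  # factored pair is 2/n the size
```

With n = 7 the factorized pair uses 2/7 of the weights of the full kernel, which is why Inception v3 leans on this decomposition.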
As shown in fig. 4, the contrast value calculation module includes:
the conversion unit is used for carrying out GLCM conversion on the images in the medical image data set;
and the calculating unit is used for calculating the contrast value of the converted image.
As shown in fig. 5, the Inception Score calculation module includes:
a cutting unit for cutting each image in the medical image data set into n × n sub-blocks;
an Inception Score calculation unit for calculating the Inception Score of each sub-block;
and a synthesis unit for combining the Inception Scores of the n × n sub-blocks to obtain the Inception Score of the image.
As shown in fig. 6, the optimization cycle module includes:
an optimization unit for fine-tuning the classification model in the information entropy calculation module with the previous batch of core data extracted by the sorting module and the pathology-free medical data, i.e., fixing the parameter weights of the front layers of the classification model, training and adjusting only the weights of the last layers with these data, and replacing the original classification model with the fine-tuned one;
and a loop unit for cyclically executing the operations of the information entropy calculation module, the contrast value calculation module, the Inception Score calculation module, the fusion module, the sorting module and the optimization loop module until a suitable amount of core data has been extracted.
Claims (10)
1. A lesion localization core data extraction method, characterized by comprising the following steps:
S1, for any image in a medical image data set, calculating the information entropy, the contrast value based on the gray level co-occurrence matrix, and the Inception Score of the image, and fusing them to obtain the core degree of the image;
preferably, the core degree of image i in the medical image data set is calculated with the following formula:

Core_i = \frac{H_i}{\frac{1}{r}\sum_{j=1}^{r} H_j} + \frac{Con_i}{\frac{1}{r}\sum_{j=1}^{r} Con_j} + \frac{IS_i}{\frac{1}{r}\sum_{j=1}^{r} IS_j}

wherein r represents the number of images under the same evaluation index; H_i, Con_i and IS_i respectively represent the information entropy, contrast value and Inception Score of image i, with i ranging over 1, 2, ..., r; H_j, Con_j and IS_j respectively represent the information entropy, contrast value and Inception Score of image j;
S2, sorting all images in the medical image data set in descending order of core degree, and extracting the top-k images as a batch of core data;
S3, optimizing the information entropy with the previous batch of core data and pathology-free medical data;
S4, repeating steps S1-S3 until a suitable amount of core data has been extracted.
2. The lesion localization core data extraction method according to claim 1, wherein in step S1, the information entropy of image i is calculated as follows: image i is learned and calculated with a classification model having pre-trained weights to obtain its information entropy H_i, with the formula

H_i = -\sum_{c=1}^{n} p_c^{(i)} \log p_c^{(i)}

where n is the number of output categories of the classification model, i denotes the i-th image of the positive sample set in the unlabeled data set, and p_c^{(i)} is the confidence with which the classification model predicts that image i belongs to class c;
preferably, the classification model is fine-tuned with the previous batch of core data and the pathology-free medical data, i.e., the parameter weights of the front layers of the classification model are fixed, only the weights of the last layers are trained and adjusted with the extracted previous batch of core data and the pathology-free medical data, and the fine-tuned classification model replaces the original one, thereby optimizing the information entropy.
3. The lesion localization core data extraction method according to claim 1, wherein in step S1, before calculating the contrast value of the image i, the image i is subjected to gray level co-occurrence matrix transformation.
4. The lesion localization core data extraction method according to claim 1, wherein in step S1, the Inception Score of image i is calculated by cutting image i into n × n sub-blocks, calculating the Inception Score of each sub-block, and combining the Inception Scores of all sub-blocks of image i to obtain the Inception Score of image i.
5. A lesion localization core data extraction system, comprising:
the information entropy calculation module is used for calculating the information entropy of all images in the medical image data set;
the contrast value calculation module is used for calculating the contrast values of all images in the medical image data set based on the gray level co-occurrence matrix;
an inception score value calculation module for calculating the inception score values of all images in the medical image data set;
the fusion module is used for calculating the core degree of each image according to its information entropy, contrast value and inception score value;
the sorting module is used for sorting all images in the medical image data set in descending order of core degree and extracting the top-k images as a batch of core data;
the optimization loop module is used for optimizing the information entropy calculation module and re-extracting core data in a loop until a suitable amount of core data has been extracted;
the information entropy calculation module learns and calculates image i by using a classification model with pre-trained weights to obtain the information entropy of image i.
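The claim does not pin down how the fusion module combines the three metrics; one common choice, assumed here purely for illustration, is min-max normalisation followed by a weighted sum:

```python
def normalise(values):
    """Min-max scale a list of scores to [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0] * len(values)
    return [(v - lo) / (hi - lo) for v in values]

def core_degrees(entropies, contrasts, inceptions, weights=(1.0, 1.0, 1.0)):
    """Per-image core degree as a weighted sum of the normalised metrics.

    The equal default weights are an assumption, not part of the claim.
    """
    we, wc, wi = weights
    return [we * e + wc * c + wi * s
            for e, c, s in zip(normalise(entropies),
                               normalise(contrasts),
                               normalise(inceptions))]
```

Normalising first keeps any one metric (e.g. raw contrast, which can be orders of magnitude larger than entropy) from dominating the fused core degree.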
6. The lesion localization core data extraction system of claim 5, wherein the contrast value calculation module comprises:
the conversion unit is used for carrying out gray level co-occurrence matrix conversion on the images in the medical image data set;
and the calculating unit is used for calculating the contrast value of the converted image.
7. The lesion localization core data extraction system of claim 5, wherein the inception score value calculation module comprises:
a cutting unit configured to perform cutting processing on the images in the medical image dataset, and cut each image into n × n subblocks;
an inception score calculation unit for calculating the inception score of each sub-block;
and a synthesis unit for combining the inception scores of the n × n sub-blocks to obtain the inception score of the image.
8. The lesion localization core data extraction system of claim 5, wherein the optimization loop module comprises:
the optimization unit is used for fine-tuning the classification model in the information entropy calculation module with the last batch of core data extracted by the sorting module and the lesion-free medical data, that is, freezing the parameter weights of the model's earlier layers, retraining and adjusting the weights of the last layer with the extracted core data and lesion-free medical data, and replacing the original classification model with the fine-tuned one;
the loop unit is used for cyclically executing the operations of the information entropy calculation module, the contrast value calculation module, the inception score value calculation module, the fusion module, the sorting module and the optimization loop module until a suitable amount of core data has been extracted.
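The freeze-and-retrain step performed by the optimization unit can be illustrated with a toy logistic "last layer". This is a sketch only: `features` stands in for the frozen front layers, and the actual model architecture, optimiser, and data are not specified by the claim:

```python
import math

def fine_tune_last_layer(features, xs, ys, w, b, lr=0.1, epochs=200):
    """Train only a final logistic layer by gradient descent.

    `features` plays the role of the frozen front layers: it is called
    but never updated, mirroring the fixed front-layer weights.
    """
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            f = features(x)                        # frozen feature extraction
            z = sum(wi * fi for wi, fi in zip(w, f)) + b
            p = 1.0 / (1.0 + math.exp(-z))         # sigmoid output
            g = p - y                              # log-loss gradient
            w = [wi - lr * g * fi for wi, fi in zip(w, f)]
            b -= lr * g
    return w, b
```

In a deep-learning framework the same effect is achieved by disabling gradients on the front layers and passing only the last layer's parameters to the optimiser.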
9. An electronic device for extracting lesion localization core data, comprising a processor configured to execute the method of one of claims 1 to 4; preferably, the device further comprises a data acquisition module for acquiring medical images and transmitting the lesion images to the processor.
10. A computer storage medium, characterized by storing a program for executing the method of one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010413451.3A CN111340807B (en) | 2020-05-15 | 2020-05-15 | Nidus positioning core data extraction method, system, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111340807A true CN111340807A (en) | 2020-06-26 |
CN111340807B CN111340807B (en) | 2020-09-11 |
Family
ID=71186448
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010413451.3A Active CN111340807B (en) | 2020-05-15 | 2020-05-15 | Nidus positioning core data extraction method, system, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111340807B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101345891A (en) * | 2008-08-25 | 2009-01-14 | 重庆医科大学 | Non-reference picture quality appraisement method based on information entropy and contrast |
CN103871054A (en) * | 2014-02-27 | 2014-06-18 | 华中科技大学 | Combined index-based image segmentation result quantitative evaluation method |
CN104104943A (en) * | 2013-04-10 | 2014-10-15 | 江南大学 | No-reference JPEG2000 compressed image quality evaluation method based on generalized regression neural network |
JP5710408B2 (en) * | 2011-07-19 | 2015-04-30 | 国立大学法人京都大学 | Noodle crack detection device, crack detection method and sorting system |
CN110599447A (en) * | 2019-07-29 | 2019-12-20 | 广州市番禺区中心医院(广州市番禺区人民医院、广州市番禺区心血管疾病研究所) | Method, system and storage medium for processing liver cancer focus data |
Non-Patent Citations (2)
Title |
---|
YEH, C. H. et al.: "Deep learning underwater image color correction and contrast enhancement based on hue preservation", In 2019 IEEE Underwater Technology (UT) * |
SUN Siting: "Research on Medical Ultrasound Image Segmentation and Spatial Localization of Lesion Centers", Wanfang Dissertations * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112257812A (en) * | 2020-11-12 | 2021-01-22 | 四川云从天府人工智能科技有限公司 | Method and device for determining labeled sample, machine readable medium and equipment |
CN112257812B (en) * | 2020-11-12 | 2024-03-29 | 四川云从天府人工智能科技有限公司 | Labeling sample determination method, device, machine-readable medium and equipment |
CN113962976A (en) * | 2021-01-20 | 2022-01-21 | 赛维森(广州)医疗科技服务有限公司 | Quality evaluation method for pathological slide digital image |
CN113962976B (en) * | 2021-01-20 | 2022-09-16 | 赛维森(广州)医疗科技服务有限公司 | Quality evaluation method for pathological slide digital image |
Also Published As
Publication number | Publication date |
---|---|
CN111340807B (en) | 2020-09-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Zhuang et al. | An Effective WSSENet-Based Similarity Retrieval Method of Large Lung CT Image Databases. | |
CN110491502B (en) | Microscope video stream processing method, system, computer device and storage medium | |
CN114201592B (en) | Visual question-answering method for medical image diagnosis | |
CN113627564B (en) | CT medical image processing model training method and diagnosis and treatment system based on deep learning | |
CN111340807B (en) | Nidus positioning core data extraction method, system, electronic equipment and storage medium | |
CN111430025B (en) | Disease diagnosis model training method based on medical image data augmentation | |
CN111651991A (en) | Medical named entity identification method utilizing multi-model fusion strategy | |
CN111079901A (en) | Acute stroke lesion segmentation method based on small sample learning | |
CN112085742B (en) | NAFLD ultrasonic video diagnosis method based on context attention | |
CN116664911A (en) | Breast tumor image classification method based on interpretable deep learning | |
CN114399634B (en) | Three-dimensional image classification method, system, equipment and medium based on weak supervision learning | |
CN115471512A (en) | Medical image segmentation method based on self-supervision contrast learning | |
CN114612381A (en) | Medical image focus detection algorithm with scale enhancement and attention fusion | |
CN118261887A (en) | Improved YOLOv prokaryotic and blastomere detection method | |
CN113779295A (en) | Retrieval method, device, equipment and medium for abnormal cell image features | |
CN114140437A (en) | Fundus hard exudate segmentation method based on deep learning | |
CN117457134A (en) | Medical data management method and system based on intelligent AI | |
CN112200810A (en) | Multi-modal automated ventricular segmentation system and method of use thereof | |
CN117174238A (en) | Automatic pathology report generation method based on artificial intelligence | |
CN116977862A (en) | Video detection method for plant growth stage | |
CN116403706A (en) | Diabetes prediction method integrating knowledge expansion and convolutional neural network | |
CN115937590A (en) | Skin disease image classification method with CNN and Transformer fused in parallel | |
CN113469962B (en) | Feature extraction and image-text fusion method and system for cancer lesion detection | |
CN115409812A (en) | CT image automatic classification method based on fusion time attention mechanism | |
CN112562819B (en) | Report generation method of ultrasonic multi-section data for congenital heart disease |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||