CN115393362A - Method, equipment and medium for selecting automatic glaucoma recognition model - Google Patents
- Publication number
- CN115393362A (application number CN202211332335.4A)
- Authority
- CN
- China
- Prior art keywords
- model
- feature
- glaucoma
- source domain
- vector
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/761—Proximity, similarity or dissimilarity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/7715—Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; Mappings, e.g. subspace methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/87—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using selection of the recognition techniques, e.g. of a classifier in a multiple classifier system
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30041—Eye; Retina; Ophthalmic
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- Health & Medical Sciences (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- Software Systems (AREA)
- Medical Informatics (AREA)
- Artificial Intelligence (AREA)
- Multimedia (AREA)
- Databases & Information Systems (AREA)
- Biophysics (AREA)
- Mathematical Physics (AREA)
- General Engineering & Computer Science (AREA)
- Molecular Biology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Quality & Reliability (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a method, equipment and medium for selecting an automatic glaucoma identification model, wherein the method comprises the following steps: acquiring a pre-training model library; selecting a public retinal image dataset as the source domain dataset and a glaucoma fundus image dataset as the target domain dataset; measuring the transferability of each model between the source and target domains: the model extracts the feature vectors of the samples in both datasets, each feature vector is put through a bilinear transformation, and the resulting high-dimensional feature vectors are mapped to a low-dimensional space to obtain a source domain feature set and a target domain feature set; computing the distance between the two feature sets, which characterizes the transferability of the current prediction model from the source domain to the target domain for automatic identification; and selecting the model with the strongest transferability for training and automatic glaucoma identification. The prediction model selected by the invention requires no glaucoma sample labels and achieves a better automatic glaucoma identification effect.
Description
Technical Field
The invention belongs to the technical field of deep learning, and particularly relates to a method, equipment and medium for selecting an automatic glaucoma identification model based on a transferability metric.
Background
Recent advances in deep learning have been applied in different medical fields for the early detection or prediction of certain abnormalities. In ophthalmology, medical image analysis using deep learning methods has made significant progress. Among the major ophthalmic abnormalities, glaucoma is a common and serious one that can lead to irreversible loss of vision. At present, automatic glaucoma identification research at home and abroad is based mainly either on prior (manually designed) glaucoma features or on latent features learned by deep learning. Classification based on a large number of manually screened features has the advantage of being targeted, but it brings three problems. First, the time cost is high, since a large amount of labor is required to screen the classification features. Second, the screened features may be affected by subjective factors of the screening personnel, so they are not accurate enough. Third, such models are difficult to generalize, and large-scale glaucoma fundus data are hard to obtain, because medical data involve patient privacy and data barriers between hospitals in China remain serious.
To address the lack of effectively labeled medical data, researchers have proposed a new solution: transfer learning. Transfer learning is a learning method that, imitating the human visual system, uses a large amount of prior knowledge from other relevant domains when performing a new task in a particular domain. The hope is that a model can be trained to an ideal recognition performance even when the medical image dataset is small, and that it will retain high automatic identification performance on a new test dataset. However, different retinal fundus image datasets, captured with different scanners, image resolutions, light source intensities and parameter settings, produce images with significant differences in appearance. As a result, a deep learning model that identifies well on the source domain dataset suffers a large performance degradation on the target domain dataset. In current transfer learning applications, finding the optimal transfer strategy still requires time-consuming experimentation and domain knowledge. A transferability metric for a model can quantitatively reveal how easily the knowledge learned from source domain data transfers to target domain data, providing guidance for selecting the transfer learning model. Model transferability measurement is therefore of great significance for the wide and efficient application of transfer learning in automatic glaucoma identification. Current research on model transferability metrics mainly comprises the following methods, each with breakthroughs but also certain limitations:
(1) Model transferability metrics based on empirical studies: Taskonomy evaluates transfer performance by retraining the source model for each target task, which requires expensive training computation.
(2) Model transferability metrics based on analytical methods: H-score assesses transferability analytically by solving the HGR maximum correlation problem. NCE measures transferability in a particular setting using conditional entropy. LEEP constructs an empirical predictor by estimating the joint distribution of the pre-training and target label spaces: it predicts the virtual label distribution of the target data in the source label space and computes the empirical conditional distribution of the target label given a virtual label; the performance of this empirical predictor is used to evaluate the pre-trained model. Although fast to compute, these analytical methods are not accurate and apply specifically to image classification with supervised pre-trained models. On the one hand they make strict assumptions about the data, and on the other hand they work poorly in cross-domain settings.
(3) Transferability metrics based on Optimal Transport (OT): transferability is described as a linear combination of domain differences and task differences. The calculation requires part of the target dataset with known labels as observation samples.
(4) Attribution-graph-based transferability metrics for heterogeneous deep models: existing deep model attribution methods are used to compute, for each trained model in the model library, a data attribution graph on a probe dataset, and the transferability of a model is measured by the similarity of the attribution graphs. This method needs to establish a probe dataset (the authors collected that data through various web image search engines), which adds a large amount of extra cost; the requirements on the probe data are strict, and its quality greatly affects the transferability measurement; moreover, an attribution graph must be computed for every model in the library on the target data, so when there are many models the storage of the attribution graphs and the cost of computing distances between them are not negligible.
(5) A zero-shot image retrieval method and device based on hash coding and a graph attention mechanism: at the macroscopic level, this method uses the selected model to extract picture features, compares an unknown-label image one by one with all known-label images in the database at the model application stage, and takes the label of the database image with the minimum Hamming distance between hash codes as the predicted class of the unknown-label image. This model selection approach lacks a prior estimate: it cannot measure a model's transferability before the model is used for the actual transfer. At the microscopic level, classification is realized by comparing against every picture in the database one by one, so the classification performance in practical application is greatly limited by the richness of the database's known-label image library.
Due to the characteristics of transfer learning, the effect of applying it to automatic identification of glaucoma fundus images is closely tied to model selection. Searching for the model with optimal recognition performance by actually carrying out the transfer requires a great deal of experimentation and information; training a model to an imperfect recognition accuracy after consuming a lot of computing resources is a waste of those resources. This lack of an evaluation-based model selection approach leaves the field of automatic glaucoma identification with great uncertainty. Measuring the transferability of deep learning models therefore has guiding significance for the application of fundus image recognition models. Model transferability is affected by many factors, such as the size of the dataset and the model optimization method, but the capability of the deep learning model's feature extraction module is the main factor influencing it. How to measure that capability, and how to compare different models under the same criterion, is thus the key issue in evaluating model transferability. Current research on model transferability metrics mainly faces the following problems: the source model needs to be retrained, consuming expensive training computation; strict assumptions are made on the data, so the methods work poorly in cross-domain settings; and part of the target domain sample labels must be known.
Disclosure of Invention
Aiming at the defects of the existing model transferability measurement methods, the invention provides a method, equipment and medium for selecting an automatic glaucoma identification model based on a transferability metric, which need no glaucoma sample labels and achieve a better automatic glaucoma identification effect.
In order to achieve the technical purpose, the invention adopts the following technical scheme:
a method for automatic glaucoma identification model selection based on a transferability metric, comprising:
step 1, obtaining a pre-training model library M = {M1, M2, …, MN} trained on a standard dataset, where M1, M2, …, MN are N different pre-training models;
step 2, selecting a public retinal image dataset as the source domain dataset Ds, comprising m public retinal image samples; using the glaucoma fundus image dataset as the target domain dataset Dt, comprising n glaucoma fundus image samples; for each pre-training model, measuring in steps 3-5 its transferability from the source domain dataset Ds to the target domain dataset Dt for automatic identification;
step 3, using the pre-training model to extract the feature vectors of the samples in the source domain dataset Ds and the target domain dataset Dt, and applying a bilinear transformation to each extracted feature vector to obtain a high-dimensional feature vector;
step 4, applying a count sketch mapping to each high-dimensional feature vector obtained in step 3, to obtain the source domain feature set Fs characterizing the source domain dataset Ds and the target domain feature set Ft characterizing the target domain dataset Dt;
step 5, computing the central feature cs of the source domain feature set Fs and the central feature ct of the target domain feature set Ft, then computing the Canberra distance between cs and ct, and using this distance to characterize the transferability of the current prediction model from the source domain dataset Ds to the target domain dataset Dt for automatic identification; the model with the minimum distance has the strongest transferability;
step 6, selecting the pre-training model with the strongest transferability, training it with the source domain dataset Ds, and taking the trained model as the automatic glaucoma identification model.
Further, the N different pre-training models are all heterogeneous deep learning models.
Further, the bilinear transformation of a feature vector is computed as:
Z = f · fᵀ
where f is the feature vector extracted by the pre-training model, fᵀ is the transposed vector of f, and Z is the matrix obtained by the bilinear transformation;
all elements of the matrix Z are then concatenated into a high-dimensional feature vector whose length is the square of the dimension of f.
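As a concrete illustration, the bilinear step above can be sketched as follows (a minimal NumPy sketch; the function name and toy vector are illustrative, not from the patent):

```python
import numpy as np

def bilinear_feature(f):
    """Bilinear transformation: outer product Z = f f^T, flattened to length s*s."""
    f = np.asarray(f, dtype=float)
    z = np.outer(f, f)       # s x s matrix obtained by the bilinear transformation
    return z.reshape(-1)     # concatenate all elements into one long vector

f = np.array([1.0, 2.0, 3.0])    # a toy s = 3 feature vector
x = bilinear_feature(f)          # high-dimensional vector of length 9
```

For a real convolutional base, f would be the pooled activation vector of one image and s is typically in the hundreds or thousands, so x can reach millions of dimensions; this is why the low-dimensional mapping in step 4 is needed.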
Further, the specific process of applying the count sketch mapping to a high-dimensional feature vector x is:
(1) define the projection dimension d of the count sketch transformation function;
(2) randomly generate arrays h and g of the same length as x, where each entry of h is randomly drawn from the array {1, 2, …, d} and each entry of g is randomly drawn from the array {1, -1}; initialize a d-dimensional zero vector y;
(3) compute y[h_i] = y[h_i] + g_i · x_i over all components; the resulting d-dimensional vector y is the mapped feature vector, where x_i is the i-th component of the high-dimensional feature vector x.
Further, the central feature cs of the source domain feature set Fs is calculated as follows:
(1) for the source domain feature set Fs, set the number of clusters to 1 and let the cluster C = Fs, where Fs contains the m feature vectors obtained from the m samples of the source domain dataset Ds through the count sketch mapping;
(4) update the mean vector from μ to μ', and at the same time remove the processed feature vector from cluster C and from the source domain feature set Fs, where μ and μ' are the mean vectors before and after the update;
(5) repeat steps (3) and (4) until the source domain feature set Fs is empty; the mean vector μ at that moment is the central feature cs of the source domain feature set Fs.
The central feature ct of the target domain feature set Ft is calculated in the same way as the central feature cs of the source domain feature set Fs.
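Steps (2) and (3) of the central-feature procedure are not fully preserved in the text above. Under one simple reading, the mean vector is updated incrementally as each feature vector is processed and removed, in which case the final mean equals the arithmetic mean of the whole set. A sketch under that assumption (all names illustrative):

```python
import numpy as np

def central_feature(features):
    """Update a running mean while removing features one by one; when the set is
    empty, the last mean is the central feature. Under this reading the result
    equals the arithmetic mean of the feature set."""
    mu = np.zeros_like(np.asarray(features[0], dtype=float))
    for k, f in enumerate(features, start=1):
        mu = mu + (np.asarray(f, dtype=float) - mu) / k   # incremental mean update
    return mu

fs = [np.array([0.0, 2.0]), np.array([2.0, 4.0]), np.array([4.0, 0.0])]
c = central_feature(fs)    # central feature of a toy 3-vector set
```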
Further, the Canberra distance between cs and ct is calculated as:
dist(cs, ct) = Σ_{i=1}^d |cs_i - ct_i| / (|cs_i| + |ct_i|)
where dist(cs, ct) is the Canberra distance between cs and ct, cs_i and ct_i respectively represent the i-th dimension feature of cs and ct, and d is the dimension of the feature vectors obtained by the count sketch mapping.
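The Canberra distance itself is straightforward to compute; a sketch follows. Note that terms where both coordinates are 0 are skipped here by convention, an assumption the text does not address:

```python
import numpy as np

def canberra(p, q):
    """Canberra distance: sum over i of |p_i - q_i| / (|p_i| + |q_i|)."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    num = np.abs(p - q)
    den = np.abs(p) + np.abs(q)
    ok = den > 0                      # skip 0/0 terms (both coordinates zero)
    return float(np.sum(num[ok] / den[ok]))

d = canberra([1.0, 2.0, 0.0], [1.0, 4.0, 0.0])   # only the middle term counts
```

`scipy.spatial.distance.canberra` implements the same definition. Because each term is normalized by |p_i| + |q_i|, coordinates near zero contribute sharply, which is the sensitivity property discussed in the advantageous effects.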
An electronic device, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to implement the transferability-metric-based automatic glaucoma identification model selection method of any of the above technical solutions.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, implements the transferability-metric-based automatic glaucoma identification model selection method of any of the above technical solutions.
Advantageous effects
1. In the prior art, transfer learning requires retraining multiple source models, and the transferability of each model is evaluated by the recognition accuracy on the target images after the transfer has actually been performed. In the medical imaging field, however, fundus datasets are small and differ greatly from one another, which limits the application of transfer learning to automatic glaucoma identification. The present method measures model transferability at the stage where the convolutional base of the deep learning model extracts image features: the source model need not be retrained and no actual transfer need be performed, yet a deep learning model with better transfer performance can be selected for automatic identification of glaucoma fundus images, requiring no glaucoma sample labels and achieving a better automatic identification effect.
2. Aiming at the problems that the features extracted by the convolutional bases of different deep learning models are high-dimensional, weak in characterization ability and mutually incomparable, the invention proposes generating a joint representation with bilinear features to enhance the characterization ability of the feature vectors; a count sketch transformation function approximates a kernel function and maps the high-dimensional bilinear features into the same, relatively low-dimensional vector space; and the Canberra distance between the transformed image feature vectors reflects model transferability, providing guidance for model selection in transfer learning applications. On the one hand, compared with other metrics, the Canberra distance is suitable for measuring the distance between two points in a vector space and is sensitive to changes in values close to 0 (greater than or equal to 0), which fits the model transferability measurement scenario; on the other hand, the method has low computational cost and needs no extra storage.
3. Aiming at the problem that, in the field of model transferability measurement, strict assumptions are made on the source and target domain data and the measurement works poorly in cross-domain settings, the present method measures the transferability of multiple pre-training models without strict assumptions on the source and target domain data, so the measurement is unaffected in cross-domain settings.
Drawings
FIG. 1 is an overall flow chart of the method according to an embodiment of the present application.
Detailed Description
The following describes embodiments of the invention in detail. The embodiments are developed on the basis of the technical solutions of the invention and give detailed implementations and specific operation procedures to further explain those solutions.
Example 1
The embodiment provides an automatic glaucoma identification model selection method based on a transferability metric, which, as shown in FIG. 1, includes the following steps:
Step 1, obtain a pre-training model library M = {M1, M2, …, MN} trained on a standard dataset, where M1, M2, …, MN are N different pre-training models.
Step 2, select a public retinal image dataset as the source domain dataset Ds, comprising m public retinal image samples; use the glaucoma fundus image dataset as the target domain dataset Dt, comprising n glaucoma fundus image samples; for each pre-training model, measure in steps 3-5 its transferability from the source domain dataset Ds to the target domain dataset Dt for automatic identification.
The public retinal image dataset can be Drishti-GS, RIM-ONE-R1, R2, R3, REFUGE, etc. This embodiment selects Drishti-GS, which comprises 101 retinal images with optic disc and optic cup mask annotations for detecting glaucoma.
Step 3, use the pre-training model to extract the feature vectors of the samples in the source domain dataset Ds and the target domain dataset Dt, then apply a bilinear transformation to each extracted feature vector to obtain a high-dimensional feature vector.
The feature vector of each public retinal image and each glaucoma fundus image is recorded as f = (f_1, f_2, …, f_s), where f_i represents the i-th dimension of the s-dimensional feature vector f. The bilinear transformation computes the matrix Z = f · fᵀ, and all elements of the matrix Z are then concatenated to obtain a high-dimensional feature vector of length s².
Step 4, apply the count sketch mapping to each high-dimensional feature vector obtained in step 3, to obtain the source domain feature set Fs characterizing Ds and the target domain feature set Ft characterizing Dt. This embodiment specifically includes:
(1) Define the projection dimension d of the count sketch transformation function. The appropriate setting of d depends on the amount of training data, the memory budget and the task difficulty. In this embodiment d = 8000 is sufficient to achieve near-maximum accuracy.
(2) Randomly generate arrays h and g of the same length as the high-dimensional feature vector x, where each entry of h is randomly drawn from the array {1, 2, …, d} and each entry of g is randomly drawn from the array {1, -1}; initialize a d-dimensional zero vector y.
(3) Compute y[h_i] = y[h_i] + g_i · x_i over all components; the resulting d-dimensional vector y, which is low-dimensional yet highly representative, is the mapped feature vector, where x_i is the i-th component of the high-dimensional feature vector x.
The source domain dataset Ds is passed through the convolutional base of the model for feature extraction and then through the count sketch mapping, finally yielding a feature set denoted Fs = {u_1, …, u_m}, where u_j represents the feature of the j-th source domain sample. The target domain dataset Dt is likewise passed through the convolutional base of the model for feature extraction and the count sketch mapping, finally yielding a feature set denoted Ft = {v_1, …, v_n}, where v_j represents the feature of the j-th target domain sample.
Step 5, compute the central feature cs of the source domain feature set Fs and the central feature ct of the target domain feature set Ft, then compute the Canberra distance between cs and ct, and use this distance to characterize the transferability of the current prediction model from the source domain dataset Ds to the target domain dataset Dt for automatic identification; the model with the minimum distance has the strongest transferability.
The central feature cs of the source domain feature set Fs is calculated as follows:
(1) for the source domain feature set Fs, set the number of clusters to 1 and let the cluster C = Fs, where Fs contains the m feature vectors obtained from the m samples of the source domain dataset Ds through the count sketch mapping;
(4) update the mean vector from μ to μ', and at the same time remove the processed feature vector from cluster C and from the source domain feature set Fs, where μ and μ' are the mean vectors before and after the update;
(5) repeat steps (3) and (4) until the source domain feature set Fs is empty; the mean vector μ at that moment is the central feature cs of the source domain feature set Fs.
The central feature ct of the target domain feature set Ft is calculated in the same way as the central feature cs of the source domain feature set Fs:
(1) for the target domain feature set Ft, set the number of clusters to 1 and let the cluster C = Ft, where Ft contains the n feature vectors obtained from the n samples of the target domain dataset Dt through the count sketch mapping;
(4) update the mean vector from μ to μ', and at the same time remove the processed feature vector from cluster C and from the target domain feature set Ft, where μ and μ' are the mean vectors before and after the update;
(5) repeat steps (3) and (4) until the target domain feature set Ft is empty; the mean vector μ at that moment is the central feature ct of the target domain feature set Ft.
In addition, the Canberra distance between cs and ct is calculated as:
dist(cs, ct) = Σ_{i=1}^d |cs_i - ct_i| / (|cs_i| + |ct_i|)
where dist(cs, ct) is the Canberra distance between cs and ct, cs_i and ct_i respectively represent the i-th dimension feature of cs and ct, and d is the dimension of the feature vectors obtained by the count sketch mapping.
For each pre-training model, the Canberra distance between cs and ct obtained by steps 3-5 characterizes the transferability of the current prediction model from the source domain dataset Ds to the target domain dataset Dt for automatic identification; the smaller the Canberra distance, the stronger the transferability of the pre-trained model.
Step 6, select the pre-training model with the strongest transferability, train it with the labeled source domain dataset Ds, and take the trained model as the automatic glaucoma identification model.
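Steps 3 through 6 can be strung together into a model-ranking loop. The sketch below is self-contained and uses toy extractors in place of real convolutional bases; the central feature is taken here as the arithmetic mean of each sketched feature set, one simple reading of the iterative update in step 5, and every name is illustrative rather than from the patent:

```python
import numpy as np

def transfer_score(extract, source, target, d=64, seed=0):
    """Canberra distance between the central features of the sketched
    source/target feature sets; smaller means stronger transferability."""
    rng = np.random.default_rng(seed)
    s2 = np.asarray(extract(source[0])).size ** 2   # bilinear feature length
    h = rng.integers(0, d, size=s2)                 # one shared count sketch
    g = rng.choice([1.0, -1.0], size=s2)            # for both domains

    def sketch(sample):
        f = np.asarray(extract(sample), dtype=float)
        x = np.outer(f, f).reshape(-1)              # bilinear transformation
        y = np.zeros(d)
        np.add.at(y, h, g * x)                      # count sketch mapping
        return y

    cs = np.mean([sketch(s) for s in source], axis=0)   # central features
    ct = np.mean([sketch(t) for t in target], axis=0)
    num, den = np.abs(cs - ct), np.abs(cs) + np.abs(ct)
    ok = den > 0
    return float(np.sum(num[ok] / den[ok]))             # Canberra distance

rng = np.random.default_rng(1)
source = [rng.random(8) for _ in range(5)]   # stand-ins for retinal images
target = [rng.random(8) for _ in range(5)]   # stand-ins for glaucoma images
models = {"A": lambda v: v[:4], "B": lambda v: v[4:] * 3.0}  # toy "extractors"
scores = {name: transfer_score(m, source, target) for name, m in models.items()}
best = min(scores, key=scores.get)   # the model to train on the labeled source set
```

In a real run, `extract` would be the frozen convolutional base of each pre-trained model, and only `best` would then be fine-tuned on the labeled source dataset.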
Example 2
An electronic device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to implement the transferability-metric-based automatic glaucoma identification model selection method of embodiment 1.
Example 3
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, implements the transferability-metric-based automatic glaucoma identification model selection method of embodiment 1.
The above embodiments are preferred embodiments of the present application, and those skilled in the art can make various changes or modifications without departing from the general concept of the present application, and such changes or modifications should fall within the scope of the claims of the present application.
Claims (8)
1. A method for selecting an automatic glaucoma identification model based on a transferability metric, comprising:
step 1, obtaining a pre-training model library M = {M1, M2, …, MN} trained on a standard dataset, where M1, M2, …, MN are N different pre-training models;
step 2, selecting a public retinal image dataset as the source domain dataset Ds, comprising m public retinal image samples; using the glaucoma fundus image dataset as the target domain dataset Dt, comprising n glaucoma fundus image samples; for each pre-training model, measuring in steps 3-5 its transferability from the source domain dataset Ds to the target domain dataset Dt for automatic identification;
step 3, using the pre-training model to extract the feature vectors of the samples in the source domain dataset Ds and the target domain dataset Dt, and applying a bilinear transformation to each extracted feature vector to obtain a high-dimensional feature vector;
step 4, applying a count sketch mapping to each high-dimensional feature vector obtained in step 3, to obtain the source domain feature set Fs characterizing the source domain dataset Ds and the target domain feature set Ft characterizing the target domain dataset Dt;
step 5, computing the central feature cs of the source domain feature set Fs and the central feature ct of the target domain feature set Ft, then computing the Canberra distance between cs and ct, and using this distance to characterize the transferability of the current prediction model from the source domain dataset Ds to the target domain dataset Dt for automatic identification; the model with the minimum distance has the strongest transferability;
step 6, selecting the pre-training model with the strongest transferability, training it with the source domain dataset Ds, and taking the trained model as the automatic glaucoma identification model.
2. The automatic glaucoma identification model selection method according to claim 1, wherein the N different pre-training models are all heterogeneous deep learning models.
3. The automatic glaucoma identification model selection method according to claim 1, wherein the bilinear transformation of a feature vector is computed as:
Z = f · fᵀ
where f is the feature vector extracted by the pre-training model, fᵀ is the transposed vector of f, and Z is the matrix obtained by the bilinear transformation; all elements of the matrix Z are then concatenated into a high-dimensional feature vector whose length is the square of the dimension of f.
4. The automatic glaucoma identification model selection method according to claim 1, wherein the specific process of applying the count sketch mapping to a high-dimensional feature vector x is:
(1) define the projection dimension d of the count sketch transformation function;
(2) randomly generate arrays h and g of the same length as x, where each entry of h is randomly drawn from the array {1, 2, …, d} and each entry of g is randomly drawn from the array {1, -1}; initialize a d-dimensional zero vector y;
(3) compute y[h_i] = y[h_i] + g_i · x_i over all components; the resulting d-dimensional vector y is the mapped feature vector, where x_i is the i-th component of the high-dimensional feature vector x.
5. The glaucoma automatic recognition model selection method according to claim 1, wherein the central feature c_s of the source domain feature set F_s is calculated as follows:
(1) For the source domain feature set F_s, setting the number of clusters and letting the cluster be C = {f_1, f_2, ..., f_m}, where f_1, ..., f_m are the m feature vectors obtained by applying the count sketch mapping to the m samples of the source domain dataset D_s;
(4) Updating the mean vector, and at the same time removing the selected feature vector from the cluster C and the source domain feature set F_s, where mu and mu' are the mean vectors before and after the update;
(5) Repeating steps (3) and (4) until the source domain feature set F_s is empty; the mean vector mu at that point is the central feature c_s of the source domain feature set F_s;
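Steps (2) and (3) of claim 5 are missing from this extract, so the following is only a hedged sketch under one assumption: that each iteration folds one feature vector into the mean with a plain incremental update and removes it, which reduces to the arithmetic mean of the feature set.

```python
def central_feature(features):
    """Running-mean reading of claim 5: repeatedly remove a feature vector
    from the set and fold it into the mean vector (mu' = mu + (f - mu) / k)
    until the set is empty. Under this incremental update the result equals
    the arithmetic mean. Steps (2)-(3) are absent from the extract, so the
    selection order and update rule here are assumptions."""
    mean = [0.0] * len(features[0])
    remaining = list(features)
    k = 0
    while remaining:
        f = remaining.pop()     # remove one feature from the set
        k += 1
        mean = [m + (fi - m) / k for m, fi in zip(mean, f)]
    return mean

c = central_feature([[1.0, 2.0], [3.0, 4.0]])
```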
6. The glaucoma automatic recognition model selection method according to claim 1, wherein the Canberra distance between c_s and c_t is calculated as:

dist(c_s, c_t) = sum over i = 1, ..., d of |c_s,i - c_t,i| / (|c_s,i| + |c_t,i|)

where dist(c_s, c_t) is the Canberra distance between c_s and c_t, c_s,i and c_t,i respectively represent the i-th dimension of c_s and c_t, and d is the dimension of the feature vectors obtained by the count sketch mapping.
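The claim-6 distance and the step-5 selection rule (smallest distance wins) can be sketched together; the model names and center vectors below are illustrative, not from the patent:

```python
def canberra(u, v):
    """Canberra distance of claim 6: sum over dimensions i of
    |u_i - v_i| / (|u_i| + |v_i|), skipping dimensions where both
    coordinates are zero (the standard convention for the 0/0 term)."""
    total = 0.0
    for a, b in zip(u, v):
        denom = abs(a) + abs(b)
        if denom > 0:
            total += abs(a - b) / denom
    return total

# Step-5 selection rule: the model whose source-domain center feature is
# closest to the target-domain center feature transfers best.
centers = {
    "model_a": ([1.0, 2.0, 3.0], [1.1, 2.1, 2.9]),
    "model_b": ([1.0, 2.0, 3.0], [4.0, 0.5, 9.0]),
}
best = min(centers, key=lambda m: canberra(*centers[m]))
```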
7. An electronic device comprising a memory and a processor, the memory having stored therein a computer program, wherein the computer program, when executed by the processor, causes the processor to carry out the method according to any one of claims 1 to 6.
8. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211332335.4A CN115393362B (en) | 2022-10-28 | 2022-10-28 | Method, equipment and medium for selecting automatic glaucoma recognition model |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211332335.4A CN115393362B (en) | 2022-10-28 | 2022-10-28 | Method, equipment and medium for selecting automatic glaucoma recognition model |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115393362A true CN115393362A (en) | 2022-11-25 |
CN115393362B CN115393362B (en) | 2023-02-03 |
Family
ID=84115167
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211332335.4A Active CN115393362B (en) | 2022-10-28 | 2022-10-28 | Method, equipment and medium for selecting automatic glaucoma recognition model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115393362B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110070535A (en) * | 2019-04-23 | 2019-07-30 | 东北大学 | Retinal vascular image segmentation method based on instance transfer learning |
CN110378366A (en) * | 2019-06-04 | 2019-10-25 | 广东工业大学 | Cross-domain image classification method based on coupled knowledge transfer |
CN113344016A (en) * | 2020-02-18 | 2021-09-03 | 深圳云天励飞技术有限公司 | Deep migration learning method and device, electronic equipment and storage medium |
US20210369195A1 (en) * | 2018-04-26 | 2021-12-02 | Voxeleron, LLC | Method and system for disease analysis and interpretation |
CN114724231A (en) * | 2022-04-13 | 2022-07-08 | 东北大学 | Glaucoma multi-modal intelligent recognition system based on transfer learning |
Non-Patent Citations (2)
Title |
---|
Wu Xing et al.: "Study on the diagnostic value of artificial intelligence fundus analysis technology for glaucoma lesions", Academic Journal of Chinese PLA Medical School * |
Xu Zhijing et al.: "Transfer learning classification method for glaucoma fundus images", Computer Engineering and Applications * |
Also Published As
Publication number | Publication date |
---|---|
CN115393362B (en) | 2023-02-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Stacke et al. | Measuring domain shift for deep learning in histopathology | |
Chen et al. | Source-free domain adaptive fundus image segmentation with denoised pseudo-labeling | |
Zhang et al. | Quantifying facial age by posterior of age comparisons | |
US20220237788A1 (en) | Multiple instance learner for tissue image classification | |
Putra et al. | Enhanced skin condition prediction through machine learning using dynamic training and testing augmentation | |
Prabhu et al. | Few-shot learning for dermatological disease diagnosis | |
CN113454733A (en) | Multi-instance learner for prognostic tissue pattern recognition | |
Sainju et al. | Automated bleeding detection in capsule endoscopy videos using statistical features and region growing | |
WO2019015246A1 (en) | Image feature acquisition | |
Filipovych et al. | Semi-supervised cluster analysis of imaging data | |
Zakazov et al. | Anatomy of domain shift impact on U-Net layers in MRI segmentation | |
Mitchell-Heggs et al. | Neural manifold analysis of brain circuit dynamics in health and disease | |
Zhang et al. | Feature-transfer network and local background suppression for microaneurysm detection | |
Naqvi et al. | Feature quality-based dynamic feature selection for improving salient object detection | |
Zhou et al. | Adaptive weighted locality-constrained sparse coding for glaucoma diagnosis | |
He et al. | A selective overview of feature screening methods with applications to neuroimaging data | |
Marinescu et al. | A vertex clustering model for disease progression: application to cortical thickness images | |
Voon et al. | Evaluating the effectiveness of stain normalization techniques in automated grading of invasive ductal carcinoma histopathological images | |
CN115393362B (en) | Method, equipment and medium for selecting automatic glaucoma recognition model | |
Wang et al. | Signal subgraph estimation via vertex screening | |
Fernández et al. | Diffusion methods for aligning medical datasets: Location prediction in CT scan images | |
Kamraoui et al. | Popcorn: Progressive pseudo-labeling with consistency regularization and neighboring | |
Yesilbek et al. | SVM-based sketch recognition: which hyperparameter interval to try? | |
Islam et al. | Unic-net: Uncertainty aware involution-convolution hybrid network for two-level disease identification | |
Ghoshal et al. | Bayesian deep active learning for medical image analysis |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||