CN114842306A - Model training method and device applied to retinal lesion image type recognition - Google Patents


Info

Publication number
CN114842306A
Authority
CN
China
Prior art keywords
image
capillary vessel
retina
trained
loss function
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210763424.8A
Other languages
Chinese (zh)
Inventor
蔡芳发
周波
邹小刚
苗瑞
莫少峰
梁书玉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen HQVT Technology Co Ltd
Original Assignee
Shenzhen HQVT Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen HQVT Technology Co Ltd filed Critical Shenzhen HQVT Technology Co Ltd
Priority to CN202210763424.8A
Publication of CN114842306A


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T7/0012 - Biomedical image inspection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10116 - X-ray image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20084 - Artificial neural networks [ANN]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30004 - Biomedical image processing
    • G06T2207/30041 - Eye; Retina; Ophthalmic
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30004 - Biomedical image processing
    • G06T2207/30101 - Blood vessel; Artery; Vein; Vascular
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 - Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03 - Recognition of patterns in medical or anatomical images

Abstract

The present disclosure provides a model training method and device applied to retinal lesion image type recognition, relating to image recognition technology and comprising the following steps: acquiring a training data set, where the training data set comprises a plurality of retina images to be trained, each provided with a labeled image type and a capillary vessel labeling diagram; inputting the retina image to be trained into a preset model to obtain a predicted image type and a capillary vessel extraction diagram of the retina image to be trained; and updating parameters of the preset model according to the labeled image type, the capillary vessel labeling diagram, the predicted image type and the capillary vessel extraction diagram of the retina image to be trained to obtain an identification model. The identification model is used for determining the image category of a retina image to be identified and the capillary vessel image of the retina image to be identified. By using a multitask model based on image category recognition and capillary vessel image extraction, the method and device can improve, to a certain extent, the recognition accuracy of the image type recognition result.

Description

Model training method and device applied to retinal lesion image type recognition
Technical Field
The present disclosure relates to image recognition technologies, and in particular, to a model training method and device applied to retinal lesion image type recognition.
Background
Currently, with the development of technology, the type of a lesion image of a retinal image can be identified. The lesion image type may be used to assist in determining a chronic medical condition associated with retinopathy.
In the prior art, retinal image data acquired with radiation, such as X-ray imaging, is mostly used: the image data is input into a single-task deep learning model, and the model identifies the lesion image type of the retinal image.
However, the recognition accuracy of the recognition results obtained in this manner still needs to be improved.
Disclosure of Invention
The present disclosure provides a model training method and device applied to retinal lesion image type recognition, so as to improve the recognition accuracy of the lesion image type of a retinal image recognized in the prior art.
According to a first aspect of the present disclosure, there is provided a model training method applied to retinal lesion image type recognition, including:
acquiring a training data set; the training data set comprises a plurality of to-be-trained retina images, the to-be-trained retina images are provided with labeled image types and capillary vessel labeling diagrams, and the capillary vessel labeling diagrams are labeling diagrams of capillary vessels in the to-be-trained retina images;
inputting the retinal image to be trained into a preset model to obtain a predicted image type and a capillary vessel extraction diagram of the retinal image to be trained, wherein the capillary vessel extraction diagram is an extraction diagram of capillary vessels in the retinal image to be trained;
updating parameters of the preset model according to the labeled image type, the capillary vessel labeling diagram, the predicted image type and the capillary vessel extraction diagram of the retina image to be trained to obtain an identification model; the identification model is used for determining the image category of the retina image to be identified and the capillary vessel image of the retina image to be identified.
According to a second aspect of the present disclosure, there is provided a retinal lesion image type identification method, including:
obtaining a retina image to be identified;
inputting the retina image to be recognized into a recognition model for processing to obtain the image type and the capillary vessel image of the retina image to be recognized;
the identification model is obtained by updating parameters of a preset model according to the labeled image type, the capillary vessel labeling diagram, the predicted image type and the capillary vessel extraction diagram of the retina image to be trained; the capillary vessel extraction diagram is an extraction diagram of the capillary vessels in the retina image to be trained; the predicted image type and the capillary vessel extraction diagram of the retina image to be trained are obtained by processing the retina image to be trained with the preset model; the retina image to be trained is provided with a labeled image type and a capillary vessel labeling diagram, and the capillary vessel labeling diagram is a labeling diagram of the capillary vessels in the retina image to be trained.
According to a third aspect of the present disclosure, there is provided a model training apparatus applied to retinal lesion image type recognition, the apparatus including:
an acquisition unit for acquiring a training data set; the training data set comprises a plurality of to-be-trained retina images, the to-be-trained retina images are provided with labeled image types and capillary vessel labeling diagrams, and the capillary vessel labeling diagrams are labeling diagrams of capillary vessels in the to-be-trained retina images;
the training unit is used for inputting the retinal image to be trained into a preset model to obtain a predicted image type of the retinal image to be trained and a capillary vessel extraction diagram, wherein the capillary vessel extraction diagram is an extraction diagram of capillary vessels in the retinal image to be trained;
the training unit is further used for updating the parameters of the preset model according to the labeled image type, the capillary vessel labeling diagram, the predicted image type and the capillary vessel extraction diagram of the retina image to be trained to obtain an identification model; the identification model is used for determining the image category of the retina image to be identified and the capillary vessel image of the retina image to be identified.
According to a fourth aspect of the present disclosure, there is provided a retinal lesion image type recognition apparatus, the apparatus including:
an acquisition unit configured to acquire a retina image to be recognized;
the identification unit is used for inputting the retina image to be identified into an identification model for processing to obtain the image type and the capillary vessel image of the retina image to be identified;
the identification model is obtained by updating parameters of a preset model according to the labeled image type, the capillary vessel labeling diagram, the predicted image type and the capillary vessel extraction diagram of the retina image to be trained; the capillary vessel extraction diagram is an extraction diagram of the capillary vessels in the retina image to be trained; the predicted image type and the capillary vessel extraction diagram of the retina image to be trained are obtained by processing the retina image to be trained with the preset model; the retina image to be trained is provided with a labeled image type and a capillary vessel labeling diagram, and the capillary vessel labeling diagram is a labeling diagram of the capillary vessels in the retina image to be trained.
According to a fifth aspect of the present disclosure, there is provided an electronic device comprising a memory and a processor, wherein:
the memory for storing a computer program;
the processor is configured to read the computer program stored in the memory, and execute the method according to the first aspect or the second aspect according to the computer program in the memory.
According to a sixth aspect of the present disclosure, there is provided a computer-readable storage medium having stored therein computer-executable instructions that, when executed by a processor, implement the method according to the first or second aspect.
According to a seventh aspect of the present disclosure, a computer program product is provided, comprising a computer program which, when executed by a processor, performs the method according to the first or second aspect.
The model training method and device applied to retinal lesion image type recognition provided by the present disclosure include: acquiring a training data set; the training data set comprises a plurality of retina images to be trained, the retina images to be trained are provided with labeled image types and capillary vessel labeling diagrams, and the capillary vessel labeling diagrams are labeling diagrams of the capillary vessels in the retina images to be trained; inputting a retina image to be trained into a preset model to obtain a predicted image type and a capillary vessel extraction diagram of the retina image to be trained, wherein the capillary vessel extraction diagram is an extraction diagram of the capillary vessels in the retina image to be trained; updating parameters of the preset model according to the labeled image type, the capillary vessel labeling diagram, the predicted image type and the capillary vessel extraction diagram of the retina image to be trained to obtain an identification model; the identification model is used for determining the image category of the retina image to be identified and the capillary vessel image of the retina image to be identified. By using a multitask model that combines image category recognition of the retina image with extraction of the capillary vessel image in the retina image, the method and device improve, to a certain extent, the recognition accuracy of the obtained image type recognition result; the capillary vessel image obtained at the same time can also be used to assist in determining chronic medical diseases related to retinopathy.
Drawings
In order to more clearly illustrate the embodiments of the present disclosure or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present disclosure, and other drawings can be obtained by those skilled in the art without inventive effort.
Fig. 1 is a schematic flowchart illustrating a model training method applied to retinal lesion image type recognition according to an exemplary embodiment of the present disclosure;
fig. 2 is a schematic flowchart illustrating a model training method applied to retinal lesion image type recognition according to another exemplary embodiment of the present disclosure;
fig. 3 is a flowchart illustrating a retinal lesion image type recognition method according to an exemplary embodiment of the present disclosure;
fig. 4 is a flowchart illustrating a retinal lesion image type recognition method according to another exemplary embodiment of the present disclosure;
fig. 5 is a block diagram illustrating a model training apparatus applied to retinal lesion image type recognition according to an exemplary embodiment of the present disclosure;
fig. 6 is a block diagram illustrating a retinal lesion image type recognition apparatus according to an exemplary embodiment of the present disclosure;
fig. 7 is a block diagram of an electronic device shown in an exemplary embodiment of the present disclosure.
Detailed Description
Currently, with the development of technology, the type of a lesion image of a retinal image can be identified. The lesion image type may be used to assist in determining a chronic medical condition associated with retinopathy. Common chronic medical diseases that can cause retinopathy include hypertension, nephritis, diabetes, subacute endocarditis, anemia, leukemia, polycythemia, septicemia, and the like. The retina is full of abundant capillaries, and many eye diseases need to be distinguished by the capillary condition of the eyeball. In the prior art, retinal image data acquired with radiation, such as X-ray imaging, is mostly used: the image data is input into a single-task deep learning model, and the model identifies the lesion image type of the retinal image.
However, the recognition accuracy of the recognition results obtained in this manner still needs to be improved.
In order to solve the technical problem, the present disclosure provides a solution in which a multitask model based on image class recognition of a retinal image and capillary vessel image extraction in the retinal image is provided. Since the retina image is full of abundant capillaries, many eye diseases can be expressed by the capillary condition of the eyeball. Therefore, the multi-task model combining image type recognition and capillary vessel image extraction can improve the recognition accuracy of the obtained image type recognition result to a certain extent; the simultaneously obtained capillary vessel images can also be used for assisting in determining chronic medical diseases related to retinopathy.
The following describes the technical solutions of the present disclosure and how to solve the above technical problems in specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present disclosure will be described below with reference to the accompanying drawings.
Fig. 1 is a flowchart illustrating a model training method applied to retinal lesion image type recognition according to an exemplary embodiment of the present disclosure.
As shown in fig. 1, the model training method applied to retinal lesion image type recognition provided in this embodiment includes:
step 101, acquiring a training data set; the training data set comprises a plurality of to-be-trained retina images, the to-be-trained retina images are provided with labeled image types and capillary vessel labeling diagrams, and the capillary vessel labeling diagrams are labeling diagrams of capillary vessels in the to-be-trained retina images.
The method provided by the present disclosure may be executed by an electronic device with computing capability, such as a computer. The electronic device is capable of acquiring a training data set.
Wherein, the retina image refers to the retina image of human eyes.
Wherein capillaries refer to capillaries in the retina in the retinal image.
The capillary vessel labeling diagram refers to a capillary vessel image in the retina extracted from the retina image.
In particular, a plurality of retinal images to be trained included in the training dataset may have a plurality of annotated image types. Wherein each retinal image to be trained corresponds to a unique labeled image type.
The retina image to be trained can acquire data from a hospital, and can download public pictures from the internet.
Specifically, the retinal image to be trained may be a retinal image of a person having retinopathy.
Specifically, the retinal image to be trained has an annotated image type and an annotated map of capillary vessels in the retinal image to be trained. The annotated image type is the true image type. The labeled graph of the capillary vessels is an image of capillary vessels in the retina extracted from the retina image to be trained.
Step 102, inputting the retina image to be trained into a preset model to obtain a predicted image type of the retina image to be trained and a capillary vessel extraction diagram, wherein the capillary vessel extraction diagram is an extraction diagram of capillary vessels in the retina image to be trained.
Specifically, the preset model may be a preset deep neural network model.
Specifically, the retinal image to be trained is input into a preset model, and the preset model can process the retinal image to be trained and output a predicted image type and a capillary extraction map of the retinal image to be trained.
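As an illustration of this forward pass, the sketch below builds a toy two-headed model in Python: a shared backbone feature feeds a classification head (predicted image type) and a segmentation head (capillary vessel extraction diagram). The layer structure, the class count of 5, and the random weights are all stand-ins, since the patent does not specify the preset model's architecture.

```python
import numpy as np

class TwoHeadModel:
    """Toy stand-in for the preset model: one shared backbone feature feeds
    a classification head and a segmentation head. The layers and weights
    here are illustrative only; the patent does not specify the network."""

    def __init__(self, num_classes=5, seed=0):
        rng = np.random.default_rng(seed)
        self.w_cls = rng.standard_normal(num_classes)  # classification head weights
        self.w_seg = float(rng.standard_normal())      # segmentation head gain

    def forward(self, image):
        feature = image.mean()                                   # shared "backbone" feature
        predicted_type = int(np.argmax(feature * self.w_cls))    # head 1: predicted image type
        seg_map = 1.0 / (1.0 + np.exp(-self.w_seg * image))      # head 2: per-pixel map in (0, 1)
        return predicted_type, seg_map

model = TwoHeadModel()
retina = np.random.default_rng(1).random((8, 8))  # dummy retinal image in [0, 1)
pred_type, vessel_map = model.forward(retina)
```

The key point the sketch mirrors is that a single input produces both outputs, so the two tasks share representations during training.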
Step 103, updating parameters of the preset model according to the labeled image type, the capillary vessel labeling diagram, the predicted image type and the capillary vessel extraction diagram of the retina image to be trained to obtain an identification model; the identification model is used for determining the image category of the retina image to be identified and the capillary vessel image of the retina image to be identified.
Specifically, the loss function of the preset model can be determined according to the labeled image type, the capillary vessel labeling diagram, the predicted image type and the capillary vessel extraction diagram of the retina image to be trained, and the parameters of the preset model are updated by using the loss function to obtain the identification model.
Specifically, the labeled image type of the retina image to be trained can be used as the label for the predicted image type of the retina image to be trained, the preset model is trained, and the parameters in the preset model are optimized so that the predicted image type is brought closer and closer to the labeled image type.
Similarly, the capillary vessel labeling diagram of the retina image to be trained can be used as the label for the capillary vessel extraction diagram of the retina image to be trained, the preset model is trained, and the parameters in the preset model are optimized so that the capillary vessel extraction diagram is brought closer and closer to the capillary vessel labeling diagram.
When preset conditions are met, for example, when the predicted image type is at least ninety percent identical to the labeled image type and the capillary vessel extraction diagram is at least ninety percent identical to the capillary vessel labeling diagram, the training is stopped, and the identification model is obtained.
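The example stopping condition above can be checked with a small helper. The ninety-percent threshold comes from the example in the text; the 0.5 cut-off used to binarize the extraction diagrams before the pixel-wise comparison is an assumption.

```python
import numpy as np

def should_stop(pred_types, true_types, pred_maps, true_maps, threshold=0.9):
    """Check the example stopping condition: at least `threshold` of predicted
    image types match the labels, and at least `threshold` of the pixels of
    the (binarized) extraction diagrams match the labeling diagrams."""
    type_acc = np.mean(np.asarray(pred_types) == np.asarray(true_types))
    pixel_acc = np.mean((np.asarray(pred_maps) > 0.5) == (np.asarray(true_maps) > 0.5))
    return bool(type_acc >= threshold and pixel_acc >= threshold)
```

In practice such a check would run once per training epoch over a held-out set of retina images.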
Specifically, the retina image to be recognized is input to the recognition model, and the recognition model can process the retina image to be recognized and output the image category of the retina image to be recognized and the capillary vessel image of the retina image to be recognized.
Wherein the image categories may be used to assist in determining a chronic medical condition associated with retinopathy of the retinal image to be identified.
The model training method applied to retinal lesion image type recognition provided by the present disclosure comprises: acquiring a training data set; the training data set comprises a plurality of retina images to be trained, the retina images to be trained are provided with labeled image types and capillary vessel labeling diagrams, and the capillary vessel labeling diagrams are labeling diagrams of the capillary vessels in the retina images to be trained; inputting a retina image to be trained into a preset model to obtain a predicted image type and a capillary vessel extraction diagram of the retina image to be trained, wherein the capillary vessel extraction diagram is an extraction diagram of the capillary vessels in the retina image to be trained; updating parameters of the preset model according to the labeled image type, the capillary vessel labeling diagram, the predicted image type and the capillary vessel extraction diagram of the retina image to be trained to obtain an identification model; the identification model is used for determining the image category of the retina image to be identified and the capillary vessel image of the retina image to be identified. In this method, a multitask model based on image category recognition of the retina image and extraction of the capillary vessel image in the retina image can be utilized, so that the recognition accuracy of the obtained image type recognition result is improved to a certain extent; the capillary vessel image obtained at the same time can also be used to assist in determining chronic medical diseases related to retinopathy.
Fig. 2 is a flowchart illustrating a model training method applied to retinal lesion image type recognition according to another exemplary embodiment of the present disclosure.
As shown in fig. 2, the model training method applied to retinal lesion image type recognition provided in this embodiment includes:
step 201, acquiring an original retina image; the original retina image has an original capillary vessel labeling diagram; the original capillary vessel labeling diagram is a labeling diagram of the capillary vessels in the original retina image.
The original retinal image may be a retinal image obtained by a radiation-free method. For example, the retinal image to be trained may be acquired by a vision sensor. Wherein the vision sensor may be a camera. In particular, the electronic device may acquire a raw retinal image from a vision sensor.
The original retina image has an original capillary vessel labeling diagram, which is an image of the capillary vessels extracted from the original retina image. Specifically, the original retina image may be masked by using an image processing tool such as LabelMe or Photoshop: the capillary vessels in the retina are extracted from the original retina image so that the image values in the capillary vessel region are kept unchanged and the image values outside the capillary vessel region are all set to 0, thereby obtaining the original capillary vessel labeling diagram.
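The masking step just described can be sketched in a few lines. The boolean `vessel_mask` input is an assumption standing in for whatever region annotation a tool such as LabelMe exports.

```python
import numpy as np

def apply_capillary_mask(retina_image, vessel_mask):
    """Build the capillary vessel labeling diagram described above: pixel
    values inside the annotated vessel region are kept unchanged, and all
    values outside the region are set to 0. `vessel_mask` is assumed to be
    a boolean array of the same shape as the retina image."""
    return np.where(vessel_mask, retina_image, 0)
```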
Step 202, converting the original retina image and the original capillary vessel labeling diagram into a retina image with a preset size and a capillary vessel labeling diagram with a preset size respectively.
The preset size is a size preset according to actual requirements.
Specifically, the original retina image and the original capillary vessel labeling diagram can be converted into a retina image with a preset size and a capillary vessel labeling diagram with a preset size.
And 203, respectively carrying out pixel normalization processing on the converted retina image and the converted capillary vessel labeling diagram to obtain a processed retina image and a processed capillary vessel labeling diagram.
Pixel normalization means converting pixel values in the range 0-255 in the original image into values in the range 0-1. Specifically, the pixel values in the converted retina image and the converted capillary vessel labeling diagram may be divided by 255 to obtain the processed retina image and the processed capillary vessel labeling diagram.
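Steps 202 and 203 together can be sketched as follows. The 256x256 preset size and the nearest-neighbor resampling are assumptions; the patent only says the images are converted to a preset size, while the divide-by-255 normalization is taken directly from the text.

```python
import numpy as np

def to_preset_size(image, preset=(256, 256)):
    """Resize a 2-D image to the preset size with nearest-neighbor sampling
    (an assumed resampling method; the patent does not specify one)."""
    h, w = image.shape[:2]
    rows = np.arange(preset[0]) * h // preset[0]   # source row for each target row
    cols = np.arange(preset[1]) * w // preset[1]   # source column for each target column
    return image[rows][:, cols]

def normalize_pixels(image):
    """Pixel normalization as in step 203: map 0-255 values into 0-1."""
    return image.astype(np.float32) / 255.0
```

The same two functions would be applied identically to the retina image and its capillary vessel labeling diagram so the pair stays aligned.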
Step 204, respectively carrying out image turning and gray value conversion processing on the processed retina image and the processed capillary vessel annotation drawing to obtain a retina image to be trained and a capillary vessel annotation drawing corresponding to the retina image to be trained; the training data set comprises a plurality of to-be-trained retina images, the to-be-trained retina images are provided with labeled image types and capillary vessel labeling diagrams, and the capillary vessel labeling diagrams are labeling diagrams of capillary vessels in the to-be-trained retina images.
Specifically, the processed retina image and the processed capillary vessel labeling diagram can each be flipped at a certain flipping angle. The flipping angle can be a preset angle or a random angle.
Specifically, gray value conversion means changing the pixel values of the image. For example, the flipped retina image and the flipped capillary vessel labeling diagram are each subjected to an operation of adding 5 to the pixel values in the image, so as to change the pixel values.
Specifically, by flipping and applying gray value conversion to the processed retina image and the processed capillary vessel labeling diagram, the retina image to be trained and the capillary vessel labeling diagram corresponding to it can be obtained.
Specifically, by using the image flipping and gray value conversion methods, one group consisting of a retina image and its capillary vessel labeling diagram can be converted into a plurality of different groups, expanding the original data to enrich the amount of data to be trained.
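A minimal sketch of this augmentation, applied to the image and its labeling diagram as a pair: the +5 gray value shift mirrors the example in the text, and the clipping to the 0-255 range is an assumption added to keep pixel values valid.

```python
import numpy as np

def augment(image, label_map, flip="horizontal", gray_shift=5):
    """Image flipping plus gray value conversion, applied identically to the
    retina image and its capillary vessel labeling diagram so the pair stays
    aligned. Values are clipped to 0-255 (an assumption)."""
    if flip == "horizontal":
        image, label_map = image[:, ::-1], label_map[:, ::-1]
    elif flip == "vertical":
        image, label_map = image[::-1], label_map[::-1]
    shift = lambda a: np.clip(a.astype(np.int32) + gray_shift, 0, 255).astype(np.uint8)
    return shift(image), shift(label_map)
```

Calling this with different flip directions and shifts turns one image/label pair into several distinct training pairs, which is exactly the data-expansion effect described above.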
Further, the original retina image may also have a labeled image category. Specifically, the original retina image may be labeled by using an image processing tool such as LabelMe or Photoshop, so as to obtain the original retina image with the labeled image type. The labeled image category is also the labeled image category of the resulting retina image to be trained.
Further, a plurality of retina images to be trained constitute the training data set. Each retina image to be trained is provided with a labeled image type and a capillary vessel labeling diagram.
Step 205, processing the retina image to be trained according to a first model in the preset models to obtain a predicted image type of the retina image to be trained; processing the retina image to be trained according to a second model in the preset models to obtain a capillary vessel extraction image of the retina image to be trained; the capillary vessel extraction diagram is an extraction diagram of capillary vessels in a retina image to be trained.
Specifically, the preset model may include a first model and a second model. The first model can process the retina image to be trained to obtain the predicted image type of the retina image to be trained; the second model can process the retina image to be trained to obtain a capillary vessel extraction image of the retina image to be trained. Specifically, the capillary vessel extraction map is an extraction map of capillary vessels in the retina in the retinal image to be trained.
Step 206, determining difference information between the annotated image type and the predicted image type, and determining a first loss function according to the difference information; wherein the first loss function is indicative of a difference between the annotated image type and the predicted image type.
Specifically, the first loss function may be determined based on difference information between the annotated image type and the predicted image type.
Optionally, the first loss function is a function L1(x) of a variable x, where x represents the difference between the labeled image type and the predicted image type of the retina image to be trained. (In the original publication the formula itself appears only as an embedded image.)
Step 207, determining union information and intersection information between the capillary vessel labeling graph and the capillary vessel extraction graph, and determining a second loss function according to the union information and the intersection information; the union information represents all characteristic information between the capillary vessel labeling graph and the capillary vessel extraction graph; the intersection information represents the same characteristic information between the capillary vessel labeling graph and the capillary vessel extracting graph; the second loss function is used to indicate characteristics of the capillary vessel labeling map and the capillary vessel extraction map.
Specifically, the second loss function may be determined based on the same characteristic information between the capillary vessel labeling map and the capillary vessel extraction map, and the total characteristic information.
Optionally, the formula of the second loss function is as follows:

$L_2 = 1 - \dfrac{|A \cap B|}{|A \cup B|}$

wherein $A$ denotes the capillary vessel annotation map of the retinal image to be trained; $B$ denotes the capillary vessel extraction map of the retinal image to be trained; $A \cap B$ denotes the intersection of the capillary vessel annotation map and the capillary vessel extraction map; and $A \cup B$ denotes the union of the true capillary annotation map and the capillary extraction map.
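Consistent with the union/intersection description above, the second loss behaves like an IoU (Jaccard) loss. A minimal sketch, assuming the soft-IoU form on per-pixel probability maps (the epsilon guard is an implementation detail, not from the patent):

```python
import numpy as np

def second_loss(annotation_map, extraction_map):
    """Soft IoU (Jaccard) loss between the capillary annotation map A and
    the capillary extraction map B: L2 = 1 - |A ∩ B| / |A ∪ B|.
    Both inputs are arrays of per-pixel probabilities in [0, 1]."""
    intersection = np.sum(annotation_map * extraction_map)  # same features
    union = np.sum(annotation_map) + np.sum(extraction_map) - intersection  # all features
    return float(1.0 - intersection / (union + 1e-12))
```

Identical maps give a loss near 0; disjoint maps give a loss near 1, so minimizing it drives the extraction map toward the annotation.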
Step 208, updating parameters of a preset model according to the first loss function and the second loss function to obtain an identification model; the identification model is used for determining the image category of the retina image to be identified and the capillary vessel image of the retina image to be identified.
Specifically, the preset model may be optimized according to the first loss function and the second loss function, and parameters in the preset model are updated to obtain the recognition model. Specifically, the parameters of the first model included in the preset model may be updated by using the first loss function, so as to obtain the updated first model. The parameters of the second model included in the preset model may be updated by using the second loss function, so as to obtain an updated second model. The recognition model may be composed of an updated first model and an updated second model.
Specifically, the retina image to be recognized is input to the recognition model, and the recognition model can process the retina image to be recognized and output the image category of the retina image to be recognized and the capillary vessel image of the retina image to be recognized.
The image category may be used to assist in determining chronic medical conditions associated with retinopathy for the retinal image to be identified.
In one implementation, a third loss function is determined according to the first loss function, the second loss function, the first parameter ratio and the second parameter ratio; the first parameter ratio represents the hyperparametric proportion in the process of obtaining the predicted image type based on the preset model, and the second parameter ratio represents the hyperparametric proportion in the process of obtaining the capillary vessel extraction diagram based on the preset model.
Specifically, the first model may further include a first Gated Linear Unit (GLU), where the first Gated Linear Unit is configured to determine a hyper-parameter proportion, that is, a first parameter proportion, in a process of obtaining the predicted image type based on the preset model.
Specifically, the second model may further include a second gated linear unit, and the second gated linear unit is configured to determine the hyperparametric proportion, that is, the second parameter proportion, in the process of obtaining the capillary extraction map based on the preset model.
Specifically, the first gated linear unit and the second gated linear unit can adaptively adjust the hyper-parameter proportions of the first model and the second model according to the training effect, achieving a dynamic balance between the two tasks. Therefore, when the model is optimized according to the loss function, both the first model and the second model can reach good training accuracy.
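A gated linear unit of the usual form, value ⊗ sigmoid(gate), keeps the adaptive weight inside (0, 1), which is what allows the two task proportions to be rebalanced during training. A minimal sketch (the weight shapes and names are illustrative assumptions, not from the patent):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_linear_unit(features, w_value, w_gate):
    """Gated Linear Unit: GLU(x) = (x @ W_v) * sigmoid(x @ W_g).
    The sigmoid gate is bounded in (0, 1), so the produced proportion
    can shrink or grow smoothly as training progresses."""
    value = features @ w_value
    gate = sigmoid(features @ w_gate)
    return value * gate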
Specifically, the third loss function may be determined according to the first loss function, the second loss function, the first parameter ratio, and the second parameter ratio. The parameters in the preset model may be updated using a third loss function.
Optionally, first product information of the first loss function and the first parameter proportion is determined, and second product information of the second loss function and the second parameter proportion is determined; the third loss function is then determined according to the first product information, the second product information, and the training parameters of the preset model.
Specifically, the formula of the third loss function is as follows:

$L_3 = \dfrac{\partial(\alpha L_1)}{\partial \theta_1} + \dfrac{\partial(\beta L_2)}{\partial \theta_2} + b$

wherein $L_3$ represents the third loss function; $\alpha$ represents the first parameter proportion; $\beta$ represents the second parameter proportion; $L_1$ represents the first loss function; $L_2$ represents the second loss function; $\partial$ is the partial derivative symbol; $\theta_1$ represents the training parameters of the first model; and $\theta_2$ represents the training parameters of the second model.

Here $b$ is the generalized bias term in the preset model, which plays a regularizing role. The generalized bias term normalizes features in the models, thereby reducing the risk of over-fitting and lowering the Rademacher complexity (i.e., the model's capacity to fit random noise).
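The combination described in this implementation — each task loss weighted by its gate-produced proportion, plus the regularizing bias term — can be sketched as follows. The exact arithmetic in the patent is published only as an image, so this simple weighted sum is an assumption:

```python
def third_loss(loss1, loss2, alpha, beta, bias_term=0.0):
    """Combined multi-task objective: weight each task loss by its
    gate-produced hyper-parameter proportion and add the generalized
    bias term that acts as a regularizer. alpha/beta come from the
    first/second gated linear units; the linear combination itself is
    an assumed concrete form, not taken verbatim from the patent."""
    first_product = alpha * loss1    # first product information
    second_product = beta * loss2    # second product information
    return first_product + second_product + bias_term
```

In training, `alpha` and `beta` change as the gates adapt, so neither task's gradient dominates the shared update.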
And updating the parameters of the preset model according to the first loss function, the second loss function and the third loss function to obtain the identification model.
Specifically, a first loss function may be used to update parameters of a first model in a preset model; updating parameters of a second model in the preset model by using a second loss function; updating parameters of the first model and the second model in the preset model by using a third loss function; thereby obtaining a recognition model.
Specifically, since the retina is rich in capillaries, many eye diseases manifest themselves in the condition of the eyeball's capillaries. The present scheme therefore combines image type recognition and capillary image extraction in a multi-task model, generalizing the capillary image extraction task model (i.e., the second model) into the image type recognition task model (i.e., the first model), so that the two tasks obtain a more generalized shared representation. This improves, to a certain extent, the recognition accuracy and generalization capability of the image type recognition results produced by the obtained recognition model. The capillary images obtained at the same time can also be used to assist in determining chronic medical conditions associated with retinopathy.
Fig. 3 is a flowchart illustrating a retinal lesion image type identification method according to an exemplary embodiment of the present disclosure.
As shown in fig. 3, the retinal lesion image type identification method provided in this embodiment includes:
step 301, obtaining a retina image to be identified.
Specifically, the retinal image to be recognized may be a retinal image obtained by a non-radiation method. For example, the retinal image to be recognized may be acquired by a vision sensor, such as a camera. Specifically, the electronic device may acquire the retinal image to be recognized from the vision sensor.
Step 302, inputting the retina image to be identified into an identification model for processing to obtain the image type and the capillary vessel image of the retina image to be identified; the identification model is obtained by updating parameters of a preset model according to the marked image type, the capillary vessel marking graph, the predicted image type and the capillary vessel extraction graph of the retina image to be trained; the capillary vessel extraction diagram is an extraction diagram of capillary vessels in a retina image to be trained; the predicted image type and capillary vessel extraction image of the retina image to be trained are obtained by processing the retina image to be trained based on a preset model; the to-be-trained retina image has an annotated image type and a capillary vessel annotation graph, and the capillary vessel annotation graph is an annotation graph of a capillary vessel in the to-be-trained retina image.
Specifically, the retina image to be recognized is input to the recognition model, and the recognition model can process the retina image to be recognized and output the image category of the retina image to be recognized and the capillary vessel image of the retina image to be recognized.
Specifically, the recognition model is obtained by training according to the method of the above embodiment, and is not described again.
Fig. 4 is a flowchart illustrating a retinal lesion image type recognition method according to another exemplary embodiment of the present disclosure.
As shown in fig. 4, the retinal lesion image type identification method provided in this embodiment includes:
step 401, obtaining a retina image to be identified.
Specifically, the principle and implementation of step 401 are similar to those of step 301, and are not described again.
Step 402, converting the retina image to be recognized into an image with a preset size.
The preset size is a size preset according to actual conditions.
Specifically, the retinal image to be recognized may be converted into an image of a preset size.
And 403, performing pixel normalization processing on the converted image to obtain a processed retina image to be identified.
The pixel normalization processing maps pixel values in the range 0-255 in the original image to values in the range 0-1. Specifically, each pixel value in the converted image may be divided by 255 to obtain the processed retinal image to be recognized.
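Steps 402-403 can be sketched as follows. The 224x224 default preset size and the nearest-neighbor resize are assumptions; the patent only fixes the divide-by-255 normalization:

```python
import numpy as np

def preprocess(image, size=(224, 224)):
    """Resize to a preset size (nearest-neighbor, to stay dependency-free)
    and map 0-255 pixel values into [0, 1]. The 224x224 preset size is an
    assumption; the patent only says 'preset size'."""
    h, w = image.shape[:2]
    rows = np.arange(size[0]) * h // size[0]   # source row for each output row
    cols = np.arange(size[1]) * w // size[1]   # source column for each output column
    resized = image[rows][:, cols]
    return resized.astype(np.float32) / 255.0
```

In practice a library resampler (e.g., bilinear interpolation) would replace the nearest-neighbor indexing; the normalization step is the same either way.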
Step 404, inputting the processed retina image to be recognized into a recognition model for processing to obtain the image type and the capillary vessel image of the retina image to be recognized; the identification model is obtained by updating parameters of a preset model according to the marked image type, the capillary vessel marking graph, the predicted image type and the capillary vessel extraction graph of the retina image to be trained; the capillary vessel extraction diagram is an extraction diagram of capillary vessels in a retina image to be trained; the predicted image type and capillary vessel extraction image of the retina image to be trained are obtained by processing the retina image to be trained based on a preset model; the to-be-trained retina image has an annotated image type and a capillary vessel annotation graph, and the capillary vessel annotation graph is an annotation graph of a capillary vessel in the to-be-trained retina image.
Specifically, the processed retina image to be recognized is input to the recognition model, and the recognition model may process the processed retina image to be recognized and output an image category of the retina image to be recognized and a capillary vessel image of the retina image to be recognized.
In one implementation, the identification model is obtained by updating parameters of a preset model according to a first loss function and a second loss function;
a first loss function for indicating a difference between the annotated image type and the predicted image type; the first loss function is determined based on difference information between the annotated image type and the predicted image type;
the second loss function is used for indicating the characteristics of the capillary vessel labeling graph and the capillary vessel extraction graph; the second loss function is determined based on union information and intersection information between the capillary vessel labeling graph and the capillary vessel extraction graph; the union information represents all characteristic information between the capillary vessel labeling graph and the capillary vessel extraction graph; the intersection information represents the same characteristic information between the capillary vessel labeling graph and the capillary vessel extraction graph.
In one implementation, the identification model is obtained by updating parameters of a preset model according to a first loss function, a second loss function and a third loss function;
the third loss function is determined based on the first loss function, the second loss function, the first parameter ratio and the second parameter ratio; the first parameter ratio represents the hyperparametric proportion in the process of obtaining the predicted image type based on the preset model, and the second parameter ratio represents the hyperparametric proportion in the process of obtaining the capillary vessel extraction diagram based on the preset model.
In one implementation, the third loss function is determined according to the first product information, the second product information, and the training parameters of the preset model;
the first product information is determined based on the product of the first loss function and the first parameter proportion; the second product information is determined based on the product of the second loss function and the second parameter proportion.
Specifically, step 404 may refer to the foregoing embodiment, and is not described again.
Fig. 5 is a block diagram illustrating a model training apparatus applied to retinal lesion image type recognition according to an exemplary embodiment of the present disclosure.
As shown in fig. 5, the present disclosure provides a model training apparatus 500 applied to retinal lesion image type recognition, including:
an obtaining unit 510, configured to obtain a training data set; the training data set comprises a plurality of to-be-trained retina images, the to-be-trained retina images are provided with labeled image types and capillary vessel labeling diagrams, and the capillary vessel labeling diagrams are labeling diagrams of capillary vessels in the to-be-trained retina images;
the training unit 520 is configured to input the retinal image to be trained into a preset model, to obtain a predicted image type of the retinal image to be trained and a capillary vessel extraction diagram, where the capillary vessel extraction diagram is an extraction diagram of capillary vessels in the retinal image to be trained;
the training unit 520 is further configured to update parameters of a preset model according to the labeled image type, the capillary vessel labeling diagram, the predicted image type and the capillary vessel extraction diagram of the retina image to be trained, so as to obtain an identification model; the identification model is used for determining the image category of the retina image to be identified and the capillary vessel image of the retina image to be identified.
A training unit 520, specifically configured to determine difference information between the annotated image type and the predicted image type, and determine a first loss function according to the difference information; wherein the first loss function is indicative of a difference between the annotated image type and the predicted image type;
determining union information and intersection information between the capillary vessel labeling graph and the capillary vessel extraction graph, and determining a second loss function according to the union information and the intersection information; the union information represents all characteristic information between the capillary vessel labeling graph and the capillary vessel extraction graph; the intersection information represents the same characteristic information between the capillary vessel labeling graph and the capillary vessel extracting graph; the second loss function is used for indicating the characteristics of the capillary vessel labeling graph and the capillary vessel extraction graph;
and updating the parameters of the preset model according to the first loss function and the second loss function to obtain the identification model.
A training unit 520, specifically configured to determine a third loss function according to the first loss function, the second loss function, the first parameter ratio, and the second parameter ratio; the first parameter ratio represents the hyperparametric proportion in the process of obtaining the predicted image type based on the preset model, and the second parameter ratio represents the hyperparametric proportion in the process of obtaining the capillary vessel extraction map based on the preset model;
and updating the parameters of the preset model according to the first loss function, the second loss function and the third loss function to obtain the identification model.
A training unit 520, specifically configured to determine first product information as the product of the first loss function and the first parameter proportion, and determine second product information as the product of the second loss function and the second parameter proportion;
and determining a third loss function according to the first product information, the second product information and the training parameters of the preset model.
An acquisition unit 510, specifically configured to acquire an original retinal image; the original retina image has an original capillary vessel labeling diagram; the original capillary vessel labeling diagram is a labeling diagram of the capillary vessels in the original retina image;
respectively converting the original retina image and the original capillary vessel labeling image into a retina image with a preset size and a capillary vessel labeling image with a preset size;
respectively carrying out pixel normalization processing on the converted retina image and the converted capillary vessel labeling image to obtain a processed retina image and a processed capillary vessel labeling image;
and respectively carrying out image turning and gray value conversion on the processed retina image and the processed capillary vessel labeling image to obtain a retina image to be trained and a capillary vessel labeling image corresponding to the retina image to be trained.
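The flip and gray-value conversion applied by the acquisition unit can be sketched as follows. The concrete gray-value transform (255 − v) is an assumption; the key point is that the geometric flip must be applied identically to the image and its annotation map so the capillary labels stay aligned, while the intensity transform only touches the image:

```python
import numpy as np

def augment_pair(image, annotation_map):
    """Image flipping and gray-value conversion for a retina image and
    its capillary annotation map (0-255 integer images assumed). The flip
    is applied to both arrays so the annotation stays aligned; the
    gray-value inversion (255 - v, an assumed concrete transform) is
    applied only to the image, since inverting the label map would
    corrupt the capillary annotations."""
    flipped_image = np.fliplr(image)
    flipped_annotation = np.fliplr(annotation_map)
    gray_converted = 255 - flipped_image  # one simple gray-value transform
    return gray_converted, flipped_annotation
```

Each augmented pair is then a valid extra training sample, enlarging the training data set without new annotation work.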
The training unit 520 is specifically configured to process the retinal image to be trained according to a first model in the preset models to obtain a predicted image type of the retinal image to be trained; and processing the retina image to be trained according to a second model in the preset models to obtain a capillary vessel extraction image of the retina image to be trained.
Fig. 6 is a block diagram illustrating a retinal lesion image type recognition apparatus according to an exemplary embodiment of the present disclosure.
As shown in fig. 6, the present disclosure provides a retinal lesion image type recognition apparatus 600 including:
an acquisition unit 610 for acquiring a retina image to be recognized;
the identification unit 620 is used for inputting the retina image to be identified into the identification model for processing to obtain the image type and the capillary vessel image of the retina image to be identified;
the identification model is obtained by updating parameters of a preset model according to the marked image type, the capillary vessel marking graph, the predicted image type and the capillary vessel extraction graph of the retina image to be trained; the capillary vessel extraction diagram is an extraction diagram of capillary vessels in a retina image to be trained; the predicted image type and capillary vessel extraction image of the retina image to be trained are obtained by processing the retina image to be trained based on a preset model; the to-be-trained retina image has an annotated image type and a capillary vessel annotation drawing, and the capillary vessel annotation drawing is an annotation drawing of a capillary vessel in the to-be-trained retina image.
In one implementation, the identification model is obtained by updating parameters of a preset model according to a first loss function and a second loss function;
a first loss function for indicating a difference between the annotated image type and the predicted image type; the first loss function is determined based on difference information between the annotated image type and the predicted image type;
the second loss function is used for indicating the characteristics of the capillary vessel labeling graph and the capillary vessel extraction graph; the second loss function is determined based on union information and intersection information between the capillary vessel labeling graph and the capillary vessel extraction graph; the union information represents all characteristic information between the capillary vessel labeling graph and the capillary vessel extraction graph; the intersection information represents the same characteristic information between the capillary vessel labeling graph and the capillary vessel extraction graph.
In one implementation, the identification model is obtained by updating parameters of a preset model according to a first loss function, a second loss function and a third loss function;
the third loss function is determined based on the first loss function, the second loss function, the first parameter ratio and the second parameter ratio; the first parameter ratio represents the hyperparametric proportion in the process of obtaining the predicted image type based on the preset model, and the second parameter ratio represents the hyperparametric proportion in the process of obtaining the capillary vessel extraction diagram based on the preset model.
In one implementation, the third loss function is determined according to the first product information, the second product information, and the training parameters of the preset model;
the first product information is determined based on the product of the first loss function and the first parameter proportion; the second product information is determined based on the product of the second loss function and the second parameter proportion.
The acquiring unit 610 is further configured to convert the retinal image to be recognized into an image of a preset size;
and carrying out pixel normalization processing on the converted image to obtain a processed retina image to be identified.
Fig. 7 is a block diagram of an electronic device shown in an exemplary embodiment of the present disclosure.
As shown in fig. 7, the electronic device provided in this embodiment includes:
a memory 701;
a processor 702; and
a computer program;
wherein the computer program is stored in the memory 701 and configured to be executed by the processor 702 to implement any of the methods as above.
The present embodiments also provide a computer-readable storage medium having a computer program stored thereon, the computer program being executable by a processor to implement any of the methods as above.
The present embodiment also provides a computer program product comprising a computer program which, when executed by a processor, performs any of the methods described above.
Those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments may be implemented by hardware executing program instructions. The program may be stored in a computer-readable storage medium; when executed, it performs the steps of the method embodiments described above. The aforementioned storage medium includes various media that can store program code, such as ROM, RAM, magnetic disks, or optical disks.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and these modifications or substitutions do not depart from the spirit of the corresponding technical solutions of the embodiments of the present invention.

Claims (15)

1. A model training method applied to retinal lesion image type recognition is characterized by comprising the following steps:
acquiring a training data set; the training data set comprises a plurality of to-be-trained retina images, the to-be-trained retina images are provided with labeled image types and capillary vessel labeling diagrams, and the capillary vessel labeling diagrams are labeling diagrams of capillary vessels in the to-be-trained retina images;
inputting the retinal image to be trained into a preset model to obtain a predicted image type and a capillary vessel extraction diagram of the retinal image to be trained, wherein the capillary vessel extraction diagram is an extraction diagram of capillary vessels in the retinal image to be trained;
updating parameters of the preset model according to the marked image type, the capillary vessel marking graph, the predicted image type and the capillary vessel extraction graph of the retina image to be trained to obtain an identification model; the identification model is used for determining the image category of the retina image to be identified and the capillary vessel image of the retina image to be identified.
2. The method according to claim 1, wherein updating parameters of the preset model according to the labeled image type, the capillary vessel labeling diagram, the predicted image type and the capillary vessel extraction diagram of the retinal image to be trained to obtain a recognition model comprises:
determining difference information between the annotated image type and the predicted image type, and determining a first loss function according to the difference information; wherein the first loss function is indicative of a difference between the annotated image type and the predicted image type;
determining union information and intersection information between the capillary vessel labeling graph and the capillary vessel extraction graph, and determining a second loss function according to the union information and the intersection information; wherein the union information characterizes all characteristic information between the capillary vessel labeling graph and the capillary vessel extraction graph; the intersection information represents the same characteristic information between the capillary vessel labeling graph and the capillary vessel extracting graph; the second loss function is used for indicating the characteristics of the capillary vessel labeling graph and the capillary vessel extraction graph;
and updating the parameters of the preset model according to the first loss function and the second loss function to obtain an identification model.
3. The method of claim 2, wherein updating the parameters of the predetermined model according to the first loss function and the second loss function to obtain an identification model comprises:
determining a third loss function according to the first loss function, the second loss function, the first parameter ratio and the second parameter ratio; wherein the first parameter ratio represents the hyper-parameter proportion in the process of obtaining the predicted image type based on the preset model, and the second parameter ratio represents the hyper-parameter proportion in the process of obtaining the capillary extraction map based on the preset model;
and updating the parameters of the preset model according to the first loss function, the second loss function and the third loss function to obtain an identification model.
4. The method of claim 3, wherein determining a third loss function based on the first loss function, the second loss function, a first parameter ratio, and a second parameter ratio comprises:
determining first product information of the first loss function and the first parameter ratio, and determining second product information of the second loss function and the second parameter ratio;
and determining the third loss function according to the first product information, the second product information and the training parameters of the preset model.
5. The method of any one of claims 1-4, wherein the obtaining a training data set comprises:
acquiring an original retina image; the original retina image has an original capillary vessel labeling diagram; the original capillary vessel labeling diagram is a labeling diagram of the capillary vessels in the original retina image;
respectively converting the original retina image and the original capillary vessel labeling image into a retina image with a preset size and a capillary vessel labeling image with a preset size;
respectively carrying out pixel normalization processing on the converted retina image and the converted capillary vessel labeling image to obtain a processed retina image and a processed capillary vessel labeling image;
and respectively carrying out image turning and gray value conversion on the processed retina image and the processed capillary vessel labeling image to obtain a retina image to be trained and a capillary vessel labeling image corresponding to the retina image to be trained.
6. The method according to any one of claims 1-4, wherein inputting the retinal image to be trained into a preset model to obtain a predicted image type and capillary vessel extraction map of the retinal image to be trained comprises:
processing the retinal image to be trained according to a first model in the preset models to obtain a predicted image type of the retinal image to be trained; and processing the retinal image to be trained according to a second model in the preset models to obtain a capillary vessel extraction image of the retinal image to be trained.
7. A retinal lesion image type identification method, the method comprising:
obtaining a retina image to be identified;
inputting the retina image to be recognized into a recognition model for processing to obtain the image type and the capillary vessel image of the retina image to be recognized;
the identification model is obtained by updating parameters of a preset model according to the marked image type, the capillary vessel marking graph, the predicted image type and the capillary vessel extraction graph of the retina image to be trained; the capillary vessel extraction graph is an extraction graph of capillary vessels in a retina image to be trained; the predicted image type and capillary vessel extraction image of the retinal image to be trained are obtained by processing the retinal image to be trained based on the preset model; the to-be-trained retina image is provided with an annotated image type and a capillary vessel annotation graph, and the capillary vessel annotation graph is an annotation graph of a capillary vessel in the to-be-trained retina image.
8. The method of claim 7, wherein the identification model is obtained by updating parameters of the predetermined model according to a first loss function and a second loss function;
the first loss function is to indicate a difference between the annotated image type and the predicted image type; the first loss function is determined based on difference information between the annotated image type and the predicted image type;
the second loss function is used for indicating the characteristics of the capillary vessel labeling graph and the capillary vessel extraction graph; the second loss function is determined based on union information and intersection information between the capillary vessel labeling graph and the capillary vessel extraction graph; wherein the union information characterizes all characteristic information between the capillary vessel labeling graph and the capillary vessel extraction graph; the intersection information represents the same characteristic information between the capillary vessel labeling graph and the capillary vessel extraction graph.
9. The method of claim 8, wherein the recognition model is obtained by updating the parameters of the preset model according to the first loss function, the second loss function and a third loss function;
the third loss function is determined based on the first loss function, the second loss function, a first parameter ratio and a second parameter ratio; the first parameter ratio characterizes the hyperparameter weight used in obtaining the predicted image type based on the preset model, and the second parameter ratio characterizes the hyperparameter weight used in obtaining the capillary vessel extraction map based on the preset model.
10. The method of claim 9, wherein the third loss function is determined according to first product information, second product information, and training parameters of the preset model;
the first product information is determined based on the product of the first loss function and the first parameter ratio; the second product information is determined based on the product of the second loss function and the second parameter ratio.
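Claims 9 and 10 describe the third loss as built from the product of each task loss with its hyperparameter ratio. One plausible reading is a weighted sum of the classification and segmentation losses, sketched below; the weight names `alpha` and `beta` and the example values are illustrative, not taken from the patent:

```python
def combined_loss(first_loss, second_loss, alpha, beta):
    """Third loss function: weighted combination of the classification
    loss (first) and the segmentation loss (second). `alpha` and `beta`
    stand for the first and second parameter ratios, i.e. the
    hyperparameter weights of the two tasks."""
    first_product = alpha * first_loss    # "first product information"
    second_product = beta * second_loss   # "second product information"
    return first_product + second_product

# Equal weighting of a classification loss of 0.7 and a segmentation
# loss of 0.2 yields a combined loss of about 0.45.
total = combined_loss(0.7, 0.2, alpha=0.5, beta=0.5)
```

This is the standard way multi-task training balances two heads sharing one backbone: the ratios let the optimizer trade classification accuracy against segmentation quality.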
11. The method of any one of claims 7-10, further comprising, before inputting the retina image to be recognized into the recognition model for processing:
converting the retina image to be recognized into an image of a preset size; and
performing pixel normalization on the converted image to obtain a processed retina image to be recognized.
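The preprocessing of claim 11 (resize to a preset size, then pixel normalization) can be sketched as follows. Nearest-neighbour resampling and scaling 8-bit values to [0, 1] are assumed details that the claim leaves open:

```python
def preprocess(image, size):
    """Resize a 2-D grayscale image (list of rows of 0-255 pixel values)
    to `size` x `size` by nearest-neighbour sampling, then normalize
    each pixel into [0, 1]."""
    h, w = len(image), len(image[0])
    # Step 1: convert to the preset size.
    resized = [
        [image[r * h // size][c * w // size] for c in range(size)]
        for r in range(size)
    ]
    # Step 2: pixel normalization.
    return [[px / 255.0 for px in row] for row in resized]

img = [[0, 255], [255, 0]]          # tiny 2x2 stand-in for a fundus image
out = preprocess(img, 4)            # 4x4 image with values 0.0 or 1.0
```

In practice a library resize (bilinear or bicubic) and per-channel mean/std normalization would be used; the sketch only shows the two-step order the claim prescribes.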
12. A model training apparatus applied to retina lesion image type recognition, comprising:
an acquisition unit, configured to acquire a training data set; the training data set comprises a plurality of retina images to be trained, each retina image to be trained is provided with an annotated image type and a capillary vessel annotation map, and the capillary vessel annotation map is a map of the capillary vessels annotated in the retina image to be trained;
a training unit, configured to input the retina image to be trained into a preset model to obtain a predicted image type and a capillary vessel extraction map of the retina image to be trained, wherein the capillary vessel extraction map is a map of the capillary vessels extracted from the retina image to be trained;
the training unit is further configured to update the parameters of the preset model according to the annotated image type, the capillary vessel annotation map, the predicted image type and the capillary vessel extraction map of the retina image to be trained, to obtain a recognition model; the recognition model is used to determine the image type of a retina image to be recognized and a capillary vessel image of the retina image to be recognized.
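The division of labour in claim 12 can be sketched as two cooperating components. The class names, the callable stand-in for the preset model, and the toy losses below are illustrative only, since the patent fixes neither a network architecture nor a framework:

```python
class AcquisitionUnit:
    """Acquires the training data set:
    (image, annotated_type, annotation_map) triples."""
    def __init__(self, samples):
        self.samples = samples

    def training_data(self):
        return list(self.samples)


class TrainingUnit:
    """Runs the preset model forward, then scores the gap between the
    annotations and the predictions (a stand-in for the parameter
    update, which would normally be a gradient step)."""
    def __init__(self, predict):
        # preset model: image -> (predicted_type, extraction_map)
        self.predict = predict
        self.loss_history = []

    def fit(self, samples):
        for image, annotated_type, annotation_map in samples:
            predicted_type, extraction_map = self.predict(image)
            # Toy first loss: 0/1 type mismatch; toy second loss: map gap.
            first = 0.0 if predicted_type == annotated_type else 1.0
            second = sum(abs(a - e)
                         for a, e in zip(annotation_map, extraction_map))
            self.loss_history.append(first + second)
        return self.loss_history


samples = AcquisitionUnit([([0], "healthy", [1, 0])]).training_data()
history = TrainingUnit(lambda image: ("healthy", [1, 0])).fit(samples)
```

A real implementation would back both units with a dataloader and a two-headed network (classification head plus segmentation head) trained jointly under the combined loss of claims 8-10.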
13. An apparatus for retina lesion image type recognition, comprising:
an acquisition unit, configured to acquire a retina image to be recognized;
a recognition unit, configured to input the retina image to be recognized into a recognition model for processing, to obtain an image type and a capillary vessel image of the retina image to be recognized;
wherein the recognition model is obtained by updating parameters of a preset model according to an annotated image type, a capillary vessel annotation map, a predicted image type and a capillary vessel extraction map of a retina image to be trained; the capillary vessel extraction map is a map of the capillary vessels extracted from the retina image to be trained; the predicted image type and the capillary vessel extraction map of the retina image to be trained are obtained by processing the retina image to be trained with the preset model; the retina image to be trained is provided with the annotated image type and the capillary vessel annotation map, and the capillary vessel annotation map is a map of the capillary vessels annotated in the retina image to be trained.
14. An electronic device, comprising a memory and a processor; wherein
the memory is configured to store a computer program; and
the processor is configured to read the computer program stored in the memory and execute the method of any one of claims 1-6 or 7-11 according to the computer program in the memory.
15. A computer-readable storage medium having computer-executable instructions stored thereon which, when executed by a processor, perform the method of any one of claims 1-6 or 7-11.
CN202210763424.8A 2022-07-01 2022-07-01 Model training method and device applied to retina focus image type recognition Pending CN114842306A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210763424.8A CN114842306A (en) 2022-07-01 2022-07-01 Model training method and device applied to retina focus image type recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210763424.8A CN114842306A (en) 2022-07-01 2022-07-01 Model training method and device applied to retina focus image type recognition

Publications (1)

Publication Number Publication Date
CN114842306A true CN114842306A (en) 2022-08-02

Family

ID=82574138

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210763424.8A Pending CN114842306A (en) 2022-07-01 2022-07-01 Model training method and device applied to retina focus image type recognition

Country Status (1)

Country Link
CN (1) CN114842306A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130301008A1 (en) * 2012-05-10 2013-11-14 Carl Zeiss Meditec, Inc. Analysis and visualization of oct angiography data
CN110490860A (en) * 2019-08-21 2019-11-22 北京大恒普信医疗技术有限公司 Diabetic retinopathy recognition methods, device and electronic equipment
CN112669293A (en) * 2020-12-31 2021-04-16 上海商汤智能科技有限公司 Image detection method, training method of detection model, related device and equipment
CN113763336A (en) * 2021-08-24 2021-12-07 北京鹰瞳科技发展股份有限公司 Image multi-task identification method and electronic equipment


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHONG BING: "Research on retinal image segmentation methods based on a fully convolutional encoder-decoder structure", China Master's Theses Full-text Database, Medicine & Health Sciences *

Similar Documents

Publication Publication Date Title
Wang et al. Dual encoding u-net for retinal vessel segmentation
CN110399929B (en) Fundus image classification method, fundus image classification apparatus, and computer-readable storage medium
US20210365717A1 (en) Method and apparatus for segmenting a medical image, and storage medium
CN109558832B (en) Human body posture detection method, device, equipment and storage medium
US20180165810A1 (en) Method of automatically detecting microaneurysm based on multi-sieving convolutional neural network
CN110211087B (en) Sharable semiautomatic marking method for diabetic fundus lesions
WO2020260936A1 (en) Medical image segmentation using an integrated edge guidance module and object segmentation network
CN109902548B (en) Object attribute identification method and device, computing equipment and system
CN113240655B (en) Method, storage medium and device for automatically detecting type of fundus image
KR20200144398A (en) Apparatus for performing class incremental learning and operation method thereof
CN112001399B (en) Image scene classification method and device based on local feature saliency
CN111160239A (en) Concentration degree evaluation method and device
CN112052877A (en) Image fine-grained classification method based on cascade enhanced network
CN112836653A (en) Face privacy method, device and apparatus and computer storage medium
CN112541394A (en) Black eye and rhinitis identification method, system and computer medium
CN112883931A (en) Real-time true and false motion judgment method based on long and short term memory network
CN113052236A (en) Pneumonia image classification method based on NASN
CN112991281A (en) Visual detection method, system, electronic device and medium
CN112883930A (en) Real-time true and false motion judgment method based on full-connection network
Gollapudi et al. Artificial intelligence and computer vision
CN114842306A (en) Model training method and device applied to retina focus image type recognition
CN116486465A (en) Image recognition method and system for face structure analysis
CN110610184B (en) Method, device and equipment for detecting salient targets of images
CN113111879B (en) Cell detection method and system
CN111915623B (en) Image segmentation method and device using gating and adaptive attention

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220802