CN111428737B - Instance retrieval method, device, server and storage medium for ophthalmic image - Google Patents

Instance retrieval method, device, server and storage medium for ophthalmic image

Info

Publication number
CN111428737B
CN111428737B (application CN202010249627.6A)
Authority
CN
China
Prior art keywords
image
feature
deep learning
learning model
ophthalmic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010249627.6A
Other languages
Chinese (zh)
Other versions
CN111428737A (en)
Inventor
方建生 (Fang Jiansheng)
刘江 (Liu Jiang)
Current Assignee
Southern University of Science and Technology
Original Assignee
Southern University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Southern University of Science and Technology
Priority to CN202010249627.6A
Publication of CN111428737A
Application granted
Publication of CN111428737B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V 10/443 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components, by matching or filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/267 Segmentation of patterns in the image field, by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/18 Eye characteristics, e.g. of the iris
    • G06V 40/197 Matching; Classification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Computational Linguistics (AREA)
  • Ophthalmology & Optometry (AREA)
  • Human Computer Interaction (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The embodiment of the invention discloses an instance retrieval method, device, server, and storage medium for ophthalmic images. The method comprises: acquiring an eye image of the current user; acquiring a preset feature image from the eye image using a first deep learning model according to the user's requirement, where the preset feature image is an image in which one of several local features of the eye image is preferentially highlighted; and inputting the preset feature image into a pre-trained second deep learning model to obtain a matched sample image and its case description. The instance retrieval method for ophthalmic images provided by the embodiment extracts a local feature of the ophthalmic image, recognizes it with a deep learning model, and outputs the recognition result for that local feature. This solves the prior-art problem that local features of an ophthalmic image cannot be used for targeted retrieval, improves the precision and accuracy of ophthalmic image retrieval, and improves the user experience.

Description

Instance retrieval method, device, server and storage medium for ophthalmic image
Technical Field
The embodiment of the invention relates to a retrieval technology, in particular to an example retrieval method, device, server and storage medium of an ophthalmic image.
Background
With the development of imaging technology, ophthalmic digital images have become the primary data of ophthalmology, a trend that drives the construction of ophthalmic image retrieval functions to assist doctors in clinical decisions. Conventionally, ophthalmic image retrieval uses a text-based method: images are first described with text (establishing a correspondence between text and image), keywords are entered as the query at retrieval time, and a ranked result is returned. This text-to-image approach suffers from a semantic gap between the text description and the image content, which degrades retrieval quality. With the development of computer vision, content-based image retrieval (CBIR) has begun to be applied in ophthalmology. This image-to-image approach searches by image characteristics such as color, shape, and texture, avoiding the semantic gap between text descriptions and image content. CBIR retrieves the most similar images according to image content and integrates information retrieval, computer vision, and related techniques. In recent years, in the field of medical imaging, deep learning algorithms represented by deep convolutional neural networks (CNNs) have achieved excellent performance in disease classification and lesion segmentation of ophthalmic images, and the texture, color, and morphology features they extract outperform traditional classifiers (such as the Support Vector Machine (SVM) and Random Forest (RF)), providing a technical foundation for building an image retrieval function.
Diseases in various parts of the human body can manifest as ocular lesions, so academia and industry have devoted extensive effort to automatically screening diseases by analyzing ophthalmic digital images with artificial intelligence algorithms, and related achievements have been made. For example, the University of Washington developed BiliScreen, software that screens for liver cancer from eye color; Baylor University in Texas developed the free White Eye Detector software, which can identify eye cancer from photographs; and a domestic health company has introduced a fundus camera for intelligent screening. However, constrained by ophthalmic medical imaging and image processing technology, automatic disease screening methods still face challenges of interpretability and accuracy, and the samples used to train the models are difficult to acquire and carry subjective, ambiguous labels, so clinical application remains a long way off. More importantly, although computer screening results are meant only as a secondary reference for the physician, they nevertheless influence the physician's interpretation to some degree and may thus affect the final diagnosis. Given that computer vision is already widely applied to natural images, as in face recognition and autonomous driving, and that ophthalmic digital images are so important for early disease discovery, a search engine built on computer vision can be considered: it improves decision efficiency without directly issuing a decision result.
Based on the application value of ophthalmic image retrieval, related research has been carried out, but research on instance-level retrieval methods based on lesion areas remains relatively insufficient. Retrieving by lesion location is very important for case retrieval: in general, images are discriminated mainly by identifying and comparing critical areas (i.e., lesion areas). If the lesion occupies only a small fraction of two images, comparing the whole images neglects the feature representation of the lesion area and leads to similarity errors.
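The similarity error described above can be made concrete with a toy computation. This is an illustrative sketch, not from the patent: two tiny one-dimensional "images" share a large identical background and differ only in a two-pixel lesion region, and the whole-image match score stays high even though the lesion regions disagree completely.

```python
# Toy illustration: a small lesion barely affects whole-image similarity.
# All pixel values below are made up for illustration.

def matching_fraction(a, b):
    """Fraction of positions where two equal-length images agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

background = [0] * 18
img_a = background + [5, 5]   # lesion of type A in the last 2 pixels
img_b = background + [9, 9]   # a different lesion of type B

print(matching_fraction(img_a, img_b))            # 0.9 -> images look very similar
print(matching_fraction(img_a[-2:], img_b[-2:]))  # 0.0 -> lesion regions fully disagree
```

This is why the patent compares lesion-region features rather than whole images.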
Disclosure of Invention
The invention provides an instance retrieval method for ophthalmic images to improve the precision and accuracy of ophthalmic image retrieval and to improve the user experience.
In a first aspect, an embodiment of the present invention provides an example retrieval method for an ophthalmic image, the method including:
acquiring an eye image of a current user;
acquiring a preset feature image from the eye image using a first deep learning model according to the user's requirement, where the preset feature image is an image in which one of several local features of the eye image is preferentially highlighted;
and inputting the preset feature image into a pre-trained second deep learning model to obtain a matched sample image and case description.
Optionally, before the step of acquiring the eye image of the current user, the method further includes:
acquiring a sample image and a case description thereof;
and establishing a deep learning model and training the deep learning model by using the sample image to obtain a trained deep learning model.
Optionally, the local features include: blood vessels, the cornea, or the anterior chamber angle.
Optionally, the first deep learning model includes: a feature encoding module, a context semantic extraction module, and a feature decoding module.
Optionally, the feature encoding module uses a pre-trained feature extraction module to extract local features.
Optionally, the context semantic extraction module extracts high-order feature information from the local features using densely connected atrous (dilated) convolution operations and multi-scale pooling operations.
Optionally, the feature decoding module fuses the information extracted by the encoding module and the context semantic extraction module to obtain a segmentation result.
In a second aspect, an embodiment of the present invention further provides an example retrieving apparatus for an ophthalmic image, including:
the data acquisition module is used for acquiring an eye image of the current user;
the data extraction module is used to acquire a preset feature image from the eye image using a first deep learning model according to the user's requirement, where the preset feature image is an image in which one of several local features of the eye image is preferentially highlighted;
and the data identification module is used for inputting the preset characteristic image into a pre-trained second deep learning model to obtain a matched sample image and a case description.
In a third aspect, an embodiment of the present invention further provides a server, where the server includes:
one or more processors;
storage means for storing one or more programs,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the example retrieval method of an ophthalmic image as described in any of the above.
In a fourth aspect, embodiments of the present invention also provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements an example retrieval method for an ophthalmic image as described in any one of the preceding claims.
The embodiment of the invention discloses an instance retrieval method, device, server, and storage medium for ophthalmic images. The method comprises: acquiring an eye image of the current user; acquiring a preset feature image from the eye image using a first deep learning model according to the user's requirement, where the preset feature image is an image in which one of several local features of the eye image is preferentially highlighted; and inputting the preset feature image into a pre-trained second deep learning model to obtain a matched sample image and its case description. The instance retrieval method for ophthalmic images provided by the embodiment extracts a local feature of the ophthalmic image, recognizes it with a deep learning model, and outputs the recognition result for that local feature. This solves the prior-art problem that local features of an ophthalmic image cannot be used for targeted retrieval, improves the precision and accuracy of ophthalmic image retrieval, and improves the user experience.
Drawings
FIG. 1 is a flowchart of an example method for retrieving an ophthalmic image according to an embodiment of the present invention;
FIG. 2 is a flowchart of an example method for retrieving an ophthalmic image according to a second embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating an exemplary device for retrieving an ophthalmic image according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of a server according to a fourth embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting thereof. It should be further noted that, for convenience of description, only some, but not all of the structures related to the present invention are shown in the drawings.
Before discussing exemplary embodiments in more detail, it should be mentioned that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart depicts steps as a sequential process, many of the steps may be performed in parallel, concurrently, or simultaneously with other steps. Furthermore, the order of the steps may be rearranged. A process may be terminated when its operations are completed, but may have additional steps not included in the figures. The processes may correspond to methods, functions, procedures, subroutines, and the like.
Furthermore, the terms "first," "second," and the like, may be used herein to describe various directions, acts, steps, or elements, etc., but these directions, acts, steps, or elements are not limited by these terms. These terms are only used to distinguish one direction, action, step or element from another direction, action, step or element. For example, a first deep learning model may be referred to as a second deep learning model, and similarly, a second deep learning model may be referred to as a first deep learning model without departing from the scope of the present application. Both the first and second deep learning models are deep learning models, but they are not the same deep learning model. The terms "first," "second," and the like, are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present invention, the meaning of "plurality" means at least two, for example, two, three, etc., unless specifically defined otherwise.
Example 1
Fig. 1 is a flowchart of an instance retrieval method for an ophthalmic image according to Embodiment 1 of the present invention. This embodiment is applicable to online retrieval of ophthalmic disease cases for a user, and specifically includes the following steps:
step 100, acquiring an eye image of a current user.
In this embodiment, an eye image of the user, typically an ophthalmic digital image, is first acquired. Ophthalmic digital imaging modalities include ocular surface color photography, fundus color photography, optical coherence tomography (OCT), anterior segment optical coherence tomography (AS-OCT), optical coherence tomography angiography (OCTA), in vivo confocal microscopy of the cornea (IVCM), fundus fluorescein angiography (FFA), indocyanine green angiography (ICGA), and the like. In this embodiment, the ophthalmic digital images include fundus images, corneal nerve images, and OCT images. Imaging devices such as the ophthalmoscope, the slit lamp, and the OCT scanner output digital images by observing the morphology of tissue structures of the human eye such as blood vessels, nerves, the cornea, the lens, and the iris. The ophthalmic digital images produced by different imaging devices differ in resolution and imaged region and can be used to diagnose different disease types. For example, posterior segment optical coherence tomography has important value in the clinical examination and diagnosis of retinal diseases, macular diseases, optic nerve diseases, glaucoma, and the like.
Step 110, acquiring a preset feature image from the eye image using a first deep learning model according to the user's requirement, where the preset feature image is an image in which one of several local features of the eye image is preferentially highlighted.
In this embodiment, the eye image contains several local features, such as blood vessels, the cornea, or the anterior chamber angle. The first deep learning model is based on the Context Encoder Network (CE-Net); specifically, it includes a feature encoding module, a context semantic extraction module, and a feature decoding module.
The feature encoding module extracts local features using a pre-trained feature extractor: it adopts a pre-trained ResNet-34 as the encoder, retains the first four feature-extraction stages of the residual network, and discards the average pooling layer and the fully connected layer. The shortcut (skip) connections in the ResNet feature extractor mitigate vanishing gradients and speed up network convergence. The feature encoding module mainly marks the several local features of the eye image and distinguishes them from one another. The context semantic extraction module extracts high-order feature information from the local features using densely connected atrous (dilated) convolutions and multi-scale pooling; it consists of a dense atrous convolution module and a residual multi-kernel pooling module. The dense atrous convolution module has four cascaded branches, and the residual multi-kernel pooling module uses four pooling kernels of different sizes, so its four outputs are feature maps at different scales. Context semantic extraction thus takes the marked eye feature image, extracts the target local feature, and obtains its high-order feature vector. The feature decoding module fuses the information extracted by the encoding module and the context semantic extraction module to obtain the segmentation result. It aims to recover the high-order feature vectors produced by the feature encoder and the context semantic module: skip connections pass detail information from encoder to decoder to compensate for the information lost to consecutive pooling and strided convolution operations, while a simple, efficient transposed-convolution decoding block improves decoding performance.
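As a rough numerical illustration of why cascaded atrous (dilated) convolutions capture multi-scale context, the sketch below computes the receptive field of a stack of dilated 3x3 convolutions. The dilation rates (1, 3, 5) are assumptions chosen for illustration; the patent text does not specify the rates used.

```python
# Receptive-field growth of cascaded dilated (atrous) 3x3 convolutions,
# illustrating how a dense atrous convolution module widens the context
# each feature sees. Dilation rates below are illustrative assumptions.

def receptive_field(kernel_size, dilations):
    """Receptive field of a stack of stride-1 dilated convolutions."""
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d  # each layer adds (k-1)*dilation pixels
    return rf

# A single 3x3 convolution sees a 3-pixel-wide window ...
print(receptive_field(3, [1]))        # 3
# ... while three cascaded dilated convolutions see a much larger window.
print(receptive_field(3, [1, 3, 5]))  # 19
```

The growing receptive field is what lets the context extractor describe both small and large lesion structures.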
Taking blood vessels as an example of a local feature: the CE-Net-based deep learning model extracts the vessel features in the eye image, applies a unified binarization to the image, and after the pooled convolution computation keeps only the pixels whose values match vessel pixels, discarding the rest. The result is an eye image containing only the local vessel feature, which improves retrieval precision and accuracy at query time.
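The binarization-and-filtering step described above can be sketched as follows. The intensity range assumed for vessel pixels is hypothetical, chosen only to make the example concrete; a real pipeline would derive the vessel mask from the CE-Net segmentation output.

```python
# Minimal sketch of the binarization step: keep pixels whose intensity falls
# in an assumed "vessel" range and zero out everything else, yielding an
# image that contains only the vessel-like local feature.
# The threshold values are hypothetical, for illustration only.

VESSEL_MIN, VESSEL_MAX = 40, 90  # assumed intensity range for vessel pixels

def extract_vessel_mask(image):
    """Return a binary mask (1 = vessel pixel) for a 2-D grayscale image."""
    return [[1 if VESSEL_MIN <= px <= VESSEL_MAX else 0 for px in row]
            for row in image]

img = [[10, 50, 200],
       [60, 80, 30],
       [120, 45, 70]]
print(extract_vessel_mask(img))
# [[0, 1, 0], [1, 1, 0], [0, 1, 1]]
```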
Step 120, inputting the preset feature image into a pre-trained second deep learning model to obtain a matched sample image and case description.
In this embodiment, the second deep learning model is a Triplet model. The local feature image obtained in step 110 is represented by a high-dimensional feature vector whose similarity cannot be computed directly, so the feature vector is further fed into the Triplet network to extract a low-dimensional discrete hash code, with which retrieval can be performed. When a new eye picture (containing a specific lesion area) is input, its hash code is obtained through the same CE-Net and Triplet models, the Hamming distances between this code and the existing hash codes in the database are computed, and the sample image with the highest similarity (also containing a similar lesion area) is returned together with its case description, which covers the condition, diagnosis method, and specific treatment of the sample case. Instance retrieval is thus achieved through extraction of the local features of the eye image.
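The Hamming-distance retrieval step can be sketched in a few lines. The hash codes and case identifiers below are made up for illustration; only the distance computation and nearest-sample lookup mirror the text.

```python
# Sketch of the retrieval step: compare a query hash code against stored
# database codes by Hamming distance and return the most similar sample.
# Hash codes are shown as Python integers; K and all codes are illustrative.

def hamming(a, b):
    """Number of differing bits between two equal-length hash codes."""
    return bin(a ^ b).count("1")

def nearest_sample(query_code, database):
    """database: list of (sample_id, hash_code); returns the closest id."""
    return min(database, key=lambda item: hamming(query_code, item[1]))[0]

db = [("case_001", 0b10110010),
      ("case_002", 0b10010011),
      ("case_003", 0b01101100)]
print(nearest_sample(0b10010111, db))  # case_002 (only 1 bit differs)
```

In a real system the id of the nearest code would be used to fetch the sample image and its case description.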
The embodiment of the invention discloses an instance retrieval method for ophthalmic images, comprising: acquiring an eye image of the current user; acquiring a preset feature image from the eye image using a first deep learning model according to the user's requirement, where the preset feature image is an image in which one of several local features of the eye image is preferentially highlighted; and inputting the preset feature image into a pre-trained second deep learning model to obtain a matched sample image and case description. The instance retrieval method provided by the embodiment extracts a local feature of the ophthalmic image, recognizes it with a deep learning model, and outputs the recognition result for that local feature, solving the prior-art problem that local features of an ophthalmic image cannot be used for targeted retrieval, improving the precision and accuracy of ophthalmic image retrieval, and improving the user experience.
Example two
Fig. 2 is a flowchart of an instance retrieval method for an ophthalmic image according to Embodiment 2 of the present invention. This embodiment is applicable to online retrieval of ophthalmic disease cases for a user, and specifically includes the following steps:
step 200, acquiring a sample image and a case description thereof.
In this embodiment, a database is built by collecting sample images of historical users together with their case descriptions. A preset deep learning model is trained on these images according to the local feature to be retrieved, yielding a trained model; the current user's eye image is then input into the trained deep learning model, which outputs the closest sample image and its case description, achieving the purpose of retrieval.
Step 210, establishing a deep learning model and training the deep learning model by using the sample image to obtain a trained deep learning model.
In this embodiment, the deep learning model comprises a CE-Net network and a Triplet model. The CE-Net structure segmentation model maps the features of the original image to features that highlight the lesion region, with the feature dimensionality unchanged; segmenting the image structure in advance serves to mine the lesion-region information. The Triplet model further maps the high-dimensional structural features to low-dimensional discrete hash codes, so that distance computation can be performed in Hamming space to judge image similarity. The length K of the discrete hash code is typically much smaller than the dimensionality of the structural features.
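The Triplet model learns the mapping from high-dimensional structural features to a K-bit code; since the learned weights are not given in the text, the sketch below uses a fixed linear projection followed by a sign threshold purely to illustrate the dimensionality reduction from D dimensions to K bits. D, K, the feature vector, and the projection matrix are all illustrative assumptions.

```python
# Illustration of mapping a D-dimensional feature to a K-bit hash code
# (D >> K in practice). A learned network would replace the fixed
# projection used here; everything below is assumed for illustration.

def binary_hash(feature, projection):
    """Map a D-dim feature to a K-bit code via sign of a linear projection."""
    bits = []
    for row in projection:  # one row of weights per output bit
        s = sum(w * x for w, x in zip(row, feature))
        bits.append(1 if s >= 0 else 0)
    return bits

feature = [0.5, -1.2, 0.3, 2.0]          # D = 4 (illustrative)
projection = [[1, 0, 0, 0],               # K = 2 (illustrative)
              [0, 1, -1, 0]]
print(binary_hash(feature, projection))   # [1, 0]
```

The resulting short binary code is what makes fast Hamming-space comparison possible.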
In this embodiment, the sample images are used to train the Triplet model so that it learns, from the structural features, hash codes suitable for similarity computation. As shown in the figure, the Triplet model takes three pictures simultaneously through three networks of identical structure whose weights are shared, and the triplet loss is used to train those weights. The three pictures stand in a relation: one picture is selected as the anchor, then one related picture and one unrelated picture are selected to train together. "Related" and "unrelated" mean whether the pictures belong to the same class, e.g. both exhibit a certain disease or both contain the same type of lesion area, which requires labeling in advance. Class labels, with lesion areas marked, are likewise required for training the CE-Net model; this labeling is a precondition and is not described further here. The triplet loss is designed around relatedness: if two pictures are related, their hash codes should be close; if unrelated, their hash codes should be far apart.
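The triplet loss described above (related pairs close, unrelated pairs far, separated by a margin) can be sketched on real-valued embeddings. The margin value and the example vectors are assumptions chosen for illustration.

```python
# Margin-based triplet loss: penalize triples where the unrelated (negative)
# sample is not at least `margin` farther from the anchor than the related
# (positive) sample. Vectors and margin below are illustrative.

def sq_dist(u, v):
    """Squared Euclidean distance between two embedding vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v))

def triplet_loss(anchor, positive, negative, margin=1.0):
    """max(0, d(a,p) - d(a,n) + margin): zero once the negative is at
    least `margin` farther from the anchor than the positive."""
    return max(0.0, sq_dist(anchor, positive) - sq_dist(anchor, negative) + margin)

a, p, n = [0.0, 0.0], [0.1, 0.0], [2.0, 0.0]
print(triplet_loss(a, p, n))  # 0.0 -> triple already satisfies the margin
print(triplet_loss(a, n, p))  # ~4.99 -> a violating triple is penalized
```

Minimizing this loss over labeled triples is what pushes related samples toward matching hash codes.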
Step 220, obtaining an eye image of the current user.
Step 230, acquiring a preset feature image from the eye image using a first deep learning model according to the user's requirement, where the preset feature image is an image in which one of several local features of the eye image is preferentially highlighted.
Step 240, inputting the preset feature image into a pre-trained second deep learning model to obtain a matched sample image and a case description.
In this embodiment, the local feature image obtained in step 230 is represented by a high-dimensional feature vector whose similarity cannot be computed directly, so the feature vector is further fed into the Triplet network to extract a low-dimensional discrete hash code, with which retrieval can be performed. When a new eye picture (containing a specific lesion area) is input, its hash code is obtained through the same CE-Net and Triplet models, the Hamming distances between this code and the existing hash codes in the database are computed, and the sample image with the highest similarity (also containing a similar lesion area) is returned together with its case description, which covers the condition, diagnosis method, and specific treatment of the sample case. Instance retrieval is thus achieved through extraction of the local features of the eye image.
The embodiment of the invention discloses an instance retrieval method for ophthalmic images, comprising: acquiring a sample image and its case description; establishing a deep learning model and training it with the sample image to obtain a trained deep learning model; acquiring an eye image of the current user; acquiring a preset feature image from the eye image using a first deep learning model according to the user's requirement, where the preset feature image is an image in which one of several local features of the eye image is preferentially highlighted; and inputting the preset feature image into a pre-trained second deep learning model to obtain a matched sample image and case description. The instance retrieval method provided by the embodiment extracts a local feature of the ophthalmic image, recognizes it with a deep learning model, and outputs the recognition result for that local feature, solving the prior-art problem that local features of an ophthalmic image cannot be used for targeted retrieval, improving the precision and accuracy of ophthalmic image retrieval, and improving the user experience.
Example III
The instance retrieval device for ophthalmic images provided by this embodiment of the invention can carry out the instance retrieval method for ophthalmic images provided by any embodiment of the invention, and has the functional modules and beneficial effects corresponding to that method. Fig. 3 is a schematic structural diagram of an instance retrieval device 300 for an ophthalmic image according to an embodiment of the present invention. Referring to Fig. 3, the instance retrieval device 300 may specifically include:
a data acquisition module 310, configured to acquire an eye image of a current user;
the data extraction module 320 is configured to obtain a preset feature image in the eye image according to a user requirement by using a first deep learning model, where the preset feature image is one of a plurality of local features in the eye image that preferentially protrudes from the preset image feature;
the data recognition module 330 is configured to input the preset feature image into a pre-trained second deep learning model to obtain a matched sample image and a case description.
Further, before the eye image of the current user is acquired, the following is also performed:
acquiring a sample image and a case description thereof;
and establishing a deep learning model and training the deep learning model by using the sample image to obtain a trained deep learning model.
Further, the local features include: blood vessels, the cornea, or the anterior chamber angle.
Further, the deep learning model includes: the system comprises a feature encoding module, a context semantic extraction module and a feature decoding module.
Further, the feature encoding module extracts local features by using a pre-trained feature extraction module.
Further, the context semantic extraction module extracts high-order feature information from the local features by using densely connected atrous (dilated) convolution operations and multi-scale pooling operations.
Further, the feature decoding module fuses the information extracted by the feature encoding module and the context semantic extraction module to obtain a segmentation result.
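The atrous (dilated) convolutions in the context semantic extraction module enlarge the receptive field without adding kernel parameters. A minimal 1-D numpy illustration of how the dilation rate spreads the same three kernel taps over a wider span (illustrative only — the module itself uses densely connected 2-D blocks):

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation=1):
    """'Valid' 1-D convolution whose kernel taps are spaced `dilation` samples apart."""
    k = len(kernel)
    span = (k - 1) * dilation + 1          # receptive field of a single layer
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * x[i + j * dilation] for j in range(k))
    return out

x = np.arange(10, dtype=float)
k = np.array([1.0, 1.0, 1.0])
print(dilated_conv1d(x, k, dilation=1))  # each output sums 3 adjacent samples (span 3)
print(dilated_conv1d(x, k, dilation=3))  # same 3 taps, but each output covers a span of 7
```

Stacking layers with increasing dilation rates, as dense atrous blocks do, grows the receptive field geometrically while the parameter count stays fixed, which is why it suits capturing lesion context at multiple scales.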
The embodiment of the invention discloses an instance retrieval apparatus for an ophthalmic image, which comprises: the data acquisition module, configured to acquire an eye image of the current user; the data extraction module, configured to acquire a preset feature image from the eye image according to the user's requirement by using a first deep learning model, where the preset feature image is an image in which one of the plurality of local features in the eye image is preferentially highlighted; and the data recognition module, configured to input the preset feature image into a pre-trained second deep learning model to obtain a matched sample image and case description. The instance retrieval apparatus for an ophthalmic image provided by the embodiment of the invention extracts the local features of the ophthalmic image, identifies them with the deep learning model, and outputs the identification result of the local features, thereby solving the prior-art problem that the local features of an ophthalmic image cannot be used for targeted retrieval, improving the precision and accuracy of ophthalmic image retrieval, and improving the user experience.
Example IV
Fig. 4 is a schematic structural diagram of a computer server according to an embodiment of the present invention. As shown in fig. 4, the computer server includes a memory 410 and a processor 420; the number of processors 420 in the computer server may be one or more, and one processor 420 is taken as an example in fig. 4. The memory 410 and the processor 420 in the server may be connected by a bus or other means; connection by a bus is taken as an example in fig. 4.
The memory 410, as a computer-readable storage medium, may be used to store software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the instance retrieval method for an ophthalmic image in the embodiment of the present invention (for example, the data acquisition module 310, the data extraction module 320, and the data recognition module 330 in the instance retrieval apparatus 300 for an ophthalmic image). The processor 420 executes the software programs, instructions, and modules stored in the memory 410, thereby performing the various functional applications and data processing of the server, that is, implementing the instance retrieval method for an ophthalmic image described above.
The processor 420 is configured to execute a computer program stored in the memory 410 to implement the following steps:
acquiring an eye image of a current user;
acquiring a preset feature image from the eye image by using a first deep learning model according to the user's requirement, wherein the preset feature image is an image in which one of the plurality of local features in the eye image is preferentially highlighted;
and inputting the preset feature image into a pre-trained second deep learning model to obtain a matched sample image and case description.
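The three processor steps above can be sketched end to end as follows. Every name, the toy 1-bit "hash code", and the database entries are hypothetical stand-ins: the actual first and second models are the trained CE-Net and Triplet networks of the earlier embodiments.

```python
# Hypothetical stand-ins for the two trained models; names are illustrative only.

def segment_local_feature(eye_image, feature_name):
    """First-model stub: return the preset feature image for one requested local feature."""
    return {"feature": feature_name, "pixels": eye_image}

def hash_and_match(feature_image, database):
    """Second-model stub: derive a toy 1-bit hash code and return the closest case."""
    code = sum(sum(row) for row in feature_image["pixels"]) % 2
    return min(database, key=lambda entry: abs(entry["code"] - code))

database = [
    {"code": 0, "case": "sample A: retinal vessel lesion with its case description"},
    {"code": 1, "case": "sample B: corneal lesion with its case description"},
]

# Step 1: acquire the eye image; step 2: extract the preset feature image;
# step 3: match it against the sample database and return the case description.
feature = segment_local_feature(eye_image=[[0, 1], [1, 0]], feature_name="blood vessels")
match = hash_and_match(feature, database)
print(match["case"])
```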
In one embodiment, the computer program of the computer server provided in the embodiments of the present invention is not limited to the above method operations, but may also perform the related operations in the instance retrieval method for an ophthalmic image provided in any embodiment of the present invention.
The memory 410 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and at least one application program required for a function, and the data storage area may store data created according to the use of the terminal, and the like. In addition, the memory 410 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, the memory 410 may further include memory remotely located relative to the processor 420, which may be connected to the server via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The embodiment of the invention discloses an instance retrieval server for an ophthalmic image, which is configured to execute the following method: acquiring an eye image of the current user; acquiring a preset feature image from the eye image by using a first deep learning model according to the user's requirement, wherein the preset feature image is an image in which one of the plurality of local features in the eye image is preferentially highlighted; and inputting the preset feature image into a pre-trained second deep learning model to obtain a matched sample image and case description. The instance retrieval server provided by the embodiment of the invention extracts the local features of the ophthalmic image, identifies them with the deep learning model, and outputs the identification result of the local features, thereby solving the prior-art problem that the local features of an ophthalmic image cannot be used for targeted retrieval, improving the precision and accuracy of ophthalmic image retrieval, and improving the user experience.
Example V
A fifth embodiment of the present invention also provides a storage medium containing computer-executable instructions which, when executed by a computer processor, perform an instance retrieval method for an ophthalmic image, the method comprising:
acquiring an eye image of a current user;
acquiring a preset feature image from the eye image by using a first deep learning model according to the user's requirement, wherein the preset feature image is an image in which one of the plurality of local features in the eye image is preferentially highlighted;
and inputting the preset feature image into a pre-trained second deep learning model to obtain a matched sample image and case description.
Of course, the storage medium containing computer-executable instructions provided in the embodiments of the present invention is not limited to the above-described method operations, and the instructions may also perform the related operations in the instance retrieval method for an ophthalmic image provided in any embodiment of the present invention.
The computer-readable storage media of embodiments of the present invention may take the form of any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a storage medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or terminal. In the latter case, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The embodiment of the invention discloses an instance retrieval storage medium for an ophthalmic image, which is used for executing the following method: acquiring an eye image of the current user; acquiring a preset feature image from the eye image by using a first deep learning model according to the user's requirement, wherein the preset feature image is an image in which one of the plurality of local features in the eye image is preferentially highlighted; and inputting the preset feature image into a pre-trained second deep learning model to obtain a matched sample image and case description. The instance retrieval storage medium provided by the embodiment of the invention extracts the local features of the ophthalmic image, identifies them with the deep learning model, and outputs the identification result of the local features, thereby solving the prior-art problem that the local features of an ophthalmic image cannot be used for targeted retrieval, improving the precision and accuracy of ophthalmic image retrieval, and improving the user experience.
It should be noted that the above are only preferred embodiments of the present invention and the technical principles applied. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, and that various obvious changes, rearrangements, and substitutions can be made without departing from the scope of the invention. Therefore, while the invention has been described in connection with the above embodiments, the invention is not limited to them, but may be embodied in many other equivalent forms without departing from the spirit or scope of the invention, which is set forth in the following claims.

Claims (6)

1. An instance retrieval method for an ophthalmic image, comprising:
acquiring an eye image of a current user; wherein the eye image comprises a plurality of local features;
acquiring a preset feature image from the eye image by using a first deep learning model according to a user requirement, wherein the preset feature image is an image in which one of the plurality of local features in the eye image is preferentially highlighted;
inputting the preset feature image into a pre-trained second deep learning model to obtain a matched sample image and case description;
wherein the first deep learning model comprises: a feature encoding module, a context semantic extraction module, and a feature decoding module; the feature encoding module extracts local features by using a pre-trained feature extraction module and is used for marking the plurality of local features of the eye image; the context semantic extraction module extracts the local feature to be extracted from the marked eye image, and extracts high-order feature information from the local feature by using densely connected atrous convolution operations and multi-scale pooling operations; and the feature decoding module fuses the information extracted by the feature encoding module and the context semantic extraction module to obtain a segmentation result.
2. The instance retrieval method for an ophthalmic image according to claim 1, further comprising, before the acquiring of the eye image of the current user:
acquiring a sample image and a case description thereof;
and establishing a deep learning model and training the deep learning model by using the sample image to obtain a trained deep learning model.
3. The instance retrieval method for an ophthalmic image according to claim 1, wherein the local features include: blood vessels, the cornea, or the anterior chamber angle.
4. An instance retrieval apparatus for an ophthalmic image, comprising:
the data acquisition module is used for acquiring an eye image of the current user; wherein the eye image comprises a plurality of local features;
the data extraction module, configured to acquire a preset feature image from the eye image according to the user's requirement by using a first deep learning model, wherein the preset feature image is an image in which one of the plurality of local features in the eye image is preferentially highlighted;
the data identification module is used for inputting the preset characteristic image into a pre-trained second deep learning model to obtain a matched sample image and a case description;
wherein the first deep learning model comprises: a feature encoding module, a context semantic extraction module, and a feature decoding module; the feature encoding module extracts local features by using a pre-trained feature extraction module and is used for marking the plurality of local features of the eye image; the context semantic extraction module extracts the local feature to be extracted from the marked eye image, and extracts high-order feature information from the local feature by using densely connected atrous convolution operations and multi-scale pooling operations; and the feature decoding module fuses the information extracted by the feature encoding module and the context semantic extraction module to obtain a segmentation result.
5. A server, the server comprising:
one or more processors;
storage means for storing one or more programs,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the instance retrieval method for an ophthalmic image according to any one of claims 1-3.
6. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the instance retrieval method for an ophthalmic image according to any one of claims 1-3.
CN202010249627.6A 2020-04-01 2020-04-01 Instance retrieval method, device, server and storage medium for ophthalmic image Active CN111428737B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010249627.6A CN111428737B (en) 2020-04-01 2020-04-01 Instance retrieval method, device, server and storage medium for ophthalmic image

Publications (2)

Publication Number Publication Date
CN111428737A CN111428737A (en) 2020-07-17
CN111428737B true CN111428737B (en) 2024-01-19

Family

ID=71550455

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010249627.6A Active CN111428737B (en) 2020-04-01 2020-04-01 Instance retrieval method, device, server and storage medium for ophthalmic image

Country Status (1)

Country Link
CN (1) CN111428737B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112579750A (en) * 2020-11-30 2021-03-30 百度健康(北京)科技有限公司 Similar medical record retrieval method, device, equipment and storage medium
CN113343943B (en) * 2021-07-21 2023-04-28 西安电子科技大学 Eye image segmentation method based on scleral region supervision
CN113554641B (en) * 2021-07-30 2022-04-12 江苏盛泽医院 Pediatric pharyngeal image acquisition method and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109829446A (en) * 2019-03-06 2019-05-31 百度在线网络技术(北京)有限公司 Eye fundus image recognition methods, device, electronic equipment and storage medium
CN110427509A (en) * 2019-08-05 2019-11-08 山东浪潮人工智能研究院有限公司 A kind of multi-scale feature fusion image Hash search method and system based on deep learning
CN110751637A (en) * 2019-10-14 2020-02-04 北京至真互联网技术有限公司 Diabetic retinopathy detection system, method, equipment and training system
CN110837572A (en) * 2019-11-15 2020-02-25 北京推想科技有限公司 Image retrieval method and device, readable storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant