CN111428737A - Example retrieval method, device, server and storage medium for ophthalmologic image - Google Patents


Info

Publication number
CN111428737A
CN111428737A
Authority
CN
China
Prior art keywords
image
learning model
deep learning
feature
local features
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010249627.6A
Other languages
Chinese (zh)
Other versions
CN111428737B (en)
Inventor
方建生
刘江
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southwest University of Science and Technology
Original Assignee
Southwest University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southwest University of Science and Technology filed Critical Southwest University of Science and Technology
Priority to CN202010249627.6A
Publication of CN111428737A
Application granted
Publication of CN111428737B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V 10/443 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/18 Eye characteristics, e.g. of the iris
    • G06V 40/197 Matching; Classification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Computational Linguistics (AREA)
  • Ophthalmology & Optometry (AREA)
  • Human Computer Interaction (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The embodiment of the invention discloses an example retrieval method, device, server and storage medium for an ophthalmic image. The method includes: acquiring an eye image of a current user; acquiring a preset feature image from the eye image according to a user requirement by using a first deep learning model, where the preset feature image highlights one of a plurality of local features in the eye image; and inputting the preset feature image into a pre-trained second deep learning model to obtain a matched sample image and its case description. In the example retrieval method provided by the embodiment of the invention, the local features of the ophthalmic image are extracted, recognized by the deep learning model, and the recognition result for the local features is output. This solves the prior-art problem that directional retrieval cannot be performed using local features of an ophthalmic image, improves the precision and accuracy of ophthalmic image retrieval, and improves the user experience.

Description

Example retrieval method, device, server and storage medium for ophthalmologic image
Technical Field
The present invention relates to retrieval technologies, and in particular, to a method, an apparatus, a server, and a storage medium for retrieving an example of an ophthalmic image.
Background
With the development of imaging technology, digital ophthalmic images have become the primary data of ophthalmology, a trend that drives the construction of ophthalmic image retrieval functions to assist doctors' clinical decisions. Conventionally, ophthalmic image retrieval adopts a text-based method: an image is first described in text (establishing a correspondence between the text and the image), a keyword query is entered at retrieval time, and a ranked result is returned. This "search images by text" approach suffers from a semantic gap — the text description is inconsistent with the image content — which degrades retrieval quality. With the development of computer vision, content-based image retrieval (CBIR) methods have begun to be applied in ophthalmology. CBIR retrieves the most similar images directly from image content such as color, shape and texture, combining information retrieval, computer vision and related techniques, and thereby avoids the semantic gap between text description and image content. In recent years, in the field of medical imaging, deep learning algorithms represented by deep convolutional neural networks (CNNs) have achieved excellent performance in disease classification and lesion segmentation of ophthalmic images, and are superior to traditional classifiers (such as the support vector machine (SVM) and random forest (RF)) at extracting features such as texture, color and morphology, providing a technical basis for building an image retrieval function.
Diseases of various parts of the human body can present through pathological changes of the eye, so academia and industry have widely devoted themselves to automatically screening diseases by analyzing digital ophthalmic images with artificial-intelligence algorithms, and related results have been published — for example, the White Eye Detector, free software developed at Baylor University (Texas, USA) that detects eye cancer from photographs; BiliScreen, software developed at the University of Washington that screens for liver disease from eye color; and intelligent screening fundus cameras introduced by domestic health companies. However, automatic disease screening based on ophthalmic medical imaging and image processing still faces problems: the algorithms are challenged on interpretability and accuracy, and the samples used to train the models are difficult to acquire and subject to ambiguous, subjective labeling, so clinical application still has a long way to go. More importantly, although a computer screening result is intended only as an auxiliary reference, it more or less influences the doctor's judgment and may thus affect the final diagnosis. Given that computer vision is already widely applied to natural-image scenarios such as face recognition and autonomous driving, and that digital ophthalmic images are so important for early disease discovery, one may consider building a search engine based on computer vision, achieving the goal of improving decision efficiency without directly issuing a decision result.
Given the application value of ophthalmic image retrieval, related research has been carried out, but example-level retrieval methods based on lesion regions remain relatively underdeveloped. Retrieval by lesion position is very important for case retrieval. In general, the discriminability of an image depends mainly on identifying and comparing its key regions (i.e., lesion regions). If the lesion region occupies only a small proportion of the two images being compared, a whole-image comparison will neglect the feature representation of the lesion region, leading to similarity errors.
Disclosure of Invention
The invention provides an example retrieval method for an ophthalmic image, aiming to improve the precision and accuracy of ophthalmic image retrieval and to improve the user experience.
In a first aspect, an embodiment of the present invention provides an example retrieval method for an ophthalmic image, where the method includes:
acquiring an eye image of a current user;
acquiring a preset feature image from the eye image according to a user requirement by using a first deep learning model, wherein the preset feature image highlights one of a plurality of local features in the eye image;
and inputting the preset feature image into a pre-trained second deep learning model to obtain a matched sample image and case description.
Optionally, before acquiring the eye image of the current user, the method further includes:
acquiring a sample image and case description thereof;
and establishing a deep learning model and training the deep learning model by using the sample image to obtain the trained deep learning model.
Optionally, the local features include: blood vessels, the cornea, or the anterior chamber angle.
Optionally, the first deep learning model includes: a feature encoding module, a context semantic extraction module and a feature decoding module.
Optionally, the feature encoding module extracts local features by using a pre-trained feature extraction network.
Optionally, the context semantic extraction module extracts high-order feature information from the local features by using densely connected atrous (dilated) convolution operations and multi-scale pooling operations.
Optionally, the feature decoding module fuses the information extracted by the encoding module and the context semantic extraction module to obtain a segmentation result.
In a second aspect, an embodiment of the present invention further provides an example retrieval device for an ophthalmic image, where the device includes:
the data acquisition module is used for acquiring the eye image of the current user;
the data extraction module is used for acquiring, by using a first deep learning model and according to a user requirement, a preset feature image from the eye image, wherein the preset feature image highlights one of a plurality of local features in the eye image;
and the data identification module is used for inputting the preset feature image into a pre-trained second deep learning model to obtain a matched sample image and case description.
In a third aspect, an embodiment of the present invention further provides a server, where the server includes:
one or more processors;
a storage device for storing one or more programs,
when executed by the one or more processors, cause the one or more processors to implement an example retrieval method for an ophthalmic image as described in any of the above.
In a fourth aspect, the present invention further provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor, implements the example retrieval method for an ophthalmic image according to any one of the above methods.
The embodiment of the invention discloses an example retrieval method, device, server and storage medium for an ophthalmic image. The method includes: acquiring an eye image of a current user; acquiring a preset feature image from the eye image according to a user requirement by using a first deep learning model, where the preset feature image highlights one of a plurality of local features in the eye image; and inputting the preset feature image into a pre-trained second deep learning model to obtain a matched sample image and its case description. In the example retrieval method provided by the embodiment of the invention, the local features of the ophthalmic image are extracted, recognized by the deep learning model, and the recognition result for the local features is output. This solves the prior-art problem that directional retrieval cannot be performed using local features of an ophthalmic image, improves the precision and accuracy of ophthalmic image retrieval, and improves the user experience.
Drawings
Fig. 1 is a flowchart illustrating an example retrieving method for an ophthalmic image according to an embodiment of the present invention;
fig. 2 is a flowchart illustrating an example retrieving method of an ophthalmic image according to a second embodiment of the present invention;
fig. 3 is a schematic structural diagram of an example retrieving device for ophthalmic images according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of a server according to a fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the steps as a sequential process, many of the steps can be performed in parallel, concurrently or simultaneously. In addition, the order of the steps may be rearranged. A process may be terminated when its operations are completed, but may have additional steps not included in the figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc.
Furthermore, the terms "first," "second," and the like may be used herein to describe various orientations, actions, steps, elements, or the like, but the orientations, actions, steps, or elements are not limited by these terms. These terms are only used to distinguish one direction, action, step or element from another direction, action, step or element. For example, the first deep learning model may be referred to as a second deep learning model, and similarly, the second deep learning model may be referred to as a first deep learning model, without departing from the scope of the present application. Both the first deep learning model and the second deep learning model are deep learning models, but they are not the same deep learning model. The terms "first", "second", etc. are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Example one
Fig. 1 is a flowchart of an example retrieval method for an ophthalmic image according to the first embodiment of the present invention. The embodiment is applicable to the case where a user performs online retrieval of ophthalmic diseases, and the method specifically includes the following steps:
step 100, obtaining an eye image of a current user.
In this embodiment, an eye image of the user is first acquired. The eye image is generally a digital ophthalmic image; imaging modalities include ocular surface color photography, fundus color photography, optical coherence tomography (OCT), anterior segment OCT (AS-OCT), optical coherence tomography angiography (OCTA), in vivo corneal confocal microscopy (IVCM), fluorescein fundus angiography (FFA), and indocyanine green angiography (ICGA). In this embodiment, the digital ophthalmic images include fundus, corneal nerve and OCT images. Imaging devices — among them the fundus camera, the slit lamp, and optical coherence tomography (OCT) — output digital images by observing the morphology of ocular tissue structures such as blood vessels, nerves, the cornea, the lens and the iris. Digital ophthalmic images produced by different imaging devices differ in resolution and imaged region and can be used to diagnose different disease types; for example, optical coherence tomography of the posterior segment of the eye has important value in the clinical examination and diagnosis of retinal diseases, macular diseases, optic nerve diseases, glaucoma and the like.
Step 110, obtaining a preset feature image from the eye image according to a user requirement by using a first deep learning model, wherein the preset feature image highlights one of a plurality of local features in the eye image.
In this embodiment, the eye image contains a plurality of local features, including blood vessels, the cornea, or the anterior chamber angle. The first deep learning model is a deep learning model based on the Context Encoder Network (CE-Net) and specifically includes a feature encoding module, a context semantic extraction module and a feature decoding module.
The feature encoding module extracts local features using a pre-trained feature extraction network. It adopts the pre-trained ResNet-34 as the encoder, retains the first four feature extraction blocks of the residual network, and discards the average pooling layer and the fully connected layer. The ResNet feature extractor adds a shortcut (skip) connection mechanism, which avoids vanishing gradients and accelerates network convergence. The feature encoding module is mainly used to mark the plurality of local features of the eye feature image and distinguish them from other local features. The context semantic extraction module extracts high-order feature information from the local features using densely connected atrous (dilated) convolution operations and multi-scale pooling operations. It consists of a dense atrous convolution module and a residual multi-scale pooling module: the dense atrous convolution module has four cascaded branches, and the residual multi-scale pooling module contains four convolution kernels of different sizes, so its four-layer output comprises feature maps at different scales. Context semantics are extracted by taking the local features to be extracted from the marked eye feature image and obtaining high-order feature vectors of those local features. The feature decoding module fuses the information extracted by the encoding module and the context semantic extraction module to obtain a segmentation result; its aim is to recover the high-order feature vectors extracted by the feature encoder and the context semantic module. Skip connections are used to pass detailed information from the encoder to the decoder, compensating for the information loss caused by consecutive pooling and strided convolution operations, while a simple, highly scalable transposed-convolution decoding module is used to improve decoding performance.
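The effect of cascading atrous branches with growing dilation rates can be illustrated with a minimal sketch (NumPy, 1-D, hand-rolled valid-mode convolution; the three-tap kernel and the dilation rates 1, 2 and 4 are illustrative assumptions, not the actual CE-Net configuration):

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """Valid-mode 1-D atrous (dilated) convolution: kernel taps are `dilation` apart."""
    k = len(kernel)
    span = (k - 1) * dilation + 1  # receptive field of this single layer
    return np.array([
        sum(kernel[j] * x[i + j * dilation] for j in range(k))
        for i in range(len(x) - span + 1)
    ])

x = np.arange(16, dtype=float)
kern = np.array([1.0, 1.0, 1.0])

# Cascaded layers with growing dilation rates, as in a dense atrous block:
y1 = dilated_conv1d(x, kern, dilation=1)   # receptive field 3
y2 = dilated_conv1d(y1, kern, dilation=2)  # cumulative receptive field 7
y3 = dilated_conv1d(y2, kern, dilation=4)  # cumulative receptive field 15
print(len(x), len(y1), len(y2), len(y3))   # 16 14 10 2
```

Each cascaded stage widens the receptive field without extra pooling, which is why the output shrinks by the span of each layer; in the real 2-D module, padding keeps the spatial size fixed.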
For example, taking the blood-vessel local feature: the deep learning model based on the context encoder network extracts the vessel features in the eye image, a unified binarization operation is performed on the eye image, and after the pooling and convolution calculations the pixels matching vessel pixel values are retained while the remaining pixels are discarded, yielding an eye image containing only the vessel local feature, which helps improve retrieval precision and accuracy at retrieval time.
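The binarization step above can be sketched as a simple intensity threshold (a toy NumPy example; the patch values and the cutoff of 100 are assumptions for illustration — the real pipeline derives the vessel mask from the segmentation network, not a fixed threshold):

```python
import numpy as np

# Toy grayscale patch: dark vessel pixels on a brighter background (0-255).
patch = np.array([
    [200, 190,  40, 195],
    [185,  35,  30, 200],
    [ 45,  38, 190, 210],
], dtype=np.uint8)

threshold = 100                                # assumed vessel-intensity cutoff
vessel_mask = patch < threshold                # binarization: True where a pixel matches a vessel
vessel_only = np.where(vessel_mask, patch, 0)  # discard everything that is not vessel

print(int(vessel_mask.sum()))  # 5 vessel pixels retained
```

The resulting `vessel_only` image carries only the local feature of interest, so subsequent similarity comparison is not diluted by background pixels.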
Step 120, inputting the preset feature image into a pre-trained second deep learning model to obtain a matched sample image and case description.
In this embodiment, the second deep learning model is a Triplet model. The local feature image obtained in step 110 contains a high-dimensional feature vector on which similarity cannot be computed directly, so the image's feature vector is further fed into the Triplet network structure to extract a low-dimensional discrete hash code, and retrieval is then performed with this discrete hash code. A newly input eye image (containing a specific lesion region) is passed through the same CE-Net and Triplet models to obtain its hash code; the Hamming distance between this hash code and the hash codes of the samples in the database is computed; and the sample image with the highest similarity (also containing a similar lesion region) is returned together with its case description, which records the sample's condition, diagnostic method and specific treatment process. Example retrieval is thus realized by extracting the local features of the eye image.
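The Hamming-distance lookup described above can be sketched as follows (hypothetical 8-bit codes and sample names; real codes of length K would come from the Triplet network):

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between two binary hash codes."""
    return int(np.count_nonzero(a != b))

# Hypothetical K=8-bit hash codes standing in for Triplet-network outputs.
database = {
    "sample_A": np.array([1, 0, 1, 1, 0, 0, 1, 0]),
    "sample_B": np.array([0, 1, 0, 0, 1, 1, 0, 1]),
    "sample_C": np.array([1, 0, 1, 0, 0, 0, 1, 0]),
}
query = np.array([1, 0, 1, 1, 0, 0, 1, 1])  # code of the newly input eye image

# Return the stored sample whose code is nearest to the query in Hamming space.
best = min(database, key=lambda name: hamming(query, database[name]))
print(best)  # sample_A (distance 1, versus 7 and 2)
```

In a deployment, the returned key would index the sample image and its case description; binary codes make this scan cheap even over large databases, since the distance is a bit-count.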
The embodiment of the invention discloses an example retrieval method for an ophthalmic image, including the following steps: acquiring an eye image of a current user; acquiring a preset feature image from the eye image according to a user requirement by using a first deep learning model, where the preset feature image highlights one of a plurality of local features in the eye image; and inputting the preset feature image into a pre-trained second deep learning model to obtain a matched sample image and case description. By extracting the local features of the ophthalmic image, recognizing them in the deep learning model, and outputting the recognition result for the local features, the method solves the prior-art problem that directional retrieval cannot be performed using local features of an ophthalmic image, improves the precision and accuracy of ophthalmic image retrieval, and improves the user experience.
Example two
Fig. 2 is a flowchart of an example retrieval method for an ophthalmic image according to the second embodiment of the present invention. The embodiment is applicable to the case where a user performs online retrieval of ophthalmic diseases, and the method specifically includes the following steps:
step 200, acquiring a sample image and a case description thereof.
In this embodiment, sample images and case descriptions of historical users are acquired and a database is established. After the user's current eye image is acquired, the preset deep learning model is trained according to the local features of the eye image to be retrieved, yielding a trained model; the user's current eye image is then input into the trained deep learning model, which outputs the closest sample image and its case description, achieving the purpose of retrieval.
Step 210, establishing a deep learning model and training it with the sample images to obtain a trained deep learning model.
In this embodiment, the deep learning model includes a CE-Net network and a Triplet model. The CE-Net structural segmentation model maps the features of the original image into features that highlight the lesion region, with the feature dimensionality unchanged; the purpose of segmenting the image structure in advance is to mine the information of the lesion region. The Triplet model further maps the high-dimensional structural features into low-dimensional discrete hash codes, so that distance calculation can be performed in Hamming space to judge the similarity of images. The length K of a typical discrete hash code is much smaller than the dimensionality of the structural features.
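The dimensionality reduction from structural features to a K-bit code can be sketched with a stand-in for the learned mapping (a fixed random projection followed by sign binarization; the sizes D=512 and K=48 are illustrative assumptions — the patent's Triplet network learns this mapping rather than drawing it at random):

```python
import numpy as np

rng = np.random.default_rng(0)

D, K = 512, 48                       # high-dim structural feature vs. short hash (K << D)
feature = rng.standard_normal(D)     # stand-in for a CE-Net structural feature vector

# Illustrative stand-in for the learned Triplet mapping: project to K dimensions,
# then binarize by sign to obtain a K-bit discrete hash code.
projection = rng.standard_normal((K, D))
hash_code = (projection @ feature > 0).astype(np.uint8)

print(hash_code.shape)  # (48,)
```

The 48-bit code supports fast Hamming-distance comparison, whereas the 512-dimensional real-valued feature would require costly floating-point distance computation per sample.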
In this embodiment, the sample images are used to train the Triplet model so that it learns, from the structural features, hash codes suitable for similarity calculation. The Triplet model takes three images as input simultaneously, passes them through three networks of identical structure that share weights, and trains those shared weights with a triplet loss. The three images are related as follows: one image is chosen as the anchor, and a related image and an unrelated image are then selected for training. "Related" and "unrelated" are defined by whether the images belong to the same class — for example, both exhibit a certain disease, or both contain the same lesion region — which must be labeled in advance. Training the CE-Net model likewise requires class labels marking the lesion region. This sample preparation is a precondition and is not described in detail here. The triplet loss of the Triplet model is designed around relatedness: if two images are related, their hash codes should be close to each other; if unrelated, their hash codes should be far apart.
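The pull-together/push-apart behavior can be sketched with the standard hinge-form triplet loss (toy 3-dimensional embeddings and a margin of 1.0, both illustrative assumptions):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-form triplet loss: zero when the unrelated (negative) sample is
    already at least `margin` farther from the anchor than the related one."""
    d_pos = float(np.sum((anchor - positive) ** 2))  # anchor-to-related distance
    d_neg = float(np.sum((anchor - negative) ** 2))  # anchor-to-unrelated distance
    return max(0.0, d_pos - d_neg + margin)

anchor   = np.array([0.9, 0.1, 0.8])  # embedding of the chosen image
positive = np.array([0.8, 0.2, 0.7])  # related image (e.g. same lesion class)
negative = np.array([0.1, 0.9, 0.2])  # unrelated image

# Well-separated triplet: loss is zero, no gradient signal needed.
print(triplet_loss(anchor, positive, negative))
# Violating triplet (roles swapped): large positive loss drives the update.
print(triplet_loss(anchor, negative, positive))
```

Minimizing this loss over labeled triplets is what makes codes of related images end up close (small Hamming distance after binarization) and codes of unrelated images end up far apart.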
Step 220, obtaining the eye image of the current user.
Step 230, obtaining a preset feature image from the eye image according to a user requirement by using a first deep learning model, wherein the preset feature image highlights one of a plurality of local features in the eye image.
Step 240, inputting the preset feature image into a pre-trained second deep learning model to obtain a matched sample image and case description.
In this embodiment, the local feature image obtained in step 230 contains a high-dimensional feature vector on which similarity cannot be computed directly, so the image's feature vector is further fed into the Triplet network structure to extract a low-dimensional discrete hash code, and retrieval is then performed with this discrete hash code. A newly input eye image (containing a specific lesion region) is passed through the same CE-Net and Triplet models to obtain its hash code; the Hamming distance between this hash code and the hash codes of the samples in the database is computed; and the sample image with the highest similarity (also containing a similar lesion region) is returned together with its case description, which records the sample's condition, diagnostic method and specific treatment process. Example retrieval is thus realized by extracting the local features of the eye image.
The embodiment of the invention discloses an example retrieval method for an ophthalmic image, including the following steps: acquiring a sample image and its case description; establishing a deep learning model and training it with the sample image to obtain a trained deep learning model; acquiring an eye image of a current user; acquiring a preset feature image from the eye image according to a user requirement by using a first deep learning model, where the preset feature image highlights one of a plurality of local features in the eye image; and inputting the preset feature image into a pre-trained second deep learning model to obtain a matched sample image and case description. By extracting the local features of the ophthalmic image, recognizing them in the deep learning model, and outputting the recognition result for the local features, the method solves the prior-art problem that directional retrieval cannot be performed using local features of an ophthalmic image, improves the precision and accuracy of ophthalmic image retrieval, and improves the user experience.
EXAMPLE III
The example retrieval device for the ophthalmologic image provided by the embodiment of the invention can implement the example retrieval method for the ophthalmologic image provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method. Fig. 3 is a schematic structural diagram of an example retrieving device 300 for ophthalmic images according to an embodiment of the present invention. Referring to fig. 3, an example retrieving device 300 for an ophthalmic image according to an embodiment of the present invention may specifically include:
a data obtaining module 310, configured to obtain an eye image of a current user;
the data extraction module 320 is configured to obtain a preset feature image in the eye image according to a user requirement by using a first deep learning model, where the preset feature image is one of a plurality of local features in the eye image, which are preferentially highlighted;
the data recognition module 330 is configured to input the preset feature image into a second deep learning model trained in advance to obtain a matched sample image and case description.
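The two-stage flow carried by these modules — isolate the requested local feature first, then match the resulting feature image against a gallery of sample images with case descriptions — can be sketched as follows. This is an illustrative sketch, not the patented implementation: the `segmenters` callables stand in for the first deep learning model, `embed` stands in for the second model's feature extractor, and all names are hypothetical.

```python
import numpy as np

def extract_local_feature(eye_image, feature_name, segmenters):
    """First stage: isolate the requested local feature (e.g. vessels,
    cornea, anterior chamber angle) from the eye image.  Each entry of
    `segmenters` stands in for the first deep learning model and
    returns a 0/1 mask for its feature."""
    mask = segmenters[feature_name](eye_image)
    return eye_image * mask  # preset feature image: only the feature kept

def retrieve_matches(feature_image, embed, gallery, top_k=3):
    """Second stage: embed the preset feature image and rank the
    gallery of (sample-image embedding, case description) pairs by
    cosine similarity, returning the best matches."""
    q = embed(feature_image)
    scored = []
    for sample_vec, case_desc in gallery:
        sim = float(np.dot(q, sample_vec) /
                    (np.linalg.norm(q) * np.linalg.norm(sample_vec) + 1e-8))
        scored.append((sim, case_desc))
    scored.sort(key=lambda t: t[0], reverse=True)
    return scored[:top_k]
```

In the patented system both stages are deep networks; here simple callables are used so the control flow between the modules stays explicit.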
Further, before the acquiring of the eye image of the current user, the method further includes:
acquiring a sample image and case description thereof;
and establishing a deep learning model and training the deep learning model by using the sample image to obtain the trained deep learning model.
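As a hedged illustration of this offline step, the gallery consulted at retrieval time can be built by running every sample image through the trained model and storing the L2-normalized embedding alongside its case description. The `embed` callable below stands in for the trained deep learning model; all names are illustrative assumptions, not the patent's own API.

```python
import numpy as np

def l2_normalize(v, eps=1e-8):
    """Scale a feature vector to unit length so later dot products
    behave as cosine similarities."""
    return v / (np.linalg.norm(v) + eps)

def build_gallery(samples, embed):
    """Offline indexing: embed every (sample image, case description)
    pair and keep the normalized vector with its description."""
    return [(l2_normalize(embed(img)), desc) for img, desc in samples]
```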
Further, the local features include: blood vessels, the cornea, or the anterior chamber angle.
Further, the deep learning model comprises a feature encoding module, a context semantic extraction module and a feature decoding module.
Further, the feature encoding module extracts local features by using a pre-trained feature extraction network.
Further, the context semantic extraction module extracts high-order feature information from the local features by using densely connected dilated (atrous) convolution operations and multi-scale pooling operations.
Further, the feature decoding module fuses the information extracted by the encoding module and the context semantic extraction module to obtain a segmentation result.
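The context semantic extraction step can be illustrated with a minimal NumPy sketch: a 3x3 convolution whose taps are spread `dilation` pixels apart enlarges the receptive field without adding parameters, and chaining several such convolutions with dense links aggregates context at multiple scales. This is a simplified single-channel stand-in for the patented module; the averaging kernel and the use of summation in place of channel concatenation are illustrative assumptions.

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation):
    """'Same'-padded 2-D convolution with a 3x3 kernel whose taps are
    spaced `dilation` pixels apart (dilated/atrous convolution)."""
    pad = dilation                 # same-size output for a 3x3 kernel
    eff = 2 * dilation + 1         # effective receptive-field width
    xp = np.pad(x, pad)
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            patch = xp[i:i + eff:dilation, j:j + eff:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out

def dense_dilated_block(x, dilations=(1, 2, 4)):
    """Densely linked dilated convolutions: each stage consumes an
    aggregate of all earlier feature maps (the mean stands in for
    channel concatenation in this single-channel sketch)."""
    kernel = np.full((3, 3), 1.0 / 9.0)  # averaging kernel as stand-in
    feats = [x.astype(float)]
    for d in dilations:
        feats.append(dilated_conv2d(np.mean(feats, axis=0), kernel, d))
    return feats  # input plus one feature map per dilation rate
```

With dilation rates 1, 2 and 4 the stacked receptive field grows quickly while every output map keeps the input's spatial size, which is what lets the decoder fuse them with the encoder features.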
The embodiment of the invention discloses an example retrieval device for an ophthalmic image, which comprises: a data acquisition module for acquiring an eye image of a current user; a data extraction module for acquiring a preset feature image in the eye image by using a first deep learning model according to a user requirement, wherein the preset feature image is an image in which one of a plurality of local features in the eye image is preferentially highlighted; and a data recognition module for inputting the preset feature image into a second, pre-trained deep learning model to obtain a matched sample image and case description. According to the example retrieval device for an ophthalmic image provided by the embodiment of the invention, the local features of the ophthalmic image are extracted, recognized by the deep learning model, and the recognition result for the local features is output, thereby solving the prior-art problem that directed retrieval cannot be performed on the local features of an ophthalmic image, improving the precision and accuracy of ophthalmic image retrieval, and improving the user experience.
Example four
Fig. 4 is a schematic structural diagram of a computer server according to an embodiment of the present invention. As shown in fig. 4, the computer server includes a memory 410 and a processor 420; the number of processors 420 in the computer server may be one or more, and one processor 420 is taken as an example in fig. 4. The memory 410 and the processor 420 in the device may be connected by a bus or other means; fig. 4 illustrates the connection by a bus as an example.
The memory 410, as a computer-readable storage medium, is used for storing software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the example retrieval method for an ophthalmic image in the embodiment of the present invention (for example, the data acquisition module 310, the data extraction module 320, and the data recognition module 330 in the example retrieval device 300 for an ophthalmic image). The processor 420 executes the various functional applications and data processing of the apparatus/terminal/device by running the software programs, instructions, and modules stored in the memory 410, thereby implementing the example retrieval method for an ophthalmic image.
Wherein the processor 420 is configured to run the computer program stored in the memory 410, and implement the following steps:
acquiring an eye image of a current user;
acquiring a preset feature image in the eye image by using a first deep learning model according to a user requirement, wherein the preset feature image is an image in which one of a plurality of local features in the eye image is preferentially highlighted;
and inputting the preset feature image into a second, pre-trained deep learning model to obtain a matched sample image and case description.
In one embodiment, the computer program of the computer device provided in the embodiments of the present invention is not limited to the above method operations, and may also perform related operations in the example retrieval method of the ophthalmic image provided in any embodiment of the present invention.
The memory 410 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the terminal, and the like. Further, the memory 410 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, the memory 410 may further include memory located remotely from the processor 420, which may be connected to devices/terminals/devices through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The embodiment of the invention discloses an example retrieval server for an ophthalmic image, which is used for executing the following method: acquiring an eye image of a current user; acquiring a preset feature image in the eye image by using a first deep learning model according to a user requirement, wherein the preset feature image is an image in which one of a plurality of local features in the eye image is preferentially highlighted; and inputting the preset feature image into a second, pre-trained deep learning model to obtain a matched sample image and case description. According to the example retrieval server for an ophthalmic image provided by the embodiment of the invention, the local features of the ophthalmic image are extracted, recognized by the deep learning model, and the recognition result for the local features is output, thereby solving the prior-art problem that directed retrieval cannot be performed on the local features of an ophthalmic image, improving the precision and accuracy of ophthalmic image retrieval, and improving the user experience.
Example five
An embodiment of the present invention further provides a storage medium containing computer-executable instructions, which when executed by a computer processor, perform an example retrieval method for an ophthalmic image, the method including:
acquiring an eye image of a current user;
acquiring a preset feature image in the eye image by using a first deep learning model according to a user requirement, wherein the preset feature image is an image in which one of a plurality of local features in the eye image is preferentially highlighted;
and inputting the preset feature image into a second, pre-trained deep learning model to obtain a matched sample image and case description.
Of course, the storage medium containing the computer-executable instructions provided by the embodiments of the present invention is not limited to the above-described method operations, and may also perform related operations in an example retrieval method for an ophthalmic image provided by any embodiments of the present invention.
The computer-readable storage media of embodiments of the invention may take any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a storage medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk or C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages.
The embodiment of the invention discloses an example retrieval storage medium for an ophthalmic image, which is used for executing the following method: acquiring an eye image of a current user; acquiring a preset feature image in the eye image by using a first deep learning model according to a user requirement, wherein the preset feature image is an image in which one of a plurality of local features in the eye image is preferentially highlighted; and inputting the preset feature image into a second, pre-trained deep learning model to obtain a matched sample image and case description. According to the example retrieval storage medium for an ophthalmic image provided by the embodiment of the invention, the local features of the ophthalmic image are extracted, recognized by the deep learning model, and the recognition result for the local features is output, thereby solving the prior-art problem that directed retrieval cannot be performed on the local features of an ophthalmic image, improving the precision and accuracy of ophthalmic image retrieval, and improving the user experience.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. An example retrieval method of an ophthalmologic image, comprising:
acquiring an eye image of a current user;
acquiring a preset feature image in the eye image by using a first deep learning model according to a user requirement, wherein the preset feature image is an image in which one of a plurality of local features in the eye image is preferentially highlighted;
and inputting the preset feature image into a second, pre-trained deep learning model to obtain a matched sample image and case description.
2. The method of claim 1, wherein before the acquiring of the eye image of the current user, the method further comprises:
acquiring a sample image and case description thereof;
and establishing a deep learning model and training the deep learning model by using the sample image to obtain the trained deep learning model.
3. The method of claim 1, wherein the local features comprise: blood vessels, the cornea, or the anterior chamber angle.
4. The method of claim 1, wherein the first deep learning model comprises a feature encoding module, a context semantic extraction module and a feature decoding module.
5. The method of claim 1, wherein the feature encoding module extracts local features by using a pre-trained feature extraction network.
6. The method of claim 1, wherein the context semantic extraction module extracts high-order feature information from the local features by using densely connected dilated convolution operations and multi-scale pooling operations.
7. The method of claim 1, wherein the feature decoding module fuses the information extracted by the encoding module and the context semantic extraction module to obtain a segmentation result.
8. An example retrieval device for an ophthalmologic image, comprising:
the data acquisition module is used for acquiring the eye image of the current user;
the data extraction module is used for acquiring a preset feature image in the eye image by using a first deep learning model according to a user requirement, wherein the preset feature image is an image in which one of a plurality of local features in the eye image is preferentially highlighted;
and the data recognition module is used for inputting the preset feature image into a second, pre-trained deep learning model to obtain a matched sample image and case description.
9. A server, characterized in that the server comprises:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the example retrieval method for an ophthalmic image according to any one of claims 1-7.
10. A computer-readable storage medium on which a computer program is stored, which, when being executed by a processor, implements an example retrieval method for an ophthalmic image according to any one of claims 1 to 7.
CN202010249627.6A 2020-04-01 2020-04-01 Instance retrieval method, device, server and storage medium for ophthalmic image Active CN111428737B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010249627.6A CN111428737B (en) 2020-04-01 2020-04-01 Instance retrieval method, device, server and storage medium for ophthalmic image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010249627.6A CN111428737B (en) 2020-04-01 2020-04-01 Instance retrieval method, device, server and storage medium for ophthalmic image

Publications (2)

Publication Number Publication Date
CN111428737A true CN111428737A (en) 2020-07-17
CN111428737B CN111428737B (en) 2024-01-19

Family

ID=71550455

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010249627.6A Active CN111428737B (en) 2020-04-01 2020-04-01 Instance retrieval method, device, server and storage medium for ophthalmic image

Country Status (1)

Country Link
CN (1) CN111428737B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112579750A (en) * 2020-11-30 2021-03-30 百度健康(北京)科技有限公司 Similar medical record retrieval method, device, equipment and storage medium
CN113343943A (en) * 2021-07-21 2021-09-03 西安电子科技大学 Eye image segmentation method based on sclera region supervision
CN113554641A (en) * 2021-07-30 2021-10-26 江苏盛泽医院 Pediatric pharyngeal image acquisition method and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109829446A (en) * 2019-03-06 2019-05-31 百度在线网络技术(北京)有限公司 Eye fundus image recognition methods, device, electronic equipment and storage medium
CN110427509A (en) * 2019-08-05 2019-11-08 山东浪潮人工智能研究院有限公司 A kind of multi-scale feature fusion image Hash search method and system based on deep learning
CN110751637A (en) * 2019-10-14 2020-02-04 北京至真互联网技术有限公司 Diabetic retinopathy detection system, method, equipment and training system
CN110837572A (en) * 2019-11-15 2020-02-25 北京推想科技有限公司 Image retrieval method and device, readable storage medium and electronic equipment



Also Published As

Publication number Publication date
CN111428737B (en) 2024-01-19


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant