CN110837572A - Image retrieval method and device, readable storage medium and electronic equipment - Google Patents

Image retrieval method and device, readable storage medium and electronic equipment

Info

Publication number
CN110837572A
CN110837572A (application CN201911120526.2A)
Authority
CN
China
Prior art keywords
image
focus
vector
retrieved
lesion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911120526.2A
Other languages
Chinese (zh)
Other versions
CN110837572B (en)
Inventor
尹思源
张欢
陈宽
王少康
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Infervision Medical Technology Co Ltd
Original Assignee
Infervision Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Infervision Co Ltd filed Critical Infervision Co Ltd
Priority to CN201911120526.2A
Publication of CN110837572A
Application granted
Publication of CN110837572B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50 - Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F 16/58 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/583 - Retrieval characterised by using metadata automatically derived from the content
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00 - ICT specially adapted for the handling or processing of medical images
    • G16H 30/20 - ICT specially adapted for handling medical images, e.g. DICOM, HL7 or PACS

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Library & Information Science (AREA)
  • Public Health (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Epidemiology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention provides an image retrieval method and device, a readable storage medium and electronic equipment. The method comprises the following steps: acquiring an image to be retrieved and edge segmentation information of a lesion in the image to be retrieved; extracting machine learning features of the lesion from the image to be retrieved according to the edge segmentation information of the lesion to obtain a machine learning vector; extracting radiomics features of the lesion from the image to be retrieved according to the edge segmentation information of the lesion to obtain a radiomics vector; combining the machine learning vector and the radiomics vector to obtain a feature vector corresponding to the lesion; and outputting a retrieval result of similar lesions according to the feature vector corresponding to the lesion. The image retrieval method can quickly retrieve similar lesions. By combining the machine learning vector and the radiomics vector, the feature extraction of the lesion is more comprehensive, the retrieved similar lesions are more accurate, the retrieval accuracy is improved, and doctors can be helped to give more reasonable diagnosis and treatment recommendations.

Description

Image retrieval method and device, readable storage medium and electronic equipment
Technical Field
The invention relates to the technical field of image retrieval, in particular to an image retrieval method, an image retrieval device, a readable storage medium and electronic equipment.
Background
Lesions with similar morphology often have similar pathological properties. Retrieving lesions of similar morphology and comparing their pathological results, diagnosis and treatment modes and the like therefore allows a doctor to assess the current lesion more clearly and comprehensively and to give diagnosis and treatment recommendations more accurately.
For retrieving similar lesions, the traditional scheme is to perform a keyword search on a database to find matching results with the same keywords. However, a doctor's perception of the lesion characteristics may deviate, making the keywords inaccurate, the search results inaccurate and the diagnosis biased. Moreover, describing a lesion with a large number of keywords is laborious and affects working efficiency.
Disclosure of Invention
In view of this, embodiments of the present invention provide an image retrieval method and device, a readable storage medium and an electronic device, which can quickly retrieve similar lesions, improve retrieval accuracy, and help a doctor give more reasonable diagnosis and treatment recommendations.
According to a first aspect of embodiments of the present invention, there is provided an image retrieval method, including: acquiring an image to be retrieved and edge segmentation information of a lesion in the image to be retrieved; extracting machine learning features of the lesion from the image to be retrieved according to the edge segmentation information of the lesion to obtain a machine learning vector; extracting radiomics features of the lesion from the image to be retrieved according to the edge segmentation information of the lesion to obtain a radiomics vector; combining the machine learning vector and the radiomics vector to obtain a feature vector corresponding to the lesion; and outputting a retrieval result of similar lesions according to the feature vector corresponding to the lesion.
In an embodiment of the present invention, the acquiring of the image to be retrieved and the edge segmentation information of the lesion in the image to be retrieved includes: acquiring position information of the lesion in the image to be retrieved; and performing edge extraction and segmentation on the lesion according to the position information to acquire the edge segmentation information of the lesion.
In an embodiment of the present invention, the extracting of the machine learning features of the lesion includes: extracting the machine learning features of the lesion through a deep learning model, wherein the deep learning model is trained in one or more of the following ways: performing data augmentation on the lesion by a data enhancement method to obtain augmented training samples, and training the deep learning model with the augmented training samples; labeling the type of the lesion to obtain labeled training samples, and training the deep learning model with the labeled training samples; and training the deep learning model with a triplet loss function.
In an embodiment of the present invention, the data augmentation of the lesion by the data enhancement method includes: augmenting the lesion data by translation, rotation, flipping, scaling, slight brightness adjustment and/or slight deformation.
In an embodiment of the present invention, the combining of the machine learning vector and the radiomics vector to obtain the feature vector corresponding to the lesion includes: concatenating the machine learning vector and the radiomics vector to obtain the feature vector corresponding to the lesion.
In an embodiment of the present invention, the outputting of a retrieval result of similar lesions according to the feature vector corresponding to the lesion includes: performing a distance calculation between the feature vector corresponding to the lesion and the feature vectors of lesions in a database to obtain a distance calculation result; determining, according to the distance calculation result, the similarity between the lesion in the image to be retrieved and the lesions in the database; and outputting the retrieval result of the similar lesion according to the similarity.
In an embodiment of the present invention, the retrieval result includes image information and text information, where the image information includes an image of the similar lesion, and the text information includes a pathological diagnosis result and/or a diagnosis and treatment mode of the similar lesion.
According to a second aspect of embodiments of the present invention, there is provided an image retrieval apparatus including: an acquisition module for acquiring an image to be retrieved and edge segmentation information of a lesion in the image to be retrieved; a first extraction module for extracting machine learning features of the lesion from the image to be retrieved according to the edge segmentation information of the lesion to obtain a machine learning vector; a second extraction module for extracting radiomics features of the lesion from the image to be retrieved according to the edge segmentation information of the lesion to obtain a radiomics vector; a merging module for combining the machine learning vector and the radiomics vector to obtain a feature vector corresponding to the lesion; and an output module for outputting a retrieval result of similar lesions according to the feature vector corresponding to the lesion.
According to a third aspect of embodiments of the present invention, there is provided a computer-readable storage medium storing a computer program for executing any one of the image retrieval methods described above.
According to a fourth aspect of embodiments of the present invention, there is provided an electronic device, including: a processor; and a memory for storing processor-executable instructions, wherein the processor is configured to execute any one of the image retrieval methods described above.
According to the technical solutions provided by the embodiments of the present invention, the machine learning features and the radiomics features of the lesion are extracted from the image to be retrieved according to the edge segmentation information of the lesion to obtain a machine learning vector and a radiomics vector of the lesion, the two vectors are combined to obtain a feature vector corresponding to the lesion, and a retrieval result of similar lesions is output according to the feature vector, so that similar lesions can be retrieved rapidly.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flow chart illustrating an image retrieval method according to an embodiment of the present invention.
Fig. 2 is a schematic flow chart of an image retrieval method according to another embodiment of the present invention.
Fig. 3 is a block diagram of an image retrieval apparatus according to an embodiment of the present invention.
Fig. 4 is a block diagram illustrating an obtaining module of an image retrieving device according to another embodiment of the present invention.
Fig. 5 is a block diagram of an output module of an image retrieval apparatus according to another embodiment of the present invention.
Fig. 6 is a block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a schematic flow chart illustrating an image retrieval method according to an embodiment of the present invention. The method may be performed by a computer device (e.g., a server). As shown in fig. 1, the method includes the following.
S110: acquiring the image to be retrieved and the edge segmentation information of the lesion in the image to be retrieved.
The image to be retrieved may be a medical image of the patient such as a computed tomography (CT), magnetic resonance imaging (MRI), computed radiography (CR) or digital radiography (DR) image, which is not limited by the present invention.
The lesion to be retrieved is segmented from the background of the image, so that the edge segmentation information of the lesion can be obtained; the edge segmentation information may be the coordinate information of the lesion edge. For a multi-slice image, 3D edge segmentation information can be obtained; for a single-slice image, 2D edge segmentation information can be obtained, which is not limited by the present invention. By separating the lesion from the background in the image, the background information in the image can be effectively masked out, which improves the accuracy of lesion feature extraction, in particular the accuracy of radiomics feature extraction.
S120: extracting the machine learning features of the lesion from the image to be retrieved according to the edge segmentation information of the lesion to obtain a machine learning vector.
Specifically, the machine learning features of the lesion may be extracted through a machine learning model that outputs a machine learning vector; the machine learning model may include a deep learning model, a random forest, a support vector machine or a decision tree, which is not limited by the present invention. A lesion image may be extracted from the image to be retrieved according to the edge segmentation information of the lesion and then input into the machine learning model for machine learning feature extraction; alternatively, the image to be retrieved and the edge segmentation information of the lesion may be input directly into the machine learning model for machine learning feature extraction, which is not specifically limited by the present invention.
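For illustration only (not the patent's specific implementation), the Python sketch below masks out the background using the lesion's edge segmentation and extracts a machine learning vector with a pre-trained CNN backbone; the ResNet-18 encoder, the 224×224 input size and torchvision ≥ 0.13 are assumptions.

```python
import numpy as np
import torch
import torchvision

def lesion_feature_vector(image: np.ndarray, lesion_mask: np.ndarray) -> np.ndarray:
    """Mask out the background and extract a deep feature vector for the lesion.

    image:       2D grayscale slice, float array scaled to [0, 1]
    lesion_mask: binary mask of the same shape, 1 inside the lesion edge
    """
    # Mask the background so that only the lesion region contributes to the features.
    masked = image * lesion_mask

    # Crop a bounding box around the lesion and resize it to the encoder input size.
    ys, xs = np.nonzero(lesion_mask)
    crop = masked[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    crop_t = torch.from_numpy(crop).float()[None, None]              # (1, 1, H, W)
    crop_t = torch.nn.functional.interpolate(crop_t, size=(224, 224))
    crop_t = crop_t.repeat(1, 3, 1, 1)                               # replicate to 3 channels

    # Pre-trained backbone with the classification head removed.
    backbone = torchvision.models.resnet18(weights="IMAGENET1K_V1")
    backbone.fc = torch.nn.Identity()
    backbone.eval()
    with torch.no_grad():
        return backbone(crop_t).squeeze(0).numpy()                   # the machine learning vector
```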
S130: extracting the radiomics features of the lesion from the image to be retrieved according to the edge segmentation information of the lesion to obtain a radiomics vector.
Radiomics extracts quantitative features from images at high throughput, converting medical images into high-dimensional, mineable data for subsequent big-data analysis, thereby providing prognostic and diagnostic value for various diseases, in particular malignant tumors, and providing decision support for disease diagnosis and treatment. Radiomics features therefore have considerable medical significance.
Specifically, the radiomics features of the lesion may be extracted through a machine learning model, for example a conventional machine learning model or a deep learning network model; the specific machine learning model is not limited by the present invention. The machine learning model used for radiomics feature extraction may be the same as or different from the machine learning model used for machine learning feature extraction, which is not limited by the present invention.
A lesion image may be extracted from the image to be retrieved according to the edge segmentation information of the lesion and then input into the machine learning model for radiomics feature extraction; alternatively, the image to be retrieved and the edge segmentation information of the lesion may be input directly into the machine learning model for radiomics feature extraction, which is not specifically limited by the present invention.
The radiomics features may include shape features, first-order histogram features, second-order histogram or texture features and the like, which is not specifically limited by the present invention.
Morphological (shape) features include features that describe the size of the lesion, such as volume, surface area, maximum diameter in two and three dimensions, and effective diameter (the diameter of a sphere having the same volume as the lesion), as well as features that describe how closely the lesion resembles a sphere, such as surface-to-volume ratio, compactness, eccentricity and sphericity.
First-order histogram features describe the distribution of voxel intensities within the lesion, without considering spatial relationships between voxels, and can be computed by histogram analysis; they include the mean, median, minimum, maximum, standard deviation, skewness, kurtosis and the like.
Second-order histogram features, or texture features, characterize the spatial distribution of voxel intensity levels. Image texture refers to the perceived or measurable spatial variation in intensity level and can be regarded as a composite of the local grey-level features of the image as visually perceived. The second-order histogram or texture features include the grey-level co-occurrence matrix, the grey-level run-length matrix, the grey-level size-zone matrix and the neighbourhood grey-tone difference matrix.
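For illustration only, the sketch below computes some of the first-order histogram features named above from the voxels inside the lesion mask; the particular selection and ordering of features are assumptions, not the patent's full radiomics feature set.

```python
import numpy as np
from scipy import stats

def first_order_features(image: np.ndarray, lesion_mask: np.ndarray) -> np.ndarray:
    """First-order histogram features of the voxel intensities inside the lesion."""
    voxels = image[lesion_mask > 0].astype(np.float64)
    return np.array([
        voxels.mean(),           # mean
        np.median(voxels),       # median
        voxels.min(),            # minimum
        voxels.max(),            # maximum
        voxels.std(),            # standard deviation
        stats.skew(voxels),      # skewness
        stats.kurtosis(voxels),  # kurtosis
    ])
```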
S140: combining the machine learning vector and the radiomics vector to obtain a feature vector corresponding to the lesion.
By combining the machine learning vector and the radiomics vector, the feature vector corresponding to the lesion contains both machine learning features and radiomics features, so that the extracted lesion features are more comprehensive and more medically meaningful.
Specifically, the machine learning vector and the radiomics vector may be concatenated. For example, if the machine learning vector is a vector X = [x0, x1, ..., xN-2, xN-1]^T of length N and the radiomics vector is a vector Y = [y0, y1, ..., yM-2, yM-1]^T of length M, the concatenated feature vector is a vector Z = [x0, x1, ..., xN-2, xN-1, y0, y1, ..., yM-2, yM-1]^T of length N + M. It should be understood that the vectors may also be combined by weighting, or the machine learning vector and the radiomics vector may be input into a neural network for combination, and the like; the specific manner of combining the vectors is not particularly limited by the present invention.
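A minimal sketch of this concatenation step (NumPy and the vector lengths are assumptions chosen only for illustration):

```python
import numpy as np

machine_learning_vector = np.random.rand(512)   # e.g. output of the deep learning model
radiomics_vector = np.random.rand(107)          # e.g. shape + first-order + texture features

# Concatenated feature vector Z of length N + M, as in the example above.
feature_vector = np.concatenate([machine_learning_vector, radiomics_vector])
assert feature_vector.shape == (512 + 107,)
```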
S150: outputting a retrieval result of similar lesions according to the feature vector corresponding to the lesion.
The retrieval result may include image information and text information. The image information includes an image of the similar lesion, which may be a thumbnail of the similar lesion, an original medical image such as a CT, CR, DR or MRI image containing the similar lesion, or both the thumbnail and its corresponding original medical image. The text information may include a diagnosis report of the similar lesion, such as a pathological diagnosis result and/or a diagnosis and treatment mode, and may further include patient-related information, such as the patient's condition and family or genetic medical history. The doctor can compare the lesion in the image to be diagnosed with the similar lesions in the retrieval result and refer to the pathological diagnosis results, diagnosis and treatment modes and other information of the similar lesions, so as to make a more reasonable clinical diagnosis.
According to the technical solutions provided by this embodiment of the present invention, the machine learning features and the radiomics features of the lesion are extracted from the image to be retrieved according to the edge segmentation information of the lesion to obtain a machine learning vector and a radiomics vector of the lesion, the two vectors are combined to obtain a feature vector corresponding to the lesion, and a retrieval result of similar lesions is output according to the feature vector, so that similar lesions can be retrieved rapidly.
In another embodiment of the present invention, the acquiring of the image to be retrieved and the edge segmentation information of the lesion in the image to be retrieved includes: acquiring position information of the lesion in the image to be retrieved; and performing edge extraction and segmentation on the lesion according to the position information to acquire the edge segmentation information of the lesion.
Specifically, the edge segmentation information may be obtained by automatic edge extraction and segmentation of the lesion. For example, the physician may give only the approximate location of the lesion, e.g., by entering coordinates, by using a dedicated toolbox to mark the coordinates of the lesion location, or by using a CAD or AI tool. A deep learning algorithm or the like is then used to perform 3D or 2D edge extraction and segmentation on the lesion to obtain its edge segmentation information. In this way the physician only needs to give the approximate location of the lesion, which reduces the difficulty of use.
In another embodiment of the present invention, the edge segmentation information of the lesion may also be obtained through manual annotation by the physician. In this case the physician is required to specify the precise extent of the lesion, for example by delineating the lesion in three dimensions, so as to obtain its edge segmentation information.
In another embodiment of the present invention, the extracting of the machine learning features of the lesion includes: extracting the machine learning features of the lesion through a deep learning model, wherein the deep learning model is trained in one or more of the following ways: performing data augmentation on the lesion by a data enhancement method to obtain augmented training samples, and training the deep learning model with the augmented training samples; labeling the type of the lesion to obtain labeled training samples, and training the deep learning model with the labeled training samples; and training the deep learning model with a triplet loss function.
Specifically, the lesion may be augmented by data enhancement methods such as translation, rotation, flipping, scaling, slight brightness adjustment and/or slight deformation, as illustrated in the sketch below. Training the deep learning model with the augmented training samples expands the amount of data when sample data is scarce; at the same time, the enhanced images of a lesion can be used as the images most similar to it when training the deep learning model, which improves the accuracy of deep learning feature extraction.
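A minimal sketch of such augmentation, assuming 2D lesion crops and torchvision transforms (the parameter ranges are illustrative, chosen to keep the changes slight):

```python
import torchvision.transforms as T

# Each transform corresponds to one of the enhancement methods listed above.
augment = T.Compose([
    T.RandomHorizontalFlip(p=0.5),          # flipping
    T.RandomAffine(degrees=15,              # rotation
                   translate=(0.05, 0.05),  # translation
                   scale=(0.9, 1.1)),       # scaling
    T.ColorJitter(brightness=0.1),          # slight brightness adjustment
])

# augmented = augment(lesion_crop)  # lesion_crop: a PIL image or tensor of the lesion region
```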
Specifically, the lesion may be type-labeled according to the result of manual assessment, or according to the classification result of a model such as a deep learning model, which is not limited by the present invention. For example, the deep learning model may be trained on training samples labeled as benign or malignant, so that the deep learning features extracted by the trained model encode whether the lesion is benign or malignant and the output deep learning vector contains this information; the retrieved similar lesions are then consistent with the query lesion in terms of benignity or malignancy. It should be understood that the above is only an exemplary description; the invention can be used not only to retrieve lesions by benign/malignant type but also by other lesion types, and the specific type of labeling is not limited by the present invention.
Training with the triplet loss function proceeds as follows. First, a sample is randomly selected from the training data set; this sample is called the anchor (a). Then a sample of the same class as a and a sample of a different class are randomly selected; these are called the positive sample (p) and the negative sample (n), respectively, so that an (anchor, positive, negative) triplet is formed. Each sample in the triplet is input into the deep learning model to obtain the feature vectors of the three samples. The purpose of the triplet loss function is to make, through learning, the distance between the feature vectors of a and p as small as possible and the distance between the feature vectors of a and n as large as possible, while enforcing a minimum margin α between the distance from a to n and the distance from a to p.
Specifically, the objective of the triplet loss function (triplet loss) used in the training process is:
loss = max(d(a, p) - d(a, n) + α, 0)
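A minimal sketch of this loss in PyTorch (the distance d is assumed to be Euclidean and the margin value is illustrative only):

```python
import torch

def triplet_loss(anchor: torch.Tensor, positive: torch.Tensor,
                 negative: torch.Tensor, margin: float = 0.2) -> torch.Tensor:
    """loss = max(d(a, p) - d(a, n) + margin, 0), averaged over the batch."""
    d_ap = torch.norm(anchor - positive, dim=1)   # d(a, p)
    d_an = torch.norm(anchor - negative, dim=1)   # d(a, n)
    return torch.clamp(d_ap - d_an + margin, min=0).mean()
```

PyTorch's built-in torch.nn.TripletMarginLoss implements the same objective and could be used instead.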
In an embodiment of the present invention, during training of the deep learning model, an enhanced image of a lesion a is taken as p and other images are taken as n, where the enhanced image of the lesion a is an image obtained by translating, rotating, flipping, scaling, slightly adjusting the brightness of, or slightly deforming the lesion a. In this way, enhanced images of the same lesion can be used as the samples of highest similarity to train the deep learning model.
In another embodiment of the present invention, a lesion of the same type as a is taken as p and a lesion of a different type from a is taken as n, where a, p and n may be labeled according to the result of manual assessment or according to the classification result of a model such as a deep learning model, which is not limited by the present invention. In this way, the deep learning model can be trained with lesions of the same type as the samples of second-highest similarity.
In an embodiment of the present invention, the outputting of a retrieval result of similar lesions according to the feature vector corresponding to the lesion includes: performing a distance calculation between the feature vector corresponding to the lesion and the feature vectors of lesions in a database to obtain a distance calculation result; determining, according to the distance calculation result, the similarity between the lesion in the image to be retrieved and the lesions in the database; and outputting the retrieval result of the similar lesion according to the similarity.
The vectors of the lesions in the database are vectors obtained by concatenating their deep learning vectors and radiomics vectors. Specifically, the Euclidean distance, Manhattan distance, Chebyshev distance or Minkowski distance between the feature vector corresponding to the lesion and the feature vectors of the lesions in the database may be calculated; the specific manner of calculating the distance is not limited by the present invention.
The similarity between lesions is determined according to the distance calculation result. For example, the closer the distance, the higher the similarity between two lesions; conversely, the farther the distance, the lower the similarity.
The retrieval result of the similar lesion is output according to the similarity. The similar lesions with the highest similarity may be output, or the similar lesions whose similarity exceeds a certain threshold may be output, which is not specifically limited by the present invention. By comparing the lesion in the image to be diagnosed (i.e., the image to be retrieved) with the m similar lesions of highest similarity and referring to their diagnosis results and treatment modes, the doctor can give more reasonable diagnosis and treatment recommendations.
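For illustration, the sketch below ranks the database lesions by Euclidean distance to the query feature vector and returns the m most similar entries; the database is assumed to be an in-memory array of pre-computed concatenated vectors.

```python
import numpy as np

def retrieve_similar(query_vector: np.ndarray,
                     database_vectors: np.ndarray,  # shape (num_lesions, feature_dim)
                     m: int = 5):
    """Return the indices and distances of the m database lesions closest to the query."""
    # Euclidean distance between the query vector and every lesion vector in the database.
    distances = np.linalg.norm(database_vectors - query_vector, axis=1)
    # A smaller distance means a higher similarity, so take the m smallest distances.
    top_m = np.argsort(distances)[:m]
    return top_m, distances[top_m]
```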
Fig. 2 is a schematic flow chart of an image retrieval method according to another embodiment of the present invention. The method may be performed by a computer device (e.g., a server). As shown in fig. 2, the method includes the following.
S210: acquiring a CT image of the patient and the approximate location of the lesion to be retrieved in the CT image.
S220: performing edge extraction and segmentation on the lesion with a deep learning algorithm according to the approximate location of the lesion in the CT image, to obtain 3D edge segmentation information of the lesion.
S230: inputting the CT image and the 3D edge segmentation information of the lesion into a deep learning model for feature extraction to obtain a deep learning vector.
S240: inputting the CT image and the 3D edge segmentation information of the lesion into a conventional machine learning model for radiomics feature extraction to obtain a radiomics vector.
S250: concatenating the deep learning vector and the radiomics vector to obtain a combined feature vector.
S260: performing a Euclidean distance calculation between the combined feature vector and the vectors in the database to obtain the similarity between the lesion and the lesions in the database.
S270: outputting the retrieval results of the m similar lesions with the highest similarity.
According to the technical solution provided by this embodiment of the present invention, the retrieval result of similar lesions can be obtained simply by inputting the image and roughly framing the lesion to be retrieved, which is simple, convenient and fast. Using a deep learning algorithm to extract and segment the lesion edge effectively masks out the background information in the image and improves the accuracy of lesion feature extraction. By extracting both deep learning features and radiomics features and combining the deep learning vector with the radiomics vector, the lesion features are extracted more comprehensively, the retrieved similar lesions are more accurate and better match human perceptual and clinical knowledge, and the doctor can be helped to give more reasonable diagnosis and treatment recommendations.
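Tying steps S230 to S270 together, a highly simplified sketch of this embodiment might look as follows; it reuses the illustrative helpers sketched earlier (lesion_feature_vector, first_order_features, retrieve_similar) and assumes that the S220 segmentation step has already produced the lesion mask from the rough location given by the doctor.

```python
import numpy as np

def retrieve_similar_lesions(ct_slice: np.ndarray,
                             lesion_mask: np.ndarray,
                             database_vectors: np.ndarray,
                             m: int = 5):
    """Illustrative composition of S230-S270 using the helper sketches defined above."""
    # S230: deep learning vector for the lesion region.
    dl_vec = lesion_feature_vector(ct_slice, lesion_mask)
    # S240: radiomics vector (first-order features only in this sketch).
    rad_vec = first_order_features(ct_slice, lesion_mask)
    # S250: concatenate the two vectors into the combined feature vector.
    feature_vector = np.concatenate([dl_vec, rad_vec])
    # S260 and S270: Euclidean distance ranking; return the m most similar database lesions.
    return retrieve_similar(feature_vector, database_vectors, m)
```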
All the above-mentioned optional technical solutions can be combined arbitrarily to form the optional embodiments of the present invention, and are not described herein again.
The following are embodiments of the apparatus of the present invention that may be used to perform embodiments of the method of the present invention. For details which are not disclosed in the embodiments of the apparatus of the present invention, reference is made to the embodiments of the method of the present invention.
Fig. 3 is a block diagram of an image retrieval apparatus according to an embodiment of the present invention. As shown in fig. 3, the image retrieval apparatus 300 includes:
the obtaining module 310 is configured to obtain an image to be retrieved and edge segmentation information of a lesion in the image to be retrieved.
The first extraction module 320 is configured to extract a machine learning feature of a lesion from an image to be retrieved according to edge segmentation information of the lesion, so as to obtain a machine learning vector.
The second extraction module 330 is configured to extract an omics feature of the lesion from the image to be retrieved according to the edge segmentation information of the lesion, so as to obtain an omics vector.
And the merging module 340 is configured to merge the machine learning vector and the omics vector to obtain a feature vector corresponding to the lesion.
The output module 350 is configured to output a search result of a similar lesion similar to the lesion according to the feature vector corresponding to the lesion.
According to the technical scheme provided by the embodiment of the invention, the machine learning characteristic and the image omics characteristic of the focus are respectively extracted from the image to be retrieved according to the edge segmentation information of the focus to obtain the machine learning vector and the image omics vector of the focus, the machine learning vector and the image omics vector are combined to obtain the characteristic vector corresponding to the focus, and the retrieval result of the similar focus related to the focus is output according to the characteristic vector, so that the similar focus can be rapidly retrieved.
In another embodiment of the present invention, as shown in fig. 4, the acquisition module 310 includes an acquisition unit 3110 and a lesion edge extraction unit 3120. The acquisition unit 3110 is configured to acquire position information of the lesion in the image to be retrieved; the lesion edge extraction unit 3120 is configured to perform edge extraction and segmentation on the lesion according to the position information to acquire the edge segmentation information of the lesion.
In another embodiment of the present invention, the first extraction module 320 is further configured to extract the machine learning features of the lesion through a deep learning model, wherein the deep learning model is trained in one or more of the following ways: performing data augmentation on the lesion by a data enhancement method to obtain augmented training samples, and training the deep learning model with the augmented training samples; labeling the type of the lesion to obtain labeled training samples, and training the deep learning model with the labeled training samples; and training the deep learning model with a triplet loss function.
In another embodiment of the present invention, the data augmentation of the lesion by the data enhancement method includes: augmenting the lesion data by translation, rotation, flipping, scaling, slight brightness adjustment and/or slight deformation.
In another embodiment of the present invention, the merging module 340 is further configured to concatenate the machine learning vector and the radiomics vector to obtain the feature vector corresponding to the lesion.
In another embodiment of the present invention, as shown in fig. 5, the output module 350 includes a distance calculation unit 3510, a determination unit 3520 and an output unit 3530. The distance calculation unit 3510 is configured to perform a distance calculation between the feature vector corresponding to the lesion and the feature vectors of lesions in the database to obtain a distance calculation result; the determination unit 3520 is configured to determine the similarity between the lesion in the image to be retrieved and the lesions in the database according to the distance calculation result; and the output unit 3530 is configured to output the retrieval result of similar lesions according to the similarity.
In another embodiment of the present invention, the retrieval result includes image information and text information, where the image information includes an image of the similar lesion, and the text information includes a pathological diagnosis result and/or a diagnosis and treatment mode of the similar lesion.
The implementation of the functions and operation of each module in the above apparatus is described in detail in the corresponding steps of the above method, and is not repeated here.
Fig. 6 is a block diagram of an electronic device 600 according to an embodiment of the invention.
Referring to fig. 6, the electronic device 600 includes a processing component 610, which in turn includes one or more processors, and memory resources, represented by a memory 620, for storing instructions, such as application programs, executable by the processing component 610. The application programs stored in the memory 620 may include one or more modules, each corresponding to a set of instructions. The processing component 610 is configured to execute the instructions to perform the image retrieval method described above.
The electronic device 600 may also include a power supply component configured to perform power management of the electronic device 600, a wired or wireless network interface configured to connect the electronic device 600 to a network, and an input/output (I/O) interface. The electronic device 600 may operate based on an operating system stored in the memory 620, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™ or the like.
A non-transitory computer-readable storage medium has instructions stored thereon which, when executed by a processor of the electronic device 600, enable the electronic device 600 to perform an image retrieval method comprising: acquiring an image to be retrieved and edge segmentation information of a lesion in the image to be retrieved; extracting machine learning features of the lesion from the image to be retrieved according to the edge segmentation information of the lesion to obtain a machine learning vector; extracting radiomics features of the lesion from the image to be retrieved according to the edge segmentation information of the lesion to obtain a radiomics vector; combining the machine learning vector and the radiomics vector to obtain a feature vector corresponding to the lesion; and outputting a retrieval result of similar lesions according to the feature vector corresponding to the lesion.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, or the part thereof that contributes to the prior art, can be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
It should be noted that the combination of the features in the present application is not limited to the combination described in the claims or the combination described in the embodiments, and all the features described in the present application may be freely combined or combined in any manner unless contradictory to each other.
It should be noted that the above-mentioned embodiments are only specific examples of the present invention, and obviously, the present invention is not limited to the above-mentioned embodiments, and many similar variations exist. All modifications which would occur to one skilled in the art and which are, therefore, directly derived or suggested from the disclosure herein are deemed to be within the scope of the present invention.
It should be understood that the terms such as first, second, etc. used in the embodiments of the present invention are only used for clearly describing the technical solutions of the embodiments of the present invention, and are not used to limit the protection scope of the present invention.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. An image retrieval method, comprising:
acquiring an image to be retrieved and edge segmentation information of a lesion in the image to be retrieved;
extracting machine learning features of the lesion from the image to be retrieved according to the edge segmentation information of the lesion to obtain a machine learning vector;
extracting radiomics features of the lesion from the image to be retrieved according to the edge segmentation information of the lesion to obtain a radiomics vector;
combining the machine learning vector and the radiomics vector to obtain a feature vector corresponding to the lesion; and
outputting a retrieval result of similar lesions according to the feature vector corresponding to the lesion.
2. The image retrieval method according to claim 1, wherein the acquiring of the image to be retrieved and the edge segmentation information of the lesion in the image to be retrieved comprises:
acquiring position information of the lesion in the image to be retrieved; and
performing edge extraction and segmentation on the lesion according to the position information to acquire the edge segmentation information of the lesion.
3. The image retrieval method according to claim 1, wherein the extracting of the machine learning features of the lesion comprises:
extracting the machine learning features of the lesion through a deep learning model,
wherein the deep learning model is trained in one or more of the following ways:
performing data augmentation on the lesion by a data enhancement method to obtain augmented training samples, and training the deep learning model with the augmented training samples;
labeling the type of the lesion to obtain labeled training samples, and training the deep learning model with the labeled training samples; and
training the deep learning model with a triplet loss function.
4. The image retrieval method according to claim 3, wherein the data augmentation of the lesion by the data enhancement method comprises:
augmenting the lesion data by translation, rotation, flipping, scaling, slight brightness adjustment and/or slight deformation.
5. The image retrieval method according to claim 1, wherein the combining of the machine learning vector and the radiomics vector to obtain the feature vector corresponding to the lesion comprises:
concatenating the machine learning vector and the radiomics vector to obtain the feature vector corresponding to the lesion.
6. The image retrieval method according to claim 1, wherein the outputting of the retrieval result of similar lesions according to the feature vector corresponding to the lesion comprises:
performing a distance calculation between the feature vector corresponding to the lesion and the feature vectors of lesions in a database to obtain a distance calculation result;
determining, according to the distance calculation result, the similarity between the lesion in the image to be retrieved and the lesions in the database; and
outputting the retrieval result of the similar lesion according to the similarity.
7. The image retrieval method according to any one of claims 1 to 6, wherein the retrieval result comprises image information and text information, the image information comprises an image of the similar lesion, and the text information comprises a pathological diagnosis result and/or a diagnosis and treatment mode of the similar lesion.
8. An image retrieval apparatus, comprising:
an acquisition module for acquiring an image to be retrieved and edge segmentation information of a lesion in the image to be retrieved;
a first extraction module for extracting machine learning features of the lesion from the image to be retrieved according to the edge segmentation information of the lesion to obtain a machine learning vector;
a second extraction module for extracting radiomics features of the lesion from the image to be retrieved according to the edge segmentation information of the lesion to obtain a radiomics vector;
a merging module for combining the machine learning vector and the radiomics vector to obtain a feature vector corresponding to the lesion; and
an output module for outputting a retrieval result of similar lesions according to the feature vector corresponding to the lesion.
9. A computer-readable storage medium storing a computer program for executing the image retrieval method according to any one of claims 1 to 7.
10. An electronic device, comprising:
a processor; and
a memory for storing instructions executable by the processor,
wherein the processor is configured to perform the image retrieval method according to any one of claims 1 to 7.
CN201911120526.2A 2019-11-15 2019-11-15 Image retrieval method and device, readable storage medium and electronic equipment Active CN110837572B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911120526.2A CN110837572B (en) 2019-11-15 2019-11-15 Image retrieval method and device, readable storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911120526.2A CN110837572B (en) 2019-11-15 2019-11-15 Image retrieval method and device, readable storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN110837572A true CN110837572A (en) 2020-02-25
CN110837572B CN110837572B (en) 2020-10-13

Family

ID=69576481

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911120526.2A Active CN110837572B (en) 2019-11-15 2019-11-15 Image retrieval method and device, readable storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN110837572B (en)

Patent Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101496723A (en) * 2008-01-30 2009-08-05 深圳安科高技术股份有限公司 Method for acquiring nerve navigation system imaging data
CN101706843A (en) * 2009-11-16 2010-05-12 杭州电子科技大学 Interactive film Interpretation method of mammary gland CR image
CN101714153A (en) * 2009-11-16 2010-05-26 杭州电子科技大学 Visual perception based interactive mammography image searth method
CN102004917A (en) * 2010-12-17 2011-04-06 南方医科大学 Method for extracting image edge neighbor description feature operator
CN103745227A (en) * 2013-12-31 2014-04-23 沈阳航空航天大学 Method for identifying benign and malignant lung nodules based on multi-dimensional information
CN105956198A (en) * 2016-06-20 2016-09-21 东北大学 Nidus position and content-based mammary image retrieval system and method
CN107491789A (en) * 2017-08-24 2017-12-19 南方医科大学南方医院 The construction method of GISTs malignant potential disaggregated model based on SVMs
CN108171692A (en) * 2017-12-26 2018-06-15 安徽科大讯飞医疗信息技术有限公司 Lung image retrieval method and device
CN108197326A (en) * 2018-02-06 2018-06-22 腾讯科技(深圳)有限公司 A kind of vehicle retrieval method and device, electronic equipment, storage medium
CN108389614A (en) * 2018-03-02 2018-08-10 西安交通大学 The method for building medical image collection of illustrative plates based on image segmentation and convolutional neural networks
CN108805181A (en) * 2018-05-25 2018-11-13 深圳大学 A kind of image classification device and sorting technique based on more disaggregated models
CN109166105A (en) * 2018-08-01 2019-01-08 中国人民解放军南京军区南京总医院 The malignancy of tumor risk stratification assistant diagnosis system of artificial intelligence medical image
CN109815355A (en) * 2019-01-28 2019-05-28 网易(杭州)网络有限公司 Image search method and device, storage medium, electronic equipment
CN110197716A (en) * 2019-05-20 2019-09-03 广东技术师范大学 Processing method, device and the computer readable storage medium of medical image
CN110236543A (en) * 2019-05-23 2019-09-17 东华大学 The more classification diagnosis systems of Alzheimer disease based on deep learning
CN110391015A (en) * 2019-06-14 2019-10-29 广东省人民医院(广东省医学科学院) A method of tumor immunity is quantified based on image group
CN110232383A (en) * 2019-06-18 2019-09-13 湖南省华芯医疗器械有限公司 A kind of lesion image recognition methods and lesion image identifying system based on deep learning model

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111428737A (en) * 2020-04-01 2020-07-17 南方科技大学 Example retrieval method, device, server and storage medium for ophthalmologic image
CN111428737B (en) * 2020-04-01 2024-01-19 南方科技大学 Instance retrieval method, device, server and storage medium for ophthalmic image
CN111986749A (en) * 2020-07-15 2020-11-24 万达信息股份有限公司 Digital pathological image retrieval system
CN112750530A (en) * 2021-01-05 2021-05-04 上海梅斯医药科技有限公司 Model training method, terminal device and storage medium
CN113178254A (en) * 2021-04-14 2021-07-27 中通服咨询设计研究院有限公司 Intelligent medical data analysis method and device based on 5G and computer equipment
CN113553460A (en) * 2021-08-13 2021-10-26 北京安德医智科技有限公司 Image retrieval method and device, electronic device and storage medium
CN113779295A (en) * 2021-09-16 2021-12-10 平安科技(深圳)有限公司 Retrieval method, device, equipment and medium for abnormal cell image features
CN113990458A (en) * 2021-12-28 2022-01-28 深圳市海瑞洋科技有限公司 Medical electronic endoscope image processing system and method
CN116245154A (en) * 2022-11-30 2023-06-09 荣耀终端有限公司 Training method of neural network, public opinion crisis recognition method and related device

Also Published As

Publication number Publication date
CN110837572B (en) 2020-10-13

Similar Documents

Publication Publication Date Title
CN110837572B (en) Image retrieval method and device, readable storage medium and electronic equipment
Dam et al. Automatic segmentation of high-and low-field knee MRIs using knee image quantification with data from the osteoarthritis initiative
US9514416B2 (en) Apparatus and method of diagnosing a lesion using image data and diagnostic models
US9122955B2 (en) Method and system of classifying medical images
Shaukat et al. Computer-aided detection of lung nodules: a review
EP2812828B1 (en) Interactive optimization of scan databases for statistical testing
Xu et al. Quantifying the margin sharpness of lesions on radiological images for content‐based image retrieval
CN112529834A (en) Spatial distribution of pathological image patterns in 3D image data
Gandomkar et al. BI-RADS density categorization using deep neural networks
US11574717B2 (en) Medical document creation support apparatus, medical document creation support method, and medical document creation support program
WO2014171830A1 (en) Method and system for determining a phenotype of a neoplasm in a human or animal body
Depeursinge et al. Fundamentals of texture processing for biomedical image analysis: A general definition and problem formulation
Wang et al. Whole mammographic mass segmentation using attention mechanism and multiscale pooling adversarial network
Samei et al. Design and fabrication of heterogeneous lung nodule phantoms for assessing the accuracy and variability of measured texture radiomics features in CT
Agarwal et al. Weakly-supervised lesion segmentation on CT scans using co-segmentation
JP2013200642A (en) Case retrieval device, case retrieval method, and program
Krishan et al. Multi-class liver cancer diseases classification using CT images
Şekeroğlu et al. A computer aided diagnosis system for lung cancer detection using support vector machine
JP2017189394A (en) Information processing apparatus and information processing system
Zhou et al. A universal approach for automatic organ segmentations on 3D CT images based on organ localization and 3D GrabCut
Carolus et al. Automated detection and segmentation of mediastinal and axillary lymph nodes from CT using foveal fully convolutional networks
Li et al. Tbidoc: 3d content-based ct image retrieval system for traumatic brain injury
Wang et al. 3D multi-scale DenseNet for malignancy grade classification of pulmonary nodules
Johnson et al. Registration parameter optimization for 3D tissue modeling from resected tumors cut into serial H and E slides
Bhushan Liver cancer detection using hybrid approach-based convolutional neural network (HABCNN)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: Room B401, floor 4, building 1, No. 12, Shangdi Information Road, Haidian District, Beijing 100085

Patentee after: Infervision Medical Technology Co., Ltd

Address before: Room B401, floor 4, building 1, No. 12, Shangdi Information Road, Haidian District, Beijing 100085

Patentee before: Beijing Tuoxiang Technology Co.,Ltd.
