CN116521915A - Retrieval method, system, equipment and medium for similar medical images - Google Patents


Info

Publication number
CN116521915A
CN116521915A (application CN202310483530.5A)
Authority
CN
China
Prior art keywords
feature
image
region
matched
interest
Prior art date
Legal status: Pending
Application number
CN202310483530.5A
Other languages
Chinese (zh)
Inventor
黄家祥
任大伟
罗飞
Current Assignee
Wanlicloud Medical Information Technology Beijing Co ltd
Original Assignee
Wanlicloud Medical Information Technology Beijing Co ltd
Priority date
Filing date
Publication date
Application filed by Wanlicloud Medical Information Technology Beijing Co ltd filed Critical Wanlicloud Medical Information Technology Beijing Co ltd
Priority to CN202310483530.5A priority Critical patent/CN116521915A/en
Publication of CN116521915A publication Critical patent/CN116521915A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/53Querying
    • G06F16/532Query formulation, e.g. graphical querying
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H40/00ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H40/20ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the management or administration of healthcare resources or facilities, e.g. managing hospital staff or surgery rooms
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

A retrieval method, system, device and medium for similar medical images, relating to the technical field of medical imaging. The method comprises the following steps: invoking a trained feature-code output network, wherein the feature-code output network comprises a feature extraction unit, a region-of-interest dividing unit and a feature fusion unit; determining, through the feature-code output network, the global morphological features, local morphological features and region position features of the medical image to be matched; determining, through the feature fusion unit, an image feature code for the medical image to be matched according to the global morphological features, local morphological features and region position features; and determining a set of similar medical images for the medical image to be matched according to its image feature code. With this technical scheme, the medical image to be matched is described completely and effectively on the basis of multiple aspects of its features, improving the accuracy of the similar-medical-image retrieval method.

Description

Retrieval method, system, equipment and medium for similar medical images
Technical Field
The present disclosure relates to the field of medical imaging technologies, and in particular, to a method, a system, an apparatus, and a medium for retrieving similar medical images.
Background
In medical diagnosis, a doctor can search a medical image library for similar cases from other patients and use them to inform a diagnosis of the patient's condition. Image retrieval exploits the computer's strength at repetitive tasks: given an input image from the doctor, it extracts effective visual features, measures the similarity between images, quickly finds images related or similar to the input image in a large-scale medical image report database, and thereby provides effective support for the doctor's diagnosis.
Medical images differ considerably from ordinary images: an ordinary image is an 8-bit, three-channel RGB image, whereas a medical image is a single-channel grayscale image whose dynamic range is usually 14 bits. Information is therefore harder to extract from a medical image, which makes similar-image retrieval difficult.
At present, similar medical images are generally retrieved by first extracting a single aspect of image features from the input image, and then searching for similar images with a series of similarity algorithms or similarity models based on those features. However, single-aspect image features cannot describe the input image completely and effectively, so the search for similar images lacks an adequate basis, and the accuracy of existing similar-medical-image retrieval methods is low.
Disclosure of Invention
To describe the input image more completely and effectively, and thereby improve the accuracy of similar-medical-image retrieval, the present application provides a retrieval method, system, device and medium for similar medical images.
In a first aspect, the present application provides a method for retrieving similar medical images, the method comprising the following steps: invoking a trained feature-code output network, wherein the feature-code output network comprises a feature extraction unit, a region-of-interest dividing unit and a feature fusion unit;
determining an interesting region of the medical image to be matched and region position characteristics of the interesting region through the interesting region dividing unit;
determining global morphological characteristics of the medical image to be matched and local morphological characteristics of the region of interest through the characteristic extraction unit;
determining, by the feature fusion unit, an image feature code of the medical image to be matched according to the global morphological feature, the local morphological feature and the regional position feature;
and determining a similar medical image set of the medical images to be matched from an image database according to the image feature codes of the medical images to be matched, wherein the image database comprises a plurality of medical images and image feature codes corresponding to the medical images.
With this technical scheme, for a given medical image to be matched, on the one hand its global morphological features are extracted to describe the image as a whole; on the other hand, the image is divided into several regions of interest, and the region position feature and local morphological feature of each region of interest are extracted to describe the image's important regions. These three aspects of features are then fused to determine the image feature code of the medical image to be matched. The medical image to be matched is thus described more completely and effectively, improving the accuracy of the similar-medical-image retrieval method.
Optionally, determining, by the feature fusion unit, the image feature code of the medical image to be matched according to the global morphological feature, the local morphological feature and the region position feature further includes calculating the image feature code with a feature fusion algorithm, which is specifically:
where WMLcode is the image feature code, W is the global morphological feature, n is the number of regions of interest in the medical image to be matched, L_i is the region position feature of the i-th region of interest, M_i is the local morphological feature of the i-th region of interest, P is a first weight coefficient, and Q is a second weight coefficient.
By adopting the technical scheme, the three aspects of features are effectively fused, so that the generated image feature codes can accurately position the integral features, the local focus features and the focus position features of the medical images to be matched, and the retrieval capability of similar images is improved.
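The patent renders the fusion formula as an image that is not reproduced in this text; one plausible reading of the variable definitions above is a weighted sum of the global feature with the region features. The sketch below is an assumption for illustration only, not the patent's confirmed formula:

```python
def fuse_features(W, L_list, M_list, P=0.6, Q=0.55):
    """Hypothetical WMLcode fusion: W plus P-weighted region position
    features L_i and Q-weighted local morphological features M_i, summed
    over the n regions of interest. P=0.6 and Q=0.55 are the weight
    coefficients from the patent's preferred embodiment; the summation
    structure itself is an assumed reconstruction."""
    return [w
            + P * sum(L[j] for L in L_list)   # P * sum_i L_i
            + Q * sum(M[j] for M in M_list)   # Q * sum_i M_i
            for j, w in enumerate(W)]

# Two regions of interest, 2-dimensional toy features
code = fuse_features(W=[1.0, 2.0],
                     L_list=[[0.5, 0.5], [0.5, 0.5]],
                     M_list=[[1.0, 0.0], [0.0, 1.0]])
```

Any real embodiment would operate on high-dimensional feature vectors produced by the network; the toy dimensions here only make the arithmetic visible.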
Optionally, the feature extraction unit includes at least one feature extraction layer; the i-th feature extraction layer accepts the output of the preceding feature extraction layer as an additional input, where the input of the i-th feature extraction layer is expressed as:
X_i = H_i(X_{i-1}) + X_{i-1}
where X_i is the input of the i-th feature extraction layer, H_i is a nonlinear transformation function, and X_{i-1} is the input of the (i-1)-th feature extraction layer.
With this technical scheme, the feature extraction unit adopts a more aggressive dense-connection pattern: all feature extraction layers are connected to one another, and each feature extraction layer receives all preceding layers as additional input. This strengthens feature propagation, enables feature reuse, and improves feature-extraction efficiency.
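The recurrence X_i = H_i(X_{i-1}) + X_{i-1} can be sketched as an element-wise skip connection; here H_i is simply illustrated with a ReLU, standing in for the patent's combined BN/ReLU/pooling/Conv operations:

```python
def dense_forward(x0, transforms):
    """Apply the rule X_i = H_i(X_{i-1}) + X_{i-1}: each layer's input is
    its nonlinear transform of the previous input plus the previous input
    itself, so earlier features propagate unchanged alongside each
    transformation. `transforms` is the list of H_i functions."""
    x = list(x0)
    for H in transforms:
        x = [h + v for h, v in zip(H(x), x)]  # element-level addition
    return x

# Stand-in nonlinearity; the patent's H_i bundles BN, ReLU, pooling, Conv
relu = lambda vec: [max(v, 0.0) for v in vec]
out = dense_forward([-1.0, 2.0], [relu, relu])
```

Because the skip term carries X_{i-1} through unchanged, negative components survive the ReLU stand-in, which is the feature-reuse effect the patent describes.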
Optionally, determining the similar medical image set of the medical image to be matched from an image database according to its image feature code specifically includes:
respectively calculating the coding distance between the image feature codes of the medical images to be matched and the image feature codes of the medical images in the image database according to a coding distance calculation formula;
and selecting the first N medical images closest to the coding distance of the medical image to be matched as similar medical images of the medical image to be matched, and generating a similar medical image set of the medical image to be matched.
With this technical scheme, the medical images closest in coding distance to the medical image to be matched are computed, the retrieved images are ranked by coding distance in TOP-N fashion, and the N medical images most similar to the medical image to be matched are determined, making it convenient for a doctor to review the similar image set.
Optionally, the coding distance calculation formula specifically includes:
where A is the image feature code of the medical image to be matched, B is the image feature code of a medical image in the image database, a_i is the i-th position of A, b_i is the i-th position of B, μ_a is the mean of A, μ_b is the mean of B, σ_a² is the variance of A, σ_b² is the variance of B, σ_ab is the covariance of A and B, c_1 and c_2 are stability constants, and P is a third weight coefficient.
With this technical scheme, a vector-space model treats image features as points in a vector space, and the similarity between feature vectors is measured by the coding-distance proximity of two points. Adding a structural-similarity comparison brings the morphological, distribution and texture similarity closer to human perception, fits the similarity formula to the similar-medical-image retrieval problem, and further improves the similarity between the generated similar image set and the input image.
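The coding-distance formula itself was an image in the original and is not reproduced here. The variable definitions (means, variances, covariance, stability constants c_1 and c_2) match the classic SSIM structure, so a hedged sketch can combine a per-position distance term with an SSIM-style structural term under a weight P; the exact combination below is an assumption, not the patent's confirmed formula:

```python
import math

def coding_distance(A, B, P=0.65, c1=1e-4, c2=9e-4):
    """Assumed coding distance: P-weighted normalised Euclidean distance
    over positions a_i, b_i, plus (1-P)-weighted structural dissimilarity
    built from means mu_a, mu_b, variances, and covariance sigma_ab, with
    stability constants c1, c2 (values here are illustrative)."""
    n = len(A)
    euclid = math.sqrt(sum((a - b) ** 2 for a, b in zip(A, B)) / n)
    mu_a, mu_b = sum(A) / n, sum(B) / n
    var_a = sum((a - mu_a) ** 2 for a in A) / n
    var_b = sum((b - mu_b) ** 2 for b in B) / n
    cov = sum((a - mu_a) * (b - mu_b) for a, b in zip(A, B)) / n
    ssim = ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))
    return P * euclid + (1 - P) * (1 - ssim)

d_same = coding_distance([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
d_diff = coding_distance([1.0, 2.0, 3.0], [3.0, 1.0, 2.0])
```

A distance of zero for identical codes, and a larger distance for permuted codes, shows why the structural term penalises distribution and texture differences and not just per-position gaps.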
Optionally, determining, by the region of interest dividing unit, a region of interest of the medical image to be matched and a region position feature of the region of interest specifically includes:
inputting the medical image to be matched into a feature extraction layer of the region-of-interest dividing unit, and extracting a convolution feature map of the medical image to be matched;
inputting the convolution feature map into a candidate region layer of the region-of-interest dividing unit, and outputting a plurality of candidate regions of interest of the convolution feature map;
selecting the region of interest of the input image from the candidate regions of interest through a boundary regression layer of the region of interest dividing unit;
and classifying the region of interest through a classification layer of the region of interest dividing unit, and extracting the region position characteristics of the region of interest based on a classification result.
With this technical scheme, the region of interest of the medical image to be matched is determined from the candidate regions of interest by boundary regression, the region of interest is marked on the medical image as an anchor box, and its region position feature is determined from the classification result; the region of interest is thereby described in terms of both local morphology and local position.
Optionally, selecting the region of interest of the input image in the candidate region of interest through a boundary regression layer of the region of interest dividing unit further includes:
and selecting the region of interest of the input image from the candidate regions of interest based on a preset region of interest attention.
By adopting the technical scheme, the attention of the region of interest is introduced to determine the region of interest of the medical image to be matched, so that the region of interest can embody the local characteristics of the medical image to be matched.
In a second aspect, the present application provides a retrieval system for similar medical images, the system comprising the following modules: a feature-code output network invoking module, configured to invoke the trained feature-code output network, which comprises a feature extraction unit, a region-of-interest dividing unit and a feature fusion unit;
the region of interest dividing module is used for determining a region of interest of the medical image to be matched and region position characteristics of the region of interest through the region of interest dividing unit;
the morphological feature extraction module is used for determining global morphological features of the medical image to be matched and local morphological features of the region of interest through the feature extraction unit;
the image feature code generation module is used for determining the image feature code of the medical image to be matched according to the global morphological feature, the local morphological feature and the regional position feature through the feature fusion unit;
the similar medical image set generation module is used for determining a similar medical image set of the medical image to be matched from an image database according to the image feature codes of the medical image to be matched, wherein the image database comprises a plurality of medical images and the image feature codes corresponding to the medical images.
In a third aspect of the present application, an electronic device is provided;
in a fourth aspect of the present application, there is provided a computer readable storage medium;
in summary, one or more technical solutions provided in the embodiments of the present application at least have the following technical effects or advantages:
1. Multiple aspects of features of the medical image to be matched are extracted, describing the image in terms of the whole, the local regions and the local positions; these features are fused by the feature fusion unit, so that the generated feature code describes the medical image to be matched more completely and effectively, improving the accuracy of the similar-medical-image retrieval method.
2. The feature extraction unit with all layers connected with each other is provided, and each feature extraction layer of the feature extraction unit receives all the previous feature extraction layers as additional input, so that feature transmission is enhanced, feature reuse is realized, and feature extraction efficiency is improved.
3. A vector-space model treats image features as points in a vector space, and the similarity between feature vectors is measured by the coding-distance proximity of two points. Adding a structural-similarity comparison brings the morphological, distribution and texture similarity closer to human perception, fits the similarity formula to the similar-medical-image retrieval problem, and further improves the similarity between the generated similar image set and the input image.
Drawings
Fig. 1 is a flowchart of a method for retrieving similar medical images according to an embodiment of the present application.
Fig. 2 is a schematic flow chart of region of interest division in a method for retrieving similar medical images according to an embodiment of the present application.
Fig. 3 is a network architecture diagram of a feature extraction unit in a retrieval method of similar medical images according to an embodiment of the present application.
Fig. 4 is a schematic structural diagram of a retrieval system for similar medical images according to an embodiment of the present application.
Fig. 5 is a schematic structural diagram of an electronic device according to the disclosure in an embodiment of the present application.
Reference numerals illustrate: 401. the feature code output network calling module; 402. a region of interest dividing module; 403. a morphological feature extraction module; 404. an image feature code generation module; 405. a similar medical image set generation module; 500. an electronic device; 501. a processor; 502. a communication bus; 503. a user interface; 504. a network interface; 505. a memory.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the present specification, the technical solutions in the embodiments of the present specification will be clearly and completely described below with reference to the drawings in the embodiments of the present specification, and it is obvious that the described embodiments are only some embodiments of the present application, but not all embodiments.
In the description of the embodiments of the present application, words such as "for example" or "such as" are used to indicate examples, illustrations or descriptions. Any embodiment or design described herein with such words should not be construed as preferred or advantageous over other embodiments or designs; rather, such words are intended to present related concepts in a concrete fashion.
In the description of the embodiments of the present application, the term "plurality" means two or more. For example, a plurality of systems means two or more systems, and a plurality of screen terminals means two or more screen terminals. Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating an indicated technical feature. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
Referring to fig. 1, the present application provides a method for retrieving similar medical images, which specifically includes the following steps: S01: invoking a trained feature-code output network, wherein the feature-code output network comprises a feature extraction unit, a region-of-interest dividing unit and a feature fusion unit;
specifically, before similar medical image retrieval is performed on the feature coding output network, various models for bearing data operation or logic operation in various units included in the feature coding output network are trained, and the feature coding output network specifically comprises a feature extraction unit, an interesting region dividing unit and a feature fusion unit. The feature extraction unit is used for extracting morphological features of the image input into the unit; the region of interest dividing unit is used for dividing a plurality of regions of interest on the medical image to be matched and determining region position features corresponding to the regions of interest based on classification results of the regions of interest; and the feature fusion unit fuses all the features of the medical images to be matched based on a feature fusion algorithm and outputs feature codes of the medical images to be matched.
S02: determining a region of interest of the medical image to be matched and a region position feature of the region of interest through a region of interest dividing unit;
specifically, referring to fig. 2, when a medical image to be matched passes through a region-of-interest dividing unit, the medical image to be matched is divided into a plurality of regions of interest after feature extraction, classification operation and boundary regression in the region-of-interest dividing unit, and the regions of interest are regions to be focused on in the medical image to be matched, and labels corresponding to positions of human bodies are added to each region of interest according to classification results of the classification operation.
The region-of-interest dividing unit is specifically a Fast-RCNN with a CNN as its architectural backbone. After the medical image to be matched is input into the unit, its feature extraction layer extracts a convolutional feature map of the image. In one possible embodiment of the present application, the network stride of the CNN is set to 16, so that for an input 1024 × 1024 medical image to be matched, the feature extraction layer outputs a 1024/16 × 1024/16, i.e. 64 × 64, convolutional feature map.
The convolutional feature map output by the feature extraction layer is input into the candidate region layer, which applies sliding windows to the feature map; for each sliding window, several anchor boxes are generated, each serving as a candidate region of interest.
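The stride arithmetic above can be made concrete: with a network stride of 16, each cell of the 64 × 64 feature map maps back to an anchor centre on the 1024 × 1024 input. A minimal sketch (the centre-offset convention of stride/2 is an illustrative assumption, not specified in the source):

```python
def anchor_centres(image_size=1024, stride=16):
    """Map feature-map cells back to anchor centres on the input image.
    A 1024x1024 image with stride 16 gives a 64x64 feature map; each cell
    (x, y) is assumed to centre its anchor boxes at
    (x*stride + stride/2, y*stride + stride/2)."""
    fmap = image_size // stride  # 1024 / 16 = 64
    centres = [(x * stride + stride // 2, y * stride + stride // 2)
               for y in range(fmap) for x in range(fmap)]
    return fmap, centres

fmap_size, centres = anchor_centres()
```

In a full candidate region layer, several anchor boxes of different scales and aspect ratios would be generated at each of these centres.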
The output of the candidate region layer is passed to both a classification layer and a boundary regression layer. The classification layer outputs a target classification probability vector for each candidate region of interest based on the training set, and labels each candidate region with the corresponding lesion region or body position according to that vector; the boundary regression layer performs position regression on the anchor boxes of each candidate region of interest, so that the regressed anchor boxes cover the target object more accurately.
The regions of interest of the medical image to be matched are then selected from the candidate regions of interest based on a preset region-of-interest attention, and the classification-layer outputs for these regions are taken as the region position features of the medical image to be matched.
S03: determining global morphological characteristics of the medical image to be matched and local morphological characteristics of the region of interest through a characteristic extraction unit;
specifically, referring to fig. 3, all the feature extraction layers in the feature extraction unit are connected to each other, each feature extraction layer will accept all the feature extraction layers in front of it as additional inputs, and by element-level addition, each feature extraction layer is connected to all the feature extraction layers in front in the channel dimension and serves as an input of the next layer, and the input of each feature extraction layer can be expressed as:
X i =[H i (X i-1 )]+X i-1
wherein X is i For input of the i-th layer feature extraction layer, H i X is a nonlinear transformation function i-1 Is the input to the i-1 th feature extraction layer.
H is the same as i Representing a series of combining operations including BN, reLU, pooling and Conv, thus effectively involving multiple convolution operations between feature extraction layers.
The feature extraction unit is used for extracting global morphological features of the medical image to be matched and local morphological features of each region of interest in the medical image to be matched, and the feature extraction unit is trained before the morphological features are extracted, and the training process of the feature extraction unit is as follows:
Medical images are selected from the image database, and each selected image is randomly subjected to similar transformations, including enlarging, shrinking, rotation by ±10 degrees, Gaussian blur, contrast changes and brightness changes of the single medical image. Transformed images derived from the same medical image are assigned to the same class, and the resulting set is used as the training set for classification training of the feature extraction unit. During training, SGD is chosen as the network optimizer because of the dense connections of the feature extraction unit, so that the gradient does not descend too fast at each step; the learning rate can therefore be set to 0.0001. After classification training is complete, the fully connected layer of the feature extraction unit is removed and only its feature extraction layers are retained, completing the training; at this point, when a medical image is input into the feature extraction unit, the output is the morphological feature of that image.
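The training-set construction above amounts to self-supervised labelling: every augmented variant of one image shares that image's class. A minimal sketch (the transform names and counts are illustrative placeholders, not values from the source):

```python
import random

def augment_label_pairs(images, n_aug=3, seed=0):
    """Build (image, transform, label) training triples: each source image
    is randomly transformed n_aug times, and all variants of one image
    share the same class label, as in the patent's classification
    pre-training. Transform names are illustrative stand-ins for the
    actual image operations (zoom, rotation within ±10°, blur, etc.)."""
    rng = random.Random(seed)
    transforms = ["zoom_in", "zoom_out", "rotate_pm10deg",
                  "gaussian_blur", "contrast", "brightness"]
    pairs = []
    for label, img in enumerate(images):
        for _ in range(n_aug):
            pairs.append((img, rng.choice(transforms), label))
    return pairs

train = augment_label_pairs(["scan_a", "scan_b"])
```

A real pipeline would apply the chosen transform to pixel data; here the triple only records which transform was drawn for which labelled source.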
S04: determining an image feature code of the medical image to be matched according to the global morphological feature, the local morphological feature and the regional position feature through a feature fusion unit;
specifically, the feature fusion unit fuses global morphological features, local morphological features and regional position features of the medical image to be matched based on a feature fusion algorithm, and outputs an image feature code of the medical image to be matched, wherein the feature fusion algorithm specifically comprises:
where WMLcode is the image feature code, W is the global morphological feature, n is the number of regions of interest in the medical image to be matched, L_i is the region position feature of the i-th region of interest, M_i is the local morphological feature of the i-th region of interest, P is a first weight coefficient, and Q is a second weight coefficient; in a preferred embodiment of the present application, P = 0.6 and Q = 0.55.
In another possible embodiment of the present application, before the image feature code of the medical image to be matched is calculated by the feature fusion algorithm, the acquired global morphological features, local morphological features and region position features are compressed with 1×1 max-pooling, so that the three aspects of features are aggregated across channels and the amount of computation is reduced.
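One way to read the cross-channel aggregation step is a channel-wise maximum at each spatial position, which collapses a multi-channel map to a single channel before fusion. This interpretation is an assumption; the sketch below only illustrates it:

```python
def cross_channel_max(feature_map):
    """Assumed cross-channel aggregation: reduce a (channels, H, W)
    feature map to (H, W) by taking the maximum across channels at each
    spatial position, one reading of the patent's 1x1 max-pooling
    compression before feature fusion."""
    rows, cols = len(feature_map[0]), len(feature_map[0][0])
    return [[max(ch[r][c] for ch in feature_map) for c in range(cols)]
            for r in range(rows)]

# Two channels of a 2x2 feature map collapse to one 2x2 map
compressed = cross_channel_max([[[1.0, 5.0], [2.0, 0.0]],
                                [[3.0, 1.0], [0.0, 4.0]]])
```

The compressed map keeps the strongest channel response per position, which is what reduces the computation passed to the fusion algorithm.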
S05: according to the image feature codes of the medical images to be matched, determining a similar medical image set of the medical images to be matched from an image database;
Specifically, the image database contains a large number of medical images together with their corresponding image feature codes; when the image database is constructed, the image feature codes of the stored medical images are computed through the process described above, and each medical image uniquely corresponds to one image feature code.
For a medical image to be matched, its image feature code is first calculated, and the encoding distance between this image feature code and the image feature code of every medical image in the image database is then computed. The encoding distance is calculated according to an encoding distance calculation formula, which is specifically:
wherein A is the image feature code of the medical image to be matched, B is the image feature code of a medical image in the image database, a_i is the i-th position of A, b_i is the i-th position of B, μ_a is the mean of A, μ_b is the mean of B, σ_a^2 is the variance of A, σ_b^2 is the variance of B, δ_ab is the covariance of A and B, c_1 and c_2 are both stability constants, and P is a third weight coefficient; in a preferred embodiment of the present application, P=0.65.
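Since the encoding distance formula is likewise given only as a figure, the sketch below is a hypothetical reconstruction from the symbols listed: a position-wise term over a_i and b_i, combined via the weight coefficient P with an SSIM-style statistical term built from the means, variances, covariance and the stability constants c_1, c_2. The exact combination is an assumption, not the patented formula.

```python
def encoding_distance(A, B, P=0.65, c1=1e-4, c2=1e-4):
    """Hypothetical encoding distance between two feature codes A and B.
    Combines a mean absolute position-wise difference with an SSIM-style
    similarity term; identical codes yield distance 0."""
    n = len(A)
    mu_a = sum(A) / n
    mu_b = sum(B) / n
    var_a = sum((a - mu_a) ** 2 for a in A) / n
    var_b = sum((b - mu_b) ** 2 for b in B) / n
    cov_ab = sum((a - mu_a) * (b - mu_b) for a, b in zip(A, B)) / n
    positional = sum(abs(a - b) for a, b in zip(A, B)) / n
    structural = ((2 * mu_a * mu_b + c1) * (2 * cov_ab + c2)) / (
        (mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))
    # Higher structural similarity contributes a smaller distance.
    return P * positional + (1 - P) * (1 - structural)
```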
According to the encoding distance calculation formula, the encoding distances between the medical image to be matched and all medical images in the image database are obtained. All medical images in the image database are then sorted by their encoding distance to the medical image to be matched, and the first N medical images with the smallest encoding distance are selected as the similar medical images of the medical image to be matched, generating the similar medical image set of the medical image to be matched and completing the similar image retrieval.
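The ranking step described above (sort the database by encoding distance, keep the N closest) can be sketched as follows; the default distance here is a simple mean absolute difference used only as a stand-in for the patent's encoding distance.

```python
def retrieve_similar(query_code, database, n=5, distance=None):
    """database: list of (image_id, feature_code) pairs.
    Returns the ids of the N images whose feature codes are closest
    to the query code under the given distance function."""
    if distance is None:
        distance = lambda a, b: sum(abs(x - y) for x, y in zip(a, b)) / len(a)
    ranked = sorted(database, key=lambda item: distance(query_code, item[1]))
    return [image_id for image_id, _ in ranked[:n]]
```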
Referring to fig. 4, the present application further provides a retrieval system for similar medical images, the system specifically comprising the following modules: the feature code output network calling module 401 is configured to call a trained feature code output network, where the feature code output network includes a feature extraction unit, a region of interest dividing unit, and a feature fusion unit;
a region of interest segmentation module 402, configured to determine a region of interest of the medical image to be matched and a region location feature of the region of interest through a region of interest segmentation unit;
a morphological feature extraction module 403, configured to determine global morphological features of the medical image to be matched and local morphological features of the region of interest through a feature extraction unit;
the image feature code generating module 404 is configured to determine, by using the feature fusion unit, an image feature code of the medical image to be matched according to the global morphological feature, the local morphological feature and the regional position feature;
the similar medical image set generating module 405 is configured to determine a similar medical image set of the medical image to be matched from an image database according to image feature codes of the medical image to be matched, where the image database includes a plurality of medical images and image feature codes corresponding to the medical images.
It should be noted that, when the device provided in the above embodiment implements its functions, the division into the above functional modules is used merely as an example; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to implement all or part of the functions described above. In addition, the device embodiments provided above and the method embodiments belong to the same concept; the specific implementation process is detailed in the method embodiments and is not repeated herein.
The application also discloses an electronic device 500. Referring to fig. 5, fig. 5 is a schematic structural diagram of an electronic device 500 according to the disclosure in an embodiment of the present application. The electronic device 500 may include: at least one processor 501, at least one network interface 504, a user interface 503, a memory 505, at least one communication bus 502.
A communication bus 502 is used to realize communication connections between these components.
The user interface 503 may include a display screen (Display) and a camera (Camera); optionally, the user interface 503 may further include a standard wired interface and a standard wireless interface.
The network interface 504 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), among others.
The processor 501 may include one or more processing cores. Using various interfaces and lines, the processor 501 connects the various parts of the entire server, and performs the various functions of the server and processes data by running or executing the instructions, programs, code sets or instruction sets stored in the memory 505 and by invoking the data stored in the memory 505. Optionally, the processor 501 may be implemented in hardware in at least one of digital signal processing (Digital Signal Processing, DSP), field-programmable gate array (Field-Programmable Gate Array, FPGA) and programmable logic array (Programmable Logic Array, PLA). The processor 501 may integrate one of, or a combination of, a central processing unit (Central Processing Unit, CPU), a graphics processing unit (Graphics Processing Unit, GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs and the like; the GPU is responsible for rendering and drawing the content to be displayed on the display screen; and the modem is used to handle wireless communication. It will be appreciated that the modem may also not be integrated into the processor 501 and may instead be implemented by a separate chip.
The memory 505 may include a random access memory (Random Access Memory, RAM) or a read-only memory (Read-Only Memory). Optionally, the memory 505 comprises a non-transitory computer-readable storage medium (non-transitory computer-readable storage medium). The memory 505 may be used to store instructions, programs, code sets or instruction sets. The memory 505 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the above method embodiments, and the like; the data storage area may store the data involved in the above method embodiments. Optionally, the memory 505 may also be at least one storage device located remotely from the aforementioned processor 501. Referring to fig. 5, the memory 505, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and an application program of a retrieval method for similar medical images.
In the electronic device 500 shown in fig. 5, the user interface 503 is mainly used to provide an input interface for the user and to acquire the data input by the user, while the processor 501 may be used to invoke the application program of the retrieval method for similar medical images stored in the memory 505; when executed by the one or more processors 501, the application causes the electronic device 500 to perform the method described in one or more of the embodiments above. It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of combinations of actions, but those skilled in the art should understand that the present application is not limited by the order of actions described, since some steps may be performed in another order or simultaneously in accordance with the present application. Further, those skilled in the art should also understand that the embodiments described in the specification are all preferred embodiments, and that the actions and modules involved are not necessarily required by the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
In the several embodiments provided herein, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into units is merely a division by logical function, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some service interfaces, devices or units, and may be electrical or take other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable memory. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods of the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a magnetic disk or an optical disk.
The foregoing is merely exemplary embodiments of the present disclosure and is not intended to limit the scope of the present disclosure. That is, equivalent changes and modifications are contemplated by the teachings of this disclosure, which fall within the scope of the present disclosure. Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure.
This application is intended to cover any variations, uses, or adaptations of the disclosure following the general principles thereof and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the claims.

Claims (10)

1. A method of retrieving similar medical images, the method comprising the steps of:
invoking a trained feature code output network, wherein the feature code output network comprises a feature extraction unit, an interesting region dividing unit and a feature fusion unit;
determining an interesting region of the medical image to be matched and region position characteristics of the interesting region through the interesting region dividing unit;
determining global morphological characteristics of the medical image to be matched and local morphological characteristics of the region of interest through the characteristic extraction unit;
determining, by the feature fusion unit, an image feature code of the medical image to be matched according to the global morphological feature, the local morphological feature and the regional position feature;
and determining a similar medical image set of the medical images to be matched from an image database according to the image feature codes of the medical images to be matched, wherein the image database comprises a plurality of medical images and image feature codes corresponding to the medical images.
2. The retrieval method of similar medical images according to claim 1, wherein determining, by the feature fusion unit, the image feature code of the medical image to be matched according to the global morphological feature, the local morphological feature and the regional position feature further comprises calculating the image feature code based on a feature fusion algorithm, wherein the feature fusion algorithm is specifically:
wherein WMLcode is the image feature code, W is the global morphological feature, n is the number of regions of interest in the medical image to be matched, L_i is the region position feature of the i-th region of interest, M_i is the local morphological feature of the i-th region of interest, P is a first weight coefficient, and Q is a second weight coefficient.
3. The retrieval method of similar medical images according to claim 1, wherein:
the feature extraction unit comprises at least one feature extraction layer; for the i-th feature extraction layer, the i-th feature extraction layer accepts the (i-1)-th feature extraction layer preceding it as an additional input, and the input of the i-th feature extraction layer is expressed as:
X_i = [H_i(X_{i-1})] + X_{i-1}
wherein X_i is the input of the i-th feature extraction layer, H_i is a nonlinear transformation function, and X_{i-1} is the input of the (i-1)-th feature extraction layer.
4. The retrieval method of similar medical images according to claim 1, wherein determining the similar medical image set of the medical image to be matched from the image database according to the image feature code of the medical image to be matched specifically comprises:
respectively calculating the coding distance between the image feature codes of the medical images to be matched and the image feature codes of the medical images in the image database according to a coding distance calculation formula;
and selecting the first N medical images closest to the coding distance of the medical image to be matched as similar medical images of the medical image to be matched, and generating a similar medical image set of the medical image to be matched.
5. The method for retrieving similar medical images according to claim 4, wherein the coding distance calculation formula is specifically:
wherein A is the image feature code of the medical image to be matched, B is the image feature code of a medical image in the image database, a_i is the i-th position of A, b_i is the i-th position of B, μ_a is the mean of A, μ_b is the mean of B, σ_a^2 is the variance of A, σ_b^2 is the variance of B, δ_ab is the covariance of A and B, c_1 and c_2 are both stability constants, and P is a third weight coefficient.
6. The retrieval method of similar medical images according to claim 1, wherein determining, by the region of interest dividing unit, the region of interest of the medical image to be matched and the region position feature of the region of interest specifically comprises:
inputting the medical image to be matched into a feature extraction layer of the region-of-interest dividing unit, and extracting a convolution feature map of the medical image to be matched;
inputting the convolution feature map into a candidate region layer of the region-of-interest dividing unit, and outputting a plurality of candidate regions of interest of the convolution feature map;
selecting the region of interest of the input image from the candidate regions of interest through a boundary regression layer of the region of interest dividing unit;
and classifying the region of interest through a classification layer of the region of interest dividing unit, and extracting the region position characteristics of the region of interest based on a classification result.
7. The retrieval method of similar medical images according to claim 6, wherein selecting the region of interest of the input image from the candidate regions of interest through the boundary regression layer of the region of interest dividing unit further comprises:
and selecting the region of interest of the input image from the candidate regions of interest based on a preset region of interest attention.
8. A retrieval system for similar medical images, the system comprising:
the feature code output network calling module (401) is used for calling the trained feature code output network, and the feature code output network comprises a feature extraction unit, an interesting region dividing unit and a feature fusion unit;
a region of interest segmentation module (402) for determining a region of interest of a medical image to be matched and a region location feature of the region of interest by the region of interest segmentation unit;
a morphological feature extraction module (403) configured to determine global morphological features of the medical image to be matched and local morphological features of the region of interest through the feature extraction unit;
an image feature code generating module (404) configured to determine, by using the feature fusion unit, an image feature code of the medical image to be matched according to the global morphological feature, the local morphological feature and the regional position feature;
and the similar medical image set generating module (405) is used for determining a similar medical image set of the medical image to be matched from an image database according to the image feature codes of the medical image to be matched, wherein the image database comprises a plurality of medical images and the image feature codes corresponding to the medical images.
9. An electronic device comprising a processor (501), a memory (505), a user interface (503) and a network interface (504), the memory (505) being configured to store instructions, the user interface (503) and the network interface (504) being configured to communicate to other devices, the processor (501) being configured to execute the instructions stored in the memory (505) to cause the electronic device (500) to perform the method according to any of claims 1-7.
10. A computer readable storage medium storing instructions which, when executed, perform the method steps of any of claims 1-7.
CN202310483530.5A 2023-04-28 2023-04-28 Retrieval method, system, equipment and medium for similar medical images Pending CN116521915A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310483530.5A CN116521915A (en) 2023-04-28 2023-04-28 Retrieval method, system, equipment and medium for similar medical images

Publications (1)

Publication Number Publication Date
CN116521915A true CN116521915A (en) 2023-08-01

Family

ID=87402490

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310483530.5A Pending CN116521915A (en) 2023-04-28 2023-04-28 Retrieval method, system, equipment and medium for similar medical images

Country Status (1)

Country Link
CN (1) CN116521915A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117874278A (en) * 2024-03-11 2024-04-12 盛视科技股份有限公司 Image retrieval method and system based on multi-region feature combination
CN117874278B (en) * 2024-03-11 2024-05-28 盛视科技股份有限公司 Image retrieval method and system based on multi-region feature combination


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination