CN106846317B - Medical image retrieval method based on feature extraction and similarity matching - Google Patents


Info

Publication number
CN106846317B
CN106846317B (application CN201710106121.8A)
Authority
CN
China
Prior art keywords
image
target
outline
value
similarity
Prior art date
Legal status
Active
Application number
CN201710106121.8A
Other languages
Chinese (zh)
Other versions
CN106846317A (en)
Inventor
章桦
侯玉翎
李春阳
Current Assignee
Beijing Linking Medical Technology Co ltd
Original Assignee
Beijing Linking Medical Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Linking Medical Technology Co ltd filed Critical Beijing Linking Medical Technology Co ltd
Priority to CN201710106121.8A
Publication of CN106846317A
Application granted
Publication of CN106846317B

Classifications

    • G06T: Physics; Computing; Calculating or Counting; Image data processing or generation, in general
    • G06T 7/0012: Image analysis; Inspection of images, e.g. flaw detection; Biomedical image inspection
    • G06T 2207/10081: Indexing scheme for image analysis or image enhancement; Image acquisition modality; Tomographic images; Computed X-ray tomography [CT]
    • G06T 2207/10088: Magnetic resonance imaging [MRI]
    • G06T 2207/10104: Positron emission tomography [PET]
    • G06T 2207/10108: Single photon emission computed tomography [SPECT]
    • G06T 2207/10132: Ultrasound image
    • G06T 2207/30004: Subject of image; Context of image processing; Biomedical image processing


Abstract

The invention discloses a medical image retrieval method based on feature extraction and similarity matching. In radiotherapy planning, image registration is a key step: its purpose is to find the template image best suited to a target image so that an optimal registration result is obtained after the matching operation and the template can be used for clinical target volume delineation, organ dose simulation or treatment. Retrieving the most suitable template image is therefore essential. The method extracts 10 morphology-related shape features, combines them with medical history information, assigns different discrimination weights to the features, and finally selects the 10 most similar image groups together with their related information for the physician, who chooses the most suitable template image under comprehensive consideration. The method guarantees the similarity between the template image and the target image and thus improves the applicability and accuracy of image registration in radiotherapy planning.

Description

Medical image retrieval method based on feature extraction and similarity matching
Technical Field
The invention relates to a registration sample extraction method for target region delineation in radiotherapy planning, and in particular to a medical image retrieval method based on feature extraction and similarity matching.
Background
With the continuous development of computer science and information technology, medical imaging has advanced rapidly, and new imaging devices keep emerging, such as Computed Tomography (CT), Digital Subtraction Angiography (DSA), Single Photon Emission Computed Tomography (SPECT), Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET). Each imaging technique and examination method has its own advantages and disadvantages, and no single technique is suitable for examining and diagnosing every organ of the human body. To improve diagnostic accuracy, the various kinds of image information of a patient must be used together. A clear trend in medical imaging is to combine multiple medical images through information fusion, exploiting the characteristics of the different modalities so that one image simultaneously expresses information from several aspects of the body, such as structure and function, and presents anatomical, physiological and pathological information more intuitively. To achieve such multi-image information fusion, the essential step is image registration, that is, bringing multiple images into complete geometric and anatomical correspondence in the spatial domain.
Image registration is also an important topic in the field of radiotherapy. In the conventional tumor radiotherapy workflow, before treatment begins a physician manually delineates the target region and the organs at risk on the patient's planning CT to generate a radiotherapy plan, and the patient is then treated in several fractions while the plan is kept unchanged. This treatment mode does not account for changes in the patient's anatomy during treatment, such as changes in tumor volume and location, changes in body contour, changes in gastrointestinal filling and the resulting shifts of the surrounding organs at risk. The dose actually received by the patient therefore deviates from the prescribed dose, which lowers the tumor control rate and increases the probability of normal tissue complications.
Applying registration to radiotherapy planning means finding the template image best suited to the target image; the optimal registration result obtained after the matching operation can then be used for clinical target volume delineation, organ dose simulation or treatment. How to retrieve the most suitable template image is therefore important for registration algorithms in radiotherapy planning: the more accurate the registration result, the more accurate the subsequent target delineation, organ dose simulation or clinical treatment.
In recent years, with the advancement of science and technology, image registration has gradually been introduced into the field of radiotherapy and is actively applied there. Among current medical image registration approaches, methods improved on the basis of the DEMONS algorithm are among the mainstream. Whatever registration method is applied, however, first retrieving the most similar samples before completing the registration operation is a highly effective approach.
Disclosure of Invention
In order to overcome the defects of existing image registration technology, the invention provides a medical image retrieval method based on feature extraction and similarity matching, so that when target regions and organs at risk are delineated clinically, image registration can be applied more quickly and accurately and clinical requirements are well met.
In order to achieve the above object, the present invention provides a medical image retrieval method based on feature extraction and similarity matching, comprising the following steps:
step 1, reading the patient's images and medical history data, wherein the data comprise image information and text information;
step 2, preprocessing the data obtained in step 1, divided into image preprocessing and text preprocessing;
step 2.1, image preprocessing: performing file conversion, normalization and segmentation on the original image and extracting the target information; the target information comprises the target image and the target image outline; if the original image is a 2D DICOM image, the file is converted directly; if the original image is a 3D DICOM image, it is first converted into a 2D DICOM image group and then the file conversion is performed;
step 2.2, text preprocessing: obtaining sex, age, disease information, treatment site, pathology and imaging reports, whether radiotherapy was performed previously and whether related complications exist from the patient's text information;
step 3, extracting the 10 required features from the image or image group preprocessed in step 2.1 and establishing the image ID of the target image;
the 10 features are as follows:
feature 1: the number of images in the 2D image group, i.e. the frame number;
feature 2: the longest longitudinal axis of the image outline;
feature 3: the longest transverse axis of the image outline;
feature 4: the longest axis of the image outline taken at 1/4 of the bounding box in the longitudinal direction;
feature 5: the longest axis of the image outline taken at 1/2 of the bounding box in the longitudinal direction;
feature 6: the longest axis of the image outline taken at 3/4 of the bounding box in the longitudinal direction;
feature 7: the longest axis of the image outline taken at 1/4 of the bounding box in the transverse direction;
feature 8: the longest axis of the image outline taken at 1/2 of the bounding box in the transverse direction;
feature 9: the longest axis of the image outline taken at 3/4 of the bounding box in the transverse direction;
feature 10: the image volume or area;
step 4, preliminarily screening the text information preprocessed in step 2.2 to find image groups with the same disease and the same treatment site;
step 5, comparing the image ID of the target image obtained in step 3 with all image IDs of the image groups extracted in step 4; the similarity of feature 1 is compared first to find similar image groups, and groups that do not meet the threshold are excluded;
step 6, comparing the image groups screened in step 5 with the target image ID again, this time on the similarity of features 2-10, to find similar image groups; groups that do not match are excluded;
step 7, calculating the structural similarity index (SSIM) between each image group screened in step 6 and the target image group; the closer the value is to 1, the more similar the images; the 10 closest image groups are extracted and unmatched images are excluded;
step 8, displaying the 10 most similar image groups at the hospital system end for the clinician to select from; end.
As a further improvement of the present invention, the image information includes CT images, cone beam CT images, ultrasound images, MRI images, PET images and X-ray images; the text information includes the patient's basic information, relevant medical history, related complications, disease type, treatment site, pathology and imaging diagnosis report information.
As a further improvement of the invention, said step 2.1 comprises:
step 2.1.1, judging whether the image is a 2D DICOM image or a 3D DICOM image, at least one of the two being selected;
step 2.1.2, if the image is a 3D DICOM image, converting the file into a 2D DICOM sequence image group;
step 2.1.3, converting the 2D DICOM image or image group into bmp or jpeg format;
step 2.1.4, performing histogram equalization on the image, with the following operations:
a. compute the histogram of the given image to be processed:
P_r(r_j) = n_j / N, j = 0, 1, …, L-1   (1)
b. transform the counted histogram using the cumulative distribution function:
s_k = T(r_k) = Σ_{j=0}^{k} n_j / N = Σ_{j=0}^{k} P_r(r_j), k = 0, 1, …, L-1   (2)
c. replace the old gray levels with the new gray levels s_k;
wherein N is the total number of pixels in the image; n_j is the number of pixels at the j-th gray level; P_r(r_j) represents the probability of occurrence of the gray levels of the original image; r_k is the k-th gray level; T(r_k) is the cumulative distribution function that establishes the correspondence between the gray levels of the input image and the output image, i.e. the probability s_k of the new gray level.
step 2.1.5, performing adaptive binarization on the result of step 2.1.4 using the Otsu method to obtain the basic contour edge of the image;
step 2.1.6, after step 2.1.5 is completed, the basic outline of the target image has been preliminarily separated;
step 2.1.7, finding the smallest bounding box that can enclose the image from the basic image contour of step 2.1.6, then applying one dilation operation and one erosion operation to the image to obtain the complete contour edge of the target image.
As a further improvement of the invention, in step 2.1.5:
the Otsu method, also called the maximum between-class variance method, can be applied whether or not the image histogram shows obvious double peaks; let f(i, j) denote the gray value at point (i, j) of an M×N image;
assuming that f(i, j) takes values in [0, m-1] and p(k) is the frequency of gray value k, then:
Σ_{k=0}^{m-1} p(k) = 1   (3)
suppose the target and background segmented with gray value t as the threshold are {f(i, j) ≤ t} and {f(i, j) > t} respectively,
then the target proportion: ω0(t) = Σ_{0≤i≤t} p(i)   (4)
the number of target points: N0(t) = M·N·Σ_{0≤i≤t} p(i)   (5)
the background proportion: ω1(t) = Σ_{t<i≤m-1} p(i)   (6)
the number of background points: N1(t) = M·N·Σ_{t<i≤m-1} p(i)   (7)
the target mean: μ0(t) = Σ_{0≤i≤t} i·p(i) / ω0(t)   (8)
the background mean: μ1(t) = Σ_{t<i≤m-1} i·p(i) / ω1(t)   (9)
the overall mean: μ = ω0(t)·μ0(t) + ω1(t)·μ1(t)   (10)
the Otsu method takes the optimal threshold g of the image as:
g = arg max_{0≤t≤m-1} [ω0(t)·(μ0(t) - μ)² + ω1(t)·(μ1(t) - μ)²]   (11)
the bracketed term on the right is the between-class variance: the target and background divided by the threshold g make up the whole image, the target mean μ0(t) occurs with probability ω0(t), the background mean μ1(t) occurs with probability ω1(t), and the overall mean is μ; the formula follows from the definition of variance.
As a further improvement of the invention, the screening threshold in step 5 is 95% or 90% or 85% or 80% or 75%.
As a further improvement of the invention, the screening threshold in step 6 is 95% or 90% or 85% or 80% or 75%.
As a further improvement of the present invention, in step 7:
given two images x and y, their structural similarity is calculated as:
SSIM(x, y) = (2·μ_x·μ_y + C1)·(2·σ_xy + C2) / ((μ_x² + μ_y² + C1)·(σ_x² + σ_y² + C2))   (12)
wherein μ_x is the mean of x, μ_y is the mean of y, σ_x² is the variance of x, σ_y² is the variance of y, and σ_xy is the covariance of x and y; C1 = (K1·L)² and C2 = (K2·L)² are constants that maintain stability, L is the dynamic range of the pixel values, K1 = 0.01 and K2 = 0.03; the structural similarity ranges from -1 to 1, and when the two images are identical the SSIM value equals 1.
As a further improvement of the present invention, the screening threshold in step 7 is a value closest to 1.
As a further improvement of the present invention, the preprocessing in step 2, the feature ID extraction of the image in step 3, the screening of the image text information in step 4, and the fast screening and comparison algorithms in steps 5, 6 and 7 are implemented on a GPU, a CPU or a distributed cloud computing platform.
Compared with the prior art, the invention has the beneficial effects that:
the invention provides a medical image retrieval method based on feature extraction and similarity matching, which is applied to a method in the field of radiotherapy delineation registration; the registration in radiotherapy planning is applied to find a template image most suitable for a target image, and after an optimal registration result is obtained through matching operation, the template image is used for clinical target volume delineation, organ dose simulation or treatment. Therefore, how to search for the most suitable template image is quite important. According to the method, 10 shape features related to the form are combined with medical history information, different discrimination weights are given to the features, and finally, 10 groups of images which are most similar and related information thereof are selected (comprehensive options of males, females, children, old people and the like are included in the 10 groups of images), and are provided for a doctor to select, and the most appropriate template image is selected under comprehensive consideration. The image calculated by the method can provide the image similarity guarantee, so that the applicability and the accuracy of registration calculation in a radiotherapy plan can be improved, and the clinical requirement can be well met.
The invention searches the most similar registration sample image through an algorithm based on GPU acceleration, and aims to complete registration and dose simulation verification within a few minutes after a patient lies on a patient bed in clinical radiotherapy. The invention has high efficiency, saves time and labor cost, well meets the clinical requirements, can be clinically popularized and applied, and has remarkable social significance.
Drawings
Fig. 1 is a flowchart of a medical image retrieval method based on feature extraction and similarity matching according to an embodiment of the present invention;
fig. 2 is a diagram illustrating feature extraction in step 3 of fig. 1.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention.
The invention discloses a medical image retrieval method based on feature extraction and similarity matching, applied to target region delineation and registration in radiotherapy. In radiotherapy planning, image registration is a key step: its purpose is to find the template image best suited to the target image so that an optimal registration result is obtained after the matching operation and the template can be used for clinical target volume delineation, organ dose simulation or treatment. Retrieving the most suitable template image is therefore essential. The method combines 10 morphological features with medical history information, assigns different discrimination weights to the features, and finally selects the 10 most similar image groups and their related information for the physician, who chooses the most suitable template image under comprehensive consideration. The method guarantees the similarity between the template image and the target image and thus improves the applicability and accuracy of image registration in radiotherapy planning.
The invention is described in further detail below with reference to the attached drawing figures:
as shown in fig. 1, the present invention provides a medical image retrieval method based on feature extraction and similarity matching, which includes the following steps:
Step 1, reading the patient's images and medical history data, wherein the data comprise image information and text information; the image information comprises CT images, cone beam CT images, ultrasound images, MRI images, PET images and X-ray images; the text information comprises the patient's basic information, relevant medical history, related complications, disease type, treatment site, pathology and imaging diagnosis report information.
Step 2, preprocessing the data obtained in step 1, specifically divided into image preprocessing and text preprocessing;
step 2.1, image preprocessing: performing file conversion on the original image (generally in DICOM format), normalizing the image, segmenting it, extracting the target image and its related image information, and automatically judging whether to retain or delete image (groups) with irregular acquisition or image-quality problems; if the original image is a 2D DICOM image, the file is converted directly; if it is a 3D DICOM image, it is first converted into a 2D DICOM image group and then the file conversion is performed; wherein:
step 2.1.1, judging whether the original image is a 2D DICOM image or a 3D DICOM image, at least one of the two being selected;
step 2.1.2, if the image is a 3D DICOM image, converting the file into a 2D DICOM sequence image group;
step 2.1.3, converting the 2D DICOM image or image group into bmp or jpeg format (at least one of the two) to facilitate subsequent image processing; a minimal conversion sketch is given below;
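The following is a minimal sketch of steps 2.1.1-2.1.3, assuming pydicom and Pillow are available, that the files carry a .dcm extension, and that intensity is reduced to an 8-bit range by a simple min-max rescale; the folder and function names are illustrative, not taken from the patent.

```python
import os
import numpy as np
import pydicom
from PIL import Image

def dicom_to_bmp(dicom_dir, out_dir):
    """Convert a 2D DICOM file or a 3D DICOM series into a group of 8-bit bmp slices."""
    os.makedirs(out_dir, exist_ok=True)
    datasets = sorted(
        (pydicom.dcmread(os.path.join(dicom_dir, f))
         for f in os.listdir(dicom_dir) if f.lower().endswith(".dcm")),
        key=lambda ds: float(getattr(ds, "InstanceNumber", 0)))
    for i, ds in enumerate(datasets):
        pixels = ds.pixel_array.astype(np.float32)
        frames = pixels if pixels.ndim == 3 else pixels[np.newaxis]   # 3D object -> 2D frames
        for j, frame in enumerate(frames):
            lo, hi = float(frame.min()), float(frame.max())
            frame8 = ((frame - lo) / max(hi - lo, 1e-6) * 255).astype(np.uint8)
            Image.fromarray(frame8).save(os.path.join(out_dir, f"slice_{i:03d}_{j:03d}.bmp"))
```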
step 2.1.4, performing histogram equalization on the image, with the following operations (a sketch of this step follows):
a. compute the histogram of the given image to be processed:
P_r(r_j) = n_j / N, j = 0, 1, …, L-1   (1)
b. transform the counted histogram using the cumulative distribution function:
s_k = T(r_k) = Σ_{j=0}^{k} n_j / N = Σ_{j=0}^{k} P_r(r_j), k = 0, 1, …, L-1   (2)
c. replace the old gray levels with the new gray levels s_k;
wherein N is the total number of pixels in the image; n_j is the number of pixels at the j-th gray level; P_r(r_j) represents the probability of occurrence of the gray levels of the original image; r_k is the k-th gray level; T(r_k) is the cumulative distribution function that establishes the correspondence between the gray levels of the input image and the output image, i.e. the probability s_k of the new gray level.
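A minimal NumPy sketch of equations (1)-(2): `levels` plays the role of L, and the mapping follows the cumulative distribution directly. It assumes an integer-valued (e.g. 8-bit) input image; the names are illustrative.

```python
import numpy as np

def histogram_equalize(img, levels=256):
    """Histogram equalization per equations (1)-(2): map gray levels through the CDF."""
    n_j, _ = np.histogram(img.flatten(), bins=levels, range=(0, levels))  # n_j per gray level
    p_r = n_j / img.size                              # P_r(r_j) = n_j / N
    cdf = np.cumsum(p_r)                              # s_k = T(r_k) = sum of P_r(r_j), j <= k
    mapping = np.round(cdf * (levels - 1)).astype(np.uint8)   # rescale s_k back to gray levels
    return mapping[img]                               # replace old gray levels with new ones
```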
step 2.1.5, performing adaptive binarization on the result of step 2.1.4 using the Otsu method to obtain the basic contour edge of the image (a sketch of the threshold search is given after this step), as follows:
the Otsu method is a global dynamic binarization method, also called the maximum between-class variance method, which selects the threshold automatically based on the statistical characteristics of the whole image. The principle is to divide the image histogram into two classes by some gray value, compute the number of pixels and the mean gray value of each class, and then compute the between-class variance; the gray value at which the between-class variance is largest is taken as the threshold for binarization. The Otsu method has a wide range of application: whether or not the image histogram shows obvious double peaks, a threshold can be obtained. Let f(i, j) denote the gray value at point (i, j) of an M×N image;
assuming that f(i, j) takes values in [0, m-1] and p(k) is the frequency of gray value k, then:
Σ_{k=0}^{m-1} p(k) = 1   (3)
suppose the target and background segmented with gray value t as the threshold are {f(i, j) ≤ t} and {f(i, j) > t} respectively,
then the target proportion: ω0(t) = Σ_{0≤i≤t} p(i)   (4)
the number of target points: N0(t) = M·N·Σ_{0≤i≤t} p(i)   (5)
the background proportion: ω1(t) = Σ_{t<i≤m-1} p(i)   (6)
the number of background points: N1(t) = M·N·Σ_{t<i≤m-1} p(i)   (7)
the target mean: μ0(t) = Σ_{0≤i≤t} i·p(i) / ω0(t)   (8)
the background mean: μ1(t) = Σ_{t<i≤m-1} i·p(i) / ω1(t)   (9)
the overall mean: μ = ω0(t)·μ0(t) + ω1(t)·μ1(t)   (10)
the Otsu method takes the optimal threshold g of the image as:
g = arg max_{0≤t≤m-1} [ω0(t)·(μ0(t) - μ)² + ω1(t)·(μ1(t) - μ)²]   (11)
the bracketed term on the right is the between-class variance: the target and background divided by the threshold g make up the whole image, the target mean μ0(t) occurs with probability ω0(t), the background mean μ1(t) occurs with probability ω1(t), and the overall mean is μ; the formula follows from the definition of variance. Since the variance measures how uneven the gray distribution is, a larger between-class variance means a larger difference between the two parts of the image; when part of the target is mistaken for background, or part of the background for target, this difference shrinks, so the segmentation that maximizes the between-class variance minimizes the probability of misclassification.
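The between-class-variance search of equations (4)-(11) can be sketched as below; this is an illustrative implementation for an 8-bit grayscale image, not code from the patent.

```python
import numpy as np

def otsu_threshold(img, m=256):
    """Return the threshold g that maximizes the between-class variance (equation (11))."""
    hist, _ = np.histogram(img.flatten(), bins=m, range=(0, m))
    p = hist / img.size                                  # p(k): frequency of each gray value
    best_g, best_var = 0, -1.0
    for t in range(m - 1):
        w0, w1 = p[: t + 1].sum(), p[t + 1:].sum()       # omega_0(t), omega_1(t)
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(0, t + 1) * p[: t + 1]).sum() / w0   # mu_0(t)
        mu1 = (np.arange(t + 1, m) * p[t + 1:]).sum() / w1    # mu_1(t)
        mu = w0 * mu0 + w1 * mu1                              # overall mean
        var_between = w0 * (mu0 - mu) ** 2 + w1 * (mu1 - mu) ** 2
        if var_between > best_var:
            best_g, best_var = t, var_between
    return best_g

# Adaptive binarization of step 2.1.5:
# binary = (img > otsu_threshold(img)).astype(np.uint8) * 255
```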
step 2.1.6, after step 2.1.5 is completed, the basic outline of the target image has been preliminarily separated;
step 2.1.7, finding the bounding box, i.e. the smallest box that can enclose the image, from the basic image contour of step 2.1.6, then applying one dilation operation and one erosion operation to the image to obtain the complete contour edge of the target image; special attention is paid to the images at the starting positions when the bounding box is extracted: if the area of the image's bounding box is smaller than a certain size, the image is judged to have an abnormal positioning problem and is deleted. A sketch of this step is given below.
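A sketch of step 2.1.7 with OpenCV (assuming OpenCV 4.x), where `binary` is the 8-bit mask produced in step 2.1.5; the `min_area` check used to flag abnormal positioning is a hypothetical parameter, since the patent only says "smaller than a certain size".

```python
import cv2
import numpy as np

def extract_contour(binary, min_area=1000):
    """Bounding box plus one dilation and one erosion, yielding the full target contour."""
    kernel = np.ones((3, 3), np.uint8)
    closed = cv2.erode(cv2.dilate(binary, kernel, iterations=1), kernel, iterations=1)
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    outline = max(contours, key=cv2.contourArea)   # main target outline
    x, y, w, h = cv2.boundingRect(outline)         # smallest enclosing bounding box
    if w * h < min_area:                           # abnormal positioning: discard this image
        return None
    return (x, y, w, h), outline
```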
Step 2.2, text preprocessing: obtaining sex, age, tumor type, treatment site, pathology and imaging reports, whether radiotherapy was performed previously, related complications and other relevant information from the patient's text information; the target tumor information and the tumor site information are used for the image ID screening of the database; the patient's sex, age, pathology and imaging report information are used in step 8;
Step 3, extracting the 10 required features from the image or image group preprocessed in step 2.1 and establishing the image ID of the target image;
as shown in fig. 2, the 10 features are as follows (a sketch of the feature computation is given after the list):
feature 1: the number of images in the 2D image group, i.e. the frame number;
feature 2: the longest longitudinal axis of the image outline;
feature 3: the longest transverse axis of the image outline;
feature 4: the longest axis of the image outline taken at 1/4 of the bounding box in the longitudinal direction;
feature 5: the longest axis of the image outline taken at 1/2 of the bounding box in the longitudinal direction;
feature 6: the longest axis of the image outline taken at 3/4 of the bounding box in the longitudinal direction;
feature 7: the longest axis of the image outline taken at 1/4 of the bounding box in the transverse direction;
feature 8: the longest axis of the image outline taken at 1/2 of the bounding box in the transverse direction;
feature 9: the longest axis of the image outline taken at 3/4 of the bounding box in the transverse direction;
feature 10: the image volume or area;
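One possible reading of the 10 features is sketched below: features 4-6 are taken as outline widths measured at 1/4, 1/2 and 3/4 of the bounding-box height, and features 7-9 as outline heights at 1/4, 1/2 and 3/4 of the bounding-box width. The input `masks` is assumed to be the list of per-slice binary outline masks from step 2.1.5; the function and parameter names are illustrative, not from the patent.

```python
import numpy as np

def row_width(mask, row):
    """Width of the outline on a given row (extent of the non-zero pixels)."""
    cols = np.where(mask[row] > 0)[0]
    return int(cols.max() - cols.min() + 1) if cols.size else 0

def feature_id(masks, pixel_area=1.0):
    """Build the 10-element image ID from a list of per-slice binary masks."""
    union = (np.sum(masks, axis=0) > 0).astype(np.uint8)   # combined outline over all slices
    rows = np.where(union.any(axis=1))[0]
    cols = np.where(union.any(axis=0))[0]
    top, bottom, left, right = rows.min(), rows.max(), cols.min(), cols.max()
    long_axis = int(bottom - top + 1)                      # feature 2: longest longitudinal axis
    trans_axis = int(right - left + 1)                     # feature 3: longest transverse axis
    f4_6 = [row_width(union, top + int((bottom - top) * f))        # features 4-6
            for f in (0.25, 0.5, 0.75)]
    f7_9 = [int(union[:, left + int((right - left) * f)].sum())    # features 7-9
            for f in (0.25, 0.5, 0.75)]
    area = float(sum(int(m.sum()) for m in masks)) * pixel_area    # feature 10: area/volume
    return [len(masks), long_axis, trans_axis, *f4_6, *f7_9, area] # feature 1: number of slices
```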
Step 4, finding image groups with the same disease and the same treatment site from the text information preprocessed in step 2.2; the information is first sent to the database for preliminary screening, the image groups with the same disease and the same treatment site are found, and the image groups matching the disease are returned;
step 5, comparing the image ID of the target image obtained in step 3 with all image IDs of the image groups extracted in step 4, mainly on the similarity of feature 1; since clinical images are acquired with a specified slice-interval standard for each site or organ, this parameter serves as a feature for a preliminary judgment of the size of the target structure; image groups in the database whose feature ID is similar to the target image's feature ID are retained, and those that do not match are excluded; the screening threshold is 95%, 90%, 85%, 80% or 75%, preferably 95%, i.e. image groups whose similarity with the target image feature ID exceeds 95% are retained;
step 6, comparing the image groups screened in step 5 with the target image ID again, this time on the similarity of features 2-10, and excluding groups that do not match; the screening threshold is 95%, 90%, 85%, 80% or 75%, preferably 95%, i.e. image groups with similarity above 95% are retained; a sketch of both screening passes is given below;
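The two screening passes of steps 5 and 6 can be sketched as follows. This is an illustrative reading: the patent specifies the thresholds (for example 95%) but not the exact similarity formula, so the relative-difference measure and the candidate record layout (`c["id"]` holding the 10-element feature ID) are assumptions.

```python
def feature_similarity(a, b):
    """Relative similarity of two scalar features, in [0, 1]."""
    return 1.0 - abs(a - b) / max(abs(a), abs(b), 1e-6)

def screen_candidates(target_id, candidates, threshold=0.95):
    """Step 5: keep groups whose feature 1 is at least `threshold` similar to the target;
    step 6: of those, keep groups where every one of features 2-10 also meets the threshold."""
    pass_one = [c for c in candidates
                if feature_similarity(target_id[0], c["id"][0]) >= threshold]
    return [c for c in pass_one
            if all(feature_similarity(t, v) >= threshold
                   for t, v in zip(target_id[1:], c["id"][1:]))]
```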
step 7, calculating the structural similarity index (SSIM) between each qualified image group screened in step 6 and the target image group and comparing the values; the screening threshold is the 10 values closest to 1 (the closer to 1, the more similar), so the 10 closest image groups are extracted and unmatched images are excluded;
given two images x and y, their structural similarity is calculated as:
SSIM(x, y) = (2·μ_x·μ_y + C1)·(2·σ_xy + C2) / ((μ_x² + μ_y² + C1)·(σ_x² + σ_y² + C2))   (12)
wherein μ_x is the mean of x, μ_y is the mean of y, σ_x² is the variance of x, σ_y² is the variance of y, and σ_xy is the covariance of x and y; C1 = (K1·L)² and C2 = (K2·L)² are constants that maintain stability, L is the dynamic range of the pixel values, K1 = 0.01 and K2 = 0.03; the structural similarity ranges from -1 to 1, and when the two images are identical the SSIM value equals 1.
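Equation (12) can be sketched directly with NumPy as a global (non-windowed) SSIM over whole images, assuming 8-bit inputs so that L = 255; in practice a windowed implementation such as scikit-image's structural_similarity could be substituted.

```python
import numpy as np

def ssim(x, y, L=255, K1=0.01, K2=0.03):
    """Global structural similarity index per equation (12)."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
    return ((2 * mu_x * mu_y + C1) * (2 * cov_xy + C2)) / (
        (mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2))
```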
Step 8, displaying the 10 most similar image groups at the hospital system end together with their related basic information, such as age, sex and medical history, to provide the clinician with a reference for selecting the registration template image; end.
Preferably, the preprocessing in step 2, the feature ID extraction of the image in step 3, the screening of the image text information in step 4, and the fast screening and comparison algorithms in steps 5, 6 and 7 are implemented on a GPU, a CPU or a distributed cloud computing platform.
The invention provides a medical image retrieval method based on feature extraction and similarity matching, applied to registration for radiotherapy delineation. Registration in radiotherapy planning is used to find the template image best suited to the target image; after the matching operation yields an optimal registration result, the template image is used for clinical target volume delineation, organ dose simulation or treatment. Retrieving the most suitable template image is therefore essential. The method combines 10 morphology-related shape features with medical history information, assigns different discrimination weights to the features, and finally selects the 10 most similar image groups and their related information (the 10 groups cover male, female, child, elderly and other options) for the physician, who chooses the most appropriate template image under comprehensive consideration. The images retrieved in this way come with a guarantee of image similarity, which improves the applicability and accuracy of the registration calculation in radiotherapy planning and meets clinical needs well.
The invention searches for the most similar registration sample image with a GPU-accelerated algorithm, the goal being to complete registration and dose simulation verification within a few minutes after the patient lies down on the treatment couch in clinical radiotherapy. The method is efficient, saves time and labor cost, meets clinical requirements well, can be popularized and applied clinically, and has notable social significance.
The above is only a preferred embodiment of the present invention, and is not intended to limit the present invention, and various modifications and changes will occur to those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (9)

1. A medical image retrieval method based on feature extraction and similarity matching is characterized by comprising the following steps:
step 1, reading the patient's images and medical history data, wherein the data comprise image information and text information;
step 2, preprocessing the data obtained in step 1, divided into image preprocessing and text preprocessing;
step 2.1, image preprocessing: performing image format conversion, histogram equalization and segmentation on the original image and extracting the target information; the target information comprises the target image and the target image outline; if the original image is a 2D DICOM image, the image format is converted directly; if the original image is a 3D DICOM image, it is first converted into a 2D DICOM image group and then the image format is converted;
step 2.2, text preprocessing: obtaining sex, age, disease information, treatment site, pathology and imaging reports, whether radiotherapy was performed previously and whether related complications exist from the patient's text information;
step 3, extracting the 10 required features from the image or image group preprocessed in step 2.1 as the image ID of the target image;
the 10 features are as follows:
feature 1: the number of images in the 2D image group, i.e. the frame number;
feature 2: the longest longitudinal axis of the image outline;
feature 3: the longest transverse axis of the image outline;
feature 4: the longest axis of the image outline taken at 1/4 of the bounding box in the longitudinal direction;
feature 5: the longest axis of the image outline taken at 1/2 of the bounding box in the longitudinal direction;
feature 6: the longest axis of the image outline taken at 3/4 of the bounding box in the longitudinal direction;
feature 7: the longest axis of the image outline taken at 1/4 of the bounding box in the transverse direction;
feature 8: the longest axis of the image outline taken at 1/2 of the bounding box in the transverse direction;
feature 9: the longest axis of the image outline taken at 3/4 of the bounding box in the transverse direction;
feature 10: the image volume or area;
step 4, preliminarily screening the text information preprocessed in step 2.2 to find image groups with the same disease and the same treatment site;
step 5, comparing the image ID of the target image obtained in step 3 with all image IDs of the image groups extracted in step 4; the similarity of feature 1 is compared first to find similar image groups, and groups that do not meet the threshold are excluded;
step 6, comparing the image groups screened in step 5 with the target image ID again, this time on the similarity of features 2-10, to find similar image groups; groups that do not match are excluded;
step 7, calculating the structural similarity index (SSIM) between each image group screened in step 6 and the target image group; the closer the value is to 1, the more similar the images; the 10 closest image groups are extracted and unmatched images are excluded;
step 8, displaying the 10 most similar image groups at the hospital system end for the clinician to select from; end.
2. The medical image retrieval method based on feature extraction and similarity matching according to claim 1, wherein the image information includes CT images, cone beam CT images, ultrasound images, MRI images, PET images and X-ray images; the text information includes the patient's basic information, relevant medical history, related complications, disease type, treatment site, pathology and imaging diagnosis report information.
3. The medical image retrieval method based on feature extraction and similarity matching according to claim 1, wherein the step 2.1 comprises:
step 2.1.1, judging whether the image is a 2D DICOM image or a 3D DICOM image, at least one of the two being selected;
step 2.1.2, if the image is a 3D DICOM image, converting the file into a 2D DICOM sequence image group;
step 2.1.3, converting the 2D DICOM image or image group into bmp or jpeg format;
step 2.1.4, performing histogram equalization on the image, with the following operations:
a. compute the histogram of the given image to be processed:
P_r(r_j) = n_j / N, j = 0, 1, …, L-1   (1)
b. transform the counted histogram using the cumulative distribution function:
s_k = T(r_k) = Σ_{j=0}^{k} n_j / N = Σ_{j=0}^{k} P_r(r_j), k = 0, 1, …, L-1   (2)
c. replace the old gray levels with the new gray levels s_k;
wherein N is the total number of pixels in the image; n_j is the number of pixels at the j-th gray level; P_r(r_j) represents the probability of occurrence of the gray levels of the original image; r_k is the k-th gray level; T(r_k) is the cumulative distribution function that establishes the correspondence between the gray levels of the input image and the output image, i.e. the probability s_k of the new gray level.
step 2.1.5, performing adaptive binarization on the result of step 2.1.4 using the Otsu method to obtain the basic contour edge of the image;
step 2.1.6, after step 2.1.5 is completed, the basic outline of the target image has been preliminarily separated;
step 2.1.7, finding the smallest bounding box that can enclose the image from the basic image contour of step 2.1.6, then applying one dilation operation and one erosion operation to the image to obtain the complete contour edge of the target image.
4. The medical image retrieval method based on feature extraction and similarity matching according to claim 3, wherein in step 2.1.5:
the Otsu method, also called the maximum between-class variance method, can be applied whether or not the image histogram shows obvious double peaks; let f(i, j) denote the gray value at point (i, j) of an M×N image;
assuming that f(i, j) takes values in [0, m-1] and p(k) is the frequency of gray value k, then:
Σ_{k=0}^{m-1} p(k) = 1   (3)
suppose the target and background segmented with gray value t as the threshold are {f(i, j) ≤ t} and {f(i, j) > t} respectively,
then the target proportion: ω0(t) = Σ_{0≤i≤t} p(i)   (4)
the number of target points: N0(t) = M·N·Σ_{0≤i≤t} p(i)   (5)
the background proportion: ω1(t) = Σ_{t<i≤m-1} p(i)   (6)
the number of background points: N1(t) = M·N·Σ_{t<i≤m-1} p(i)   (7)
the target mean: μ0(t) = Σ_{0≤i≤t} i·p(i) / ω0(t)   (8)
the background mean: μ1(t) = Σ_{t<i≤m-1} i·p(i) / ω1(t)   (9)
the overall mean: μ = ω0(t)·μ0(t) + ω1(t)·μ1(t)   (10)
the Otsu method takes the optimal threshold g of the image as:
g = arg max_{0≤t≤m-1} [ω0(t)·(μ0(t) - μ)² + ω1(t)·(μ1(t) - μ)²]   (11)
the bracketed term on the right is the between-class variance: the target and background divided by the threshold g make up the whole image, the target mean μ0(t) occurs with probability ω0(t), the background mean μ1(t) occurs with probability ω1(t), and the overall mean is μ; the formula follows from the definition of variance.
5. The method for medical image retrieval based on feature extraction and similarity matching as claimed in claim 1, wherein the screening threshold in step 5 is 95% or 90% or 85% or 80% or 75%.
6. The method for medical image retrieval based on feature extraction and similarity matching as claimed in claim 1, wherein the screening threshold in step 6 is 95% or 90% or 85% or 80% or 75%.
7. The medical image retrieval method based on feature extraction and similarity matching according to claim 1, wherein in step 7:
given two images x and y, their structural similarity is calculated as:
SSIM(x, y) = (2·μ_x·μ_y + C1)·(2·σ_xy + C2) / ((μ_x² + μ_y² + C1)·(σ_x² + σ_y² + C2))   (12)
wherein μ_x is the mean of x, μ_y is the mean of y, σ_x² is the variance of x, σ_y² is the variance of y, and σ_xy is the covariance of x and y; C1 = (K1·L)² and C2 = (K2·L)² are constants that maintain stability, L is the dynamic range of the pixel values, K1 = 0.01 and K2 = 0.03; the structural similarity ranges from -1 to 1, and when the two images are identical the SSIM value equals 1.
8. The medical image retrieval method based on feature extraction and similarity matching according to claim 1, wherein the screening threshold in step 7 is the value closest to 1.
9. The medical image retrieval method based on feature extraction and similarity matching according to claim 1, wherein the preprocessing in step 2, the feature ID extraction of the image in step 3, the screening of the image text information in step 4, and the fast screening and comparison algorithms in steps 5, 6 and 7 are implemented on a GPU, a CPU or a distributed cloud computing platform.
CN201710106121.8A 2017-02-27 2017-02-27 Medical image retrieval method based on feature extraction and similarity matching Active CN106846317B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710106121.8A CN106846317B (en) 2017-02-27 2017-02-27 Medical image retrieval method based on feature extraction and similarity matching

Publications (2)

Publication Number Publication Date
CN106846317A CN106846317A (en) 2017-06-13
CN106846317B true CN106846317B (en) 2021-09-17

Family

ID=59134147

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710106121.8A Active CN106846317B (en) 2017-02-27 2017-02-27 Medical image retrieval method based on feature extraction and similarity matching

Country Status (1)

Country Link
CN (1) CN106846317B (en)

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant