CN113506294B - Medical image evaluation method, system, computer equipment and storage medium - Google Patents

Medical image evaluation method, system, computer equipment and storage medium

Info

Publication number
CN113506294B
CN113506294B (Application CN202111052193.1A)
Authority
CN
China
Prior art keywords
image
medical image
focus
medical
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111052193.1A
Other languages
Chinese (zh)
Other versions
CN113506294A (en)
Inventor
徐丽珍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yuanyun Shenzhen Internet Technology Co ltd
Original Assignee
Yuanyun Shenzhen Internet Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yuanyun Shenzhen Internet Technology Co., Ltd.
Priority to CN202111052193.1A
Publication of CN113506294A
Application granted
Publication of CN113506294B

Classifications

    • G06T 7/0012: Image analysis; inspection of images, e.g. flaw detection; biomedical image inspection
    • G06N 3/045: Computing arrangements based on biological models; neural networks; architecture, e.g. interconnection topology; combinations of networks
    • G06N 3/08: Computing arrangements based on biological models; neural networks; learning methods
    • G06T 7/11: Image analysis; segmentation; edge detection; region-based segmentation
    • G06T 7/194: Image analysis; segmentation; edge detection involving foreground-background segmentation
    • G06T 7/62: Image analysis; analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 2207/10081: Image acquisition modality; tomographic images; computed x-ray tomography [CT]
    • G06T 2207/10088: Image acquisition modality; tomographic images; magnetic resonance imaging [MRI]
    • G06T 2207/10132: Image acquisition modality; ultrasound image
    • G06T 2207/20081: Special algorithmic details; training; learning
    • G06T 2207/20084: Special algorithmic details; artificial neural networks [ANN]
    • G06T 2207/30096: Subject of image; biomedical image processing; tumor; lesion
    • G06T 2207/30101: Subject of image; biomedical image processing; blood vessel; artery; vein; vascular
    • G06T 2207/30204: Subject of image; marker

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Geometry (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention belongs to the technical field of computers, and particularly relates to a medical image evaluation method, system, computer equipment and storage medium. The method comprises the following steps: acquiring initial medical data; sorting the medical image set according to its time labels to obtain a medical image queue to be identified; inputting the queue into a pre-trained neural network model for feature extraction, outputting lesion features and determining their position information; segmenting each medical image to obtain lesion area image blocks; comparing lesion area image blocks that are adjacent on the time axis to calculate the difference region area and the difference ratio between them; and drawing a trend graph of the lesion changing along the time axis. The invention makes it convenient for a doctor to judge the treatment effect of the lesion from the trend graph, to treat the patient in a targeted manner according to the resulting efficacy evaluation, and to track the change of the lesion in each period, thereby achieving the purpose of assisted diagnosis.

Description

Medical image evaluation method, system, computer equipment and storage medium
Technical Field
The invention belongs to the technical field of computers, and particularly relates to a medical image evaluation method, a medical image evaluation system, computer equipment and a storage medium.
Background
With the rapid development of computer science and information technology, medical imaging technology has also advanced rapidly, and new medical imaging devices keep emerging. Medical imaging technology relies on medical imaging equipment to obtain images of internal tissue in a non-invasive manner; beyond X-ray imaging, it now spans many other modalities and applications. Commonly used techniques include angiography, computed tomography, mammography, positron emission tomography, magnetic resonance imaging, medical ultrasonography, and the like. By scanning the patient's body, these devices allow diagnosis and treatment to draw on the patient's medical image data collected at multiple time points, which improves diagnostic accuracy.
Medical imaging is currently widely used as an important reference for medical diagnosis. After the patient's internal tissue is imaged, a doctor can analyze the resulting medical image data, further analyze and quantitatively evaluate the lesion, and finally give a diagnostic recommendation. However, during treatment, a doctor can only judge the current condition of a lesion from the image data of the current visit; the treatment effect cannot be analyzed comprehensively across the patient's medical images at different time points, changes in the diagnosis and treatment of lesions such as tumors cannot be reported, and feedback on the treatment effect is lacking.
Disclosure of Invention
The present invention has been made in view of the above problems. The invention provides a medical image evaluation method, a medical image evaluation system, computer equipment and a storage medium. Based on a patient's medical image data from different time periods, the method locates the lesion and judges its treatment information, obtains an evaluation result of the lesion's treatment efficacy, and feeds back the patient's treatment effect in a timely and accurate manner, so that a doctor can treat the patient in a targeted way according to the efficacy evaluation derived from the medical image data.
The invention is realized by adopting the following technical scheme:
a medical image evaluation method, the method comprising:
acquiring initial medical data, wherein the initial medical data comprises medical image sets of the same patient acquired at different time periods and time labels corresponding to the medical image sets;
sequencing the medical image sets according to the time labels to obtain a medical image queue to be identified;
inputting the medical image queue to be identified into a pre-trained neural network model for feature extraction processing, and outputting the focus features of the medical image queue to be identified;
segmenting the lesion features on the medical images to obtain a lesion area image block corresponding to each medical image, and calculating the size of each lesion area image block;
comparing focus area image blocks of adjacent medical images according to a time axis, and calculating to obtain a difference area and a difference ratio between the adjacent focus area image blocks;
and drawing a trend graph of the lesion along the time axis according to the size, the difference area and the difference ratio of each obtained lesion area image block.
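For orientation, the following minimal Python sketch shows how these steps could fit together; the data layout, the predict_lesion_mask model call and the exact difference formulas are illustrative assumptions, not definitions taken from the patent.

```python
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class MedicalImage:
    pixels: np.ndarray   # grayscale image, shape (H, W)
    time_label: str      # acquisition time, e.g. "2021-03-01"

def evaluate(images: List[MedicalImage], model) -> dict:
    # Step 2: sort into the to-be-identified queue along the time axis.
    queue = sorted(images, key=lambda im: im.time_label)
    # Step 3: lesion features; predict_lesion_mask is an assumed model API that
    # returns a binary mask of the lesion region, all masks having the same shape.
    masks = [model.predict_lesion_mask(im.pixels) for im in queue]
    # Step 4: lesion area image block size (here simply the pixel count).
    sizes = [int(np.count_nonzero(m)) for m in masks]
    # Step 5: compare blocks adjacent on the time axis.
    diffs = []
    for prev, curr in zip(masks, masks[1:]):
        difference = int(np.logical_xor(prev, curr).sum())   # difference pixels
        matched = int(np.logical_and(prev, curr).sum())      # matched pixel pairs
        diffs.append({"diff_area_px": difference,
                      "diff_ratio": difference / max(matched, 1)})
    # Step 6: data needed to draw the trend graph.
    return {"times": [im.time_label for im in queue], "sizes": sizes, "differences": diffs}
```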
Further, the initial medical data comprises image data of one of angiography, computerized tomography, mammography, positron emission tomography, magnetic resonance imaging, or medical ultrasound.
Further, the neural network model is determined by:
acquiring a sample data set, wherein the sample data set comprises a plurality of medical image sets, and each medical image set corresponds to at least two medical images of a patient at different time periods and focus characteristics corresponding to the medical images;
sequentially inputting the plurality of medical image sets in the sample data set into a preset neural network model and training on each medical image set to learn the lesion features in each medical image; outputting the lesion features of each medical image; training in this way to obtain the neural network model; and randomly inputting medical images that contain lesion features and medical images without lesion features for verification.
Further, the step of segmenting the lesion features, according to their position information on the corresponding medical image, into a lesion area image block on each medical image includes:
acquiring a medical image containing position information of focus characteristics;
analyzing the position information of the focus feature to obtain pixel information of the focus feature, and determining a region to be segmented of the focus feature in the medical image according to the pixel information;
performing binarization segmentation processing according to the pixel information, and separating to obtain a foreground area and a background area;
and taking the foreground region obtained by separation as a matting processing result to obtain a focus region image block.
Further, the method for comparing image blocks of the lesion area of adjacent medical images comprises the following steps:
acquiring a plurality of focus region image blocks of different time labels of the same patient, sorting the focus region image blocks according to a time axis, and dividing adjacent focus region image blocks into a first image block and a second image block;
and respectively calculating a difference ratio and a difference area according to the difference value of the matched pixel points of the first image block and the second image block.
Further, the lesion area image block captured at the first (earlier) time is defined as the first image block, and the lesion area image block captured at the second (later) time is defined as the second image block.
The invention also comprises a medical image evaluation system, wherein the medical image evaluation system adopts the above medical image evaluation method to obtain a trend graph of lesion change along the time axis; the medical image evaluation system comprises a data preprocessing module, a feature extraction module, a binarization segmentation module, an image block comparison module and a trend graph generation module.
The data preprocessing module is used for sequencing the acquired initial medical data according to the time labels to obtain a medical image queue to be identified.
The feature extraction module is used for performing feature extraction processing on the input medical image queue to be identified according to a pre-trained neural network model and outputting the lesion features of the medical image queue to be identified;
the binarization segmentation module is used for carrying out portrait binarization segmentation processing on the acquired medical image to be segmented containing the focus characteristics, separating to obtain a foreground region and an absolute background region, and taking the foreground region as a matting processing result to obtain a focus region image block;
the image block comparison module is used for aligning focus area image blocks corresponding to two medical images on adjacent time axes to generate overlapped image information, and obtaining a difference ratio and a difference area according to the difference ratio of the pixel points of the two focus area image blocks; and
and the trend graph generating module is used for drawing a trend graph of the lesion along with the change of the time axis according to the obtained difference ratio and the area of the difference region.
The medical image evaluation system further comprises an overlapped image information module, which is used for aligning the first image block and the second image block corresponding to lesion area image blocks adjacent on the time axis so as to generate overlapped image information, and for calculating the difference ratio and the difference region area respectively from the difference in the number of matched pixel points between the first image block and the second image block.
The invention also includes a computer device comprising a memory storing at least one instruction, at least one program, code set, or instruction set, and a processor that, when loading and executing them, performs the steps of the medical image evaluation method.
The present invention also includes a storage medium having stored thereon at least one instruction, at least one program, code set or instruction set which, when loaded and executed by a processor, implements the steps of the medical image evaluation method.
The technical scheme provided by the invention has the following beneficial effects:
the technical scheme provided by the invention is that according to medical data of the same patient in different time periods, a medical image set is sorted according to a time axis, focus characteristics are extracted and position information is marked on the corresponding medical image, focus characteristics are subjected to binarization processing to divide focus area image blocks, two adjacent focus area image blocks on the time axis are compared, baseline data of image block difference is calculated to obtain focus diagnosis and treatment index data, a trend graph changing along the time axis is drawn, so that a doctor can judge the treatment effect of a focus according to the trend graph, the patient is subjected to targeted treatment according to the obtained treatment effect evaluation result, the doctor can objectively evaluate the diagnosis effect of the patient in a period of time according to the obtained trend graph, accurate analysis is provided according to the condition that whether the focus area is deteriorated, a treatment method is properly adjusted, and further the change conditions of tumors and other periods can be tracked, thereby achieving the purpose of auxiliary diagnosis.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
fig. 1 is a flowchart of a medical image evaluation method according to embodiment 1 of the present invention.
Fig. 2 is a schematic diagram illustrating the operations of identifying, segmenting and comparing lesion areas in the medical image evaluation method according to embodiment 1 of the present invention: (a) input original image; (b) lesion feature extraction; (c) lesion area image block segmentation; (d) lesion area image block comparison.
Fig. 3 is a flowchart of neural network model training in a medical image evaluation method according to embodiment 1 of the present invention.
Fig. 4 is a flowchart of lesion area image block segmentation in the medical image assessment method according to embodiment 1 of the present invention.
Fig. 5 is a flowchart of comparing image blocks of adjacent lesion areas in the medical image evaluation method in embodiment 1 of the present invention.
Fig. 6 is a system block diagram of a medical image evaluation system according to embodiment 2 of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
It is noted that various aspects of the embodiments are described below within the scope of the appended claims. It should be apparent that the aspects described herein may be embodied in a wide variety of forms and that any specific structure and/or function described herein is merely illustrative. Based on the disclosure, one skilled in the art should appreciate that one aspect described herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method practiced using any number of the aspects set forth herein. Additionally, such an apparatus may be implemented and/or such a method may be practiced using other structure and/or functionality in addition to one or more of the aspects set forth herein.
It should be understood that although the terms first, second, etc. may be used to describe various information in embodiments of the present invention, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another.
With the continuing application of medical imaging technology to disease diagnosis, obtaining images of internal tissue in a non-invasive manner is widely used as an important reference for medical diagnosis. In current analysis of medical image data, however, doctors usually analyze the condition only in combination with the lesion as it appears at the current visit. Image data of the same patient from other time periods are not recognized and analyzed, so beyond obvious features visible to the naked eye the medical images cannot be examined accurately, which makes it inconvenient for doctors to follow the change of the patient's lesion in each period and the efficacy of its treatment.
Based on this, the embodiment of the application provides a medical image evaluation method which can be applied to a server, and in particular to an image processing device in the server. The medical image evaluation method comprises: obtaining any two or more pieces of medical image data of the same patient from different time periods; extracting the lesion position with an introduced neural network model and judging the lesion features that carry the lesion's treatment information; segmenting lesion area image blocks and calculating the size of each lesion area image block; comparing two lesion area image blocks adjacent on the time axis and calculating baseline data of the image block difference, including the difference region area and the difference ratio; and finally drawing a trend graph of the lesion along the time axis according to the size, difference region area and difference ratio of each lesion area image block. A doctor can then conveniently judge the treatment effect of the lesion from the trend graph and treat the patient in a targeted manner according to the resulting efficacy evaluation.
The medical image evaluation method provided by the embodiment of the application is suitable for analyzing and evaluating any two or more medical images provided by a patient. For example, in practical applications, a medical imaging application program can be developed based on the inventive concept of this method; by scanning or uploading the patient's medical images taken at different times, a doctor can conveniently analyze and evaluate the severity and/or specific distribution of a lesion such as a tumor on the patient's medical images. Furthermore, such an application program can recommend to the doctor, according to the different detection results, the optimal diagnosis and treatment method for different diseases such as tumors in different situations, so as to treat patients with different diseases in a targeted manner.
Specifically, the embodiments of the present application will be further explained below with reference to the drawings.
Referring to fig. 1, fig. 1 is a flowchart of a medical image evaluation method according to an embodiment of the present invention, and for convenience of description, only the portions related to the embodiment of the present invention are shown. In an embodiment of the present invention, the present embodiment provides a medical image evaluation method, including the steps of:
s101: initial medical data is acquired.
In this embodiment, the image processing apparatus may acquire initial medical data comprising a medical image set and the time labels corresponding to the medical image set. The medical image set refers to medical images that include the patient's detected lesion region, from which all lesion features of the patient can be obtained. The medical image set of the patient also carries a "time label" recording when each medical image was taken, i.e. the times at which the patient's different medical images were captured, so that medical images from different periods can be distinguished by their time labels.
In this embodiment, the "medical image set" may be acquired by an image processing apparatus including one of angiography, computerized tomography, mammography, positron emission tomography, magnetic resonance imaging, or medical ultrasound.
Further, the medical image set may consist of pictures or video. In this embodiment pictures are preferred (for example, showing the patient's MRI results at different times). When the medical image set is a video (for example, a magnetic resonance imaging clip copied from the hospital information department), the obtained video is frame-rate converted to N frames per second, and the video frames are extracted in time order as part of the medical image set. For example, in the present embodiment video frames are acquired at a rate of 5 frames per second and sorted in time order; alternatively, a subset of the frames may be extracted as the medical image set according to requirements or empirical data, e.g. taking the middle frame and the two end frames of the video acquired every 10 seconds.
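As one possible implementation of the frame-extraction step just described, the following sketch samples a video at roughly 5 frames per second; the function name and the use of OpenCV (cv2) are illustrative assumptions.

```python
import cv2

def sample_frames(video_path: str, target_fps: float = 5.0):
    """Return grayscale frames sampled at roughly target_fps, in time order."""
    cap = cv2.VideoCapture(video_path)
    native_fps = cap.get(cv2.CAP_PROP_FPS) or target_fps
    step = max(int(round(native_fps / target_fps)), 1)   # keep every step-th frame
    frames, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
        index += 1
    cap.release()
    return frames
```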
In the embodiment of the present invention, as shown in Fig. 2(a) (the input original image), the medical images in the set acquired by the image processing device may further be converted to grayscale; for example, CT images of the same patient obtained within a period of time are gray-processed into black-and-white images. Image graying may be realized with the maximum value method, the average value method, the weighted average method, or the like. Taking the weighted average method as an example, the three color components R, G, B of the medical image are set to the same value, i.e. R = G = B = wr*R + wg*G + wb*B, where wr, wg and wb are the weights of R, G and B, respectively. Gray images with different gray levels are obtained by changing the weights; for example, with the weights set to wr = 32%, wg = 57%, wb = 10%, the medical image is gray-processed.
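A minimal sketch of the weighted-average graying described above, using the weights quoted in this embodiment (wr = 0.32, wg = 0.57, wb = 0.10); the function name and the NumPy-based implementation are assumptions.

```python
import numpy as np

def weighted_average_gray(rgb: np.ndarray,
                          wr: float = 0.32, wg: float = 0.57, wb: float = 0.10) -> np.ndarray:
    """Convert an H x W x 3 RGB image to grayscale: Gray = wr*R + wg*G + wb*B."""
    r, g, b = rgb[..., 0].astype(float), rgb[..., 1].astype(float), rgb[..., 2].astype(float)
    gray = wr * r + wg * g + wb * b
    return gray.astype(np.uint8)
```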
S102: and sequencing the medical image sets according to the time labels to obtain a medical image queue to be identified.
In this embodiment, the initial medical data are scanned according to the start and end times of the time labels, the time labels of all medical images in the medical image set are extracted, the medical images are sorted according to the recorded time labels, the images are then scanned in the sorted order, and a reading order is assigned to each medical image, forming the medical image queue to be identified.
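The queue-building step can be sketched as follows; the (path, time label) record layout and the time-label format are assumptions made for illustration.

```python
from datetime import datetime

def build_queue(records):
    """records: iterable of (image_path, time_label), labels like '2021-09-08 10:30'."""
    parsed = [(path, datetime.strptime(label, "%Y-%m-%d %H:%M")) for path, label in records]
    parsed.sort(key=lambda item: item[1])      # ascending along the time axis
    return [path for path, _ in parsed]        # reading order of the queue

# Example (hypothetical file names):
# build_queue([("ct_followup.png", "2021-06-01 09:00"), ("ct_baseline.png", "2021-03-01 09:00")])
# -> ["ct_baseline.png", "ct_followup.png"]
```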
S103: and inputting the medical image queue to be identified into a pre-trained neural network model for feature extraction processing, and outputting the focus features of the medical image queue to be identified.
In the present embodiment, as shown in Fig. 2(b) (lesion feature extraction), the neural network model is determined by the following steps:
s301, a sample data set is obtained, wherein the sample data set comprises a plurality of medical image sets, and each medical image set corresponds to at least two medical images of a patient in different time periods and focus characteristics corresponding to the medical images.
In this embodiment, the sample data set may be taken from a medical MNIST CT image data set containing a large amount of medical image data such as brain CT, hand CT, chest CT, abdomen CT, breast MRI, and the like. Specifically, the diagnosis of colon cancer by CT scan is taken as an example. Patient data in the sample dataset include lesion features for cases without polyps, with 5-10 mm polyps, and with polyps larger than 10 mm. The sample dataset contains 1152 cases with accompanying XLS data, which provide a polyp description and its location in the colon segment.
S302, sequentially inputting a plurality of medical image sets in the sample data set into a preset neural network model, training each medical image set in the sample data set to obtain focus characteristics in each medical image, outputting the focus characteristics of each medical image, training to obtain the neural network model, and randomly inputting medical images containing the focus characteristics and medical images without the focus characteristics for verification.
In this embodiment, each medical image of the medical image set in the sample data set may be trained by using a preset neural network model, so as to obtain a lesion feature of each medical image. In this step, the lesion features are embedded in the training process, i.e., the medical image and the corresponding lesion features are input into the neural network model to obtain the trained neural network model. In this embodiment, the neural network model may include a Convolutional Neural Network (CNN) or a Deep Convolutional Neural Network (DCNN), which is not specifically limited herein.
Specifically, the sample data set, including patient data without polyps, with 5-10 mm polyps, and with lesion features larger than 10 mm, can be input into the selected convolutional neural network; through training on the 1152 cases with accompanying XLS data, the corresponding polyp lesion features are extracted, and the neural network model is obtained through repeated training. During verification, at least one set of XLS-documented cases with polyps of different sizes is input at random to verify that the output matches the polyp size.
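For illustration only, the following is a minimal PyTorch sketch of training a small convolutional network on a binary "lesion present / absent" label; the actual network architecture, input size, dataset wrapper and training schedule used by the invention are not specified in the patent, so everything here is an assumption.

```python
import torch
import torch.nn as nn

class SmallLesionCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)   # assumes 64x64 grayscale inputs

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

def train_one_epoch(model, loader, lr=1e-3):
    """loader yields (images, labels): images (N, 1, 64, 64), labels (N,) in {0, 1}."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```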
Feature extraction is performed by the pre-trained neural network model, the lesion features of the medical image queue to be identified are output, and at the same time the position information of the lesion features on the corresponding medical images is determined.
In this embodiment, the location information of the lesion feature is marked by a pixel point of the region where the lesion feature is located. Specifically, in the medical image, all pixels within the contour of the acquired lesion feature may be set as white pixels, and all pixels outside the contour of the lesion feature may be set as black pixels, so as to obtain the medical image including the position information of the lesion feature.
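A minimal sketch of producing such a position mask, assuming the lesion contour is available as a list of (x, y) vertices and using OpenCV's polygon fill; both assumptions are illustrative.

```python
import numpy as np
import cv2

def lesion_position_mask(image_shape, contour_points):
    """image_shape: (H, W); contour_points: list of (x, y) vertices of the lesion contour."""
    mask = np.zeros(image_shape, dtype=np.uint8)               # black background pixels
    pts = np.array(contour_points, dtype=np.int32).reshape(-1, 1, 2)
    cv2.fillPoly(mask, [pts], 255)                             # white pixels inside the contour
    return mask
```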
S104: and segmenting the lesion features on the medical images to obtain lesion area image blocks corresponding to each medical image, and calculating the size of each lesion area image block.
In this embodiment, the method of segmenting the lesion features, according to their position information on the corresponding medical image, into a lesion area image block on each medical image includes:
s401, acquiring a medical image containing position information of the lesion feature.
S402, analyzing the position information of the lesion feature to obtain pixel information of the lesion feature, and determining the region to be segmented of the lesion feature in the medical image according to the pixel information. In this embodiment, the lesion feature with marked position information is processed at pixel level, and the region to be segmented that contains the lesion feature is obtained by fusing the lesion feature with its spatial position information. The pixel matrix corresponding to the region to be segmented is determined by analysis, so as to obtain the pixel information of the region to be segmented.
And S403, performing binarization segmentation (binarization segmentation) processing according to the pixel information, and separating to obtain a foreground region and a background region.
In this embodiment, an approximate contour of the separated foreground is determined from the pixel information of the region to be segmented, a trimap is created from the binarization segmentation result for static image matting, a finer soft segmentation of the foreground edge in the contour region is performed with an alpha matting algorithm, and the foreground region and the background region are obtained by separation. Trimap-based static image matting and alpha matting algorithms are prior art and are not described here. Specifically, gray values are set for the pixel points of the medical image according to a preset binarization rule; to keep the pixel settings consistent with those used for the lesion feature position information, in this embodiment the gray value is set to 0 or 255, so that the medical image and the marked lesion position information produce the same visual effect, i.e. black and white with a clear visual difference. The texture of the lesion feature in the medical image is then mapped. After binarization, a binarization threshold is determined at the pixel positions according to the pixel value distribution of the lesion feature, so that the lesion feature (white pixels) is segmented as the foreground region and the remaining black pixels as the background region, and soft matting with the trimap-based matting and alpha matting algorithms yields a better separation.
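The coarse binarization part of this step could be sketched as follows using Otsu's threshold in OpenCV; the trimap construction and alpha-matting refinement described above are deliberately omitted here and would be layered on top of this mask.

```python
import cv2
import numpy as np

def binarize_lesion_region(gray_roi: np.ndarray):
    """gray_roi: 8-bit grayscale region to be segmented. Returns (foreground, background) masks."""
    # Pixels become 0 or 255, matching the black/white convention used for the
    # lesion position information.
    _, binary = cv2.threshold(gray_roi, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    foreground = binary == 255
    return foreground, ~foreground

def extract_lesion_block(gray_roi: np.ndarray) -> np.ndarray:
    """Matting-style result: keep foreground pixels, zero out the background."""
    foreground, _ = binarize_lesion_region(gray_roi)
    return np.where(foreground, gray_roi, 0).astype(np.uint8)
```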
S404, the foreground region obtained by separation is taken as the matting result to obtain the lesion area image block. In this embodiment, as shown in Fig. 2(c), the lesion area image block is the segmented foreground region obtained by matting and separation.
After the lesion area image block is segmented, its size is calculated from the pixel points of the lesion feature; for example, the size of a polyp in the colon segment is calculated by multiplying the number of pixel points identified as the lesion feature by an empirical parameter. Specifically, when the lesion area image block contains 1012 pixels and the image resolution is 72 pixels/inch, one inch of length corresponds to 72 pixels, i.e. the empirical parameter is 5184 pixels per square inch. The 1012 pixels of the lesion area image block are therefore equivalent to about 0.1952 square inches, which equals 125.935232 square millimeters, approximately a square region with a side length of 11.22 millimeters; the diameter can be converted according to the shape of the lesion area image block.
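The pixel-to-area conversion in this worked example can be reproduced with a few lines; the 645.16 mm² per square inch constant and the 72 ppi default are taken from the example above, and the function name is an assumption.

```python
import math

def lesion_area_mm2(pixel_count: int, pixels_per_inch: float = 72.0) -> float:
    pixels_per_sq_inch = pixels_per_inch ** 2          # 5184 for 72 ppi
    area_sq_inch = pixel_count / pixels_per_sq_inch
    return area_sq_inch * 645.16                       # mm^2 per square inch

area = lesion_area_mm2(1012)                           # ~125.9 mm^2
side = math.sqrt(area)                                 # ~11.2 mm equivalent square side
```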
And S105, comparing focus area image blocks of adjacent medical images according to a time axis, and calculating to obtain the difference area and the difference ratio between the adjacent focus area image blocks.
In an embodiment of the present invention, referring to fig. 1 and 5, a method for comparing image blocks of a lesion area of adjacent medical images includes:
s501, acquiring a plurality of focus area image blocks with different time labels of the same patient, sorting the focus area image blocks according to a time axis, and dividing adjacent focus area image blocks into a first image block and a second image block;
wherein the lesion area image block captured at the first (earlier) time is defined as the first image block, and the lesion area image block captured at the second (later) time is defined as the second image block.
S502, aligning the first image block and the second image block to generate overlapped image information, and respectively calculating a difference ratio and a difference area according to the difference value of the number of matched pixels of the first image block and the second image block.
In this embodiment, the adjacent lesion area image blocks are divided into a first image block and a second image block according to their capture order. Taking rectangular image blocks as an example, the center point of each block is identified from the vertices of the pixel matrix region in which it lies, and image alignment is performed according to the center point of the first image block and the center point of the second image block to generate overlapped image information. If, for example, the two image blocks are circular, the center of the pixel matrix region is used as the center point. The center points of the pixel matrices are thus determined from the areas of the two image blocks, and image alignment on these center points generates concentric overlapped image information. As shown in Fig. 2(d) (the comparison result of the lesion area image blocks), the difference pixel points, i.e. the pixels outside the matched pixel point pairs formed by the first image block and the second image block, are obtained; the difference ratio between the difference pixel points and the pixel point pairs and the difference region area corresponding to the difference pixel points are then calculated, giving the difference ratio and the difference region area.
When the number of pixel points of the first image block is larger than that of the second image block, the calculated difference ratio represents an improvement of the lesion area, e.g. of a tumor; the difference ratio is marked as a positive value and the calculated difference region area is also marked positive.
When the number of pixel points of the first image block is smaller than that of the second image block, the calculated difference ratio represents a deterioration of the lesion area, e.g. of a tumor; the difference ratio is marked as a negative value and the calculated difference region area is also marked negative.
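A hedged sketch of the comparison and sign convention described in this step: centering both blocks on a common canvas is one assumed way of aligning them, and the difference ratio is computed here as difference pixels over matched pixel pairs, which is one plausible reading of the description.

```python
import numpy as np

def center_on_canvas(block: np.ndarray, canvas_shape):
    """Place a binary lesion block at the center of a zeroed canvas."""
    canvas = np.zeros(canvas_shape, dtype=bool)
    h, w = block.shape
    top = (canvas_shape[0] - h) // 2
    left = (canvas_shape[1] - w) // 2
    canvas[top:top + h, left:left + w] = block > 0
    return canvas

def compare_lesion_blocks(first: np.ndarray, second: np.ndarray):
    """first/second: 2-D binary lesion-region blocks; first was captured earlier."""
    shape = (max(first.shape[0], second.shape[0]), max(first.shape[1], second.shape[1]))
    a, b = center_on_canvas(first, shape), center_on_canvas(second, shape)
    matched_pairs = int(np.logical_and(a, b).sum())   # overlapping lesion pixels
    difference = int(np.logical_xor(a, b).sum())      # pixels present in only one block
    ratio = difference / max(matched_pairs, 1)
    # Sign convention from the description: shrinkage (first block larger) is positive.
    sign = 1 if a.sum() > b.sum() else -1
    return sign * ratio, sign * difference
```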
S106: and drawing a trend graph of the lesion along the time axis according to the size, the difference area and the difference ratio of each obtained lesion area image block.
In the embodiment of the invention, the time axis is taken as the X axis and the time label of each medical image is marked along it; the difference ratio between each pair of adjacent medical images is plotted as a first bar series and the corresponding difference region area as a second bar series against the Y axis, and improvement periods and deterioration periods are separated, so that the change of the lesion is fed back intuitively.
Meanwhile, with the time labels as X-axis marks and the calculated size of each lesion area image block as the ordinate, a histogram of the lesion area over time is drawn to represent the size change of the lesion in each period.
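One possible way to draw the trend graph with matplotlib is sketched below; the two-panel layout and the secondary axis for the difference area are presentation choices assumed here, not requirements of the method.

```python
import matplotlib.pyplot as plt

def plot_trend(time_labels, sizes, diff_ratios, diff_areas):
    """time_labels/sizes have N entries; diff_ratios/diff_areas have N - 1 entries."""
    fig, (ax_size, ax_diff) = plt.subplots(2, 1, figsize=(8, 6))
    # Top panel: lesion size at each time label.
    ax_size.bar(range(len(time_labels)), sizes, color="steelblue")
    ax_size.set_xticks(range(len(time_labels)))
    ax_size.set_xticklabels(time_labels)
    ax_size.set_ylabel("lesion size")
    # Bottom panel: signed difference ratio between adjacent images; positive bars
    # mark improvement periods, negative bars mark deterioration periods.
    x = list(range(len(diff_ratios)))
    pair_labels = [f"{a} to {b}" for a, b in zip(time_labels, time_labels[1:])]
    ax_diff.bar(x, diff_ratios, color="seagreen", label="difference ratio")
    ax_diff.axhline(0, color="black", linewidth=0.8)
    ax_diff.set_xticks(x)
    ax_diff.set_xticklabels(pair_labels)
    ax_diff.set_ylabel("difference ratio")
    # Difference region area on a secondary axis, since its scale differs.
    ax_area = ax_diff.twinx()
    ax_area.plot(x, diff_areas, "o--", color="darkorange", label="difference area")
    ax_area.set_ylabel("difference area (pixels)")
    fig.tight_layout()
    plt.show()
```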
According to the technical scheme provided by the invention, medical data of the same patient from different time periods are used: the medical image set is sorted along the time axis, lesion features are extracted and their position information is marked on the corresponding medical images, lesion area image blocks are segmented by binarizing the lesion features, two lesion area image blocks adjacent on the time axis are compared, and baseline data of the image block difference are calculated to obtain lesion diagnosis and treatment index data, from which a trend graph along the time axis is drawn. A doctor can thus judge the treatment effect of the lesion from the trend graph and treat the patient in a targeted manner according to the resulting efficacy evaluation.
As shown in fig. 6, fig. 6 is a block diagram of a medical image evaluation system provided in an embodiment of the present application, which can be applied to an image processing apparatus and can execute a method for evaluating a medical image in any of the above-mentioned method embodiments. Specifically, in an embodiment of the present invention, a medical image evaluation system includes a data preprocessing module 11, a feature extraction module 12, a binarization segmentation module 13, an image block comparison module 14, and a trend graph generation module 15.
The data preprocessing module 11 is configured to sort the acquired initial medical data according to the time tag to obtain a medical image queue to be identified.
The feature extraction module 12 is configured to perform feature extraction processing on the input medical image queue to be identified according to a pre-trained neural network model, and output focus features of the medical image queue to be identified.
The binarization segmentation module 13 is configured to perform matting-style binarization segmentation on the acquired medical image to be segmented containing the lesion features, separate it into a foreground region and an absolute background region, and obtain the lesion area image block by taking the foreground region as the matting result.
The image block comparison module 14 is configured to align the lesion area image blocks corresponding to two medical images adjacent on the time axis to generate overlapped image information, and to obtain the difference ratio and the difference region area from the differences between the pixel points of the two lesion area image blocks.
The trend graph generating module 15 is used for drawing a trend graph of the lesion along the time axis according to the obtained difference ratio and the area of the difference region.
The medical image evaluation system further comprises an overlapped image information module, wherein the overlapped image information module is used for aligning a first image block and a second image block corresponding to adjacent focus area image blocks on a time axis to generate overlapped image information, and respectively calculating a difference ratio and a difference area according to the difference value of the number of matched pixels of the first image block and the second image block.
In an embodiment of the present invention, the medical image evaluation system executes the steps of the medical image evaluation method described above, and can be applied in the post-processing software for acquired medical images in an image processing device. The invention can be used to find the differences between lesions in medical images taken at different times, so that a doctor can objectively evaluate the patient's treatment efficacy over a period of time from the obtained trend graph, accurately analyze whether the lesion region has deteriorated, adjust the treatment method appropriately, and further track the changes of tumors and other lesions at each stage, thereby achieving the purpose of assisted diagnosis. The operation of the medical image evaluation system is therefore not described again in detail in this embodiment.
In an embodiment of the present invention, a computer device is provided, which comprises a memory in which at least one instruction, at least one program, code set, or instruction set is stored, and a processor which, when loading and executing them, carries out the steps of the above-mentioned method embodiments.
Furthermore, some embodiments may include a storage medium storing a program for executing the method set forth in this specification on a computer; at least one instruction, at least one program, code set, or instruction set is stored on it and, when loaded and executed by a processor, implements the steps of the above method embodiments. Examples of the computer-readable recording medium include hardware devices specifically configured to store and execute program commands, magnetic media such as hard disks, floppy disks and magnetic tape, optical recording media such as CD-ROM and DVD, magneto-optical media, and ROM, RAM, flash memory, and the like. Examples of program commands include machine code produced by a compiler and high-level language code executed by a computer using an interpreter or the like.
One of ordinary skill in the art will appreciate that all or part of the processes of the methods of the above embodiments may be implemented by at least one instruction, at least one program, code set, or instruction set, which may be stored in a non-volatile computer-readable storage medium, and the at least one instruction, at least one program, code set, or instruction set may include the processes of the embodiments of the methods described above when executed. Any reference to memory, storage, database or other medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory.
In summary, from any two or more medical images of the same patient taken at different times, the present invention extracts lesion features, compares the differences of the lesion features between adjacent points on the time axis, quickly finds the difference ratio and difference region area of the lesions at the two time nodes, and generates a trend graph distributed along the time axis. This reduces the problem that a doctor, unable to find the differences between lesions in medical images from different times manually, cannot effectively judge the treatment effect; it also allows the doctor to objectively evaluate the patient's treatment efficacy over a period of time from the obtained trend graph, to give an accurate analysis of whether the lesion region has deteriorated, to adjust the treatment method appropriately, and further to track the changes of tumors and other lesions at each stage, achieving the purpose of assisted diagnosis.
In addition, by obtaining the difference ratio and the difference region area between lesion area image blocks, the lesion changes of all time periods are connected along the time axis and a trend graph of the lesion region over time is produced. The size of a varying lesion region such as a tumor can then conveniently be tracked from the trend graph as an evaluation criterion, so that changes in the condition of the lesion are followed, which facilitates the doctor's diagnosis.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (10)

1. A medical image evaluation method characterized by comprising:
acquiring initial medical data, wherein the initial medical data comprises medical image sets acquired by the same patient at different time periods and time labels corresponding to the medical image sets;
sequencing the medical image sets according to the time labels to obtain a medical image queue to be identified;
inputting the medical image queue to be identified into a pre-trained neural network model for feature extraction processing, and outputting the focus features of the medical image queue to be identified;
acquiring a medical image containing position information of focus features, analyzing the position information of the focus features to acquire pixel information of the focus features, determining a region to be segmented of the focus features in the medical image according to the pixel information, performing binarization segmentation processing according to the pixel information, performing segmentation processing on the focus features on the medical image to obtain focus region image blocks corresponding to each medical image, and calculating the size of each focus region image block;
comparing focus area image blocks of adjacent medical images according to a time axis, determining a center point according to the area of the pixel matrix in which each of the adjacent focus area image blocks is located, performing image alignment according to the center point of a first image block and the center point of a second image block to generate concentric overlapped image information, obtaining difference pixel points other than the pixel point pairs formed by the first image block and the second image block, and calculating the difference ratio between the difference pixel points and the pixel point pairs and the difference region area corresponding to the difference pixel points, so as to obtain the difference ratio and the difference region area;
and drawing a trend graph of the lesion along the time axis according to the size, the difference area and the difference ratio of each obtained lesion area image block.
2. The medical image evaluation method according to claim 1, characterized in that: the initial medical data comprises image data from one of angiography, computerized tomography, mammography, positron emission tomography, magnetic resonance imaging, or medical ultrasound examination.
3. The medical image evaluation method according to claim 1 or 2, characterized in that: the neural network model is determined by:
acquiring a sample data set, wherein the sample data set comprises a plurality of medical image sets, and each medical image set corresponds to at least two medical images of a patient at different time periods and focus characteristics corresponding to the medical images;
sequentially inputting a plurality of medical image sets in the sample data set into a preset neural network model to train each medical image set in the sample data set so as to obtain the focus characteristics in each medical image, outputting the focus characteristics of each medical image, and training to obtain the neural network model.
4. The medical image evaluation method according to claim 1, characterized in that: the method of segmenting the lesion features, according to their position information on the corresponding medical image, into a lesion area image block on each medical image further comprises the following steps:
performing binarization segmentation processing according to the pixel information, and separating to obtain a foreground area and a background area;
and taking the foreground region obtained by separation as a matting processing result to obtain a focus region image block.
5. The medical image evaluation method according to claim 4, characterized in that: comparing the focus area image blocks of the adjacent medical images, wherein the comparison comprises the following steps:
acquiring a plurality of focus region image blocks of different time labels of the same patient, sorting the focus region image blocks according to a time axis, and dividing adjacent focus region image blocks into a first image block and a second image block;
and respectively calculating the area of a difference region and the difference ratio according to the difference value of the matched pixel points of the first image block and the second image block.
6. The medical image evaluation method according to claim 5, characterized in that: the lesion area image block captured at the first time is defined as the first image block, and the lesion area image block captured at the second time is defined as the second image block.
7. A medical image evaluation system characterized by: the medical image evaluation system adopts the medical image evaluation method of any one of claims 1 to 6 to obtain a trend graph of lesion changes along a time axis; the medical image evaluation system includes:
the data preprocessing module is used for sequencing the acquired initial medical data according to the time labels to obtain a medical image queue to be identified;
the characteristic extraction module is used for carrying out characteristic extraction processing on the input medical image queue to be identified according to a pre-trained neural network model and outputting focus characteristics of the medical image queue to be identified;
the binarization segmentation module is used for carrying out matting-style binarization segmentation processing on the acquired medical image to be segmented containing the focus characteristics, separating it to obtain a foreground region and an absolute background region, and taking the foreground region as a matting processing result to obtain a focus region image block;
the image block comparison module is used for aligning focus area image blocks corresponding to two medical images adjacent on the time axis to generate overlapped image information, and obtaining a difference ratio and a difference region area according to the differences between the pixel points of the two focus area image blocks; and the trend graph generating module is used for drawing a trend graph of the lesion changing along the time axis according to the obtained difference ratio and difference region area.
8. The medical image evaluation system of claim 7, wherein: further comprising:
and the overlapped image information module is used for carrying out image alignment on a first image block and a second image block corresponding to adjacent focus area image blocks on a time axis so as to generate overlapped image information, and respectively calculating a difference ratio and a difference area according to the difference value of the matched pixel points of the first image block and the second image block.
9. A computer device comprising a memory and a processor, the memory storing at least one instruction, at least one program, set of codes, or set of instructions, wherein the processor when loading and executing the at least one instruction, at least one program, set of codes, or set of instructions implements the steps of the method of any one of claims 1 to 7.
10. A storage medium storing at least one instruction, at least one program, set of codes or set of instructions, wherein said at least one instruction, at least one program, set of codes or set of instructions, when loaded and executed by a processor, implements the steps of the method of any one of claims 1 to 7.
CN202111052193.1A 2021-09-08 2021-09-08 Medical image evaluation method, system, computer equipment and storage medium Active CN113506294B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111052193.1A CN113506294B (en) 2021-09-08 2021-09-08 Medical image evaluation method, system, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111052193.1A CN113506294B (en) 2021-09-08 2021-09-08 Medical image evaluation method, system, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113506294A (en) 2021-10-15
CN113506294B (en) 2022-02-08

Family

ID=78016908

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111052193.1A Active CN113506294B (en) 2021-09-08 2021-09-08 Medical image evaluation method, system, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113506294B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114025253A (en) * 2021-11-05 2022-02-08 杭州联众医疗科技股份有限公司 Drug efficacy evaluation system based on real world research
CN113822385B (en) * 2021-11-24 2022-05-13 深圳江行联加智能科技有限公司 Coal conveying abnormity monitoring method, device and equipment based on image and storage medium
CN116258717B (en) * 2023-05-15 2023-09-08 广州思德医疗科技有限公司 Lesion recognition method, device, apparatus and storage medium
CN116646062B (en) * 2023-06-08 2023-12-22 南京大经中医药信息技术有限公司 Intelligent auxiliary analysis system for traditional Chinese medicine tongue diagnosis instrument
CN116433660B (en) * 2023-06-12 2023-09-15 吉林禾熙科技开发有限公司 Medical image data processing device, electronic apparatus, and computer-readable storage medium
CN117274244B (en) * 2023-11-17 2024-02-20 艾迪普科技股份有限公司 Medical imaging inspection method, system and medium based on three-dimensional image recognition processing

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107274443A (en) * 2017-06-05 2017-10-20 河北大学 Prostate diffusion weighted images sequence method for registering
CN107492097A (en) * 2017-08-07 2017-12-19 北京深睿博联科技有限责任公司 A kind of method and device for identifying MRI image area-of-interest

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6563942B2 (en) * 2014-02-12 2019-08-21 コーニンクレッカ フィリップス エヌ ヴェKoninklijke Philips N.V. System for monitoring lesion size trend and operation method thereof
JP2016202721A (en) * 2015-04-27 2016-12-08 コニカミノルタ株式会社 Medical image display apparatus and program
IT201800010833A1 (en) * 2018-12-05 2020-06-05 St Microelectronics Srl Process of image processing, corresponding computer system and product
CN110021430A (en) * 2019-04-09 2019-07-16 科大讯飞股份有限公司 A kind of the attribute information prediction technique and device of lesion
CN111047591A (en) * 2020-03-13 2020-04-21 北京深睿博联科技有限责任公司 Focal volume measuring method, system, terminal and storage medium based on deep learning
CN111528918B (en) * 2020-04-30 2023-02-21 深圳开立生物医疗科技股份有限公司 Tumor volume change trend graph generation device after ablation, equipment and storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107274443A (en) * 2017-06-05 2017-10-20 河北大学 Prostate diffusion weighted images sequence method for registering
CN107492097A (en) * 2017-08-07 2017-12-19 北京深睿博联科技有限责任公司 A kind of method and device for identifying MRI image area-of-interest

Also Published As

Publication number Publication date
CN113506294A (en) 2021-10-15

Similar Documents

Publication Publication Date Title
CN113506294B (en) Medical image evaluation method, system, computer equipment and storage medium
Wang et al. A benchmark for comparison of dental radiography analysis algorithms
Wang et al. Evaluation and comparison of anatomical landmark detection methods for cephalometric x-ray images: a grand challenge
Mamonov et al. Automated polyp detection in colon capsule endoscopy
US9763635B2 (en) Method, apparatus and system for identifying a specific part of a spine in an image
US9087259B2 (en) Organ-specific enhancement filter for robust segmentation of medical images
EP3424017B1 (en) Automatic detection of an artifact in patient image data
CN111340825A (en) Method and system for generating mediastinal lymph node segmentation model
US7539332B1 (en) Method and system for automatically identifying regions of trabecular bone tissue and cortical bone tissue of a target bone from a digital radiograph image
US20240020823A1 (en) Assistance diagnosis system for lung disease based on deep learning and assistance diagnosis method thereof
JP2023516651A (en) Class-wise loss function to deal with missing annotations in training data
KR20130101867A (en) A method and apparatus outputting an analyzed information for a specific part in the living body
Dhalia Sweetlin et al. Patient-Specific Model Based Segmentation of Lung Computed Tomographic Images.
Giv et al. Lung segmentation using active shape model to detect the disease from chest radiography
Kumar et al. Pulmonary nodules diagnosis from x-ray imaging using image processing
JP7155274B2 (en) Systems and methods for accelerated clinical workflow
Ding et al. Research on Spinal Canal GenerationMethod based on Vertebral Foramina Inpainting of Spinal CT Images by using BEGAN.
Ramesh et al. Automatic Endoscopic Ultrasound Station Recognition with Limited Data
KR102559805B1 (en) Medical Image Conversion Method and Device based on Artificial Intelligence having Improved Versatility
CN113989277B (en) Imaging method and device for medical radiation diagnosis and treatment examination
Kanade et al. Suppressing Chest Radiograph Ribs for Improving Lung Nodule Visibility by using Circular Window Adaptive Median Outlier (CWAMO)
Mallios et al. Deep Rectum Segmentation for Image Guided Radiation Therapy with Synthetic Data
WO2023020609A1 (en) Systems and methods for medical imaging
Ray et al. Identification of abnormal brain images from CT image dataset
Ebrahimi et al. Automatic Segmentation and Identification of Spinous Processes on Sagittal X-Rays Based on Random Forest Classification and Dedicated Contextual Features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant