CN107146222B - Medical image compression method based on human anatomy structure similarity - Google Patents
- Publication number: CN107146222B
- Application number: CN201710267790.3A
- Authority
- CN
- China
- Prior art keywords
- region
- segmentation
- regions
- density
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Quality & Reliability (AREA)
- Apparatus For Radiation Diagnosis (AREA)
Abstract
The invention relates to a medical image compression algorithm based on human anatomical structure similarity, which combines a traditional intensity-based segmentation algorithm with anatomical knowledge to segment specific organs in abdominal CT datasets. A candidate region for each organ is first obtained from a priori anatomical knowledge of the current dataset, and the organ's data is then accurately extracted within that candidate region using a density-based approach. Because it uses the relative position of each organ in the body, the invention can be applied to images of patients of different sizes. Moreover, the segmentation is performed progressively: candidate regions are defined roughly, and the target region is then refined with a density-based segmentation method, which improves segmentation accuracy. The invention can be used to segment individual organs in medical images, adapts to the anatomical variability between different patients, helps to reduce segmentation errors, and ultimately improves the subsequent compression operations.
Description
Technical Field
The invention relates to the technical field of computer application, in particular to the field of digital image processing.
Background
Medical imaging techniques play an indispensable role in modern clinical medicine, and are important bases for doctors to diagnose conditions. However, as the resolution of medical imaging equipment increases, the acquired images generate enormous data volume, and huge pressure is brought to image storage and real-time transmission. It is therefore necessary to find an efficient compression algorithm that can compress an image while maintaining a good quality.
Most medical images are three-dimensional slice sequences: correlation exists not only within a slice but also, strongly, between slices, and the more slices there are, the stronger this inter-slice correlation becomes. These characteristics make medical image compression different from ordinary image compression; both lossy and lossless compression are used to reduce or remove this correlation.
Lossy compression is used to improve transmission speed and save space: at a given target bit rate, the reconstructed image is very close to the original in the mean-square-error sense. However, in order to achieve a high compression ratio, lossy compression inevitably degrades the medical image to some extent and may cause the loss of key diagnostic information.
Lossless compression mainly uses predictive coding or transform coding. Predictive coding is one of the earliest compression techniques used in the field of medical images, and one of the key challenges of this compression technique is to generate an accurate prediction model, and the current main prediction models are a JPEG-based prediction model, a context-based adaptive prediction model, and a least-squares-based adaptive prediction model.
JPEG-LS performs well in lossless compression of still images, even exceeding JPEG2000 lossless compression, but it targets only single images and cannot exploit the inter-slice correlation of an image sequence. JPEG-based prediction techniques are computationally inexpensive but perform poorly when used to compress complex images, because the fixed JPEG predictors do not adapt well to a particular image context.
Context-based adaptive lossless compression switches between different sub-predictors when the intensity gradient near the current pixel changes significantly, and uses the optimal sub-predictor to perform the decorrelation operation in 6 directions around the pixel: up, down, left, right, front and back. This technique requires longer encoding and decoding times than JPEG-based prediction; in addition, the parameters of the prediction model (switching thresholds and predictor coefficients) are predefined experimentally and cannot adapt to the local data characteristics of the compressed image, which reduces decorrelation performance.
Adaptive methods based on least squares have demonstrated significant improvements over context-based adaptive prediction schemes: they update the model by locally optimizing the prediction coefficients, usually computed by the least-mean-square principle, to produce accurate predictions. Although such methods can update the prediction model adaptively during both the encoding and decoding stages, there is also a variant that regenerates the prediction model at the decoder directly from header information, eliminating the heavy computation of the decoding stage and providing fast decoding.
Transform coding is another important class of compression techniques in the medical field. Most research has focused on wavelet transforms for the transform stage, since they offer superior decorrelation and localization in both the time and frequency domains relative to other transforms. To develop wavelet-based compression further, it should be combined with the visual characteristics of the human eye to improve image quality and compression ratio, and with the strengths of other compression methods.
Medical images have features that differ from general images, such as weak edges caused by the Partial Volume Effect (PVE): pixels on a boundary take an average of the values of the surrounding tissues, which "blurs" the boundary region, so general prediction methods are not directly suitable for compressing medical images. Considering the characteristics of medical image data, namely bilateral anatomical symmetry and structural anatomical similarity across different patients, it is therefore very necessary to propose a compression algorithm tailored to medical images.
Disclosure of Invention
The invention aims to provide a medical image compression algorithm based on human anatomy structure similarity, which is an adaptive prediction technology with anatomy guidance and local optimization. The compression scheme utilizes the anatomical features of the patient to pre-locate different anatomical regions within the medical data set, then optimizes the prediction factors for the adaptive prediction model for each particular anatomical region to produce predicted values, then subtracts the predicted values from the actual values to obtain the final prediction error, and finally uses the final prediction error for entropy coding.
Lossless compression is a reversible process in which the decompressed data is numerically identical to the original data. This type of technique is preferred for medical image compression, since the loss of any diagnostic information in the image may lead to serious consequences, such as misdiagnosis. The compression technique of the present invention therefore adopts a lossless scheme: a medical image compression algorithm based on human anatomical structure similarity, which processes a medical image dataset in a "divide and conquer" manner, segmenting the image into several regions and then compressing the different segmented regions separately with locally optimized predictors to achieve a high compression ratio. The method comprises the following steps:
a medical image compression algorithm based on similarity of human anatomy, comprising:
Step 1, acquire a CTC dataset. The data can be downloaded from a free medical image database at https://public.cancerimagingarchive.net/ncia/dataBasketDisplay.jsf; after selecting CT COLONOGRAPHY in the 'Collection(s)' panel of the web page and CT in the 'Image Module' panel, the CTC dataset can be downloaded.
Step 2, identifying a specific anatomical region on the CTC data set by using the density and the anatomical features, segmenting, and completing the preprocessing stage of the data set, wherein the preprocessing stage specifically comprises the following steps:
Step 2.1, identify the four regions outside the scan area, i.e., the regions with a constant density of -1024 HU in the four corners of the image. A seeded region growing algorithm is used, starting from the four corner points; the boundaries between these regions and the actual scan data are then represented by simply recording and storing the region parameters.
Step 2.2, classify the different anatomical regions: according to their density values, the CTC dataset is divided into 9 main categories, namely bone, soft tissue, internal air, PVE, adipose tissue, table, clothing, undefined, and air outside the patient's body.
Step 2.3, after extracting the whole body and the parts outside the scanned area, the remaining pixels in the image are assigned to one of the predefined categories based on their position and essential features. Histogram thresholding is performed on the image, and the image is roughly divided into different regions by means of thresholding. Density-based segmentation is then applied to these regions to avoid erroneous segmentation.
Step 2.4, after the various organs in the CTC dataset have been differentiated, the bone, bone PVE, colon PVE, body PVE, clothing, table and outside air regions are identified from their density characteristics and the previous segmentation results; these provide useful information to guide the subsequent segmentation process in extracting specific organs, using segmentation based on a combination of density and anatomical features. After the organ segmentation is complete, the remaining voxels within the body region are assigned to adipose tissue or lean tissue according to their density characteristics;
and 3, based on the segmentation result in the step 2, a series of predictors optimized for each specific anatomical region are generated, and then an adaptive prediction model composed of the optimized predictors is applied to the whole data set.
Step 3.1, a linear prediction model is selected: P = β0 + β1·x1 + β2·x2 + … + βN·xN, where P is the predicted value and β0, β1, …, βN are the coefficients of the prediction model; N is 58 for the soft tissue, air and adipose tissue regions and 96 for the bone region. The optimized coefficients are calculated by the least-mean-square principle, and the process is repeated for each segmented region.
Step 3.2, in the XZ plane, 8 discrete orientations are defined for the normal: 0 °, 45 °, 90 °, 135 °, 180 °, 225 °, 270 °, 315 °. If the actual angle is not equal to one of these 8 angles, it is quantized to the closest angle, and to avoid pixel scaling at 45 °, 135 °, 225 °, and 315 °, additional templates are developed for normals in these directions.
Step 3.3, after the normal is determined, a predicted value for each direction is generated based on the cube template. A template size of 5 pixels is used in the edge-based predictor to balance prediction performance and complexity. The adaptive model of the edge region is M = {A, P}, P = {P_T1, P_T2, P_T3, P_T4}, where A is a particular angle and P is the corresponding predictor. T1, T2, T3, T4 are the initial template positions, and the other templates are generated by rotation and translation.
Step 4, after decorrelation with the adaptive prediction model, the contour codes, predictor parameters and residual data are sent to an entropy coder. The final compressed file contains a header and a body: the header contains the contour codes, predictor parameters, voxel size, slice count and boundary parameters, and the body contains the residual data.
In the above medical image compression algorithm based on similarity of human anatomical structures, the step 2.3 specifically includes:
step 2.3.1, histogram threshold processing is performed on the CTC dataset, the histogram of the image is analyzed, valley points between consecutive main peaks are found, and then the image is segmented into different regions using intensity values corresponding to the valley points as thresholds, specifically including:
Step 2.3.3.1, the skin and subcutaneous tissue are extracted, using the eroded body contour to determine the location of the subcutaneous tissue region; the number of erosion steps is calculated as:

step size = (skin thickness + subcutaneous tissue thickness) / pixel size
Step 2.3.3.2, extract the lung regions. Using the positional characteristic that the lungs lie in the upper part of the body, the lungs in the top slice are extracted with a simple thresholding technique, and the full lung segmentation is then obtained with a simple top-down method based on this initial segmentation result.
Step 2.3.3.3, extract the liver region. The muscles around the liver are removed using the relative position of the liver in the body, skeleton and lung pixels within the liver region are removed with a thresholding technique, and the remaining non-liver elements are removed using anatomy- and density-based rules, completing the liver segmentation.
Step 2.3.3.4, extract the kidney regions. The kidneys are two small organs located on either side of the spine, below the thorax. According to the positional characteristics of the kidneys, the top of each kidney is identified by tracking the ribs in top-down serial slices, and the kidneys are then located in the subsequent slices. The two kidney regions are represented as:

left kidney: (x·cos60° − z·sin60°)² / (0.13h)² + (z·cos60° + x·sin60°)² / (0.2v)² ≤ 1

right kidney: (x·cos(−60°) − z·sin(−60°))² / (0.13h)² + (z·cos(−60°) + x·sin(−60°))² / (0.2v)² ≤ 1
where (x, z) are the coordinates of a pixel in the image, h and v are the lengths of the horizontal and vertical axes of the body region, and cos and sin denote the cosine and sine of the angle formed between a point in the kidney region and the spine.
Step 2.3.2, since the density ranges of some organs are quite close, histogram thresholding alone cannot segment these objects correctly; the specific organs in the CTC dataset are therefore detected automatically based on prior anatomical knowledge, using the relative position and shape of each organ in the body.
The segmentation method proposed in the present invention combines a traditional intensity-based segmentation algorithm with anatomical knowledge to segment specific organs in the abdominal CT dataset. A candidate region for each organ is first obtained from a priori anatomical knowledge of the current dataset, and the organ's data is then accurately extracted within that candidate region using a density-based approach. This method has several advantages. First, because it uses the relative position of each organ in the body, it can be applied to images of patients of different sizes. Second, the segmentation is performed progressively: candidate regions are defined roughly, and the target region is then refined with a density-based segmentation method, which improves segmentation accuracy. In summary, the proposed technique uses anatomical and density features to guide the segmentation process, can be used to segment individual organs in medical images, adapts to the anatomical variability between different patients, helps to reduce segmentation errors, and ultimately improves the subsequent compression operations.
The invention provides a prediction model for medical image compression, developed from two modeling approaches that handle inner and edge regions respectively. The first approach, designed for inner regions, identifies the best template to generate an efficient predictor, which achieves high prediction accuracy at relatively low computational cost. The second approach handles edge regions: to account for different edge orientations, the prediction model is rotated so that the input pattern presented to the edge predictor stays consistent, which allows the edge predictor to be optimized. The proposed prediction model contains both types of predictors and switches adaptively to the optimal one according to the characteristics of the region being compressed. This solves the problem that context-based adaptive prediction models cannot adapt to the local data characteristics of the compressed image; in addition, the model fully exploits the inter-slice relationship, making up for the inability of JPEG-based prediction models to use inter-frame correlation.
The final stage of the invention encodes the residual data with an entropy coder. The Prediction by Partial Matching (PPM) technique maximizes overall compression performance by assigning the current symbol a probability conditioned on its preceding context, so PPM is used as the entropy coder in the compression scheme.
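To make the context-conditioned probability idea behind PPM concrete, the sketch below estimates an order-1 conditional probability from symbol counts. This is only a minimal illustration of the principle, assuming integer residual symbols; a real PPM coder additionally blends several context orders with escape symbols and drives an arithmetic coder.

```python
from collections import defaultdict

def ppm_order1_probability(data, symbol, context):
    """Estimate P(symbol | context) from order-1 counts over `data`.

    A minimal sketch of the idea behind PPM: the probability assigned to
    the current symbol depends on the symbol that preceded it. A real PPM
    coder adds escape symbols and blends several context orders.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for prev, cur in zip(data, data[1:]):
        counts[prev][cur] += 1
    ctx = counts[context]
    total = sum(ctx.values())
    if total == 0:
        return 0.0
    return ctx[symbol] / total

# Hypothetical residual stream for illustration only.
residuals = [0, 1, 0, 0, 1, 0, 0, 0, 2, 0]
p = ppm_order1_probability(residuals, 0, 0)
```

A skewed residual distribution (many zeros after decorrelation) gives such context models high probabilities for the common symbols, which is what lets the entropy coder approach the residuals' conditional entropy.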
To illustrate the effectiveness of the overall compression scheme, including the data decorrelation and entropy coding stages, the compression results of the present invention are compared with those of various alternative techniques. The results show that the proposed method improves on JPEG2000 and 3D-JPEG2000 by an average of 12% and 6% respectively, and also outperforms the standard 3D-JPEG4 + PPMd method; even though the 3D-JPEG2000 and 3D-JPEG4 + PPM methods exploit the correlation between slices, they cannot compress border regions efficiently. These techniques perform poorly on medical images because medical images contain a large number of edges.
The compression algorithm proposed by the invention combines a novel edge-based prediction method to specially process PVE regions in medical image data sets, the predictor is very effective in reducing residual information quantity associated with edges, and the rest regions are decorrelated through a series of optimized predictors, and the result shows that better compression performance is realized.
Drawings
Fig. 1 is a block diagram of the compression scheme proposed by the present invention.
Fig. 2 is a flow chart of the segmentation process.
Fig. 3 is a template for region identification.
FIG. 4 is an illustration of four template categories.
FIG. 5 is a diagram of the encoding and decoding process of the full compression scheme.
Detailed Description
The technical scheme of the invention is further specifically described by the following embodiments and the accompanying drawings.
Example (b):
The validity of the proposed compression scheme is demonstrated in the present invention using CTC datasets; Fig. 1 is a schematic depiction of the complete compression scheme for CTC data.
Step 1, acquiring a CTC data set.
Step 2, specific anatomical regions on the CTC dataset are identified using density and anatomical features and segmented, completing the preprocessing stage of the dataset.
Step 2.1, the whole body is extracted from the CTC dataset and its profile recorded with a series of chain codes (one per slice in the dataset). The body contour is obtained with a Roberts edge detector and represented with 4-connected chain codes. When encoding a body contour, a point on the boundary is selected and its coordinates are stored; the encoder then follows the boundary sequentially, tracking the direction from one boundary pixel to the next, and stores the symbols representing the direction of motion as a chain code. The chain code is itself compressed with a lossless coding technique to further reduce the size of the contour file.
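A 4-connected chain code of the kind described above can be sketched as follows. The mapping of direction symbols (0 = right, 1 = down, 2 = left, 3 = up) is an assumption for illustration; the patent only specifies that 4-connected codes are used.

```python
# Direction symbols for a 4-connected chain code (assumed mapping:
# 0 = right, 1 = down, 2 = left, 3 = up), keyed by (d_row, d_col).
MOVES = {(0, 1): 0, (1, 0): 1, (0, -1): 2, (-1, 0): 3}

def chain_code(boundary):
    """Encode a sequence of 4-connected boundary pixels (row, col):
    store the start point plus one direction symbol per move."""
    start = boundary[0]
    code = [MOVES[(r1 - r0, c1 - c0)]
            for (r0, c0), (r1, c1) in zip(boundary, boundary[1:])]
    return start, code

# Encode a tiny closed contour around a single pixel.
start, code = chain_code([(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)])
```

Only the start coordinates and the symbol stream need to be stored per slice, which is why the contour file stays small even before its own lossless compression.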
Step 2.2, the four regions outside the scan area are identified, i.e., the regions with a constant density of -1024 HU (Hounsfield units) in the four corners of the image. This can be implemented with a seeded region growing algorithm starting from the four corner points; the boundaries between these regions and the actual scan data are then represented by simply recording the region parameters, which can be stored very compactly.
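The corner-seeded region growing step can be sketched as a flood fill over pixels with the constant out-of-scan density. This is a minimal illustration under the assumption of 4-connectivity and an exact -1024 HU match; the function name and interface are not from the patent.

```python
import numpy as np
from collections import deque

def grow_corner_regions(img, value=-1024):
    """Seeded region growing from the four corner pixels: mark every
    4-connected pixel whose density equals `value`. Returns a boolean
    mask of the regions outside the scan area."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    seeds = [(0, 0), (0, w - 1), (h - 1, 0), (h - 1, w - 1)]
    q = deque(s for s in seeds if img[s] == value)
    for s in q:
        mask[s] = True
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc] \
                    and img[nr, nc] == value:
                mask[nr, nc] = True
                q.append((nr, nc))
    return mask
```

After growing, only the boundary parameters of the resulting mask need to be kept, matching the patent's point that these regions can be represented very compactly.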
After extracting the whole body and the parts outside the scanned area, the remaining pixels in the image can be assigned to one of the predefined classes (e.g. air, soft tissue, bone and PVE voxels) based on their location and essential features, step 2.3.
Step 2.3.1, the CTC data set is subdivided into 9 main categories, bone, soft tissue, internal air, PVE area, adipose tissue, table, clothing, air outside the patient and undefined area, respectively.
Step 2.3.2, histogram thresholding is performed on the image: the histogram is analysed, valley points between consecutive main peaks are found, and the image is segmented into different regions using the intensity values of the valley points as thresholds.
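The valley-point thresholding just described can be sketched as follows, assuming the histogram has already been smoothed so that every strict local minimum between peaks is a usable valley; the helper names are illustrative, not from the patent.

```python
import numpy as np

def valley_thresholds(hist):
    """Find valley points of a (smoothed) histogram: strict local
    minima lying between consecutive peaks. The intensity indices of
    these valleys serve as segmentation thresholds."""
    return [i for i in range(1, len(hist) - 1)
            if hist[i] < hist[i - 1] and hist[i] < hist[i + 1]]

def segment_by_valleys(img, valleys):
    """Label each pixel by the histogram interval its intensity falls
    into, i.e. threshold the image at every valley point."""
    return np.digitize(img, valleys)
```

In practice the histogram of CT data would be smoothed first, otherwise noise produces spurious valleys; that preprocessing is omitted here for brevity.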
Step 2.3.3, for regions that cannot be correctly segmented with histogram thresholding, a density-based segmentation method is applied in these candidate regions, combined with a priori knowledge of the anatomy, to avoid incorrect segmentation.
Step 2.3.3.1, the skin and subcutaneous tissue are extracted, using the eroded body contour to determine the location of the subcutaneous tissue region; the number of erosion steps can be calculated as:

step size = (skin thickness + subcutaneous tissue thickness) / pixel size
Step 2.3.3.2, extract the lung regions. Using the positional characteristic that the lungs lie in the upper part of the body, the lungs in the top slice are extracted with a simple thresholding technique, and the full lung segmentation is then obtained with a simple top-down method based on this initial segmentation result.
Step 2.3.3.3, extract the liver region. The muscles around the liver are removed using the relative position of the liver in the body, skeleton and lung pixels within the liver region are removed with a thresholding technique, and the remaining non-liver elements are removed using anatomy- and density-based rules, completing the liver segmentation.
Step 2.3.3.4, extract the kidney regions. The kidneys are two small organs located on either side of the spine, below the thorax. According to the positional characteristics of the kidneys, the top of each kidney is identified by tracking the ribs in top-down serial slices, and the kidneys are then located in the subsequent slices. The two kidney regions can be represented as:

left kidney: (x·cos60° − z·sin60°)² / (0.13h)² + (z·cos60° + x·sin60°)² / (0.2v)² ≤ 1

right kidney: (x·cos(−60°) − z·sin(−60°))² / (0.13h)² + (z·cos(−60°) + x·sin(−60°))² / (0.2v)² ≤ 1
where (x, z) are the coordinates of a pixel in the image, h and v are the lengths of the horizontal and vertical axes of the body region, and cos and sin denote the cosine and sine of the angle formed between a point in the kidney region and the spine.
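The two kidney inequalities above describe ellipses rotated by ±60°, which can be sketched as a membership test. This assumes (x, z) are already expressed relative to the candidate ellipse's centre, an assumption made for illustration since the patent does not state the coordinate origin.

```python
import math

def in_kidney(x, z, h, v, sign=1):
    """Test whether ellipse-centred coordinates (x, z) fall inside the
    elliptical kidney candidate region. h and v are the horizontal and
    vertical extents of the body region; sign=+1 gives the left kidney
    (rotated +60 deg), sign=-1 the right kidney (rotated -60 deg)."""
    a = math.radians(sign * 60)
    u = x * math.cos(a) - z * math.sin(a)   # rotated major-axis coord
    w = z * math.cos(a) + x * math.sin(a)   # rotated minor-axis coord
    return (u * u) / (0.13 * h) ** 2 + (w * w) / (0.2 * v) ** 2 <= 1
```

Density-based segmentation is then applied only inside the pixels for which this test holds, which is the "rough candidate region, then refine" strategy described in step 2.3.2.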
Step 2.4, after the body region and the regions outside the scan area have been extracted, the segmentation proceeds as shown in Fig. 2: the bone, bone PVE, colon PVE, body PVE, clothing, table and outside air regions are identified from their density characteristics and the previous segmentation results, which provide useful information to guide the subsequent segmentation process in extracting specific organs, using segmentation based on a combination of density and anatomical features. After the organ segmentation is complete, the remaining voxels within the body region are assigned to adipose tissue or lean tissue according to their density characteristics.
Step 2.4.1, automatic identification is done by examining two known neighbouring pixels of the current pixel, as follows:

Step 2.4.1.1, check whether the values of PN and PW are both within the bone density range:

If PN ∈ bone region and PW ∈ bone region, then X ∈ bone region.

Step 2.4.1.2, if the previous test fails, check whether PN and PW are within the air density range; if so, use medical prior knowledge and the relative positions of the internal organs to decide whether X lies in the lung region; if it does, X is assigned to the lung region, otherwise to the colon region.

If PN ∈ air region and PW ∈ air region:
If LX ∈ lung region, then X ∈ lung region;
otherwise X ∈ colon region.

Step 2.4.1.3, if PN and PW do not satisfy the above assumptions, check whether they lie in the soft tissue region; if so, assign X according to its position.

If PN ∈ soft tissue region and PW ∈ soft tissue region:
If LX ∈ liver region, then X ∈ liver region;
If LX ∈ left kidney region, then X ∈ left kidney region;
If LX ∈ right kidney region, then X ∈ right kidney region;
If LX ∈ spleen region, then X ∈ spleen region;
otherwise X ∈ lean tissue.

Step 2.4.1.4, if PN and PW do not satisfy the above assumptions, check whether they belong to a bone partial-volume-effect region; if so, assign X according to its position.

If PN ∈ bone PVE region and PW ∈ bone PVE region:
If LX is close to the bone region, then X ∈ bone PVE region;
If LX is close to the colon region, then X ∈ colon PVE region;
If LX is close to the lung region, then X ∈ lung PVE region;
otherwise X ∈ adipose tissue region.

Step 2.4.1.5, if PN and PW do not satisfy the above assumptions, X lies in an edge area; assign it according to its position.

If LX ∈ body region, then X ∈ body PVE region;
If LX ∈ subcutaneous tissue region, then X ∈ subcutaneous tissue region.

Step 2.4.1.6, finally, classify the area outside the body. Since the density differences between external objects are large, X can be assigned from the values of PN and PW alone, without considering its position.

If PN ∈ table region and PW ∈ table region, then X ∈ table region;
If PN ∈ clothing region and PW ∈ clothing region, then X ∈ clothing region;
otherwise X ∈ external air region.

Here PN is the pixel value of the pixel above the current pixel X, PW is the pixel value of the pixel to its left, and LX is the position of the current pixel. The relative positions of PN, PW and X are shown in FIG. 3.
Based on the segmentation results in step 2, a series of predictors optimized for each specific anatomical region is then generated, step 3. The adaptive predictive model consisting of these optimized predictors is then applied to the entire data set.
Step 3.1, for the internal prediction model, a linear prediction model is selected: P = β0 + β1·x1 + β2·x2 + … + βN·xN, where P is the predicted value and β0, β1, …, βN are the coefficients of the prediction model; N is 58 for the soft tissue, air and adipose tissue regions and 96 for the bone region. The optimized coefficients are calculated by the least-mean-square principle, and the process is repeated for each segmented region. The adaptive model finally combines all these sub-predictors: M = {R, P}, P = {P_bone; P_liver; P_spleen; P_lean_tissue; P_air_lung; P_air_colon; P_subcutaneous_tissue; P_adipose_tissue; P_table; P_clothing; P_external_air}, where R is the particular region and P is the corresponding predictor.
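Optimizing the coefficients of one such region predictor by the least-squares principle can be sketched as follows; the training matrix layout (one row of causal neighbour values per pixel) is an assumption for illustration.

```python
import numpy as np

def fit_predictor(X, y):
    """Fit linear prediction coefficients beta by least squares, so that
    P = beta0 + beta1*x1 + ... + betaN*xN minimises the mean squared
    prediction error over the training pixels of one segmented region.
    X has one row per pixel (its N causal neighbours), y the true values."""
    A = np.hstack([np.ones((X.shape[0], 1)), X])  # prepend intercept column
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

def predict(beta, neighbours):
    """Apply the fitted model to one pixel's neighbour vector."""
    return beta[0] + neighbours @ beta[1:]

# Tiny synthetic region: values follow y = 5 + 2*x1 + 3*x2 exactly.
X = np.array([[1., 2.], [2., 3.], [3., 4.], [4., 6.]])
y = np.array([13., 18., 23., 31.])
beta = fit_predictor(X, y)
```

Repeating this fit per segmented region yields the sub-predictor set P_bone, P_liver, …, with only the small coefficient vectors stored in the compressed file's header.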
Step 3.2, for the edge prediction model, the normal direction is estimated first, and the template is rotated according to the edge normal so that it stays aligned with the edge region. Derivatives are used to identify edge regions and determine the normal direction and magnitude at the current point: with the first derivative, local minima and maxima indicate the presence of an edge. Let y = f(x) be the density distribution; the derivative at point x can be approximated by the finite difference f′(x) ≈ f(x + 1) − f(x), and the corresponding two-pixel edge detection mask is [−1, 1].
Step 3.3, using the discrete orientations of the normal, in the XZ plane, 8 discrete orientations are defined for the normal: 0 °, 45 °, 90 °, 135 °, 180 °, 225 °, 270 °, 315 °. If the actual angle is not equal to one of these 8 angles, it is quantized to the closest angle, and to avoid pixel scaling at 45 °, 135 °, 225 °, and 315 °, additional templates are developed for normals in these directions.
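Quantizing a continuous normal angle onto the 8 discrete orientations is a simple nearest-multiple-of-45° rounding, sketched below.

```python
def quantize_normal(angle_deg):
    """Quantize an edge-normal angle (degrees) to the nearest of the 8
    discrete orientations 0, 45, 90, ..., 315 used by the edge predictor."""
    return int(round((angle_deg % 360) / 45.0)) % 8 * 45
```

Each quantized orientation then selects one of the predictor templates; the extra templates mentioned for 45°, 135°, 225° and 315° exist because the diagonal neighbours lie √2 pixels away, which would otherwise require rescaling.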
Step 3.3.1, two template classes are defined in the XZ plane according to the normal direction: one along the axis and the other at an angle of 45° to the axis. The other templates are defined in a similar manner; all template category details are shown in FIG. 4.
Step 3.4, after the normal is determined, a predicted value for each direction is generated based on the cube template. A template size of 5 pixels is used in the edge-based predictor to balance prediction performance and complexity. The adaptive model of the edge region is M = {A, P}, P = {P_T1, P_T2, P_T3, P_T4}, where A is a particular angle and P is the corresponding predictor. T1, T2, T3, T4 are the initial template positions, and the other templates are generated by rotation and translation.
And 4, after decorrelation using the adaptive prediction model, the contour code, the predictor parameters and the residual data are sent to an entropy coder. The final compressed file contains a header and a body: the header holds the contour code, predictor parameters, voxel size, slice number and boundary parameters, and the body holds the residual data.
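The header/body container layout might be sketched as follows; the length-prefixed JSON header is purely an assumption for illustration, since the patent does not specify a serialization format:

```python
import json
import struct

def pack_compressed(header: dict, residuals: bytes) -> bytes:
    """Container: 4-byte little-endian header length, JSON header, then body."""
    head = json.dumps(header).encode("utf-8")
    return struct.pack("<I", len(head)) + head + residuals

def unpack_compressed(blob: bytes):
    (n,) = struct.unpack_from("<I", blob)
    return json.loads(blob[4:4 + n]), blob[4 + n:]

# Toy header with the kinds of fields the text names.
hdr = {"voxel_size": [0.7, 0.7, 1.0], "slices": 400,
       "predictors": {"bone": [0.1, 0.2, 0.3]}}
blob = pack_compressed(hdr, b"\x01\x02\x03")
restored, body = unpack_compressed(blob)
assert restored == hdr and body == b"\x01\x02\x03"
```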
The new compression scheme proposed by the present invention uses the a priori knowledge of the anatomical information to improve the compression performance of the medical image, and consists of an anatomy-based segmentation process, an adaptive prediction model and an entropy coder, and the complete compression scheme at the encoding and decoding stages is shown in fig. 5.
During the encoding stage, the original data set is first partitioned into different anatomical regions, and an optimized predictor is then generated for each region. The adaptive prediction (AAP) model, composed of this series of optimized predictors, decorrelates the data, and the residual data are then sent for entropy encoding. The decoding process is simply the reverse of the encoding process; since the coefficients of each predictor are already stored in the header of the compressed data, the prediction model can be regenerated without significant computational cost. Lossless compression is characterized by a reversible process in which the decompressed data are numerically identical to the original data.
Depending on the type of region with which the current pixel is associated, the decoder switches to the corresponding predictor to generate a prediction value, which is added to the stored prediction error to reconstruct the original data.
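That decoder-side switch can be illustrated with a toy predictor table (the region name and coefficients are made up for the example):

```python
def decode_pixel(region, predictors, neighbours, residual):
    """Pick the region's linear predictor, predict, then add the stored error."""
    beta = predictors[region]
    prediction = beta[0] + sum(b * x for b, x in zip(beta[1:], neighbours))
    return prediction + residual

predictors = {"bone": [5.0, 0.5, 0.5]}          # toy coefficients
# prediction = 5 + 0.5*100 + 0.5*102 = 106; residual 4 restores 110 exactly
assert decode_pixel("bone", predictors, [100, 102], 4) == 110.0
```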
The specific embodiments described herein are merely illustrative of the invention. Various modifications or additions may be made to the described embodiments or alternatives may be similarly employed by those skilled in the art, for example, using other predictive models for decorrelation operations, without departing from the spirit of the invention or exceeding the scope of the invention as defined in the appended claims.
Claims (1)
1. A medical image compression method based on human anatomy similarity, comprising:
step 1, acquiring a CTC data set;
step 2, identifying a specific anatomical region on the CTC data set by using the density and the anatomical features, segmenting, and completing the preprocessing stage of the data set, wherein the preprocessing stage specifically comprises the following steps:
step 2.1, identifying the four areas outside the scanning region, namely the regions of constant density (-1024 HU) at the four corners of the image; starting from the four corner points, a seed region-growing algorithm is used, and the boundary between these regions and the actual scan data is then represented simply by recording and storing the parameters of the regions;
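The corner-seeded region growing of step 2.1 amounts to a flood fill over the constant -1024 HU background. A minimal 4-connected sketch (the grid and helper are illustrative, not the patent's implementation):

```python
from collections import deque

def region_grow(img, seed, value):
    """4-connected region growing over pixels equal to `value`."""
    h, w = len(img), len(img[0])
    region, queue = set(), deque([seed])
    while queue:
        y, x = queue.popleft()
        if (y, x) in region or not (0 <= y < h and 0 <= x < w):
            continue
        if img[y][x] != value:
            continue
        region.add((y, x))
        queue.extend([(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)])
    return region

img = [[-1024, -1024,    0],
       [-1024,     0,    0],
       [    0,     0,    0]]
assert region_grow(img, (0, 0), -1024) == {(0, 0), (0, 1), (1, 0)}
```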
step 2.2, classifying the different anatomical regions: according to their density values, the CTC data set is divided into 9 main categories, namely the bone region, soft-tissue region, air region, PVE region, adipose-tissue region, table region, undefined region and the air region outside the patient's body;
step 2.3, after extracting the whole body and the parts outside the scanned area, assigning the remaining pixels in the image to one of the predefined categories based on their position and essential features; histogram thresholding is performed on the image, roughly dividing it into different regions by threshold values, and density-based segmentation is then applied within these regions to avoid erroneous segmentation;
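The histogram-thresholding idea (finding valley points between consecutive main peaks, as detailed in step 2.3.1) can be sketched as a simple local-minimum search; a real CT histogram would first be smoothed, which this toy version omits:

```python
def valley_thresholds(hist):
    """Indices of strict local minima, usable as segmentation thresholds."""
    return [i for i in range(1, len(hist) - 1)
            if hist[i] < hist[i - 1] and hist[i] < hist[i + 1]]

hist = [0, 5, 9, 4, 1, 6, 10, 3]   # two main peaks, at bins 2 and 6
assert valley_thresholds(hist) == [4]
```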
step 2.4, after the various organs in the CTC data set have been differentiated, using features based on their density and the previous segmentation results to identify the bone, bone-PVE, colon-PVE, body-PVE, clothing, table and outside-air regions; these regions provide useful information to guide the subsequent segmentation process and to extract specific organs, using segmentation based on a combination of density and anatomical features; after organ segmentation is completed, the remaining voxels within the body region are assigned to adipose tissue or lean tissue based on their density characteristics;
step 3, based on the segmentation result in the step 2, then generating a series of predictors optimized for each specific anatomical region, and then applying an adaptive prediction model composed of the optimized predictors to the whole data set;
step 3.1, a linear prediction model is selected, P = β0 + β1x1 + β2x2 + … + βNxN, where P is the predicted value, β0, β1, …, βN are the coefficients of the prediction model, and N is 58 for the soft-tissue, air and adipose-tissue regions and 96 for the bone region; the optimized coefficients for each pixel can be calculated by the least-mean-square principle, and the process is repeated on each segmentation region;
step 3.2, in the XZ plane, 8 discrete orientations are defined for the normal: 0 °, 45 °, 90 °, 135 °, 180 °, 225 °, 270 °, 315 °; if the actual angle is not equal to one of these 8 angles, it is quantized to the closest angle, and to avoid pixel scaling at 45 °, 135 °, 225 °, and 315 °, additional templates are developed for normals in these directions;
step 3.3, after the normal is determined, a predicted value for each direction is generated based on the cube template; a template size of 5 pixels is used in the edge-based predictor to balance prediction accuracy against complexity; the adaptive model of the edge region is M = {A, P}, P = {P_T1, P_T2, P_T3, P_T4}, where A is a particular angle and P is the corresponding predictor; T1, T2, T3 and T4 are the initial template positions, and the other templates are generated by rotation and translation;
step 4, after decorrelation is carried out by using a self-adaptive prediction model, the contour code, the predictor parameter and the residual data are sent to an entropy coder; the final compressed file comprises a header part and a body part; the head of the file comprises contour codes, predictor parameters, voxel size, slice number and boundary parameters, and the body of the file comprises residual error data;
the step 2.3 specifically comprises:
step 2.3.1, histogram threshold processing is performed on the CTC dataset, the histogram of the image is analyzed, valley points between consecutive main peaks are found, and then the image is segmented into different regions using intensity values corresponding to the valley points as thresholds, specifically including:
step 2.3.3.1, skin and subcutaneous tissue are extracted, and the location of the subcutaneous-tissue area is determined using the eroded body contour, the number of erosion steps being calculated as:
step size = (skin thickness + subcutaneous-tissue thickness) / pixel size
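A quick arithmetic sketch of that step-size formula, reading it as the combined skin and subcutaneous thickness divided by the pixel size (the rounding to whole erosion passes is an assumption):

```python
def erosion_steps(skin_mm, subcutaneous_mm, pixel_mm):
    """Whole-pixel erosion passes covering the skin + subcutaneous layer."""
    return round((skin_mm + subcutaneous_mm) / pixel_mm)

# e.g. 2 mm skin plus 8 mm subcutaneous fat on a 0.78 mm pixel grid
assert erosion_steps(2.0, 8.0, 0.78) == 13
```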
Step 2.3.3.2, extracting lung regions, locating the lungs in the upper part of the body by using the position characteristics of the lungs in the body, extracting the lungs in the top slice by using a simple threshold technology, and realizing the segmentation of the lungs by using a simple top-down segmentation method based on the initial segmentation result;
step 2.3.3.3, the liver region is extracted: muscles around the liver are removed using the relative position of the liver in the body, skeleton and lung pixels within the liver region are removed using a thresholding technique, and non-liver elements are removed using anatomy- and density-based rules, thereby achieving segmentation of the liver region;
at step 2.3.3.4, the kidney regions are extracted; the kidneys are two small organs located on either side of the spine, beneath the thorax; the top of each kidney is identified by tracking the ribs in top-down serial slices according to the positional characteristics of the kidneys, and the kidneys in the subsequent slices are then located; the two kidney regions are represented as:
left kidney: ((x·cos 60° − z·sin 60°) / (0.13h))² + ((z·cos 60° + x·sin 60°) / (0.2v))² ≤ 1
right kidney: ((x·cos(−60°) − z·sin(−60°)) / (0.13h))² + ((z·cos(−60°) + x·sin(−60°)) / (0.2v))² ≤ 1
wherein (x, z) are the coordinates of a pixel point in the image, h and v are the lengths of the horizontal and vertical axes of the body region, and cos and sin denote the cosine and sine of the angle formed between a point in the kidney region and the spine;
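The two kidney inequalities describe ellipses rotated by ±60° in the slice plane; a membership test might be sketched as follows (coordinates are assumed to be relative to the ellipse centre, which the claim does not spell out):

```python
import math

def in_kidney(x, z, h, v, angle_deg):
    """Rotated-ellipse test: +60 deg for the left kidney, -60 for the right."""
    c = math.cos(math.radians(angle_deg))
    s = math.sin(math.radians(angle_deg))
    u = x * c - z * s                 # coordinates in the rotated frame
    w = z * c + x * s
    return (u / (0.13 * h)) ** 2 + (w / (0.2 * v)) ** 2 <= 1

assert in_kidney(0, 0, 100, 100, 60)        # centre lies inside
assert in_kidney(0, 0, 100, 100, -60)
assert not in_kidney(90, 90, 100, 100, 60)  # distant point lies outside
```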
step 2.3.2, the density ranges of some organs are relatively close, which means that histogram thresholding alone cannot segment these objects correctly; specific organs in the CTC data set are therefore detected automatically on the basis of prior anatomical knowledge, using the relative position and shape of each organ in the body.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710267790.3A CN107146222B (en) | 2017-04-21 | 2017-04-21 | Medical image compression method based on human anatomy structure similarity |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710267790.3A CN107146222B (en) | 2017-04-21 | 2017-04-21 | Medical image compression method based on human anatomy structure similarity |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107146222A CN107146222A (en) | 2017-09-08 |
CN107146222B true CN107146222B (en) | 2020-03-10 |
Family
ID=59774988
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710267790.3A Active CN107146222B (en) | 2017-04-21 | 2017-04-21 | Medical image compression method based on human anatomy structure similarity |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107146222B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2567636B (en) * | 2017-10-17 | 2021-11-10 | Perspectum Diagnostics Ltd | Method and apparatus for imaging an organ |
US11182920B2 (en) * | 2018-04-26 | 2021-11-23 | Jerry NAM | Automated determination of muscle mass from images |
CN109584233A (en) * | 2018-11-29 | 2019-04-05 | 广西大学 | Three-dimensional image segmentation method based on subjective threshold value and three-dimensional label technology |
CN109949309B (en) * | 2019-03-18 | 2022-02-11 | 安徽紫薇帝星数字科技有限公司 | Liver CT image segmentation method based on deep learning |
CN110555853B (en) * | 2019-08-07 | 2022-07-19 | 杭州深睿博联科技有限公司 | Method and device for segmentation algorithm evaluation based on anatomical priors |
CN111694491A (en) * | 2020-05-26 | 2020-09-22 | 珠海九松科技有限公司 | Method and system for automatically selecting and zooming specific area of medical material by AI (artificial intelligence) |
CN112419330B (en) * | 2020-10-16 | 2024-05-24 | 北京工业大学 | Temporal bone key anatomical structure automatic positioning method based on space relative position priori |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1430185A (en) * | 2001-12-29 | 2003-07-16 | 田捷 | Ultralarge scale medical image surface reconstruction method based on single-layer surface tracking |
WO2009109971A2 (en) * | 2008-03-04 | 2009-09-11 | Innovea Medical Ltd. | Segmentation device and method |
WO2012139205A1 (en) * | 2011-04-13 | 2012-10-18 | Hamid Reza Tizhoosh | Method and system for binary and quasi-binary atlas-based auto-contouring of volume sets in medical images |
CN104134210A (en) * | 2014-07-22 | 2014-11-05 | 兰州交通大学 | 2D-3D medical image parallel registration method based on combination similarity measure |
CN104270638A (en) * | 2014-07-29 | 2015-01-07 | 武汉飞脉科技有限责任公司 | Compression and quality evaluation method for region of interest (ROI) of CT (Computed Tomography) image |
CN104933288A (en) * | 2014-03-18 | 2015-09-23 | 三星电子株式会社 | Apparatus and method for visualizing anatomical elements in a medical image |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1430185A (en) * | 2001-12-29 | 2003-07-16 | 田捷 | Ultralarge scale medical image surface reconstruction method based on single-layer surface tracking |
WO2009109971A2 (en) * | 2008-03-04 | 2009-09-11 | Innovea Medical Ltd. | Segmentation device and method |
WO2012139205A1 (en) * | 2011-04-13 | 2012-10-18 | Hamid Reza Tizhoosh | Method and system for binary and quasi-binary atlas-based auto-contouring of volume sets in medical images |
CN104933288A (en) * | 2014-03-18 | 2015-09-23 | 三星电子株式会社 | Apparatus and method for visualizing anatomical elements in a medical image |
CN104134210A (en) * | 2014-07-22 | 2014-11-05 | 兰州交通大学 | 2D-3D medical image parallel registration method based on combination similarity measure |
CN104270638A (en) * | 2014-07-29 | 2015-01-07 | 武汉飞脉科技有限责任公司 | Compression and quality evaluation method for region of interest (ROI) of CT (Computed Tomography) image |
Non-Patent Citations (2)
Title |
---|
Qiusha Min et al.; "An Edge-based Prediction Approach for Medical Image Compression"; 2012 IEEE EMBS International Conference on Biomedical Engineering and Sciences; 2012; pp. 717-722. *
Qiusha Min et al.; "Medical Image Compression Using Region-based"; 2012 IEEE EMBS International Conference on Biomedical Engineering and Sciences; 2012; pp. 677-682. *
Also Published As
Publication number | Publication date |
---|---|
CN107146222A (en) | 2017-09-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107146222B (en) | Medical image compression method based on human anatomy structure similarity | |
CN109934235B (en) | Unsupervised abdominal CT sequence image multi-organ simultaneous automatic segmentation method | |
CN113808146B (en) | Multi-organ segmentation method and system for medical image | |
CN109389585B (en) | Brain tissue extraction method based on full convolution neural network | |
JP5279245B2 (en) | Method and apparatus for detection using cluster change graph cut | |
CN111428709A (en) | Image processing method, image processing device, computer equipment and storage medium | |
Jaffar et al. | Fuzzy entropy based optimization of clusters for the segmentation of lungs in CT scanned images | |
CN111242956A (en) | U-Net-based ultrasonic fetal heart and fetal lung deep learning joint segmentation method | |
Bandyopadhyay | Pre-processing of mammogram images | |
CN110008992B (en) | Deep learning method for prostate cancer auxiliary diagnosis | |
CN110689525A (en) | Method and device for recognizing lymph nodes based on neural network | |
CN112634265B (en) | Method and system for constructing and segmenting fully-automatic pancreas segmentation model based on DNN (deep neural network) | |
CN111145185B (en) | Lung substance segmentation method for extracting CT image based on clustering key frame | |
CN114170244A (en) | Brain glioma segmentation method based on cascade neural network structure | |
Danciu et al. | 3D DCT supervised segmentation applied on liver volumes | |
Shao et al. | Application of an improved u2-net model in ultrasound median neural image segmentation | |
CN118351300A (en) | Automatic crisis organ sketching method and system based on U-Net model | |
Kekre et al. | Detection of tumor in MRI using vector quantization segmentation | |
Astaraki et al. | Autopaint: A self-inpainting method for unsupervised anomaly detection | |
Chithra et al. | Otsu's Adaptive Thresholding Based Segmentation for Detection of Lung Nodules in CT Image | |
CN117710317A (en) | Training method and detection method of detection model | |
CN113160208A (en) | Liver lesion image segmentation method based on cascade hybrid network | |
CN114612478B (en) | Female pelvic cavity MRI automatic sketching system based on deep learning | |
Kalinin et al. | A classification approach for anatomical regions segmentation | |
Georgieva et al. | Multistage Approach for Simple Kidney Cysts Segmentation in CT Images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||