CN106780492B - Method for extracting key frame of CT pelvic image - Google Patents

Method for extracting key frame of CT pelvic image Download PDF

Info

Publication number
CN106780492B
CN106780492B (application CN201710050599.3A)
Authority
CN
China
Prior art keywords
image
key frame
sequence
frames
candidate key
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710050599.3A
Other languages
Chinese (zh)
Other versions
CN106780492A (en)
Inventor
余辉
王海均
孙敬来
张力新
时尧
安家宝
曹玉珍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN201710050599.3A priority Critical patent/CN106780492B/en
Publication of CN106780492A publication Critical patent/CN106780492A/en
Application granted granted Critical
Publication of CN106780492B publication Critical patent/CN106780492B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/77Retouching; Inpainting; Scratch removal
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20024Filtering details
    • G06T2207/20032Median filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20132Image cropping
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30008Bone

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention relates to a method for extracting key frames from a CT pelvic image sequence, which comprises the following steps: preprocessing the CT images; obtaining a region-of-interest mask; convolving the mask with the initial image to obtain an intermediate image; arranging the intermediate images in spatial order, using the pixel difference between two adjacent CT frames as the feature distance, treating adjacent frames whose feature distance is smaller than a specified threshold as similar frames, and preliminarily screening the CT sequence to obtain a candidate key frame sequence; and finely screening the candidate key frame sequence.

Description

Method for extracting key frame of CT pelvic image
Technical Field
The invention relates to the field of medical image segmentation, and in particular to a method for segmenting bone regions such as the pelvis, sacrum, hip bone and acetabulum in pelvic CT images.
Background
Medical image segmentation is the basis of medical image analysis and processing, and its accuracy directly influences the physician's assessment of the condition and the choice of surgical plan. Because CT images have high resolution and clearly display anatomical structures and lesioned tissue regions, CT has been widely used in the diagnosis of many diseases. Pelvic fracture is an important cause of morbidity and mortality; for displaced fractures, accurately and quickly determining the fracture extent, the degree of comminution and the extent of soft-tissue damage provides a reference for choosing a treatment and for the prognosis after healing. In addition, for pelvic deformities of congenital, acquired, poliomyelitic or hereditary origin, functional reconstruction and deformity correction depend on accurate early diagnosis and timely treatment; CT is therefore of great importance for describing the pelvic anatomy before and after surgery, for formulating the surgical plan and for evaluating the outcome once the operation is completed.
Among bone CT image segmentation methods, the most common are those based on gray-scale information, of which thresholding is typical; however, non-uniform bone density, the narrow junction between the femoral head and the acetabulum, and weak edges caused by pathological changes make it difficult to select a suitable threshold in practice. Classification and clustering methods from machine vision are also used for segmentation; they are robust to noise, but the result depends on the number and variety of training samples, and the large individual differences among patients limit such algorithms. Much current research focuses on statistical shape model segmentation, such as the Snake model, the GVF (Gradient Vector Flow) Snake model [1] and level-set based models, concentrating on the automatic selection of landmark points, model construction with small training sets, model refinement and combination with other methods. These approaches require considerable manual work to supply prior information before segmentation, and the final result depends on the accuracy and completeness of that prior information; because individual differences among patients are large, improving the segmentation requires adding still more prior information, so the early-stage workload is heavy while the result remains unguaranteed, which makes these methods unsuitable for direct clinical use. The GVF model [2] resolves two problems that are difficult for the traditional Snake model: 1. high sensitivity to the initial contour; 2. failure to converge into concave parts of the object boundary. Image segmentation based on the GVF model has been studied extensively, but the model remains sensitive to the initial contour and computationally slow. If a more accurate initial contour can be obtained before the GVF model is applied to bone CT segmentation, the range of curve deformation is greatly reduced and fewer iterations are needed, which significantly speeds up the computation and improves segmentation accuracy.
In current hospital practice, the pelvic region is marked manually to plan surgery, and a single patient's CT sequence contains many images; manually annotating one CT image clinically takes roughly 15 minutes, so manual segmentation of the pelvic region is time-consuming and burdensome. Because the slice spacing in a serial CT scan is small, the bone morphology changes little between two adjacent frames and their similarity is high; key frame extraction therefore matters greatly for reducing the processing time of the segmentation algorithm.
Reference documents:
[1] Wu Bingrong, Xie Mei, Li Guo, Gao Jingjing. Medical Image Segmentation Based on GVF Snake Model. Intelligent Computation Technology and Automation, 2009: 637-640.
[2] Chen, L., et al. Segmentation of the Pelvic Bone Using a Generalized Gradient Vector Convolution Field Snake Model. Journal of Medical Imaging and Health Informatics, 2015, 5(7): 1482-1487.
Disclosure of Invention
The invention aims to provide a key frame extraction method for use in sequential CT pelvic image segmentation, so as to reduce the number of images that must be segmented. The invention exploits the spatial and temporal continuity of bone morphology across the CT sequence to extract key frames from the CT slices. The technical scheme is as follows:
a method for extracting key frames of a CT pelvic image comprises the following steps:
Step 1: CT image preprocessing
Perform windowing, denoising, removal of artifacts and of non-body areas such as the CT table, and image cropping on the CT images, so that after preprocessing all images in the CT sequence have the same size and the relative position of the body area is unchanged. For convenience of description, the cropped image is referred to as the initial image.
Step 2: obtaining candidate key frame sequence by preliminary screening
Apply mean filtering, speckle removal and morphological processing to the initial image to obtain the approximate region of bone distribution, take this region as the region of interest and obtain a region-of-interest mask; convolve the mask with the initial image to obtain an intermediate image; arrange the intermediate images in spatial order, use the pixel difference between two adjacent CT frames as the feature distance, treat adjacent frames whose feature distance is smaller than a specified threshold as similar frames, and preliminarily screen the CT sequence to obtain a candidate key frame sequence.
Step 3: Fine screening of the candidate key frame sequence
Compute, for the candidate key frame sequence, the number of regions of interest, the normalized correlation coefficient and the mutual information feature; treat adjacent candidate key frames with the same number of regions of interest as potentially similar frames, decide whether they are indeed similar by comparing the normalized correlation coefficient and the mutual information with the corresponding specified thresholds, and screen further to obtain the target key frame sequence.
The method for extracting key frames from CT pelvic images can greatly reduce the amount of image data to be segmented, improve the efficiency of CT image segmentation, and lay a foundation for the rapid segmentation of sequential pelvic CT images.
Drawings
FIG. 1: Flow chart of the method for rapidly extracting the pelvic outline from sequential CT images based on key frame marking
FIG. 2: initial image display
FIG. 3: candidate key frame extraction flow chart
FIG. 4: target set extraction flow chart
Fig. 5(a), (b): Labeled contours of two adjacent key frames, respectively
FIG. 6: Automatic segmentation result of a CT image lying between the two adjacent key frames of FIG. 5
Detailed Description
In current hospital practice, the pelvic region is marked manually to plan surgery, and a single patient's CT sequence contains many images; manually annotating one CT image clinically takes roughly 15 minutes, so manual segmentation of the pelvic region is time-consuming and burdensome. Because the slice spacing in a serial CT scan is small, the bone morphology changes little between two adjacent frames and their similarity is high; the invention exploits this property to extract key frames from the CT slice sequence, which greatly reduces the amount of data that must be segmented. The invention and its application scenarios are further described below with reference to the embodiments and the accompanying drawings:
step 1: CT image preprocessing
Taking a patient with a pelvic fracture as an example, 540 CT slices were obtained in a plain scan at 1 mm intervals, each of size 512x512, and only the image portion of each DICOM file was extracted. Windowing is applied to the image portion with a window level of 900 and a window width of 600, i.e. c = 900 and w = 600; the image data are converted according to formula (1) and compressed to 256 gray levels:

f = (x − (c − w/2)) / w × 255, with f clipped to the range [0, 255]    (1)

where f is the displayed bitmap gray-scale value, x is the raw image data, w is the window width, and c is the window level.
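As an illustration, the windowing transform of formula (1) can be sketched in a few lines of Python/NumPy. The function name apply_window is illustrative and not part of the patent; the raw CT values are assumed to be available as a NumPy array.

```python
import numpy as np

def apply_window(x, c=900, w=600):
    """Linear windowing of formula (1): map raw CT values to 8-bit gray levels.
    c is the window level and w the window width (values used in this embodiment)."""
    low = c - w / 2.0                                    # lower edge of the window
    f = (x - low) / w * 255.0                            # linear stretch of the window interval
    return np.clip(f, 0, 255).astype(np.uint8)           # values outside the window saturate
```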
After windowing, the image is converted to BMP format; the effect before and after windowing is shown in fig. 2. The BMP image is binarized according to formula (2), i.e. pixels above the threshold are set to 1 and all others to 0, and the holes in the binary image are filled by morphological operations. Experience shows that the body region and the CT table can be distinguished by the area of their connected components, so the connected components of the binary image are sorted by area, the component with the largest area is set to 1 and all other regions to 0 to obtain a mask image, and the bounding rectangle Rect(x, y, width, height) of the largest component is recorded at the same time. The mask image is then convolved with the BMP image to remove interference from the CT table, artifacts and the like, so that only the body area remains in the image and all other areas are 0.
Considering that the patient's body area changes little during a CT scan, the bounding rectangle Rect(x, y, width, height) is extended by 10 pixels in each of the four directions (up, down, left, right), the image is cropped with reference to this enlarged rectangle, and the initial image is generated; this both reduces the data volume and preserves the body-area information. The initial image is shown in fig. 3 and in this example has a size of 436x240. Thereafter, all CT images are cropped according to the same rectangle, ensuring that the relative position of the body region in adjacent CT images remains unchanged.
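A possible implementation of the body-region extraction and cropping described above is sketched below with OpenCV. The 10-pixel margin follows the text; the Otsu threshold standing in for formula (2), the closing kernel size and the function name crop_body_region are assumptions of this sketch.

```python
import cv2
import numpy as np

def crop_body_region(bmp_img, margin=10):
    """Keep the largest connected component (the body area), suppress the CT table and
    artifacts, and crop to the enlarged bounding rectangle, as in step 1."""
    # Binarization (formula (2)); Otsu's threshold is used here as a stand-in.
    _, binary = cv2.threshold(bmp_img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Fill holes in the binary image with a morphological closing.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    # Sort connected components by area and keep only the largest one (the body).
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))   # label 0 is the background
    mask = (labels == largest).astype(np.uint8)
    body = bmp_img * mask                                        # table and artifacts set to 0
    # Bounding rectangle Rect(x, y, width, height), extended by `margin` pixels per side.
    x, y, w, h = cv2.boundingRect(mask)
    x0, y0 = max(x - margin, 0), max(y - margin, 0)
    x1 = min(x + w + margin, bmp_img.shape[1])
    y1 = min(y + h + margin, bmp_img.shape[0])
    return body[y0:y1, x0:x1], (x0, y0, x1 - x0, y1 - y0)
```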
Step 2: key frame extraction
This step includes two parts, namely, the preliminary screening to obtain the candidate key frame sequence and the fine screening to obtain the target key frame sequence.
In the preliminary screening, mean filtering, speckle removal and morphological processing are first applied to the initial image to obtain the approximate bone distribution area; this area is taken as the region of interest, a region-of-interest mask is obtained, and the number of regions of interest, RoiNumber, is recorded. The region-of-interest mask is shown in fig. 4; the region of interest is the region that will later be segmented accurately. The mask is convolved with the initial image to obtain an intermediate image.
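The preliminary ROI-mask construction could look roughly as follows; the threshold, kernel and minimum-area values are illustrative choices rather than values stated in the patent, and the element-wise mask multiplication stands in for the "convolution" of mask and initial image described in the text.

```python
import cv2
import numpy as np

def bone_roi_mask(initial_img, blur_ksize=5, bone_thresh=180, min_area=50):
    """Rough bone-distribution (ROI) mask: mean filtering, morphological processing
    and speckle removal, returning the intermediate image, the mask and RoiNumber."""
    smoothed = cv2.blur(initial_img, (blur_ksize, blur_ksize))        # mean filtering
    _, rough = cv2.threshold(smoothed, bone_thresh, 1, cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    rough = cv2.morphologyEx(rough, cv2.MORPH_CLOSE, kernel)          # morphological processing
    # Speckle removal: keep only components above min_area and count them (RoiNumber).
    n, labels, stats, _ = cv2.connectedComponentsWithStats(rough)
    mask = np.zeros_like(rough)
    roi_number = 0
    for lbl in range(1, n):
        if stats[lbl, cv2.CC_STAT_AREA] >= min_area:
            mask[labels == lbl] = 1
            roi_number += 1
    intermediate = initial_img * mask       # mask applied to the initial image
    return intermediate, mask, roi_number
```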
The intermediate images are arranged in spatial order, i.e. the relative order within the CT sequence is kept unchanged, and the pixel difference between two adjacent CT frames is used as the feature distance for preliminary screening of the CT sequence, yielding the candidate key frame sequence {y1, y2, y3, …, yl}. Fig. 5 shows the flowchart of candidate key frame extraction; the specific steps are as follows:
1) For the CT sequence set {f1, f2, f3, …, fn}, first select the first frame f1 as the current key frame, set j = 1, and add the current key frame to the candidate set;
2) Compute the pixel difference between the next frame j+1 and the current frame j, Dif = f(j+1) − f(j). If the difference is larger than the specified threshold, the two images are considered to differ substantially and to have low similarity, and frame j+1 is added to the candidate set; if the difference is smaller than the specified threshold, frame f(j+1) is considered similar to the current frame and is not added;
3) Set j = j + 1, i.e. take the (j+1)-th frame as the current key frame, and judge whether this frame is the last one; if so, stop extracting candidate key frames; if not, return to step 2).
Following the steps above, the candidate key frame sequence {y1, y2, y3, …, yl} is obtained; in this example l = 237. Fine screening is then carried out on this sequence to obtain the target key frame sequence.
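The preliminary screening loop of steps 1)-3) can be sketched as follows. The patent only speaks of a "pixel difference value"; the mean absolute difference used here, and the function name coarse_screen, are assumptions of this sketch.

```python
import numpy as np

def coarse_screen(intermediate_frames, diff_thresh):
    """Preliminary screening: a frame becomes a candidate key frame when its pixel
    difference from the previous frame exceeds diff_thresh."""
    candidates = [intermediate_frames[0]]                 # step 1): the first frame is always kept
    current = intermediate_frames[0].astype(np.float32)
    for frame in intermediate_frames[1:]:
        frame_f = frame.astype(np.float32)
        dif = np.mean(np.abs(frame_f - current))          # pixel difference Dif of adjacent frames
        if dif > diff_thresh:                             # low similarity: new candidate key frame
            candidates.append(frame)
        current = frame_f                                 # step 3): advance to the next frame
    return candidates
```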
In the fine screening, the number of regions of interest, the normalized correlation coefficient and the mutual information are used in turn as features for further key frame extraction from the candidate sequence {y1, y2, y3, …, yl}; by comparing these feature values with set thresholds, the final target key frame sequence {k1, k2, k3, …, kt} is extracted from the candidate set. The algorithm flow is shown in fig. 6, and the specific steps are as follows:
1) First select the first frame y1 as a key frame, set i = 1, and add this frame to the target set;
2) Judge whether the number of regions of interest of the (i+1)-th frame equals that of the i-th frame, i.e. whether RoiNumber(i+1) = RoiNumber(i). If they differ, the (i+1)-th frame is considered to have low similarity to the current frame and is added to the target set; if they are the same, go to 3);
3) Compute the mutual information of the two frames, I(A, B) = H(A) + H(B) − H(A, B), where H(A) and H(B) are the gray-level entropies of the two images and H(A, B) is their joint entropy, and compute their normalized correlation coefficient R(A, B) = Σ(A − Ā)(B − B̄) / √(Σ(A − Ā)² · Σ(B − B̄)²). A larger I indicates a higher degree of correlation between the two frames, and a smaller I a lower degree of correlation. Therefore, when both I(i+1, i) > T1 and R(i+1, i) < T2 are satisfied, the (i+1)-th frame is added to the target set; when this condition is not satisfied, no processing is performed.
4) If the current frame is the last frame in the candidate set, the key frame target set is generated; if not, return to step 2).
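A compact sketch of the fine-screening test of steps 2) and 3) is given below. The joint-histogram estimate of the mutual information, the bin count and the function names are choices made for this sketch; the thresholds T1 and T2 are not specified in the patent and must be supplied.

```python
import numpy as np

def mutual_information(a, b, bins=64):
    """Mutual information I(A,B) estimated from the joint gray-level histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0                                           # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

def normalized_correlation(a, b):
    """Normalized correlation coefficient R of two frames."""
    a = a.astype(np.float64) - a.mean()
    b = b.astype(np.float64) - b.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2)))

def fine_screen(candidates, roi_numbers, t1, t2):
    """Fine screening: differing ROI counts, or I > T1 together with R < T2,
    mark the next candidate as a target key frame (steps 1)-4))."""
    targets = [candidates[0]]
    for i in range(len(candidates) - 1):
        cur, nxt = candidates[i], candidates[i + 1]
        if roi_numbers[i + 1] != roi_numbers[i]:           # step 2): ROI count changed
            targets.append(nxt)
            continue
        if mutual_information(nxt, cur) > t1 and normalized_correlation(nxt, cur) < t2:
            targets.append(nxt)                            # step 3) condition satisfied
    return targets
```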
Following the above process, the candidate key frame sequence {y1, y2, y3, …, yl} is further screened to obtain the target key frame sequence {k1, k2, k3, …, kt}; in this example t = 28, i.e. a total of 28 CT images are retained as key frames after step 2. The number of key frames extracted in the two screening stages of step 2 is shown in Table 1, and the time used in each stage is shown in Table 2.
TABLE 1: Number of key frames extracted in each of the two screening stages
TABLE 2: Time used in each stage
Step 3: Interactive labeling of key frame bone contours
For the target key frame sequence {k1, k2, k3, …, kt}, skeleton extraction is first performed on the region of interest of each key frame to obtain marker points, and an initial contour is then generated automatically with a marker-based watershed algorithm. The initial contour is displayed, and the physician judges whether it conforms to the medical anatomy; if not, marker points are added or deleted manually to correct the initial contour and refine the topological information, yielding the contour curve of the bone region for each CT image in the target key frame sequence. The bone contour markings of the key frames then serve as initial contours for further segmenting the sequential CT images of the same patient.
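One way to realize the skeleton-marker watershed initialization described above is sketched below with scikit-image and SciPy; the patent does not fix these libraries or parameters, so the gradient-magnitude choice and sigma value are assumptions.

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize
from skimage.segmentation import watershed

def initial_contours(key_frame, roi_mask):
    """Generate initial bone contours for a key frame: skeletonize the ROI to obtain
    marker points, then run a marker-based watershed on the gradient image.
    The resulting label boundaries are shown to the physician for correction."""
    skeleton = skeletonize(roi_mask.astype(bool))                      # marker points
    markers, _ = ndimage.label(skeleton)                               # each skeleton piece seeds a region
    gradient = ndimage.gaussian_gradient_magnitude(key_frame.astype(np.float64), sigma=1.0)
    labels = watershed(gradient, markers, mask=roi_mask.astype(bool))
    return labels                                                      # label boundaries = initial contours
```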
Step 4: Extracting the bone contour of each CT image with the GVF model
For the CT image sequence between key frame i and key frame i+1, the standard contour of key frame i is used as the initial contour. The initial contour is deformed with the GVF model to obtain the target contours of the sequential CT images of the same patient.
In the GVF model, an energy-carrying curve is defined near the image target; under the action of internal and external energies the curve moves toward the position of minimum energy. The curve is represented as X(s) = (x(s), y(s)), s ∈ [0, 1], and its energy functional is defined as

E = ∫₀¹ [ ½ ( α(s)|X′(s)|² + β(s)|X″(s)|² ) + E_ext(X(s)) ] ds

To minimize this energy, the curve must satisfy the equation:
−α(s)X″(s) + β(s)X″″(s) + ∇E_ext(X) = 0    (3)
The target edge is obtained by solving for the minimum of this equation. In the GVF model the gradient vector field is defined as V(x, y) = (u(x, y), v(x, y)), and the energy functional of the field over the edge map f is expressed as

ε = ∬ [ μ(u_x² + u_y² + v_x² + v_y²) + |∇f|² · |V − ∇f|² ] dx dy
where (x, y) are the coordinates of any point in the image, ∇f denotes the gradient of the edge map at (x, y), f_x and f_y are its components, and μ is an adjustment factor. According to variational theory, the GVF field satisfies the Euler equations, whose decomposed form is

μ∇²u − (u − f_x)(f_x² + f_y²) = 0
μ∇²v − (v − f_y)(f_x² + f_y²) = 0

Solving these equations yields V(x, y), which is taken as the external force F_ext = V(x, y) and substituted when solving for the curve X(s) in equation (3); the solution process moves the initial contour curve toward the target edge, i.e. it is the deformation of the initial contour.
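The GVF field can be computed by gradient descent on the Euler equations above; a minimal sketch follows. The parameter values (mu, step size, iteration count) are illustrative and not taken from the patent, and the edge map is normalized to [0, 1] before use.

```python
import numpy as np
from scipy import ndimage

def gvf_field(edge_map, mu=0.2, iterations=200, dt=0.5):
    """Iteratively solve the GVF Euler equations
        mu * lap(u) - (u - fx) * (fx^2 + fy^2) = 0
        mu * lap(v) - (v - fy) * (fx^2 + fy^2) = 0
    The converged V(x, y) = (u, v) serves as the external force F_ext in equation (3)."""
    f = edge_map.astype(np.float64)
    f = (f - f.min()) / (f.max() - f.min() + 1e-12)       # normalize the edge map
    fy, fx = np.gradient(f)                               # np.gradient returns d/drow, d/dcol
    u, v = fx.copy(), fy.copy()
    mag2 = fx ** 2 + fy ** 2
    for _ in range(iterations):
        u += dt * (mu * ndimage.laplace(u) - (u - fx) * mag2)
        v += dt * (mu * ndimage.laplace(v) - (v - fy) * mag2)
    return u, v
```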
For the CT image sequence of the same patient, the manually marked contours of the key frames are used as initial contours, and the GVF model deforms these initial contours to obtain the bone segmentation results of all pelvic CT sequence images of that patient.

Claims (1)

1. A method for extracting key frames from a CT pelvic image, comprising the following steps:
Step 1: CT image preprocessing
Performing windowing, denoising, removal of artifacts and of non-body areas such as the CT table, and image cropping on the CT images, so that after preprocessing all images of the CT sequence have the same size and the relative position of the body area is unchanged; for convenience of description, the cropped image is referred to as the initial image;
Step 2: obtaining a candidate key frame sequence by preliminary screening
Applying mean filtering, speckle removal and morphological processing to the initial image to obtain the approximate region of bone distribution, taking this region as the region of interest and obtaining a region-of-interest mask; convolving the mask with the initial image to obtain an intermediate image; arranging the intermediate images in spatial order, using the pixel difference between two adjacent CT frames as the feature distance, treating adjacent frames whose feature distance is smaller than a specified threshold as similar frames, and preliminarily screening the CT sequence to obtain a candidate key frame sequence;
Step 3: fine screening of the candidate key frame sequence
Computing, for the candidate key frame sequence, the number of regions of interest, the normalized correlation coefficient and the mutual information feature; treating adjacent candidate key frames with the same number of regions of interest as potentially similar frames, deciding whether they are indeed similar by comparing the normalized correlation coefficient and the mutual information with the corresponding specified thresholds, and screening further to obtain the target key frame sequence.
CN201710050599.3A 2017-01-23 2017-01-23 Method for extracting key frame of CT pelvic image Active CN106780492B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710050599.3A CN106780492B (en) 2017-01-23 2017-01-23 Method for extracting key frame of CT pelvic image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710050599.3A CN106780492B (en) 2017-01-23 2017-01-23 Method for extracting key frame of CT pelvic image

Publications (2)

Publication Number Publication Date
CN106780492A CN106780492A (en) 2017-05-31
CN106780492B true CN106780492B (en) 2019-12-20

Family

ID=58941807

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710050599.3A Active CN106780492B (en) 2017-01-23 2017-01-23 Method for extracting key frame of CT pelvic image

Country Status (1)

Country Link
CN (1) CN106780492B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109671067B (en) * 2018-12-14 2021-02-19 强联智创(北京)科技有限公司 Method and system for measuring core infarction volume based on skull CT image
CN109671069B (en) * 2018-12-14 2021-02-19 强联智创(北京)科技有限公司 Method and system for measuring core infarction volume based on skull CT image
CN110148127B (en) * 2019-05-23 2021-05-11 数坤(北京)网络科技有限公司 Intelligent film selection method, device and storage equipment for blood vessel CTA post-processing image
CN112330665B (en) * 2020-11-25 2024-04-26 沈阳东软智能医疗科技研究院有限公司 CT image processing method, device, storage medium and electronic equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1318681A2 (en) * 2001-11-16 2003-06-11 Monolith Co., Ltd. Image presentation method and apparatus
CN105139421A (en) * 2015-08-14 2015-12-09 西安西拓电气股份有限公司 Video key frame extracting method of electric power system based on amount of mutual information
CN105469383A (en) * 2014-12-30 2016-04-06 北京大学深圳研究生院 Wireless capsule endoscopy redundant image screening method based on multi-feature fusion

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1318681A2 (en) * 2001-11-16 2003-06-11 Monolith Co., Ltd. Image presentation method and apparatus
CN105469383A (en) * 2014-12-30 2016-04-06 北京大学深圳研究生院 Wireless capsule endoscopy redundant image screening method based on multi-feature fusion
CN105139421A (en) * 2015-08-14 2015-12-09 西安西拓电气股份有限公司 Video key frame extracting method of electric power system based on amount of mutual information

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Medical Image Segmentation Based on GVF Snake Model; Wu Bingrong et al.; 2009 Second International Conference on Intelligent Computation Technology and Automation; 2009-12-31; pp. 638-640 *
An improved mutual-information-based key frame extraction algorithm for animation video; Zeng Hua et al.; Computer Knowledge and Technology; 2016-05-31; Vol. 12, No. 15; pp. 220-222 *
Research on virtual organ representation methods for childbirth simulation; Yang Nanyue; China Master's Theses Full-text Database, Medicine and Health Sciences; 2007-11-15 (No. 05); thesis body pp. 10-18 *

Also Published As

Publication number Publication date
CN106780492A (en) 2017-05-31

Similar Documents

Publication Publication Date Title
CN106846346B (en) Method for rapidly extracting pelvis outline of sequence CT image based on key frame mark
CN106780491B (en) Initial contour generation method adopted in segmentation of CT pelvic image by GVF method
Yao et al. Automated spinal column extraction and partitioning
Kim et al. A fully automatic vertebra segmentation method using 3D deformable fences
CN106780492B (en) Method for extracting key frame of CT pelvic image
Ma et al. Two graph theory based methods for identifying the pectoral muscle in mammograms
Pulagam et al. Automated lung segmentation from HRCT scans with diffuse parenchymal lung diseases
CN103440665A (en) Automatic segmentation method of knee joint cartilage image
US20100049035A1 (en) Brain image segmentation from ct data
CN109753997B (en) Automatic accurate robust segmentation method for liver tumor in CT image
CN111681230A (en) System and method for scoring high-signal of white matter of brain
Barbieri et al. Vertebral body segmentation of spine MR images using superpixels
Lou et al. Automatic breast region extraction from digital mammograms for PACS and telemammography applications
Sagar et al. Color channel based segmentation of skin lesion from clinical images for the detection of melanoma
CN111325754B (en) Automatic lumbar vertebra positioning method based on CT sequence image
Umadevi et al. Enhanced Segmentation Method for bone structure and diaphysis extraction from x-ray images
Onal et al. Image based measurements for evaluation of pelvic organ prolapse
CN111627005B (en) Fracture area identification method and system for bone subdivision based on shape
Vasilache et al. Automated bone segmentation from pelvic CT images
CN109993754B (en) Method and system for skull segmentation from images
Dawod et al. Adaptive Slices in Brain Haemorrhage Segmentation Based on the SLIC Algorithm.
Areeckal et al. Fully automated radiogrammetric measurement of third metacarpal bone from hand radiograph
Chen et al. Automatic lung segmentation in HRCT images
Wang et al. A machine learning approach to extract spinal column centerline from three-dimensional CT data
El Soufi et al. CIMOR: An automatic segmentation to extract bone tissue in hand x-ray images

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant