CN106910193B - Scanning image processing method - Google Patents

Scanning image processing method

Info

Publication number
CN106910193B
CN106910193B (application CN201710268563.2A)
Authority
CN
China
Prior art keywords
image
scanning
scanned image
current
processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710268563.2A
Other languages
Chinese (zh)
Other versions
CN106910193A (en)
Inventor
李迎燕
叶宏伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Henan Mingfeng Medical Technology Co., Ltd.
Original Assignee
Minfound Medical Systems Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Minfound Medical Systems Co Ltd filed Critical Minfound Medical Systems Co Ltd
Priority to CN201710268563.2A priority Critical patent/CN106910193B/en
Publication of CN106910193A publication Critical patent/CN106910193A/en
Application granted granted Critical
Publication of CN106910193B publication Critical patent/CN106910193B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G06T 2207/10104 Positron emission tomography [PET]
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20024 Filtering details
    • G06T 2207/20032 Median filtering
    • G06T 2207/20112 Image segmentation details
    • G06T 2207/20132 Image cropping

Abstract

The invention relates to a scanned image processing method in which the ROI in a scanned image is identified and processed to obtain a clear target image. Scanned images of different parts of the head or body are filtered, or a coordinate system is established and non-target objects are deleted by comparing y values in the spatial direction, so that attachments are removed automatically and the target is identified; the method is suitable for head volume data as well as body data. The invention does not depend on anatomical information and is also applicable to scan data of other parts of the body. It uses special labels to identify and remove the attachments in the scan data, finally identifies the ROI of the body, and keeps the original gray scale and brightness information of the scanned image. This not only avoids the complexity of manually removing attachments from the scanned image but also removes the attachment information thoroughly; the fused PET/CT image has a better visual effect, and the physician can diagnose the patient more intuitively from the scanned image.

Description

Scanning image processing method
Technical Field
The invention belongs to the technical field of general image data processing or generation, and particularly relates to a scanned image processing method capable of improving image quality and the diagnostic effect.
Background
A medical image is produced by using some medium (such as X-rays, electromagnetic fields or ultrasonic waves) to interact with the human body and express the structure and density of its internal tissues and organs in the form of an image, from which the diagnosing physician judges the patient's state of health. The field covers two aspects: medical imaging systems and medical image processing. The main medical imaging systems include X-ray imaging instruments, CT scanners, positron emission tomography (PET) scanners, magnetic resonance imaging scanners, and the like.
In medical imaging, the scan image is an important component: it reflects the physiological condition of the patient and directly affects the doctor's diagnosis. Such a scan image may be a CT scan image or an MR scan image.
In the prior art, a large amount of accessory information appears in a scanned image, including head rests, bed plates, clothing, straps and other non-emissive objects. Scanning the patient's body without introducing such accessories is limited by objective factors: the patient must lie on a scanning couch, so the couch plate inevitably appears in the scanned image; the patient needs a head rest during the scan, so the head rest inevitably appears; the patient must wear clothing during the scan, so the clothing inevitably appears; and so on.
In practice, manually removing these attachments from the scanned image is time-consuming and cumbersome. Although this accessory information has very little diagnostic value for the physician, PET/CT image quality usually depends on the reconstruction algorithm, and the accessory parts in the scanned image affect the scatter estimation. If the accessories are removed, the fused PET/CT image has a better visual effect, and without their interference and useless image information the diagnosis of pathological details becomes more intuitive.
Disclosure of Invention
The technical problem solved by the invention is that, in the prior art, a great deal of accessory information, including head rests, bed plates, clothing, straps and other non-emissive objects, appears in the scanned image for objective reasons. Manually removing these accessories from the scanned image is time-consuming and cumbersome in practice, yet the accessory information contributes little to the physician's diagnosis; if the accessories are removed, the fused PET/CT image has a better visual effect, and without their interference and useless image information the diagnosis becomes more intuitive. The invention therefore provides an optimized scanned image processing method.
The technical solution adopted by the invention is a scanned image processing method comprising the following steps:
Step 1: inputting the scanned image sequence to be processed; taking the first scanned image of the current scanned image sequence and proceeding to the next step;
Step 2: preprocessing the scanned image, and finding and obtaining the ROI of the current scanned image;
Step 3: marking all targets in the image with an image labeling algorithm and removing the interference points near the ROI;
Step 4: adjusting the ROI obtained after the interference points are removed;
Step 5: performing low-pass filtering on the currently processed image to remove the head rest and clothing information outside the ROI in the scanned image;
Step 6: acquiring the serial number of the current scanned image; performing step 7 when the current scanned image is the head scan image, and performing step 8 when it is a non-head scan image;
Step 7: establishing coordinates for the current scanned image and denoting all candidate targets in the coordinate system by i; when any target satisfies the condition Center(i,2) > (Center(max,2) + a), deleting the current target i, wherein Center(i,2) represents the y-direction value of the center of gravity of target i, Center(max,2) represents the y-direction value of the center of gravity of the largest-area target i, max denotes the candidate target i with the largest area, and a is an adjustment parameter;
Step 8: obtaining an image mask, and superimposing the target mask on the original scanned image to obtain the final target;
Step 9: finishing the processing of the current scanned image, taking the next image of the current scanned image sequence, and repeating from step 2; if the current scanned image sequence has been traversed, the scanned image processing is finished.
Preferably, the scanned image sequence to be processed is an image sequence comprising a head scan image and the scan images of a number of bed positions.
Preferably, the serial number of the head scan image is 1, and the scan images of the bed positions are numbered 2 to n.
Preferably, in step 2, the preprocessing comprises the following steps:
Step 2.1: binarizing the current scanned image;
Step 2.2: performing median filtering on the current scanned image using a rectangular template and a threshold T to obtain a smoothed version of the binarized scanned image of the current ROI;
Step 2.3: projecting the median-filtered image in the horizontal and vertical directions;
Step 2.4: obtaining the coarsely segmented target ROI from the horizontal and vertical projections.
Preferably, the threshold T < -140 HU.
Preferably, in step 4, the adjusting includes correcting the low-level horizontal line.
Preferably, in the step 5, the frequency of the low-pass filtering is less than or equal to -100 HU.
Preferably, in step 5, the head rest includes highlight information and non-highlight information, and the clothing information is non-highlight information.
Preferably, in the step 7, a is less than or equal to 40 mm.
Preferably, in step 7, the remaining candidate targets with an area value smaller than b are deleted, where b is smaller than 320 mm².
The invention provides an optimized scanned image processing method in which the ROI in a scanned image is identified and processed to obtain a clearer target image. Scanned images of different parts of the head or body are filtered, or a coordinate system is established and non-target objects are deleted by comparing y values in the spatial direction, so that attachments are removed automatically and the target is identified; the method is suitable for head volume data as well as body data. The invention does not depend on anatomical information and is also applicable to scan data of other parts of the body. It uses special labels to identify and remove the attachments in the scan data, finally identifies the ROI of the body, and keeps the original gray scale and brightness information of the scanned image. This not only avoids the complexity of manually removing attachments from the scanned image but also removes the attachment information thoroughly; the fused PET/CT image has a better visual effect, and the physician can diagnose the patient more intuitively from the scanned image.
Detailed Description
The present invention is described in further detail with reference to the following examples, but the scope of the present invention is not limited thereto.
The invention relates to a scanned image processing method, which comprises the following steps:
Step 1: inputting the scanned image sequence to be processed; taking the first scanned image of the current scanned image sequence and proceeding to the next step.
The scanned image sequence to be processed is an image sequence comprising a head scan image and the scan images of a number of bed positions.
The serial number of the head scan image is 1, and the scan images of the bed positions are numbered 2 to n.
In the present invention, the appendages to be removed include non-emissive objects such as a bed plate, a head rest, a strap, and clothes.
In the present invention, the scan image sequence to be processed may be a CT image sequence or an MR image sequence.
In the present invention, the content presented by the scan image can be scan volume data of any part of the body, including but not limited to the head, the chest, the waist and the abdomen.
In the invention, all scanned images are arranged in sequence. Because the head rest appears in the head scan image, that image differs from the scanned images of the other parts of the body; since the method traverses all scanned images, the head scan image is given serial number 1 and the remaining scanned images are numbered in order.
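As a rough illustration of this ordering, the minimal Python sketch below (not part of the patent) assumes the sequence arrives as an ordered list of 2D slices; process_head and process_body are hypothetical placeholders for the per-slice processing of steps 2 to 8.

```python
def process_scan_sequence(slices, process_head, process_body):
    """Steps 1, 6 and 9 (sketch): slices[0] is the head scan (serial number 1)
    and the rest are bed positions 2..n; each slice is handed to the head or
    body routine according to its serial number."""
    results = []
    for serial, image in enumerate(slices, start=1):
        handler = process_head if serial == 1 else process_body
        results.append(handler(image))
    return results
```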
Step 2: preprocessing the scanned image, and finding and obtaining the ROI of the current scanned image.
In step 2, the preprocessing comprises the following steps:
Step 2.1: binarizing the current scanned image;
Step 2.2: performing median filtering on the current scanned image using a rectangular template and a threshold T to obtain a smoothed version of the binarized scanned image of the current ROI;
Step 2.3: projecting the median-filtered image in the horizontal and vertical directions;
Step 2.4: obtaining the coarsely segmented target ROI from the horizontal and vertical projections.
The threshold T < -140 HU.
In the present invention, finding and obtaining the ROI (region of interest) of the current scanned image is well understood in the art.
In the invention, the preprocessing includes binarization of the scanned image containing the current ROI. After binarization, the whole scanned image contains only black and white; after median filtering, the edges of the scanned image are smoothed. At this point the horizontal and vertical projections of the smoothed image can be computed, yielding the coarsely segmented target ROI.
In the invention, the threshold is generally chosen to be less than -140 HU. Because CT HU values correspond, to a certain extent, to the tissues of the body, a threshold below -140 HU removes a great deal of useless information while preserving image information such as skin and fat.
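A minimal sketch of this preprocessing, assuming the slice is a 2D numpy array of HU values; the -140 HU threshold follows the text, while the 5x5 rectangular template and the bounding-box reading of the coarsely segmented target ROI are assumptions.

```python
import numpy as np
from scipy import ndimage

def preprocess_roi(slice_hu, threshold_hu=-140.0, template=(5, 5)):
    """Step 2 (sketch): binarize, median-filter and project to get a coarse ROI."""
    binary = slice_hu > threshold_hu                  # 2.1 keep tissue denser than air/fabric
    smooth = ndimage.median_filter(binary.astype(np.uint8), size=template)  # 2.2
    proj_rows = smooth.sum(axis=1)                    # 2.3 horizontal projection
    proj_cols = smooth.sum(axis=0)                    #     vertical projection
    rows = np.flatnonzero(proj_rows)
    cols = np.flatnonzero(proj_cols)
    if rows.size == 0 or cols.size == 0:
        return None, smooth                           # nothing above the threshold
    bbox = (rows[0], rows[-1], cols[0], cols[-1])     # 2.4 coarse target ROI
    return bbox, smooth
```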
Step 3: marking all targets in the image with an image labeling algorithm and removing the interference points near the ROI.
In the invention, after binarization, interference points near the ROI can be marked and removed.
In the present invention, an image labeling algorithm may be employed. The image labeling algorithm segments the isolated connected regions of the image and then labels them with numbers starting from 1, in order: the points of the first region are labeled 1, those of the second region 2, and so on. After labeling is completed, the unwanted interference points are removed.
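The labeling step can be illustrated with connected-component labeling from scipy; treating components below an assumed pixel-count cutoff as interference points is one plausible reading, since the text gives no size threshold for this step.

```python
import numpy as np
from scipy import ndimage

def label_and_clean(binary_mask, min_area=50):
    """Step 3 (sketch): label isolated connected regions 1, 2, ... and drop
    small ones as interference points.  min_area is an assumed cutoff."""
    labels, num = ndimage.label(binary_mask)
    if num == 0:
        return binary_mask
    # pixel count of each labeled region, ignoring the background label 0
    areas = ndimage.sum(binary_mask, labels, index=list(range(1, num + 1)))
    keep = [i + 1 for i, area in enumerate(areas) if area >= min_area]
    return np.isin(labels, keep)
```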
Step 4: adjusting the ROI obtained after the interference points are removed.
In step 4, the adjustment includes correcting the low-level horizontal line.
In the invention, the low-level horizontal line is corrected to remove residual bed-plate information and obtain a more accurate ROI. In practice the residual bed-plate information is usually not removed by the low-pass filtering, so this step is very important.
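The text does not spell out how the low-level horizontal line is corrected. One plausible reading, sketched below under that assumption, is to locate the bottom edge of the largest connected region (the body) and clear every row beneath it, discarding a residual couch strip lying under the patient.

```python
import numpy as np
from scipy import ndimage

def correct_low_horizontal_line(binary_mask, margin=2):
    """Step 4 (sketch of one possible reading): clear rows below the bottom
    edge of the largest connected region so a residual bed-plate strip under
    the body is discarded.  margin (in rows) is an assumed safety band."""
    labels, num = ndimage.label(binary_mask)
    if num == 0:
        return binary_mask
    areas = ndimage.sum(binary_mask, labels, index=list(range(1, num + 1)))
    largest = int(np.argmax(areas)) + 1
    body_rows = np.flatnonzero((labels == largest).any(axis=1))
    cutoff = min(body_rows[-1] + margin, binary_mask.shape[0] - 1)
    cleaned = binary_mask.copy()
    cleaned[cutoff + 1:, :] = False   # rows below the body are treated as bed plate
    return cleaned
```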
Step 5: performing low-pass filtering on the currently processed image to remove the head rest and clothing information outside the ROI in the scanned image.
In step 5, the frequency of the low-pass filtering is less than or equal to -100 HU.
In step 5, the head rest comprises both highlight and non-highlight information, while the clothing information is non-highlight information.
In the invention, step 5 mainly performs a first-pass processing of the current image to remove most of the head rest and clothing information outside the recognizable ROI.
In the present invention, the frequency of the low-pass filtering is equal to or less than -100 HU.
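The "low-pass filtering" here is expressed as a cutoff in HU rather than in frequency; one way to read it, sketched below as an assumption, is as a value threshold applied to the candidate mask that drops low-density material such as clothing and foam padding (at or below roughly -100 HU) while keeping skin, fat and denser tissue.

```python
import numpy as np

def suppress_low_density(slice_hu, roi_mask, cutoff_hu=-100.0):
    """Step 5 (sketch, one reading of the -100 HU cutoff): drop candidate
    pixels whose HU value is at or below the cutoff, removing low-density
    material such as clothing while keeping body tissue inside the ROI."""
    return roi_mask & (slice_hu > cutoff_hu)
```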
Step 6: acquiring the serial number of the current scanned image; performing step 7 when the current scanned image is the head scan image, and performing step 8 when the current scanned image is a non-head scan image.
Step 7: establishing coordinates for the current scanned image and denoting all candidate targets in the coordinate system by i; when any target satisfies the condition Center(i,2) > (Center(max,2) + a), deleting the current target i, wherein Center(i,2) represents the y-direction value of the center of gravity of target i, Center(max,2) represents the y-direction value of the center of gravity of the largest-area target i, max denotes the candidate target i with the largest area, and a is an adjustment parameter.
In the step 7, a is less than or equal to 40 mm.
Step 8: obtaining an image mask, and superimposing the target mask on the original scanned image to obtain the final target.
In step 7, the remaining candidate targets with an area value smaller than b are also deleted, where b is smaller than 320 mm².
Step 9: finishing the processing of the current scanned image, taking the next image of the current scanned image sequence, and repeating from step 2; if the current scanned image sequence has been traversed, the scanned image processing is finished.
In the present invention, the head and the body are processed differently because their appendages have different properties.
In the invention, the head rest is made of metal and therefore appears as highlight information, so step 7 is used to process the head scan image. Because this highlight information resembles structures such as bone, removing it with an HU threshold alone would also remove part of the body and reduce the fidelity of the image.
In the invention, coordinates are established for the current scanned image and i denotes all candidate targets in the coordinate system; when any target satisfies the condition Center(i,2) > (Center(max,2) + a), the current target i is deleted, where Center(i,2) represents the y-direction value of the center of gravity of target i, Center(max,2) represents the y-direction value of the center of gravity of the largest-area target i, max denotes the candidate target i with the largest area, and a is an adjustment parameter.
In the invention, the Center function is a centroid function. After the removal steps, several connected regions remain in the image, each with its own area value; max therefore denotes the candidate target i with the largest area, and Center(max,2) gives the y-direction value of the centroid of that largest connected region.
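A sketch of the centroid test of step 7, assuming 2D connected components, an isotropic pixel spacing supplied by the caller, and the row index as the y direction; the 40 mm bound on a and the sub-320 mm² bound on b follow the text, while the default pixel spacing and the exact value of b are assumptions.

```python
import numpy as np
from scipy import ndimage

def remove_head_rest(binary_mask, pixel_spacing_y=1.0, a_mm=40.0, b_mm2=300.0):
    """Step 7 (sketch): delete targets whose y centroid exceeds that of the
    largest-area target by more than a, then drop leftovers smaller than b."""
    labels, num = ndimage.label(binary_mask)
    if num == 0:
        return binary_mask
    idx = list(range(1, num + 1))
    areas_px = ndimage.sum(binary_mask, labels, index=idx)
    centers = ndimage.center_of_mass(binary_mask, labels, index=idx)  # (row, col) per target
    y_max = centers[int(np.argmax(areas_px))][0] * pixel_spacing_y    # Center(max, 2)
    px_area_mm2 = pixel_spacing_y ** 2                                # assumes square pixels
    keep = []
    for label_id, (c, area_px) in zip(idx, zip(centers, areas_px)):
        y_i = c[0] * pixel_spacing_y                  # Center(i, 2)
        if y_i > y_max + a_mm:                        # Center(i,2) > Center(max,2) + a: delete
            continue
        if area_px * px_area_mm2 < b_mm2:             # residual small target: delete
            continue
        keep.append(label_id)
    return np.isin(labels, keep)
```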
In the present invention, for non-head scanned images such as the shoulder, waist and abdomen, spatial information can be used to remove non-target objects.
In the invention, step 8 is generally adopted: an image mask is obtained, the target mask is superimposed on the original scanned image, and the final target is obtained through the mask.
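A minimal sketch of the masking in step 8: the ROI keeps its original gray levels because the mask only selects pixels from the original slice; filling the background with an air value of -1000 HU is an assumption, since the text does not say what replaces the removed attachments.

```python
import numpy as np

def apply_mask(slice_hu, target_mask, background_hu=-1000.0):
    """Step 8 (sketch): superimpose the target mask on the original slice so
    the ROI keeps its original gray levels; pixels outside the mask are set
    to an assumed air value of -1000 HU."""
    out = np.full_like(slice_hu, background_hu, dtype=np.float32)
    out[target_mask] = slice_hu[target_mask]
    return out
```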
In the invention, the processing operation is finished by traversing the subsequent scanned images.
The processing method utilizes the same coordinate system for the processing of all the scanned images in the sequence of scanned images.
In the present invention, a joint marker projection method is employed for all scanned images, i.e., the processing of all scanned images in a sequence of scanned images utilizes the same coordinate system.
The gray scale of the ROI in the scanned image sequence to be processed is equal to the gray scale of the ROI in the processed scanned image sequence.
The brightness of the ROI in the scanned image sequence to be processed is equal to the brightness of the ROI in the processed scanned image sequence.
In the invention, in order to ensure that the scanned image is presented faithfully, the gray scale and the brightness of the ROI in the scanned image sequence to be processed are equal to those of the ROI in the scanned image sequence after the processing is finished.
The invention solves the problem that, in the prior art, a great deal of accessory information, including head rests, bed plates, clothing, straps and other non-emissive objects, appears in the scanned image for objective reasons; manually removing these accessories is time-consuming and cumbersome in practice, yet the accessory information contributes little to the physician's diagnosis, and if the accessories are removed the fused PET/CT image has a better visual effect and the diagnosis becomes more intuitive without their interference and useless image information. A clear target image is obtained after the ROI in the scanned image is identified and processed; scanned images of different parts of the head or body are filtered, or a coordinate system is established and non-target objects are deleted by comparing y values in the spatial direction, so that attachments are removed automatically and the target is identified, and the method is applicable to both head and body data. The invention does not depend on anatomical information and is also applicable to scan data of other parts of the body. It uses special labels to identify and remove the attachments in the scan data, finally identifies the ROI of the body, and keeps the original gray scale and brightness information of the scanned image. This not only avoids the complexity of manually removing attachments from the scanned image but also removes the attachment information thoroughly; the fused PET/CT image has a better visual effect, and the physician can diagnose the patient more intuitively from the scanned image.

Claims (9)

1. A method of processing a scanned image, characterized in that the method comprises the following steps:
step 1: inputting a scanning image sequence to be processed, wherein the scanning image sequence to be processed is an image sequence comprising a head scanning image and a plurality of scanning images of beds; taking a first scanned image of the current scanned image sequence, and carrying out the next step;
step 2: preprocessing the scanned image, and searching and obtaining the ROI of the current scanned image;
step 3: segmenting the isolated connected regions of the image, labeling them with numbers starting from 1 in sequence, thereby marking all targets in the image, and removing the interference points near the ROI;
step 4: adjusting the ROI obtained after the interference points are removed;
step 5: performing low-pass filtering on the currently processed image to remove head support and/or clothes information outside the ROI in the scanned image;
step 6: acquiring the serial number of the current scanned image, performing step 7 when the current scanned image is the head scan image, and performing step 8 when the current scanned image is a non-head scan image;
step 7: establishing coordinates for the current scanned image, with i representing all candidate targets in the coordinate system; when any target satisfies the condition Center(i,2) > (Center(max,2) + a), deleting the current target i, wherein Center(i,2) represents the y-direction value of the center of gravity of target i, Center(max,2) represents the y-direction value of the center of gravity of the target i with the largest area, max represents the candidate target i with the largest area, and a is an adjustment parameter;
step 8: obtaining an image mask, and superimposing the image mask on the original scanned image to obtain the final target;
step 9: finishing the processing of the current scanned image, taking the next image of the current scanned image sequence, and repeating from step 2; if the current scanned image sequence has been traversed, the scanned image processing is finished.
2. A method of processing a scanned image as claimed in claim 1, characterized in that: the serial number of the head scanning image is 1, and the serial numbers of the scanning images of the beds are 2-n.
3. A method of processing a scanned image as claimed in claim 1, characterized in that: In the step 2, the preprocessing comprises the following steps:
step 2.1: carrying out binarization on a current scanned image;
step 2.2: performing median filtering on the current scanned image by using a rectangular template and a threshold value T to obtain a smooth image of the binary scanned image comprising the ROI;
step 2.3: projecting the image subjected to median filtering in the horizontal direction and the vertical direction;
step 2.4: the first segmented ROI is obtained from the projections in the horizontal and vertical directions.
4. A method of processing a scanned image as claimed in claim 3, characterized in that: the threshold T < -140 HU.
5. A method of processing a scanned image as claimed in claim 1, characterized in that: in step 4, the adjustment includes correcting the low level horizontal line.
6. A method of processing a scanned image as claimed in claim 1, characterized in that: in the step 5, the frequency of the low-pass filtering is less than or equal to-100 HU.
7. A method of processing a scanned image as claimed in claim 1, characterized in that: in the step 5, the head support comprises highlight information and non-highlight information, and the clothes information is non-highlight information.
8. A method of processing a scanned image as claimed in claim 1, characterized in that: In the step 7, a ≤ 40 mm.
9. A method of processing a scanned image as claimed in claim 1, characterized in that: In said step 7, the remaining candidate targets with area values smaller than b are deleted, where b is smaller than 320 mm².
CN201710268563.2A 2017-04-23 2017-04-23 Scanning image processing method Active CN106910193B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710268563.2A CN106910193B (en) 2017-04-23 2017-04-23 Scanning image processing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710268563.2A CN106910193B (en) 2017-04-23 2017-04-23 Scanning image processing method

Publications (2)

Publication Number Publication Date
CN106910193A CN106910193A (en) 2017-06-30
CN106910193B true CN106910193B (en) 2020-04-07

Family

ID=59209722

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710268563.2A Active CN106910193B (en) 2017-04-23 2017-04-23 Scanning image processing method

Country Status (1)

Country Link
CN (1) CN106910193B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110197712B (en) * 2019-06-05 2023-09-15 桂林电子科技大学 Medical image storage system and storage method
CN111815735B (en) * 2020-09-09 2020-12-01 南京安科医疗科技有限公司 Human tissue self-adaptive CT reconstruction method and reconstruction system
CN115294110B (en) * 2022-09-30 2023-01-06 杭州太美星程医药科技有限公司 Scanning period identification method and device, electronic equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101916443A (en) * 2010-08-19 2010-12-15 中国科学院深圳先进技术研究院 Processing method and system of CT image
CN102016911A (en) * 2008-03-03 2011-04-13 新加坡科技研究局 A method and system of segmenting CT scan data
CN103886621A (en) * 2012-11-14 2014-06-25 上海联影医疗科技有限公司 Method for automatically extracting bed plate
CN104240198A (en) * 2014-08-29 2014-12-24 西安华海盈泰医疗信息技术有限公司 Method and system for removing bed board in CT image
CN105976339A (en) * 2016-05-11 2016-09-28 妙智科技(深圳)有限公司 Method and device for automatically removing bed plate in CT image based on Gaussian model

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9044153B2 (en) * 2013-01-09 2015-06-02 Siemens Medical Solutions Usa, Inc. Random sinogram variance reduction in continuous bed motion acquisition
US9996919B2 (en) * 2013-08-01 2018-06-12 Seoul National University R&Db Foundation Method for extracting airways and pulmonary lobes and apparatus therefor

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102016911A (en) * 2008-03-03 2011-04-13 新加坡科技研究局 A method and system of segmenting CT scan data
CN101916443A (en) * 2010-08-19 2010-12-15 中国科学院深圳先进技术研究院 Processing method and system of CT image
CN103886621A (en) * 2012-11-14 2014-06-25 上海联影医疗科技有限公司 Method for automatically extracting bed plate
CN104240198A (en) * 2014-08-29 2014-12-24 西安华海盈泰医疗信息技术有限公司 Method and system for removing bed board in CT image
CN105976339A (en) * 2016-05-11 2016-09-28 妙智科技(深圳)有限公司 Method and device for automatically removing bed plate in CT image based on Gaussian model

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
A fully automatic bed/linen segmentation for fused PET/CT MIP rendering; Jinman Kim et al.; The Journal of Nuclear Medicine; 2008-12-31; vol. 49; p. 387 *
An automatic technique for finding and localizing externally attached markers in CT and MR volume images of the head; Matthew Y. et al.; IEEE Transactions on Biomedical Engineering; 1996-06-30; vol. 43, no. 6; pp. 627-637 *
Automated medical image segmentation techniques; Neeraj Sharma et al.; Journal of Medical Physics; 2010-05-31; vol. 35, no. 1; pp. 3-14 *
An interactive volume cutting algorithm based on fast region labeling; 郑杰 et al.; Journal of Xidian University; 2007-02-28; vol. 34, no. 1; pp. 49-53 *

Also Published As

Publication number Publication date
CN106910193A (en) 2017-06-30

Similar Documents

Publication Publication Date Title
US8787648B2 (en) CT surrogate by auto-segmentation of magnetic resonance images
WO2019020048A1 (en) Spinal image generation system based on ultrasonic rubbing technique and navigation positioning system for spinal surgery
Memon et al. Segmentation of lungs from CT scan images for early diagnosis of lung cancer
US9111174B2 (en) Machine learnng techniques for pectoral muscle equalization and segmentation in digital mammograms
CN106910193B (en) Scanning image processing method
US20040101184A1 (en) Automatic contouring of tissues in CT images
CN106530236B (en) Medical image processing method and system
Lim et al. Generative data augmentation for diabetic retinopathy classification
JP6458166B2 (en) MEDICAL IMAGE PROCESSING METHOD, DEVICE, SYSTEM, AND PROGRAM
JP2016041245A (en) Medical image processor and medical image processing method
Yin et al. Automatic breast tissue segmentation in MRIs with morphology snake and deep denoiser training via extended Stein’s unbiased risk estimator
CN113469935B (en) Automatic detection and positioning method for posterior superior iliac spine based on CT image
CN112767333A (en) CTA (computed tomography angiography) image-based double-lower-limb blood vessel region judgment method and system
Gardner et al. A point-correspondence approach to describing the distribution of image features on anatomical surfaces, with application to atrial fibrillation
CN115137342A (en) Image acquisition medical system and method based on deep learning
Chetty et al. A survey on brain tumor extraction approach from MRI images using image processing
CN109671131B (en) Image correction method, device, medical image equipment and storage medium
CN114332270B (en) CT image metal artifact removing method and device for minimally invasive interventional surgery
JP7404058B2 (en) Visualization of lesions formed by thermal ablation in magnetic resonance imaging (MRI) scans
US10061979B2 (en) Image processing apparatus and method
CN116309647B (en) Method for constructing craniocerebral lesion image segmentation model, image segmentation method and device
CN116725640B (en) Construction method of body puncture printing template
WO2023020609A1 (en) Systems and methods for medical imaging
Manikandan et al. Lobar fissure extraction in isotropic CT lung images—an application to cancer identification
KR101839764B1 (en) The method for nemerical algorithms of meronecrosis by lung ct image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20191127

Address after: 450000 first floor, building 2, No. 399, West Fourth Ring Road, Zhengzhou high tech Industrial Development Zone, Henan Province

Applicant after: Henan Mingfeng Medical Technology Co., Ltd.

Address before: 312099 Zhejiang province Shaoxing City Jishan Dongshan Road No. 6 Building 2-3

Applicant before: FMI Technologies Inc.

GR01 Patent grant