CN110782422A - Method for synthesizing X-ray film and marking through CT image - Google Patents

Method for synthesizing X-ray film and marking through CT image

Info

Publication number
CN110782422A
Authority
CN
China
Prior art keywords
image
ray
marked
forward projection
marking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911003131.4A
Other languages
Chinese (zh)
Inventor
曾凯
何健
冯亚崇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Anke Medical Technology Co Ltd
Original Assignee
Nanjing Anke Medical Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Anke Medical Technology Co Ltd
Priority to CN201911003131.4A
Publication of CN110782422A
Legal status: Pending (Current)

Classifications

    All classifications fall under G (PHYSICS), G06 (COMPUTING; CALCULATING OR COUNTING), G06T (IMAGE DATA PROCESSING OR GENERATION, IN GENERAL):
    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 7/13: Edge detection (under G06T 7/10: Segmentation; Edge detection)
    • G06T 2207/10081: Computed x-ray tomography [CT] (image acquisition modality)
    • G06T 2207/10116: X-ray image (image acquisition modality)
    • G06T 2207/30008: Bone (biomedical image processing)
    • G06T 2207/30048: Heart; Cardiac (biomedical image processing)
    • G06T 2207/30061: Lung (biomedical image processing)
    • G06T 2207/30064: Lung nodule (biomedical image processing)

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention provides a method for synthesizing an X-ray film and a mark from a CT image, which comprises the following steps: (1) acquiring a CT image; (2) extracting a marked image of the tissue of interest from the CT image: labeling the voxels belonging to the tissue of interest as one specific value and the remaining voxels as another value to obtain a marked image; (3) performing forward projection of the CT image at a specific angle to obtain an X-ray image of the CT image; (4) performing forward projection of the marked image at the same angle to obtain an X-ray image of the marked image; (5) synthesizing the X-ray image of the CT image and the X-ray image of the marked image to obtain a synthesized X-ray image carrying the desired tissue mark. The invention synthesizes an X-ray image from a CT image and transfers the mark obtained on the CT image onto the synthesized X-ray image; compared with marking directly on an X-ray film, this shortens the marking time and improves accuracy.

Description

Method for synthesizing X-ray film and marking through CT image
Technical Field
The invention relates to the technical field of CT image processing, and in particular to a method for synthesizing an X-ray film and a mark from a CT image. The method is used to construct training data sets for deep-learning models that segment lesion tissue and belongs to the training-data preprocessing stage of the deep-learning training workflow.
Background
X-ray radiography is one of the most commonly used diagnostic imaging techniques. X-ray machines are inexpensive and compact, and are standard imaging equipment in hospitals at all levels. Because different organs and tissues absorb X-rays differently, projection images of the different tissues can be obtained. X-ray films offer fast imaging, low cost, a small radiation dose, and the ability to display specific pathological structures, so they are usually the first choice for clinical diagnosis and physical-examination screening. However, because an X-ray film is a superimposed projection image, other tissue structures inevitably influence and interfere with it; for example, lesions located beside the spine or behind the heart cannot be clearly displayed on a plain frontal chest film, so lesions in these regions can be missed. In large-scale physical examinations or disease screening in particular, relying on manual reading of such a huge amount of data is very inefficient.
Therefore, the development of computer-aided detection techniques for X-ray films has become an active research field in recent years (for example, aided diagnosis of tuberculosis and pneumonia). Artificial-intelligence methods based on deep learning can segment lesion regions quickly, accurately, and reproducibly, and are the main technique in aided detection. Training an AI-based fast lesion-region segmentation model generally comprises the following steps: 1. data preparation and preprocessing; 2. network model design and loss-function design; 3. network training; 4. network model validation. The most important part is generating large amounts of accurately labeled data for the neural network to learn from. Deep learning on small data sets tends to overfit, so large amounts of data are required, yet labeling images is a time-consuming process. This is especially true for X-ray films: because of their imaging principle, the three-dimensional human anatomy is compressed onto a single film, and much contrast information and three-dimensional spatial information is lost. Accurately marking tissue structures and lesion regions therefore becomes more challenging; owing to the overlapping nature of the image, it is not only time consuming but also error prone. Taking the chest X-ray as an example, the thorax contains many important organs, so marking chest films is time consuming and labor intensive. Current labeling approaches include manual labeling and labeling tens of thousands of images with automatic segmentation. Because of the low contrast, organ overlap, and blurred boundaries of chest radiographs, manual marking directly on the chest X-ray is slow and inaccurate. Segmentation methods based on thresholding, feature-space clustering, and region growing, methods based on edge detection and edge tracking, and the many combinations and improved variants of these algorithms now number in the thousands; however, none of them generalizes well, and their labeling results are poor. A method that provides accurate labels is therefore needed in the art to improve the accuracy of aided detection.
Disclosure of Invention
Purpose of the invention: to solve the technical problem that marking X-ray films as training data in existing deep-learning-based lesion-tissue segmentation is difficult and inaccurate, the invention provides a method for synthesizing an X-ray film from a CT image and adding an artificial mark. The method uses CT, an imaging modality that also images the human body with X-rays, so CT images and X-ray films have much in common. Because CT acquires data in a scanning mode, however, CT images have higher contrast and more accurate pixel values than X-ray films. The invention uses CT image data to generate an X-ray film data set, providing a more accurate training data set for deep-learning-based fast lesion-segmentation models that take X-ray films as input.
Technical scheme: the invention provides the following solution:
a method for synthesizing X-ray film and mark by CT image includes the following steps:
(1) acquiring a CT image;
(2) segmenting outline images of different tissues on the CT image;
(3) extracting a mark image of the required tissue from the contour image: marking the voxel points belonging to the required tissue in the contour image as a specific value, and marking the rest voxel points as another value to obtain a marked image;
(4) carrying out forward projection processing of a specific angle on the contour image obtained in the step (2) to obtain an X-ray image of the contour image;
(5) carrying out forward projection processing of a specific angle on the mark image added in the step (2) to obtain an X-ray image of the mark image;
(6) and synthesizing the X-ray image of the contour image and the X-ray image of the mark image to obtain a synthesized X-ray image with the required tissue mark.
Further, the forward projection modes include cone-beam forward projection, fan-beam forward projection, and parallel-beam forward projection.
Further, step (4) also includes performing enhancement processing on the obtained X-ray image of the contour image.
Specifically, the enhancement processing is an adaptive image equalization method.
Further, before step (6) is performed, the method further comprises: post-processing the X-ray image of the marked image obtained in step (5) to eliminate burrs in the image, the post-processing comprising erosion and dilation operations.
Beneficial effects: compared with the prior art, the invention has the following advantages:
The invention first synthesizes an X-ray image from a CT image and then synthesizes the mark on that X-ray image from the mark obtained on the CT image. Compared with marking directly on an X-ray image, having a doctor mark on the CT image significantly shortens the marking time and improves accuracy, and automatic segmentation-based marking on CT images is likewise clearly superior to marking on X-ray images.
With this method, a training data set with accurate marks can be provided for deep-learning-based lesion-tissue segmentation models, so that the trained models achieve higher accuracy.
Drawings
FIG. 1 is a flow chart of an embodiment of the present invention;
FIG. 2 is the chest CT image, showing, from left to right, the transverse, coronal, and sagittal views;
FIG. 3 is the lung-tissue marked image of the chest, showing, from left to right, the transverse, coronal, and sagittal views;
FIG. 4 is the X-ray image synthesized from the chest CT image;
FIG. 5 is the enhanced chest X-ray image;
FIG. 6 is the X-ray image of the lung tissue;
FIG. 7 is the X-ray image of the lung tissue after post-processing;
FIG. 8 is the synthesized chest X-ray image with the lung-tissue mark.
Detailed Description
The invention will be further described with reference to the following drawings and specific embodiments.
FIG. 1 is a flow chart of an embodiment of the present invention, comprising the steps of:
(1) Data collection and organization: collect DICOM image data, or CT image data in other formats. In this embodiment a chest CT DICOM series is used, as shown in fig. 2; from left to right, fig. 2 shows the transverse, coronal, and sagittal views. The volume contains 800 slices in total, and each slice has 1024 rows and 1024 columns (i.e., the volume size is 1024 × 1024 × 800). The pixel size is 0.3125 mm × 0.3125 mm; the smaller the pixel size, the higher the resolution and the better the quality of the synthesized X-ray image and marked image.
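The patent does not prescribe any particular toolchain for this step; purely as an illustration, a minimal Python sketch of reading the DICOM series into a 3-D volume could look like the following, where SimpleITK, the function name, and the directory path are assumptions rather than anything stated in the patent.

```python
# Minimal sketch of step (1): load a chest CT DICOM series into a 3-D volume.
# SimpleITK and the directory path are assumptions for illustration; the patent
# does not specify any particular library or file layout.
import SimpleITK as sitk
import numpy as np

def load_ct_volume(dicom_dir):
    """Read a DICOM series and return the volume as a numpy array plus voxel spacing."""
    reader = sitk.ImageSeriesReader()
    series_files = reader.GetGDCMSeriesFileNames(dicom_dir)
    reader.SetFileNames(series_files)
    image = reader.Execute()
    volume = sitk.GetArrayFromImage(image)   # shape: (slices, rows, cols)
    spacing = image.GetSpacing()             # (x, y, z) voxel size in mm
    return volume, spacing

# Hypothetical usage:
# ct_volume, ct_spacing = load_ct_volume("chest_ct_dicom/")
# print(ct_volume.shape)   # e.g. (800, 1024, 1024)
```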
(2) Perform forward projection of the CT image at a specific angle, such as 0° or 90°, to obtain an X-ray image of the CT image. The forward projection can be computed with cone-beam, fan-beam, or parallel-beam geometric parameters; this embodiment adopts the simplest parallel-beam geometry and forward-projects at 0° to obtain the synthesized X-ray image shown in fig. 4.
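The text leaves the projector implementation open. As a rough sketch only, a 0° parallel-beam projection can be approximated by summing the attenuation volume along the beam direction, with an optional in-plane rotation for other angles; the axis convention, rotation helper, and function name below are illustrative assumptions.

```python
# Sketch of a parallel-beam forward projection as used in this embodiment.
# At 0 degrees the projection reduces to summing attenuation values along one
# axis of the volume; for another angle the volume is rotated in the axial
# plane first. Axis conventions here are assumptions, not patent text.
import numpy as np
from scipy.ndimage import rotate

def parallel_forward_projection(volume, angle_deg=0.0, axis=1):
    """Parallel-beam forward projection of a 3-D volume at a given angle."""
    vol = volume.astype(np.float32)
    if angle_deg != 0.0:
        # Rotate in the (row, col) plane, then integrate along the beam axis.
        vol = rotate(vol, angle_deg, axes=(1, 2), reshape=False, order=1)
    return vol.sum(axis=axis)   # line integrals -> 2-D synthesized radiograph

# The same routine can be applied unchanged to the binary marked image in step (5):
# drr = parallel_forward_projection(ct_volume, angle_deg=0.0)
# mask_projection = parallel_forward_projection(lung_mask, angle_deg=0.0)
```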
(3) Enhance the X-ray image of the CT to increase its contrast. Many image-enhancement methods exist; in this embodiment an adaptive image equalization method is used, and the enhanced image is shown in fig. 5.
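The embodiment names adaptive image equalization without fixing an implementation. The sketch below uses CLAHE from scikit-image as one plausible stand-in; the normalization step and clip limit are hypothetical choices.

```python
# Sketch of the contrast-enhancement step. "Adaptive image equalization" is
# implemented here with CLAHE from scikit-image as one possible choice; the
# patent does not name a specific algorithm or library.
import numpy as np
from skimage import exposure

def enhance_xray(projection):
    """Normalize a synthesized radiograph to [0, 1] and apply adaptive equalization."""
    img = projection.astype(np.float32)
    img = (img - img.min()) / (img.max() - img.min() + 1e-8)
    return exposure.equalize_adapthist(img, clip_limit=0.02)

# enhanced = enhance_xray(drr)
```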
(4) Extract the tissue of interest. On a DR (digital radiography) plain film, certain tissue regions usually need to be visible, such as the lung contour, the pericardial contour, rib structures, and lung nodules; the doctor must see these tissues in the image before giving a diagnosis. One or several tissues of interest may be extracted. The tissue of interest is extracted by image labeling: the voxels belonging to the tissue of interest in the CT image obtained in step (1) are labeled with one specific value, and all remaining voxels are labeled with another value, yielding a marked image. In this embodiment lung tissue is selected as the tissue of interest: the lungs are extracted from the chest CT DICOM image shown in fig. 2, voxels belonging to lung tissue are labeled 1, and voxels not belonging to lung tissue are labeled 0, giving the lung-tissue marked image shown in fig. 3; from left to right, fig. 3 shows the transverse, coronal, and sagittal views of the marked lung tissue.
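How the lung voxels are identified is left open here (a physician may contour them, or any automatic segmentation may be used). Purely for illustration, the sketch below builds the 0/1 marked image with a Hounsfield-unit threshold followed by a connected-component cleanup; the threshold values and cleanup heuristic are assumptions, not part of the patent.

```python
# Sketch of step (4): build the binary marked image for lung tissue.
# The HU thresholds and the "keep the two largest components" heuristic are
# illustrative assumptions only.
import numpy as np
from scipy import ndimage

def lung_marked_image(ct_volume_hu, lower=-950, upper=-350):
    """Return a volume with 1 for voxels treated as lung tissue and 0 elsewhere."""
    candidate = (ct_volume_hu > lower) & (ct_volume_hu < upper)
    labels, num = ndimage.label(candidate)
    if num == 0:
        return np.zeros_like(ct_volume_hu, dtype=np.uint8)
    sizes = ndimage.sum(candidate, labels, range(1, num + 1))
    keep = np.argsort(sizes)[-2:] + 1          # ideally the two lungs
    mask = np.isin(labels, keep)
    return mask.astype(np.uint8)

# lung_mask = lung_marked_image(ct_volume)     # voxels: lung = 1, everything else = 0
```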
(5) The marked image obtained in step (4) is forward-projected in the same way as the CT image to obtain an X-ray image of the marked image, i.e., an X-ray image of the lung tissue, as shown in fig. 6.
(6) Post-processing: post-processing operations such as erosion and dilation are applied to the X-ray image of the lung tissue to remove burr points in the image; the processed image is shown in fig. 7.
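One possible reading of the erosion-and-dilation post-processing is a morphological opening of the projected mark, as sketched below with scipy.ndimage; the binarization threshold and the number of iterations are illustrative assumptions.

```python
# Sketch of the post-processing in step (6): erosion followed by dilation
# (a morphological opening) to remove burr points from the projected mask.
import numpy as np
from scipy import ndimage

def clean_mask_projection(mask_projection, iterations=2):
    """Binarize the projected marked image and remove small spurs."""
    binary = mask_projection > 0.5 * mask_projection.max()
    binary = ndimage.binary_erosion(binary, iterations=iterations)
    binary = ndimage.binary_dilation(binary, iterations=iterations)
    return binary.astype(np.uint8)

# lung_xray_mask = clean_mask_projection(mask_projection)
```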
(7) The X-ray image of the CT image is combined with the X-ray image of the marked image to obtain a synthesized X-ray image with the desired tissue mark, as shown in fig. 8.
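The text does not fix how the two projections are combined into the final marked image. One simple possibility, sketched below under that assumption, is to blend the projected mark onto the enhanced radiograph as a colored overlay; for network training, the radiograph and the projected mark can equally be stored as an image/label pair.

```python
# Sketch of step (7): combine the enhanced radiograph with the projected mark.
# The semi-transparent red overlay is one illustrative way to visualize the
# synthesized X-ray image together with its tissue mark.
import numpy as np

def synthesize_marked_xray(enhanced_xray, lung_xray_mask, alpha=0.3):
    """Return an RGB image with the lung mark blended onto the synthesized X-ray."""
    gray = np.clip(enhanced_xray, 0.0, 1.0)
    rgb = np.stack([gray, gray, gray], axis=-1)
    overlay = lung_xray_mask.astype(bool)
    rgb[overlay, 0] = (1 - alpha) * rgb[overlay, 0] + alpha * 1.0   # tint marked pixels red
    return rgb

# marked_xray = synthesize_marked_xray(enhanced, lung_xray_mask)
```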
When training a deep-learning-based lesion-tissue segmentation model, the neural network is trained with the synthesized X-ray images carrying the desired tissue marks as training data.
The above description covers only preferred embodiments of the present invention. It should be noted that those skilled in the art can make various modifications and adaptations without departing from the principles of the invention, and such modifications and adaptations are also intended to fall within the scope of the invention.

Claims (5)

1. A method for synthesizing an X-ray film and a mark from a CT image, comprising the steps of:
(1) acquiring a CT image;
(2) extracting a marked image of the tissue of interest from the CT image: labeling the voxels belonging to the tissue of interest as one specific value and the remaining voxels as another value to obtain a marked image;
(3) performing forward projection of the CT image obtained in step (1) at a specific angle to obtain an X-ray image of the CT image;
(4) performing forward projection of the marked image obtained in step (2) at the same specific angle to obtain an X-ray image of the marked image;
(5) synthesizing the X-ray image of the CT image and the X-ray image of the marked image to obtain a synthesized X-ray image with the desired tissue mark.
2. The method of claim 1, wherein the forward projection modes include cone beam forward projection, fan beam forward projection and parallel beam forward projection.
3. The method of claim 1, wherein step (3) further comprises enhancing the obtained X-ray image of the CT image.
4. The method of claim 3, wherein the enhancement processing is adaptive image equalization.
5. The method for synthesizing an X-ray film and a mark from a CT image according to claim 1, further comprising, before step (5) is performed: post-processing the X-ray image of the marked image obtained in step (4) to eliminate burrs in the image, the post-processing comprising erosion and dilation operations.
CN201911003131.4A 2019-10-21 2019-10-21 Method for synthesizing X-ray film and marking through CT image Pending CN110782422A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911003131.4A CN110782422A (en) 2019-10-21 2019-10-21 Method for synthesizing X-ray film and marking through CT image

Publications (1)

Publication Number Publication Date
CN110782422A (en) 2020-02-11

Family

ID=69386233

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911003131.4A Pending CN110782422A (en) 2019-10-21 2019-10-21 Method for synthesizing X-ray film and marking through CT image

Country Status (1)

Country Link
CN (1) CN110782422A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103315764A (en) * 2013-07-17 2013-09-25 沈阳东软医疗系统有限公司 Method for acquiring CT locating images and CT device
CN107545551A (en) * 2017-09-07 2018-01-05 广州华端科技有限公司 The method for reconstructing and system of digital galactophore body layer composograph
CN108711177A (en) * 2018-05-15 2018-10-26 南方医科大学口腔医院 The fast automatic extracting method of volume data arch wire after a kind of oral cavity CBCT is rebuild

Similar Documents

Publication Publication Date Title
Chen et al. Automatic segmentation of individual tooth in dental CBCT images from tooth surface map by a multi-task FCN
CN108898595B (en) Construction method and application of positioning model of focus region in chest image
CN105105775B (en) Cardiac motion resolver
CN103294883A (en) Method and system for intervention planning for transcatheter aortic valve implantation
KR20150045885A (en) Systems and methods for registration of ultrasound and ct images
CN109801276B (en) Method and device for calculating heart-chest ratio
JP2017064370A (en) Image processing device, and method and program for controlling image processing device
CN112862833A (en) Blood vessel segmentation method, electronic device and storage medium
CN104616289A (en) Removal method and system for bone tissue in 3D CT (Three Dimensional Computed Tomography) image
CN109934829B (en) Liver segmentation method based on three-dimensional graph segmentation algorithm
CN111340825A (en) Method and system for generating mediastinal lymph node segmentation model
EP2689344B1 (en) Knowledge-based automatic image segmentation
Jimenez-Carretero et al. Optimal multiresolution 3D level-set method for liver segmentation incorporating local curvature constraints
CN111311626A (en) Skull fracture automatic detection method based on CT image and electronic medium
CN110866905A (en) Rib identification and marking method
Tseng et al. An adaptive thresholding method for automatic lung segmentation in CT images
CN111080676B (en) Method for tracking endoscope image sequence feature points through online classification
CN110570430B (en) Orbital bone tissue segmentation method based on volume registration
Karthikeyan et al. Lungs segmentation using multi-level thresholding in CT images
CN116797612B (en) Ultrasonic image segmentation method and device based on weak supervision depth activity contour model
CN104915989A (en) CT image-based blood vessel three-dimensional segmentation method
CN116309647B (en) Method for constructing craniocerebral lesion image segmentation model, image segmentation method and device
WO2023092124A1 (en) Machine-learning based segmentation of biological objects in medical images
CN116168097A (en) Method, device, equipment and medium for constructing CBCT sketching model and sketching CBCT image
CN110782422A (en) Method for synthesizing X-ray film and marking through CT image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication
Application publication date: 20200211