CN116597951A - CT film intelligent typesetting method and system based on multi-part combined scanning - Google Patents

CT film intelligent typesetting method and system based on multi-part combined scanning

Info

Publication number
CN116597951A
CN116597951A (application CN202310460258.9A)
Authority
CN
China
Prior art keywords
scanning
image
mask
printing
typesetting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310460258.9A
Other languages
Chinese (zh)
Inventor
于德新
樊昭磊
尚永生
李传朋
齐亚飞
韩晓宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qilu Hospital of Shandong University
Original Assignee
Qilu Hospital of Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qilu Hospital of Shandong University filed Critical Qilu Hospital of Shandong University
Priority to CN202310460258.9A
Publication of CN116597951A
Legal status: Pending

Links

Abstract

The invention discloses a CT film intelligent typesetting method and system based on multi-part joint scanning, relating to the technical field of image processing. CT data are acquired and preprocessed; the preprocessed CT data are identified to determine the scanning parts; the scanning parts are processed in order from the top of the human body downward to obtain the printing range of each part; a body region is extracted from each printing range and the body boundary is determined; and, taking the body boundary as the film printing range, the number and order of the typeset images are determined from the printing range, completing automatic typesetting. The invention realizes automatic multi-image typesetting for joint craniocerebral, chest, abdominal and pelvic CT scanning, simplifying the workflow and improving the working efficiency of medical staff.

Description

CT film intelligent typesetting method and system based on multi-part combined scanning
Technical Field
The invention relates to the technical field of image processing, in particular to an intelligent typesetting method and system for CT films based on multi-part joint scanning.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
With rising health awareness and living standards, the number of people visiting hospitals for regular examinations has grown steadily. The performance of medical CT equipment keeps improving: imaging is clearer, working efficiency is much higher, and the scan time for a single patient has been greatly shortened. After the CT equipment images, however, typesetting remains a time-consuming and labor-intensive operation, and typesetting for a patient scanned at several parts takes even longer; this wastes labor, lengthens the patient's wait for films, and degrades the patient's medical experience.
To solve the problems of manual film typesetting, the prior art applies artificial intelligence to typeset CT images automatically. Traditional automatic typesetting methods, however, address only single-part scan results and offer no standard typesetting mode for multi-part joint scanning. How to typeset CT films from multi-part joint scanning quickly and reasonably has therefore become an urgent technical problem.
Disclosure of Invention
In view of the shortcomings of the prior art, the invention aims to provide a CT film intelligent typesetting method and system based on multi-part joint scanning, which use artificial intelligence to automatically identify regions such as the cranium, chest, abdomen and pelvis, properly define each part's starting point, scaling, typesetting size, typesetting count, four-corner information and the like, and realize automatic typesetting of multi-part CT images of a single patient.
In order to achieve the above object, the present invention is realized by the following technical scheme:
the invention provides an intelligent typesetting method for CT films based on multi-part combined scanning, which comprises the following steps:
CT data are acquired, and preprocessing is carried out on the CT data;
identifying the preprocessed CT data and determining a scanning part; the scanning part comprises one or more of cranium, chest, abdomen and pelvis;
sequentially processing the scanning parts according to the sequence from top to bottom of the human body to obtain the printing range of each scanning part;
extracting a body area according to the printing range of each scanning part, and determining a body boundary;
and taking the body boundary as a film printing range, and determining the number and the sequence of typeset images according to the printing range, thereby completing automatic typesetting.
Further, the specific steps of preprocessing the CT data are as follows: splitting the received CT data into sequences, and extracting the tag information and image data in the sequences.
Further, the body region extraction method comprises craniocerebral body region extraction and general body region extraction, wherein if the printing range is craniocerebral, craniocerebral body region extraction is adopted; otherwise, general body region extraction is adopted.
Further, the specific steps of processing the cranium are as follows:
screening the image whose HU values fall within [0, 100];
generating a binarized image from the screened image, and searching for the maximum connected domain on each sagittal slice, layer by layer along the coronal axis, to obtain a sagittal-plane maximum connected domain image;
using the sagittal-plane maximum connected domain image, searching for the maximum connected domain on each cross section, layer by layer along the vertical axis, to obtain a sagittal-plane/cross-section maximum connected domain image;
searching for the three-dimensional maximum connected domain in the sagittal-plane/cross-section maximum connected domain image, the resulting maximum connected domain being the intracranial region;
recording the intracranial area of each cross section and the slice where the intracranial area is largest;
calculating the ratio of each slice's intracranial area to the largest slice's area to define the craniocerebral printing area;
calculating the boundary of the printing area to determine the craniocerebral printing range.
Further, the specific steps of processing the chest are as follows:
screening the image whose HU values fall within [-1200, 600] and normalizing it;
segmenting the normalized image with a lung segmentation model to obtain a masked image; the mask is divided into a non-lung region, the left lung and the right lung;
merging the left-lung and right-lung masks according to the predicted maximum connected areas to form a lung-region mask;
determining the apex and base positions of the lungs from the lung-region mask boundary;
determining the chest printing range from the apex and base positions.
Further, the specific steps of processing the abdomen are as follows:
screening the images whose HU values fall within [-15, 85], [-215, 285] and [-465, 535] and normalizing them;
segmenting the normalized image with a multi-organ segmentation model to obtain a masked image; the mask is divided into five classes: background, liver, ilium, bladder and femur;
analyzing the masked image layer by layer in the Z direction, and extracting the maximum connected domains of the liver, ilium, bladder and femur respectively as organ regions;
taking the minimum boundary value of the liver Z-direction mask in the organ region as the abdominal printing start position and the minimum boundary value of the ilium Z-direction mask as the abdominal printing end position, determining the abdominal printing range.
Further, the sum of the bladder and femur pixels in the image is calculated, and if the total number of pixels is greater than a set threshold, the current image is considered to contain the pelvic region.
Further, the specific steps of processing the pelvis are as follows:
taking the minimum boundary value of the ilium Z-direction mask as the pelvic printing start position and the maximum boundary value of the ilium Z-direction mask as the pelvic printing end position, determining the pelvic printing range.
The second aspect of the invention provides an intelligent typesetting system for CT films based on multi-part joint scanning, which comprises the following components:
the preprocessing module is configured to acquire CT data and preprocess the CT data;
the scanning position identification module is configured to identify the preprocessed CT data and determine a scanning position; the scanning part comprises one or more of cranium, chest, abdomen and pelvis;
the printing range determining module is configured to sequentially process the scanning parts according to the sequence from top to bottom of the human body to obtain the printing range of each scanning part;
a body region module configured to extract a body region from a printing range of each of the scan sites, and determine a body boundary;
and the automatic typesetting module is configured to take the body boundary as a film printing range, and determine the number and the sequence of typeset images according to the printing range so as to complete automatic typesetting.
Further, the preprocessing module comprises a server side for receiving CT image data; the automatic typesetting module comprises a printer, and after automatic typesetting the result is previewed on the front-end interface, the printer printing according to the typesetting result.
One or more of the above technical solutions have the following beneficial effects:
the invention discloses an intelligent typesetting method of CT films based on multi-position combined scanning, which is characterized in that the printing ranges of all parts are independently divided by processing the human body part by part according to the sequence from top to bottom, so that the automatic typesetting of multi-image of multi-position combined scanning of craniocerebral CT, thoracic CT, abdominal CT and pelvic CT is realized, the robustness and generalization are higher, and workers do not need to manually adjust typesetting size, image scaling, image cutting and other operations, thereby providing a new idea for the intelligent medical field.
The invention discloses an intelligent typesetting system for CT films based on multi-part joint scanning, which can automatically typeset images, typeset and page the parts automatically after the image loading is completed, and workers only need to audit and print typeset contents, so that the working flow is simplified, and the working efficiency of medical workers is improved.
Additional aspects of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention.
FIG. 1 is a flow chart of the CT film intelligent typesetting method based on multi-part joint scanning in the first embodiment of the invention;
FIG. 2 is a diagram of the automatic craniocerebral typesetting result in the first embodiment of the invention;
FIG. 3 is a diagram of the automatic chest (lung window) typesetting result in the first embodiment of the invention;
FIG. 4 is a diagram of the automatic chest (mediastinum) typesetting result in the first embodiment of the invention;
FIG. 5 is a diagram of the automatic abdominal typesetting result in the first embodiment of the invention;
FIG. 6 is a diagram of the automatic pelvic typesetting result in the first embodiment of the invention.
Detailed Description
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the invention. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It should be noted that the embodiments of the present invention involve data such as CT images. When the above embodiments are applied to specific products or technologies, user permission or consent must be obtained, and the collection, use and processing of the data must comply with the laws, regulations and standards of the relevant countries and regions.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit exemplary embodiments according to the present invention. As used herein, the singular forms are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should further be understood that the terms "comprises" and/or "comprising", when used in this specification, specify the presence of stated features, steps, operations, devices, components and/or combinations thereof.
Term interpretation:
X, Y, Z directions: the X direction (Left-Right) runs from left to right, the Y direction (Anterior-Posterior) from front to back, and the Z direction (Head-Feet) from the top of the body downward.
Embodiment one:
the embodiment of the invention provides an intelligent typesetting method for CT films based on multi-part combined scanning, which comprises the following steps:
Step 1, CT data are acquired and preprocessed.
In one embodiment, the received CT data are split into sequences, and the tag information and image data in each sequence are extracted. The image data are formed by stacking the cross-sectional images layer by layer, from the top of the human body downward, in Digital Imaging and Communications in Medicine (DICOM) order.
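By way of illustration, this preprocessing step can be sketched with pydicom as below: files are grouped by SeriesInstanceUID, each series is sorted head-to-feet, and tag information plus an HU volume are extracted. The helper names, the tag selection and the sort direction are illustrative assumptions, not the patent's actual implementation.

```python
# Minimal preprocessing sketch: split received DICOM files into series,
# sort each series head-to-feet, extract tags and a Hounsfield-unit volume.
import pydicom
import numpy as np
from collections import defaultdict

def split_into_series(dicom_paths):
    """Group DICOM files by SeriesInstanceUID."""
    series = defaultdict(list)
    for path in dicom_paths:
        ds = pydicom.dcmread(path)
        series[ds.SeriesInstanceUID].append(ds)
    return series

def extract_series(datasets):
    """Sort one series top-to-bottom and return (tags, HU volume)."""
    # In DICOM patient coordinates +Z points toward the head, so sorting
    # by descending Z puts the topmost slice first (head-to-feet order).
    datasets.sort(key=lambda ds: float(ds.ImagePositionPatient[2]), reverse=True)
    tags = {
        "BodyPartExamined": getattr(datasets[0], "BodyPartExamined", ""),
        "PatientName": str(getattr(datasets[0], "PatientName", "")),
    }
    slope = float(getattr(datasets[0], "RescaleSlope", 1.0))
    intercept = float(getattr(datasets[0], "RescaleIntercept", 0.0))
    volume = np.stack([ds.pixel_array * slope + intercept for ds in datasets])
    return tags, volume  # volume in HU, shape (Z, Y, X)
```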
Step 2, recognizing the preprocessed CT data, and determining a scanning part; the scanning part comprises one or more of cranium, chest, abdomen and pelvis.
In a specific embodiment, the scanning part is identified from the DICOM BodyPartExamined field: after the preprocessed CT data are received, a python program reads this tag from the attribute information contained in the DICOM images. HEAD denotes the cranium, CHEST the chest, ABDOMEN the abdomen and PELVIS the pelvis. As shown in fig. 1, when a single-part scan is identified, that single part is taken as the scanning part. If multi-part joint scanning occurs during the examination (so that multiple parts can be examined in the same window, generally only one scan is performed), the upper part is taken as the current scanning part according to the top-to-bottom order of the human body; for a joint chest-abdomen scan, for example, CHEST is selected first. Each time the current scanning part has been processed, the subsequent parts are judged in top-to-bottom order, and each determined part is processed and its printing range extracted in turn. The final physical range is obtained from the printing ranges of all parts, and typesetting and printing are performed automatically.
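A minimal sketch of this site-identification rule follows, assuming the BodyPartExamined string simply names the scanned parts; the top-to-bottom priority mirrors the "process the upper part first" rule described above, while the mapping itself is illustrative.

```python
# Hedged sketch of scan-part identification from the DICOM BodyPartExamined
# tag (0018,0015), processing parts in cranial-to-caudal priority order.
SITE_ORDER = ["HEAD", "CHEST", "ABDOMEN", "PELVIS"]  # top of body downward

def current_scan_site(body_part_examined: str, already_done: set) -> str | None:
    """Return the topmost not-yet-processed site named in the tag."""
    tag = body_part_examined.upper()
    for site in SITE_ORDER:
        if site in tag and site not in already_done:
            return site
    return None

# e.g. a joint scan tagged "CHEST ABDOMEN" is first processed as CHEST;
# after the chest print range is extracted, ABDOMEN follows.
```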
Step 3, the scanning parts are processed sequentially, in order from the top of the human body downward, to obtain the printing range of each scanning part.
(1) The specific steps of processing the cranium are as follows:
screening the image whose HU values fall within [0, 100] as HEAD;
setting pixels inside the range to 1 and pixels outside it to 0 to generate a binarized image;
performing connected-region analysis on each sagittal slice with the Two-Pass algorithm (two-pass scanning method) along the coronal axis, searching layer by layer for the maximum connected domain to obtain a sagittal-plane maximum connected domain image;
using the sagittal-plane maximum connected domain image, searching for the maximum connected domain on each cross section, layer by layer along the vertical axis, to obtain a sagittal-plane/cross-section maximum connected domain image;
searching for the three-dimensional maximum connected domain in the sagittal-plane/cross-section maximum connected domain image, the resulting maximum connected domain being the intracranial region;
recording the intracranial area of each cross section and the slice where the intracranial area is largest;
calculating the ratio of each slice's intracranial area to the largest slice's area to define the craniocerebral printing area;
calculating the boundary of the printing area to determine the craniocerebral printing range. In this embodiment, the printing area comprises the slices whose area ratio exceeds the 0.1 threshold.
The boundary of the printing area is calculated to determine the craniocerebral printing range, and the craniocerebral body region is extracted accordingly. The printing start position Zmin and end position Zmax are the topmost and bottommost slices of the printing range; in this embodiment the topmost slice is the cranial vertex and the bottommost is the skull base. To compute the printing area in the Y and X directions, the preprocessed image data are first clipped to the HU range [-100, 100] and normalized to 0-1, and the three-dimensional maximum connected domain is searched to obtain the craniocerebral body region. Within the Z-direction printing range, the minimum and maximum boundary values in the Y and X directions are found and taken as the upper, lower, left and right boundaries of the craniocerebral region, giving the printing range of the cranial part. The ratio of the number of images from Zmax to the bottom of the series to the number of images between Zmin and Zmax is then calculated; if the ratio exceeds 2, chest processing follows, otherwise the craniocerebral body region is extracted directly from the craniocerebral printing range.
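The Z-direction part of the cranial pipeline can be sketched with numpy and scipy as below. The HU window [0, 100] and the 0.1 area-ratio threshold come from this embodiment; the slice-iteration axes reflect my reading of the sagittal/transverse description, and largest_cc is an illustrative helper, not the patent's code.

```python
# Sketch of intracranial extraction for a (Z, Y, X) HU volume:
# layer-wise largest connected components, then a 3-D largest component,
# then an area-ratio rule to pick the printed slice range.
import numpy as np
from scipy import ndimage

def largest_cc(binary):
    """Keep only the largest connected component of a binary array."""
    labels, n = ndimage.label(binary)
    if n == 0:
        return binary
    sizes = np.bincount(labels.ravel())
    sizes[0] = 0  # ignore background
    return labels == sizes.argmax()

def cranial_print_range(hu, area_ratio_thresh=0.1):
    mask = (hu >= 0) & (hu <= 100)
    # largest component per sagittal slice (iterate along the L-R axis)...
    for x in range(mask.shape[2]):
        mask[:, :, x] = largest_cc(mask[:, :, x])
    # ...then per cross section (iterate along the vertical axis)...
    for z in range(mask.shape[0]):
        mask[z] = largest_cc(mask[z])
    # ...then the 3-D largest component is taken as the intracranial region.
    intracranial = largest_cc(mask)
    areas = intracranial.sum(axis=(1, 2))           # per-slice area
    keep = areas > area_ratio_thresh * areas.max()  # slices worth printing
    zs = np.flatnonzero(keep)
    return zs.min(), zs.max()  # Zmin (vertex side), Zmax (skull-base side)
```

The Y/X boundaries would then be read from the bounding box of the body region within [Zmin, Zmax], as in the general body-region extraction described later.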
(2) The specific steps of processing the chest are as follows:
screening the image whose HU values fall within [-1200, 600] as CHEST and normalizing it;
segmenting the normalized image with a lung segmentation model to obtain a masked image; the mask is divided into a non-lung region, the left lung and the right lung;
calculating the maximum connected regions of the left lung and the right lung separately with the Two-Pass algorithm, so that non-lung regions introduced by inaccurate segmentation do not interfere; the two calculated maximum connected regions are the left and right lungs respectively;
merging the left-lung and right-lung masks according to the predicted maximum connected areas to form a lung-region mask;
determining the apex and base positions of the lungs from the lung-region mask boundary; the chest printing range is determined from the apex and base positions.
In a specific embodiment, the minimum Z-direction boundary value Zmin of the mask is taken as the apex position and the maximum Z-direction boundary value Zmax as the base position, the apex being the printing start position and the base the printing end position.
In one embodiment, the lung segmentation model is a 10-layer residual convolutional neural network with channel counts [4, 8, 16, 32, 64, 32, 16, 8, 4, 3]; each layer contains one residual block, and the network finally outputs a three-class mask: non-lung region, left lung and right lung. One mask is produced for each normalized input image.
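An illustrative PyTorch sketch of such a network is given below. Only the channel counts, the one-residual-block-per-layer structure and the three-class output are stated in the text; kernel sizes, normalization and the resolution-preserving layout are assumptions.

```python
# Sketch of a 10-layer residual segmentation network with channels
# [4, 8, 16, 32, 64, 32, 16, 8, 4, 3]; the last layer yields 3 class maps.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, cin, cout):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(cin, cout, 3, padding=1),
            nn.BatchNorm2d(cout),
            nn.ReLU(inplace=True),
            nn.Conv2d(cout, cout, 3, padding=1),
            nn.BatchNorm2d(cout),
        )
        # 1x1 projection so the skip path matches the output channel count
        self.skip = nn.Conv2d(cin, cout, 1) if cin != cout else nn.Identity()
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.conv(x) + self.skip(x))

class LungSegNet(nn.Module):
    """10 residual layers; outputs non-lung / left lung / right lung logits."""
    def __init__(self, channels=(4, 8, 16, 32, 64, 32, 16, 8, 4, 3)):
        super().__init__()
        layers, cin = [], 1  # single-channel normalized CT slice
        for cout in channels:
            layers.append(ResidualBlock(cin, cout))
            cin = cout
        self.body = nn.Sequential(*layers)

    def forward(self, x):    # x: (N, 1, H, W), values in [0, 1]
        return self.body(x)  # logits: (N, 3, H, W)
```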
The ratio of the number of images from Zmax to the bottom of the series to the number of images between Zmin and Zmax is then calculated; if the ratio exceeds 0.3, abdominal processing follows, otherwise the body region is extracted directly.
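A sketch of the chest print-range computation follows, reusing the largest_cc helper from the cranial sketch above; the mask label values are assumptions.

```python
# Chest print range: keep the largest connected component of each predicted
# lung, merge them, and read apex/base off the merged mask's Z extent.
import numpy as np

def chest_print_range(lung_mask):
    """lung_mask: (Z, Y, X) int array, 0=non-lung, 1=left lung, 2=right lung."""
    left = largest_cc(lung_mask == 1)   # discard stray mis-segmented fragments
    right = largest_cc(lung_mask == 2)
    lungs = left | right                # merged lung-region mask
    zs = np.flatnonzero(lungs.any(axis=(1, 2)))
    return zs.min(), zs.max()           # apex (print start), base (print end)

# Per the embodiment, if (slices below Zmax) / (slices between Zmin and Zmax)
# exceeds 0.3, the series continues into the abdomen and abdominal processing
# follows; otherwise the body region is extracted directly.
```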
(3) The specific steps of processing the abdomen are as follows:
screening the images whose HU values fall within [-15, 85], [-215, 285] and [-465, 535] as ABDOMEN and normalizing them;
segmenting the normalized image with a multi-organ segmentation model to obtain a masked image; the mask is divided into five classes: background, liver, ilium, bladder and femur;
analyzing the masked image layer by layer in the Z direction, and extracting the maximum connected domains of the liver, ilium, bladder and femur respectively as organ regions;
taking the minimum boundary value of the liver Z-direction mask in the organ region as the abdominal printing start position and the minimum boundary value of the ilium Z-direction mask as the abdominal printing end position, determining the abdominal printing range.
In one embodiment, the abdominal and pelvic film printing positions depend on several organs, such as the kidneys and ilium, so this example designs a multi-organ segmentation model: a 10-layer residual convolutional neural network with channel counts [32, 64, 128, 256, 512, 256, 128, 64, 32, 5], each layer containing one residual block built from 3x3 convolutional layers. The last layer is the output layer and produces a five-class mask: background, liver, ilium, bladder and femur. The network input has 3 channels: the original image is clipped to the three HU ranges [-15, 85], [-215, 285] and [-465, 535] respectively, and each channel is normalized to 0-1.
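The 3-channel input construction can be sketched as follows; the HU windows come from this embodiment, while the function name is illustrative.

```python
# Build the 3-channel input of the multi-organ model: the same slice is
# windowed by three HU ranges and each channel normalized to [0, 1].
import numpy as np

HU_WINDOWS = [(-15, 85), (-215, 285), (-465, 535)]

def abdomen_input(hu_slice):
    """hu_slice: (H, W) in Hounsfield units -> (3, H, W) float32 in [0, 1]."""
    channels = []
    for lo, hi in HU_WINDOWS:
        ch = np.clip(hu_slice, lo, hi)
        channels.append((ch - lo) / (hi - lo))
    return np.stack(channels).astype(np.float32)
```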
(4) The specific steps of processing the pelvis are as follows:
The pelvic region is detected by summing the bladder and femur pixels in the image; if the total number of pixels exceeds a set threshold, the current image is considered to contain the pelvic region and is identified as PELVIS. In this embodiment, the threshold is set to 10000 pixels.
If a pelvic region is detected, it is processed as follows:
the minimum boundary value of the ilium Z-direction mask in the organ region is taken as the pelvic printing start position, the maximum boundary value of the ilium Z-direction mask as the pelvic printing end position, and the pelvic printing range is thus determined.
Step 4, a body region is extracted from the printing range of each scanning part and the body boundary is determined.
In one embodiment, the body region extraction method comprises craniocerebral body region extraction and general body region extraction: if the printing range is craniocerebral, craniocerebral body region extraction is used; otherwise, general body region extraction is used.
The specific steps of general body region extraction are as follows: the image is clipped to the HU range [-550, 100] and normalized to 0-1; the normalized image is smoothed with an 11-wide mean filter to remove noise; the filtered image is binarized at a threshold of 0.5; and the maximum connected domain of the binarized image is taken as the body region. The slab of the scanning part is selected according to the printing start and end positions of its printing range, and the maximum and minimum boundaries in the Y and X directions are computed as the upper, lower, left and right body boundaries, giving the film printing range of each examined part in the X and Y directions.
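A sketch of this body-region extraction follows, reusing largest_cc from the cranial sketch; the HU range, filter width and binarization threshold are those stated above.

```python
# Body boundary for slices zmin..zmax of a (Z, Y, X) HU volume: clip,
# normalize, per-slice 11x11 mean filter, binarize at 0.5, largest component.
import numpy as np
from scipy import ndimage

def body_bounds(hu, zmin, zmax):
    """Return (ymin, ymax, xmin, xmax) of the body within the print slab."""
    sub = np.clip(hu[zmin:zmax + 1], -550, 100)
    sub = (sub + 550) / 650.0                          # normalize to [0, 1]
    sub = ndimage.uniform_filter(sub, size=(1, 11, 11))  # per-slice mean filter
    body = largest_cc(sub > 0.5)                       # binarize, keep body
    ys = np.flatnonzero(body.any(axis=(0, 2)))
    xs = np.flatnonzero(body.any(axis=(0, 1)))
    return ys.min(), ys.max(), xs.min(), xs.max()
```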
Step 5, taking the body boundary as the film printing range, the number and order of typeset images are determined from the printing range, completing automatic typesetting.
In one embodiment, once the range of the DICOM images is determined, the order and number of sheets are typeset automatically. For example, 56 images are typeset directly into a 7x8 grid; with 45 images, the last image is copied three times to fill the grid, and the images are arranged into 6x8.
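The grid rule implied by this example can be sketched as below; the fixed eight-image column count is inferred from the 7x8 and 6x8 examples rather than stated explicitly.

```python
# Grid layout: with 8 images per row, round the count up to a full grid by
# repeating the last image.
import math

def layout(images, cols=8):
    rows = math.ceil(len(images) / cols)
    padded = list(images) + [images[-1]] * (rows * cols - len(images))
    return rows, cols, padded

# 56 images -> a full 7x8 sheet; 45 images -> the last image is copied
# three times to fill a 6x8 grid.
```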
In a specific embodiment, print typesetting is controlled according to the X-, Y- and Z-direction ranges of all parts obtained by the analysis (single-part scanning or multi-part joint scanning) and submitted to the printer. The printing effect is shown in figs. 2-6: fig. 2 is a craniocerebral print; fig. 3 a chest (lung window) print; fig. 4 a chest (mediastinum) print; fig. 5 an abdominal print; and fig. 6 a pelvic print.
Embodiment two:
the second embodiment of the invention provides an intelligent typesetting system for CT films based on multi-part combined scanning, which comprises the following components:
the preprocessing module is configured to acquire CT data and preprocess the CT data;
the scanning position identification module is configured to identify the preprocessed CT data and determine a scanning position; the scanning part comprises one or more of cranium, chest, abdomen and pelvis;
the printing range determining module is configured to sequentially process the scanning parts according to the sequence from top to bottom of the human body to obtain the printing range of each scanning part;
a body region module configured to extract a body region from a printing range of each of the scan sites, and determine a body boundary;
and the automatic typesetting module is configured to take the body boundary as a film printing range, and determine the number and the sequence of typeset images according to the printing range so as to complete automatic typesetting.
The preprocessing module comprises a server side for receiving CT image data.
after receiving CT image data, the server side judges the checked parts according to the body part extracted field in the sequence, and the checked parts respectively comprise four parts of cranium, chest, abdomen and pelvis. And respectively executing different processing modes according to different parts, so as to analyze each part.
And determining the number of images to be typeset according to the X, Y and Z direction ranges of each part (single part scanning or multiple part combined scanning) obtained by analysis.
Meanwhile, the patient's basic information is displayed on the front-end interface, including examination number, patient name, examination time, sequence description, image count, department, printing state and the like. Clicking the patient information automatically opens the typesetting interface, and each part is typeset automatically once image loading is complete.
The automatic typesetting module comprises a printer; after automatic typesetting, the result is previewed on the front-end interface, and the printer prints according to the typesetting result. After review, the staff clicks the print button to print the film; after successful printing, the "printing" column on the front-end interface is updated from "unprinted" to "printed successfully".
The steps involved in the second embodiment correspond to those of the first embodiment of the method, and the detailed description of the second embodiment can be found in the related description section of the first embodiment.
It will be appreciated by those skilled in the art that the modules or steps of the invention described above may be implemented by general-purpose computer means, alternatively they may be implemented by program code executable by computing means, whereby they may be stored in storage means for execution by computing means, or they may be made into individual integrated circuit modules separately, or a plurality of modules or steps in them may be made into a single integrated circuit module. The present invention is not limited to any specific combination of hardware and software.
While the foregoing description of the embodiments of the present invention has been presented in conjunction with the drawings, it should be understood that it is not intended to limit the scope of the invention, but rather, it is intended to cover all modifications or variations within the scope of the invention as defined by the claims of the present invention.

Claims (10)

1. An intelligent typesetting method for CT films based on multi-part joint scanning is characterized by comprising the following steps:
CT data are acquired, and preprocessing is carried out on the CT data;
identifying the preprocessed CT data and determining a scanning part; the scanning part comprises one or more of cranium, chest, abdomen and pelvis;
sequentially processing the scanning parts according to the sequence from top to bottom of the human body to obtain the printing range of each scanning part;
extracting a body area according to the printing range of each scanning part, and determining a body boundary;
and taking the body boundary as a film printing range, and determining the number and the sequence of typeset images according to the printing range, thereby completing automatic typesetting.
2. The intelligent typesetting method for CT film based on multi-part joint scanning as claimed in claim 1, wherein the specific steps of preprocessing the CT data are as follows: splitting the received CT data into sequences, and extracting the tag information and image data in the sequences.
3. The intelligent typesetting method of CT film based on multi-part joint scanning according to claim 1, wherein the body region extraction method comprises craniocerebral body region extraction and general body region extraction, wherein if the printing range is craniocerebral, craniocerebral body region extraction is adopted; otherwise, general body region extraction is adopted.
4. The intelligent typesetting method for the CT film based on multi-part joint scanning as claimed in claim 1, wherein the specific steps of processing the cranium are as follows:
screening the image whose HU values fall within [0, 100];
generating a binarized image from the screened image, and searching for the maximum connected domain on each sagittal slice, layer by layer along the coronal axis, to obtain a sagittal-plane maximum connected domain image;
using the sagittal-plane maximum connected domain image, searching for the maximum connected domain on each cross section, layer by layer along the vertical axis, to obtain a sagittal-plane/cross-section maximum connected domain image;
searching for the three-dimensional maximum connected domain in the sagittal-plane/cross-section maximum connected domain image, the resulting maximum connected domain being the intracranial region;
recording the intracranial area of each cross section and the slice where the intracranial area is largest;
calculating the ratio of each slice's intracranial area to the largest slice's area to define the craniocerebral printing area;
calculating the boundary of the printing area to determine the craniocerebral printing range.
5. The intelligent typesetting method for CT films based on multi-part joint scanning as claimed in claim 1, wherein the specific steps of processing the chest are as follows:
screening the image whose HU values fall within [-1200, 600] and normalizing it;
segmenting the normalized image with a lung segmentation model to obtain a masked image; the mask is divided into a non-lung region, the left lung and the right lung;
merging the left-lung and right-lung masks according to the predicted maximum connected areas to form a lung-region mask;
determining the apex and base positions of the lungs from the lung-region mask boundary;
determining the chest printing range from the apex and base positions.
6. The intelligent typesetting method for CT film based on multi-part joint scanning according to claim 1, wherein the specific steps of processing the abdomen are as follows:
screening the images whose HU values fall within [-15, 85], [-215, 285] and [-465, 535] and normalizing them;
segmenting the normalized image with a multi-organ segmentation model to obtain a masked image; the mask is divided into five classes: background, liver, ilium, bladder and femur;
analyzing the masked image layer by layer in the Z direction, and extracting the maximum connected domains of the liver, ilium, bladder and femur respectively as organ regions;
taking the minimum boundary value of the liver Z-direction mask in the organ region as the abdominal printing start position and the minimum boundary value of the ilium Z-direction mask as the abdominal printing end position, determining the abdominal printing range.
7. The intelligent typesetting method for CT film based on multi-part joint scanning as recited in claim 6, wherein the sum of bladder and femur pixels in the image is calculated, and if the total number of pixels is greater than a set threshold, the current image is considered to contain a pelvic region.
8. The intelligent typesetting method for CT films based on multi-part joint scanning as claimed in claim 7, wherein the specific steps of processing the pelvis are as follows:
taking the minimum boundary value of the ilium Z-direction mask as the pelvic printing start position and the maximum boundary value of the ilium Z-direction mask as the pelvic printing end position, determining the pelvic printing range.
9. CT film intelligence typesetting system based on multi-position joint scan, characterized by comprising:
the preprocessing module is configured to acquire CT data and preprocess the CT data;
the scanning position identification module is configured to identify the preprocessed CT data and determine a scanning position; the scanning part comprises one or more of cranium, chest, abdomen and pelvis;
the printing range determining module is configured to sequentially process the scanning parts according to the sequence from top to bottom of the human body to obtain the printing range of each scanning part;
a body region module configured to extract a body region from a printing range of each of the scan sites, and determine a body boundary;
and the automatic typesetting module is configured to take the body boundary as a film printing range, and determine the number and the sequence of typeset images according to the printing range so as to complete automatic typesetting.
10. The intelligent typesetting system of CT film based on multi-part joint scan of claim 9, wherein the preprocessing module comprises a server side for receiving CT image data; the automatic typesetting module comprises a printer, typesetting results are previewed and displayed on a front-end interface after automatic typesetting, and the printer prints according to the typesetting results.
CN202310460258.9A 2023-04-21 2023-04-21 CT film intelligent typesetting method and system based on multi-part combined scanning Pending CN116597951A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310460258.9A CN116597951A (en) 2023-04-21 2023-04-21 CT film intelligent typesetting method and system based on multi-part combined scanning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310460258.9A CN116597951A (en) 2023-04-21 2023-04-21 CT film intelligent typesetting method and system based on multi-part combined scanning

Publications (1)

Publication Number Publication Date
CN116597951A true CN116597951A (en) 2023-08-15

Family

ID=87598279

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310460258.9A Pending CN116597951A (en) 2023-04-21 2023-04-21 CT film intelligent typesetting method and system based on multi-part combined scanning

Country Status (1)

Country Link
CN (1) CN116597951A (en)

Similar Documents

Publication Publication Date Title
WO2022063199A1 (en) Pulmonary nodule automatic detection method, apparatus and computer system
Coppini et al. Neural networks for computer-aided diagnosis: detection of lung nodules in chest radiograms
CN108898175A (en) Area of computer aided model building method based on deep learning gastric cancer pathological section
Chan et al. Texture-map-based branch-collaborative network for oral cancer detection
CN108010021A (en) A kind of magic magiscan and method
US8290568B2 (en) Method for determining a property map of an object, particularly of a living being, based on at least a first image, particularly a magnetic resonance image
CN109859233A (en) The training method and system of image procossing, image processing model
CN108062749B (en) Identification method and device for levator ani fissure hole and electronic equipment
CN113506294B (en) Medical image evaluation method, system, computer equipment and storage medium
JP2021002338A (en) Method and system for image segmentation and identification
CN110689521B (en) Automatic identification method and system for human body part to which medical image belongs
CN112150442A (en) New crown diagnosis system based on deep convolutional neural network and multi-instance learning
CN110097128B (en) Medical image classification device and system
CN114565761A (en) Deep learning-based method for segmenting tumor region of renal clear cell carcinoma pathological image
US9275452B2 (en) Method and system for automatically determining compliance of cross sectional imaging scans with a predetermined protocol
US20060078184A1 (en) Intelligent splitting of volume data
CN111462139A (en) Medical image display method, medical image display device, computer equipment and readable storage medium
CN111325754A (en) Automatic lumbar vertebra positioning method based on CT sequence image
CN110706217A (en) Deep learning-based lung tumor automatic delineation method
CN117237351A (en) Ultrasonic image analysis method and related device
CN113034522A (en) CT image segmentation method based on artificial neural network
CN113222996A (en) Heart segmentation quality evaluation method, device, equipment and storage medium
TSAI Automatic segmentation of liver structure in CT images using a neural network
CN111724356A (en) Image processing method and system for CT image pneumonia identification
CN116597951A (en) CT film intelligent typesetting method and system based on multi-part combined scanning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination