CN111292378A - CT scanning auxiliary method, device and computer readable storage medium - Google Patents
- Publication number
- CN111292378A (application number CN202010168254.XA)
- Authority
- CN
- China
- Prior art keywords
- image
- scanning
- artifact
- patient
- body position
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/74—Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/02—Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
- A61B6/03—Computed tomography [CT]
- A61B6/032—Transmission computed tomography [CT]
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/04—Positioning of patients; Tiltable beds or the like
- A61B6/0407—Supports, e.g. tables or beds, for the body or parts of the body
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T7/248—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
Abstract
The invention provides a CT scanning auxiliary method, a device and a computer-readable storage medium, relating to the technical field of CT scanning. The whole CT scan is divided into three stages, and a corresponding technical scheme is provided for the factors that may degrade CT image quality at each stage. Before scanning, the patient's positioning is checked to avoid poor quality in the final reconstructed CT image caused by incorrect initial positioning. During scanning, the patient's body position is monitored to avoid poor reconstruction quality caused by position changes; if movement is detected, the operator is prompted to interrupt the scan so that the patient does not receive an unnecessary dose. After scanning, the final reconstructed CT images are evaluated for quality and low-quality reconstructions are removed, so that physicians are not misled by low-quality CT images during subsequent diagnosis.
Description
Technical Field
The present invention relates to the field of CT scanning technologies, and in particular, to a CT scanning assistance method, an apparatus, and a computer-readable storage medium.
Background
CT (computed tomography) scans an object with X-rays to obtain projection data and processes the projection data with tomographic reconstruction algorithms to obtain cross-sectional and three-dimensional density information of the object, enabling non-destructive examination. It has important applications in medical diagnosis, industrial non-destructive testing and other fields. In medical diagnosis, CT has been, since the 1970s, one of the three key medical imaging modalities, alongside magnetic resonance imaging (MRI) and positron emission tomography (PET), including combined PET/CT systems. Compared with other imaging means, CT reconstruction can quickly produce high-resolution images: the contrast accuracy of the reconstruction can be controlled within 1%, and objects at the 0.5 mm level can be resolved. However, owing to the complexity of the imaging physics, even the most advanced CT systems must contend with the impact of various image artifacts on the final image quality. Even high-end machines, if used improperly in complex and stressful hospital settings, can produce various artifacts, such as truncation artifacts caused by objects that are too large, streak artifacts caused by photon starvation, motion artifacts caused by patient breathing, and banding artifacts caused by improper patient positioning.
Current CT image quality control relies mainly on the experience of radiologists and technicians: patients are positioned, and scanning parameters are controlled, according to the staff's experience and their mastery of the CT machine's functions, in order to ensure the quality of the scanned images.
However, in the stressful environment of a hospital radiology department, each patient gets only a few minutes of preparation and scanning time, and it is difficult to ensure good image quality for every patient. This is especially true in hospitals below the tertiary (third-class) level, where many technicians are recent graduates without much hands-on experience.
In summary, the industry currently lacks a technology for automatic assessment of CT image quality.
Disclosure of Invention
The purpose of the invention is as follows: to overcome the defects of the prior art, the invention provides a CT scanning auxiliary method, a CT scanning auxiliary device and a computer-readable storage medium, which can reduce the operating difficulty for radiology technicians, reduce dependence on their training level, and minimize after-sales workload and cost.
The technical scheme is as follows: in order to achieve the purpose, the technical scheme provided by the invention is as follows:
A CT scan auxiliary method. The method detects the patient's positioning in real time before scanning, avoiding poor quality in the final reconstructed CT image caused by incorrect initial positioning. The method comprises the steps of:
(1) before the CT scan, acquiring a posture image of the patient on the CT couch with a camera, or acquiring a scout image of the patient on the CT couch through the CT machine;
(2) extracting the coordinates of human-body key points from the image obtained in step (1) through a pre-trained convolutional neural network;
(3) acquiring a pre-constructed standard positioning template, wherein the standard positioning template and the input image of the convolutional neural network are in the same coordinate system, and the template records the coordinates of each human-body key point in the standard positioning state;
(4) selecting, from all key points extracted by the convolutional neural network, the key points representing the part to be scanned as a comparison point set; comparing each key point in the set with the coordinates of the corresponding key point in the standard positioning template; if the distance between every pair of corresponding key points is less than a preset threshold, judging that the patient is positioned correctly; otherwise, judging that the patient is positioned incorrectly.
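Steps (2) through (4) above can be sketched in Python as follows. The keypoint names, coordinates, and the 20-pixel distance threshold are illustrative assumptions for a chest-scan template, not values specified by the patent:

```python
import math

def check_positioning(detected, template, part_keypoints, threshold=20.0):
    """Return True if every key point relevant to the scanned part lies
    within `threshold` pixels of its standard-template position.

    detected / template: dict mapping keypoint name -> (x, y), both in
    the same image coordinate system; part_keypoints: names to compare.
    """
    for name in part_keypoints:
        if name not in detected:       # key point not found: positioning fails
            return False
        dx = detected[name][0] - template[name][0]
        dy = detected[name][1] - template[name][1]
        if math.hypot(dx, dy) >= threshold:
            return False
    return True

# Hypothetical chest-scan template: shoulders and elbows near standard spots.
template = {"l_shoulder": (100, 200), "r_shoulder": (300, 200),
            "l_elbow": (80, 300), "r_elbow": (320, 300)}
detected = {"l_shoulder": (104, 197), "r_shoulder": (305, 203),
            "l_elbow": (85, 295), "r_elbow": (318, 306)}
ok = check_positioning(detected, template,
                       ["l_shoulder", "r_shoulder", "l_elbow", "r_elbow"])
```

In practice the threshold would be tuned per body part and per camera geometry; the all-points-must-pass rule mirrors the patent's "each pair of corresponding key points" condition.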
Further, the convolutional neural network may adopt one of the following architectures: AlexNet, ZFNet, OverFeat, VGG, GoogLeNet, ResNet, DenseNet.
The invention also provides a CT scanning auxiliary method which, on the one hand, detects the patient's positioning in real time before scanning, avoiding poor quality in the final reconstructed CT image caused by incorrect initial positioning, and, on the other hand, monitors the patient's body position in real time during scanning, prompting the operator to interrupt the scan if patient movement is detected so that the patient does not receive an unnecessary dose. The method comprises the following steps:
before the CT scan, acquiring a posture image of the patient on the CT couch with a camera, or acquiring a scout image of the patient on the CT couch through the CT machine; extracting the coordinates of human-body key points from the obtained image through a pre-trained convolutional neural network; acquiring a pre-constructed standard positioning template, wherein the template and the input image of the convolutional neural network are in the same coordinate system and the template records the coordinates of each human-body key point in the standard positioning state; selecting, from all extracted key points, the key points representing the part to be scanned as a comparison point set; comparing each key point in the set with the coordinates of the corresponding key point in the template; if the distance between every pair of corresponding key points is less than a preset threshold, judging that the patient is positioned correctly; otherwise, judging that the patient is positioned incorrectly;
during the scanning process, monitoring the patient's body position in real time and judging whether the patient has moved through an image-domain similarity measure or a CT-scan-data similarity measure.
The method for judging whether the patient has moved through the image-domain similarity measure comprises:
during the CT scan, collecting posture images of the patient in real time; inputting each collected posture image into the convolutional neural network and extracting the coordinates of all key points in the patient posture image; constructing an image-domain similarity index from the key-point coordinates of the currently acquired posture image and those of the previously acquired posture image; calculating the similarity of the two posture images through this index; if the calculated similarity does not satisfy a preset threshold condition, judging that the patient's body position has changed during the scan; otherwise, judging that it has not changed.
Preferably, the specific steps for judging whether the patient has moved through the image-domain similarity measure are:
selecting a point on the human body, or a point on the CT couch, as a reference point;
finding the corresponding reference point in the previously acquired posture image and calculating the angle and pixel distance between each key point and the reference point in that image;
finding the corresponding reference point in the currently acquired posture image and calculating the angle and pixel distance between each key point and the reference point in that image;
for each key point, taking the change in angle and the change in pixel distance relative to the reference point between the two acquisitions as image-domain similarity indices; if the angle change or the pixel-distance change exceeds a preset threshold, judging that the patient's body position has changed during the scan; otherwise, judging that it has not changed.
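The angle and pixel-distance comparison just described might look like the following sketch. The keypoint names, reference points, and the 5-degree and 10-pixel thresholds are illustrative assumptions:

```python
import math

def keypoint_changes(prev_pts, curr_pts, prev_ref, curr_ref):
    """For each key point, compute the change in angle (degrees) and in
    pixel distance relative to the reference point between two frames."""
    changes = {}
    for name, (px, py) in prev_pts.items():
        cx, cy = curr_pts[name]
        a_prev = math.degrees(math.atan2(py - prev_ref[1], px - prev_ref[0]))
        a_curr = math.degrees(math.atan2(cy - curr_ref[1], cx - curr_ref[0]))
        da = abs(a_curr - a_prev)
        da = min(da, 360.0 - da)          # handle wraparound at +/-180 degrees
        d_prev = math.hypot(px - prev_ref[0], py - prev_ref[1])
        d_curr = math.hypot(cx - curr_ref[0], cy - curr_ref[1])
        changes[name] = (da, abs(d_curr - d_prev))
    return changes

def patient_moved(changes, angle_thresh=5.0, dist_thresh=10.0):
    """Flag movement if any key point's angle change or pixel-distance
    change exceeds its threshold (threshold values are illustrative)."""
    return any(da > angle_thresh or dd > dist_thresh
               for da, dd in changes.values())
```

Using a couch-mounted reference point keeps the measure insensitive to small camera vibrations, since both frames are compared against the same physical anchor.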
The method for judging whether the patient has moved through the CT-scan-data similarity measure comprises:
during the CT scan, obtaining reconstructed images of adjacent slices in real time and calculating the difference between the two adjacent reconstructed slices along the Z direction; when the difference exceeds a set threshold, judging that the patient's body position has changed during the scan.
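A minimal sketch of this adjacent-slice check, assuming reconstructed slices arrive as 2-D arrays and using a mean-absolute-difference metric with an illustrative threshold (the patent does not fix a particular difference formula or threshold value):

```python
import numpy as np

def slice_difference(slice_a, slice_b):
    """Mean absolute difference (e.g., in HU) between two adjacent
    reconstructed slices along the Z direction."""
    a = np.asarray(slice_a, dtype=np.float64)
    b = np.asarray(slice_b, dtype=np.float64)
    return float(np.mean(np.abs(a - b)))

def motion_slices(slices, threshold=50.0):
    """Return the indices i where the difference between slice i and
    slice i+1 exceeds the threshold, suggesting the body position
    changed between those slices. The threshold is illustrative."""
    return [i for i in range(len(slices) - 1)
            if slice_difference(slices[i], slices[i + 1]) > threshold]
```

Note that genuine anatomical transitions (e.g., chest to abdomen) also produce slice-to-slice change, so in practice the threshold would need to be set above the normal anatomical gradient for the scanned region.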
Further, the convolutional neural network may adopt one of the following architectures: AlexNet, ZFNet, OverFeat, VGG, GoogLeNet, ResNet, DenseNet.
The invention also provides a CT scanning auxiliary method which, on the one hand, detects the patient's positioning in real time before scanning, avoiding poor quality in the reconstructed CT image caused by incorrect initial positioning, and, on the other hand, evaluates the quality of the reconstructed CT images after the scan and removes reconstructions of poor quality. The method comprises the following steps:
before the CT scan, acquiring a posture image of the patient on the CT couch with a camera, or acquiring a scout image of the patient on the CT couch through the CT machine; extracting the coordinates of human-body key points from the obtained image through a pre-trained convolutional neural network; acquiring a pre-constructed standard positioning template, wherein the template and the input image of the convolutional neural network are in the same coordinate system and the template records the coordinates of each human-body key point in the standard positioning state; selecting, from all extracted key points, the key points representing the part to be scanned as a comparison point set; comparing each key point in the set with the coordinates of the corresponding key point in the template; if the distance between every pair of corresponding key points is less than a preset threshold, judging that the patient is positioned correctly; otherwise, judging that the patient is positioned incorrectly;
after the CT scan, evaluating the quality of the reconstructed CT images:
setting image quality evaluation indices and a threshold condition for each index, the indices comprising: the mean value of the image, the noise of the image, the truncation error of the image, and the histogram mean of the image;
constructing a neural-network-based artifact classification model and identifying the artifact type in each reconstructed CT image through the model, the classes comprising: no artifact, ring artifact, streak artifact, banding artifact, truncation artifact;
after the CT scan, calculating the image quality evaluation indices for each reconstructed CT image; if a calculated index does not satisfy its threshold condition, judging that the image quality is unqualified; otherwise, inputting the reconstructed image into the artifact classification model for classification; if the classification result indicates an artifact, judging that the image quality is unqualified.
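The two-stage acceptance check (index thresholds first, then the artifact classifier) can be sketched as follows. The threshold ranges and the mean/noise indices chosen here are illustrative assumptions, and the neural-network classifier is stood in by a plain callable:

```python
import numpy as np

# Hypothetical acceptance ranges; real values would be tuned per scanner
# and protocol. Only two of the patent's four indices are sketched here.
QUALITY_THRESHOLDS = {
    "mean": (-1000.0, 3000.0),   # acceptable mean value (e.g., HU)
    "noise": (0.0, 80.0),        # acceptable standard deviation (noise)
}

def quality_indices(img):
    img = np.asarray(img, dtype=np.float64)
    return {"mean": float(img.mean()), "noise": float(img.std())}

def evaluate(img, classify_artifact):
    """Two-stage check from the text: threshold the quality indices
    first, then run the artifact classifier only if they pass.
    `classify_artifact` stands in for the neural-network model and
    returns a label such as "no artifact" or "ring artifact"."""
    idx = quality_indices(img)
    for name, (lo, hi) in QUALITY_THRESHOLDS.items():
        if not (lo <= idx[name] <= hi):
            return "unqualified"
    if classify_artifact(img) != "no artifact":
        return "unqualified"
    return "qualified"
```

Running the cheap index checks before the classifier means most grossly defective images are rejected without a model inference, which matters when every slice of a scan is evaluated.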
Further, the convolutional neural network may adopt one of the following architectures: AlexNet, ZFNet, OverFeat, VGG, GoogLeNet, ResNet, DenseNet.
The invention also provides a CT scanning auxiliary method which detects the patient's positioning in real time both before and during the CT scan, avoiding poor quality in the final reconstructed CT image caused by incorrect initial positioning or by movement during the scan, and which evaluates the quality of the reconstructed CT images after the scan and removes reconstructions of poor quality. The method comprises the following steps:
before the CT scan, acquiring a posture image of the patient on the CT couch with a camera, or acquiring a scout image of the patient on the CT couch through the CT machine; extracting the coordinates of human-body key points from the obtained image through a pre-trained convolutional neural network; acquiring a pre-constructed standard positioning template, wherein the template and the input image of the convolutional neural network are in the same coordinate system and the template records the coordinates of each human-body key point in the standard positioning state; selecting, from all extracted key points, the key points representing the part to be scanned as a comparison point set; comparing each key point in the set with the coordinates of the corresponding key point in the template; if the distance between every pair of corresponding key points is less than a preset threshold, judging that the patient is positioned correctly; otherwise, judging that the patient is positioned incorrectly;
during the CT scan, collecting posture images of the patient in real time; inputting each collected posture image into the convolutional neural network and extracting the coordinates of all key points in the patient posture image; constructing an image-domain similarity index from the key-point coordinates of the currently acquired posture image and those of the previously acquired posture image; calculating the similarity of the two posture images through this index; if the calculated similarity does not satisfy a preset threshold condition, judging that the patient's body position has changed during the scan; otherwise, judging that it has not changed;
after the CT scan, evaluating the quality of the reconstructed CT images: setting image quality evaluation indices and a threshold condition for each index, the indices comprising: the mean value of the image, the noise of the image, the truncation error of the image, and the histogram mean of the image; constructing a neural-network-based artifact classification model and identifying the artifact type in each reconstructed CT image through the model, the classes comprising: no artifact, ring artifact, streak artifact, banding artifact, truncation artifact; after the CT scan, calculating the image quality evaluation indices for each reconstructed CT image; if a calculated index does not satisfy its threshold condition, judging that the image quality is unqualified; otherwise, inputting the reconstructed image into the artifact classification model for classification; if the classification result indicates an artifact, judging that the image quality is unqualified.
Further, the convolutional neural network may adopt one of the following architectures: AlexNet, ZFNet, OverFeat, VGG, GoogLeNet, ResNet, DenseNet.
Further, the method for judging whether the patient has moved through the image-domain similarity measure comprises:
during the CT scan, collecting posture images of the patient in real time; inputting each collected posture image into the convolutional neural network and extracting the coordinates of all key points in the patient posture image; constructing an image-domain similarity index from the key-point coordinates of the currently acquired posture image and those of the previously acquired posture image; calculating the similarity of the two posture images through this index; if the calculated similarity does not satisfy a preset threshold condition, judging that the patient's body position has changed during the scan; otherwise, judging that it has not changed.
Preferably, the specific steps for judging whether the patient has moved through the image-domain similarity measure are:
selecting a point on the human body, or a point on the CT couch, as a reference point;
finding the corresponding reference point in the previously acquired posture image and calculating the angle and pixel distance between each key point and the reference point in that image;
finding the corresponding reference point in the currently acquired posture image and calculating the angle and pixel distance between each key point and the reference point in that image;
for each key point, taking the change in angle and the change in pixel distance relative to the reference point between the two acquisitions as image-domain similarity indices; if the angle change or the pixel-distance change exceeds a preset threshold, judging that the patient's body position has changed during the scan; otherwise, judging that it has not changed.
Further, the method for judging whether the patient has moved through the CT-scan-data similarity measure comprises:
during the CT scan, obtaining reconstructed images of adjacent slices in real time and calculating the difference between the two adjacent reconstructed slices along the Z direction; when the difference exceeds a set threshold, judging that the patient's body position has changed during the scan.
The invention provides a computer-readable storage medium storing at least one instruction executable by a processor; when executed by the processor, the at least one instruction implements any of the CT scan auxiliary methods described above.
The invention further provides a CT scanning apparatus comprising a memory and a processor, the memory storing at least one instruction and the processor executing the at least one instruction to implement the CT scan auxiliary method described above.
Beneficial effects: compared with the prior art, the invention has the following advantages:
the invention divides the whole CT scanning into three stages: before scanning, during scanning and after scanning, and aiming at factors which can cause poor CT image quality in each stage, a corresponding technical scheme is provided. Before scanning, the problem of poor quality of the image reconstructed by the final CT scanning caused by incorrect initial positioning of the patient is avoided by detecting the positioning of the patient; in the scanning process, the problem of poor image quality of the final CT scanning reconstruction caused by the position change of the patient is avoided by detecting the position of the patient, and if the patient is detected to move, the scanning is prompted to be interrupted, so that the patient is prevented from receiving unnecessary dose; after the scanning is finished, the quality evaluation is carried out on the final CT scanning reconstructed image, the CT scanning reconstructed image with poor quality can be removed, and the condition that a doctor is influenced by the low-quality CT scanning reconstructed image in the subsequent diagnosis process is avoided.
The invention can reduce the operation difficulty of the radiological technician, reduce the dependence on the training level and reduce the workload and the cost after sale to the maximum extent.
Drawings
FIG. 1 is a schematic diagram of the principles of the present invention;
FIG. 2 is a flow chart of the operation of a convolutional neural network in accordance with an embodiment of the present invention;
FIG. 3 is a flowchart illustrating the operation of an artifact classification model according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating a standard setup template for head scanning according to an embodiment of the present invention;
FIG. 5 is a diagram of a standard setup template for chest scanning according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of changes in reconstructed images from adjacent slice CT scans due to patient positioning changes during the scan according to an embodiment of the present invention;
fig. 7 is a schematic diagram illustrating differences between CT scan reconstructed images of adjacent layers in the Z direction according to an embodiment of the present invention, where the abscissa is a Z-direction axis and the ordinate represents a difference value;
fig. 8 shows examples of different types of artifacts according to an embodiment of the present invention, arranged in two rows from left to right and including truncation artifacts, motion artifacts, and streak artifacts caused by selecting too small a tube (bulb) current.
Detailed Description
The invention will be further described with reference to the accompanying drawings and specific embodiments. It is to be understood that the invention may be embodied in various forms and is not limited to the specific embodiments illustrated; the embodiments shown in the drawings and described below are exemplary and non-limiting.
It should also be understood that, where technically feasible, the features of the different embodiments described above may be combined with each other to form further embodiments within the scope of the invention. Furthermore, the particular examples and embodiments described are non-limiting, and modifications may be made to the structures, steps, and sequences set forth above without departing from the scope of the invention.
FIG. 1 is a schematic diagram of the present invention, which divides the whole CT scan into three stages: before scanning, during scanning, and after scanning, and provides a corresponding technical scheme for the factors that may degrade CT image quality at each stage. Before scanning, the patient's positioning is checked to avoid poor quality in the final reconstructed CT image caused by incorrect initial positioning. During scanning, the patient's body position is monitored to avoid poor reconstruction quality caused by position changes; if movement is detected, the operator is prompted to interrupt the scan so that the patient does not receive an unnecessary dose. After scanning, the final reconstructed CT images are evaluated for quality and low-quality reconstructions can be removed, so that physicians are not misled by low-quality CT images during subsequent diagnosis.
The technical solutions of the three stages are explained in detail below.
1. Before scanning, the positioning of the patient is detected
During CT scanning, different body parts require different patient positions: for a head scan the arms must not be raised, while for a chest or abdomen scan the arms must be raised above the head. If the patient is positioned incorrectly at the start, the quality of the resulting CT images is inevitably compromised.
For the positioning problem before scanning, the present embodiment adopts the following method:
(1) Before the CT scan, a suitable scanning environment is first set up, for example no objects are allowed outside the scanning area and an appropriate couch height is adjusted. A positioning image of the patient on the CT couch is then obtained through a camera, or a device with a camera function, mounted on the scanning gantry or on the ceiling of the shielded room so that it can capture the patient's whole body. If installing a camera or camera-equipped device is inconvenient, the positioning image of the patient on the CT couch can be obtained directly by the CT machine itself: a scout image is a CT image similar to a plain X-ray film, acquired by fixing the tube and detector at the angle required for scanning, automatically feeding the patient into the gantry with the couch, and performing a series of X-ray exposures.
(2) Coordinates of human-body key points are extracted from the image obtained in step (1) by a pre-trained convolutional neural network. These key points are typically chosen to represent key parts of the body (e.g., wrist, elbow, shoulder, eye, nose) and provide the information needed to determine the patient's position. The convolutional neural network uses a network framework such as DenseNet or ResNet; its workflow is shown in FIG. 2: a picture is input into the network, which extracts image features to obtain key-point coordinates; the key-point coordinates are then fed into a classification network for classification and regressed through a regional coordinate-regression network, and finally the categories and coordinates of the key points are output.
The convolutional neural network is trained as follows: 10,000 pictures of size 640 × 480 are selected from clinical data, the coordinates of the human-body key points are labeled manually, and the network is trained with the TensorFlow framework. The network input has size 1 × 640 × 480 and the network output has shape (nkp + 1) × 640 × 480, where nkp is the number of human-body key points; the network loss function is the cross-entropy function.
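The patent does not spell out how key-point coordinates are recovered from the (nkp + 1) × 640 × 480 network output, but with one output channel per key point (plus a background channel, as the cross-entropy loss suggests) a common approach is a per-channel argmax over the heatmap. A minimal sketch under that assumption (function and variable names are illustrative):

```python
import numpy as np

def heatmaps_to_keypoints(heatmaps):
    """Recover (x, y) pixel coordinates from per-key-point heatmaps.

    heatmaps: array of shape (nkp + 1, H, W); channel 0 is assumed to be
    the background channel, channels 1..nkp one heatmap per body key point.
    """
    coords = []
    for ch in heatmaps[1:]:                      # skip the background channel
        y, x = np.unravel_index(np.argmax(ch), ch.shape)
        coords.append((int(x), int(y)))
    return coords

# Toy example: two 480x640 heatmaps with known peaks.
hm = np.zeros((3, 480, 640), dtype=np.float32)
hm[1, 100, 320] = 1.0   # key point 1 peaks at (x=320, y=100)
hm[2, 400, 50] = 1.0    # key point 2 peaks at (x=50, y=400)
print(heatmaps_to_keypoints(hm))  # -> [(320, 100), (50, 400)]
```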
(3) After the network outputs a series of human-body key-point coordinates, the current coordinates are compared with a standard positioning template defined in the scanning protocol. The standard positioning template is in the same coordinate system as the input image of the convolutional neural network and records the coordinates of each human-body key point in the standard positioning state; examples are shown in FIGS. 4 and 5, where FIG. 4 is the standard positioning template for a head scan and FIG. 5 is the standard positioning template for a chest scan. Among all key points extracted by the convolutional neural network, those representing the part to be scanned are selected as the point set for comparison. Each key point in the set is compared with the corresponding key-point coordinate in the template: if the distance between every pair of corresponding key points differs by less than a preset threshold, the patient's positioning is judged correct; otherwise it is judged incorrect. The comparison against the template is in fact a relative-distance determination; for example, for a chest scan the patient's arms must be raised to the head, so only the Y-direction distance between the elbow coordinate and the head key-point coordinate needs to be compared.
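The template comparison in step (3) can be sketched as follows. The key-point names, coordinates, and the 20-pixel threshold are illustrative assumptions, not values from the patent:

```python
import math

def positioning_correct(detected, template, part_keys, threshold=20.0):
    """Compare detected key points against a standard positioning template.

    detected / template: dicts mapping key-point name -> (x, y), both in the
    same image coordinate system.  part_keys: the key points representing the
    part to be scanned.  threshold: maximum allowed distance in pixels
    (the value here is an illustrative assumption).
    """
    for k in part_keys:
        dx = detected[k][0] - template[k][0]
        dy = detected[k][1] - template[k][1]
        if math.hypot(dx, dy) >= threshold:
            return False                 # this key point is too far off
    return True

template = {"nose": (320, 60), "left_elbow": (200, 120)}
detected = {"nose": (322, 63), "left_elbow": (205, 118)}
print(positioning_correct(detected, template, ["nose", "left_elbow"]))  # -> True
```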
2. During the scanning process, the positioning of the patient is detected
Because a CT scan takes a relatively long time, changes in the patient's body position are difficult to avoid, and once the position changes, the scanned CT images will have quality problems; whether the patient moves during scanning therefore needs to be detected. In this embodiment, patient motion is determined either by an image-domain similarity measure or by a CT-scan-data similarity measure. If patient motion is detected, the system prompts that the scan be interrupted to prevent the patient from receiving an unnecessary dose.
The image-domain similarity measure compares the similarity of two images captured a short time apart, for example two body-position images acquired one second apart during the scan. The specific comparison scheme is as follows:
selecting a point on the human body or a point on the CT couch as a reference point;
finding the corresponding point p_ref = (x_ref, y_ref) of the reference point on the last acquired body position image, and calculating the angle and pixel distance between p_ref and every key point p_i = (x_i, y_i) in that image; the angle Angle and pixel distance Dist of two such points are calculated as:
Angle = arctan((y_i − y_ref) / (x_i − x_ref + ε)),  Dist = sqrt((x_i − x_ref)² + (y_i − y_ref)²)
where ε denotes the minimum numerical calculation error, added to keep the denominator nonzero;
finding the corresponding point of the reference point on the body position image acquired this time, and calculating the angle and pixel distance between that point and every key point in the currently acquired body position image;
for each key point, taking the change in angle and the change in pixel distance relative to the reference point between the two acquired body-position images as the image-domain similarity indexes; if either the angle change or the pixel-distance change exceeds a preset threshold, the patient's body position is judged to have changed during the scan; otherwise it is judged not to have changed.
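The image-domain check above can be sketched as follows. The small ε guarding the denominator follows the "minimum numerical calculation error" mentioned in the text; the thresholds, reference point, and key-point coordinates are illustrative assumptions:

```python
import math

def angle_and_dist(ref, pt, eps=1e-6):
    """Angle and pixel distance between a reference point and a key point."""
    dx, dy = pt[0] - ref[0], pt[1] - ref[1]
    angle = math.atan(dy / (dx + eps))   # eps guards against division by zero
    dist = math.hypot(dx, dy)
    return angle, dist

def patient_moved(ref_prev, pts_prev, ref_cur, pts_cur,
                  angle_thr=0.05, dist_thr=10.0):
    """True if any key point's angle or distance to the reference point
    changed by more than the (illustrative) thresholds between two frames."""
    for p_prev, p_cur in zip(pts_prev, pts_cur):
        a0, d0 = angle_and_dist(ref_prev, p_prev)
        a1, d1 = angle_and_dist(ref_cur, p_cur)
        if abs(a1 - a0) > angle_thr or abs(d1 - d0) > dist_thr:
            return True
    return False

# Same pose in both frames -> no motion reported.
print(patient_moved((0, 0), [(100, 50)], (0, 0), [(100, 50)]))  # -> False
# One key point shifted by 30 px -> motion reported.
print(patient_moved((0, 0), [(100, 50)], (0, 0), [(130, 50)]))  # -> True
```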
The CT-scan-data similarity measure uses the difference between adjacent slice images (e.g., the mean squared error, MSE). During a CT scan, images are generated as the scan progresses; current hardware can reach very high reconstruction speeds (around 100 slices per second), so the reconstructed images are available almost in real time. Because the human body is an approximately continuous structure in the Z direction, the difference between adjacent slices is small and relatively smooth when there is no motion. When motion occurs, the difference between adjacent slices jumps abruptly, and after the motion ends it returns to a lower level. FIG. 6 shows how a change in the patient's position during scanning alters the reconstructed images of adjacent slices, and FIG. 7 is a schematic diagram of the difference between adjacent-slice reconstructions in the Z direction; the inter-slice differences caused by body motion are clearly visible in FIGS. 6 and 7.
Based on this principle, in this embodiment, we use the following scheme to implement the similarity measurement of CT scan data:
during the CT scan, reconstructed images of adjacent slices are obtained in real time and the difference between the two adjacent reconstructed slices in the Z direction is calculated; when the difference exceeds a set threshold, the patient's body position is judged to have changed during the scan.
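The adjacent-slice check can be sketched with a mean-squared difference. The threshold value and the toy "slices" below are illustrative assumptions; in practice the threshold would be tuned on motion-free scans of the same protocol:

```python
import numpy as np

def motion_between_slices(slice_a, slice_b, threshold=50.0):
    """Return (MSE, moved?) for two adjacent reconstructed slices in Z."""
    diff = np.mean((slice_a.astype(np.float64) - slice_b.astype(np.float64)) ** 2)
    return diff, bool(diff > threshold)

base = np.zeros((64, 64))
base[20:40, 20:40] = 1000.0          # a bright, roughly continuous structure

next_ok = base.copy()                 # anatomy changes slowly without motion
print(motion_between_slices(base, next_ok)[1])   # -> False

shifted = np.roll(base, 15, axis=0)   # sudden shift = patient motion
print(motion_between_slices(base, shifted)[1])   # -> True
```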
3. After the scan is finished, a quality evaluation is performed on the final reconstructed CT images, and poor-quality reconstructions can be rejected.
First, some classical image indexes, such as the image mean, image noise, and the image histogram, are used for a quick judgment to flag a subset of obvious image problems. Then a neural network trained to recognize whether various artifacts are present in the reconstructed image is used to flag image-quality problems. This function can be implemented either by training a neural network or by classical image-processing methods, for example by examining noise, truncation errors, and so on.
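A minimal sketch of such classical quick checks (the index names come from the text; the acceptable ranges and limits are illustrative assumptions, not patent values):

```python
import numpy as np

def quick_quality_flags(img, mean_range=(-1000.0, 3000.0), noise_limit=80.0):
    """Fast, classical checks run before any neural network is involved."""
    flags = {}
    # Image mean outside a plausible range suggests a gross acquisition problem.
    flags["mean_out_of_range"] = bool(not (mean_range[0] <= img.mean() <= mean_range[1]))
    # High standard deviation is a simple proxy for image noise.
    flags["too_noisy"] = bool(img.std() > noise_limit)
    # A histogram collapsed into a single bin indicates an almost-constant image.
    hist, _ = np.histogram(img, bins=64)
    flags["degenerate_histogram"] = bool((hist > 0).sum() <= 1)
    return flags

rng = np.random.default_rng(0)
normal = rng.normal(40.0, 20.0, size=(128, 128))   # plausible soft-tissue slice
print(quick_quality_flags(normal))
# -> {'mean_out_of_range': False, 'too_noisy': False, 'degenerate_histogram': False}
```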
Typical artifacts in CT images include streak artifacts caused by noise, streak artifacts caused by tube arcing, metal artifacts, motion artifacts, truncation artifacts, and ring artifacts. Images of various artifacts are shown in FIG. 8; the second row of FIG. 8, from left to right, shows truncation artifacts, motion artifacts, and streak artifacts caused by selecting too small a tube current.
In order to identify whether artifacts exist in a reconstructed image of a CT scan and the types of the artifacts, a neural network is used to identify the artifacts in this embodiment, and the specific steps are as follows:
(1) Collect artifact data and normal data, such as CT reconstruction data of the patient's head, chest, abdomen, limbs, and other regions. The data comprise normal high-quality images and artifact images, for example streak artifacts caused by noise, streak artifacts caused by tube arcing, metal artifacts, motion artifacts, truncation artifacts, and ring artifacts caused by the detector;
(2) Design a neural network model; in this embodiment a VGG network model is selected, as shown in FIG. 3. The network input is a scanned image (512 × 512); features are extracted through convolution and pooling, and the type of noise is finally output through a fully connected layer. The network output data format is (1 × (N + 1)), where N is the number of noise types used in training. The network loss function is the MSE (mean squared error) function:
MSE = (1/n) Σ_{i=1..n} (y_i − ŷ_i)²
where y_i is the real (label) data and ŷ_i is the neural-network output.
(3) Train the network model parameters. The collected data are divided into categories corresponding to different label values, as shown in Table 1:
TABLE 1
Artifact | Normal image | Noise | Arcing | Motion | Truncation | Metal | Ring | … |
---|---|---|---|---|---|---|---|---|
Label | 0 | 1 | 2 | 3 | 4 | 5 | 6 | … |
The data and labels are input into the designed neural-network model; the loss function is the mean squared error and the optimization method is Adam. Taking four artifacts, namely ring artifacts, streak artifacts, banding artifacts, and truncation artifacts, as an example, the neural-network model is trained as follows:
prepare the data: for each artifact, select 1000 CT scan images of size 512 × 512, plus 1000 artifact-free images;
generate the labels: each artifact image is one-hot encoded, with the first bit corresponding to "no artifact" and the following bits corresponding, in order, to ring, streak, banding, and truncation artifacts; for example, a ring artifact is encoded (0, 1, 0, 0, 0);
input the data into the network and train on the TensorFlow platform with MSE as the loss function;
save the trained network model.
(4) Judge the image quality: after scanning, the image is input into the trained network model to judge the image quality and the artifact type. For example, with ring, streak, banding, and truncation artifacts, an image (512 × 512) whose artifact type is to be determined is input into the trained neural-network model to obtain a five-bit output code, and the position of the maximum probability gives the artifact type of the input image; for example, output (0, 0, 1, 0, 0) corresponds to a streak artifact.
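The one-hot encoding used for the labels and the maximum-probability decoding of step (4) can be sketched as follows (the class names are from the text; the helper names are illustrative):

```python
import numpy as np

# First position is reserved for "no artifact", matching the text.
CLASSES = ["no artifact", "ring", "streak", "banding", "truncation"]

def one_hot(label):
    """One-hot code for a class name, e.g. 'ring' -> (0, 1, 0, 0, 0)."""
    vec = np.zeros(len(CLASSES), dtype=int)
    vec[CLASSES.index(label)] = 1
    return tuple(vec)

def decode(output):
    """Pick the class at the position of maximum probability."""
    return CLASSES[int(np.argmax(output))]

print(one_hot("ring"))                       # -> (0, 1, 0, 0, 0)
print(decode((0.1, 0.0, 0.8, 0.05, 0.05)))   # -> streak
```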
In addition to neural-network methods, classical image-processing methods can be used to identify artifacts in reconstructed images. For example, an improperly set scanning protocol increases image noise, which manifests as an increased standard deviation (std) of the image, so measuring and comparing the std can determine whether the protocol is set correctly. A truncation error manifests as increased CT values at the image edge, so the presence of a truncation artifact can be determined by measuring the CT values of that region. Other artifacts likewise have distinctive characteristics in the image and can also be detected by image processing.
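The std check and the edge-CT-value truncation check described here can be sketched as follows; the limits and the border width are illustrative, protocol-dependent assumptions:

```python
import numpy as np

def protocol_noise_ok(img, std_limit=60.0):
    """High std suggests an improperly set (too noisy) scanning protocol."""
    return bool(img.std() <= std_limit)

def has_truncation_artifact(img, border=5, limit=300.0):
    """Truncation shows up as abnormally high CT values at the image edge:
    compare the mean of a thin border band against a limit."""
    edge = np.concatenate([img[:border].ravel(), img[-border:].ravel(),
                           img[:, :border].ravel(), img[:, -border:].ravel()])
    return float(edge.mean()) > limit

clean = np.zeros((128, 128))
truncated = clean.copy()
truncated[:, :5] = 2000.0             # bright rim at the left edge
print(has_truncation_artifact(clean))      # -> False
print(has_truncation_artifact(truncated))  # -> True
```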
Among the three-stage technical solutions provided by the above embodiments, any single solution, or any combination of them, can provide an auxiliary service for CT scanning and thereby improve the quality of CT images; therefore any single solution or combination of solutions falls within the protection scope of the present invention.
Further, the present invention provides a computer-readable storage medium storing at least one instruction executable by a processor, wherein the at least one instruction, when executed by the processor, implements the CT scan assisting method according to any one of the above aspects.
Further, the present invention provides a CT scanning apparatus, which includes a memory and a processor, wherein the memory is configured to store at least one instruction, and the processor is configured to execute the at least one instruction to implement the CT scanning assisting method according to any one of the above items.
The above-described embodiments, particularly any "preferred" embodiments, are possible examples of implementations, and are presented merely for a clear understanding of the principles of the invention. Many variations and modifications may be made to the above-described embodiments without departing substantially from the spirit and principles of the technology described herein, and such variations and modifications are to be considered within the scope of the invention.
The above description is only of the preferred embodiments of the present invention, and it should be noted that: it will be apparent to those skilled in the art that various modifications and adaptations can be made without departing from the principles of the invention and these are intended to be within the scope of the invention.
Claims (10)
1. A CT scan support method, comprising the steps of:
(1) before the CT scan, acquiring a positioning image of a patient on the CT couch through a camera, or acquiring a scout image of the patient on the CT couch through the CT machine;
(2) extracting coordinates of key points of the human body from the image obtained in the step (1) through a pre-trained convolutional neural network;
(3) acquiring a pre-constructed standard positioning template, wherein the standard positioning template and an input image of the convolutional neural network are in the same coordinate system, and the standard positioning template records coordinates of each key point of a human body in a standard positioning state;
(4) selecting key points representing a part to be scanned from all key points extracted by the convolutional neural network as a point set for comparison, comparing each key point in the point set with the corresponding key point coordinate in the standard positioning template, and if the difference of the distances between each pair of corresponding key points is less than a preset threshold value, judging that the positioning of the patient is correct; otherwise, the patient is judged to be incorrectly positioned.
2. The method of claim 1, wherein the structure of the convolutional neural network comprises: AlexNet, ZFNet, OverFeat, VGG, GoogLeNet, ResNet, or DenseNet.
3. The method of any one of claims 1 to 2, further comprising the steps of:
in the CT scanning process, the body position image of a patient is collected in real time;
inputting the collected body position image into the convolutional neural network, and extracting coordinates of all key points in the patient body position image;
and constructing an image domain similarity index calculation formula according to the key point coordinates in the body position image acquired this time and the key point coordinates in the body position image acquired last time, calculating the similarity of the two body position images through the image domain similarity index calculation formula, judging that the body position of the patient changes in the scanning process if the calculated similarity value does not meet a preset threshold condition, and otherwise, judging that the body position of the patient does not change in the scanning process.
4. The CT scan support method of claim 3, wherein the image domain similarity index is calculated by:
selecting a point of a human body or a point on a CT sickbed as a reference point;
finding the corresponding point of the reference point on the last acquired body position image, and calculating the angle and pixel distance between that point and every key point in the last acquired body position image;
finding the corresponding point of the reference point on the body position image acquired this time, and calculating the angle and pixel distance between that point and every key point in the currently acquired body position image;
for each key point, calculating the angle variation and the pixel distance variation of the key point in the posture images acquired twice and the reference point as image domain similarity indexes; if the angle variation or the pixel distance variation is larger than a preset threshold value, the body position of the patient is judged to be changed in the scanning process, otherwise, the body position of the patient is judged not to be changed in the scanning process.
5. The CT scan support method according to any one of claims 1 to 2, further comprising the steps of:
in the CT scanning process, adjacent layer CT scanning reconstructed images are obtained in real time, the difference value of the adjacent two layers of CT scanning reconstructed images in the Z direction is calculated, and when the difference value is larger than a set threshold value, the body position of a patient is judged to be changed in the scanning process.
6. The CT scan support method according to any one of claims 1 to 2, further comprising the steps of:
after CT scanning, the quality evaluation of the reconstructed image of the CT scanning is carried out, and the method specifically comprises the following steps:
setting image quality evaluation indexes and threshold conditions corresponding to the image quality evaluation indexes, wherein the image quality evaluation indexes comprise: mean value of the image, noise of the image, truncation error of the image, and histogram mean value of the image;
constructing an artifact classification model based on a neural network, and identifying the artifact type in the reconstructed image of the CT scan through the artifact classification model, wherein the artifact classification model comprises the following steps: no artifact, ring artifact, strip artifact, banding artifact, truncation artifact;
after CT scanning, calculating an image quality evaluation index for each CT scanning reconstructed image, and if the calculated image quality evaluation index does not meet the corresponding threshold condition, judging that the image quality is unqualified; otherwise, inputting the reconstructed image of the CT scanning into the artifact classification model for artifact classification, and if the classification result is that the artifact exists, judging that the image quality is unqualified.
7. A CT scan support method according to claim 3, further comprising the steps of:
after CT scanning, the quality evaluation of the reconstructed image of the CT scanning is carried out, and the method specifically comprises the following steps:
setting image quality evaluation indexes and threshold conditions corresponding to the image quality evaluation indexes, wherein the image quality evaluation indexes comprise: mean value of the image, noise of the image, truncation error of the image, and histogram mean value of the image;
constructing an artifact classification model based on a neural network, and identifying the artifact type in the reconstructed image of the CT scan through the artifact classification model, wherein the artifact classification model comprises the following steps: no artifact, ring artifact, strip artifact, banding artifact, truncation artifact;
after CT scanning, calculating an image quality evaluation index for each CT scanning reconstructed image, and if the calculated image quality evaluation index does not meet the corresponding threshold condition, judging that the image quality is unqualified; otherwise, inputting the reconstructed image of the CT scanning into the artifact classification model for artifact classification, and if the classification result is that the artifact exists, judging that the image quality is unqualified.
8. The method of claim 5, further comprising the steps of:
after CT scanning, the quality evaluation of the reconstructed image of the CT scanning is carried out, and the method specifically comprises the following steps:
setting image quality evaluation indexes and threshold conditions corresponding to the image quality evaluation indexes, wherein the image quality evaluation indexes comprise: mean value of the image, noise of the image, truncation error of the image, and histogram mean value of the image;
constructing an artifact classification model based on a neural network, and identifying the artifact type in the reconstructed image of the CT scan through the artifact classification model, wherein the artifact classification model comprises the following steps: no artifact, ring artifact, strip artifact, banding artifact, truncation artifact;
after CT scanning, calculating an image quality evaluation index for each CT scanning reconstructed image, and if the calculated image quality evaluation index does not meet the corresponding threshold condition, judging that the image quality is unqualified; otherwise, inputting the reconstructed image of the CT scanning into the artifact classification model for artifact classification, and if the classification result is that the artifact exists, judging that the image quality is unqualified.
9. A computer-readable storage medium storing at least one instruction executable by a processor, the at least one instruction, when executed by the processor, implementing the CT scan assist method of any one of claims 1 to 8.
10. A CT scanning apparatus, comprising a memory for storing at least one instruction and a processor for executing the at least one instruction to implement the CT scan assist method of any one of claims 1 to 8.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010168254.XA CN111292378A (en) | 2020-03-12 | 2020-03-12 | CT scanning auxiliary method, device and computer readable storage medium |
PCT/CN2020/108882 WO2021179534A1 (en) | 2020-03-12 | 2020-08-13 | Ct scan auxiliary method and device, and computer readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010168254.XA CN111292378A (en) | 2020-03-12 | 2020-03-12 | CT scanning auxiliary method, device and computer readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111292378A true CN111292378A (en) | 2020-06-16 |
Family
ID=71030267
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010168254.XA Pending CN111292378A (en) | 2020-03-12 | 2020-03-12 | CT scanning auxiliary method, device and computer readable storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111292378A (en) |
WO (1) | WO2021179534A1 (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112541941A (en) * | 2020-12-07 | 2021-03-23 | 明峰医疗系统股份有限公司 | Scanning flow decision method and system based on CT locating sheet |
CN112741643A (en) * | 2020-12-31 | 2021-05-04 | 苏州波影医疗技术有限公司 | CT system capable of automatically positioning and scanning and positioning and scanning method thereof |
CN112862869A (en) * | 2020-12-31 | 2021-05-28 | 上海联影智能医疗科技有限公司 | Image scanning processing method, imaging scanning device, electronic device, and readable medium |
WO2021179534A1 (en) * | 2020-03-12 | 2021-09-16 | 南京安科医疗科技有限公司 | Ct scan auxiliary method and device, and computer readable storage medium |
CN115381471A (en) * | 2022-10-26 | 2022-11-25 | 南方医科大学南方医院 | CT scanning auxiliary system and method based on motion detection |
CN112862869B (en) * | 2020-12-31 | 2024-05-28 | 上海联影智能医疗科技有限公司 | Image scanning processing method, imaging scanning device, electronic device, and readable medium |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114782321B (en) * | 2022-03-24 | 2022-12-06 | 北京医准智能科技有限公司 | Chest CT image selection method, device, equipment and storage medium |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9492685B2 (en) * | 2014-06-13 | 2016-11-15 | Infinitt Healthcare Co., Ltd. | Method and apparatus for controlling and monitoring position of radiation treatment system |
CN106139414A (en) * | 2016-06-23 | 2016-11-23 | 深圳市奥沃医学新技术发展有限公司 | A kind of position monitoring method for radiotherapy system, device and radiotherapy system |
CN106340015A (en) * | 2016-08-30 | 2017-01-18 | 沈阳东软医疗系统有限公司 | Key point positioning method and device |
CN107545309A (en) * | 2016-06-23 | 2018-01-05 | 西门子保健有限责任公司 | Scored using the picture quality of depth generation machine learning model |
CN107789001A (en) * | 2017-10-31 | 2018-03-13 | 上海联影医疗科技有限公司 | A kind of pendulum position method and system for image scanning |
US20180280727A1 (en) * | 2017-03-30 | 2018-10-04 | Shimadzu Corporation | Positioning apparatus and method of positioning |
CN109199387A (en) * | 2018-10-22 | 2019-01-15 | 上海联影医疗科技有限公司 | Scan guide device and scanning bootstrap technique |
CN109508681A (en) * | 2018-11-20 | 2019-03-22 | 北京京东尚科信息技术有限公司 | The method and apparatus for generating human body critical point detection model |
CN109685206A (en) * | 2018-09-30 | 2019-04-26 | 上海联影医疗科技有限公司 | The system and method for generating the neural network model for image procossing |
CN110148454A (en) * | 2019-05-21 | 2019-08-20 | 上海联影医疗科技有限公司 | A kind of pendulum position method, apparatus, server and storage medium |
CN110400617A (en) * | 2018-04-24 | 2019-11-01 | 西门子医疗有限公司 | The combination of imaging and report in medical imaging |
CN110400351A (en) * | 2019-07-30 | 2019-11-01 | 晓智科技(成都)有限公司 | A kind of X-ray front end of emission automatic adjusting method and system |
CN110807755A (en) * | 2018-08-01 | 2020-02-18 | 通用电气公司 | Plane selection using localizer images |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108021919A (en) * | 2016-10-28 | 2018-05-11 | 夏普株式会社 | The image processing apparatus and image processing method of acupuncture point positioning |
CN111292378A (en) * | 2020-03-12 | 2020-06-16 | 南京安科医疗科技有限公司 | CT scanning auxiliary method, device and computer readable storage medium |
2020
- 2020-03-12 CN CN202010168254.XA patent/CN111292378A/en active Pending
- 2020-08-13 WO PCT/CN2020/108882 patent/WO2021179534A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
WO2021179534A1 (en) | 2021-09-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111292378A (en) | CT scanning auxiliary method, device and computer readable storage medium | |
CN111789614B (en) | Imaging system and method | |
JP6833444B2 (en) | Radiation equipment, radiography system, radiography method, and program | |
US9918691B2 (en) | Device and method for determining image quality of a radiogram image | |
US20230063828A1 (en) | Methods and systems for image acquisition, image quality evaluation, and medical image acquisition | |
CN111242947B (en) | CT scanning image quality evaluation method, computer readable storage medium and CT scanning device | |
CN111260647B (en) | CT scanning auxiliary method based on image detection, computer readable storage medium and CT scanning device | |
KR102527440B1 (en) | Image analysis method, segmentation method, bone density measurement method, learning model creation method, and image creation device | |
EP3745950B1 (en) | System and method for detecting anatomical regions | |
CN111524200B (en) | Method, apparatus and medium for segmenting a metal object in a projection image | |
CN108182434B (en) | Image processing method and device | |
CN117084699A (en) | System and method for dose prediction | |
US11730440B2 (en) | Method for controlling a medical imaging examination of a subject, medical imaging system and computer-readable data storage medium | |
CN114529502A (en) | Method and system for depth-based learning for automated subject anatomy and orientation identification | |
EP4184454A1 (en) | Weight estimation of a patient | |
US20230394657A1 (en) | Dynamic image analysis apparatus and recording medium | |
WO2023020609A1 (en) | Systems and methods for medical imaging | |
JP2021072946A (en) | Radiographic apparatus, radiographic system, radiographic method, and program | |
CN115770056A (en) | Imaging system, method | |
CN117883100A (en) | Dual-energy X-ray image motion correction training method and system based on artificial intelligence | |
EP4211662A1 (en) | Determining target object type and position |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||