CN107468267B - Data processing method and medical imaging equipment - Google Patents


Info

Publication number
CN107468267B
CN107468267B (application CN201710684844.6A)
Authority
CN
China
Prior art keywords
respiratory motion
sub
region
projection data
data
Prior art date
Legal status
Active
Application number
CN201710684844.6A
Other languages
Chinese (zh)
Other versions
CN107468267A (en)
Inventor
孙友军
王骥喆
Current Assignee
Shanghai United Imaging Healthcare Co Ltd
Original Assignee
Shanghai United Imaging Healthcare Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai United Imaging Healthcare Co Ltd
Priority to CN201710684844.6A
Publication of CN107468267A
Application granted
Publication of CN107468267B

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00: Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
    • A61B6/52: Devices using data or image processing specially adapted for radiation diagnosis
    • A61B6/5258: Devices using data or image processing specially adapted for radiation diagnosis involving detection or reduction of artifacts or noise
    • A61B6/5264: Devices using data or image processing specially adapted for radiation diagnosis involving detection or reduction of artifacts or noise due to motion
    • A61B6/02: Devices for diagnosis sequentially in different planes; stereoscopic radiation diagnosis
    • A61B6/03: Computerised tomographs
    • A61B6/037: Emission tomography

Abstract

The embodiment of the invention provides a data processing method and a medical imaging device. The embodiment acquires raw PET projection data, identifies the respiratory motion region, and determines the first PET projection data corresponding to that region. A respiratory motion amplitude curve corresponding to the respiratory motion region is acquired, and the region is divided into several sub-regions according to this curve. The portion of the first PET projection data corresponding to each sub-region is divided into a number of data frames according to that sub-region's respiratory motion curve, and each sub-region is reconstructed from its own data frames to obtain a first local reconstructed image per sub-region. Because the number of data frames of each sub-region in the respiratory motion region is proportional to its respiratory motion amplitude, the respiratory artifact correction effect is optimized and the image quality of the reconstructed image after framing is improved, which to some extent solves the prior-art problem of poor reconstructed image quality after framing the respiratory motion region.

Description

Data processing method and medical imaging equipment
[ technical field ]
The present application relates to the technical field of image processing, and in particular to a data processing method and a medical imaging device.
[ background of the invention ]
PET (Positron Emission Tomography) technology has wide medical applications. Raw PET projection data acquired by scanning an examined object can yield useful PET images after framing and reconstruction.
During a PET scan, part of the subject's body is in different respiratory motion states at different scan times; this part is referred to as the respiratory motion region. Because of respiratory artifacts, the PET projection data corresponding to the respiratory motion region must be corrected during reconstruction. The respiratory artifact correction effect is positively correlated with the number of data frames: the more data frames the data are divided into, the better the correction.
The respiratory motion amplitude differs across different parts of the respiratory motion region. In the prior art, the entire respiratory motion region is framed with a uniform number of data frames, so the respiratory artifact correction is poor in parts with large respiratory motion amplitude, and the quality of the reconstructed image after framing suffers.
[ summary of the invention ]
In view of this, embodiments of the present disclosure provide a data processing method and a medical imaging device, so as to solve the prior-art problem that framing the entire respiratory motion region with a uniform number of data frames yields poor respiratory artifact correction in regions with large respiratory motion amplitude and thus poor reconstructed images after framing.
In a first aspect, an embodiment of the present invention provides a data processing method, where the method includes:
acquiring original PET projection data of an object to be detected, which is obtained by long-axis PET scanning;
identifying a respiratory motion region of the detected object, and determining first PET projection data corresponding to the respiratory motion region in the original PET projection data;
acquiring a respiratory motion amplitude curve in a designated direction corresponding to the respiratory motion area, wherein the designated direction is the long axis direction of the detected object;
dividing the respiratory motion area into a plurality of sub-areas according to the respiratory motion amplitude curve;
acquiring sub-region breathing motion curves corresponding to the sub-regions, and dividing part of first PET projection data corresponding to the sub-regions into a plurality of data frames according to the breathing motion curves of the sub-regions, wherein the division number of the data frames of each sub-region is related to the breathing motion amplitude of the sub-region;
and respectively carrying out image reconstruction on part of the first PET projection data corresponding to each sub-region according to the data frame of each sub-region to obtain a first local reconstruction image corresponding to each sub-region.
The above aspect and any possible implementation manner further provide an implementation manner in which acquiring a respiratory motion amplitude curve in a designated direction corresponding to the respiratory motion region includes:
generating an initial respiratory motion curve corresponding to the respiratory motion region according to the first PET projection data corresponding to the respiratory motion region;
screening inhalation phase PET projection data at the end of inspiration and exhalation phase PET projection data at the end of expiration from the first PET projection data according to the initial respiratory motion curve;
reconstructing based on the inhalation phase PET projection data to obtain an inhalation phase PET reconstructed image, and reconstructing based on the exhalation phase PET projection data to obtain an exhalation phase PET reconstructed image;
and determining a respiratory motion amplitude curve in the designated direction corresponding to the respiratory motion region according to the inhalation phase PET reconstructed image and the exhalation phase PET reconstructed image.
The above aspect and any possible implementation manner further provide an implementation manner in which the respiratory motion region is divided into a plurality of sub-regions according to the respiratory motion amplitude curve, including:
searching at least one first continuous line segment with the minimum respiratory motion amplitude larger than an amplitude threshold value and at least one second continuous line segment with the maximum respiratory motion amplitude smaller than or equal to the amplitude threshold value on the respiratory motion amplitude curve;
and dividing the area of the respiratory motion area corresponding to each first continuous line segment into a sub-area, and dividing the area of the respiratory motion area corresponding to each second continuous line segment into a sub-area.
The above-described aspect and any possible implementation manner further provide an implementation manner that, according to a respiratory motion curve of each sub-region, divides a portion of the first PET projection data corresponding to the corresponding sub-region into a plurality of data frames, including:
determining the maximum respiratory motion amplitude of the corresponding sub-region according to the sub-region respiratory motion curve;
acquiring an amplitude reference value;
comparing the maximum respiratory motion amplitude of the corresponding sub-region with the amplitude reference value to obtain a comparison result, and determining the number of data frame segmentation parts of the corresponding sub-region according to the comparison result;
and carrying out data frame division on part of the first PET projection data corresponding to the corresponding sub-region according to the data frame division number.
The above aspect and any possible implementation manner further provide an implementation manner, wherein identifying a respiratory motion region of the subject comprises:
selecting projection data with different phases from the original PET projection data as first identification basic data;
projecting based on the first identification basic data to obtain projection images corresponding to the different phases;
and identifying a respiratory motion area of the detected object according to the projection image.
The above aspect and any possible implementation manner further provide an implementation manner, wherein identifying a respiratory motion region of the subject comprises:
selecting data of a plurality of different phases from the original PET projection data as second identification basic data;
carrying out image reconstruction based on the second identification basic data to obtain reconstructed images corresponding to the different phases;
and identifying a respiratory motion region of the detected object according to the reconstructed image.
The above aspect and any possible implementation manner further provide an implementation manner, wherein identifying a respiratory motion region of the subject comprises:
acquiring anatomical image data of the subject;
carrying out image segmentation on the anatomical image data to obtain a segmented image corresponding to a respiratory motion area;
and mapping the segmented image corresponding to the respiratory motion area to a coordinate system corresponding to the original PET projection data to obtain a mapping area, and taking the mapping area as the respiratory motion area of the detected object.
The above-described aspects and any possible implementations further provide an implementation in which the anatomical image data is CT image data or MR image data.
The above-described aspects and any possible implementations further provide an implementation in which the method further includes:
determining second PET projection data corresponding to other areas except the respiratory motion area according to the original PET projection data and the first PET projection data corresponding to the respiratory motion area;
carrying out image reconstruction based on the second PET projection data to obtain second local reconstructed images corresponding to the other regions;
and merging the second local reconstructed image and the first local reconstructed images corresponding to all the sub-regions to obtain a merged image.
In a second aspect, an embodiment of the present invention provides a medical imaging apparatus, including:
a processor;
a memory for storing the processor-executable instructions;
the processor is configured to:
acquiring original PET projection data of an object to be examined, which is obtained through PET scanning;
identifying a respiratory motion region of the detected object, and determining first PET projection data corresponding to the respiratory motion region in the original PET projection data;
acquiring a respiratory motion amplitude curve in a designated direction corresponding to the respiratory motion area, wherein the designated direction is the long axis direction of the detected object;
dividing the respiratory motion area into a plurality of sub-areas according to the respiratory motion amplitude curve;
acquiring sub-region respiratory motion curves corresponding to the sub-regions, and dividing the first PET projection data corresponding to the sub-regions into a plurality of data frames according to the sub-region respiratory motion curves, wherein the division number of the data frames of each sub-region is related to the respiratory motion amplitude of the sub-region;
and respectively carrying out image reconstruction on part of the first PET projection data corresponding to each sub-region according to the data frame of each sub-region to obtain a first local reconstruction image corresponding to each sub-region.
The embodiment of the invention has the following beneficial effects:
in the embodiment of the invention, the raw PET projection data are acquired, the respiratory motion region of the subject is identified, and the first PET projection data corresponding to that region are determined. A respiratory motion amplitude curve in the designated direction is acquired, and the respiratory motion region is divided into several sub-regions according to this curve. For each sub-region, a sub-region respiratory motion curve is acquired, the portion of the first PET projection data corresponding to that sub-region is divided into a number of data frames according to its curve, and each sub-region is reconstructed from its own data frames to obtain a first local reconstructed image. Because the entire respiratory motion region is divided into sub-regions by respiratory motion amplitude, and the number of data frames of each sub-region is determined by its amplitude, the number of data frames is proportional to the respiratory motion amplitude; the respiratory artifact correction of regions with larger amplitude is thereby optimized, and the image quality of the reconstructed image after framing is improved.
[ description of the drawings ]
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments are briefly described below. The drawings in the following description are only some embodiments of the present invention; other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a diagram illustrating a first process of a data processing method according to an embodiment of the present invention.
Fig. 2 is a schematic view of a respiratory motion region provided by an embodiment of the present invention.
Fig. 3 is a schematic side profile view of a respiratory motion amplitude curve corresponding to the respiratory motion region in fig. 2.
Fig. 4 is a schematic side profile of the respiratory motion amplitude curve corresponding to fig. 3 after segmentation.
Fig. 5 is a schematic view of the division of the respiratory motion region corresponding to fig. 4.
Fig. 6 is a schematic diagram of a respiratory motion curve corresponding to the respiratory motion region 1 in fig. 5.
Fig. 7 is a schematic diagram of a respiratory motion curve corresponding to the respiratory motion region 2 in fig. 5.
Fig. 8 is a diagram illustrating a second flow of a data processing method according to an embodiment of the present invention.
Fig. 9 is a simplified block diagram of a medical imaging device.
[ detailed description of the embodiments ]
For better understanding of the technical solutions of the present invention, the following detailed descriptions of the embodiments of the present invention are provided with reference to the accompanying drawings.
It should be understood that the described embodiments are only some embodiments of the invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the examples of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be understood that the term "and/or" used herein merely describes an association between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the preceding and following objects.
The word "if" as used herein may be interpreted as "when", "upon", "in response to determining", or "in response to detecting", depending on the context. Similarly, the phrases "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined", "in response to determining", "when (a stated condition or event) is detected", or "in response to detecting (a stated condition or event)", depending on the context.
Example one
Fig. 1 is a diagram illustrating a first process of a data processing method according to an embodiment of the present invention. As shown in fig. 1, in this embodiment, the data processing method includes the following steps:
s101, acquiring original PET projection data of an object to be detected through PET scanning.
S102, identifying a respiratory motion area of the detected object, and determining first PET projection data corresponding to the respiratory motion area in the original PET projection data.
And S103, acquiring a respiratory motion amplitude curve in a designated direction corresponding to the respiratory motion area, wherein the designated direction is the long axis direction of the detected object.
And S104, dividing the respiratory motion area into a plurality of sub-areas according to the respiratory motion amplitude curve.
S105, obtaining sub-region breathing motion curves corresponding to the sub-regions, and dividing part of the first PET projection data corresponding to the sub-regions into a plurality of data frames according to the sub-region breathing motion curves, wherein the division number of the data frames of each sub-region is related to the breathing motion amplitude of the sub-region.
And S106, respectively carrying out image reconstruction on part of the first PET projection data corresponding to each sub-region according to the data frame of each sub-region to obtain a first local reconstruction image corresponding to each sub-region.
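The flow of steps S101 to S106 can be sketched as a small orchestration function. All helper callables here (identify_region, amplitude_curve, split_regions, gate_sub_region, reconstruct) are hypothetical placeholders standing in for the operations the text describes, not APIs defined by the patent:

```python
def process_region(raw_data, identify_region, amplitude_curve,
                   split_regions, gate_sub_region, reconstruct):
    """Sketch of steps S101-S106; helper callables are hypothetical."""
    region_data = identify_region(raw_data)          # S102: first PET projection data
    curve = amplitude_curve(region_data)             # S103: amplitude along long axis
    sub_regions = split_regions(curve, region_data)  # S104: divide into sub-regions
    local_images = []
    for sub_data in sub_regions:
        frames = gate_sub_region(sub_data)           # S105: per-sub-region framing
        local_images.append(reconstruct(frames))     # S106: per-sub-region recon
    return local_images                              # first local reconstructed images
```

The key structural point is that framing and reconstruction happen inside the per-sub-region loop, so each sub-region gets its own frame count.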
The subject may be the entire human body, or a large part of it, such as the part of the body above the legs.
The long axis direction of the subject may be a direction from the head of the subject to the foot of the subject.
The raw PET projection data may be obtained by scanning the subject with a PET device. The PET device may be a long-axis PET device with an axial scan range of 1 m or more, or a short-axis PET device with an axial scan range of less than 1 m.
The respiratory motion region generally refers to a portion of the human body below the mouth and above the legs. Fig. 2 is a schematic view of a respiratory motion region provided by an embodiment of the present invention.
The PET projection data obtained by scanning the respiratory motion region at different respiratory phases are different. For example, PET projection data scanned for a respiratory motion region at the end of inspiration is different from PET projection data scanned for a respiratory motion region at the end of expiration.
The respiratory motion region can be identified from raw PET projection data or CT image data of the examination object.
In one exemplary implementation, identifying a respiratory motion region of a subject may include: selecting data of a plurality of different phases from the original PET projection data as first identification basic data; projecting based on the first identification basic data to obtain projection images corresponding to the different phases; from the projection images, a respiratory motion region of the subject is identified.
For example: select the end-expiration PET projection data and the end-inspiration PET projection data from the raw PET projection data; project the end-expiration data to obtain an expiration-phase projection image and the end-inspiration data to obtain an inspiration-phase projection image; then compare the two projection images. The region with the larger difference between them is the respiratory motion region.
In one exemplary implementation, identifying a respiratory motion region of a subject may include: selecting data of a plurality of different phases from the original PET projection data as second identification basic data; carrying out image reconstruction based on the second identification basic data to obtain reconstructed images corresponding to the different phases; from the reconstructed image, a respiratory motion region of the subject is identified.
For example: select the end-expiration PET projection data and the end-inspiration PET projection data from the raw PET projection data; reconstruct the end-expiration data to obtain an expiration-phase PET reconstructed image and the end-inspiration data to obtain an inspiration-phase PET reconstructed image; then compare the two reconstructed images. The region with the larger difference between them is the respiratory motion region.
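A minimal sketch of this comparison, assuming the two gated reconstructions are (z, y, x) arrays and using the mean absolute per-slice difference as an illustrative difference measure (the patent does not prescribe a specific one):

```python
import numpy as np

def find_respiratory_motion_region(img_expiration, img_inspiration, diff_threshold):
    """Return the axial (z) extent of the respiratory motion region.

    A z-slice is considered part of the motion region when the mean
    absolute difference between the end-expiration and end-inspiration
    images of that slice exceeds diff_threshold.
    """
    diff = np.abs(img_expiration - img_inspiration).mean(axis=(1, 2))
    moving = np.flatnonzero(diff > diff_threshold)
    if moving.size == 0:
        return None  # no motion region detected
    return int(moving.min()), int(moving.max())
```

The returned index pair delimits the slices whose PET projection data would be treated as the first PET projection data.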
In one exemplary implementation, identifying a respiratory motion region of a subject may include: acquiring anatomical image data of a subject; carrying out image segmentation on the anatomical image data to obtain a segmented image corresponding to the respiratory motion area; and mapping the segmented image corresponding to the respiratory motion area to a coordinate system corresponding to the original PET projection data to obtain a mapping area, and taking the mapping area as the respiratory motion area of the detected object.
The anatomical image data may be CT image data or MR (Magnetic Resonance) image data.
In one exemplary implementation, acquiring a respiratory motion amplitude curve in a designated direction corresponding to a respiratory motion region includes: generating an initial respiratory motion curve corresponding to the respiratory motion area according to the first PET projection data; screening inhalation phase PET projection data at the end of inhalation and exhalation phase PET projection data at the end of exhalation from the first PET projection data according to the initial respiratory motion curve; reconstructing based on the inhalation phase PET projection data to obtain an inhalation phase PET reconstructed image, and reconstructing based on the exhalation phase PET projection data to obtain an exhalation phase PET reconstructed image; and determining a respiratory motion amplitude curve in the designated direction corresponding to the respiratory motion region according to the inhalation phase PET reconstructed image and the exhalation phase PET reconstructed image.
For example, fig. 3 is a schematic side profile view of the respiratory motion amplitude curve corresponding to the respiratory motion region in fig. 2. Referring to fig. 3, the horizontal axis represents respiratory motion amplitude, and the vertical axis (marked Z in fig. 3) represents position along the designated direction, which in fig. 3 is the direction from the head of the subject to the feet.
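One plausible way to derive such an amplitude curve from the two gated reconstructions is the per-slice mean absolute difference along the long (z) axis; this particular measure is an assumption for illustration, not the patent's prescribed formula:

```python
import numpy as np

def respiratory_amplitude_curve(img_inhale, img_exhale):
    """Respiratory motion amplitude sampled along the long (z) axis.

    Computed here as the mean absolute difference between the
    end-inspiration and end-expiration reconstructions of each
    transaxial slice (an illustrative choice of amplitude measure).
    """
    diff = np.abs(np.asarray(img_inhale, dtype=float)
                  - np.asarray(img_exhale, dtype=float))
    return diff.mean(axis=(1, 2))  # one amplitude value per z position
```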
In one exemplary implementation, dividing the respiratory motion region into a number of sub-regions according to the respiratory motion amplitude curve may include: searching at least one first continuous line segment with the minimum respiratory motion amplitude larger than an amplitude threshold value and at least one second continuous line segment with the maximum respiratory motion amplitude smaller than or equal to the amplitude threshold value on a respiratory motion amplitude curve; and dividing the area of the respiratory motion area corresponding to each first continuous line segment into a sub-area, and dividing the area of the respiratory motion area corresponding to each second continuous line segment into a sub-area.
Wherein the amplitude threshold may be set based on empirical values.
For example. Fig. 4 is a schematic side profile of the respiratory motion amplitude curve corresponding to fig. 3 after segmentation. Referring to fig. 4, in the line segment of the respiratory motion amplitude curve corresponding to the part above the dividing line, the minimum respiratory motion amplitude is greater than the amplitude threshold; and in the line segment of the respiratory motion amplitude curve corresponding to the part below the dividing line, the maximum respiratory motion amplitude is less than or equal to the amplitude threshold value.
Fig. 5 is a schematic view of the division of the respiratory motion region corresponding to fig. 4. Referring to fig. 5, respiratory motion region 1 corresponds to the portion above the dividing line in fig. 4, and respiratory motion region 2 corresponds to the portion below the dividing line. Comparing fig. 2 and fig. 5, the entire respiratory motion region of fig. 2 has been divided into the two sub-regions shown in fig. 5 according to the division of the respiratory motion amplitude curve.
After the division according to fig. 5, please refer to fig. 6 and fig. 7 for the respiratory motion curves corresponding to the sub-regions. Fig. 6 is a schematic diagram of a respiratory motion curve corresponding to the respiratory motion region 1 in fig. 5. Fig. 7 is a schematic diagram of a respiratory motion curve corresponding to the respiratory motion region 2 in fig. 5.
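The segment search described above (maximal runs of the amplitude curve lying entirely above the threshold, or entirely at or below it) can be sketched as follows, with index ranges standing in for the axial extent of each sub-region:

```python
import numpy as np

def split_by_amplitude(amplitude, threshold):
    """Split a 1-D respiratory motion amplitude curve into maximal runs.

    Returns a list of (start, stop, is_above) tuples with stop exclusive:
    is_above=True marks a 'first continuous line segment' (minimum
    amplitude above the threshold), is_above=False a 'second' one
    (maximum amplitude at or below the threshold).
    """
    above = np.asarray(amplitude) > threshold
    segments = []
    start = 0
    for i in range(1, len(above) + 1):
        # close the current run when the above/below state flips or the curve ends
        if i == len(above) or above[i] != above[start]:
            segments.append((start, i, bool(above[start])))
            start = i
    return segments
```

Each returned range would then be cut out of the respiratory motion region as one sub-region.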
In one exemplary implementation, dividing a portion of the first PET projection data corresponding to each sub-region into a plurality of data frames according to the respiratory motion curve of each sub-region includes: determining the maximum respiratory motion amplitude of the corresponding sub-area according to the respiratory motion curve of each sub-area; acquiring an amplitude reference value; comparing the maximum respiratory motion amplitude of the corresponding sub-region with an amplitude reference value to obtain a comparison result, and determining the number of data frame segmentation parts of the corresponding sub-region according to the comparison result; and dividing the data frame of the partial first PET projection data corresponding to the corresponding sub-region according to the data frame division number.
For example, suppose the maximum respiratory motion amplitude of a sub-region corresponds to a value of 10 and the amplitude reference value corresponds to 3. The maximum amplitude is then about 3.33 times the reference value, so the number of data frames is determined to be 3 (3.33 rounded to the nearest integer), and the portion of the first PET projection data corresponding to the sub-region is divided into 3 data frames.
For another example, if the maximum respiratory motion amplitude of a sub-region corresponds to 3.8 and the amplitude reference value corresponds to 2, the maximum amplitude is about 1.9 times the reference value, so the number of data frames is determined to be 2 (1.9 rounded to the nearest integer), and the portion of the first PET projection data corresponding to the sub-region is divided into 2 data frames.
The above method of determining the number of data frames from the comparison between a sub-region's maximum respiratory motion amplitude and the amplitude reference value is merely exemplary; the framing method is not limited to it.
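The frame-count rule from the two numeric examples (ratio of maximum amplitude to reference value, rounded) can be sketched as below, together with one hypothetical way to gate list-mode events into that many frames by binning the respiratory amplitude at each event's detection time; the binning scheme is an assumption for illustration:

```python
import numpy as np

def frame_count(max_amplitude, reference):
    """Number of data frames for a sub-region: the ratio of its maximum
    respiratory motion amplitude to the reference value, rounded to the
    nearest integer (at least 1 frame)."""
    return max(1, round(max_amplitude / reference))

def gate_events(event_amplitudes, n_frames):
    """Assign each event a frame index by dividing the observed amplitude
    range into n_frames equal bins (illustrative gating scheme)."""
    amp = np.asarray(event_amplitudes, dtype=float)
    lo, hi = amp.min(), amp.max()
    bins = ((amp - lo) / (hi - lo + 1e-12) * n_frames).astype(int)
    return np.minimum(bins, n_frames - 1)  # clamp the topmost amplitude
```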
The embodiment of the invention divides different data frames according to different breathing amplitude curves so as to ensure the breathing artifact correction effect of each sub-region, in particular the breathing artifact correction effect of the sub-region with larger breathing motion amplitude.
In the embodiment of the present invention, the timing information of different LORs (Lines of Response) may also be applied to the whole-body images of different frames to reconstruct a whole-body image corresponding to the maximum number of frames.
In the embodiment shown in fig. 1, step S104 divides the entire respiratory motion region into several sub-regions according to respiratory motion amplitude, step S105 determines the number of data frames of each sub-region according to its amplitude, and step S106 reconstructs each sub-region on the basis of the framing in step S105. The number of data frames of each sub-region is therefore proportional to its respiratory motion amplitude, which optimizes the respiratory artifact correction of regions with larger amplitude within the respiratory motion region and improves the image quality of the reconstructed image after framing.
Fig. 8 is a diagram illustrating a second flow of a data processing method according to an embodiment of the present invention. As shown in fig. 8, in this embodiment, a data processing method is applied to a terminal, and the method includes the following steps:
S801, acquiring original PET projection data of the examined object obtained through PET scanning.
S802, identifying a respiratory motion region of the examined object, and determining first PET projection data corresponding to the respiratory motion region in the original PET projection data.
S803, acquiring a respiratory motion amplitude curve in a designated direction corresponding to the respiratory motion region, wherein the designated direction is the long axis direction of the examined object.
S804, dividing the respiratory motion region into a plurality of sub-regions according to the respiratory motion amplitude curve.
S805, acquiring sub-region respiratory motion curves corresponding to the plurality of sub-regions, and dividing the part of the first PET projection data corresponding to each sub-region into a plurality of data frames according to its sub-region respiratory motion curve, wherein the data frame division number of each sub-region is related to the respiratory motion amplitude of the sub-region.
S806, performing image reconstruction on the part of the first PET projection data corresponding to each sub-region according to the data frames of that sub-region, to obtain a first local reconstructed image corresponding to each sub-region.
S807, determining second PET projection data corresponding to the regions other than the respiratory motion region according to the original PET projection data and the first PET projection data.
S808, performing image reconstruction based on the second PET projection data to obtain second local reconstructed images corresponding to the other regions.
S809, merging the second local reconstructed images and the first local reconstructed images corresponding to all the sub-regions to obtain a merged image.
For convenience of description, the regions other than the respiratory motion region are referred to as non-respiratory motion regions.
In the embodiment shown in fig. 8, based on the embodiment shown in fig. 1, image reconstruction is performed on the non-respiratory motion region based on the second PET projection data corresponding to it, to obtain a second local reconstructed image corresponding to the non-respiratory motion region, and the second local reconstructed image and the first local reconstructed images corresponding to all sub-regions are further merged to obtain a merged image. Since the embodiment shown in fig. 1 improves the image quality of the first local reconstructed images, the embodiment shown in fig. 8 correspondingly improves the image quality of the merged image obtained from them.
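Steps S801 through S809 can be sketched end to end as follows (a minimal NumPy illustration; `reconstruct` is a stand-in averaging step rather than an actual PET reconstruction algorithm, and the data layout — one row per gated time sample, one column per voxel position — is assumed for illustration only):

```python
import numpy as np

def reconstruct(projection_data):
    # Stand-in for a real reconstruction (e.g. an iterative algorithm):
    # collapse the time samples into one image vector.
    return projection_data.mean(axis=0)

def process(raw_data, motion_mask, frame_labels):
    first = raw_data[:, motion_mask]       # S802: motion-region projection data
    second = raw_data[:, ~motion_mask]     # S807: data for the remaining regions
    # S805/S806: reconstruct each data frame of the motion region, then combine
    frames = [reconstruct(first[frame_labels == k])
              for k in np.unique(frame_labels)]
    local_first = np.mean(frames, axis=0)  # stand-in for motion-corrected combination
    local_second = reconstruct(second)     # S808: second local reconstructed image
    merged = np.empty(motion_mask.size)    # S809: merge the local images
    merged[motion_mask] = local_first
    merged[~motion_mask] = local_second
    return merged

raw = np.ones((4, 6))                      # 4 time samples, 6 voxel positions
mask = np.array([True, True, True, False, False, False])
labels = np.array([0, 0, 1, 1])            # two data frames for the motion region
print(process(raw, mask, labels))          # → [1. 1. 1. 1. 1. 1.]
```

The skeleton only shows the data flow: in a real device, each gated frame of the motion region would be reconstructed and motion-corrected before merging with the static region.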
The data processing method provided by the embodiments of the invention can be applied to a PET imaging device and a multi-modality device comprising the PET imaging device.
In the data processing method provided by the embodiments of the invention, original PET projection data are acquired; the respiratory motion region of the examined object is identified and the first PET projection data corresponding to it are determined; a respiratory motion amplitude curve in a designated direction corresponding to the respiratory motion region is acquired; the respiratory motion region is divided into a plurality of sub-regions according to the respiratory motion amplitude curve; sub-region respiratory motion curves are acquired, and the part of the first PET projection data corresponding to each sub-region is divided into a plurality of data frames according to its respiratory motion curve; and image reconstruction is performed for each sub-region according to its data frames to obtain the first local reconstructed images. Because the whole respiratory motion region is divided into sub-regions according to the respiratory motion amplitude, and the data frame division number of each sub-region is determined by, and in direct proportion to, its respiratory motion amplitude, the respiratory artifact correction effect of the regions with larger respiratory motion amplitude is optimized and the image quality of the reconstructed image after framing is improved.
Example two
An embodiment of the present invention provides a medical imaging apparatus, including: a processor; a memory for storing processor-executable instructions; the processor is configured to: acquiring original PET projection data of an object to be examined, which is obtained through PET scanning; identifying a respiratory motion region of an object to be examined, and determining first PET projection data corresponding to the respiratory motion region in original PET projection data; acquiring a respiratory motion amplitude curve in a designated direction corresponding to the respiratory motion area, wherein the designated direction is the long axis direction of the detected object; dividing a respiratory motion area into a plurality of sub-areas according to the respiratory motion amplitude curve; acquiring sub-region respiratory motion curves corresponding to a plurality of sub-regions, and dividing part of first PET projection data corresponding to the sub-regions into a plurality of data frames according to the sub-region respiratory motion curves, wherein the division number of the data frames of each sub-region is related to the respiratory motion amplitude of the sub-region; and respectively carrying out image reconstruction on part of the first PET projection data corresponding to each sub-region according to the data frame of each sub-region to obtain a first local reconstruction image corresponding to each sub-region.
The medical imaging device may be a PET imaging device or a multi-modality device with a PET imaging device, among others.
Fig. 9 is a simplified block diagram of a medical imaging device. Referring to fig. 9, the medical imaging device 900 may include a processor 901 coupled to one or more data storage devices, which may include a storage medium 906 and a memory unit 904. The medical imaging device 900 may also include an input interface 905 and an output interface 907 for communicating with another device or system. Program code executed by the CPU of the processor 901 may be stored in the memory unit 904 or the storage medium 906.
The processor 901 of the medical imaging device 900 invokes program code stored in the memory unit 904 or the storage medium 906 to perform the following steps:
acquiring original PET projection data of an object to be examined, which is obtained through PET scanning;
identifying a respiratory motion region of an object to be examined, and determining first PET projection data corresponding to the respiratory motion region in original PET projection data;
acquiring a respiratory motion amplitude curve in a designated direction corresponding to the respiratory motion area, wherein the designated direction is the long axis direction of the detected object;
dividing a respiratory motion area into a plurality of sub-areas according to the respiratory motion amplitude curve;
acquiring sub-region respiratory motion curves corresponding to a plurality of sub-regions, and dividing part of first PET projection data corresponding to the sub-regions into a plurality of data frames according to the sub-region respiratory motion curves, wherein the division number of the data frames of each sub-region is related to the respiratory motion amplitude of the sub-region;
and respectively carrying out image reconstruction on part of the first PET projection data corresponding to each sub-region according to the data frame of each sub-region to obtain a first local reconstruction image corresponding to each sub-region.
In an exemplary implementation, the processor 901 may be further configured to perform the following steps:
generating an initial respiratory motion curve corresponding to the respiratory motion area according to the first PET projection data corresponding to the respiratory motion area;
screening inhalation phase PET projection data at the end of inhalation and exhalation phase PET projection data at the end of exhalation from the first PET projection data according to the initial respiratory motion curve;
reconstructing based on the inhalation phase PET projection data to obtain an inhalation phase PET reconstructed image, and reconstructing based on the exhalation phase PET projection data to obtain an exhalation phase PET reconstructed image;
and determining a respiratory motion amplitude curve in the designated direction corresponding to the respiratory motion region according to the inhalation phase PET reconstructed image and the exhalation phase PET reconstructed image.
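One simple way to derive such a per-position amplitude curve from the two gated reconstructions is sketched below (an assumed proxy metric — the mean absolute voxel difference per long-axis slice — since the text does not fix the exact comparison between the inhalation-phase and exhalation-phase images):

```python
import numpy as np

def amplitude_curve(inhale_img, exhale_img):
    # Images shaped (z, y, x), z being the long-axis direction of the
    # examined object; return one motion-amplitude value per z position.
    diff = np.abs(inhale_img.astype(float) - exhale_img.astype(float))
    return diff.mean(axis=(1, 2))

# Toy volumes: the two gated images differ only in the middle slices
inhale = np.zeros((4, 2, 2))
exhale = np.zeros((4, 2, 2))
exhale[1:3] = 2.0
print(amplitude_curve(inhale, exhale))  # → [0. 2. 2. 0.]
```

Positions where the end-inspiration and end-expiration images differ most are assigned the largest amplitude, matching the intent of the step above.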
In an exemplary implementation, the processor 901 may be further configured to perform the following steps:
searching at least one first continuous line segment with the minimum respiratory motion amplitude larger than an amplitude threshold value and at least one second continuous line segment with the maximum respiratory motion amplitude smaller than or equal to the amplitude threshold value on a respiratory motion amplitude curve;
and dividing the area of the respiratory motion area corresponding to each first continuous line segment into a sub-area, and dividing the area of the respiratory motion area corresponding to each second continuous line segment into a sub-area.
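The search for first and second continuous line segments can be sketched as follows (a minimal sketch on a discretely sampled amplitude curve; the function name and tuple format are illustrative):

```python
import numpy as np

def split_subregions(amplitude, threshold):
    # One sub-region per continuous run of positions whose amplitude is
    # above the threshold (first segments) or at/below it (second segments).
    above = amplitude > threshold
    edges = np.flatnonzero(np.diff(above.astype(int))) + 1  # run boundaries
    bounds = np.concatenate(([0], edges, [amplitude.size]))
    return [(int(s), int(e), bool(above[s]))
            for s, e in zip(bounds[:-1], bounds[1:])]

amp = np.array([0.2, 0.3, 1.5, 2.0, 1.8, 0.4, 0.1, 0.0])
print(split_subregions(amp, 1.0))  # → [(0, 2, False), (2, 5, True), (5, 8, False)]
```

Each returned tuple is a half-open index range along the long axis plus a flag marking whether it is a high-amplitude (first) or low-amplitude (second) sub-region.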
In an exemplary implementation, the processor 901 may be further configured to perform the following steps:
determining the maximum respiratory motion amplitude of the corresponding sub-area according to the respiratory motion curve of each sub-area;
acquiring an amplitude reference value;
comparing the maximum respiratory motion amplitude of the corresponding sub-region with the amplitude reference value to obtain a comparison result, and determining the data frame division number of the corresponding sub-region according to the comparison result;
and dividing the part of the first PET projection data corresponding to the corresponding sub-region into data frames according to the data frame division number.
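Once the data frame division number is known, one way to carry out the division is amplitude gating — binning time samples of the sub-region's respiratory motion curve into equal-width amplitude bins (an assumed gating scheme; the text requires only that the frame count follow the motion amplitude):

```python
import numpy as np

def assign_frames(respiratory_curve, n_frames):
    # Equal-width amplitude bins between the curve's extremes; each time
    # sample (and hence its projection data) is labeled with a frame index
    # in 0 .. n_frames - 1.
    lo, hi = respiratory_curve.min(), respiratory_curve.max()
    bins = np.linspace(lo, hi, n_frames + 1)
    return np.digitize(respiratory_curve, bins[1:-1])

curve = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 3.0, 1.0])
print(assign_frames(curve, 2))  # → [0 0 1 1 1 1 0]
```

The projection data acquired at the time samples sharing a label form one data frame; phase-based gating would be an equally valid realization of this step.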
In an exemplary implementation, the processor 901 may be further configured to perform the following steps:
selecting data of a plurality of different phases from the original PET projection data as first identification basic data;
projecting based on the first identification basic data to obtain projection images corresponding to the different phases;
from the projection images, a respiratory motion region of the subject is identified.
In an exemplary implementation, the processor 901 may be further configured to perform the following steps:
selecting data of a plurality of different phases from the original PET projection data as second identification basic data;
carrying out image reconstruction based on the second identification basic data to obtain reconstructed images corresponding to the different phases;
from the reconstructed image, a respiratory motion region of the subject is identified.
In an exemplary implementation, the processor 901 may be further configured to perform the following steps:
acquiring anatomical image data of a subject;
carrying out image segmentation on the anatomical image data to obtain a segmented image corresponding to the respiratory motion area;
and mapping the segmented image corresponding to the respiratory motion area to a coordinate system corresponding to the original PET projection data to obtain a mapping area, and taking the mapping area as the respiratory motion area of the detected object.
The anatomical image data may be CT image data or MR image data, among others.
In an exemplary implementation, the processor 901 may be further configured to perform the following steps:
determining second PET projection data corresponding to other regions except the respiratory motion region according to the original PET projection data and the first PET projection data;
carrying out image reconstruction based on the second PET projection data to obtain second local reconstruction images corresponding to other areas;
and merging the second local reconstructed image and the first local reconstructed images corresponding to all the sub-regions to obtain a merged image.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the system, the apparatus and the module described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present invention, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical division, and other divisions may be realized in practice, for example, a plurality of modules or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be in an electrical, mechanical or other form.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each module may exist alone physically, or two or more modules are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to execute some of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (10)

1. A method of data processing, the method comprising:
acquiring original PET projection data of an object to be examined, which is obtained through PET scanning;
identifying a respiratory motion region of the detected object, and determining first PET projection data corresponding to the respiratory motion region in the original PET projection data;
acquiring a respiratory motion amplitude curve in a designated direction corresponding to the respiratory motion area, wherein the designated direction is the long axis direction of the detected object;
dividing the respiratory motion area into a plurality of sub-areas according to the respiratory motion amplitude curve;
acquiring sub-region breathing motion curves corresponding to the sub-regions, and dividing part of first PET projection data corresponding to the sub-regions into a plurality of data frames according to the breathing motion curves of the sub-regions, wherein the division number of the data frames of each sub-region is related to the breathing motion amplitude of the sub-region;
and respectively carrying out image reconstruction on partial first PET projection data corresponding to each sub-region according to the data frame of each sub-region to obtain a first local reconstruction image corresponding to each sub-region.
2. The method of claim 1, wherein obtaining a respiratory motion amplitude curve in a given direction corresponding to the respiratory motion region comprises:
generating an initial respiratory motion curve corresponding to the respiratory motion region according to the first PET projection data corresponding to the respiratory motion region;
screening inhalation phase PET projection data at the end of inspiration and exhalation phase PET projection data at the end of expiration from the first PET projection data according to the initial respiratory motion curve;
reconstructing based on the inhalation phase PET projection data to obtain an inhalation phase PET reconstructed image, and reconstructing based on the exhalation phase PET projection data to obtain an exhalation phase PET reconstructed image;
and determining a respiratory motion amplitude curve in the designated direction corresponding to the respiratory motion region according to the inhalation phase PET reconstructed image and the exhalation phase PET reconstructed image.
3. The method of claim 1, wherein dividing the respiratory motion region into a number of sub-regions according to the respiratory motion amplitude profile comprises:
searching at least one first continuous line segment with the minimum respiratory motion amplitude larger than an amplitude threshold value and at least one second continuous line segment with the maximum respiratory motion amplitude smaller than or equal to the amplitude threshold value on the respiratory motion amplitude curve;
and dividing the area of the respiratory motion area corresponding to each first continuous line segment into a sub-area, and dividing the area of the respiratory motion area corresponding to each second continuous line segment into a sub-area.
4. The method of claim 1, wherein dividing the portion of the first PET projection data corresponding to each sub-region into a plurality of data frames according to the respiratory motion curve of each sub-region comprises:
determining the maximum respiratory motion amplitude of the corresponding sub-area according to the respiratory motion curve of each sub-area;
acquiring an amplitude reference value;
comparing the maximum respiratory motion amplitude of the corresponding sub-region with the amplitude reference value to obtain a comparison result, and determining the number of data frame segmentation parts of the corresponding sub-region according to the comparison result;
and carrying out data frame division on part of the first PET projection data corresponding to the corresponding sub-region according to the data frame division number.
5. The method of claim 1, wherein identifying the subject's respiratory motion region comprises:
selecting projection data with different phases from the original PET projection data as first identification basic data;
projecting based on the first identification basic data to obtain projection images corresponding to the different phases;
and identifying a respiratory motion region of the detected object according to the projection images.
6. The method of claim 1, wherein identifying the subject's respiratory motion region comprises:
selecting data of a plurality of different phases from the original PET projection data as second identification basic data;
carrying out image reconstruction based on the second identification basic data to obtain reconstructed images corresponding to the different phases;
and identifying the respiratory motion region of the detected object according to the reconstructed images corresponding to the different phases respectively.
7. The method of claim 1, wherein identifying the subject's respiratory motion region comprises:
acquiring anatomical image data of the subject;
carrying out image segmentation on the anatomical image data to obtain a segmented image corresponding to a respiratory motion area;
and mapping the segmented image corresponding to the respiratory motion area to a coordinate system corresponding to the original PET projection data to obtain a mapping area, and taking the mapping area as the respiratory motion area of the detected object.
8. The method of claim 7, wherein the anatomical image data is CT image data or MR image data.
9. The method of claim 1, further comprising:
determining second PET projection data corresponding to other areas except the respiratory motion area according to the original PET projection data and the first PET projection data corresponding to the respiratory motion area;
carrying out image reconstruction based on the second PET projection data to obtain second local reconstructed images corresponding to the other regions;
and merging the second local reconstructed image and the first local reconstructed images corresponding to all the sub-regions to obtain a merged image.
10. A medical imaging device, characterized in that the device comprises:
a processor;
a memory for storing processor-executable instructions;
the processor is configured to:
acquiring original PET projection data of an object to be examined, which is obtained through PET scanning;
identifying a respiratory motion region of the detected object, and determining first PET projection data corresponding to the respiratory motion region in the original PET projection data;
acquiring a respiratory motion amplitude curve in a designated direction corresponding to the respiratory motion area, wherein the designated direction is the long axis direction of the detected object;
dividing the respiratory motion area into a plurality of sub-areas according to the respiratory motion amplitude curve;
acquiring sub-region breathing motion curves corresponding to the sub-regions, and dividing part of first PET projection data corresponding to the sub-regions into a plurality of data frames according to the breathing motion curves of the sub-regions, wherein the division number of the data frames of each sub-region is related to the breathing motion amplitude of the sub-region;
and respectively carrying out image reconstruction on the first PET projection data corresponding to each sub-region according to the data frame of each sub-region to obtain a first local reconstruction image corresponding to each sub-region.
CN201710684844.6A 2017-08-11 2017-08-11 Data processing method and medical imaging equipment Active CN107468267B (en)


Publications (2)

Publication Number Publication Date
CN107468267A CN107468267A (en) 2017-12-15
CN107468267B true CN107468267B (en) 2020-12-04






Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 201807 Shanghai city Jiading District Industrial Zone Jiading Road No. 2258

Applicant after: Shanghai Lianying Medical Technology Co., Ltd

Address before: 201807 Shanghai city Jiading District Industrial Zone Jiading Road No. 2258

Applicant before: SHANGHAI UNITED IMAGING HEALTHCARE Co.,Ltd.

GR01 Patent grant