CN110301928B - Method, device and system for reconstructing PET (positron emission tomography) image

Info

Publication number: CN110301928B
Application number: CN201910600340.0A
Authority: CN (China)
Prior art keywords: PET, identification information, image, target, PET image
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN110301928A
Inventors: 何青雷, 刘勺连
Original and current assignee: Shenyang Zhihe Medical Technology Co., Ltd.
Application filed by Shenyang Zhihe Medical Technology Co., Ltd.; publication of application CN110301928A, grant published as CN110301928B

Classifications

    • A61B 6/037: Emission tomography (A: Human necessities; A61: Medical or veterinary science, hygiene; A61B: Diagnosis, surgery, identification; A61B 6/00: Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment; A61B 6/02: Devices for diagnosis sequentially in different planes, stereoscopic radiation diagnosis; A61B 6/03: Computerised tomographs)
    • A61B 6/5205: Devices using data or image processing specially adapted for radiation diagnosis, involving processing of raw data to produce diagnostic data
    • A61B 6/5217: Devices using data or image processing specially adapted for radiation diagnosis, involving processing of medical diagnostic data to extract a diagnostic or physiological parameter
    • Y02P 90/30: Computing systems specially adapted for manufacturing (Y02P: Climate change mitigation technologies in the production or processing of goods)

Abstract

The invention relates to a method, a device and a system for reconstructing PET images. The method is applied to a PET system and comprises the following steps: determining part identification information of at least one target detection part of a detected object; acquiring PET image processing data corresponding to each piece of part identification information, the PET image processing data comprising PET image reconstruction data and PET scan parameters; and performing PET image reconstruction on each target detection part according to its PET image processing data to obtain a PET image of the detected object. Because each target detection part is reconstructed with PET image processing data set for that specific part, the resulting PET image has good image quality.

Description

Method, device and system for reconstructing PET (positron emission tomography) image
Technical Field
The present invention relates to the field of medical image processing technologies, and in particular, to a method, an apparatus, and a system for reconstructing a PET image.
Background
Positron emission tomography (PET) is a nuclear medicine imaging technology with high application value in clinical medicine; it can be applied to tumor detection, the diagnosis of cerebrovascular diseases, and the like.
When a PET device reconstructs a PET image of a detected object (such as a patient), it first performs a PET scan of the object to obtain PET scan data, and then reconstructs the image from the PET scan data together with PET image reconstruction data. Specifically, the PET device reconstructs all detection parts of the object's whole body with the same PET image reconstruction data, and that reconstruction data is set by a user (e.g., a doctor).
If the user does not understand the principles of PET image reconstruction, the reconstruction data may be set incorrectly, resulting in poor PET image quality. Moreover, because different detection parts have different tissue structures, a PET image reconstructed with the same reconstruction data for every part tends to show good image quality for some parts and poor quality for others, degrading the overall PET image.
Disclosure of Invention
In view of the above, the present invention provides a method, apparatus and system for reconstructing PET images to solve the above-mentioned technical problems.
In order to achieve the above purpose, the technical solution provided by the invention is as follows:
according to a first aspect of an embodiment of the present invention, there is provided a method of reconstructing a PET image, for use in a PET system, the method comprising:
determining part identification information of at least one target detection part of the detected object;
acquiring PET image processing data corresponding to each part identification information, wherein the PET image processing data comprises: PET image reconstruction data and PET scan parameters;
and carrying out PET image reconstruction on the target detection parts according to the PET image processing data of each target detection part to obtain the PET image of the detected object.
In an embodiment, the performing PET image reconstruction on the target detection part according to the PET image processing data of each target detection part to obtain the PET image of the detected object includes:
for each part identification information, carrying out PET scanning on the target detection part by utilizing the PET scanning parameters corresponding to the part identification information to obtain PET scanning data of the target detection part;
and carrying out PET image reconstruction on the target detection part according to the PET scanning data and the corresponding PET image reconstruction data to obtain the PET image.
In one embodiment, the determining the location identification information of the at least one target detection location of the detected object includes:
dividing the detected object into parts according to a preset division rule to obtain part identification information of at least two detection parts;
and determining the position identification information of the at least one target detection position from the position identification information of the at least two detection positions.
In one embodiment, the PET system comprises: a part recognition device; the method for dividing the detected object into parts according to a preset dividing rule to obtain part identification information of at least two detected parts comprises the following steps:
performing part recognition on the detected object by using the part recognition device to obtain a part recognition result;
and obtaining the part identification information of the at least two detection parts according to the part recognition result.
In an embodiment, the dividing the part of the detected object according to a preset dividing rule to obtain part identification information of at least two detected parts includes:
acquiring CT image data of the detected object and target object attribute information, wherein the target object attribute information comprises at least one of the following items: height information, race information, region information;
searching a corresponding relation between preset object attribute information and image region division ratios to obtain target image region division ratios corresponding to the target object attribute information, wherein the target image region division ratios are ratios among part image region sizes of all detection parts in CT images of the detected objects;
acquiring, from the CT image data, ordinate information of a top end point and a bottom end point of the detected object along the arrangement direction of a plurality of scanning beds, respectively;
determining the ordinate range of each part image area according to the target image area dividing proportion and the ordinate information;
and acquiring the position identification information corresponding to each ordinate range.
In an embodiment, the obtaining the location identification information corresponding to each of the ordinate ranges includes:
determining ordering information of all detection parts in all detection parts according to the arrangement information of the ordinate ranges of all detection parts;
and determining the part identification information corresponding to the sequencing information of each detection part according to the corresponding relation between the preset sequencing information of the detection part and the part identification information.
In an embodiment, the determining the location identification information of the at least one target detection location from the location identification information of the at least two detection locations includes:
displaying a mark frame on the edge of the total CT image area of the at least two detection parts in the output CT image according to each ordinate range;
receiving a sizing operation for the marker frame;
and determining the part identification information of the at least one target detection part from the part identification information of the at least two detection parts by using the size of the marking frame after the size determination operation is finished.
In an embodiment, displaying a marker frame on an edge of a total CT image area of the at least two detection sites in the output CT image according to each of the ordinate ranges includes:
displaying a total mark frame on the edge of the total CT image area in the output CT image according to each ordinate range;
the determining, using the size of the marking frame after the size determining operation is finished, the location identification information of the at least one target detection location from the location identification information of the at least two detection locations, includes:
determining a CT image area defined by the total mark frame after the size determining operation is finished;
and acquiring the part identification information of each target detection part of which a part image area is positioned in the CT image area.
In an embodiment, displaying a marker frame on an edge of a total CT image area of the at least two detection sites in the output CT image according to each of the ordinate ranges includes:
displaying a mark dividing frame on the edge of the part image area of each detection part in the output CT image according to each ordinate range;
the determining, using the size of the marking frame after the size determining operation is finished, the location identification information of the at least one target detection location from the location identification information of the at least two detection locations, includes:
for each of the sub-mark frames, the part identification information of the detection part defined by the sub-mark frame before the sizing operation is performed is determined as the part identification information of the target detection part defined by the sub-mark frame after the sizing operation is completed.
In an embodiment, before the receiving the sizing operation for the marker box, the method further comprises:
determining target scanning bed information corresponding to each ordinate range according to the corresponding relation between the ordinate information of the pixel points in the CT image and the bed information of the scanning bed;
and displaying the schematic structure of the target scanning bed with the target scanning bed information and a target part image area in the same height in the output CT image, wherein the target part image area is the part image area to which the ordinate range belongs.
In an embodiment, after the determining the target scan bed information corresponding to the ordinate range, the method further comprises:
establishing a corresponding relation between the position identification information corresponding to each ordinate range and the target scanning bed information to obtain a bed information set;
the step of performing PET scanning on the target detection part by using the PET scanning parameters corresponding to the part identification information for each part identification information to obtain the PET scanning data of the target detection part includes:
before the PET system performs the PET scanning on a target scanning bed, searching the bed information set according to the target scanning bed information of the target scanning bed to obtain the position identification information corresponding to the target scanning bed information;
determining the PET scanning parameters corresponding to the position identification information;
and carrying out PET scanning on the target detection part positioned on the target scanning bed by utilizing the determined PET scanning parameters.
In an embodiment, the performing PET image reconstruction on the target detection part according to the PET image processing data of each target detection part to obtain the PET image of the detected object includes:
directly utilizing the PET image processing data of each target detection part to reconstruct the image of the target detection part to obtain the PET image;
or outputting each piece of PET image processing data;
detecting a data management operation for each of the PET image processing data;
in response to detecting the data management operation, determining each of the PET image processing data after data management as target PET image processing data;
and carrying out PET image reconstruction on the target detection parts according to the target PET image processing data of each target detection part to obtain the PET image.
According to a second aspect of embodiments of the present invention, there is provided an apparatus for reconstructing a PET image, for use in a PET system, the apparatus comprising:
a first determination module configured to determine location identification information of at least one target detection location of the detected object;
an acquisition module configured to acquire PET image processing data corresponding to each of the part identification information, the PET image processing data including: PET image reconstruction data and PET scan parameters;
and the reconstruction module is configured to reconstruct a PET image of each target detection part according to the PET image processing data of each target detection part, so as to obtain the PET image of the detected object.
In an embodiment, the reconstruction module includes:
the scanning sub-module is configured to perform PET scanning on the target detection part by utilizing the PET scanning parameters corresponding to the part identification information for each part identification information to obtain the PET scanning data of the target detection part;
and the first reconstruction sub-module is configured to reconstruct a PET image of the target detection part according to the PET scanning data and the corresponding PET image reconstruction data to obtain the PET image.
In an embodiment, the first determining module includes:
the obtaining submodule is configured to divide the parts of the detected object according to a preset division rule to obtain part identification information of at least two detection parts;
a determination sub-module configured to determine the location identification information of the at least one target detection location from the location identification information of the at least two detection locations.
In an embodiment, the obtaining sub-module includes:
an identification unit configured to, in a case where the PET system includes the part recognition device, perform part recognition on the detected object by using the part recognition device to obtain a part recognition result;
and a first obtaining unit configured to obtain the part identification information of the at least two detection parts according to the part recognition result.
In an embodiment, the obtaining sub-module includes:
a first acquisition unit configured to acquire CT image data of the detected object and target object attribute information including at least one of: height information, race information, region information;
a second obtaining unit configured to find a corresponding relation between preset object attribute information and an image region division ratio, to obtain a target image region division ratio corresponding to the target object attribute information, where the target image region division ratio is a ratio between part image region sizes of each detection part in the CT image of the detected object;
a second acquisition unit configured to acquire, from the CT image data, ordinate information of a top end point and a bottom end point of the detected object in an arrangement direction of a plurality of scanning beds, respectively;
a first determination unit configured to determine an ordinate range of each of the part image areas based on the target image area division ratio and the ordinate information;
And a third acquisition unit configured to acquire the location identification information corresponding to each of the ordinate ranges.
In an embodiment, the third obtaining unit includes:
a first determining subunit configured to determine ordering information of each of the detection sites in all of the detection sites according to arrangement information of the ordinate ranges of all of the detection sites;
and the second determination subunit is configured to determine the part identification information corresponding to the sequencing information of each detection part according to the corresponding relation between the preset sequencing information of the detection part and the part identification information.
In an embodiment, the determining submodule includes:
a first display unit configured to display a marker frame on an edge of a total CT image area of the at least two detection sites in the output CT image according to each of the ordinate ranges;
a receiving unit configured to receive a size determining operation for the mark frame;
and a second determination unit configured to determine the part identification information of the at least one target detection part from part identification information of the at least two detection parts using a size of the marking frame after the size determination operation is ended.
In an embodiment, the first display unit includes:
a first display subunit configured to display a total mark frame on an edge of the total CT image area in the output CT image according to each of the ordinate ranges;
the second determination unit includes:
a third determination subunit configured to determine a CT image region defined by the total mark frame after the size determination operation is ended;
an acquisition subunit configured to acquire the part identification information of each of the target detection parts whose part image area is located within the CT image area.
In an embodiment, the first display unit includes:
a second display subunit configured to display a mark dividing frame on an edge of the region image area of each detection region in the output CT image according to each of the ordinate ranges;
the second determination subunit is configured to determine, for each of the sub-mark frames, the part identification information of the detection part defined by the sub-mark frame before the size determination operation is performed, as the part identification information of the target detection part defined by the sub-mark frame after the size determination operation is ended.
In an embodiment, the device further comprises:
the second determining module is configured to determine, for each of the ordinate ranges, target scanning bed information corresponding to the ordinate range according to a correspondence between ordinate information of a pixel point in the CT image and bed information of a scanning bed, before the receiving of the size determining operation for the marker frame;
and a display module configured to display, in the output CT image, a schematic structure of a target scanning couch having the target scanning couch information at the same height as a target region image region, the target region image region being the region image region to which the ordinate range belongs.
In an embodiment, the device further comprises:
the establishing module is configured to establish a corresponding relation between the position identification information corresponding to each ordinate range and the target scanning bed information after the target scanning bed information corresponding to the ordinate range is determined, so as to obtain a bed information set;
the scanning sub-module comprises:
the acquisition unit is configured to search the bed information set according to the target scanning bed information of the target scanning bed before the PET system performs the PET scanning on the target scanning bed, so as to acquire the position identification information corresponding to the target scanning bed information;
A determining unit configured to determine the PET scan parameters corresponding to the part identification information;
and a scanning unit configured to perform the PET scan on the target detection region located on the target scanning bed using the determined PET scan parameters.
In an embodiment, the reconstruction module is configured to reconstruct the image of the target detection site directly using the PET image processing data of each target detection site, and obtain the PET image;
alternatively, the reconstruction module includes:
an output sub-module configured to output each of the PET image processing data;
a detection sub-module configured to detect a data management operation for each of the PET image processing data;
a determination sub-module configured to determine, in response to detecting the data management operation, each of the PET image processing data after data management as target PET image processing data;
and the second reconstruction sub-module is configured to reconstruct a PET image of each target detection part according to the target PET image processing data of each target detection part, so as to obtain the PET image.
According to a third aspect of embodiments of the present invention, there is provided a PET system comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
determining part identification information of at least one target detection part of the detected object;
acquiring PET image processing data corresponding to each part identification information, wherein the PET image processing data comprises: PET image reconstruction data and PET scan parameters;
and carrying out PET image reconstruction on the target detection parts according to the PET image processing data of each target detection part to obtain the PET image of the detected object.
According to a fourth aspect of embodiments of the present invention, there is provided a machine-readable storage medium having stored thereon computer instructions which, when executed, perform the following:
determining part identification information of at least one target detection part of the detected object;
acquiring PET image processing data corresponding to each part identification information, wherein the PET image processing data comprises: PET image reconstruction data and PET scan parameters;
and carrying out PET image reconstruction on the target detection parts according to the PET image processing data of each target detection part to obtain the PET image of the detected object.
According to the technical solution above, the part identification information of at least one target detection part of the detected object is determined, the PET image processing data corresponding to each piece of part identification information is acquired, and PET image reconstruction is performed on each target detection part according to its PET image processing data to obtain the PET image of the detected object. Each target detection part is thus reconstructed with PET image processing data set specifically for it, which yields a PET image with good image quality.
Because the PET image processing data used in reconstructing the PET image is determined automatically by the PET system, manual setting is unnecessary, manual setting errors are avoided, and the quality of the PET image is ensured.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a flowchart illustrating a method of reconstructing a PET image according to an exemplary embodiment;
FIG. 2 is a flowchart illustrating another method of reconstructing a PET image according to an example embodiment;
FIG. 3 is a flowchart illustrating another method of reconstructing a PET image according to an example embodiment;
FIG. 4 is a flowchart illustrating another method of reconstructing a PET image according to an example embodiment;
FIG. 5 is a flowchart illustrating another method of reconstructing a PET image according to an example embodiment;
FIG. 6 is a CT image displayed by a PET system according to an exemplary embodiment;
FIG. 7 is a flowchart illustrating another method of reconstructing a PET image according to an example embodiment;
FIG. 8 is a block diagram illustrating an apparatus for reconstructing a PET image in accordance with an exemplary embodiment;
fig. 9 is a block diagram illustrating a system for reconstructing PET images according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the invention. Rather, they are merely examples of apparatus and methods consistent with aspects of the invention as detailed in the accompanying claims.
FIG. 1 is a flowchart illustrating a method of reconstructing a PET image, the method being applied to a PET image reconstruction process of a PET system, the method comprising:
in step S101, part identification information of at least one target detection part of the detected object is determined.
The PET system has the function of reconstructing a PET image of a detected object. The PET system comprises a plurality of scanning beds arranged in sequence, and the detected object is positioned on them; for example, a patient lies across the scanning beds so that the PET device can perform a PET scan of the patient, obtain the patient's PET scan data, and use that data for PET image reconstruction.
When the PET system executes the method of reconstructing a PET image provided by the embodiments of the disclosure, it first divides the detected object into detection parts and obtains at least one target detection part to be examined. When the detected object is a human body, its detection parts may include the laryngeal part, cardiac part, visceral part, intestinal part, bladder part, and so on, and the target detection parts to be examined may be one or more of these. The part identification information of a target detection part can take various forms, such as a part name or a part number.
Referring to fig. 2, which is a flowchart illustrating another method of reconstructing a PET image according to an exemplary embodiment, the present step S101 may be implemented by: in step S1011, the detected object is subjected to part division according to a preset division rule, so as to obtain part identification information of at least two detected parts; in step S1012, the part identification information of at least one target detection part is determined from the part identification information of at least two detection parts.
Based on the setting of step S1011 and step S1012, the PET system has the function of dividing the part of the detected object to obtain the part identification information of at least two detection parts and determining the part identification information of at least one target detection part from the part identification information, and the functions of the PET system are enriched.
There are various implementations of step S1011. For example, the PET system may include a part recognition device capable of recognizing each part of the detected object, in which case step S1011 may be implemented as follows: first, the part recognition device performs part recognition on the detected object to obtain a part recognition result; second, the part identification information of at least two detection parts is obtained from the part recognition result. Illustratively, the part recognition device performs part recognition on the patient and, after recognizing the laryngeal part, obtains the part name of the laryngeal part; after recognizing the cardiac part, it obtains the part name of the cardiac part.
As another example, referring to fig. 3, which is a flowchart illustrating another method of reconstructing a PET image according to an exemplary embodiment, step S1011 may be implemented by the following steps S1011-1 to S1011-5:
in step S1011-1, CT image data of the detected object and target object attribute information are acquired, the target object attribute information including at least one of the following: height information, race information, and region information.
When the PET system reconstructs a PET image of a detected object, CT image data of the detected object and target object attribute information can be acquired, wherein the CT image data is obtained after CT scanning is carried out on the detected object, and the target object attribute information can comprise at least one of the following: height information, race information (e.g., yellow race, black race, white race), regional information (e.g., geographic coordinates, country name, native place, etc.), and the like.
The PET system can comprise PET equipment and CT equipment, wherein the CT equipment performs CT scanning on the detected object to obtain CT image data of the detected object, and then the obtained CT image data is transmitted to the PET equipment; the PET equipment performs PET image reconstruction by using the CT image data transmitted internally to obtain PET image data of the detected object. The PET system has the function of acquiring CT image data and PET image data.
Alternatively, the PET system includes a PET device but does not include a CT device, and in this case, the PET system may receive CT image data of the detected object from an external device, and perform PET image reconstruction using the externally received CT image data to obtain PET image data of the detected object. The PET system has the function of acquiring PET image data.
In step S1011-2, a correspondence between preset object attribute information and image region division ratios is searched for, and a target image region division ratio corresponding to the target object attribute information is obtained, where the target image region division ratio is a ratio between the size of the part image region of each detection part in the CT image of the detected object.
For the same kind of object attribute information, when the contents of the object attribute information are different, the ratio between the part image area sizes of the respective detection parts in the CT image is different. In general, the image width of a CT image is constant, and in this case, when the contents of object attribute information are different, the ratio between the part image area heights of the respective detection parts in the CT image is different.
For example, for a patient 1.5 m to 1.6 m tall, the ratio between the heights of the part image areas of the laryngeal, cardiac, visceral, intestinal, and bladder parts in the CT image is a first ratio, while for a patient 1.6 m to 1.7 m tall it is a second ratio, and the first ratio differs from the second.
The PET system presets the corresponding relation between the object attribute information and the image region division ratio. For example, correspondence between height information and image area division ratio, correspondence between race information and image area division ratio, correspondence between region information and image area division ratio, and the like.
After acquiring the target object attribute information of the detected object, the PET system searches a preset corresponding relation to obtain a target image region division ratio corresponding to the target object attribute information. For example, the attribute information of the target object is height information, and the PET system determines the target image region division ratio corresponding to the height information of the detected object by searching the corresponding relation between the preset height information and the image region division ratio.
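For illustration only, the lookup in step S1011-2 might be implemented as a simple table search; a minimal Python sketch follows, with made-up height brackets and division ratios, since the patent specifies no concrete values:

    # Hypothetical correspondence between height brackets (cm) and image
    # region division ratios; all values here are illustrative, not from
    # the patent.
    HEIGHT_TO_DIVISION_RATIO = {
        (150, 160): (1, 2, 3, 2, 1),  # part image area heights, head to feet
        (160, 170): (1, 2, 4, 2, 1),
    }

    def find_division_ratio(height_cm: float) -> tuple:
        """Return the target image region division ratio for a patient's height."""
        for (low, high), ratio in HEIGHT_TO_DIVISION_RATIO.items():
            if low <= height_cm < high:
                return ratio
        raise KeyError(f"no division ratio configured for {height_cm} cm")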
In step S1011-3, ordinate information of the top end point and the bottom end point of the object to be detected in the arrangement direction of the plurality of scanning beds is acquired from the CT image data, respectively.
The CT image data comprises the color information (such as RGB values) and position information of each pixel point in the CT image, the position information consisting of abscissa information and ordinate information. After acquiring the CT image data of the detected object, the PET system reads from it the ordinate information of the top end point and of the bottom end point of the detected object along the arrangement direction of the plurality of scanning beds.
For example, a patient is lying on a plurality of scanning beds, and ordinate information of the top end point of the head of the patient and ordinate information of the bottom end point of the foot are acquired.
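A minimal sketch of step S1011-3 follows, assuming a segmentation mask marking the detected object's pixels is available as input (the patent does not specify how the object's pixels are distinguished from the background):

    import numpy as np

    def object_extent(object_mask: np.ndarray) -> tuple:
        """Ordinates of the top and bottom end points of the detected object.

        object_mask: 2-D boolean array over the CT image, True where a pixel
        belongs to the detected object (an assumed input; the patent only
        says the CT image data carries per-pixel color and position
        information). Rows are ordered along the arrangement direction of
        the scanning beds.
        """
        rows = np.flatnonzero(object_mask.any(axis=1))
        return int(rows[0]), int(rows[-1])  # (a, b): top and bottom ordinates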
In step S1011-4, the ordinate range of the region image region of each detection region in the CT image is determined based on the target image region division ratio and the ordinate information.
After acquiring the ordinate information of the top end point and the bottom end point of the detected object in the CT image, the PET system can determine the length of the detected object in the CT image along the arrangement direction of the plurality of scanning beds, for example, the height of the patient.
And the PET system determines the ordinate range of the part image area of each detection part in the CT image according to the acquired target image area dividing ratio and the ordinate information of the top end point and the bottom end point.
For example, suppose that in the CT image the ordinate of the top end point of the detected object is a and the ordinate of the bottom end point is b (a and b both positive, b > a), and the target image region division ratio between the heights of the part image areas of part 1, part 2, and part 3 is 1:2:3. Then the ordinate range of the part image area of part 1 in the CT image is [a, a+(b-a)/6], that of part 2 is [a+(b-a)/6, a+3(b-a)/6], and that of part 3 is [a+3(b-a)/6, b].
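The division itself is just a proportional split of the interval [a, b]; a small sketch that reproduces the example above:

    def ordinate_ranges(a: float, b: float, ratio: tuple) -> list:
        """Split [a, b] into consecutive ordinate ranges proportional to `ratio`."""
        total = sum(ratio)
        ranges, start = [], float(a)
        for r in ratio:
            end = start + (b - a) * r / total
            ranges.append((start, end))
            start = end
        return ranges

    # Reproduces the example above with a = 0, b = 6 and ratio 1:2:3:
    print(ordinate_ranges(0, 6, (1, 2, 3)))  # [(0.0, 1.0), (1.0, 3.0), (3.0, 6.0)]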
In step S1011-5, the location identification information corresponding to each ordinate range is acquired.
The ordinate range obtained in the above step is an ordinate range of a part image area of a detection part, and a correspondence exists between the ordinate range and the detection part, and further a correspondence exists between the ordinate range and part identification information of the detection part.
After acquiring the ordinate ranges of the image areas of all parts in the CT image, the PET system acquires the part identification information corresponding to each ordinate range.
The operation of obtaining the location identification information corresponding to each ordinate range may be implemented in the following manner: firstly, determining ordering information of all detection parts in all detection parts according to arrangement information of ordinate ranges of all detection parts; and secondly, determining the position identification information corresponding to the sequencing information of each detection position according to the corresponding relation between the preset sequencing information of the detection position and the position identification information.
For a detected object, the arrangement of the detection parts is fixed, so the rank of each detection part among all detection parts is determined. The PET system presets a correspondence between detection part rank and part identification information. After determining each detection part's rank from the arrangement information of the ordinate ranges of all detection parts, the PET system looks up this correspondence with each part's rank to obtain the corresponding part identification information, thereby obtaining the part identification information of at least two detection parts.
For example, the PET system obtains the ordinate ranges of the laryngeal, cardiac, and visceral parts, which are arranged in that order along the direction from the top end point to the bottom end point of the detected object. From this arrangement it determines that the rank of the laryngeal part is 1, the cardiac part 2, and the visceral part 3, and then, using the preset correspondence between rank and part identification information, determines that rank 1 corresponds to the larynx name, rank 2 to the heart name, and rank 3 to the viscera name.
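A minimal sketch of this rank-based lookup, with a hypothetical rank-to-name table (the patent leaves the concrete correspondence to system configuration):

    # Hypothetical rank-to-name table; the patent leaves the concrete
    # correspondence to system configuration.
    RANK_TO_PART_NAME = {1: "larynx", 2: "heart", 3: "viscera"}

    def identify_parts(ranges: list) -> dict:
        """Map each ordinate range to part identification information by rank."""
        ordered = sorted(ranges, key=lambda rng: rng[0])  # top end point first
        return {rng: RANK_TO_PART_NAME[rank]
                for rank, rng in enumerate(ordered, start=1)}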
Based on the arrangement of the two steps, the PET system has the function of determining the part identification information of each detection part according to the ordinate ranges of all detection parts.
Based on the settings of steps S1011-1 through S1011-5, the PET system has the function of acquiring the part identification information of each detection part from the CT image data and target object attribute information of the detected object, which enriches the functions of the PET system.
On the basis of the method shown in fig. 2, referring to fig. 4, which is a flowchart illustrating another method of reconstructing a PET image according to an exemplary embodiment, step S1012 may be implemented by the following steps S1012-1 to S1012-3:
In step S1012-1, a marker frame is displayed on the edge of the total CT image area of at least two detection sites in the output CT image according to each ordinate range.
Specifically, the total mark frame may be displayed on the edge of the total CT image area in the output CT image according to the ordinate range of the part image area of each detection part in the CT image.
Alternatively, a mark dividing frame may be displayed on the edge of the region image area of each detected region in the output CT image according to the ordinate range of the region image area of each detected region in the CT image.
In step S1012-2, a sizing operation for the marker box is received.
After the PET system displays the marker frame on the output CT image, the operator can adjust the size of the marker frame or not as needed.
For example, a confirmation option is provided on the CT image display interface. If the operator does not adjust the size of the marker frame, they can click the confirmation option directly; the PET system then receives the size confirmation operation and takes the marker frame at its originally displayed size. If the operator does adjust the size, for instance shrinking the marker frame by dragging it with the mouse and clicking the confirmation option afterwards, the PET system receives the size confirmation operation and takes the marker frame at its reduced size.
In step S1012-3, the size of the marker frame after the size determination operation is completed is used to determine the location identification information of at least one target detection location from the location identification information of at least two detection locations.
When the above-mentioned total mark frame is displayed on the CT image, this step can be implemented by: firstly, determining a CT image area defined by a total mark frame after the size determining operation is finished; next, part identification information of each target detection part of which the part image area is located in the CT image area is acquired.
For example, before the size confirmation operation is performed, the CT image area defined by the total mark frame includes the part image areas of the three detection parts, and after the size of the total mark frame is reduced, the CT image area defined by the total mark frame includes the part image areas of the two detection parts, the PET system determines the two detection parts as the two target detection parts, and acquires the part identification information of each target detection part.
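One way to realize this selection is to treat the resized total marker frame as an ordinate interval and keep only the detection parts whose part image areas it fully contains; the containment rule in the sketch below is an assumption, since the patent only says the part image area is located in the framed CT image area:

    def parts_in_marker(marker: tuple, parts: dict) -> list:
        """Part identification info of every detection part whose image area
        lies entirely inside the resized total marker frame."""
        lo, hi = marker
        return [part_id for (top, bottom), part_id in parts.items()
                if lo <= top and bottom <= hi]

    # E.g. shrinking the frame from (0, 300) to (0, 200) drops the third part:
    parts = {(0, 100): "larynx", (100, 200): "heart", (200, 300): "viscera"}
    print(parts_in_marker((0, 200), parts))  # ['larynx', 'heart']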
When at least two of the above-mentioned sub-marker frames are displayed on the CT image, this step can be implemented by: for each sub-frame, the part identification information of the detection part defined by the sub-frame before the sizing operation is performed is determined as the part identification information of the target detection part defined by the sub-frame after the sizing operation is completed.
For example, when the size confirmation operation adjusts the size of a sub-marker frame, the part image area of the resulting target detection part differs in size from the part image area of the original detection part, but the number of target detection parts equals the number of detection parts; that is, each detection part yields one target detection part.
The operator can resize a sub-marker frame and can also delete one; after a deletion operation is completed, the PET system determines the detection parts defined by the remaining marker frames as target detection parts and obtains their part identification information.
In an alternative embodiment, the PET system includes a CT device and a PET device, and the PET system has a CT scanning function and a PET scanning function, and performs CT scanning and PET scanning on an object to be detected on a scanning bed.
Referring to fig. 5, which is a flowchart illustrating another method of reconstructing a PET image according to an exemplary embodiment, the PET system may further perform the following operations before receiving a sizing operation for a marker frame, i.e., before performing step S1012-2: in step 104, for each ordinate range, determining target scanning bed information corresponding to the ordinate range according to the corresponding relation between the ordinate information of the pixel points in the CT image and the bed information of the scanning bed; in step 105, in the output CT image, the schematic structure of the target scanning bed to which the target scanning bed information belongs is displayed at the same height as the target region image region, and the target region image region is the region image region to which the ordinate range belongs.
The detected object lies on the scanning beds during the CT scan, so the ordinate information of each pixel point in the resulting CT image corresponds to the bed information of one scanning bed; the bed information can be a bed number or the like.
After determining the ordinate ranges of the part image areas of all detection parts in the CT image, the PET system determines the target scanning bed information corresponding to each ordinate range, i.e., the scanning beds occupied by each detection part, and then displays in the output CT image the schematic structure of each target scanning bed at the same height as the part image area to which the ordinate range belongs. The operator can then refer to the height relationship between a part image area and the bed schematics when sizing the marker frame, which makes the size confirmation operation more accurate.
For example, if a detection part occupies all of the first scanning bed and part of the second, the operator can, referring to the height relationship between the part image area and the bed schematics in the CT image, enlarge the sub-marker frame at the edge of that part image area so that the combination of the first and second scanning beds is displayed at the same height as the adjusted frame; the PET system then performs the PET scan of the first and second scanning beds with the same PET scan parameters.
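A sketch of the ordinate-to-bed mapping of step 104, assuming each scanning bed spans a fixed number of ordinate units in the CT image (the bed span below is illustrative):

    # Sketch of step 104, assuming each scanning bed spans a fixed number
    # of ordinate units in the CT image; the bed span is illustrative.
    BED_SPAN = 100.0  # hypothetical bed height in CT-image ordinate units

    def beds_for_range(top: float, bottom: float) -> list:
        """Numbers of the scanning beds that a part image area overlaps."""
        first = int(top // BED_SPAN) + 1
        last = int((bottom - 1e-9) // BED_SPAN) + 1
        return list(range(first, last + 1))

    print(beds_for_range(80.0, 150.0))  # part spans beds 1 and 2 -> [1, 2]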
For example, referring to fig. 6, which shows a CT image displayed by a PET system according to an exemplary embodiment, the patient has five detection parts, namely the laryngeal part, lung/heart, viscera, intestine, and bladder/pelvis, and the PET system includes five scanning beds numbered 1 through 5.
In the CT image, a mark dividing frame a is displayed at the edge of a part image area of each detection part, the part name of the detection part is displayed at one side of the part image area of each detection part, and the part image area of any detection part in the CT image is displayed at the same height as a schematic structure b of a scanning bed occupied by the detection part.
To prevent gaps between the five scanning beds, adjacent beds overlap; the bed at the end is kept whole as bed No. 1, and the overlapping portions are removed from the beds other than bed No. 1. In fig. 6, each schematic bed structure b lies between two adjacent dotted lines.
In step S102, PET image processing data corresponding to each of the part identification information is acquired, the PET image processing data including: PET image reconstruction data and PET scan parameters.
After the PET system acquires the position identification information of at least one target detection position, acquiring PET image processing data corresponding to each position identification information, wherein the PET image processing data is used for reconstructing a PET image of the target detection position by the PET system.
The PET image processing data includes: PET image reconstruction data and PET scan parameters.
There are various PET scan parameters, such as the PET scan duration and the number of PET scans. Because the drug distribution differs across detection parts, the PET scan parameters set for different parts may differ: where the drug distribution at a detection part is sparse, its PET scan duration can be longer, and where the distribution is dense, the duration can be shorter. For example, the lungs have less drug distribution and the liver more, so the PET scan duration of the lungs is set to 120 s and that of the liver to 90 s.
In addition, because the probability of occurrence of lesions at different detection positions is different, the PET scanning time length of the detection position can be shorter when the probability of occurrence of lesions at the detection position is smaller, and the PET scanning time length of the detection position can be longer when the probability of occurrence of lesions at the detection position is larger.
The PET image reconstruction data may include: PET image reconstruction method and PET image reconstruction parameters used in PET image reconstruction.
There are various PET image reconstruction methods, for example, a high resolution reconstruction method (e.g., PSF (Point Spread Function, point spread function) reconstruction method, etc.), a low noise reconstruction method (e.g., normalized constraint reconstruction method), etc. A high resolution reconstruction method can be used for head images, and a low noise reconstruction method can be used for chest and other parts.
There are various parameters for PET image reconstruction, such as the pixel size used in reconstructing the PET image, etc.
The pixel sizes used for reconstructing the PET images of different detection parts may differ. For example, the head image requires high resolution, so the pixel size used for reconstructing the head PET image can be set smaller, while larger pixels are used for other parts: the pixel size for a head PET image is about 1 mm x 1 mm, and that for PET images of other parts is between 2 mm x 2 mm and 4 mm x 4 mm.
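Collecting the examples above, the correspondence between part identification information and PET image processing data might be stored as a simple table. In the sketch below, the 120 s/90 s scan durations and the 1 mm and 2-4 mm pixel sizes echo the text; the method names and all remaining values are placeholders, not the patent's terminology:

    # Hypothetical per-part PET image processing data. The 120 s / 90 s scan
    # durations and the 1 mm / 3 mm pixel sizes come from the examples in
    # the text; the reconstruction method names and the head's scan duration
    # are placeholders.
    PET_IMAGE_PROCESSING_DATA = {
        "head":  {"scan_duration_s": 100, "recon_method": "PSF",
                  "pixel_size_mm": (1.0, 1.0)},
        "lungs": {"scan_duration_s": 120, "recon_method": "low_noise",
                  "pixel_size_mm": (3.0, 3.0)},
        "liver": {"scan_duration_s": 90,  "recon_method": "low_noise",
                  "pixel_size_mm": (3.0, 3.0)},
    }

    def processing_data_for(part_id: str) -> dict:
        """Look up PET image processing data by part identification information."""
        return PET_IMAGE_PROCESSING_DATA[part_id]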
In step S103, PET image reconstruction is performed on each target detection site based on the PET image processing data of the target detection site, and a PET image of the detected object is obtained.
After the PET system acquires the PET image processing data of each target detection part, it reconstructs each part's PET image using that part's processing data. Once the PET images of all target detection parts have been reconstructed, the PET image of the detected object is obtained.
Referring to fig. 7, which is a flowchart illustrating another method of reconstructing a PET image according to an exemplary embodiment, step S103 may be implemented by: in step S1031, for each part identification information, PET scanning is performed on the target detection part having the part identification information by using the PET scanning parameter corresponding to the part identification information, so as to obtain PET scanning data of the target detection part; in step S1032, the PET image reconstruction is performed on the target detection part according to the obtained PET scan data and the PET image reconstruction data corresponding to the part identification information, so as to obtain a PET sub-image of the target detection part; in step S1033, a PET image of the object to be detected is obtained from the PET sub-images of the respective target detection sites.
For step S1031, after the execution of step 104 is finished, i.e., after determining the target scan bed information corresponding to each ordinate range, the PET system may further execute the following operations on the basis of the method shown in fig. 4: and establishing a corresponding relation between the position identification information corresponding to each ordinate range and the target scanning bed information to obtain a bed information set.
Specifically, a correspondence between the target scan bed information and the position identification information may be established, or a correspondence between the ordinate range and the position identification information, and a correspondence between the ordinate range and the target scan bed information may be established.
On this basis, step S1031 may be implemented as follows. First, before the PET system performs a PET scan on a target scanning bed, it searches the pre-established bed information set using the target scanning bed information of that bed to obtain the corresponding part identification information. Second, it determines the PET scan parameters corresponding to the obtained part identification information. Third, it performs the PET scan on the target detection part located on the target scanning bed using the determined PET scan parameters, obtaining that part's PET scan data.
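A sketch of this per-bed lookup, modeling the bed information set as a mapping from bed number to part identification information (mirroring the five beds and parts of fig. 6) and using a placeholder for the device's scan call:

    # Bed information set: bed number -> part identification information,
    # mirroring the five beds and parts of fig. 6.
    BED_INFO_SET = {1: "larynx", 2: "lungs/heart", 3: "viscera",
                    4: "intestine", 5: "bladder/pelvis"}

    def perform_pet_scan(bed_number: int, scan_params: dict) -> None:
        """Placeholder for the PET device's scan call (not a real API)."""
        print(f"scanning bed {bed_number} with {scan_params}")

    def scan_target_bed(bed_number: int, processing_data: dict) -> None:
        """Before scanning a target bed, look up the part on it and use that
        part's PET scan parameters (step S1031)."""
        part_id = BED_INFO_SET[bed_number]
        params = processing_data[part_id]
        perform_pet_scan(bed_number, params)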
When the PET system performs PET scanning on a plurality of detection parts on a plurality of scanning beds, the detection parts are subjected to PET scanning by utilizing PET scanning parameters corresponding to the detection parts, so that the image quality of the detection parts in the finally obtained PET image is good.
In an alternative embodiment, the PET system reconstructs a PET image of each target detection site according to PET image processing data of each target detection site, and there are various implementation manners for obtaining a PET image of the detected object, for example, the first implementation manner is: directly utilizing PET image processing data of each target detection part to reconstruct an image of the target detection part, and obtaining a PET image.
In a second implementation: first, each piece of PET image processing data is output; second, a data management operation on each piece of PET image processing data is detected; third, in response to detecting the data management operation, each piece of PET image processing data, as it stands after the data management operation is finished, is determined as target PET image processing data; finally, PET image reconstruction is performed on the target detection parts according to the target PET image processing data of each target detection part, to obtain the PET image of the detected object.
Specifically, each piece of PET image processing data may be displayed in a configuration interface, where an operator can modify, delete, or otherwise manage it.
This realizes interaction between the PET system and the operator, enriches the sources of PET image processing data used in PET image reconstruction, improves the operator's experience, and extends the functionality of the PET system.
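A minimal sketch of this second implementation is given below; display_for_editing is a hypothetical placeholder for the configuration interface, and the parameter names are illustrative.

```python
# A minimal sketch of the output -> data management -> reconstruct flow.

def display_for_editing(processing_data: dict) -> dict:
    """Stand-in for the configuration interface, where the operator may
    modify or delete entries; here the data is returned unchanged."""
    return processing_data

pet_processing_data = {
    "head":  {"recon_method": "OSEM", "iterations": 4, "subsets": 12},
    "chest": {"recon_method": "OSEM", "iterations": 3, "subsets": 16},
}

# Output each part's data, apply the operator's data management operation, and
# treat the result as the target PET image processing data for reconstruction.
target_data = {part: display_for_editing(data)
               for part, data in pet_processing_data.items()}
```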
In the above method, the PET system determines the part identification information of at least one target detection part of the detected object, acquires the PET image processing data corresponding to each piece of part identification information, and performs PET image reconstruction on each target detection part according to that part's PET image processing data, to obtain the PET image of the detected object. Each target detection part is thus reconstructed with PET image processing data specific to that part, so that a PET image of high quality is obtained.
Because the PET image processing data used in reconstruction can be determined automatically by the PET system, manual setting is unnecessary, manual setting errors are avoided, and the quality of the PET image is assured.
The PET system can also analyze, over a historical period, the PET image processing data used when different detection parts were reconstructed, and update the preset correspondence between part identification information and PET image processing data according to the resulting statistics. With this arrangement, the PET system automatically keeps the correspondence between part identification information and PET image processing data up to date, which further enriches its functionality.
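One plausible form of such an update, under assumed data structures, is to keep for each part the processing data used most often during the historical period; the history log and identifiers below are hypothetical.

```python
# A minimal sketch of updating the part-to-processing-data correspondence
# from historical usage statistics.
from collections import Counter

history = [("head", "recon_A"), ("head", "recon_A"), ("head", "recon_B"),
           ("chest", "recon_C"), ("chest", "recon_C")]

def updated_correspondence(history):
    counts = {}
    for part_id, data_id in history:
        counts.setdefault(part_id, Counter())[data_id] += 1
    # The most frequently used processing data wins for each part.
    return {part: c.most_common(1)[0][0] for part, c in counts.items()}

print(updated_correspondence(history))  # {'head': 'recon_A', 'chest': 'recon_C'}
```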
To enable those skilled in the art to understand more clearly the method of reconstructing a PET image provided by the embodiments of the present disclosure, the method is now described in detail through the following example.
The method for reconstructing a PET image provided by this example is applied to a PET system comprising a CT device and a PET device. The CT device performs a CT scan of the patient to obtain a CT image and transmits it to the PET device; the PET device then performs PET image reconstruction on the patient using the CT image, to obtain a PET image of the patient.
The specific operation process is as follows:
The first step: after the PET system starts working, patient information and protocol information of a patient are received, the patient information including the patient's name, gender, age, height, and so on.
The second step: a CT scan is performed on the patient by the CT device to generate a CT image, and the generated CT image is transmitted to the PET device.
The third step: according to the patient's height information, the image region division ratio set for that height information is acquired, and the image region of the CT image is divided according to this ratio to obtain part image regions for the different detection parts. In the output CT image, a sub-mark frame is displayed on the edge of each part image region, and a schematic of the scanning bed(s) occupied by each detection part is displayed to one side of that part's image region, at the same height in the CT image as the part image region. The content of the CT image output in this step is illustrated in fig. 6.
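The region division in this step can be sketched as follows; the ratio and part names are illustrative, since the disclosure only requires that the division ratio be selected according to the patient's height information.

```python
# A minimal sketch of dividing the CT image region by a height-dependent ratio.

def divide_image_regions(y_top, y_bottom, ratio, part_names):
    """Split [y_top, y_bottom] along the bed arrangement direction into
    consecutive ordinate ranges proportional to `ratio`, one per part."""
    total = sum(ratio)
    height = y_bottom - y_top
    ranges, y = {}, y_top
    for name, r in zip(part_names, ratio):
        y_next = y + height * r / total
        ranges[name] = (y, y_next)
        y = y_next
    return ranges

# e.g. head : chest : abdomen : legs = 1 : 2 : 2 : 4 for this height class
print(divide_image_regions(0, 1800, [1, 2, 2, 4],
                           ["head", "chest", "abdomen", "legs"]))
```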
The fourth step: the operator judges whether the sizes of the sub-mark frames need adjusting, and whether their positions are appropriate, from the height relation between the part image areas in the CT image and the schematics of the scanning beds. When the positions of the sub-mark frames are judged reasonable, i.e., when the bed information of the scanning beds occupied by the detection parts is judged reasonable, the operator can trigger the PET device, through a preset operation (such as clicking a confirmation option on the image interface), to set a PET image reconstruction method, PET image reconstruction parameters, and PET scan parameters for each detection part and each scanning bed.
When the position of a sub-mark frame is judged unreasonable, its size can be adjusted by dragging the frame with the mouse. After the adjustment is finished, the bed information of the scanning beds occupied by the detection part is redetermined, and the PET device is triggered through the preset operation to set the PET image reconstruction method, PET image reconstruction parameters, and PET scan parameters for each detection part and scanning bed.
The fifth step: in response to the preset operation, the PET device acquires the PET image reconstruction method, PET image reconstruction parameters, and PET scan parameters corresponding to the part identification information of each detection part, and sets them for each detection part and each scanning bed.
The sixth step: a PET scan is performed on each detection part according to the PET scan parameters set for it, to obtain the PET scan data of that part and thereby the PET scan data of the patient.
The seventh step: PET image reconstruction is performed on each detection part according to its PET scan data and the PET image reconstruction method and parameters set for it, to obtain a PET sub-image of that part; the PET sub-images of all the detection parts together form the PET image of the patient.
FIG. 8 is a block diagram illustrating an apparatus for reconstructing a PET image according to an exemplary embodiment. The apparatus shown in fig. 8 is applied to a PET system and includes a first determining module 21, an acquiring module 22, and a reconstruction module 23, wherein:
the first determining module 21 is configured to determine the part identification information of at least one target detection part of the detected object;
the acquiring module 22 is configured to acquire the PET image processing data corresponding to each piece of part identification information, the PET image processing data including: PET image reconstruction data and PET scan parameters;
the reconstruction module 23 is configured to perform PET image reconstruction on each target detection part according to that part's PET image processing data, to obtain the PET image of the detected object.
In an embodiment, the reconstruction module 23 may include a scanning sub-module and a first reconstruction sub-module, wherein:
the scanning sub-module is configured to perform PET scanning on the target detection part by using the PET scanning parameters corresponding to the part identification information for each part identification information to obtain the PET scanning data of the target detection part;
The first reconstruction sub-module is configured to reconstruct a PET image of the target detection part according to the PET scanning data and the corresponding PET image reconstruction data, so as to obtain the PET image.
In an embodiment, the first determining module may include an obtaining sub-module and a determining sub-module, wherein:
the obtaining sub-module is configured to perform part division on the detected object according to a preset division rule, to obtain part identification information of at least two detection parts;
the determining sub-module is configured to determine the part identification information of the at least one target detection part from the part identification information of the at least two detection parts.
In an embodiment, the obtaining sub-module may include an identification unit and a first obtaining unit, wherein:
the identification unit is configured to, in a case where the PET system includes a part recognition device, perform part recognition on the detected object by using the part recognition device, to obtain a part recognition result;
the first obtaining unit is configured to obtain the part identification information of the at least two detection parts according to the part identification result.
In another embodiment, the obtaining sub-module may include a first acquisition unit, a second obtaining unit, a second acquisition unit, a first determining unit, and a third acquisition unit, wherein:
the first acquisition unit is configured to acquire the CT image data of the detected object and target object attribute information, the target object attribute information including at least one of: height information, race information, and region information;
the second obtaining unit is configured to search a preset correspondence between object attribute information and image region division ratios, to obtain the target image region division ratio corresponding to the target object attribute information, the target image region division ratio being the ratio between the part image region sizes of the detection parts in the CT image of the detected object;
the second acquisition unit is configured to acquire, from the CT image data, the ordinate information of the top end point and the bottom end point of the detected object along the arrangement direction of the plurality of scanning beds;
the first determining unit is configured to determine an ordinate range of each of the part image areas according to the target image area division ratio and the ordinate information;
the third acquisition unit is configured to acquire the part identification information corresponding to each of the ordinate ranges.
In an embodiment, the third acquisition unit may include a first determining subunit and a second determining subunit, wherein:
the first determining subunit is configured to determine the ordering information of each detection part among all the detection parts according to the arrangement of the ordinate ranges of the detection parts;
the second determining subunit is configured to determine, according to a preset correspondence between detection-part ordering information and part identification information, the part identification information corresponding to the ordering information of each detection part.
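A minimal sketch of these two subunits follows, with illustrative region names and an assumed preset correspondence between rank and part identification.

```python
# A minimal sketch of deriving ordering information and mapping it to part
# identification information.

ordinate_ranges = {"r1": (0, 200), "r2": (200, 650), "r3": (650, 1100)}
preset_order_to_part = {0: "head", 1: "chest", 2: "abdomen"}

# First subunit: ordering information from the arrangement of the ordinate ranges.
ordered = sorted(ordinate_ranges, key=lambda k: ordinate_ranges[k][0])

# Second subunit: rank -> part identification information.
part_ids = {region: preset_order_to_part[rank] for rank, region in enumerate(ordered)}
print(part_ids)  # {'r1': 'head', 'r2': 'chest', 'r3': 'abdomen'}
```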
In an embodiment, the determining sub-module may include a first display unit, a receiving unit, and a second determining unit, wherein:
the first display unit is configured to display a mark frame on the edge of the total CT image area of the at least two detection parts in the output CT image according to each ordinate range;
the receiving unit is configured to receive a size determining operation for the mark frame;
the second determining unit is configured to determine the part identification information of the at least one target detection part from the part identification information of the at least two detection parts, using the size of the mark frame after the size determining operation is finished.
In an embodiment, the first display unit may include: a first display subunit;
the first display subunit is configured to display a total mark frame on the edge of the total CT image area in the output CT image according to each ordinate range;
the second determining unit may include a third determining subunit and an acquiring subunit, wherein:
the third determining subunit is configured to determine a CT image area defined by the total mark frame after the size determining operation is finished;
the acquiring subunit is configured to acquire the part identification information of each target detection part whose part image area is located within the CT image area.
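For illustration, the selection performed by this acquiring subunit can be sketched as an interval-containment test; the part ranges and frame extent below are assumed values.

```python
# A minimal sketch: a part is a target when its ordinate range lies inside the
# CT image area defined by the total mark frame after the size determining
# operation.

part_ranges = {"head": (0, 200), "chest": (200, 650), "abdomen": (650, 1100)}
frame_top, frame_bottom = 150, 1100   # frame extent after the sizing adjustment

target_parts = [part for part, (top, bottom) in part_ranges.items()
                if top >= frame_top and bottom <= frame_bottom]
print(target_parts)  # ['chest', 'abdomen']: only parts fully inside the frame
```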
In an embodiment, the first display unit may include: a second display subunit;
the second display subunit is configured to display a sub-mark frame on the edge of the part image area of each detection part in the output CT image according to each ordinate range;
the second determination subunit may be configured to determine, for each of the sub-mark frames, the part identification information of the detection part defined by the sub-mark frame before the size determination operation is performed, as the part identification information of the target detection part defined by the sub-mark frame after the size determination operation is ended.
In an embodiment, the apparatus may further include a second determining module and a display module, wherein:
the second determining module is configured to determine, for each of the ordinate ranges, target scanning bed information corresponding to the ordinate range according to a correspondence between ordinate information of a pixel point in the CT image and bed information of a scanning bed, before the receiving of the size determining operation for the marker frame;
the display module is configured to display a schematic structure of a target scanning bed having the target scanning bed information and a target part image area at the same height in the output CT image, wherein the target part image area is the part image area to which the ordinate range belongs.
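A minimal sketch of how an ordinate range might be mapped to scanning bed information follows, assuming each bed covers a known ordinate span in the CT image; the spans are illustrative.

```python
# A minimal sketch of the second determining module: find the beds whose
# ordinate span overlaps a part's ordinate range.

bed_spans = {"bed_1": (0, 400), "bed_2": (350, 750), "bed_3": (700, 1100)}

def beds_for_range(y_top, y_bottom):
    """Return the beds whose ordinate span overlaps the part's ordinate range."""
    return [bed for bed, (b_top, b_bottom) in bed_spans.items()
            if b_top < y_bottom and b_bottom > y_top]

print(beds_for_range(200, 650))  # -> ['bed_1', 'bed_2']
```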
In an embodiment, the apparatus may further include an establishing module;
the establishing module is configured to establish, after the target scanning bed information corresponding to each ordinate range is determined, a correspondence between the part identification information corresponding to that ordinate range and the target scanning bed information, to obtain a bed information set;
the scanning sub-module may include an obtaining unit, a determining unit, and a scanning unit, wherein:
the obtaining unit is configured to search the bed information set according to the target scanning bed information of the target scanning bed, before the PET system performs the PET scan on the target scanning bed, to obtain the part identification information corresponding to the target scanning bed information;
the determining unit is configured to determine the PET scanning parameters corresponding to the part identification information;
the scanning unit is configured to perform the PET scan on the target detection site located on the target scanning bed using the determined PET scan parameters.
In an embodiment, the reconstruction module may be configured to reconstruct the image of each target detection part directly using the PET image processing data of that part, to obtain the PET image;
alternatively, the reconstruction module may include:
an output sub-module configured to output each of the PET image processing data;
a detection sub-module configured to detect a data management operation for each of the PET image processing data;
a determining sub-module, configured to determine, in response to detecting the data management operation, each of the PET image processing data after the data management operation is finished as target PET image processing data;
And the second reconstruction sub-module is configured to reconstruct a PET image of each target detection part according to the target PET image processing data of each target detection part, so as to obtain the PET image.
The specific manner in which the various modules perform their operations in the apparatus of the above embodiments has been described in detail in the method embodiments, and will not be repeated here.
Embodiments of the apparatus for reconstructing a PET image of the present invention may be applied to a network device. The apparatus embodiments may be implemented by software, by hardware, or by a combination of hardware and software. Taking a software implementation as an example, the apparatus, as a logical device, is formed by the processor of the device where it resides reading the corresponding computer program instructions into memory. In terms of hardware, fig. 9 shows a hardware structure diagram of the PET system where the apparatus for reconstructing a PET image resides; in addition to the processor, network interface, and memory shown in fig. 9, the device in this embodiment may generally include other hardware, such as a forwarding chip responsible for packet processing. In terms of hardware architecture, the device may also be a distributed device, possibly comprising a plurality of interface cards, so that message processing can be extended at the hardware level.
An embodiment of the invention also provides a computer-readable storage medium on which a computer program is stored; the program, when executed by a processor, implements the following method:
determining part identification information of at least one target detection part of the detected object;
acquiring PET image processing data corresponding to each part identification information, wherein the PET image processing data comprises: PET image reconstruction data and PET scan parameters;
and carrying out PET image reconstruction on the target detection parts according to the PET image processing data of each target detection part to obtain the PET image of the detected object.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It is to be understood that the invention is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (15)

1. A method of reconstructing a PET image for use in a PET system, the method comprising:
performing part division on the detected object according to a preset division rule to obtain part identification information of at least two detection parts, and determining the part identification information of at least one target detection part from the part identification information of the at least two detection parts;
acquiring PET image processing data corresponding to each piece of part identification information, the PET image processing data comprising: PET image reconstruction data and PET scan parameters, wherein each piece of part identification information corresponds to its own PET image processing data, and the PET image processing data is automatically determined by the PET system according to the part identification information;
and carrying out PET image reconstruction on the target detection parts according to the PET image processing data of each target detection part to obtain the PET image of the detected object.
2. The method according to claim 1, wherein said performing PET image reconstruction on the target detection parts according to said PET image processing data of each said target detection part to obtain said PET image of said detected object comprises:
for each part identification information, carrying out PET scanning on the target detection part by utilizing the PET scanning parameters corresponding to the part identification information to obtain PET scanning data of the target detection part;
and carrying out PET image reconstruction on the target detection part according to the PET scanning data and the corresponding PET image reconstruction data to obtain the PET image.
3. The method of claim 1, wherein the PET system comprises a part recognition device, and the performing part division on the detected object according to the preset division rule to obtain the part identification information of at least two detection parts comprises:
performing part recognition on the detected object by using the part recognition device to obtain a part recognition result;
and obtaining the part identification information of the at least two detection parts according to the part identification result.
4. The method according to claim 1, wherein the performing part division on the detected object according to the preset division rule to obtain the part identification information of at least two detection parts comprises:
acquiring CT image data of the detected object and target object attribute information, wherein the target object attribute information comprises at least one of the following items: height information, race information, region information;
searching a corresponding relation between preset object attribute information and image region division ratios to obtain target image region division ratios corresponding to the target object attribute information, wherein the target image region division ratios are ratios among part image region sizes of all detection parts in CT images of the detected objects;
acquiring ordinate information of a top end point and a bottom end point of the detected object along the arrangement direction of a plurality of scanning beds from the CT image data respectively;
determining the ordinate range of each part image area according to the target image area dividing proportion and the ordinate information;
and acquiring the part identification information corresponding to each ordinate range.
5. The method of claim 4, wherein the acquiring the part identification information corresponding to each of the ordinate ranges comprises:
determining the ordering information of each detection part among all the detection parts according to the arrangement of the ordinate ranges of the detection parts;
and determining the part identification information corresponding to the ordering information of each detection part according to the preset correspondence between detection-part ordering information and part identification information.
6. The method of claim 4, wherein the determining the part identification information of the at least one target detection part from the part identification information of the at least two detection parts comprises:
displaying a mark frame on the edge of the total CT image area of the at least two detection parts in the output CT image according to each ordinate range;
receiving a size determining operation for the mark frame;
and determining the part identification information of the at least one target detection part from the part identification information of the at least two detection parts by using the size of the mark frame after the size determining operation is finished.
7. The method of claim 6, wherein the displaying a mark frame on the edge of the total CT image area of the at least two detection parts in the output CT image according to each ordinate range comprises:
displaying a total mark frame on the edge of the total CT image area in the output CT image according to each ordinate range;
the determining, using the size of the mark frame after the size determining operation is finished, the part identification information of the at least one target detection part from the part identification information of the at least two detection parts comprises:
determining a CT image area defined by the total mark frame after the size determining operation is finished;
and acquiring the part identification information of each target detection part of which a part image area is positioned in the CT image area.
8. The method of claim 6, wherein the displaying a mark frame on the edge of the total CT image area of the at least two detection parts in the output CT image according to each ordinate range comprises:
displaying a sub-mark frame on the edge of the part image area of each detection part in the output CT image according to each ordinate range;
the determining, using the size of the mark frame after the size determining operation is finished, the part identification information of the at least one target detection part from the part identification information of the at least two detection parts comprises:
for each of the sub-mark frames, determining the part identification information of the detection part defined by the sub-mark frame before the size determining operation is performed as the part identification information of the target detection part defined by the sub-mark frame after the size determining operation is finished.
9. The method of claim 6, wherein, prior to the receiving a size determining operation for the mark frame, the method further comprises:
determining target scanning bed information corresponding to each ordinate range according to the corresponding relation between the ordinate information of the pixel points in the CT image and the bed information of the scanning bed;
and displaying the schematic structure of the target scanning bed with the target scanning bed information and a target part image area in the same height in the output CT image, wherein the target part image area is the part image area to which the ordinate range belongs.
10. The method of claim 9, wherein after said determining the target scan bed information corresponding to the ordinate range, the method further comprises:
establishing a correspondence between the part identification information corresponding to each ordinate range and the target scanning bed information, to obtain a bed information set;
the step of performing PET scanning on the target detection part by using the PET scanning parameters corresponding to the part identification information for each part identification information to obtain the PET scanning data of the target detection part includes:
before the PET system performs the PET scanning on a target scanning bed, searching the bed information set according to the target scanning bed information of the target scanning bed, to obtain the part identification information corresponding to the target scanning bed information;
determining the PET scanning parameters corresponding to the part identification information;
and carrying out PET scanning on the target detection part positioned on the target scanning bed by utilizing the determined PET scanning parameters.
11. The method according to claim 1, wherein said performing PET image reconstruction on the target detection parts according to said PET image processing data of each said target detection part to obtain said PET image of said detected object comprises:
directly utilizing the PET image processing data of each target detection part to reconstruct the image of the target detection part to obtain the PET image;
or outputting each piece of PET image processing data;
detecting a data management operation for each of the PET image processing data;
in response to detecting the data management operation, determining each of the PET image processing data after data management as target PET image processing data;
and carrying out PET image reconstruction on the target detection parts according to the target PET image processing data of each target detection part to obtain the PET image.
12. An apparatus for reconstructing a PET image for use in a PET system, said apparatus comprising:
the first determining module is configured to perform part division on the detected object according to a preset division rule to obtain part identification information of at least two detection parts, and determine the part identification information of at least one target detection part from the part identification information of the at least two detection parts;
an acquisition module configured to acquire PET image processing data corresponding to each piece of part identification information, the PET image processing data including: PET image reconstruction data and PET scan parameters, wherein each piece of part identification information corresponds to its own PET image processing data, and the PET image processing data is automatically determined by the PET system according to the part identification information;
and the reconstruction module is configured to reconstruct a PET image of each target detection part according to the PET image processing data of each target detection part, so as to obtain the PET image of the detected object.
13. The apparatus of claim 12, wherein the reconstruction module comprises:
the scanning sub-module is configured to perform PET scanning on the target detection part by utilizing the PET scanning parameters corresponding to the part identification information for each part identification information to obtain the PET scanning data of the target detection part;
and the first reconstruction sub-module is configured to reconstruct a PET image of the target detection part according to the PET scanning data and the corresponding PET image reconstruction data to obtain the PET image.
14. A PET system, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
performing part division on the detected object according to a preset division rule to obtain part identification information of at least two detection parts, and determining the part identification information of at least one target detection part from the part identification information of the at least two detection parts;
acquiring PET image processing data corresponding to each piece of part identification information, the PET image processing data comprising: PET image reconstruction data and PET scan parameters, wherein each piece of part identification information corresponds to its own PET image processing data, and the PET image processing data is automatically determined by the PET system according to the part identification information;
and carrying out PET image reconstruction on the target detection parts according to the PET image processing data of each target detection part to obtain the PET image of the detected object.
15. A machine-readable storage medium having stored thereon computer instructions that when executed perform the following:
performing part division on the detected object according to a preset division rule to obtain part identification information of at least two detection parts, and determining the part identification information of at least one target detection part from the part identification information of the at least two detection parts;
acquiring PET image processing data corresponding to each piece of part identification information, the PET image processing data comprising: PET image reconstruction data and PET scan parameters, wherein each piece of part identification information corresponds to its own PET image processing data, and the PET image processing data is automatically determined by the PET system according to the part identification information;
and carrying out PET image reconstruction on the target detection parts according to the PET image processing data of each target detection part to obtain the PET image of the detected object.