CN112862916B - CT perfusion function map quantitative parameter processing equipment and method


Info

Publication number
CN112862916B
CN112862916B (granted from application CN202110266924.6A)
Authority
CN
China
Prior art keywords: image, images, processed, determining, positions
Prior art date
Legal status: Active
Application number: CN202110266924.6A
Other languages: Chinese (zh)
Other versions: CN112862916A (en)
Inventor
王拥军
熊云云
李子孝
吴振洲
史睿琼
刘昱
陈峰蔚
邓悦
Current Assignee
Beijing Ande Yizhi Technology Co ltd
Beijing Tiantan Hospital
Original Assignee
Beijing Ande Yizhi Technology Co ltd
Beijing Tiantan Hospital
Priority date
Filing date
Publication date
Application filed by Beijing Ande Yizhi Technology Co ltd and Beijing Tiantan Hospital
Priority to CN202110266924.6A
Publication of CN112862916A
Application granted
Publication of CN112862916B


Classifications

All classifications fall under G (PHYSICS) › G06 (COMPUTING; CALCULATING OR COUNTING) › G06T (IMAGE DATA PROCESSING OR GENERATION, IN GENERAL):

    • G06T 11/003 Reconstruction from projections, e.g. tomography (under G06T 11/00, 2D [Two Dimensional] image generation)
    • G06T 5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T 7/0012 Biomedical image inspection (under G06T 7/0002, inspection of images, e.g. flaw detection)
    • G06T 7/11 Region-based segmentation (under G06T 7/10, segmentation; edge detection)
    • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods
    • G06T 2207/10081 Computed x-ray tomography [CT] (under G06T 2207/10072, tomographic images)
    • G06T 2207/20221 Image fusion; image merging (under G06T 2207/20212, image combination)
    • G06T 2207/30016 Brain (under G06T 2207/30004, biomedical image processing)
    • G06T 2207/30104 Vascular flow; blood flow; perfusion (under G06T 2207/30101, blood vessel; artery; vein; vascular)

Abstract

The present disclosure relates to a CT perfusion function map quantitative parameter processing device and method. The device comprises a medical image acquisition apparatus and a processing apparatus, the processing apparatus being configured to: perform first preprocessing on a plurality of images to be processed in an image sequence to be processed to obtain a first image sequence; extract the midline of each first image in the first image sequence; determine the target positions of the key points in each first image; and determine, according to the first image sequence and the target positions, the perfusion function indexes of the physiological region and the relative values of the perfusion function indexes on the two sides of the midline. According to the CT perfusion function map quantitative parameter processing device of the embodiments of the present disclosure, the perfusion function indexes can be determined based on the preprocessed first image sequence and the target positions, and the relative values of the indexes on the two sides of the midline can then be determined automatically from the midline, which reduces the uncertainty of manual labeling and improves processing efficiency.

Description

CT perfusion function map quantitative parameter processing equipment and method
Technical Field
The disclosure relates to the technical field of image processing, in particular to a CT perfusion function map quantitative parameter processing device and method.
Background
In the related art, conventional image processing methods are usually used to process perfusion images in order to calculate perfusion function indexes such as cerebral blood flow and cerebral blood volume, together with the relative values of those indexes. However, conventional image processing methods are relatively slow and of limited accuracy, and some medical images require manual labeling, which increases the workload and reduces processing efficiency. Moreover, manual labeling is limited by the experience and skill of the labeling personnel and carries high uncertainty, so the calculated perfusion function indexes cannot meet the requirements of both efficiency and accuracy.
Disclosure of Invention
The disclosure provides a CT perfusion function diagram quantitative parameter processing device and method.
According to an aspect of the present disclosure, there is provided a CT perfusion function map quantitative parameter processing apparatus, the apparatus comprising a medical image acquisition device and a processing device, wherein the medical image acquisition device is configured to acquire a sequence of images to be processed, the sequence including images to be processed of the same physiological region acquired at multiple moments, the images to be processed being three-dimensional medical images; the processing device is configured to: perform first preprocessing on a plurality of images to be processed in the image sequence to be processed to obtain a first image sequence; extract the midline of each first image in the first image sequence; determine the target positions of the key points in each first image; and determine, according to the first image sequence and the target positions, the perfusion function indexes of the physiological region and the relative values of the perfusion function indexes on the two sides of the midline.
In one possible implementation, performing first preprocessing on a plurality of images to be processed in an image sequence to be processed to obtain a first image sequence includes: determining a reference layer of the plurality of images to be processed in the image sequence to be processed; performing image registration on the plurality of images to be processed according to their reference layers and the shapes of the regions to be registered in the reference layers, to obtain a plurality of second images; performing image registration based on mutual information among the plurality of second images to obtain a plurality of third images; and correcting the times corresponding to the plurality of third images according to the average time interval between the acquisition moments of the images to be processed in the image sequence to be processed, to obtain the first image sequence.
In one possible implementation, extracting the midline of each first image in the first image sequence includes: obtaining a target region in each first image; determining the center of gravity of the target region; determining a plurality of dividing lines of the target region according to the center of gravity; respectively determining the mirror image similarity of the target region with respect to each dividing line; and determining the dividing line with the highest mirror image similarity as the midline of the first image.
In one possible implementation, determining the target positions of the key points in each first image includes: performing second preprocessing on the first image to obtain a fourth image; inputting the fourth image into a key point detection network for processing, to determine candidate positions of the key points in the first image; and screening the candidate positions according to the pixel values at the candidate positions in the plurality of first images to obtain the target positions.
In one possible implementation, performing second preprocessing on the first image to obtain a fourth image includes: performing fusion processing on the plurality of first images to obtain a fifth image; and extracting the region where the physiological region is located from the fifth image to obtain the fourth image.
In one possible implementation, determining the target positions of the key points in each first image includes: determining a target layer in the first image; superimposing the target layers of the plurality of first images to obtain a superimposed image; and determining the target positions according to the brightness characteristics of a plurality of positions in the superimposed image.
In one possible implementation, determining the perfusion function indexes of the physiological region and the relative values of the perfusion function indexes on the two sides of the midline according to the first image sequence and the target positions includes: obtaining perfusion function indexes at a plurality of positions in the first image according to the pixel values of the plurality of first images in the first image sequence and the target positions; and determining the relative values of the perfusion function indexes of the regions on the two sides of the midline according to the perfusion function indexes at the plurality of positions in the first image and the midline of the first image.
In one possible implementation, the perfusion function indexes include at least one of contrast agent arrival delay, cerebral blood flow, cerebral blood volume, mean transit time, and time to peak.
In one possible implementation, obtaining perfusion function indexes at a plurality of positions in the first image according to the pixel values of the plurality of first images in the first image sequence and the target positions includes: determining the contrast agent arrival delay at a plurality of positions of the first image according to the pixel values at the plurality of positions; determining the cerebral blood flow at the plurality of positions according to the pixel values at the plurality of positions; determining the cerebral blood volume at the plurality of positions according to the pixel values at the plurality of positions and the pixel values at the target positions in the plurality of first images; determining the mean transit time at the plurality of positions based on the cerebral blood flow and the cerebral blood volume; and determining the time to peak at the plurality of positions according to the contrast agent arrival delay at the plurality of positions and the contrast agent arrival delay at the target positions.
According to an aspect of the present disclosure, a method for processing quantitative parameters of a CT perfusion function map is provided, which includes: performing first preprocessing on a plurality of images to be processed in an image sequence to be processed to obtain a first image sequence, wherein the images to be processed are three-dimensional medical images and the image sequence to be processed includes three-dimensional medical images of the same physiological region acquired at a plurality of moments; extracting the midline of each first image in the first image sequence; determining the target positions of the key points in each first image; and determining, according to the first image sequence and the target positions, the perfusion function indexes of the physiological region and the relative values of the perfusion function indexes on the two sides of the midline.
In one possible implementation, performing first preprocessing on a plurality of images to be processed in an image sequence to be processed to obtain a first image sequence includes: determining a reference layer of the plurality of images to be processed in the image sequence to be processed; performing image registration on the plurality of images to be processed according to their reference layers and the shapes of the regions to be registered in the reference layers, to obtain a plurality of second images; performing image registration based on mutual information among the plurality of second images to obtain a plurality of third images; and correcting the times corresponding to the plurality of third images according to the average time interval between the acquisition moments of the images to be processed in the image sequence to be processed, to obtain the first image sequence.
In one possible implementation, extracting the midline of each first image in the first image sequence includes: obtaining a target region in each first image; determining the center of gravity of the target region; determining a plurality of dividing lines of the target region according to the center of gravity; respectively determining the mirror image similarity of the target region with respect to each dividing line; and determining the dividing line with the highest mirror image similarity as the midline of the first image.
In one possible implementation, determining the target positions of the key points in each first image includes: performing second preprocessing on the first image to obtain a fourth image; inputting the fourth image into a key point detection network for processing, to determine candidate positions of the key points in the first image; and screening the candidate positions according to the pixel values at the candidate positions in the plurality of first images to obtain the target positions.
In one possible implementation, performing second preprocessing on the first image to obtain a fourth image includes: performing fusion processing on the plurality of first images to obtain a fifth image; and extracting the region where the physiological region is located from the fifth image to obtain the fourth image.
In one possible implementation, determining the target positions of the key points in each first image includes: determining a target layer in the first image; superimposing the target layers of the plurality of first images to obtain a superimposed image; and determining the target positions according to the brightness characteristics of a plurality of positions in the superimposed image.
In one possible implementation, determining the perfusion function indexes of the physiological region and the relative values of the perfusion function indexes on the two sides of the midline according to the first image sequence and the target positions includes: obtaining perfusion function indexes at a plurality of positions in the first image according to the pixel values of the plurality of first images in the first image sequence and the target positions; and determining the relative values of the perfusion function indexes of the regions on the two sides of the midline according to the perfusion function indexes at the plurality of positions in the first image and the midline of the first image.
In one possible implementation, the perfusion function indexes include at least one of contrast agent arrival delay, cerebral blood flow, cerebral blood volume, mean transit time, and time to peak.
In one possible implementation, obtaining perfusion function indexes at a plurality of positions in the first image according to the pixel values of the plurality of first images in the first image sequence and the target positions includes: determining the contrast agent arrival delay at a plurality of positions of the first image according to the pixel values at the plurality of positions; determining the cerebral blood flow at the plurality of positions according to the pixel values at the plurality of positions; determining the cerebral blood volume at the plurality of positions according to the pixel values at the plurality of positions and the pixel values at the target positions in the plurality of first images; determining the mean transit time at the plurality of positions based on the cerebral blood flow and the cerebral blood volume; and determining the time to peak at the plurality of positions according to the contrast agent arrival delay at the plurality of positions and the contrast agent arrival delay at the target positions.
According to an aspect of the present disclosure, there is provided an electronic device including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
FIG. 1 shows a block diagram of a CT perfusion function map quantitative parameter processing apparatus according to an embodiment of the present disclosure;
FIG. 2 shows a schematic view of the position of the skull according to an embodiment of the present disclosure;
FIG. 3 shows a schematic view of an elliptical region in accordance with an embodiment of the present disclosure;
FIG. 4 shows a schematic diagram of a second image after layers are superimposed, according to an embodiment of the disclosure;
FIG. 5 shows a schematic diagram of a plurality of third images after superposition of identical layers according to an embodiment of the disclosure;
FIG. 6 shows a schematic view of a target area according to an embodiment of the present disclosure;
FIG. 7 shows a schematic view of a midline according to an embodiment of the present disclosure;
FIG. 8 shows a schematic diagram of a keypoint detection network according to an embodiment of the present disclosure;
FIG. 9 shows a schematic diagram of a superimposed image according to an embodiment of the present disclosure;
FIG. 10 shows a schematic diagram of determining a target location according to an embodiment of the present disclosure;
FIG. 11 shows a schematic diagram of arterial and venous points, according to an embodiment of the present disclosure;
FIGS. 12A, 12B, 12C, and 12D show schematic diagrams of perfusion function maps according to embodiments of the present disclosure;
FIG. 13 shows a flow chart of a CT perfusion function map quantitative parameter processing method according to an embodiment of the present disclosure;
FIG. 14 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure;
FIG. 15 shows a block diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
Fig. 1 shows a block diagram of a CT perfusion function map quantitative parameter processing apparatus according to an embodiment of the present disclosure, as shown in fig. 1, the apparatus including: a medical image acquisition device 11 and a processing device 12,
the medical image acquisition device 11 is configured to acquire a sequence of images to be processed, where the sequence includes images to be processed of the same physiological region acquired at multiple moments, and the images to be processed are three-dimensional medical images;
the processing device 12 is configured to:
performing first preprocessing on a plurality of images to be processed in an image sequence to be processed to obtain a first image sequence;
extracting the midline of each first image in the first image sequence;
determining the target positions of the key points in each first image;
and determining, according to the first image sequence and the target positions, the perfusion function indexes of the physiological region and the relative values of the perfusion function indexes on the two sides of the midline.
According to the CT perfusion function map quantitative parameter processing device of the embodiments of the present disclosure, the perfusion function indexes can be determined based on the preprocessed first image sequence and the target positions, and the relative values of the indexes on the two sides of the midline can then be determined automatically from the midline, which reduces the uncertainty of manual labeling, improves processing efficiency, and improves the accuracy of the calculated relative values of the perfusion function indexes.
In one possible implementation, the medical image acquisition device may comprise a computed tomography (CT) device and may acquire a sequence of three-dimensional medical images of the same physiological region of the same patient, for example CT perfusion imaging of the brain region of the same patient. In an example, the perfusion of blood in the brain can be determined by adding a contrast agent to the blood flowing into the brain and acquiring CT images of the brain at multiple moments to observe how the contrast agent perfuses through the brain. Further, a plurality of perfusion function indexes of the brain, such as at least one of contrast agent arrival delay, cerebral blood flow, cerebral blood volume, mean transit time, and time to peak, may be determined from the medical images acquired at the plurality of moments; the present disclosure does not limit the categories of the perfusion function indexes.
In one possible implementation, the images to be processed may be sorted, for example according to parameters such as their sequence numbers or acquisition times, to obtain the image sequence to be processed. In an example, the sequence of images to be processed may include 15 or 16 images to be processed, one acquired at each moment. The present disclosure does not limit the number of images to be processed included in the sequence.
In one possible implementation, each image to be processed is a three-dimensional medical image composed of two-dimensional images of a plurality of layers obtained by scanning or the like. During scanning the acquisition orientation may shift; for example, movement of the patient changes the angle of the acquired two-dimensional images, which lowers the accuracy of the three-dimensional image. Furthermore, the sequence of images to be processed includes images acquired at multiple moments, and the acquisition orientation may differ between moments. In addition, the intervals between acquisition moments may be uneven: for example, if the image acquired at some moment is of too low quality to be used, no valid image to be processed is obtained at that moment, so its preceding and following images become adjacent in the sequence with a large acquisition time interval between them.
In one possible implementation, to mitigate the above problems, the plurality of images to be processed in the image sequence to be processed may be subjected to first preprocessing to obtain a first image sequence; for example, the images to be processed may be registered and the time intervals between adjacent images corrected.
In one possible implementation, performing first preprocessing on a plurality of images to be processed in an image sequence to be processed to obtain a first image sequence may include: determining a reference layer of the plurality of images to be processed in the image sequence to be processed; performing image registration on the plurality of images to be processed according to their reference layers and the shapes of the regions to be registered in the reference layers, to obtain a plurality of second images; performing image registration based on mutual information among the plurality of second images to obtain a plurality of third images; and correcting the times corresponding to the plurality of third images according to the average time interval between the acquisition moments of the images to be processed, to obtain the first image sequence.
In one possible implementation, the image to be processed is a medical image of a brain region surrounded by the skull; the brain region in the image to be processed therefore lies inside a skull whose cross-section is elliptical or approximately elliptical. Image registration may thus be performed based on the shape of the region to be registered: the brain lies in an elliptical or approximately elliptical region, and registration may be based on the features of that ellipse.
In one possible implementation, a reference layer may be determined in each image to be processed; for example, for any image to be processed, the layer whose skull region has the largest area may be determined as the reference layer, or the first scanned layer may be used. The present disclosure does not limit how the reference layer is selected.
In an example, the position of the skull can be segmented in the reference image layer based on the brightness characteristics of the skull, and the inside of the skull is the region to be registered (i.e., the brain region).
FIG. 2 shows a schematic view of the position of the skull according to an embodiment of the present disclosure. Because the structure and composition of the skull differ from those of the brain, their brightness characteristics in the acquired medical image also differ. As shown in FIG. 2, the position of the skull can therefore be segmented according to its brightness characteristics.
In one possible implementation, the shape of the skull in a two-dimensional image is elliptical or approximately elliptical, and ellipse fitting can be performed on the skull pixel points to obtain an elliptical region.
FIG. 3 shows a schematic diagram of an elliptical region according to an embodiment of the present disclosure. As shown in FIG. 3, ellipse fitting of the skull pixel points in the reference layer yields an elliptical region, from which parameters of the brain region such as its center and angle can be determined. For example, in the reference layer, the center position of the elliptical region may be determined from its outline; in the figure, the ellipse is offset to the left by an angle α.
In one possible implementation, the other layers of the image to be processed may be registered based on parameters such as the center and angle of the elliptical region of the reference layer, so that the center and angle of the skull-fitted elliptical region in each other layer agree with the reference layer. In an example, the center and angle of the skull-fitted elliptical region in another layer deviate from those of the reference layer; that layer may then be adjusted based on the deviation, for example by translating or rotating the region where the skull is located, until its center and angle agree with the reference layer. Through this processing, the center, angle, and similar parameters of the skull-fitted elliptical region in every layer of every other image to be processed in the sequence are brought into agreement with the reference layer. The present disclosure does not limit the registration manner.
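To make the ellipse-based alignment concrete, the following is a minimal sketch assuming OpenCV and NumPy; the bone threshold, the function names, and the single-layer signature are illustrative assumptions, not the patent's implementation.
```python
import cv2
import numpy as np

def fit_skull_ellipse(layer_2d, bone_threshold=1500):
    """Segment the bright skull pixels and fit an ellipse to them."""
    mask = (layer_2d >= bone_threshold).astype(np.uint8)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    points = np.vstack([c.reshape(-1, 2) for c in contours])
    # fitEllipse returns ((cx, cy), (major_axis, minor_axis), angle_deg)
    return cv2.fitEllipse(points)

def align_to_reference(layer_2d, ref_ellipse, bone_threshold=1500):
    """Rotate/translate a layer so its skull ellipse matches the reference."""
    (cx, cy), _, angle = fit_skull_ellipse(layer_2d, bone_threshold)
    (rx, ry), _, ref_angle = ref_ellipse
    # Rotate about the layer's own ellipse centre so its angle matches the
    # reference layer, then shift its centre onto the reference centre.
    M = cv2.getRotationMatrix2D((cx, cy), angle - ref_angle, 1.0)
    M[0, 2] += rx - cx
    M[1, 2] += ry - cy
    h, w = layer_2d.shape
    return cv2.warpAffine(layer_2d.astype(np.float32), M, (w, h))
```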
In one possible implementation, after this registration a plurality of registered second images are obtained in which the orientation and angle of the brain region are consistent across the layers of every second image, reducing the effect of offsets occurring during image acquisition.
FIG. 4 shows a schematic diagram of a second image whose layers have been superimposed according to an embodiment of the present disclosure. As shown in FIG. 4, after registration the layers of a second image may be superimposed into a two-dimensional image to verify the registration effect. If the registration effect is poor, skull regions at multiple angles are superimposed into the same two-dimensional image and the result appears disordered. If the registration effect is good, the superimposed skull regions overlap closely and the two-dimensional image is not confused. In FIG. 4 the skull region is clear, that is, the registration effect is good, and the influence of shifts occurring during image acquisition is reduced.
In one possible implementation, the above registration reduces errors caused by offsets during acquisition (e.g., angular offsets or displacements), but local information deviations may also occur during capture, such as blood-flow-induced changes in the pixel information of local regions of the brain. Therefore, the plurality of second images may be further registered to further improve the registration accuracy.
In one possible implementation, the plurality of second images may be registered based on the mutual information between them. Mutual information represents the correlation between two images, e.g., the amount of identical information they contain. When registering based on mutual information, the information entropy or joint entropy between two second images may be determined: the higher the degree of similarity between the two second images, or the greater the proportion of overlapping content they contain, the smaller the joint entropy between them. In an example, image registration may be performed by adjusting local regions so that the joint entropy between two second images decreases (i.e., the mutual information increases), further improving registration accuracy. Further, a plurality of second images may be registered by this method: a reference image among the second images may be determined (e.g., the first second image, or an arbitrarily selected second image), the other second images may be registered to that reference image as above, and a plurality of third images are obtained.
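As a toy illustration of the mutual-information criterion (assuming NumPy; a real registration would wrap this score in an optimiser over the local adjustments):
```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """MI(A, B) = H(A) + H(B) - H(A, B); rises as the images align."""
    joint_hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint_hist / joint_hist.sum()          # joint distribution
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)    # marginal distributions

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    # Smaller joint entropy relative to the marginals means larger MI.
    return entropy(px) + entropy(py) - entropy(pxy.ravel())
```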
FIG. 5 shows a schematic diagram of a plurality of third images after superposition of the same layers according to an embodiment of the present disclosure. As shown in FIG. 5, after registration the same layer of a plurality of third images may be superimposed into a two-dimensional image to verify the registration effect. If the registration effect is poor, skull regions carrying differing local information are superimposed into the same two-dimensional image and the result appears disordered (as in the first row of FIG. 5). If the registration effect is good, the superimposed skull regions overlap closely and the two-dimensional image is not confused; in the second row of FIG. 5 the skull region is clear, that is, the registration effect is good and the influence of local information deviations is reduced.
In one possible implementation, the images to be processed may be acquired by the same acquisition device or by different acquisition devices; different devices may use different acquisition intervals, and it is also possible that no valid image is acquired at some moment. These factors can make the acquisition intervals of the images to be processed unequal, i.e., the acquisition moments non-uniform.
In a possible implementation manner, the times corresponding to the plurality of third images may be corrected based on an average time interval between the acquisition moments of the plurality of images to be processed. For example, if the average time interval between the acquisition timings of the images to be processed is i (i is a positive number) seconds, and the time interval between the nth (n is a positive integer) image to be processed and the (n + 1) th image to be processed is j (j is a positive number, and j ≠ i) seconds, the time interval may be corrected, and for example, the correction processing may be performed by fitting, interpolation, or the like, to obtain the first image sequence.
In an example, the average time interval is 1 second, and the time interval between the 5 th image to be processed and the 6 th image to be processed is 2 seconds, an image may be inserted between the 5 th image to be processed and the 6 th image to be processed, the time corresponding to the image is an average value of the time when the 5 th image to be processed is acquired and the time when the 6 th image to be processed is acquired, and the pixel value of the image is a result of interpolation processing or fitting processing performed on the pixel values of the 5 th image to be processed and the 6 th image to be processed, for example, spline interpolation, linear interpolation, polynomial fitting, or the like may be performed to obtain the pixel value of the image.
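A sketch of this time-axis correction, assuming SciPy; linear interpolation is used here, though spline interpolation or polynomial fitting would equally match the description above:
```python
import numpy as np
from scipy.interpolate import interp1d

def resample_uniform(times, volumes):
    """times: acquisition instants in seconds; volumes: array (T, ...)."""
    times = np.asarray(times, dtype=float)
    avg_dt = np.mean(np.diff(times))               # average time interval
    uniform_times = np.arange(times[0], times[-1] + 1e-9, avg_dt)
    # Interpolate every voxel's time-intensity curve onto the uniform grid;
    # a missing acquisition simply becomes an interpolated volume.
    interpolate = interp1d(times, np.asarray(volumes, dtype=float), axis=0)
    return uniform_times, interpolate(uniform_times)
```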
In this way, the registration and the time correction can be performed, so that the registration accuracy of the plurality of first images in the first image sequence is improved, and the time interval between the first images is uniform, which is helpful for calculating the perfusion function index.
In one possible implementation, since the relative values of the perfusion function indexes (e.g., relative values between the perfusion function indexes of the left and right brain) are determined after the indexes themselves, the midline of each first image may be determined. The midline divides the target region in the first image (e.g., the region where the brain is located); for example, it can accurately divide the brain region into a left brain region and a right brain region. After the perfusion function index of each location in the brain region is determined, the location symmetric to it can be found from the midline, and the relative value between the perfusion function indexes of each location and its symmetric location can be determined.
In one possible implementation, extracting the midline of each first image in the first image sequence may include: obtaining a target region in each first image; determining the center of gravity of the target region; determining a plurality of dividing lines of the target region according to the center of gravity; respectively determining the mirror image similarity of the target region with respect to each dividing line; and determining the dividing line with the highest mirror image similarity as the midline of the first image.
In one possible implementation, the first image may include regions such as the skull that would interfere with dividing the brain region. Therefore, the target region in the first image (e.g., the brain region) may be determined first, excluding the influence of other regions (e.g., the region where the skull is located), to obtain an accurate midline.
FIG. 6 shows a schematic view of a target region according to an embodiment of the present disclosure; as shown in FIG. 6, a target region such as the brain region may be segmented in the first image. In an example, the target region in the first image may be determined by a neural network, by brightness features, or the like. In an example, only the target region of one layer of one first image need be obtained; the target regions of the other layers of that first image, and of the layers of the other first images, can be obtained through the registration relationship. The present disclosure does not limit the manner in which the target region is obtained.
In one possible implementation, the center of gravity of the target region may be determined; for example, treating all positions in the target region as having equal mass, the position of the center of gravity can be found geometrically. The present disclosure does not limit how the center of gravity is determined.
In one possible implementation, a plurality of dividing lines of the target region may be determined from the center of gravity, i.e., straight lines passing through the center of gravity, each dividing the target region into two parts. The mirror image similarity of the target region with respect to a dividing line may then be determined: one of the two parts is mirrored across the dividing line, and the similarity between the mirrored region and the other part is computed.
In one possible implementation, the mirror image similarity of each dividing line may be determined in the above manner, and the dividing line with the highest mirror image similarity may be determined as the midline of the first image. This midline divides the target region in the first image evenly into two parts; for example, it can accurately divide the brain region into a left brain region and a right brain region.
FIG. 7 shows a schematic diagram of a midline according to an embodiment of the present disclosure. As shown in FIG. 7, where the target region is the brain region, the midline accurately divides the brain region of any layer of any first image into a left brain region and a right brain region.
In one possible implementation, the midline of any layer of any one first image can be determined by the above method, and the midlines of the other layers of that first image and of the other first images can be determined through the registration relationship. The present disclosure does not limit the manner in which the midline is obtained.
In this way, segmenting out the target region in the first image reduces the interfering factors when the midline is determined, the midline can be determined accurately through the mirror image similarity, and the accuracy of the relative values of the perfusion function indexes on the two sides of the midline is improved.
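The midline search can be sketched as follows, assuming NumPy/SciPy; the angle range, the IoU similarity measure, and the mask-centering step are illustrative choices rather than the patent's exact procedure:
```python
import numpy as np
from scipy import ndimage

def find_midline_angle(brain_mask, angles=np.arange(-30.0, 30.0, 0.5)):
    """Return the dividing-line angle (degrees) with the highest mirror
    similarity; every candidate line passes through the centre of gravity."""
    h, w = brain_mask.shape
    cy, cx = ndimage.center_of_mass(brain_mask)
    # Shift the centre of gravity onto the image centre so that mirroring
    # about the central column mirrors about the candidate dividing line.
    centered = ndimage.shift(brain_mask.astype(float),
                             (h / 2 - cy, w / 2 - cx), order=1)
    best_angle, best_score = 0.0, -1.0
    for angle in angles:
        rotated = ndimage.rotate(centered, angle, reshape=False, order=1) > 0.5
        mirrored = np.flip(rotated, axis=1)          # mirror left/right
        inter = np.logical_and(rotated, mirrored).sum()
        union = np.logical_or(rotated, mirrored).sum()
        score = inter / max(union, 1)                # IoU as mirror similarity
        if score > best_score:
            best_angle, best_score = angle, score
    return best_angle, best_score
```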
In one possible implementation, when the perfusion function index of each location in the brain region is determined, key points such as the artery point and the vein point play an important role. For example, the pixel values in CT perfusion imaging reflect the degree to which each imaged tissue absorbs the rays, and the pixel values of a location at different moments reflect the flow of blood at that location. The pixel values of the artery point and the vein point are important parameters for determining the perfusion function index at any location in the brain region; therefore, the target positions of key points such as the artery point and the vein point are determined.
In one possible implementation, the target positions of the key points in the first image may be determined through a key point detection network, or by superimposing the layers of the first images at different moments. Alternatively, the two approaches may complement each other: the target positions are first determined using the key point detection network; if the network finds accurate target positions, the other approach is not needed, and if the network fails to detect the target positions or the detected key points are inaccurate, the target positions are determined by the other approach.
In one possible implementation, determining the target positions of the key points in each first image may include: performing second preprocessing on the first image to obtain a fourth image; inputting the fourth image into a key point detection network for processing, to determine candidate positions of the key points in the first image; and screening the candidate positions according to the pixel values at the candidate positions in the plurality of first images to obtain the target positions.
In one possible implementation, the first image may be preprocessed before being input to the key point detection network, to improve detection accuracy. Performing second preprocessing on the first image to obtain a fourth image includes: performing fusion processing on the plurality of first images to obtain a fifth image; and extracting the region where the physiological region is located from the fifth image to obtain the fourth image.
In one possible implementation, the second preprocessing may include fusing the plurality of first images. The plurality of first images represent the changes in the pixel values of the target region at a plurality of moments, so in the fifth image obtained after fusion the information of each pixel point may include its pixel values at the plurality of moments. The changes in pixel values over the moments facilitate the determination of the key points in the target region; for example, the changes in the pixel values of the brain region over multiple moments reflect the flow of blood in the brain region, which is useful in determining the artery point and vein point of the brain region.
In one possible implementation, the second preprocessing may further include extracting the target region in the fifth image, for example through feature information such as the brightness features of pixel values, or according to statistical rules of the target region (e.g., statistical rules of parameters such as the shape and size of the target region across the layers of the fifth image). In an example, the brain region in the fifth image may be extracted in this manner, and the fifth image may further be cropped and/or coordinate-transformed, for example to remove regions other than the target region, such as the skull. The region where the key points may exist can then be narrowed down based on statistical rules of the key point positions in historical sample images (e.g., positions of key points such as artery points and vein points labeled by professionals such as doctors), so that the key points are detected within that region, which reduces the detection range and improves detection efficiency and accuracy.
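One possible form of this second preprocessing, assuming NumPy; the stacking of time instants into channels and the bounding-box crop are illustrative assumptions:
```python
import numpy as np

def second_preprocess(first_images, brain_mask):
    """first_images: (T, H, W) layer over time; brain_mask: (H, W) bool."""
    # "Fifth image": each pixel carries its values at all T instants,
    # realised here as an (H, W, T) stack.
    fused = np.stack(first_images, axis=-1).astype(np.float32)
    # "Fourth image": zero out everything outside the physiological region
    # and crop its bounding box, removing the skull and background.
    fused *= brain_mask[..., None]
    ys, xs = np.where(brain_mask)
    return fused[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
```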
In a possible implementation manner, after the second preprocessing, a fourth image may be obtained, and further, the fourth image may be input to the keypoint detection network to determine candidate positions of the keypoints in the first image.
In one possible implementation, the key point detection network may be a convolutional neural network, for example a stacked hourglass network based on a deep convolutional neural network. The stacked hourglass network may include a plurality of hourglass modules, and each hourglass module may include a plurality of levels. It may perform top-down processing (e.g., convolution, down-sampling, and the like), which reduces the resolution of the acquired feature information and increases its receptive field, and bottom-up processing (e.g., deconvolution, up-sampling, and the like), which increases the resolution of the feature information and reduces its receptive field. The hourglass modules repeat this processing on the input image to acquire feature information at multiple scales, which facilitates the detection of key points.
FIG. 8 shows a schematic diagram of a key point detection network according to an embodiment of the present disclosure. As shown in FIG. 8, the network may be a deep convolutional neural network comprising four hourglass modules, which perform the above processing four times on the input fourth image. Supervision modules may be placed between the hourglass modules to facilitate computing the loss function during neural network training. After the last hourglass module, an activation module and an output module process the feature information output by the last hourglass module through an activation function and output the position information of the key points. The position information may be output in the form of a heatmap; the present disclosure does not limit the output form, which may also be coordinates, for example.
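A much-simplified, unofficial sketch of one hourglass module in PyTorch follows; the channel count, depth, and the 1×1 heatmap head are assumptions, and the patent's network stacks four such modules with supervision between them:
```python
import torch.nn as nn

class Hourglass(nn.Module):
    """One hourglass: repeated top-down then bottom-up processing."""
    def __init__(self, depth=4, channels=64):
        super().__init__()
        self.down = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=1) for _ in range(depth)])
        self.up = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=1) for _ in range(depth)])
        self.pool = nn.MaxPool2d(2)                  # top-down: lower resolution
        self.upsample = nn.Upsample(scale_factor=2)  # bottom-up: raise resolution
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        skips = []
        for conv in self.down:        # shrink: larger receptive field
            x = self.relu(conv(x))
            skips.append(x)
            x = self.pool(x)
        for conv in self.up:          # grow back: restore resolution
            x = self.upsample(x)
            x = self.relu(conv(x)) + skips.pop()
        return x

# After the stacked modules, a 1x1 convolution head can emit one heatmap
# per key point, e.g. one for the artery point and one for the vein point.
heatmap_head = nn.Conv2d(64, 2, kernel_size=1)
```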
In one possible implementation, the key point detection network may output a plurality of candidate positions of the key points in the first image, and the most accurate among them may be determined as the target position. In an example, in the heatmap described above, the pixel value of each pixel point may be the confidence of the target position, and the position in the first image corresponding to the position with the highest pixel value may be taken as the target position. Alternatively, a plurality of positions whose confidence exceeds a preset confidence threshold may be selected in the heatmap; their corresponding positions in the first image are the candidate positions, which can be screened according to their pixel values to determine the target position among them. In an example, the pixel values in the first image reflect the flow of blood, so whether a candidate position is an accurate target position can be determined from the pixel values at the candidate positions in the plurality of first images and the moments corresponding to those first images.
In an example, the key points are an artery point and a vein point, and the key point detection network can detect multiple sets of candidate positions for them. For example, for the first set of candidate positions, the pixel values of the artery point and vein point candidates may be determined; these pixel values change over time and can be plotted against time, giving a curve of the artery point's pixel value versus time and a curve of the vein point's pixel value versus time. Whether the set of candidate positions is accurate may then be judged by preset screening conditions, for example whether the peaks of the two curves exceed a preset threshold (e.g., 100), and/or whether the peak of the artery curve occurs earlier than the peak of the vein curve (blood flows into the brain through the arteries before flowing out through the veins, so the arterial peak should come earlier than the venous peak); other conditions may also be included and are not listed here. If a set of candidate positions does not satisfy the screening conditions, it is excluded and the next set is checked. Through this processing, the most accurate set of candidate positions can be selected from the multiple sets as the target positions. In an example, if multiple sets of candidate positions still satisfy the screening conditions after the above processing, the positions in the first image corresponding to the set of key points with the highest confidence in the heatmap may be taken as the target positions. The present disclosure does not limit the screening method.
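The screening rule can be sketched as follows (assuming NumPy; the peak threshold of 100 mirrors the example value given above):
```python
import numpy as np

def passes_screening(artery_curve, vein_curve, times, peak_threshold=100.0):
    """Curves give the candidate points' pixel values over time."""
    artery_curve = np.asarray(artery_curve, dtype=float)
    vein_curve = np.asarray(vein_curve, dtype=float)
    peaks_high = (artery_curve.max() > peak_threshold and
                  vein_curve.max() > peak_threshold)
    # Blood enters through the arteries first, so the arterial peak must
    # occur earlier in time than the venous peak.
    artery_peaks_first = (times[np.argmax(artery_curve)] <
                          times[np.argmax(vein_curve)])
    return peaks_high and artery_peaks_first
```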
In one possible implementation, if the key point detection network fails to detect accurate target positions, or no candidate position satisfies the screening conditions, the target positions of the key points in the first image may be determined by superimposing the layers of the first images at different moments. Determining the target positions of the key points in each first image may include: determining a target layer in the first image; superimposing the target layers of the plurality of first images to obtain a superimposed image; and determining the target positions according to the brightness characteristics of a plurality of positions in the superimposed image.
In one possible implementation, the key points are located in a target layer of the first image; e.g., the artery point and vein point may be located in the layer of the first image where the arteriovenous anatomical locations are most apparent. The target layer in the first image can be determined from the layers where the artery points and vein points are located in a plurality of historical sample images: in the historical sample images, personnel such as doctors have labeled the positions of artery points and vein points, statistics can be gathered on the layers where the labeled positions lie, and the target layer in the first image can be estimated from those statistics.
In one possible implementation, the target layers of the plurality of first images may be superimposed. The target layers of the plurality of first images reflect the flow of blood in that layer at different moments, so the superimposed image obtained reflects the flow information of the blood: for example, positions in the target layer through which blood flows at every moment accumulate higher pixel values and appear brighter in the superimposed image. The target positions may be determined based on the brightness characteristics of the positions in the superimposed image.
FIG. 9 shows a schematic diagram of a superimposed image according to an embodiment of the present disclosure. As shown in FIG. 9, the target layer may first be determined as the layer where the anatomical locations of the artery and vein points are most apparent, selected among the layers of the first image based on statistical rules from the historical samples. The target layers of the plurality of first images may then be superimposed to obtain the superimposed image.
FIG. 10 shows a schematic diagram of determining the target positions according to an embodiment of the present disclosure. As shown in FIG. 10, the artery point generally lies in an artery region, such as the anterior or middle cerebral artery region, and the vein point generally lies in a vein region, such as the transverse sinus or the confluence of sinuses. In an example, the brain region in the superimposed image may be divided into four parts in the longitudinal direction; the second part may be taken as the artery region and the fourth part as the vein region. The brightness characteristics of the artery region and vein region are then analyzed.
In an example, the artery region and the vein region may instead be selected first in the target layer of each first image; from the artery and vein regions of each target layer, the artery region and the vein region of the superimposed image can then be obtained.
In an example, target locations of arterial and venous points may be determined in an arterial region and a venous region, respectively.
Fig. 11 shows a schematic diagram of artery and vein points according to an embodiment of the present disclosure. As shown in fig. 11, within the artery region, a position whose luminance value is greater than or equal to a luminance threshold may be determined as the target position of the artery point; within the vein region, a position whose luminance value is greater than or equal to the luminance threshold may be determined as the target position of the vein point.
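The quartering and luminance-threshold selection can be sketched as follows. The threshold value 0.8 (applied to the normalized overlay from the previous sketch) and the choice of the brightest qualifying pixel are assumptions made for illustration:

```python
import numpy as np

def pick_artery_vein_points(overlay, luminance_threshold=0.8):
    """Quarter the brain region longitudinally; search the 2nd quarter
    for the artery point and the 4th quarter for the vein point."""
    h = overlay.shape[0]
    regions = {
        "artery": (overlay[h // 4: h // 2], h // 4),   # second quarter
        "vein":   (overlay[3 * h // 4:], 3 * h // 4),  # fourth quarter
    }
    points = {}
    for name, (region, offset) in regions.items():
        mask = region >= luminance_threshold
        if mask.any():
            # Brightest position at or above the threshold.
            y, x = np.unravel_index(np.argmax(region * mask), region.shape)
            points[name] = (y + offset, x)
        else:
            points[name] = None
    return points["artery"], points["vein"]
```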
By the method, the target position of the key point can be detected through the key point detection network, and the detection accuracy and the detection efficiency are improved. When the key point detection network cannot detect the target position, the target position can be determined in a mode of overlapping the target image layers, and the robustness of the target point detection process can be improved.
In one possible implementation, after determining the target position of the key point and the central line, the perfusion function index of each position of the physiological region may be determined based on the target position, and the relative values of the perfusion function indexes of the physiological region on both sides of the central line may be determined. Determining a perfusion function indicator of the physiological region from the first image sequence and the target location, and a relative value of the perfusion function indicators on both sides of the centerline, may include: obtaining perfusion function indexes of a plurality of positions in the first image according to the pixel values of the plurality of first images in the first image sequence and the target position; and determining relative values of the perfusion function indexes of the areas on two sides of the central line according to the perfusion function indexes of a plurality of positions in the first image and the central line of the first image.
In one possible implementation, the physiological region is the brain region in the first image, and the perfusion function indicator may include at least one of contrast agent arrival delay, cerebral blood flow, cerebral blood volume, mean transit time, and peak arrival time. Taking the perfusion function index of each position as the pixel value of that position, the resulting image is a perfusion function map, and a perfusion function map may first be determined for each perfusion function index. Further, the central line in the first image is the central line of the perfusion function map, and the relative values of the perfusion function indicators of the regions on both sides of the central line may be determined based on it.
In one possible implementation, obtaining perfusion function indicators of a plurality of positions in the first image according to the pixel values of the plurality of first images in the first image sequence and the target position includes: determining contrast arrival delays for a plurality of locations of the first image from pixel values for the plurality of locations; determining cerebral blood flow at a plurality of positions of the first image according to pixel values of the plurality of positions; determining cerebral blood volumes of a plurality of positions of the plurality of first images according to pixel values of the plurality of positions and pixel values of target positions in the plurality of first images; determining an average transit time for the plurality of locations based on the cerebral blood flow and the cerebral blood volume; determining the time to peak of the plurality of positions according to the arrival delay of the contrast agent at the plurality of positions and the arrival delay of the contrast agent at the target position.
In one possible implementation, the contrast agent arrival delay at each location may be determined. For a given location, the gray-scale index of its pixel value differs across the plurality of first images; that is, the gray-scale index changes over time and reaches a peak at some moment. The contrast agent arrival delay at the location is the time at which the gray-scale index first reaches a given percentage of that peak, for example 5%. This time may differ from location to location within the brain region: for example, the contrast agent arrival delay at location A is 2 seconds, at location B 3 seconds, and so on.
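A sketch of the arrival-delay computation, assuming a uniformly sampled curve and its acquisition times are available:

```python
import numpy as np

def arrival_delay(curve, times, fraction=0.05):
    """First time at which the gray-scale index reaches the given
    fraction (5% in the example above) of its peak value."""
    reached = curve >= fraction * curve.max()
    return times[int(np.argmax(reached))]  # argmax returns the first True
```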
In one possible implementation, the cerebral blood flow at each location may also be determined based on the pixel values at a plurality of locations of the first images. Cerebral blood flow is the flow of blood through a location per unit time. The pixel-value-versus-time curve at each position may be deconvolved to obtain the pulse residual function of that position; that is, deconvolution of the relationship curve at a position yields a time-density curve, and the function corresponding to this curve is the pulse residual function r(t) of the position. Based on the pulse residual function of a location, the cerebral blood flow at that location may be determined. In an example, cerebral blood flow at any location may be determined according to the following equation (1):
CBF = [(1 − H_LV) / ((1 − H_SV) · ρ · k_av)] · max_t[r(t)]    (1)
where CBF is the cerebral blood flow at any location, r(t) is the pulse residual function at that location, H_SV and H_LV are hemodynamic correction parameters (in the example, H_SV = 0.25 and H_LV = 0.45), ρ is the brain parenchyma density (ρ = 1.04 g/ml), and k_av is a correction term for partial volume effects; H_SV, H_LV, ρ, and k_av are all constant parameters. Cerebral blood flow is expressed in ml/100g/min. The cerebral blood flow of each position in the physiological region can be determined according to equation (1) to obtain a perfusion function map of cerebral blood flow.
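For illustration, the following sketch obtains the pulse residual function by truncated-SVD deconvolution (a standard technique in the CT perfusion literature, not one this embodiment names) and scales its maximum as in the reconstruction of equation (1). The value of K_AV below is a placeholder, and the final conversion to ml/100g/min is omitted because it depends on how the curves are normalized:

```python
import numpy as np

H_SV, H_LV, RHO = 0.25, 0.45, 1.04  # constants from the example above
K_AV = 1.0                          # placeholder; protocol specific

def residue_function(c_t, c_a, dt, rel_threshold=0.2):
    """Deconvolve the tissue curve c_t with the arterial curve c_a by
    truncated SVD, yielding k(t) = CBF * r(t)."""
    n = len(c_a)
    # Lower-triangular Toeplitz convolution matrix built from the AIF.
    A = dt * np.array([[c_a[i - j] if i >= j else 0.0 for j in range(n)]
                       for i in range(n)])
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.where(s > rel_threshold * s.max(), 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ c_t))

def cbf(c_t, c_a, dt):
    """Scaled maximum of the residue function, per the reconstruction of
    (1); since r(0) = 1, max k(t) already carries the CBF factor."""
    scale = (1.0 - H_LV) / ((1.0 - H_SV) * RHO * K_AV)
    return scale * residue_function(c_t, c_a, dt).max()
```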
In one possible implementation, cerebral blood volume represents the total amount of blood flowing through a location over a certain period of time. The cerebral blood volumes of a plurality of positions are determined according to the pixel values of those positions in the plurality of first images and the pixel values of the target positions in the plurality of first images. In an example, cerebral blood volume at each location may be determined according to the following equation (2):
CBV = ∫ c_t(t) dt / ∫ c_a(t) dt    (2)
where CBV is the cerebral blood volume at any position, c_t(t) is the function corresponding to the pixel-value-versus-time curve at that position, and c_a(t) is the function corresponding to the pixel-value-versus-time curve at the target position, for example at the artery point. Cerebral blood volume is expressed in ml/100g. The cerebral blood volume of each position in the physiological region can be determined according to equation (2) to obtain a perfusion function map of cerebral blood volume.
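A minimal sketch of equation (2) using trapezoidal areas under the two curves; any common scale factor (and the conversion to ml/100g) is omitted:

```python
import numpy as np

def auc(curve, dt):
    # Trapezoidal area under a uniformly sampled curve.
    return float(0.5 * np.sum(curve[1:] + curve[:-1]) * dt)

def cbv(c_t, c_a, dt):
    """Ratio of the areas under the tissue and arterial curves, per (2)."""
    return auc(c_t, dt) / auc(c_a, dt)
```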
In one possible implementation, after the cerebral blood flow and the cerebral blood volume of each location have been determined, the average transit time of each location may be determined from them. In an example, the average transit time of each location may be determined by the following equation (3):
MTT = CBV / CBF    (3)
wherein MTT is the mean transit time for any position. The mean transit time for each location in the physiological region can be determined according to equation (3) and a perfusion function map of the mean transit time is obtained.
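Equation (3) is the central volume principle; a one-line sketch follows, where the factor 60 (an assumption about the desired units) converts a per-minute CBF into a transit time in seconds:

```python
def mtt_seconds(cbv_value, cbf_value):
    """Central volume principle, per (3). With CBV in ml/100g and CBF in
    ml/100g/min, multiplying by 60 expresses the transit time in seconds."""
    return 60.0 * cbv_value / cbf_value
```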
In one possible implementation, the times to peak for the plurality of locations may be determined based on the contrast agent arrival delays at the plurality of locations and the contrast agent arrival delay at the target position. In an example, the time to peak for each position may be determined according to equation (4) below:
Tmax = argmax_t[r(t)] + IRF_t0(AIF)    (4)
where Tmax is the time to peak at any position, and IRF_t0(AIF) is the difference between the contrast agent arrival delay at that position and the contrast agent arrival delay at the target position (for example, the artery point). The time to peak at each location in the physiological region can be determined according to equation (4) to obtain a perfusion function map of the time to peak.
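A sketch of formula (4), reusing the `residue_function` helper from the CBF sketch above and the arrival delays from the earlier sketch; a uniform sampling interval `dt` starting at t = 0 is assumed:

```python
import numpy as np

def tmax(c_t, c_a, dt, delay, delay_artery):
    """Peak time of the residue function plus the arrival-delay
    difference relative to the artery point, per formula (4)."""
    k = residue_function(c_t, c_a, dt)  # defined in the CBF sketch
    return dt * int(np.argmax(k)) + (delay - delay_artery)
```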
Fig. 12A, 12B, 12C, and 12D show schematic diagrams of perfusion function maps according to embodiments of the present disclosure. Fig. 12A is the perfusion function map of cerebral blood volume, fig. 12B of cerebral blood flow, fig. 12C of mean transit time, and fig. 12D of time to peak.
In one possible implementation, relative values of the perfusion function indicators on both sides of the centerline may be determined based on the perfusion function map and the centerline. For example, the relative cerebral blood flow rCBF between a location in the left brain and its mirror-image location in the right brain may be determined, as may the relative cerebral blood volume rCBV between a location in the left brain and its mirror-image location in the right brain. The relative cerebral blood flow rCBF and relative cerebral blood volume rCBV of each location can be determined in this manner.
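A sketch of the mirror-image relative values; it assumes the perfusion map has been aligned so that the midline coincides with the central column of the array, an alignment that can be derived from the centerline extracted earlier:

```python
import numpy as np

def relative_map(perfusion_map, eps=1e-8):
    """Per-pixel ratio against the mirror-image position across the
    midline (axis 1), yielding e.g. rCBF or rCBV maps."""
    mirrored = np.flip(perfusion_map, axis=1)
    return perfusion_map / (mirrored + eps)
```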
Further, other indexes may be determined as needed, for example: the total volume of pixels whose time to peak exceeds 10 s, 8 s, or 6 s; the total volume of pixels whose relative cerebral blood flow rCBF is below 0.3, 0.34, or 0.38; the total volume of pixels whose relative cerebral blood volume rCBV is below 0.34, 0.38, or 0.42; the ratio of the total volume of pixels with time to peak above 10 s to the total volume of pixels with time to peak above 6 s; the difference between the total volume of pixels with time to peak above 6 s and the total volume of pixels with rCBF below 0.3; the ratio of the total volume of pixels with time to peak above 6 s to the total volume of pixels with rCBF below 0.3; and the like.
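A few of the listed indexes, sketched under the assumption that Tmax and rCBF maps plus a per-voxel volume are available; the function name and the returned dictionary layout are illustrative only:

```python
import numpy as np

def summary_indexes(tmax_map, rcbf_map, voxel_volume_ml):
    """Threshold-based volumes and derived ratios from the list above."""
    v_tmax6 = np.count_nonzero(tmax_map > 6.0) * voxel_volume_ml
    v_tmax10 = np.count_nonzero(tmax_map > 10.0) * voxel_volume_ml
    v_core = np.count_nonzero(rcbf_map < 0.3) * voxel_volume_ml
    return {
        "volume Tmax > 6 s": v_tmax6,
        "volume Tmax > 10 s": v_tmax10,
        "ratio Tmax>10s / Tmax>6s": v_tmax10 / v_tmax6 if v_tmax6 else 0.0,
        "mismatch volume (Tmax>6s minus rCBF<0.3)": v_tmax6 - v_core,
        "mismatch ratio (Tmax>6s over rCBF<0.3)": v_tmax6 / v_core if v_core else 0.0,
    }
```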
According to the CT perfusion function diagram quantitative parameter processing equipment disclosed by the embodiment of the disclosure, the perfusion function index can be determined based on the preprocessed first image sequence and the target position, and then the relative values of the perfusion function indexes on the two sides of the central line are automatically determined according to the central line, so that the uncertainty of manual labeling is reduced, the processing efficiency is improved, the accuracy of calculating the relative values of the perfusion function indexes is improved, and a worker can be helped to quickly obtain the perfusion function index with higher accuracy and the relative value of the perfusion function index.
Fig. 13 shows a flowchart of a CT perfusion function map quantitative parameter processing method according to an embodiment of the present disclosure, as shown in fig. 13, the method includes: step S11, performing first preprocessing on a plurality of images to be processed in the image sequence to be processed to obtain a first image sequence; step S12, extracting a central line of each first image in the first image sequence; step S13, determining the target position of the key point in each first image in the first image; step S14, determining perfusion function indicators of the physiological region and relative values of the perfusion function indicators on both sides of the midline according to the first image sequence and the target position.
In one possible implementation manner, performing a first pre-processing on a plurality of images to be processed in an image sequence to be processed to obtain a first image sequence includes: determining a reference layer of a plurality of images to be processed in the image sequence to be processed; according to the reference layers of the multiple images to be processed and the shapes of the areas to be registered in the reference layers, carrying out image registration on the multiple images to be processed to obtain multiple second images; performing image registration based on mutual information among the plurality of second images to obtain a plurality of third images; and correcting the time corresponding to the third images according to the average time interval between the acquisition moments of the images to be processed in the image sequence to be processed to obtain the first image sequence.
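The mutual-information registration sub-step could be realized, for example, with SimpleITK; the embodiment does not name a library, so everything below (rigid Euler3D transform, Mattes metric, optimizer settings) is an illustrative assumption rather than the claimed implementation:

```python
import SimpleITK as sitk

def register_mutual_information(fixed, moving):
    """Rigidly align `moving` to `fixed` (both sitk.Image, e.g. from
    sitk.ReadImage) by maximizing Mattes mutual information."""
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=32)
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0, minStep=1e-4, numberOfIterations=200)
    reg.SetInterpolator(sitk.sitkLinear)
    initial = sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)
    reg.SetInitialTransform(initial, inPlace=False)
    transform = reg.Execute(fixed, moving)
    # Resample the moving image into the fixed image's grid.
    return sitk.Resample(moving, fixed, transform,
                         sitk.sitkLinear, 0.0, moving.GetPixelID())
```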
In one possible implementation, extracting the central line of each first image in the first image sequence includes: obtaining a target area in each first image; determining the center of gravity of the target area; determining a plurality of dividing lines of the target area according to the center of gravity; determining the mirror-image similarity of the target area with respect to each dividing line; and determining the dividing line with the highest mirror-image similarity as the central line of the first image.
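A sketch of this centerline search: compute the center of gravity of the target area, test dividing lines through it over a range of angles, and keep the one with the highest mirror similarity. The angle range, angular step, and the sum-of-absolute-differences similarity measure are assumptions of the sketch:

```python
import numpy as np
from scipy import ndimage

def best_midline_angle(mask, angles_deg=np.arange(-15.0, 15.5, 0.5)):
    """Among dividing lines through the center of gravity, return the
    rotation angle whose vertical line maximizes mirror similarity."""
    cy, cx = ndimage.center_of_mass(mask)
    # Shift so the center of gravity sits at the image center.
    shifted = ndimage.shift(mask.astype(float),
                            (mask.shape[0] / 2 - cy, mask.shape[1] / 2 - cx))
    best_angle, best_score = 0.0, -np.inf
    for a in angles_deg:
        rot = ndimage.rotate(shifted, a, reshape=False)
        # Mirror similarity: agreement between image and its left-right flip.
        score = -np.abs(rot - np.flip(rot, axis=1)).sum()
        if score > best_score:
            best_angle, best_score = a, score
    return best_angle
```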
In one possible implementation, determining the target position of the keypoint in each first image in the first image includes: performing second preprocessing on the first image to obtain a fourth image; inputting the fourth image into a key point detection network for processing, and determining candidate positions of the key points in the first image; and screening the candidate positions according to the pixel values of the candidate positions in the plurality of first images to obtain the target position.
In a possible implementation manner, performing second preprocessing on the first image to obtain a fourth image includes: performing fusion processing on the plurality of first images to obtain a fifth image; and screening out the area where the physiological area is located in the fifth image to obtain the fourth image.
In one possible implementation, determining the target position of the keypoint in each first image in the first image includes: determining a target layer in the first image; performing superposition processing on a target layer in the plurality of first images to obtain a superposed image; and determining the target position according to the brightness characteristics of a plurality of positions in the superposed image.
In one possible implementation, determining a perfusion function indicator of the physiological region and a relative value of the perfusion function indicators on both sides of the midline according to the first image sequence and the target position includes: obtaining perfusion function indexes of a plurality of positions in the first image according to the pixel values of the plurality of first images in the first image sequence and the target position; and determining relative values of the perfusion function indexes of the areas on two sides of the central line according to the perfusion function indexes of a plurality of positions in the first image and the central line of the first image.
In one possible implementation, the perfusion function indicator includes at least one of contrast agent arrival delay, cerebral blood flow, cerebral blood volume, mean transit time, and peak arrival time.
In one possible implementation, obtaining perfusion function indicators of a plurality of positions in the first image according to the pixel values of the plurality of first images in the first image sequence and the target position includes: determining contrast arrival delays for a plurality of locations of the first image from pixel values for the plurality of locations; determining cerebral blood flow at a plurality of positions of the first image according to pixel values of the plurality of positions; determining cerebral blood volumes of a plurality of positions of the plurality of first images according to pixel values of the plurality of positions and pixel values of target positions in the plurality of first images; determining an average transit time for the plurality of locations based on the cerebral blood flow and the cerebral blood volume; determining the time to peak of the plurality of positions according to the arrival delay of the contrast agent at the plurality of positions and the arrival delay of the contrast agent at the target position.
It is understood that the above-mentioned method embodiments of the present disclosure can be combined with one another to form combined embodiments without departing from the principles and logic; for brevity, details are not repeated in the present disclosure. Those skilled in the art will appreciate that, in the above methods of the specific embodiments, the specific order of execution of the steps is determined by their functions and possible inherent logic.
In addition, the present disclosure also provides a CT perfusion function map quantitative parameter processing apparatus, an electronic device, a computer-readable storage medium, and a program, each of which can be used to implement any CT perfusion function map quantitative parameter processing method provided by the present disclosure; for the corresponding technical solutions and descriptions, reference may be made to the method section, and details are not repeated here.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-mentioned method. The computer readable storage medium may be a non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
The disclosed embodiments also provide a computer program product comprising computer readable code, when the computer readable code is run on a device, a processor in the device executes instructions for implementing the CT perfusion function map quantitative parameter processing method provided in any of the above embodiments.
The disclosed embodiments also provide another computer program product for storing computer readable instructions, which when executed, cause a computer to perform the operations of the CT perfusion function map quantitative parameter processing method provided in any of the above embodiments.
The electronic device may be provided as a terminal, server, or other form of device.
Fig. 14 illustrates a block diagram of an electronic device 800 in accordance with an embodiment of the disclosure. For example, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or a similar terminal.
Referring to fig. 14, electronic device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense an edge of a touch or slide action, but also detect a duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect the open/closed state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800; it may also detect a change in the position of the electronic device 800 or of one of its components, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in its temperature. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
Fig. 15 shows a block diagram of an electronic device 1900 according to an embodiment of the disclosure. For example, the electronic device 1900 may be provided as a server. Referring to fig. 15, electronic device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), can be personalized with the state information of the computer-readable program instructions, and this electronic circuitry may execute the computer-readable program instructions to implement aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be embodied in hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, the computer program product is embodied in a software product, such as a Software Development Kit (SDK).
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. A CT perfusion function map quantitative parameter processing device is characterized by comprising: a medical image acquisition device and a processing device,
the medical image acquisition device is used for acquiring a sequence of images to be processed, wherein the sequence of images to be processed comprises images to be processed of the same physiological area acquired at a plurality of moments, and the images to be processed comprise three-dimensional medical images;
the processing device is used for:
performing first preprocessing on a plurality of images to be processed in an image sequence to be processed to obtain a first image sequence;
extracting a central line of each first image in the first image sequence;
determining target positions of key points in each first image in the first images, wherein the key points comprise artery points and vein points, and the target positions of the key points in the first images comprise the target positions of the artery points and the vein points;
determining perfusion function indexes of the physiological area and relative values of the perfusion function indexes on two sides of the midline according to the first image sequence and the target position;
the method for obtaining the first image sequence by performing the first preprocessing on a plurality of images to be processed in the image sequence to be processed comprises the following steps:
determining a reference layer of a plurality of images to be processed in the image sequence to be processed;
according to the reference layers of the multiple images to be processed and the shapes of the areas to be registered in the reference layers, carrying out image registration on the multiple images to be processed to obtain multiple second images;
performing image registration based on mutual information among the plurality of second images to obtain a plurality of third images;
correcting the time corresponding to the third images according to the average time interval between the acquisition moments of the images to be processed in the image sequence to be processed to obtain the first image sequence;
determining a perfusion function indicator of the physiological region and relative values of the perfusion function indicators on both sides of the midline according to the first image sequence and the target position, including:
obtaining perfusion function indexes of a plurality of positions in the first image according to the pixel values of the plurality of first images in the first image sequence and the target position;
and determining relative values of the perfusion function indexes of the areas on two sides of the central line according to the perfusion function indexes of a plurality of positions in the first image and the central line of the first image.
2. The apparatus of claim 1, wherein extracting the centerline of each first image in the first sequence of images comprises:
obtaining a target area in each first image;
determining a center of gravity of the target region;
determining a plurality of dividing lines of the target area according to the gravity center;
respectively determining the mirror image similarity of the target area relative to each dividing line;
and determining the dividing line with the highest mirror image similarity as the middle line of the first image.
3. The apparatus of claim 1, wherein determining the target location of the keypoint in each first image in the first image comprises:
performing second preprocessing on the first image to obtain a fourth image;
inputting the fourth image into a key point detection network for processing, and determining candidate positions of the key points in the first image;
and screening the candidate positions according to the pixel values of the candidate positions in the plurality of first images to obtain the target position.
4. The apparatus of claim 3, wherein second pre-processing the first image to obtain a fourth image comprises:
performing fusion processing on the plurality of first images to obtain a fifth image;
and screening out the area where the physiological area is located in the fifth image to obtain the fourth image.
5. The apparatus of claim 1, wherein determining the target location of the keypoint in each first image in the first image comprises:
determining a target layer in the first image;
performing superposition processing on a target layer in the plurality of first images to obtain a superposed image;
and determining the target position according to the brightness characteristics of a plurality of positions in the superposed image.
6. The apparatus of claim 1, wherein the perfusion function index includes at least one of contrast arrival delay, cerebral blood flow, cerebral blood volume, mean transit time, and peak arrival time.
7. The apparatus of claim 6, wherein obtaining perfusion function indicators for a plurality of locations in the first image from pixel values of the plurality of first images in the first sequence of images and the target location comprises:
determining contrast arrival delays for a plurality of locations of the first image from pixel values for the plurality of locations;
determining cerebral blood flow at a plurality of positions of the first image according to pixel values of the plurality of positions;
determining cerebral blood volumes of a plurality of positions of the plurality of first images according to pixel values of the plurality of positions and pixel values of target positions in the plurality of first images;
determining an average transit time for the plurality of locations based on the cerebral blood flow and the cerebral blood volume;
determining the time to peak of the plurality of positions according to the arrival delay of the contrast agent at the plurality of positions and the arrival delay of the contrast agent at the target position.
8. A CT perfusion function map quantitative parameter processing method is characterized by comprising the following steps:
performing first preprocessing on a plurality of images to be processed in an image sequence to be processed to obtain a first image sequence, wherein the images to be processed comprise three-dimensional medical images, and the image sequence to be processed comprises the three-dimensional medical images of the same physiological area obtained at a plurality of moments;
extracting a central line of each first image in the first image sequence;
determining target positions of key points in each first image in the first images, wherein the key points comprise artery points and vein points, and the target positions of the key points in the first images comprise the target positions of the artery points and the vein points;
determining a perfusion function index of the physiological region according to the first image sequence and the target position, and determining relative values of the perfusion function indexes of the physiological region on two sides of a central line according to the perfusion function index and the central line of the first image;
the method for obtaining the first image sequence by performing the first preprocessing on a plurality of images to be processed in the image sequence to be processed comprises the following steps:
determining a reference layer of a plurality of images to be processed in the image sequence to be processed;
according to the reference layers of the multiple images to be processed and the shapes of the areas to be registered in the reference layers, carrying out image registration on the multiple images to be processed to obtain multiple second images;
performing image registration based on mutual information among the plurality of second images to obtain a plurality of third images;
correcting the time corresponding to the third images according to the average time interval between the acquisition moments of the images to be processed in the image sequence to be processed to obtain the first image sequence;
determining a perfusion function index of the physiological region according to the first image sequence and the target position, and determining relative values of perfusion function indexes of the physiological region on two sides of a central line according to the perfusion function index and the central line of the first image, including:
obtaining perfusion function indexes of a plurality of positions in the first image according to the pixel values of the plurality of first images in the first image sequence and the target position;
and determining relative values of the perfusion function indexes of the areas on two sides of the central line according to the perfusion function indexes of a plurality of positions in the first image and the central line of the first image.
9. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the memory-stored instructions to perform the method of claim 8.
10. A computer readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the method of claim 8.
CN202110266924.6A 2021-03-11 2021-03-11 CT perfusion function map quantitative parameter processing equipment and method Active CN112862916B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110266924.6A CN112862916B (en) 2021-03-11 2021-03-11 CT perfusion function map quantitative parameter processing equipment and method

Publications (2)

Publication Number Publication Date
CN112862916A CN112862916A (en) 2021-05-28
CN112862916B true CN112862916B (en) 2021-09-10

Family

ID=75994161

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110266924.6A Active CN112862916B (en) 2021-03-11 2021-03-11 CT perfusion function map quantitative parameter processing equipment and method

Country Status (1)

Country Link
CN (1) CN112862916B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113393433B (en) * 2021-06-10 2022-03-01 北京安德医智科技有限公司 Universal medical image perfusion parameter prediction method and device
CN113628207B (en) * 2021-08-30 2023-04-07 脑玺(苏州)智能科技有限公司 Image area segmentation method, device, equipment and storage medium
CN114638878B (en) * 2022-03-18 2022-11-11 北京安德医智科技有限公司 Two-dimensional echocardiogram pipe diameter detection method and device based on deep learning
CN117274218A (en) * 2023-10-09 2023-12-22 首都医科大学附属北京天坛医院 Blood vessel key point detection method, device and medium based on cerebral perfusion imaging

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5322548B2 (en) * 2008-09-17 2013-10-23 株式会社東芝 X-ray CT apparatus, medical image processing apparatus, and medical image processing program
AU2016379175A1 (en) * 2015-12-21 2018-07-05 The Regents Of The University Of California Perfusion digital subtraction angiography
CN105809670B (en) * 2016-02-29 2019-07-19 上海联影医疗科技有限公司 Perfusion analysis method

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005052648A (en) * 2003-08-04 2005-03-03 Siemens Ag Automatic calibration method of perfusion parameter image
WO2013156222A2 (en) * 2012-04-16 2013-10-24 Centre Hospitalier Universitaire Vaudois (Chuv) Method for generating perfusion images
CN106485706A (en) * 2012-11-23 2017-03-08 上海联影医疗科技有限公司 The post processing of image method of CT liver perfusion and CT liver perfusion method
WO2017192629A1 (en) * 2016-05-02 2017-11-09 The Regents Of The University Of California System and method for estimating perfusion parameters using medical imaging
CN109410216A (en) * 2018-09-14 2019-03-01 北京市商汤科技开发有限公司 A kind of cerebral arterial thrombosis image region segmentation method and device
CN111243011A (en) * 2018-11-29 2020-06-05 北京市商汤科技开发有限公司 Key point detection method and device, electronic equipment and storage medium
CN109658401A (en) * 2018-12-14 2019-04-19 上海商汤智能科技有限公司 Image processing method and device, electronic equipment and storage medium
WO2020154807A1 (en) * 2019-01-29 2020-08-06 Uti Limited Partnership System and method for generating perfusion functional maps from temporally resolved helical computed tomographic images
CN111402228A (en) * 2020-03-13 2020-07-10 腾讯科技(深圳)有限公司 Image detection method, device and computer readable storage medium
CN111583209A (en) * 2020-04-29 2020-08-25 上海杏脉信息科技有限公司 Brain perfusion image feature point selection method, medium and electronic equipment

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
4D-CTA evaluation of the correlation between collateral circulation and CTP perfusion parameters in patients with acute ischemic stroke; Chen Juhui et al.; Journal of Clinical Radiology; 2020-02-20 (No. 02); 44-49 *
Associations of Aminotransferases with Adverse Outcomes after Acute Ischemic Stroke: Results from China National Stroke Registry; Zong, L et al.; ResearchGate; 2020-12-31; 1-9 *
Quantitative analysis and study of CT brain tumor perfusion characteristic parameters; Zhong Ling et al.; Chinese Journal of Medical Physics; 2010-01-15 (No. 01); 48-52 *
Ischemic lesion typing on computed tomography perfusion and computed tomography angiography in hyperacute ischemic stroke: a preliminary study; Jing Xue et al.; Neurological Research; 2008-12-31; 334-337 *
Image registration of cardiac perfusion magnetic resonance imaging; He Jinqiang et al.; Chinese Journal of Biomedical Engineering; 2005-04-30 (No. 02); 71-74 *
Principles of magnetic resonance perfusion imaging analysis and its value in predicting and evaluating stroke onset; Zhang Yumei et al.; Chinese Journal of Clinical Rehabilitation; 2006-05-21 (No. 20); 139-141 *

Also Published As

Publication number Publication date
CN112862916A (en) 2021-05-28

Similar Documents

Publication Publication Date Title
CN112862916B (en) CT perfusion function map quantitative parameter processing equipment and method
CN109829920B (en) Image processing method and device, electronic equipment and storage medium
CN110569854B (en) Image processing method and device, electronic equipment and storage medium
US10810438B2 (en) Setting apparatus, output method, and non-transitory computer-readable storage medium
KR101694643B1 (en) Method, apparatus, device, program, and recording medium for image segmentation
CN110647834A (en) Human face and human hand correlation detection method and device, electronic equipment and storage medium
CN113012166A (en) Intracranial aneurysm segmentation method and device, electronic device, and storage medium
CN112967291B (en) Image processing method and device, electronic equipment and storage medium
CN107944367B (en) Face key point detection method and device
CN112115894B (en) Training method and device of hand key point detection model and electronic equipment
CN113222038B (en) Breast lesion classification and positioning method and device based on nuclear magnetic image
CN109902725A (en) Mobile mesh object detection method, device and electronic equipment and storage medium
CN110211134B (en) Image segmentation method and device, electronic equipment and storage medium
CN113034491B (en) Coronary calcified plaque detection method and device
CN111724364B (en) Method and device based on lung lobes and trachea trees, electronic equipment and storage medium
CN111860373B (en) Target detection method and device, electronic equipment and storage medium
CN111724361B (en) Method and device for displaying focus in real time, electronic equipment and storage medium
CN111640114B (en) Image processing method and device
CN112184787A (en) Image registration method and device, electronic equipment and storage medium
CN113160947A (en) Medical image display method and device, electronic equipment and storage medium
CN113469948A (en) Left ventricle segment identification method and device, electronic equipment and storage medium
WO2019090734A1 (en) Photographing method and apparatus, mobile terminal, and computer readable storage medium
CN111798498A (en) Image processing method and device, electronic equipment and storage medium
CN106469446B (en) Depth image segmentation method and segmentation device
CN114387436B (en) Wall coronary artery detection method and device, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant