CN111105404A - Method and system for extracting target position based on brain image data - Google Patents


Info

Publication number
CN111105404A
CN111105404A (application CN201911345441.4A)
Authority
CN
China
Prior art keywords
image
artery
vein
point
region
Prior art date
Legal status
Granted
Application number
CN201911345441.4A
Other languages
Chinese (zh)
Other versions
CN111105404B (en)
Inventor
姚洋洋
宋凌
金海岚
印胤
杨光明
秦岚
Current Assignee
Qianglian Zhichuang Beijing Technology Co ltd
Original Assignee
Qianglian Zhichuang Beijing Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Qianglian Zhichuang Beijing Technology Co ltd
Priority to CN201911345441.4A
Publication of CN111105404A
Application granted
Publication of CN111105404B
Current legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/0012: Biomedical image inspection
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/11: Region-based segmentation
    • G06T 7/136: Segmentation; Edge detection involving thresholding
    • G06T 7/187: Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10072: Tomographic images
    • G06T 2207/10081: Computed x-ray tomography [CT]
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30004: Biomedical image processing
    • G06T 2207/30101: Blood vessel; Artery; Vein; Vascular
    • G06T 2207/30104: Vascular flow; Blood flow; Perfusion

Abstract

The embodiments of this specification disclose a method and a system for extracting a target position based on craniocerebral image data, addressing the problems of the prior art that manual intervention is needed, processing is slow, and consistent results cannot be guaranteed. The method comprises the following steps: acquiring craniocerebral image data to be processed; extracting a blood vessel region from the craniocerebral image data to be processed to obtain a first image; determining the region where the target position in the first image is located to obtain a second image; and determining the target position based on the second image to obtain a third image. The method and system provided by the embodiments of this specification can eliminate or reduce judgment differences caused by human factors, reduce interpretation errors, save time, and enable rapid, automatic acquisition of the arterial input point and/or venous output point.

Description

Method and system for extracting target position based on brain image data
Technical Field
The present disclosure relates to the field of medical imaging and computer technologies, and in particular, to a method and a system for extracting a target location based on brain image data.
Background
CT perfusion imaging (CTP) is an important means for the rapid diagnosis of acute cerebral ischemia and is commonly used for its early diagnosis and for the diagnosis of transient ischemic attack. CTP can display the focus of an ischemic region at an early stage and can delineate the ischemic penumbra of the brain, so that thrombolytic treatment can be carried out actively within the effective perfusion time window and the neuronal function of the ischemic penumbra can be rescued; it is therefore of important clinical significance for the timely diagnosis and treatment guidance of cerebral ischemia. CTP images allow the extent and volume of a cerebral ischemic focus to be judged accurately, which helps predict the prognosis of cerebral ischemia and evaluate the effect of its treatment. The positions of the arterial input point and/or venous output point directly affect CT perfusion parameters such as cerebral blood flow (CBF), cerebral blood volume (CBV), time to peak (TTP), and mean transit time (MTT), and thereby affect the diagnosis of acute cerebral ischemia, so determining the positions of the arterial input point and/or venous output point is important.
In the prior art, determining the position of the arterial input point and/or venous output point requires first extracting the blood vessels from the CT perfusion image data and then extracting the arterial input point and/or venous output point from them. Existing methods require manual intervention, are slow, and cannot guarantee consistent results.
Therefore, a new method is needed that eliminates or reduces judgment differences caused by human factors, reduces interpretation errors, saves time, and enables rapid, automatic acquisition of the arterial input point and/or venous output point.
Disclosure of Invention
The embodiments of this specification provide a method and a system for extracting a target position based on craniocerebral image data, which address the technical problems of the prior art, namely the need for manual intervention, low speed, and inconsistent results: they eliminate or reduce judgment differences caused by human factors, reduce interpretation errors, save time, and enable rapid, automatic acquisition of the arterial input point and/or venous output point.
The embodiment of the specification provides a method for extracting a target position based on brain image data, which comprises the following steps:
acquiring craniocerebral image data to be processed;
extracting a blood vessel region from the to-be-processed craniocerebral image data to obtain a first image;
determining a region where a target position in the first image is located, and obtaining a second image, wherein the region where the target position is located comprises an artery position region and/or a vein position region;
determining a target position based on the second image to obtain a third image, wherein the target position comprises an arterial input point and/or a venous output point.
Preferably, determining the region where the target position in the first image is located specifically comprises:
determining the region in which the target position is located in the first image based on the symmetry axis of the craniocerebral image and/or the anatomical position of the blood vessel.
Preferably, determining the region in which the target position is located in the first image based on the symmetry axis of the craniocerebral image and/or the anatomical position of the blood vessel specifically comprises:
in the first image, taking as the artery position region the closed region bounded by the cranium and a straight line that passes through a preset point on the craniocerebral symmetry axis and is perpendicular to that axis;
in the first image, taking the region other than the artery position region as the vein position region.
Preferably, determining the target position based on the second image specifically comprises:
determining the target position from the second image according to a preset threshold, wherein the preset threshold comprises an artery preset threshold and/or a vein preset threshold.
Preferably, determining the target position from the second image according to the preset threshold specifically comprises:
taking the maximum pixel value in the artery position region as the upper limit of the artery preset threshold, taking 0.5 times that maximum pixel value as the lower limit of the artery preset threshold, and taking the pixels whose values fall within the artery preset threshold range as the artery point set;
and/or
taking the maximum pixel value in the vein position region as the upper limit of the vein preset threshold, taking 0.8 times that maximum pixel value as the lower limit of the vein preset threshold, and taking the pixels whose values fall within the vein preset threshold range as the vein point set;
extracting from the artery point set, as the arterial input point, the point whose corresponding time-density curve has the earliest peak onset time;
and/or
extracting from the vein point set, as the venous output point, the point whose corresponding time-density curve has the latest peak onset time.
Preferably, extracting a blood vessel region from the craniocerebral image data to be processed to obtain the first image specifically comprises:
extracting the blood vessels by a binarization threshold from the craniocerebral image data to be processed to obtain the first image.
An embodiment of the present specification provides a system for extracting a target position based on craniocerebral image data, the system comprising:
an acquisition module, configured to acquire craniocerebral image data to be processed;
a segmentation module, configured to extract a blood vessel region from the craniocerebral image data to be processed to obtain a first image; and
an extraction module, configured to determine a region where a target position in the first image is located to obtain a second image, wherein the region where the target position is located comprises an artery position region and/or a vein position region, and to determine the target position based on the second image to obtain a third image, wherein the target position comprises an arterial input point and/or a venous output point.
Preferably, determining the region where the target position in the first image is located specifically comprises:
determining the region in which the target position is located in the first image based on the symmetry axis of the craniocerebral image and/or the anatomical position of the blood vessel.
Preferably, determining the region in which the target position is located in the first image based on the symmetry axis of the craniocerebral image and/or the anatomical position of the blood vessel specifically comprises:
in the first image, taking as the artery position region the closed region bounded by the cranium and a straight line that passes through a preset point on the craniocerebral symmetry axis and is perpendicular to that axis;
in the first image, taking the region other than the artery position region as the vein position region.
Preferably, determining the target position based on the second image specifically comprises:
determining the target position from the second image according to a preset threshold, wherein the preset threshold comprises an artery preset threshold and/or a vein preset threshold.
Preferably, determining the target position from the second image according to the preset threshold specifically comprises:
taking the maximum pixel value in the artery position region as the upper limit of the artery preset threshold, taking 0.5 times that maximum pixel value as the lower limit of the artery preset threshold, and taking the pixels whose values fall within the artery preset threshold range as the artery point set;
and/or
taking the maximum pixel value in the vein position region as the upper limit of the vein preset threshold, taking 0.8 times that maximum pixel value as the lower limit of the vein preset threshold, and taking the pixels whose values fall within the vein preset threshold range as the vein point set;
extracting from the artery point set, as the arterial input point, the point whose corresponding time-density curve has the earliest peak onset time;
and/or
extracting from the vein point set, as the venous output point, the point whose corresponding time-density curve has the latest peak onset time.
Preferably, extracting a blood vessel region from the craniocerebral image data to be processed to obtain the first image specifically comprises:
extracting the blood vessels by a binarization threshold from the craniocerebral image data to be processed to obtain the first image.
The embodiment of the specification adopts at least one technical scheme which can achieve the following beneficial effects:
In the embodiments of this specification, a first image is obtained by extracting a blood vessel region from the craniocerebral image data to be processed; the region where the target position is located is then determined in the first image to obtain a second image; and the target position is determined from that region in the second image to obtain a third image. With the method provided by the embodiments of this specification, judgment differences caused by human factors can be eliminated or reduced, interpretation errors are reduced, time is saved, and the arterial input point and/or venous output point can be acquired rapidly and automatically.
Drawings
In order to more clearly illustrate the embodiments of the present specification or the technical solutions in the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. The drawings in the following description are merely some of the embodiments described in the present specification; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a method for extracting a target location based on brain image data according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of obtaining a first image according to an embodiment of the present application;
fig. 3 is a schematic diagram of acquiring a target area according to an embodiment of the present application;
fig. 4 is a schematic diagram of an extraction system for a target location based on brain image data according to an embodiment of the present disclosure.
Detailed Description
To enable those skilled in the art to better understand the technical solutions in the present specification, the technical solutions in the embodiments of the present specification are described below clearly and completely with reference to the drawings of the embodiments. The described embodiments are only a part of the embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art based on the embodiments of the present specification without creative effort shall fall within the scope of protection of the present application.
When cerebral ischemia occurs, the local blood flow is reduced, and the blood flow rate, blood volume, transit time, and so on change to different degrees. Cerebral ischemia first manifests as functional impairment and later as morphological change. Because the degree of brain tissue damage is closely related to the duration of ischemia, diagnosing the condition, treating it, and restoring the cerebral blood supply as early as possible can significantly improve patient outcomes.
Conventional CT scanning for acute cerebral ischemia is generally suitable only for diagnosing lesions more than 24 hours after the onset of ischemia; MRI diffusion-weighted imaging (DWI) can show lesions 4 hours after onset; CT perfusion imaging (CTP) may show lesions 30 minutes after onset. CTP therefore serves as an important means of rapidly diagnosing acute cerebral ischemia: it can display the focus of the ischemic region at the early stage of onset and delineate the ischemic penumbra of the brain, so that thrombolytic treatment can be carried out actively within the effective perfusion time window and the neuronal function of the ischemic penumbra rescued, which is of important clinical significance.
The positions of the arterial input point and/or venous output point directly influence CT perfusion parameters such as cerebral blood flow, cerebral blood volume, time to peak, and mean transit time, and thereby the diagnosis of acute cerebral ischemia, so determining these positions is of great significance.
Fig. 1 is a schematic flowchart of a method for extracting a target location based on brain image data according to an embodiment of the present disclosure. The method specifically comprises the following steps:
step S101: and acquiring the craniocerebral image data to be processed.
In the present application, the craniocerebral image data to be processed is CT perfusion image data, namely a plurality of CT sequence images acquired at equal time intervals after injection of a contrast agent.
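For illustration only, the following sketch shows one way such a series might be organized into a four-dimensional array of shape (time, slice, row, column). The directory layout, the grouping of slices by AcquisitionTime, and the use of the pydicom library are assumptions made for this example; the embodiments do not prescribe any particular storage format or toolkit.

```python
from collections import defaultdict
from pathlib import Path

import numpy as np
import pydicom


def load_ctp_series(ctp_dir):
    """Group one DICOM file per slice per time point into a (T, Z, Y, X) array in HU."""
    frames = defaultdict(list)  # acquisition time -> [(slice z position, slice pixels)]
    for path in Path(ctp_dir).glob("*.dcm"):
        ds = pydicom.dcmread(path)
        hu = ds.pixel_array * float(ds.RescaleSlope) + float(ds.RescaleIntercept)
        frames[ds.AcquisitionTime].append((float(ds.ImagePositionPatient[2]), hu.astype(np.float32)))
    volumes = []
    for t in sorted(frames):                                  # equal time intervals after contrast injection
        slices = sorted(frames[t], key=lambda pair: pair[0])  # order slices by position along the body axis
        volumes.append(np.stack([pixels for _, pixels in slices]))
    return np.stack(volumes)                                  # shape (T, Z, Y, X)
```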
Step S103: extracting a blood vessel region from the craniocerebral image data to be processed to obtain a first image.
When the arterial input point and/or venous output point are to be determined from the craniocerebral image data to be processed, the maximum intensity projection image corresponding to the craniocerebral perfusion image data is first obtained as a reference image. Fig. 2 is a schematic flowchart of the process for obtaining the first image according to an embodiment of the present application, which specifically includes:
step S201: and acquiring a maximum density projection image of the to-be-processed craniocerebral image data.
In an embodiment of the present application, the maximum value over time of each pixel in the CT sequence images of the craniocerebral perfusion image data is projected to obtain the maximum intensity projection image corresponding to the craniocerebral perfusion image data, which is used to extract the blood vessel region in the subsequent steps. Each pixel of the maximum intensity projection image corresponds to a time-density curve (TDC) on the CT sequence images, and its value is the maximum of that curve, reflecting the maximum enhancement of the corresponding location in the brain tissue.
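As a minimal sketch of this projection step, assuming `ctp` is the (T, Z, Y, X) array from the previous example, each voxel of the maximum intensity projection is simply the maximum of that voxel's time-density curve:

```python
import numpy as np


def maximum_intensity_projection(ctp):
    """Collapse the time axis: each voxel keeps the peak value of its time-density curve."""
    return ctp.max(axis=0)      # shape (Z, Y, X)


def time_density_curve(ctp, z, y, x):
    """The TDC of a single voxel: its value at every acquisition time."""
    return ctp[:, z, y, x]      # shape (T,)
```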
The maximum intensity projection image obtained in this way contains, besides the brain parenchyma, irrelevant structures such as the skull, skin, and cerebrospinal fluid. These irrelevant structures occupy a large proportion of the image and would increase the amount of subsequent computation as well as the storage and runtime overhead. Therefore, as a more preferable solution of the present application, the maximum intensity projection image is subjected to bone removal, filtering, and similar operations to remove this interference.
In one embodiment of the present application, the bone-removed maximum intensity projection image is obtained by processing the maximum intensity projection image with threshold segmentation and seed-point region growing. The bone-removed projection image then contains only the gray matter, white matter, and blood vessels of the brain tissue. Furthermore, the bone-removed image can be filtered to remove interference, and the filtered maximum intensity projection image is used to extract the blood vessel region in the subsequent steps.
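A rough sketch of the bone removal and filtering is shown below. The embodiment names the techniques (threshold segmentation, seed-point region growing, filtering) but not their parameters, so the density thresholds, the Gaussian smoothing, and the use of connected-component labelling in place of an explicit seed-growing loop are all assumptions of this example.

```python
import numpy as np
from scipy import ndimage


def remove_bone_and_filter(mip, bone_threshold=300.0, air_threshold=-200.0, sigma=1.0):
    """Mask out skull and air from the MIP, keep the largest remaining component, and smooth it."""
    candidate = (mip > air_threshold) & (mip < bone_threshold)  # threshold segmentation: neither air nor bone
    labels, n = ndimage.label(candidate)                        # connected components stand in for seed growing
    if n == 0:
        return np.zeros_like(mip), np.zeros(mip.shape, dtype=bool)
    sizes = ndimage.sum(candidate, labels, index=range(1, n + 1))
    brain = labels == (int(np.argmax(sizes)) + 1)               # largest component: parenchyma plus vessels
    brain = ndimage.binary_fill_holes(brain)                    # recover bright vessels excluded by the threshold
    filtered = ndimage.gaussian_filter(mip * brain, sigma=sigma)
    return filtered * brain, brain                              # filtered MIP restricted to the brain mask
```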
In an embodiment of the present application, the CT sequence image in the craniocerebral perfusion image data is subjected to bone removal, filtering, and the like, and after interference is removed, maximum intensity projection is performed to obtain a maximum intensity projection image, which is used for extracting a blood vessel region in subsequent steps.
Step S203: extracting a blood vessel region from the maximum intensity projection image by a binarization threshold method to obtain the first image.
In an embodiment of the present application, the binarization threshold is set to 100, and the pixels whose values exceed this threshold are extracted from the maximum intensity projection image to obtain the first image.
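A short sketch of this binarization, assuming `mip_filtered` is the bone-removed, filtered maximum intensity projection from the previous step and using the threshold of 100 stated in this embodiment:

```python
import numpy as np

VESSEL_THRESHOLD = 100  # binarization threshold used in this embodiment


def extract_vessel_region(mip_filtered, threshold=VESSEL_THRESHOLD):
    """First image: keep only the voxels of the filtered MIP that exceed the threshold."""
    vessel_mask = mip_filtered > threshold
    first_image = np.where(vessel_mask, mip_filtered, 0.0)
    return first_image, vessel_mask
```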
Step S105: determining a region where a target position in the first image is located, and obtaining a second image, wherein the region where the target position is located comprises an artery position region and/or a vein position region.
In the present application, the region where the target position is located can be obtained from the symmetry axis of the craniocerebral image and/or the anatomical position of the blood vessels. Specifically, in the first image, the closed region bounded by the cranium and a straight line that passes through a preset point on the craniocerebral symmetry axis and is perpendicular to that axis is taken as the artery position region, and the region of the first image outside the artery position region is taken as the vein position region. Fig. 3 is a schematic diagram of obtaining the target region according to an embodiment of the present application. In the first image, the point located 3/5 of the way along the craniocerebral symmetry axis, measured from the intersection of the axis with the skull, is taken as the preset point; the straight line that passes through the preset point and is perpendicular to the symmetry axis intersects the first image and, together with the cranium, encloses a closed region, which is the artery position region. The region of the first image other than the artery position region is the vein position region. In the present application, the craniocerebral symmetry axis may be extracted by methods including, but not limited to, a registration algorithm based on image flipping and a deep-learning-based method.
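The geometry can be sketched as follows for a single axial slice. The example assumes the symmetry axis has already been extracted and runs along image column `axis_col` from the front of the head (row 0) towards the back, and it approximates the closed region bounded by the perpendicular line and the cranium as everything anterior to that line; `brain_mask` marks the intracranial pixels of the slice. These are simplifications made for illustration, not requirements of the embodiment.

```python
import numpy as np


def artery_vein_regions(vessel_mask, brain_mask, axis_col, fraction=3 / 5):
    """Split one slice's vessel mask into the artery position region and the vein position region."""
    axis_rows = np.where(brain_mask[:, axis_col])[0]          # symmetry axis restricted to the cranium
    front, back = axis_rows.min(), axis_rows.max()
    preset_row = front + fraction * (back - front)            # preset point at 3/5 of the axis length
    anterior = np.arange(brain_mask.shape[0])[:, None] <= preset_row
    artery_region = vessel_mask & brain_mask & anterior       # enclosed by the perpendicular line and the cranium
    vein_region = vessel_mask & brain_mask & ~anterior        # the remainder of the first image
    return artery_region, vein_region
```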
Step S107: determining a target position based on the second image to obtain a third image, wherein the target position comprises an arterial input point and/or a venous output point.
Because the contrast agent travels fastest in the input artery during CT perfusion, the peak of the input artery's time-density curve appears earlier than the peaks of the other arteries, and its maximum enhancement is the largest among the arteries. The maximum enhancement of the output vein's time-density curve far exceeds that of the input artery and occurs later. On this basis, the arterial input point and the venous output point can be obtained. Specifically, the target position is determined from the second image according to a preset threshold, wherein the preset threshold comprises an artery preset threshold and/or a vein preset threshold.
In one embodiment of the present application, the arterial input point and/or the venous output point are obtained through the following steps (a code sketch follows the list):
taking the maximum pixel value in the artery position region as the upper limit of the artery preset threshold, taking 0.5 times that maximum pixel value as the lower limit of the artery preset threshold, and taking the pixels whose values fall within the artery preset threshold range as the artery point set;
and/or
taking the maximum pixel value in the vein position region as the upper limit of the vein preset threshold, taking 0.8 times that maximum pixel value as the lower limit of the vein preset threshold, and taking the pixels whose values fall within the vein preset threshold range as the vein point set;
extracting from the artery point set, as the arterial input point, the point whose corresponding time-density curve has the earliest peak onset time;
and/or
extracting from the vein point set, as the venous output point, the point whose corresponding time-density curve has the latest peak onset time.
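Putting the thresholds and the time-density criterion together, the sketch below selects both points on a single slice. It assumes `ctp_slice` is the (T, Y, X) perfusion series of that slice, `mip_slice` its maximum intensity projection, and the region masks come from the previous step; the peak onset time is approximated by the time index of the TDC maximum, which is an assumption rather than the exact peak-start definition used in the embodiment.

```python
import numpy as np


def candidate_points(mip_slice, region_mask, lower_fraction):
    """Pixels of a region whose MIP value lies within [lower_fraction * max, max]."""
    peak = mip_slice[region_mask].max()
    in_range = region_mask & (mip_slice >= lower_fraction * peak) & (mip_slice <= peak)
    return np.argwhere(in_range)                                   # (N, 2) array of (row, col) coordinates


def select_target_points(ctp_slice, mip_slice, artery_region, vein_region):
    """Return the (row, col) of the arterial input point and of the venous output point."""
    artery_points = candidate_points(mip_slice, artery_region, 0.5)  # artery point set
    vein_points = candidate_points(mip_slice, vein_region, 0.8)      # vein point set
    artery_onsets = [int(ctp_slice[:, r, c].argmax()) for r, c in artery_points]
    vein_onsets = [int(ctp_slice[:, r, c].argmax()) for r, c in vein_points]
    arterial_input_point = tuple(artery_points[int(np.argmin(artery_onsets))])  # earliest peak
    venous_output_point = tuple(vein_points[int(np.argmax(vein_onsets))])       # latest peak
    return arterial_input_point, venous_output_point
```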
By adopting the method provided by the embodiments of the present application, judgment differences caused by human factors can be eliminated or reduced, interpretation errors are reduced, time is saved, and the arterial input point and/or venous output point can be acquired rapidly and automatically.
Based on the same idea, an embodiment of the present specification further provides a system for extracting a target position based on craniocerebral image data. Fig. 4 is a schematic diagram of this system provided by an embodiment of the present specification, and the system comprises:
an acquisition module 401, configured to acquire the craniocerebral image data to be processed;
a segmentation module 403, configured to extract a blood vessel region from the craniocerebral image data to be processed to obtain a first image; and
an extraction module 405, configured to determine the region where a target position in the first image is located to obtain a second image, wherein the region where the target position is located comprises an artery position region and/or a vein position region, and to determine the target position based on the second image to obtain a third image, wherein the target position comprises an arterial input point and/or a venous output point.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the embodiments of the apparatus, the electronic device, and the nonvolatile computer storage medium, since they are substantially similar to the embodiments of the method, the description is simple, and the relevant points can be referred to the partial description of the embodiments of the method.
The apparatus, the electronic device, the nonvolatile computer storage medium and the method provided in the embodiments of the present description correspond to each other, and therefore, the apparatus, the electronic device, and the nonvolatile computer storage medium also have similar advantageous technical effects to the corresponding method.
In the 1990s, an improvement in a technology could be clearly classified as either a hardware improvement (for example, an improvement in circuit structures such as diodes, transistors, or switches) or a software improvement (an improvement in a method flow). With the development of technology, however, many of today's improvements in method flows can be regarded as direct improvements in hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement in a method flow cannot be realized by a hardware entity module. For example, a programmable logic device (PLD), such as a field programmable gate array (FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. A designer integrates a digital system onto a PLD by programming it, without asking a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, instead of manually making integrated circuit chips, this programming is now mostly implemented with "logic compiler" software, which is similar to a software compiler used in program development, and the source code to be compiled must be written in a specific programming language called a hardware description language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); at present, VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are the most commonly used. It will also be apparent to those skilled in the art that a hardware circuit implementing a logical method flow can easily be obtained merely by slightly programming the method flow into an integrated circuit using one of the above hardware description languages.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (such as software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller; examples of such controllers include, but are not limited to, the ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320, and a memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art also know that, in addition to implementing the controller purely as computer-readable program code, the same functions can be implemented entirely by logically programming the method steps, so that the controller takes the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may therefore be regarded as a hardware component, and the means included in it for implementing various functions may also be regarded as structures within the hardware component; or the means for implementing various functions may even be regarded both as software modules for implementing the method and as structures within the hardware component.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the various elements may be implemented in the same one or more software and/or hardware implementations in implementing one or more embodiments of the present description.
As will be appreciated by one skilled in the art, the present specification embodiments may be provided as a method, system, or computer program product. Accordingly, embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
The description has been presented with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the description. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises that element.
This description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only an example of the present specification, and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (12)

1. A method for extracting a target position based on brain image data, the method comprising:
acquiring craniocerebral image data to be processed;
extracting a blood vessel region from the to-be-processed craniocerebral image data to obtain a first image;
determining a region where a target position in the first image is located, and obtaining a second image, wherein the region where the target position is located comprises an artery position region and/or a vein position region;
determining a target position based on the second image to obtain a third image, wherein the target position comprises an arterial input point and/or a venous output point.
2. The method according to claim 1, wherein determining the region where the target position in the first image is located specifically comprises:
determining the region in which the target position is located in the first image based on the symmetry axis of the craniocerebral image and/or the anatomical position of the blood vessel.
3. The method according to claim 2, wherein determining the region in which the target position is located in the first image based on the symmetry axis of the craniocerebral image and/or the anatomical position of the blood vessel specifically comprises:
in the first image, taking as the artery position region the closed region bounded by the cranium and a straight line that passes through a preset point on the craniocerebral symmetry axis and is perpendicular to that axis;
in the first image, taking the region other than the artery position region as the vein position region.
4. The method of claim 1, wherein determining the target position based on the second image specifically comprises:
determining the target position from the second image according to a preset threshold, wherein the preset threshold comprises an artery preset threshold and/or a vein preset threshold.
5. The method according to claim 4, wherein determining the target position from the second image according to the preset threshold specifically comprises:
taking the maximum pixel value in the artery position region as the upper limit of the artery preset threshold, taking 0.5 times that maximum pixel value as the lower limit of the artery preset threshold, and taking the pixels whose values fall within the artery preset threshold range as the artery point set;
and/or
taking the maximum pixel value in the vein position region as the upper limit of the vein preset threshold, taking 0.8 times that maximum pixel value as the lower limit of the vein preset threshold, and taking the pixels whose values fall within the vein preset threshold range as the vein point set;
extracting from the artery point set, as the arterial input point, the point whose corresponding time-density curve has the earliest peak onset time;
and/or
extracting from the vein point set, as the venous output point, the point whose corresponding time-density curve has the latest peak onset time.
6. The method according to claim 1, wherein extracting a blood vessel region from the craniocerebral image data to be processed to obtain the first image specifically comprises:
extracting the blood vessels by a binarization threshold from the craniocerebral image data to be processed to obtain the first image.
7. A system for extracting a target location based on brain image data, the system comprising:
an acquisition module, configured to acquire craniocerebral image data to be processed;
a segmentation module, configured to extract a blood vessel region from the craniocerebral image data to be processed to obtain a first image; and
an extraction module, configured to determine a region where a target position in the first image is located to obtain a second image, wherein the region where the target position is located comprises an artery position region and/or a vein position region, and to determine the target position based on the second image to obtain a third image, wherein the target position comprises an arterial input point and/or a venous output point.
8. The system of claim 7, wherein determining the region where the target position in the first image is located specifically comprises:
determining the region in which the target position is located in the first image based on the symmetry axis of the craniocerebral image and/or the anatomical position of the blood vessel.
9. The system of claim 8, wherein determining the region in which the target position is located in the first image based on the symmetry axis of the craniocerebral image and/or the anatomical position of the blood vessel specifically comprises:
in the first image, taking as the artery position region the closed region bounded by the cranium and a straight line that passes through a preset point on the craniocerebral symmetry axis and is perpendicular to that axis;
in the first image, taking the region other than the artery position region as the vein position region.
10. The system of claim 7, wherein determining the target position based on the second image specifically comprises:
determining the target position from the second image according to a preset threshold, wherein the preset threshold comprises an artery preset threshold and/or a vein preset threshold.
11. The system according to claim 10, wherein determining the target position from the second image according to the preset threshold specifically comprises:
taking the maximum pixel value in the artery position region as the upper limit of the artery preset threshold, taking 0.5 times that maximum pixel value as the lower limit of the artery preset threshold, and taking the pixels whose values fall within the artery preset threshold range as the artery point set;
and/or
taking the maximum pixel value in the vein position region as the upper limit of the vein preset threshold, taking 0.8 times that maximum pixel value as the lower limit of the vein preset threshold, and taking the pixels whose values fall within the vein preset threshold range as the vein point set;
extracting from the artery point set, as the arterial input point, the point whose corresponding time-density curve has the earliest peak onset time;
and/or
extracting from the vein point set, as the venous output point, the point whose corresponding time-density curve has the latest peak onset time.
12. The system according to claim 7, wherein extracting a blood vessel region from the craniocerebral image data to be processed to obtain the first image specifically comprises:
extracting the blood vessels by a binarization threshold from the craniocerebral image data to be processed to obtain the first image.
CN201911345441.4A 2019-12-24 2019-12-24 Method and system for extracting target position based on brain image data Active CN111105404B (en)

Priority Applications (1)

Application number: CN201911345441.4A (granted as CN111105404B); Priority date: 2019-12-24; Filing date: 2019-12-24; Title: Method and system for extracting target position based on brain image data

Applications Claiming Priority (1)

Application number: CN201911345441.4A (granted as CN111105404B); Priority date: 2019-12-24; Filing date: 2019-12-24; Title: Method and system for extracting target position based on brain image data

Publications (2)

CN111105404A (application publication): 2020-05-05
CN111105404B (granted publication): 2022-11-22

Family

ID=70424110

Family Applications (1)

Application number: CN201911345441.4A (Active; granted as CN111105404B); Priority date: 2019-12-24; Filing date: 2019-12-24; Title: Method and system for extracting target position based on brain image data

Country Status (1)

Country Link
CN (1) CN111105404B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8837800B1 (en) * 2011-10-28 2014-09-16 The Board Of Trustees Of The Leland Stanford Junior University Automated detection of arterial input function and/or venous output function voxels in medical imaging
US20130322718A1 (en) * 2012-06-01 2013-12-05 Yi-Hsuan Kao Method and apparatus for measurements of the brain perfusion in dynamic contrast-enhanced computed tomography images
CN109448003A (en) * 2018-10-26 2019-03-08 强联智创(北京)科技有限公司 A kind of entocranial artery blood-vessel image dividing method and system
CN109584169A (en) * 2018-10-26 2019-04-05 首都医科大学宣武医院 A kind of intercept method and system of the intracranial vessel image based on center line
CN110279417A (en) * 2019-06-25 2019-09-27 沈阳东软智能医疗科技研究院有限公司 Identify the method, device and equipment of aorta vessel

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ADRIËNNE MENDRIK et al.: "Automatic segmentation of intracranial arteries and veins in four-dimensional cerebral CT perfusion scans", Medical Physics *
YI-HSUAN KAO et al.: "Automatic measurements of arterial input and venous output functions on cerebral computed tomography perfusion images: A preliminary study", Computers in Biology and Medicine *
叶国伟 et al.: "Influence of different input arteries on cerebral CT perfusion imaging parameters in patients with internal carotid artery stenosis", Chinese Journal of Medical Imaging *
沈倩 et al.: "Influence of different monitoring vessels on the parameter values of 256-slice whole-brain CT perfusion imaging", Journal of Clinical Radiology *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111862259A (en) * 2020-07-27 2020-10-30 上海联影医疗科技有限公司 Medical perfusion image processing method and medical imaging device
CN111863265A (en) * 2020-07-27 2020-10-30 强联智创(北京)科技有限公司 Simulation method, device and equipment
CN111862259B (en) * 2020-07-27 2023-08-15 上海联影医疗科技股份有限公司 Medical perfusion image processing method and medical imaging device
CN111863265B (en) * 2020-07-27 2024-03-29 强联智创(北京)科技有限公司 Simulation method, simulation device and simulation equipment

Also Published As

Publication number Publication date
CN111105404B (en) 2022-11-22

Similar Documents

Publication Publication Date Title
CN109448003B (en) Intracranial artery blood vessel image segmentation method and system
CN109671066B (en) Cerebral infarction judging method and system based on skull CT image
CN109448004B (en) Centerline-based intracranial blood vessel image interception method and system
CN111091563B (en) Method and system for extracting target region based on brain image data
CN111127428A (en) Method and system for extracting target region based on brain image data
CN109472780B (en) Method and system for measuring morphological parameters of intracranial aneurysm image
CN111105404B (en) Method and system for extracting target position based on brain image data
CN111081378B (en) Aneurysm rupture risk assessment method and system
CN109472823B (en) Method and system for measuring morphological parameters of intracranial aneurysm image
CN109447967B (en) Method and system for segmenting intracranial aneurysm image
CN111105425A (en) Symmetry axis/symmetry plane extraction method and system based on craniocerebral image data
CN109712122B (en) Scoring method and system based on skull CT image
CN111223089B (en) Aneurysm detection method and device and computer readable storage medium
CN109377504B (en) Intracranial artery blood vessel image segmentation method and system
CN109472803B (en) Intracranial artery blood vessel segmentation method and system
CN109741339B (en) Partitioning method and system
CN112185550A (en) Typing method, device and equipment
CN111584076A (en) Aneurysm rupture risk assessment method and system
CN113205508B (en) Segmentation method, device and equipment based on image data
CN112734726B (en) Angiography typing method, angiography typing device and angiography typing equipment
CN113160165A (en) Blood vessel segmentation method, device and equipment
CN110517244B (en) Positioning method and system based on DSA image
CN112927815B (en) Method, device and equipment for predicting intracranial aneurysm information
CN109584261B (en) Method and system for segmenting intracranial aneurysm image
CN110739078B (en) Aneurysm rupture risk assessment method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant