CN116934741B - Method and device for acquiring composition and quantitative parameters of one-stop type blood vessel wall


Info

Publication number: CN116934741B (grant); published earlier as application CN116934741A
Application number: CN202311162912.4A
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: image, blood vessel, region, vessel wall, target
Legal status: Active (granted)
Inventors: 荆京, 李峥, 张喆, 陈硕, 朱万琳, 张斯�
Assignees: Beijing Lianying Intelligent Imaging Technology Research Institute; Beijing Tiantan Hospital
Application filed by Beijing Lianying Intelligent Imaging Technology Research Institute and Beijing Tiantan Hospital

Classifications

    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/0012 Image analysis; inspection of images, e.g. flaw detection; biomedical image inspection
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T7/11 Image analysis; segmentation, edge detection; region-based segmentation
    • G06T7/66 Analysis of geometric attributes; image moments or centre of gravity
    • G06T2207/10088 Indexing scheme for image analysis or image enhancement; image acquisition modality; tomographic images; magnetic resonance imaging [MRI]
    • G06T2207/30101 Indexing scheme for image analysis or image enhancement; subject of image; biomedical image processing; blood vessel, artery, vein, vascular
    • G06T2210/41 Indexing scheme for image generation or computer graphics; medical

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Medical Informatics (AREA)
  • Quality & Reliability (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The application relates to a method and a device for acquiring the composition and quantitative parameters of the blood vessel wall in a one-stop manner. The method comprises the following steps: acquiring a magnetic resonance image dataset corresponding to the head, neck and chest region; extracting a blood vessel center line from the magnetic resonance image dataset, and carrying out curved surface reconstruction of the dataset along the blood vessel center line to obtain a curved surface reconstruction image; selecting a target position on the curved surface reconstruction image to obtain a blood vessel wall segmentation result for the cross-section reconstruction image at that position; comparing the blood vessel wall segmentation results of the cross-section reconstructed images to determine tissue differences of the blood vessel wall and thereby judge its composition; determining, according to the blood vessel center line, a target area in the head, neck and chest area and the target detection image corresponding to that area; and performing region-of-interest detection based on the image characteristics of the target detection image to obtain quantitative analysis data serving as image analysis data. The method enables large-range blood vessel lesion analysis and improves the accuracy of diagnosing and treating diseases caused by lesions of the whole blood vessel.

Description

Method and device for acquiring composition and quantitative parameters of one-stop type blood vessel wall
Technical Field
The present application relates to the field of medical image processing technology, and in particular, to a method and apparatus for acquiring composition and quantitative parameters of a one-stop vascular wall.
Background
With the development of image processing technology, medical image processing has greatly advanced the assessment of vascular diseases of the head, neck and chest. By analyzing, in medical images, the overall state of the aortic arch together with the plaque of the head and neck arterial vessel walls, the etiology and pathogenesis of cerebrovascular diseases can be determined more accurately, providing an important reference for their diagnosis and treatment.
In conventional technology, the analysis sequences for intracranial arteries, carotid arteries and the aorta are limited to evaluating the degree of stenosis and have not been widely applied to plaque analysis. Existing plaque evaluation sequences and analysis software cover only small ranges of the intracranial and carotid arteries, so only the local wall state can be observed; the whole-vessel datasets of the intracranial arteries, carotid arteries and aorta cannot be combined over a large range, which prevents large-range blood vessel lesion analysis and more accurate diagnosis and treatment of diseases caused by the blood vessels as a whole.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a method, an apparatus, a computer device, a computer-readable storage medium, a computer program product, and a display system for acquiring a composition and a quantitative parameter of a wall of a one-stop blood vessel, which can realize a wide range of vascular lesion analyses and improve accuracy of diagnosis and treatment of a disease caused by an overall blood vessel.
In a first aspect, the present application provides a method for acquiring the composition and quantitative parameters of a blood vessel wall in a one-stop manner. The method comprises the following steps: acquiring a magnetic resonance image dataset corresponding to a head, neck and chest region; the magnetic resonance image data set comprises at least two of a T1 image, an MRA image, a T1 enhanced image, a T2 image and a proton density image, wherein the magnetic resonance image data set is formed by splicing scanning data of an aortic arch blood vessel segment, a carotid artery blood vessel segment and an intracranial blood vessel segment; extracting a blood vessel center line corresponding to the head, neck and chest region from the magnetic resonance image data set, and carrying out curved surface reconstruction on the magnetic resonance image data set along the blood vessel center line to obtain a curved surface reconstruction image, wherein the curved surface reconstruction image is provided with the blood vessel center line; responding to a cross section selection instruction on a user operation interface, and selecting a target position on the curved surface reconstruction image to obtain a blood vessel wall segmentation result of the cross section reconstruction image of the target position; comparing the blood vessel wall segmentation results of the cross-section reconstructed images, and determining the tissue difference of the blood vessel wall so as to judge the components of the blood vessel wall; wherein the vessel wall segmentation result of the cross-sectional reconstructed image is a different layer and/or at least two contrast images; according to the blood vessel center line, determining a target area in the head, neck and chest area and determining a target detection image corresponding to the target area; and detecting the region of interest based on the image characteristics of the target detection image of the target region to obtain quantitative analysis data serving as image analysis data.
In a second aspect, the present application also provides a one-stop blood vessel wall composition and quantitative parameter acquisition device. The device comprises: the data set acquisition module is used for acquiring a magnetic resonance image data set corresponding to the head, neck and chest region; the magnetic resonance image data set comprises at least two of a T1 image, an MRA image, a T1 enhanced image, a T2 image and a proton density image, wherein the magnetic resonance image data set is formed by splicing scanning data of an aortic arch blood vessel segment, a carotid artery blood vessel segment and an intracranial blood vessel segment; the image reconstruction module is used for extracting a blood vessel center line corresponding to the head, neck and chest region from the magnetic resonance image data set, and carrying out curved surface reconstruction on the magnetic resonance image data set along the blood vessel center line to obtain a curved surface reconstruction image, wherein the curved surface reconstruction image is provided with the blood vessel center line; the segmentation result obtaining module is used for responding to a cross section selection instruction on a user operation interface, selecting a target position on the curved surface reconstruction image so as to obtain a blood vessel wall segmentation result of the cross section reconstruction image of the target position; the component obtaining module of the vessel wall is used for comparing the vessel wall segmentation results of the cross-section reconstruction images and determining the tissue difference of the vessel wall so as to judge the component of the vessel wall; wherein the vessel wall segmentation result of the cross-sectional reconstructed image is a different layer and/or at least two contrast images; the detection image determining module is used for determining a target area in the head, neck and chest area according to the blood vessel center line and determining a target detection image corresponding to the target area; and the quantitative analysis data obtaining module is used for detecting the region of interest based on the image characteristics of the target detection image of the target region to obtain quantitative analysis data serving as image analysis data.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor which when executing the computer program performs the steps of: acquiring a magnetic resonance image dataset corresponding to a head, neck and chest region; the magnetic resonance image data set comprises at least two of a T1 image, an MRA image, a T1 enhanced image, a T2 image and a proton density image, wherein the magnetic resonance image data set is formed by splicing scanning data of an aortic arch blood vessel segment, a carotid artery blood vessel segment and an intracranial blood vessel segment; extracting a blood vessel center line corresponding to the head, neck and chest region from the magnetic resonance image data set, and carrying out curved surface reconstruction on the magnetic resonance image data set along the blood vessel center line to obtain a curved surface reconstruction image, wherein the curved surface reconstruction image is provided with the blood vessel center line; responding to a cross section selection instruction on a user operation interface, and selecting a target position on the curved surface reconstruction image to obtain a blood vessel wall segmentation result of the cross section reconstruction image of the target position; comparing the blood vessel wall segmentation results of the cross-section reconstructed images, and determining the tissue difference of the blood vessel wall so as to judge the components of the blood vessel wall; wherein the vessel wall segmentation result of the cross-sectional reconstructed image is a different layer and/or at least two contrast images; according to the blood vessel center line, determining a target area in the head, neck and chest area and determining a target detection image corresponding to the target area; and detecting the region of interest based on the image characteristics of the target detection image of the target region to obtain quantitative analysis data serving as image analysis data.
In a fourth aspect, the present application also provides a computer-readable storage medium. The computer readable storage medium has stored thereon a computer program which, when executed by a processor, performs the steps of obtaining a magnetic resonance image dataset corresponding to a head and neck chest region; the magnetic resonance image data set comprises at least two of a T1 image, an MRA image, a T1 enhanced image, a T2 image and a proton density image, wherein the magnetic resonance image data set is formed by splicing scanning data of an aortic arch blood vessel segment, a carotid artery blood vessel segment and an intracranial blood vessel segment; extracting a blood vessel center line corresponding to the head, neck and chest region from the magnetic resonance image data set, and carrying out curved surface reconstruction on the magnetic resonance image data set along the blood vessel center line to obtain a curved surface reconstruction image, wherein the curved surface reconstruction image is provided with the blood vessel center line; responding to a cross section selection instruction on a user operation interface, and selecting a target position on the curved surface reconstruction image to obtain a blood vessel wall segmentation result of the cross section reconstruction image of the target position; comparing the blood vessel wall segmentation results of the cross-section reconstructed images, and determining the tissue difference of the blood vessel wall so as to judge the components of the blood vessel wall; wherein the vessel wall segmentation result of the cross-sectional reconstructed image is a different layer and/or at least two contrast images; according to the blood vessel center line, determining a target area in the head, neck and chest area and determining a target detection image corresponding to the target area; and detecting the region of interest based on the image characteristics of the target detection image of the target region to obtain quantitative analysis data serving as image analysis data.
In a fifth aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of: acquiring a magnetic resonance image dataset corresponding to a head, neck and chest region; the magnetic resonance image data set comprises at least two of a T1 image, an MRA image, a T1 enhanced image, a T2 image and a proton density image, wherein the magnetic resonance image data set is formed by splicing scanning data of an aortic arch blood vessel segment, a carotid artery blood vessel segment and an intracranial blood vessel segment; extracting a blood vessel center line corresponding to the head, neck and chest region from the magnetic resonance image data set, and carrying out curved surface reconstruction on the magnetic resonance image data set along the blood vessel center line to obtain a curved surface reconstruction image, wherein the curved surface reconstruction image is provided with the blood vessel center line; responding to a cross section selection instruction on a user operation interface, and selecting a target position on the curved surface reconstruction image to obtain a blood vessel wall segmentation result of the cross section reconstruction image of the target position; comparing the blood vessel wall segmentation results of the cross-section reconstructed images, and determining the tissue difference of the blood vessel wall so as to judge the components of the blood vessel wall; wherein the vessel wall segmentation result of the cross-sectional reconstructed image is a different layer and/or at least two contrast images; according to the blood vessel center line, determining a target area in the head, neck and chest area and determining a target detection image corresponding to the target area; and detecting the region of interest based on the image characteristics of the target detection image of the target region to obtain quantitative analysis data serving as image analysis data.
In a sixth aspect, the present application also provides a one-stop vascular wall composition and quantitative parameter display system comprising a graphical user interface comprising: a first region displaying a magnetic resonance image dataset corresponding to the head, neck and chest region; the magnetic resonance image data set comprises at least two of a T1 image, an MRA image, a T1 enhanced image, a T2 image and a proton density image, wherein the magnetic resonance image data set is formed by splicing scanning data of an aortic arch blood vessel segment, a carotid artery blood vessel segment and an intracranial blood vessel segment; a second area for displaying a list selection area including blood vessels for a user to select a target blood vessel type; and the third area is used for displaying a blood vessel center line corresponding to the head, neck and chest area extracted from the magnetic resonance image data set, and carrying out curved surface reconstruction on the magnetic resonance image data set along the blood vessel center line to obtain a curved surface reconstruction image, wherein the curved surface reconstruction image is provided with the blood vessel center line.
The method, the device, the computer equipment, the computer readable storage medium, the computer program product and the display system for acquiring the composition and the quantitative parameters of the wall of the one-stop type blood vessel are realized by acquiring a magnetic resonance image data set corresponding to the head, neck and chest region; the magnetic resonance image data set comprises at least two of a T1 image, an MRA image, a T1 enhanced image, a T2 image and a proton density image, wherein the magnetic resonance image data set is formed by splicing scanning data of an aortic arch vessel section, a carotid artery vessel section and an intracranial vessel section; extracting a blood vessel center line corresponding to the head, neck and chest region from the magnetic resonance image data set, and carrying out curved surface reconstruction on the magnetic resonance image data set along the blood vessel center line to obtain a curved surface reconstruction image, wherein the curved surface reconstruction image is provided with the blood vessel center line; responding to a cross section selection instruction on a user operation interface, and selecting a target position on the curved surface reconstruction image to obtain a blood vessel wall segmentation result of the cross section reconstruction image of the target position; comparing the blood vessel wall segmentation results of the cross-section reconstructed images, and determining the tissue difference of the blood vessel wall so as to judge the components of the blood vessel wall; wherein the vessel wall segmentation result of the cross-sectional reconstructed image is different layers and/or at least two contrast images; according to the central line of the blood vessel, determining a target area in the head and neck chest area and determining a target detection image corresponding to the target area; and detecting the region of interest based on the image characteristics of the target detection image of the target region to obtain quantitative analysis data serving as image analysis data.
Extracting a blood vessel center line corresponding to a head and neck chest region from a magnetic resonance image dataset corresponding to the given head and neck chest region, constructing a curved surface reconstruction image under the guidance of the blood vessel center line, and further analyzing a target position in the curved surface reconstruction image to obtain components of a blood vessel wall; under the guidance of the central line of the blood vessel, carrying out feature analysis on the target detection image of the target area to obtain quantitative analysis data; the method can realize the analysis of the large-scale vascular lesions based on the magnetic resonance image dataset according to the components of the vascular wall and quantitative analysis data, and improve the diagnosis and treatment accuracy of the diseases caused by the whole blood vessel.
Drawings
FIG. 1 is a diagram of an application environment for a method of obtaining composition and quantitative parameters of a one-stop type vessel wall in one embodiment;
FIG. 2 is a flow chart of a method of obtaining composition and quantitative parameters of a wall of a one-stop blood vessel in one embodiment;
FIG. 3 is a flow chart of a method of determining quantitative analysis data in one embodiment;
FIG. 4 is a flow chart of a method for obtaining a segmentation result of a vessel wall according to an embodiment;
FIG. 5 is a flow chart of a method of determining a centerline of a blood vessel in one embodiment;
FIG. 6 is a flow chart of a method of determining a centerline of a blood vessel in another embodiment;
FIG. 7 is a schematic diagram of integrated vessel wall image input data in one embodiment;
FIG. 8 is a schematic view of an integrated vessel wall surface reconstruction in one embodiment;
FIG. 9 is a schematic diagram of a vessel centerline segmentation network in one embodiment;
FIG. 10 is a schematic view of the head, neck and chest large-range data centerline extraction range in one embodiment;
FIG. 11 is a schematic diagram of head, neck and chest large-range data centerline management in one embodiment;
FIG. 12 is a block diagram of a one-stop blood vessel wall composition and quantitative parameter acquisition device in accordance with one embodiment;
FIG. 13 is an internal block diagram of a computer device in one embodiment;
fig. 14 is a graphical user interface schematic of a head, neck and chest image processing system in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
The method for acquiring the composition and the quantitative parameters of the wall of the one-stop type blood vessel can be applied to an application environment shown in fig. 1. Wherein the terminal 102 communicates with the server 104 via a network. The data storage system may store data that the server 104 needs to process. The data storage system may be integrated on the server 104 or may be located on a cloud or other network server. The server 104 acquires a magnetic resonance image data set corresponding to the head, neck and chest region from the terminal 102; the magnetic resonance image data set comprises at least two of a T1 image, an MRA image, a T1 enhanced image, a T2 image and a proton density image, wherein the magnetic resonance image data set is formed by splicing scanning data of an aortic arch vessel section, a carotid artery vessel section and an intracranial vessel section; extracting a blood vessel center line corresponding to the head, neck and chest region from the magnetic resonance image data set, and carrying out curved surface reconstruction on the magnetic resonance image data set along the blood vessel center line to obtain a curved surface reconstruction image, wherein the curved surface reconstruction image is provided with the blood vessel center line; responding to a cross section selection instruction on a user operation interface, and selecting a target position on the curved surface reconstruction image to obtain a blood vessel wall segmentation result of the cross section reconstruction image of the target position; comparing the blood vessel wall segmentation results of the cross-section reconstructed images, and determining the tissue difference of the blood vessel wall so as to judge the components of the blood vessel wall; wherein the vessel wall segmentation result of the cross-sectional reconstructed image is different layers and/or at least two contrast images; according to the central line of the blood vessel, determining a target area in the head and neck chest area and determining a target detection image corresponding to the target area; and detecting the region of interest based on the image characteristics of the target detection image of the target region to obtain quantitative analysis data serving as image analysis data. The terminal 102 may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, internet of things devices, portable wearable devices, and the like. The server 104 may be implemented as a stand-alone server or as a server cluster of multiple servers.
In one embodiment, as shown in fig. 2, a method for acquiring the composition and quantitative parameters of a blood vessel wall in a one-stop manner is provided. The method is described here as applied to the server in fig. 1 by way of illustration, and includes the following steps:
step 202, acquiring a magnetic resonance image data set corresponding to a head, neck and chest region.
The head, neck and chest region can be a region formed by at least one part of the head, neck or chest of a human or animal, wherein the chest region comprises an aorta. For example: a region of the head, neck and chest, a region of the head and neck, a region of the neck and chest, a region of the head and chest, and a region of one of the head, neck and chest.
The magnetic resonance image dataset may be magnetic resonance image sequence data obtained by scanning the head, neck and chest region.
Specifically, the server 104 responds to an operation instruction from the terminal 102 to obtain, from the terminal 102, a magnetic resonance image dataset corresponding to a head, neck and chest region, where the head, neck and chest region includes at least one part of the head, neck or chest of the target object. The magnetic resonance image dataset is a head, neck and chest magnetic resonance image sequence, including but not limited to a T1 image, an MRA image, a T1 enhanced image, a T2 image, a proton density image, and the like. The one-stop vessel wall analysis corresponding to the magnetic resonance image dataset is illustrated in fig. 7 and fig. 8, where fig. 7 shows the integrated vessel wall image input data; in this embodiment a T1 image is used as an example, and the input data is a large-range scan sequence covering the aortic arch combined with the head and neck arterial vessels. The large-range image acquisition may be obtained either by stitching segmented scans or by a single whole scan. If it is formed by stitching segmented scans, the scan data of the aortic arch vessel segment, the carotid artery vessel segment and the intracranial vessel segment are acquired separately, and the vessel segments are then stitched based on a rigid registration algorithm to obtain the large-range magnetic resonance image dataset.
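As a minimal sketch of the stitching step, the code below assumes SimpleITK as the registration toolkit; the disclosure only specifies that segments are stitched based on a rigid registration algorithm, so the file names, metric and optimizer settings here are illustrative assumptions.

```python
# Sketch only: rigid registration of two scan segments with SimpleITK.
# The patent specifies "a rigid registration algorithm" but not a library;
# SimpleITK, the metric and all parameter values here are assumptions.
import SimpleITK as sitk

fixed = sitk.ReadImage("aortic_arch_segment.nii.gz", sitk.sitkFloat32)   # placeholder path
moving = sitk.ReadImage("carotid_segment.nii.gz", sitk.sitkFloat32)      # placeholder path

# Initialize a rigid (Euler) transform from the geometric centers of the volumes.
initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0,
                                             minStep=1e-4,
                                             numberOfIterations=200)
reg.SetInitialTransform(initial, inPlace=False)
reg.SetInterpolator(sitk.sitkLinear)

transform = reg.Execute(fixed, moving)

# Resample the moving segment into the fixed segment's space; the aligned
# segments can then be blended/concatenated into the large-range dataset.
aligned = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
sitk.WriteImage(aligned, "carotid_segment_aligned.nii.gz")
```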
Image feature analysis is then performed on the target region in the head, neck and chest region along the blood vessel center line to obtain quantitative parameters. The quantitative parameters include one or more of lesion grading results, lesion location information, vessel diameter parameters, vessel cross-sectional area parameters, or stenosis parameters. Various parameter values such as lumen diameter, vessel wall thickness, vessel wall area, vessel lumen area, stenosis rate (with the distal or proximal end as reference), and normalized wall index can be displayed. Based on these wall parameters at different time points and their changes, a physician can evaluate treatment efficacy, compare pre- and post-operative status, judge disease severity, and so on.
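For illustration, the snippet below sketches how the stenosis rate (relative to a distal or proximal reference) and the normalized wall index could be derived from segmented lumen and outer-wall areas; the diameter-based formulation and all input values are assumptions, not values taken from the disclosure.

```python
import math

def quantitative_wall_parameters(lumen_area_mm2, outer_wall_area_mm2,
                                 reference_lumen_area_mm2):
    """Sketch of the quantitative wall parameters named in the text.
    Areas are assumed to come from the lumen-wall segmentation of a
    cross-sectional reconstructed image; units are mm^2."""
    wall_area = outer_wall_area_mm2 - lumen_area_mm2
    # Equivalent diameters, treating the cross sections as circles.
    lumen_diameter = 2.0 * math.sqrt(lumen_area_mm2 / math.pi)
    reference_diameter = 2.0 * math.sqrt(reference_lumen_area_mm2 / math.pi)
    # Diameter-based stenosis rate relative to a distal/proximal reference slice.
    stenosis_rate = (1.0 - lumen_diameter / reference_diameter) * 100.0
    # Normalized wall index: wall area over total (outer) vessel area.
    normalized_wall_index = wall_area / outer_wall_area_mm2
    return {
        "lumen_diameter_mm": lumen_diameter,
        "wall_area_mm2": wall_area,
        "lumen_area_mm2": lumen_area_mm2,
        "stenosis_rate_percent": stenosis_rate,
        "normalized_wall_index": normalized_wall_index,
    }

# Hypothetical example: a lesion slice compared with a healthy reference slice.
print(quantitative_wall_parameters(lumen_area_mm2=12.0,
                                   outer_wall_area_mm2=40.0,
                                   reference_lumen_area_mm2=30.0))
```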
The acquired magnetic resonance image dataset is then stored in a storage unit; when the server needs to process it, the dataset is loaded from the storage unit into volatile memory for computation by the central processing unit. The magnetic resonance image data of each dataset may be input to the central processing unit one item at a time, or several items may be input simultaneously.
Step 204, extracting a vascular center line corresponding to the head, neck and chest region from the magnetic resonance image data set, and performing curved surface reconstruction on the magnetic resonance image data set along the vascular center line to obtain a curved surface reconstruction image.
The blood vessel center line may be a curve obtained by fitting (for example, by interpolating) the centers of the blood vessel cross sections displayed in the magnetic resonance image dataset.
Specifically, according to the display condition of the magnetic resonance image dataset on the head, neck and chest region, extracting and identifying the central line of the blood vessel from the image representing the blood vessel according to the magnetic resonance image dataset, wherein the extraction of the central line of the blood vessel is realized by adopting a multi-sequence model.
If the requirement for vessel imaging is segmentation of a vessel object, where the vessel object includes at least one of a vessel wall or a vessel wall plaque, a curved surface reconstruction is performed on vessels in a target area according to a linear structure center point of a vessel center line and direction vector information, respectively, where an image obtained by the curved surface reconstruction is shown in fig. 8.
In one embodiment, the input data comprises a plurality of anatomical images, the contrast of two or more anatomical images being different. And extracting a blood vessel center line corresponding to the neck and chest region of the head from the input data, wherein the blood vessel center line can be obtained by fusing the regional blood vessel center lines of at least two anatomical images. Further, the anatomical image may be curved along the vessel centerline, resulting in a curved reconstructed image as shown in fig. 8. The curved reconstruction image may have a vessel centerline and the cross-sectional reconstruction image at the selected location may be obtained on the curved reconstruction image by a cross-sectional selection key provided on the user interface.
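The following is a hedged sketch of curved surface reconstruction along the blood vessel center line: the volume is resampled line by line perpendicular to the local centerline tangent to form a straightened image. The use of scipy, the fixed reference vector and the sampling width are assumptions, not details from the disclosure.

```python
# Minimal straightened-CPR sketch: resample the volume along the vessel
# centerline so that the vessel appears as a (nearly) straight band.  The
# centerline is assumed to be an ordered (N, 3) array of voxel coordinates.
import numpy as np
from scipy.ndimage import map_coordinates

def curved_planar_reformation(volume, centerline, half_width=20, step=1.0):
    rows = []
    reference = np.array([0.0, 0.0, 1.0])          # assumed up-vector
    for i in range(len(centerline) - 1):
        point = centerline[i].astype(float)
        tangent = centerline[i + 1] - centerline[i]
        tangent = tangent / (np.linalg.norm(tangent) + 1e-8)
        # One in-plane direction perpendicular to the local tangent.
        normal = np.cross(tangent, reference)
        if np.linalg.norm(normal) < 1e-6:          # tangent parallel to reference
            normal = np.cross(tangent, np.array([0.0, 1.0, 0.0]))
        normal = normal / np.linalg.norm(normal)
        offsets = np.arange(-half_width, half_width + 1) * step
        samples = point[None, :] + offsets[:, None] * normal[None, :]
        row = map_coordinates(volume, samples.T, order=1, mode="nearest")
        rows.append(row)
    return np.stack(rows)                          # (N-1, 2*half_width+1) CPR image
```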
And step 206, responding to the cross section selection instruction on the user operation interface, and selecting a target position on the curved surface reconstruction image to obtain a blood vessel wall segmentation result of the cross section reconstruction image of the target position.
The user operation interface may be an operation interface in the terminal that interacts with the outside.
The cross-section selection instruction may be a two-dimensional cross-section reconstruction instruction for a target location, where the target location may be a vessel wall requiring two-dimensional cross-section reconstruction.
Specifically, a target position is determined on a curved surface reconstruction image in response to a cross section selection instruction on a user operation interface, and two-dimensional cross section reconstruction is performed in combination with the curved surface reconstruction image according to the linear structure center point and the direction vector information of the blood vessel center line, wherein the two-dimensional cross section reconstruction result is a cross section reconstruction image. Inputting the cross-section reconstruction image into a corresponding neural network model with a segmentation function, and carrying out image segmentation according to image characteristics through the neural network model to respectively obtain a lumen wall segmentation result corresponding to the lumen wall of the blood vessel and a wall plaque segmentation result corresponding to the wall plaque of the blood vessel.
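As an illustrative sketch of two-dimensional cross-section reconstruction from a centerline point and its direction vector, the function below builds an orthonormal in-plane basis and resamples a small plane from the volume; the plane size, spacing and library choice are assumptions. The resulting patch is the kind of input that would be passed to the segmentation models described later.

```python
# Sketch of cross-section reconstruction at a selected target position: build
# an orthonormal basis perpendicular to the centerline tangent and resample a
# small plane from the volume.
import numpy as np
from scipy.ndimage import map_coordinates

def reconstruct_cross_section(volume, center_point, tangent, size=64, spacing=0.5):
    t = np.asarray(tangent, dtype=float)
    t /= (np.linalg.norm(t) + 1e-8)
    helper = np.array([0.0, 0.0, 1.0]) if abs(t[2]) < 0.9 else np.array([1.0, 0.0, 0.0])
    u = np.cross(t, helper); u /= np.linalg.norm(u)    # first in-plane axis
    v = np.cross(t, u)                                  # second in-plane axis
    half = (size - 1) / 2.0
    grid = (np.arange(size) - half) * spacing
    uu, vv = np.meshgrid(grid, grid, indexing="ij")
    coords = (np.asarray(center_point, dtype=float)[None, None, :]
              + uu[..., None] * u[None, None, :]
              + vv[..., None] * v[None, None, :])       # (size, size, 3)
    patch = map_coordinates(volume, coords.reshape(-1, 3).T, order=1,
                            mode="nearest").reshape(size, size)
    return patch
```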
And step 208, comparing the blood vessel wall segmentation results of the cross-section reconstructed images, and determining the tissue difference of the blood vessel wall so as to judge the components of the blood vessel wall.
The components of the vessel wall may be expressed as vessel wall parameters, stenosis parameters and the like.
Specifically, the tissue difference of a first vessel wall is obtained by comparing different layers and/or at least two contrast images within the lumen wall segmentation result of the cross-sectional reconstructed image; or the tissue difference of a second vessel wall is obtained by comparing different layers and/or at least two contrast images within the wall plaque segmentation result of the cross-sectional reconstructed image; or the tissue difference of a third vessel wall is obtained by comparing different layers and/or at least two contrast images between the lumen wall segmentation result and the wall plaque segmentation result of the cross-sectional reconstructed image. The composition of the vessel wall may then be determined from the tissue difference of the first vessel wall and/or the tissue difference of the second vessel wall and/or the tissue difference of the third vessel wall, where the composition of the vessel wall includes but is not limited to quantitative parameters such as vessel wall parameters and stenosis parameters.
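The rule set below is only a hedged illustration of judging wall composition from multi-contrast tissue differences; the thresholds and component names are commonly cited qualitative criteria expressed relative to adjacent muscle signal and are not taken from the disclosure.

```python
# Simplified, hedged sketch of judging plaque composition from multi-contrast
# signal differences.  The patent only states that wall components are judged
# by comparing tissue differences across layers/contrasts; the ratios below
# are hypothetical, NOT thresholds from the patent.
def classify_plaque_component(t1, t2, t1_ce, reference):
    """t1, t2, t1_ce: mean plaque signal on each contrast; reference: mean
    signal of adjacent muscle (assumed equal across contrasts for brevity)."""
    hyper_t1 = t1 > 1.3 * reference
    hypo_all = (t1 < 0.7 * reference and t2 < 0.7 * reference
                and t1_ce < 0.7 * reference)
    enhances = t1_ce > 1.2 * t1
    if hypo_all:
        return "calcification"
    if hyper_t1:
        return "intraplaque haemorrhage"
    if not enhances:
        return "lipid-rich necrotic core"
    return "fibrous tissue"
```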
Step 210, determining a target area in the head, neck and chest area according to the blood vessel center line, and determining a target detection image corresponding to the target area.
The target area can be an area selected to be subjected to characteristic analysis of the image in the head, neck and chest area.
Wherein the target detection image may be a corresponding partial magnetic resonance image dataset in the target region.
Specifically, if the vessel imaging requirement is the detection of the region of interest, the vessel center line is taken as the reference line of the image analysis, the region needing the image analysis is determined as the target region in the head and neck chest region, and the partial magnetic resonance image dataset corresponding to the target region needing the image analysis is determined as the target detection image.
In step 212, the region of interest is detected based on the image features of the target detection image of the target region, and quantitative analysis data is obtained as image analysis data.
The region of interest detection may be detection of a region in the target region where abnormality may occur, that is, detection of features of an image of the region of interest is required to further determine whether there is abnormality.
The quantitative analysis data may be analysis data obtained by detecting the region of interest, and reflect whether the region of interest has an abnormality.
Specifically, the target detection image of the target region is input into an anomaly detection model formed by a neural network, and region-of-interest detection is performed by the anomaly detection model to obtain quantitative analysis data as image analysis data, where the quantitative analysis data include, but are not limited to, the abnormality category, abnormality location and abnormality grading result of the region of interest.
In the method for acquiring the composition and quantitative parameters of the wall of the one-stop type blood vessel, a magnetic resonance image data set corresponding to a head, neck and chest region is acquired; the magnetic resonance image data set comprises at least two of a T1 image, an MRA image, a T1 enhanced image, a T2 image and a proton density image, wherein the magnetic resonance image data set is formed by splicing scanning data of an aortic arch vessel section, a carotid artery vessel section and an intracranial vessel section; extracting a blood vessel center line corresponding to the head, neck and chest region from the magnetic resonance image data set, and carrying out curved surface reconstruction on the magnetic resonance image data set along the blood vessel center line to obtain a curved surface reconstruction image, wherein the curved surface reconstruction image is provided with the blood vessel center line; responding to a cross section selection instruction on a user operation interface, and selecting a target position on the curved surface reconstruction image to obtain a blood vessel wall segmentation result of the cross section reconstruction image of the target position; comparing the blood vessel wall segmentation results of the cross-section reconstructed images, and determining the tissue difference of the blood vessel wall so as to judge the components of the blood vessel wall; wherein the vessel wall segmentation result of the cross-sectional reconstructed image is different layers and/or at least two contrast images; according to the central line of the blood vessel, determining a target area in the head and neck chest area and determining a target detection image corresponding to the target area; and detecting the region of interest based on the image characteristics of the target detection image of the target region to obtain quantitative analysis data serving as image analysis data.
Extracting a blood vessel center line corresponding to a head and neck chest region from a magnetic resonance image dataset corresponding to the given head and neck chest region, constructing a curved surface reconstruction image under the guidance of the blood vessel center line, and further analyzing a target position in the curved surface reconstruction image to obtain components of a blood vessel wall; under the guidance of the central line of the blood vessel, carrying out feature analysis on the target detection image of the target area to obtain quantitative analysis data; the method can realize the analysis of the large-scale vascular lesions based on the magnetic resonance image dataset according to the components of the vascular wall and quantitative analysis data, and improve the diagnosis and treatment accuracy of the diseases caused by the whole blood vessel.
In one embodiment, as shown in fig. 3, the detecting the region of interest based on the image feature of the target detection image of the target region to obtain quantitative analysis data includes:
and step 302, detecting the region of interest based on the image characteristics of the target detection image of the target region, and obtaining a region of interest identification result.
The region of interest recognition result may be detection data obtained by detecting image features of the target detection image.
Specifically, based on the extracted vascular center line of the head, neck and chest, a two-dimensional or three-dimensional region of interest is constructed in a target region along the center line, and then each region of interest is detected and identified by utilizing the image characteristics of a target detection image, so that a region of interest identification result is obtained.
Step 304, determining an abnormality category, an abnormality position and an abnormality grading result of the region of interest as quantitative analysis data in the case that the region of interest identification result indicates that the target region has an abnormality.
The region of interest may be a region where abnormality recognition is required.
Specifically, in a case where the region-of-interest identification result indicates that the target region has an abnormality, that is, where at least one item of identification data indicates that the corresponding region of interest is abnormal, the abnormality of the region of interest is determined from the identification result. The abnormality may be described by an abnormality category, an abnormality location and an abnormality grading result, for example: the abnormality category may be atherosclerotic plaque, dissection, aneurysm, etc., and the abnormality grading result may be the risk level of a lesion. The region-of-interest detection may use a conventional machine learning method, for example a discriminator that outputs a label indicating whether a lesion is present, the lesion location information, and the risk-level classification. Deep learning methods such as SSD, Faster R-CNN or YOLO networks may also be used: a dataset with identification labels (including the lesion risk level where a lesion is present) is constructed to train and test the model, which then outputs the location information and risk-level information of lesions.
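As a hedged sketch of the deep-learning option, the code below runs region-of-interest detection with torchvision's Faster R-CNN; the class count, score threshold and dummy input are illustrative assumptions rather than details of the disclosed system.

```python
# Hedged sketch of lesion detection with torchvision's Faster R-CNN.
# The number of classes (background + atherosclerotic plaque, dissection,
# aneurysm) and the score threshold are assumptions.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

NUM_CLASSES = 4  # background + 3 hypothetical lesion categories
model = fasterrcnn_resnet50_fpn(weights=None, num_classes=NUM_CLASSES)
model.eval()

def detect_lesions(roi_image):
    """roi_image: float tensor of shape (3, H, W) holding a region of interest
    sampled along the vessel centerline (channel-replicated MR slice)."""
    with torch.no_grad():
        pred = model([roi_image])[0]          # dict with boxes, labels, scores
    keep = pred["scores"] > 0.5               # assumed confidence threshold
    return {"boxes": pred["boxes"][keep],     # lesion location information
            "labels": pred["labels"][keep],   # lesion category
            "scores": pred["scores"][keep]}   # can be mapped to a risk grading

# Example with a random dummy ROI (untrained weights, so output is illustrative).
print(detect_lesions(torch.rand(3, 224, 224)))
```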
Alternatively, the plaque type may be determined by block-wise analysis along the vessel centerline, resulting in one or more of the eight plaque analysis results shown in the left diagram of fig. 11. From the plaque analysis result, the probability of plaque detachment and rupture in a vascular region can be determined, the probability of an adverse event can be predicted, or the probability of stroke and cerebral hemorrhage for the patient can be determined.
In this embodiment, by evaluating the region-of-interest identification result and outputting abnormality information when the result indicates that the region of interest is abnormal, the abnormality of the region of interest can be described both in images and in data, which improves the accuracy of detecting abnormal conditions of the region of interest.
In one embodiment, the quantitative analysis data include at least one of lesion grading results, lesion location information, vessel diameter parameters, vessel cross-sectional area parameters, or stenosis parameters. The lesion grading result may be the degree of risk that a lesion in the region of interest poses to the target object; the lesion location information may be the three-dimensional coordinate information of a lesion in the target object when a lesion is present in the region of interest; the vessel diameter parameter may be a parameter describing the diameter of the vessel in the cross-sectional reconstructed image; the vessel cross-sectional area parameter may be an area parameter of the vessel in the cross-sectional reconstructed image; the stenosis parameter may be the degree of stenosis of the vessel in the region of interest.
In one embodiment, as shown in fig. 4, obtaining vessel wall segmentation results of a cross-sectional reconstructed image of a target location includes:
step 402, inputting the cross section reconstruction image into a blood vessel wall segmentation model based on the target position to obtain a lumen wall segmentation result.
The vessel wall segmentation model can be a neural network model for performing image segmentation on the vessel lumen wall of the cross-section reconstructed image.
The lumen wall segmentation result may be an image segmentation result of the vessel lumen wall input of the vessel wall segmentation model.
Specifically, a vessel wall segmentation model generated from a two-dimensional V-shaped network model is constructed and trained on sample images so that it can segment the vessel lumen wall. The cross-sectional reconstructed image is input into the vessel wall segmentation model, which performs image segmentation according to image features related to the vessel lumen wall to obtain the lumen wall segmentation result.
Step 404, taking the lumen wall segmentation result as a blood vessel wall segmentation result.
Specifically, the vessel wall segmentation results include, but are not limited to, quantitative parameters of the vessel lumen wall as vessel object segmentation results.
And/or, step 406, inputting the cross-section reconstruction image into the vessel plaque segmentation model based on the target position to obtain a vessel plaque segmentation result.
The vessel plaque segmentation model may be a neural network model for performing image segmentation on vessel wall plaque of the cross-sectional reconstructed image.
The vessel wall plaque segmentation result may be an image segmentation result of the vessel wall plaque input by the vessel plaque segmentation model.
Specifically, a vessel plaque segmentation model generated from a two-dimensional V-shaped network model is constructed and trained on sample images so that it can segment vessel wall plaque. The cross-sectional reconstructed image is input into the vessel plaque segmentation model, which performs image segmentation according to image features related to vessel wall plaque to obtain the vessel wall plaque segmentation result.
And step 408, taking the plaque segmentation result as a blood vessel wall segmentation result.
Specifically, the vessel wall plaque segmentation results include, but are not limited to, quantitative parameters of vessel wall plaque taken as vessel object segmentation results. If only the lumen wall segmentation result or only the wall plaque segmentation result exists, that one is taken as the vessel object segmentation result; if the lumen wall segmentation result and the wall plaque segmentation result both exist, the two are combined as the vessel wall segmentation result.
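For illustration, a minimal two-dimensional V-shaped (encoder-decoder with skip connections) network is sketched below in PyTorch; the depth and channel counts are assumptions, and the same class is instantiated separately as the lumen wall segmentation model and the wall plaque segmentation model. The disclosure does not specify the exact architecture.

```python
# Minimal sketch of a 2D "V-shaped" segmentation network (encoder-decoder with
# skip connections).  Depth, channel counts and single-class output are
# illustrative assumptions.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

class VShaped2D(nn.Module):
    def __init__(self, in_channels=1, num_classes=1):
        super().__init__()
        self.enc1 = conv_block(in_channels, 32)
        self.enc2 = conv_block(32, 64)
        self.bottleneck = conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, num_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)                  # per-pixel logits

lumen_wall_model = VShaped2D(num_classes=1)   # vessel lumen-wall segmentation
plaque_model = VShaped2D(num_classes=1)       # vessel-wall plaque segmentation
mask = torch.sigmoid(lumen_wall_model(torch.rand(1, 1, 96, 96))) > 0.5
```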
In one embodiment, the vessel segmentation results include a combination of one or more of middle cerebral artery, anterior cerebral artery, posterior cerebral artery, basilar artery, internal carotid artery, external carotid artery, vertebral artery, common carotid artery, brachiocephalic trunk, subclavian artery, aortic arch.
In this embodiment, by dividing the vessel lumen wall and the vessel wall plaque, and combining the two division results as the vessel object division results of the head, neck and chest regions, the abnormality diagnosis of the vessel can be performed while considering the abnormality of the vessel lumen wall and the abnormality of the vessel wall plaque, and the diagnosis accuracy of the vessel abnormality condition can be improved.
In one embodiment, as shown in fig. 5, extracting a vessel centerline corresponding to a head, neck and chest region from a magnetic resonance image dataset includes:
step 502, performing multi-contrast sequence segmentation on the magnetic resonance image dataset to obtain a blood vessel trunk segmentation result.
Wherein the multi-contrast sequence segmentation may be an operation of segmenting the magnetic resonance image dataset according to different contrasts.
The segmentation result of the blood vessel trunk may be a segmentation result obtained by segmenting the blood vessel trunk through a contrast sequence.
Specifically, according to the requirement of a specific segmentation task, a multi-contrast sequence required for the segmentation task is determined, then the magnetic resonance image dataset is segmented by using the multi-contrast sequence, and as the magnetic resonance image dataset is a magnetic resonance image sequence of the head, neck and chest, a blood vessel trunk segmentation result corresponding to each sequence in the magnetic resonance image sequence can be obtained.
Step 504, inputting the magnetic resonance image dataset into the multi-label segmentation model to obtain a magnetic resonance image segmentation result.
Wherein the multi-label segmentation model may be an artificial intelligence model capable of performing multi-label segmentation tasks.
The magnetic resonance image segmentation result may be a segmentation result obtained by segmenting the magnetic resonance image dataset according to a plurality of labels.
Specifically, based on a convolutional neural network model capable of meeting multi-label segmentation, namely a multi-label segmentation model, a magnetic resonance image dataset is input into the multi-label segmentation model, and segmentation is carried out through a classifier of the multi-label segmentation model, so that a magnetic resonance image segmentation result corresponding to each sequence can be obtained.
Step 506, obtaining a blood vessel segmentation result according to the blood vessel trunk segmentation result and the magnetic resonance image segmentation result.
The blood vessel segmentation result may be each blood vessel segment obtained by segmenting a blood vessel in the head, neck and chest region.
Specifically, the blood vessel trunk segmentation result and the magnetic resonance image segmentation result are combined to obtain a blood vessel segmentation result. Segmented vessels in the vessel segmentation result include, but are not limited to, the following vessel segments: middle cerebral artery MCA, anterior cerebral artery ACA, posterior cerebral artery PCA, basilar artery BA, internal carotid artery ICA, external carotid artery ECA, vertebral artery VA, common carotid artery CCA, brachiocephalic trunk BCT, subclavian artery SA, aortic arch AA. For different vessel segments, the vessel segmentation results will be distinguished by different labels.
And step 508, determining a blood vessel center line corresponding to the head, neck and chest region according to the blood vessel segmentation result.
Specifically, under the condition that the magnetic resonance image dataset can cover the head, neck and chest region, according to different blood vessel segments in the blood vessel segmentation result, a skeletonizing and path tracking scheme is adopted to obtain blood vessel sub-center lines corresponding to the blood vessel segments, and the blood vessel sub-center lines corresponding to the blood vessel segments are spliced to obtain the blood vessel center lines corresponding to the head, neck and chest region.
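The sketch below illustrates the skeletonization and path tracking scheme under stated assumptions (skimage thinning, 26-neighbourhood breadth-first search): each labelled vessel segment is thinned to a voxel skeleton, and the longest endpoint-to-endpoint path is tracked as that segment's vessel sub-centerline.

```python
# Hedged sketch of "skeletonisation and path tracking" for one vessel segment.
import numpy as np
from collections import deque
from skimage.morphology import skeletonize

def segment_centerline(label_volume, label):
    skeleton = skeletonize(label_volume == label)          # 3D thinning
    points = {tuple(p) for p in np.argwhere(skeleton)}

    def neighbours(p):
        z, y, x = p
        for dz in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    q = (z + dz, y + dy, x + dx)
                    if q != p and q in points:
                        yield q

    def bfs_farthest(start):
        parent, dist = {start: None}, {start: 0}
        queue = deque([start])
        while queue:
            p = queue.popleft()
            for q in neighbours(p):
                if q not in dist:
                    dist[q], parent[q] = dist[p] + 1, p
                    queue.append(q)
        far = max(dist, key=dist.get)
        return far, parent

    start = next(iter(points))              # assumes the label exists
    end_a, _ = bfs_farthest(start)          # one end of the skeleton
    end_b, parent = bfs_farthest(end_a)     # other end + tracking tree
    path, p = [], end_b
    while p is not None:                    # walk back along the tracked path
        path.append(p)
        p = parent[p]
    return np.array(path)                   # ordered sub-centerline voxels
```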
In one embodiment, the centerline segmentation network is shown in fig. 9. When the magnetic resonance image datasets cannot individually cover the whole head, neck and chest region, the vessel centerlines corresponding to the different magnetic resonance image datasets are extended to obtain extended vessel centerlines for each dataset, and the extended vessel centerlines are stitched together to serve as the vessel centerline corresponding to the head, neck and chest region. The magnetic resonance image dataset comprises a plurality of anatomical images, at least two of which differ in contrast.
A vessel centerline corresponding to the head, neck and chest region is extracted from the magnetic resonance image dataset, where the vessel centerline is obtained by fusing the regional vessel centerlines of at least two anatomical images. The anatomical images may include a combination of one or more of time-of-flight magnetic resonance angiography (TOF-MRA) images, contrast-enhanced magnetic resonance angiography (CE-MRA) images, T1-CE images, and FLAIR (fluid attenuated inversion recovery) images. With continued reference to fig. 9, the left image shows three magnetic resonance images of different contrast: from top to bottom, a TOF-MRA image, a T1 image, and a T1-CE image. Optionally, the scan regions corresponding to the three images may be set to be the same or different. The three images are respectively input into the centerline segmentation network formed by the neural network shown in the middle of fig. 9, giving the regional vessel centerlines corresponding to the three contrasts; the vessel centerline can then be obtained by fusing these regional vessel centerlines. The right-hand part of fig. 9 shows the extracted centerline after curved projection on the TOF-MRA and T1 images, respectively.
In this embodiment, the vessel segmentation result is determined by combining the vessel trunk segmentation result with the magnetic resonance image segmentation result, and the centerlines of the segmented vessels are then connected to obtain the vessel centerline of the whole region. This allows the region of interest to be detected in different dimensions and the two-dimensional cross section of the vessel to be reconstructed, providing effective data support for different vessel imaging requirements.
In one embodiment, as shown in fig. 6, determining a vessel centerline corresponding to a head, neck and chest region according to a vessel segmentation result includes:
step 602, determining the central line of each blood vessel in the head, neck and chest region according to the blood vessel segmentation result.
The blood vessel sub-center line can be a center line corresponding to one section of blood vessel of the blood vessel segmentation result.
Specifically, for each vessel segment in the vessel segmentation result, a skeletonized and path tracking scheme is adopted to construct a vessel sub-center line corresponding to each vessel segment.
Step 604, extending each vessel sub-center line to obtain each vessel extension center line.
Wherein, the blood vessel extension center line can be new blood vessel center line data obtained after the extension of two end points of different blood vessel segments.
Specifically, for the blood vessel sub-center line of each blood vessel segment, a seed growth algorithm is adopted for extension, so as to obtain the blood vessel extension center line corresponding to each blood vessel segment.
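A minimal sketch of the seed-growth extension is given below, assuming a greedy intensity-based growth rule from the endpoint of a sub-centerline; the intensity threshold and the forward-progress criterion are assumptions, since the disclosure only names a seed growth algorithm.

```python
# Hedged sketch of extending one vessel sub-centerline from its endpoint:
# keep stepping to a bright, unvisited neighbouring voxel that does not
# reverse the current growth direction.
import numpy as np

def extend_centerline(volume, sub_centerline, intensity_threshold, max_steps=200):
    path = [np.asarray(p, dtype=int) for p in sub_centerline]   # needs >= 2 points
    visited = {tuple(p) for p in path}
    direction = path[-1] - path[-2]                     # current growth direction
    for _ in range(max_steps):
        tip = path[-1]
        best, best_score = None, -np.inf
        for dz in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    step = np.array([dz, dy, dx])
                    if not step.any():
                        continue
                    cand = tip + step
                    if (cand < 0).any() or (cand >= np.array(volume.shape)).any():
                        continue
                    if tuple(cand) in visited:
                        continue
                    if volume[tuple(cand)] < intensity_threshold:
                        continue                        # not vessel-like
                    score = float(np.dot(step, direction))   # prefer forward progress
                    if score > best_score:
                        best, best_score = cand, score
        if best is None or best_score <= 0:             # nowhere forward to grow
            break
        direction = best - tip
        visited.add(tuple(best))
        path.append(best)
    return np.array(path)                               # extended centerline
```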
Step 606, connecting the extending central lines of the blood vessels according to the distribution condition of the blood vessels in the head, neck and chest areas to obtain the central line of the blood vessels.
Specifically, based on the vascular distribution of the human or animal body in the head, neck and chest region, the blood vessel extension centerlines are connected by path tracking and path searching. The connection of the vessel segments follows the centerline management scheme shown in FIG. 10: the internal carotid artery extends upward to connect with the end point of the middle cerebral artery or the anterior cerebral artery, and extends downward to connect with the distal end point of the common carotid artery.
As shown in FIG. 10, the left part is a diagram of human vascular distribution, which also indicates the disease types caused by plaque at different positions: arterial embolism (artery blocked by embolus), penetrating artery disease, intracranial atherosclerosis, carotid plaque with arteriogenic emboli, flow-reducing carotid stenosis, emboli, atrial fibrillation, aortic arch plaque, heart valve disease, and the like. The middle part is the anatomical image, which covers the intracranial, cervical, and thoracic vessel sets. The right part lists the specific vessel categories, including: middle cerebral artery (MCA), anterior cerebral artery (ACA), posterior cerebral artery (PCA), basilar artery (BA), internal carotid artery (ICA), external carotid artery (ECA), vertebral artery (VA), common carotid artery (CCA), brachiocephalic trunk (BCT), subclavian artery (SA), and aortic arch (AA).
The common carotid artery extends upward to connect with the lower end point of the internal carotid artery or the external carotid artery. The left common carotid artery is traced downward to its bifurcation point with the aortic arch, and this bifurcation point is taken as the end point of the finally presented centerline. The right common carotid artery extends downward to the brachiocephalic trunk, and the bifurcation of the brachiocephalic trunk with the aortic arch is taken as the end point of the presented centerline. The brachiocephalic trunk extends upward to connect with the right subclavian artery or the right common carotid artery. The left subclavian artery is managed separately. The aortic arch is managed along the ascending aorta with the aortic sinus as the starting point. The basilar artery connects upward with the left and right posterior cerebral arteries and downward with the left and right vertebral arteries. The vertebral artery is traced downward to the bifurcation point with the inferior femoral artery, which is taken as its starting end point. In one embodiment, the centerline extraction range of the head, neck and chest large-range data is shown in FIG. 10.
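The connection rules above can be encoded, for illustration, as an adjacency table between named vessel segments and applied by matching the nearest endpoints of the extension centerlines; the table entries, the distance criterion and the 10 mm gap threshold below are assumptions of this sketch and do not reproduce the full management scheme of FIG. 10.

```python
import numpy as np

# Allowed connections between named vessel segments (illustrative subset only)
ADJACENCY = {
    "ICA": ["MCA", "ACA", "CCA"],
    "CCA": ["ICA", "ECA", "BCT", "AA"],
    "BCT": ["CCA", "SA", "AA"],
    "BA":  ["PCA", "VA"],
}

def connect_centerlines(centerlines, adjacency=ADJACENCY, max_gap_mm=10.0):
    """centerlines: dict mapping vessel label -> ordered (N, 3) array of points in mm.
    Returns the pairs of labels whose nearest endpoints lie within max_gap_mm."""
    joined = []
    for label, targets in adjacency.items():
        if label not in centerlines:
            continue
        ends_a = [centerlines[label][0], centerlines[label][-1]]
        for target in targets:
            if target not in centerlines:
                continue
            ends_b = [centerlines[target][0], centerlines[target][-1]]
            gap = min(np.linalg.norm(np.asarray(a) - np.asarray(b))
                      for a in ends_a for b in ends_b)
            if gap <= max_gap_mm:
                joined.append((label, target))
    return joined
```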
In this embodiment, the seed-growth algorithm is used to extend the blood vessel sub-centerline of each vessel segment, and the extended blood vessel centerlines are then connected, so that the most complete large-range head, neck and chest blood vessel centerline can be obtained with limited resources, providing data support for subsequent large-range vascular lesion analysis.
In one embodiment, after the step of performing region-of-interest detection based on the image features of the target detection image of the target region to obtain quantitative analysis data as image analysis data, the method further includes:
First, at least one other image dataset corresponding to the head, neck and chest region is acquired.
The other image data set may be an image data set acquired from at least one of the head, neck and chest of a human or animal body, and does not coincide with the magnetic resonance image data set corresponding to the head, neck and chest region.
Specifically, the server 104 responds to an operation instruction from the terminal 102 and obtains, from the terminal 102, at least one other image dataset acquired from at least one of the head, neck and chest of a human or animal body. The other image dataset is a head, neck and chest magnetic resonance image sequence, including but not limited to a T1 image, an MRA image, a T1 enhanced image, a T2 image, a proton density image, and the like. The integrated vessel wall image corresponding to the other image dataset is a large-range scan sequence combining the aortic arch with the head and neck arterial vessels; the large-range image acquisition may be formed by segmented scanning and splicing, or obtained by a whole-range scan. If it is formed by segmented scanning and splicing, scan data of the aortic arch vessel segment, the carotid artery vessel segment and the intracranial vessel segment are acquired segment by segment, and the vessel segments are then spliced into a large-range other image dataset based on a rigid registration algorithm.
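Where the dataset is assembled from segmented scans, the rigid-registration splicing mentioned above can be sketched with SimpleITK's standard registration framework; the metric, optimizer and parameter values below are assumptions chosen for illustration, not the configuration of this disclosure.

```python
import SimpleITK as sitk

def rigid_splice(fixed, moving):
    """Rigidly register one scanned vessel segment (moving) onto its neighbour (fixed)
    and resample it into the fixed segment's space so the two can be stitched."""
    initial = sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetMetricSamplingStrategy(reg.RANDOM)
    reg.SetMetricSamplingPercentage(0.1)
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0, minStep=1e-4, numberOfIterations=200)
    reg.SetOptimizerScalesFromPhysicalShift()
    reg.SetInitialTransform(initial, inPlace=False)

    transform = reg.Execute(sitk.Cast(fixed, sitk.sitkFloat32),
                            sitk.Cast(moving, sitk.sitkFloat32))
    return sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0,
                         moving.GetPixelID())

# Hypothetical usage with assumed file names:
# spliced = rigid_splice(sitk.ReadImage("aortic_arch_segment.nii.gz"),
#                        sitk.ReadImage("carotid_segment.nii.gz"))
```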
The acquired other image data sets are then stored in the storage unit; when the server needs to process an other image data set, it is loaded from the storage unit into a volatile storage resource for the central processing unit to compute. The magnetic resonance image data of the other image data sets may be input to the central processing unit one item at a time, or several items of magnetic resonance image data may be input to the central processing unit simultaneously.
Second, global registration is performed on the image analysis data and each other image dataset to obtain an image sequence global registration result as the head, neck and chest region registration result.
The global registration may be an overall registration of the image analysis data with the other image dataset.
The image sequence global registration result may be the result of aligning the image analysis data and the other image datasets over the large range.
The head, neck and chest region registration result may be the result obtained once the alignment of the image analysis data and the other image datasets is completed.
Specifically, global registration based on large-range alignment is performed on the image analysis data and each other image dataset to obtain an image sequence global registration result. If the image sequence global registration result shows that the image analysis data and each other image dataset are aligned over the large range and at the tissue level, the image sequence global registration result is taken as the head, neck and chest region registration result.
Third, when at least one local area image in the image sequence global registration result indicates that head, neck and chest tissues are not aligned, local registration is performed on the image analysis data and each such local area image to obtain the head, neck and chest region registration result.
Wherein the local region image may be an image of a region that is not aligned in the global registration.
Wherein the local registration may be local registration of the image analysis data with other image datasets.
Specifically, if at least one local area image in the image sequence global registration result indicates head, neck and chest tissue misalignment, the respective local area images in which the tissues are misaligned are determined. Local registration based on head, neck and chest tissue alignment is then performed between the image analysis data and each local area image to obtain an image sequence local registration result. Finally, the image sequence global registration result and the image sequence local registration result are combined to serve as the head, neck and chest region registration result.
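A condensed sketch of this global-then-local flow, again with SimpleITK, is given below: the whole volumes are first aligned by a single global transform, and each region still flagged as misaligned is re-registered on a cropped sub-volume. How the misaligned regions are detected, and all metric and optimizer settings, are assumptions of the sketch.

```python
import SimpleITK as sitk

def global_then_local(fixed, moving, misaligned_rois):
    """misaligned_rois: list of (index, size) voxel boxes in the fixed image where the
    global result still shows head, neck and chest tissue misalignment."""
    # 1) global registration of the whole image sequence (rigid here, for brevity)
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(32)
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=100)
    reg.SetOptimizerScalesFromPhysicalShift()
    reg.SetInitialTransform(sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY), inPlace=False)
    global_tx = reg.Execute(sitk.Cast(fixed, sitk.sitkFloat32),
                            sitk.Cast(moving, sitk.sitkFloat32))
    globally_aligned = sitk.Resample(moving, fixed, global_tx, sitk.sitkLinear, 0.0,
                                     moving.GetPixelID())

    # 2) local registration restricted to each still-misaligned region
    local_txs = []
    for index, size in misaligned_rois:
        fixed_roi = sitk.RegionOfInterest(fixed, size, index)
        moving_roi = sitk.RegionOfInterest(globally_aligned, size, index)
        local_reg = sitk.ImageRegistrationMethod()
        local_reg.SetMetricAsMattesMutualInformation(32)
        local_reg.SetInterpolator(sitk.sitkLinear)
        local_reg.SetOptimizerAsGradientDescent(learningRate=0.5, numberOfIterations=100)
        local_reg.SetOptimizerScalesFromPhysicalShift()
        local_reg.SetInitialTransform(sitk.TranslationTransform(3), inPlace=False)
        local_txs.append(local_reg.Execute(sitk.Cast(fixed_roi, sitk.sitkFloat32),
                                           sitk.Cast(moving_roi, sitk.sitkFloat32)))
    # the global transform plus the per-region local transforms form the registration result
    return global_tx, local_txs
```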
In this embodiment, by using the registration relationship between the image analysis data and each other image dataset, and transferring the processing result of the image analysis data onto the other image datasets, the other image datasets can obtain the same analysis data once the head, neck and chest tissues are aligned over the large range. This facilitates diagnosis of abnormal vascular conditions across different image datasets, while reducing the amount of data the computer has to process and improving image processing efficiency.
In one embodiment, a flow chart of a processing algorithm corresponding to the method for obtaining the composition and quantitative parameters of a one-stop type blood vessel wall is shown in fig. 11.
In one embodiment, the medium-range and large-range magnetic resonance image datasets may be acquired by segmented scanning or by whole-range integrated scanning. Segmented scanning may mean that the head, neck and chest are scanned separately, or that some of these parts are scanned together and the remaining part is scanned separately, after which the different scan parts are spliced with a registration algorithm. Integrated scanning may be based on a large-range coil with a large field of view (FOV), directly acquiring large-range scan data without any additional post-processing splicing algorithm.
In one embodiment, the blood vessel centerline may be extracted based on a plurality of sequences, or based on a combination of at least one of the plurality of sequences.
In one embodiment, the multi-label segmentation model is used for segmentation; segmentation may be performed with a single model channel for one sequence, or with the same model channel shared by a plurality of sequences.
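The single-channel versus shared-channel alternatives can be illustrated with a toy multi-label segmentation network: feeding one sequence corresponds to a single input channel, while feeding several contrasts through the same model simply widens the input channel dimension. The PyTorch sketch below is a schematic stand-in, not the multi-label segmentation model of this disclosure.

```python
import torch
import torch.nn as nn

class MultiLabelSegNet(nn.Module):
    """Toy multi-label segmentation network: in_channels = 1 for a single sequence,
    or > 1 when several contrasts share the same model channel."""
    def __init__(self, in_channels=3, num_labels=11):  # e.g. 11 vessel labels (MCA ... AA)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.head = nn.Conv3d(32, num_labels, kernel_size=1)  # one logit map per label

    def forward(self, x):  # x: (batch, contrasts, D, H, W)
        return self.head(self.features(x))

# Shared-channel use: stack e.g. TOF-MRA, T1 and T1-CE volumes along the channel axis.
# logits = MultiLabelSegNet(in_channels=3)(torch.randn(1, 3, 32, 64, 64))
```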
In one embodiment, the vessel centerline may be managed per individual vessel segment, or by other connection schemes from the proximal end to the distal end. Optionally, the proximal and distal ends may be defined by the direction of arterial blood flow.
It should be understood that, although the steps in the flowcharts related to the above embodiments are sequentially shown as indicated by arrows, these steps are not necessarily sequentially performed in the order indicated by the arrows. The steps are not strictly limited to the order of execution unless explicitly recited herein, and the steps may be executed in other orders. Moreover, at least some of the steps in the flowcharts described in the above embodiments may include a plurality of steps or a plurality of stages, which are not necessarily performed at the same time, but may be performed at different times, and the order of the steps or stages is not necessarily performed sequentially, but may be performed alternately or alternately with at least some of the other steps or stages.
Based on the same inventive concept, an embodiment of the present application further provides a one-stop blood vessel wall composition and quantitative parameter acquisition device for implementing the head, neck and chest image processing method described above. The implementation of the solution provided by the device is similar to that described for the method above, so for the specific limitations in the one or more device embodiments provided below, reference may be made to the limitations of the one-stop blood vessel wall composition and quantitative parameter acquisition method above, which are not repeated here.
In one embodiment, as shown in fig. 12, there is provided a one-stop blood vessel wall composition and quantitative parameter acquisition device comprising: a data set acquisition module 1202, an image reconstruction module 1204, a segmentation result obtaining module 1206, a composition obtaining module 1208 of the vessel wall, a detection image determination module 1210, and a quantitative analysis data obtaining module 1212, wherein:
a data set acquisition module 1202, configured to acquire a magnetic resonance image data set corresponding to a head, neck and chest region; the magnetic resonance image data set comprises at least two of a T1 image, an MRA image, a T1 enhanced image, a T2 image and a proton density image, wherein the magnetic resonance image data set is formed by splicing scanning data of an aortic arch vessel section, a carotid artery vessel section and an intracranial vessel section;
the image reconstruction module 1204 is configured to extract a vessel centerline corresponding to the head, neck and chest region from the magnetic resonance image dataset, and perform curved surface reconstruction on the magnetic resonance image dataset along the vessel centerline to obtain a curved surface reconstructed image, where the curved surface reconstructed image has the vessel centerline;
a segmentation result obtaining module 1206, configured to respond to a cross section selection instruction on the user operation interface, and select a target position on the curved surface reconstructed image, so as to obtain a vessel wall segmentation result of the cross section reconstructed image of the target position;
A component obtaining module 1208 of the vessel wall, configured to compare the vessel wall segmentation result of the cross-sectional reconstructed image, determine a tissue difference of the vessel wall, and determine the component of the vessel wall; wherein the vessel wall segmentation result of the cross-sectional reconstructed image is different layers and/or at least two contrast images;
a detection image determining module 1210, configured to determine a target area in the head and neck chest area according to the blood vessel centerline, and determine a target detection image corresponding to the target area;
the quantitative analysis data obtaining module 1212 is configured to perform region of interest detection based on image features of the target detection image of the target region, and obtain quantitative analysis data as image analysis data.
In one embodiment, the quantitative analysis data obtaining module 1212 is further configured to perform region of interest detection based on the image features of the target detection image of the target region, to obtain a region of interest recognition result; and determining the abnormal category, the abnormal position and the abnormal grading result of the region of interest as quantitative analysis data under the condition that the region of interest identification result represents that the target region has abnormality.
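As one concrete, non-limiting example of turning a region-of-interest finding into quantitative analysis data, a stenosis percentage can be computed from lumen diameters measured on the cross-sectional reconstruction and mapped to a grade; the NASCET-style formula and the grading cut-offs below are illustrative assumptions, not values fixed by this disclosure.

```python
def stenosis_grade(lumen_diameter_mm, reference_diameter_mm):
    """Return (stenosis percentage, grade label) from the diseased lumen diameter and a
    normal reference diameter, NASCET-style: (1 - d_stenosis / d_reference) * 100."""
    if reference_diameter_mm <= 0:
        raise ValueError("reference diameter must be positive")
    percent = max(0.0, (1.0 - lumen_diameter_mm / reference_diameter_mm) * 100.0)
    if percent < 50:
        grade = "mild"
    elif percent < 70:
        grade = "moderate"
    else:
        grade = "severe"
    return percent, grade

# Example: a 2.1 mm residual lumen against a 6.0 mm reference -> (65.0, "moderate")
print(stenosis_grade(2.1, 6.0))
```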
In one embodiment, the segmentation result obtaining module 1206 is further configured to input the cross-sectional reconstructed image to a vessel wall segmentation model based on the target location to obtain a vessel wall segmentation result; taking the blood vessel wall segmentation result as a blood vessel object segmentation result; and/or inputting the cross section reconstruction image into a vessel plaque segmentation model based on the target position to obtain a vessel plaque segmentation result; and taking the vessel wall plaque segmentation result as a vessel object segmentation result.
In one embodiment, the image reconstruction module 1204 is further configured to perform multi-contrast sequence segmentation on the magnetic resonance image dataset to obtain a blood vessel trunk segmentation result; inputting the magnetic resonance image dataset into a multi-label segmentation model to obtain a magnetic resonance image segmentation result; obtaining a blood vessel segmentation result according to the blood vessel trunk segmentation result and the magnetic resonance image segmentation result; and determining the central line of the blood vessel corresponding to the head, neck and chest region according to the blood vessel segmentation result.
In one embodiment, the image reconstruction module 1204 is further configured to determine each vessel sub-center line of the head, neck and chest region according to the vessel segmentation result; extending the central line of each blood vessel to obtain the extending central line of each blood vessel; and connecting the extending central lines of the blood vessels according to the distribution condition of the blood vessels in the head, neck and chest areas to obtain the central line of the blood vessels.
In one embodiment, the quantitative analysis data obtaining module 1212 is further configured to obtain at least one other image dataset corresponding to the head and neck chest region; performing global registration on the image analysis data and each other image dataset to obtain an image sequence global registration result as a head and neck chest region registration result; and under the condition that at least one local area image exists in the image sequence global registration result to indicate that head and neck and chest tissues are not aligned, carrying out local registration on the image analysis data and each local area image to obtain a head and neck and chest region registration result.
The above-described individual modules of the one-stop blood vessel wall composition and quantitative parameter acquisition device may be implemented in whole or in part by software, hardware, and combinations thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a server, and the internal structure of which may be as shown in fig. 13. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer device is for storing server data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a method for obtaining composition and quantitative parameters of a vessel wall in one station.
It will be appreciated by those skilled in the art that the structure shown in fig. 13 is merely a block diagram of a portion of the structure associated with the present application and is not limiting of the computer device to which the present application applies, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In an embodiment, there is also provided a computer device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In one embodiment, a computer-readable storage medium is provided, storing a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
In one embodiment, a computer program product or computer program is provided that includes computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the steps in the above-described method embodiments.
It should be noted that, user information (including but not limited to user equipment information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party.
In one embodiment, a one-stop vascular wall composition and quantitative parameter display system is provided. The system includes a graphical user interface which may be divided into a plurality of different regions, each region having a different function. In this embodiment, the graphical user interface includes a combination of one or more of a user data entry area, a user editing area, an image display area, a quantitative parameter display area, and the like.
In this embodiment, the graphical user interface includes a first region, a second region, and a third region.
A first region displaying a magnetic resonance image dataset corresponding to the head, neck and chest region; the magnetic resonance image data set comprises at least two of a T1 image, an MRA image, a T1 enhanced image, a T2 image and a proton density image, wherein the magnetic resonance image data set is formed by splicing scanning data of an aortic arch blood vessel segment, a carotid artery blood vessel segment and an intracranial blood vessel segment. As shown in fig. 14, the first region contains the TOF-MRA image and the T1 image. The blood vessel center line corresponding to the head, neck and chest region can be extracted from the magnetic resonance image data set, and the blood vessel center line is obtained by fusing the regional blood vessel center lines of at least two anatomical images.
And a second area for displaying a selection area including a blood vessel list for a user to select a target blood vessel type. As shown in FIG. 14, the second area shows a plurality of blood vessel type combinations, each corresponding to a different display path, and the user can obtain the desired blood vessel view by selecting the target blood vessel type.
And a third area for displaying the blood vessel centerline corresponding to the head, neck and chest region extracted from the magnetic resonance image dataset, and for performing curved surface reconstruction on the magnetic resonance image dataset along the blood vessel centerline to obtain a curved surface reconstructed image with the blood vessel centerline. As shown in FIG. 14, the curved surface reconstructed image can be obtained by performing curved surface reconstruction based on the center points of the linear structure and the direction vector information.
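The curved surface reconstruction from center points and direction vectors can be sketched as a curved planar reformation: at every centerline point a short line perpendicular to the local direction is sampled from the volume, and the sampled lines are stacked into a 2D image. The choice of a single in-plane normal and linear interpolation via map_coordinates is a simplifying assumption of this sketch.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def curved_planar_reformation(volume, centerline, half_width=20):
    """volume: 3D array (z, y, x); centerline: (N, 3) array of voxel coordinates.
    Returns an (N, 2*half_width + 1) curved reconstruction along the centerline."""
    centerline = np.asarray(centerline, dtype=float)
    # local direction vectors along the centerline (finite differences)
    directions = np.gradient(centerline, axis=0)
    directions /= np.linalg.norm(directions, axis=1, keepdims=True) + 1e-8

    rows = []
    for point, tangent in zip(centerline, directions):
        # one in-plane normal perpendicular to the tangent (simplified choice)
        normal = np.cross(tangent, [0.0, 0.0, 1.0])
        if np.linalg.norm(normal) < 1e-6:
            normal = np.cross(tangent, [0.0, 1.0, 0.0])
        normal /= np.linalg.norm(normal)
        offsets = np.arange(-half_width, half_width + 1)[:, None]
        samples = point[None, :] + offsets * normal[None, :]
        rows.append(map_coordinates(volume, samples.T, order=1, mode="nearest"))
    return np.vstack(rows)
```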
Optionally, the third region is further provided with a cross-section selection key (the elliptically framed region in the figure) applied to a selected position on the curved reconstructed image, and the cross-section reconstructed image at the selected position is also displayed in the third region.
Optionally, the curved reconstructed image of the third region may be linked with the anatomical image of the first region. For example, while the cross-section selection key is operated in the third region, the anatomical image of the first region may simultaneously display the linked cross-section selection key.
Optionally, the graphical user interface may further comprise a patient information area, a scan sequence area, and a tool selection area. The patient information area may display patient age, height, weight, etc. The scan sequence region may display an imaging sequence corresponding to the magnetic resonance image dataset. The tool selection area may include a plurality of editing tools such as gray scale conversion, linear selection, region selection, labeling, etc.
Optionally, the third region is further provided with a blood vessel wall composition display region and a quantitative data display region. The blood vessel wall composition display area is used for displaying, in response to a cross-section selection instruction on the user operation interface, the blood vessel wall segmentation result of the cross-sectional reconstructed image at the target position selected on the curved reconstructed image; the blood vessel wall segmentation results of the cross-sectional reconstructed images are compared, and the tissue difference of the blood vessel wall is determined so as to judge the composition of the blood vessel wall, the vessel wall segmentation results of the cross-sectional reconstructed images being different layers and/or at least two contrast images.
The quantitative data display area is used for displaying the target area determined in the head, neck and chest area according to the blood vessel centerline and the target detection image corresponding to the target area, and for detecting the region of interest based on the image features of the target detection image of the target region to obtain quantitative analysis data as image analysis data.
Those skilled in the art will appreciate that implementing all or part of the above-described methods may be accomplished by way of a computer program stored on a non-transitory computer-readable storage medium which, when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the various embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. Volatile memory can include random access memory (RAM) or external cache memory, and the like. By way of illustration, and not limitation, RAM can take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM), among others. The databases referred to in the various embodiments provided herein may include at least one of relational databases and non-relational databases. The non-relational database may include, but is not limited to, a blockchain-based distributed database, and the like. The processors referred to in the embodiments provided herein may be general purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, quantum computing-based data processing logic units, etc., without being limited thereto.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The above examples only represent a few embodiments of the present application, which are described in more detail and are not to be construed as limiting the scope of the present application. It should be noted that it would be apparent to those skilled in the art that various modifications and improvements could be made without departing from the spirit of the present application, which would be within the scope of the present application. Accordingly, the scope of protection of the present application shall be subject to the appended claims.

Claims (9)

1. A method for obtaining composition and quantitative parameters of a vessel wall in one station, the method comprising:
acquiring a magnetic resonance image dataset corresponding to a head, neck and chest region; the magnetic resonance image data set comprises at least two of a T1 image, an MRA image, a T1 enhanced image, a T2 image and a proton density image, wherein the magnetic resonance image data set is formed by splicing scanning data of an aortic arch blood vessel segment, a carotid artery blood vessel segment and an intracranial blood vessel segment;
Extracting a blood vessel center line corresponding to the head, neck and chest region from the magnetic resonance image data set, and carrying out curved surface reconstruction on the magnetic resonance image data set along the blood vessel center line to obtain a curved surface reconstruction image, wherein the curved surface reconstruction image is provided with the blood vessel center line;
responding to a cross section selection instruction on a user operation interface, and selecting a target position on the curved surface reconstruction image to obtain a blood vessel wall segmentation result of the cross section reconstruction image of the target position;
wherein the obtaining the vessel wall segmentation result of the cross-sectional reconstructed image of the target position comprises:
inputting the cross section reconstruction image into a blood vessel wall segmentation model based on the target position to obtain a lumen wall segmentation result;
taking the lumen wall segmentation result as the blood vessel wall segmentation result;
and/or,
inputting the cross section reconstruction image into a vessel plaque segmentation model based on the target position to obtain a vessel plaque segmentation result;
taking the vessel wall plaque segmentation result as the vessel wall segmentation result;
comparing the blood vessel wall segmentation results of the cross-section reconstructed images, and determining the tissue difference of the blood vessel wall so as to judge the components of the blood vessel wall; wherein the vessel wall segmentation result of the cross-sectional reconstructed image is a different layer and/or at least two contrast images;
According to the blood vessel center line, determining a target area in the head, neck and chest area and determining a target detection image corresponding to the target area; the target area is an area which is selected to be subjected to characteristic analysis of the image in the head, neck and chest area;
detecting a region of interest based on image features of a target detection image of the target region to obtain quantitative analysis data as image analysis data;
the detecting the region of interest based on the image features of the target detection image of the target region to obtain quantitative analysis data includes:
detecting the region of interest based on the image characteristics of the target detection image of the target region to obtain a region of interest identification result;
and under the condition that the identification result of the region of interest represents that the target region has abnormality, determining the abnormality category, the abnormality position and the abnormality grading result of the region of interest as the quantitative analysis data.
2. The method of claim 1, wherein the quantitative analysis data comprises at least one of lesion classification results, lesion location information, vessel cross-sectional area and diameter parameters, vessel cross-sectional area parameters, or stenosis parameters.
3. The method of claim 1, wherein the vessel wall segmentation result comprises one or more segmented vessels; wherein each segmented blood vessel is one of the middle cerebral artery, anterior cerebral artery, posterior cerebral artery, basilar artery, internal carotid artery, external carotid artery, vertebral artery, common carotid artery, brachiocephalic trunk, subclavian artery, and aortic arch.
4. The method of claim 1, wherein extracting a vessel centerline corresponding to the head, neck and chest region from the magnetic resonance image dataset comprises:
performing multi-contrast sequence segmentation on the magnetic resonance image dataset to obtain a blood vessel trunk segmentation result;
inputting the magnetic resonance image dataset into a multi-label segmentation model to obtain a magnetic resonance image segmentation result;
obtaining a blood vessel segmentation result according to the blood vessel trunk segmentation result and the magnetic resonance image segmentation result;
and determining the central line of the blood vessel corresponding to the head, neck and chest region according to the blood vessel segmentation result.
5. The method of claim 4, wherein determining a vessel centerline corresponding to the head, neck and chest region based on the vessel segmentation result comprises:
Determining the central line of each blood vessel in the head, neck and chest region according to the blood vessel segmentation result;
extending each blood vessel sub-center line to obtain each blood vessel extending center line;
and connecting the extending central lines of the blood vessels according to the distribution condition of the blood vessels in the head, neck and chest areas to obtain the central line of the blood vessels.
6. A one-stop blood vessel wall composition and quantitative parameter acquisition device, the device comprising:
the data set acquisition module is used for acquiring a magnetic resonance image data set corresponding to the head, neck and chest region; the magnetic resonance image data set comprises at least two of a T1 image, an MRA image, a T1 enhanced image, a T2 image and a proton density image, wherein the magnetic resonance image data set is formed by splicing scanning data of an aortic arch blood vessel segment, a carotid artery blood vessel segment and an intracranial blood vessel segment;
the image reconstruction module is used for extracting a blood vessel center line corresponding to the head, neck and chest region from the magnetic resonance image data set, and carrying out curved surface reconstruction on the magnetic resonance image data set along the blood vessel center line to obtain a curved surface reconstruction image, wherein the curved surface reconstruction image is provided with the blood vessel center line;
The segmentation result obtaining module is used for responding to a cross section selection instruction on a user operation interface, selecting a target position on the curved surface reconstruction image so as to obtain a blood vessel wall segmentation result of the cross section reconstruction image of the target position;
wherein the obtaining the vessel wall segmentation result of the cross-sectional reconstructed image of the target position comprises:
inputting the cross section reconstruction image into a blood vessel wall segmentation model based on the target position to obtain a lumen wall segmentation result;
taking the lumen wall segmentation result as the blood vessel wall segmentation result;
and/or,
inputting the cross section reconstruction image into a vessel plaque segmentation model based on the target position to obtain a vessel plaque segmentation result;
taking the vessel wall plaque segmentation result as the vessel wall segmentation result;
the component obtaining module of the vessel wall is used for comparing the vessel wall segmentation results of the cross-section reconstruction images and determining the tissue difference of the vessel wall so as to judge the component of the vessel wall; wherein the vessel wall segmentation result of the cross-sectional reconstructed image is a different layer and/or at least two contrast images;
the detection image determining module is used for determining a target area in the head, neck and chest area according to the blood vessel center line and determining a target detection image corresponding to the target area; the target area is an area which is selected to be subjected to characteristic analysis of the image in the head, neck and chest area;
The quantitative analysis data obtaining module is used for detecting the region of interest based on the image characteristics of the target detection image of the target region to obtain quantitative analysis data serving as image analysis data;
the detecting the region of interest based on the image features of the target detection image of the target region to obtain quantitative analysis data includes:
detecting the region of interest based on the image characteristics of the target detection image of the target region to obtain a region of interest identification result;
and under the condition that the identification result of the region of interest represents that the target region has abnormality, determining the abnormality category, the abnormality position and the abnormality grading result of the region of interest as the quantitative analysis data.
7. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements, when executing the computer program:
acquiring a magnetic resonance image dataset corresponding to a head, neck and chest region; the magnetic resonance image data set comprises at least two of a T1 image, an MRA image, a T1 enhanced image, a T2 image and a proton density image, wherein the magnetic resonance image data set is formed by splicing scanning data of an aortic arch blood vessel segment, a carotid artery blood vessel segment and an intracranial blood vessel segment;
Extracting a blood vessel center line corresponding to the head, neck and chest region from the magnetic resonance image data set, and carrying out curved surface reconstruction on the magnetic resonance image data set along the blood vessel center line to obtain a curved surface reconstruction image, wherein the curved surface reconstruction image is provided with the blood vessel center line;
responding to a cross section selection instruction on a user operation interface, and selecting a target position on the curved surface reconstruction image to obtain a blood vessel wall segmentation result of the cross section reconstruction image of the target position;
wherein the obtaining the vessel wall segmentation result of the cross-sectional reconstructed image of the target position comprises:
inputting the cross section reconstruction image into a blood vessel wall segmentation model based on the target position to obtain a lumen wall segmentation result;
taking the lumen wall segmentation result as the blood vessel wall segmentation result;
and/or,
inputting the cross section reconstruction image into a vessel plaque segmentation model based on the target position to obtain a vessel plaque segmentation result;
taking the vessel wall plaque segmentation result as the vessel wall segmentation result;
comparing the blood vessel wall segmentation results of the cross-section reconstructed images, and determining the tissue difference of the blood vessel wall so as to judge the components of the blood vessel wall; wherein the vessel wall segmentation result of the cross-sectional reconstructed image is a different layer and/or at least two contrast images;
According to the blood vessel center line, determining a target area in the head, neck and chest area and determining a target detection image corresponding to the target area; the target area is an area which is selected to be subjected to characteristic analysis of the image in the head, neck and chest area;
detecting a region of interest based on image features of a target detection image of the target region to obtain quantitative analysis data as image analysis data;
the detecting the region of interest based on the image features of the target detection image of the target region to obtain quantitative analysis data includes:
detecting the region of interest based on the image characteristics of the target detection image of the target region to obtain a region of interest identification result;
and under the condition that the identification result of the region of interest represents that the target region has abnormality, determining the abnormality category, the abnormality position and the abnormality grading result of the region of interest as the quantitative analysis data.
8. The computer device of claim 7, wherein the processor when executing the computer program further implements:
and determining the probability of falling off and cracking of a blood vessel region according to the blood vessel wall segmentation result, and predicting the probability of occurrence of adverse events or determining the probability of stroke and cerebral hemorrhage of a patient.
9. A one-stop vascular wall composition and quantification parameter display system, comprising a graphical user interface, the graphical user interface comprising:
a first region for displaying a magnetic resonance image dataset corresponding to the head, neck and chest region; the magnetic resonance image data set comprises at least two of a T1 image, an MRA image, a T1 enhanced image, a T2 image and a proton density image, wherein the magnetic resonance image data set is formed by splicing scanning data of an aortic arch blood vessel segment, a carotid artery blood vessel segment and an intracranial blood vessel segment;
a second area for displaying a list selection area including blood vessels for a user to select a target blood vessel type;
a third region for displaying a blood vessel center line corresponding to the head, neck and chest region extracted from the magnetic resonance image dataset, and performing curved surface reconstruction on the magnetic resonance image dataset along the blood vessel center line to obtain a curved surface reconstruction image, wherein the curved surface reconstruction image is provided with the blood vessel center line;
wherein the third region is further provided with a blood vessel wall composition display region and a quantitative data display region;
the blood vessel wall component display area is used for displaying a target position selected on the curved surface reconstruction image in response to a cross section selection instruction on a user operation interface so as to obtain a blood vessel wall segmentation result of the cross section reconstruction image of the target position;
Wherein the obtaining the vessel wall segmentation result of the cross-sectional reconstructed image of the target position comprises:
inputting the cross section reconstruction image into a blood vessel wall segmentation model based on the target position to obtain a lumen wall segmentation result;
taking the lumen wall segmentation result as the blood vessel wall segmentation result;
and/or inputting the cross-section reconstruction image into a vascular plaque segmentation model based on the target position to obtain a vascular plaque segmentation result;
taking the vessel wall plaque segmentation result as the vessel wall segmentation result;
comparing the blood vessel wall segmentation results of the cross-section reconstructed images, and determining the tissue difference of the blood vessel wall so as to judge the components of the blood vessel wall; wherein the vessel wall segmentation result of the cross-sectional reconstructed image is a different layer and/or at least two contrast images;
the quantitative data display area is used for displaying a target area in the head, neck and chest area according to the blood vessel center line and determining a target detection image corresponding to the target area; the target area is an area which is selected to be subjected to characteristic analysis of the image in the head, neck and chest area;
detecting a region of interest based on image features of a target detection image of the target region to obtain quantitative analysis data as image analysis data;
The detecting the region of interest based on the image features of the target detection image of the target region to obtain quantitative analysis data includes: detecting the region of interest based on the image characteristics of the target detection image of the target region to obtain a region of interest identification result;
and under the condition that the identification result of the region of interest represents that the target region has abnormality, determining the abnormality category, the abnormality position and the abnormality grading result of the region of interest as the quantitative analysis data.
CN202311162912.4A 2023-09-11 2023-09-11 Method and device for acquiring composition and quantitative parameters of one-stop type blood vessel wall Active CN116934741B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311162912.4A CN116934741B (en) 2023-09-11 2023-09-11 Method and device for acquiring composition and quantitative parameters of one-stop type blood vessel wall

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311162912.4A CN116934741B (en) 2023-09-11 2023-09-11 Method and device for acquiring composition and quantitative parameters of one-stop type blood vessel wall

Publications (2)

Publication Number Publication Date
CN116934741A CN116934741A (en) 2023-10-24
CN116934741B true CN116934741B (en) 2023-12-26

Family

ID=88375555

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311162912.4A Active CN116934741B (en) 2023-09-11 2023-09-11 Method and device for acquiring composition and quantitative parameters of one-stop type blood vessel wall

Country Status (1)

Country Link
CN (1) CN116934741B (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230115927A1 (en) * 2021-10-13 2023-04-13 Shanghai United Imaging Healthcare Co., Ltd. Systems and methods for plaque identification, plaque composition analysis, and plaque stability detection

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112561781A (en) * 2020-12-07 2021-03-26 深圳先进技术研究院 Magnetic resonance vessel wall image analysis method, system and computer readable medium
CN113066091A (en) * 2021-03-29 2021-07-02 昆明同心医联科技有限公司 Cerebral vessel segmentation method and device based on black vessel wall curved surface reconstruction and storage medium
CN113223015A (en) * 2021-05-11 2021-08-06 清华大学 Vascular wall image segmentation method, device, computer equipment and storage medium
CN113298831A (en) * 2021-06-30 2021-08-24 上海联影医疗科技股份有限公司 Image segmentation method and device, electronic equipment and storage medium
CN116091587A (en) * 2021-11-08 2023-05-09 上海微创卜算子医疗科技有限公司 Method for determining parameters of vascular stent, electronic device and storage medium
CN114399594A (en) * 2021-12-28 2022-04-26 深圳先进技术研究院 Automatic curved surface reconstruction method for vessel wall image based on center line extraction
CN114881974A (en) * 2022-05-11 2022-08-09 上海联影医疗科技股份有限公司 Medical image analysis method and system
CN115641389A (en) * 2022-08-23 2023-01-24 清华大学 Method and device for generating multi-contrast magnetic resonance image and readable storage medium
CN116485810A (en) * 2023-03-27 2023-07-25 清华大学 Carotid artery segmentation method, device and equipment based on magnetic resonance image

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Application of new magnetic resonance techniques in the diagnosis of carotid artery stenosis; Lu Xiaoyan; Zhang Wanshi; Bi Yongmin; Song Yunlong; Shi Huiping; Fang Hong; Zhu Hongxian; Xiong Minghui; Wang Dong; Yu Min; Journal of Air Force General Hospital (No. 02); pp. 83-87, 90 *
Vessel centerline extraction from carotid magnetic resonance images; Cheng Shiyin; Duan Chaijie; Chen Huijun; Liang Zhengrong; Chinese Journal of Stereology and Image Analysis (No. 04); pp. 415-422 *

Also Published As

Publication number Publication date
CN116934741A (en) 2023-10-24

Similar Documents

Publication Publication Date Title
Ouyang et al. Video-based AI for beat-to-beat assessment of cardiac function
Dey et al. Artificial intelligence in cardiovascular imaging: JACC state-of-the-art review
Kusunose et al. Utilization of artificial intelligence in echocardiography
US11024025B2 (en) Automatic quantification of cardiac MRI for hypertrophic cardiomyopathy
US9968257B1 (en) Volumetric quantification of cardiovascular structures from medical imaging
Carass et al. Longitudinal multiple sclerosis lesion segmentation: resource and challenge
Saikumar et al. A novel implementation heart diagnosis system based on random forest machine learning technique.
US9959615B2 (en) System and method for automatic pulmonary embolism detection
Estrada et al. Retinal artery-vein classification via topology estimation
Mahapatra Semi-supervised learning and graph cuts for consensus based medical image segmentation
Wang et al. Slic-Seg: A minimally interactive segmentation of the placenta from sparse and motion-corrupted fetal MRI in multiple views
Zabihollahy et al. Convolutional neural network‐based approach for segmentation of left ventricle myocardial scar from 3D late gadolinium enhancement MR images
Ahirwar Study of techniques used for medical image segmentation and computation of statistical test for region classification of brain MRI
US7653227B2 (en) Hierarchical modeling in medical abnormality detection
Oghli et al. Automatic fetal biometry prediction using a novel deep convolutional network architecture
CN108603922A (en) Automatic cardiac volume is divided
Li et al. Automatic lumbar spinal MRI image segmentation with a multi-scale attention network
US11972571B2 (en) Method for image segmentation, method for training image segmentation model
US11600379B2 (en) Systems and methods for generating classifying and quantitative analysis reports of aneurysms from medical image data
Yang et al. 3D multi-scale residual fully convolutional neural network for segmentation of extremely large-sized kidney tumor
Sharma et al. A novel solution of using deep learning for left ventricle detection: enhanced feature extraction
Graves et al. Siamese pyramidal deep learning network for strain estimation in 3D cardiac cine-MR
Su et al. Res-DUnet: A small-region attentioned model for cardiac MRI-based right ventricular segmentation
CN116958679A (en) Target detection method based on weak supervision and related equipment
CN116934741B (en) Method and device for acquiring composition and quantitative parameters of one-stop type blood vessel wall

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant