WO2022000977A1 - System for acquiring aortic images based on deep learning - Google Patents

System for acquiring aortic images based on deep learning

Info

Publication number
WO2022000977A1
WO2022000977A1 (application PCT/CN2020/132798, CN2020132798W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
aortic
center
data
layer
Prior art date
Application number
PCT/CN2020/132798
Other languages
English (en)
French (fr)
Inventor
冯亮
刘广志
王之元
Original Assignee
苏州润迈德医疗科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN202010606963.1A external-priority patent/CN111815587A/zh
Priority claimed from CN202010606964.6A external-priority patent/CN111815588B/zh
Application filed by 苏州润迈德医疗科技有限公司 filed Critical 苏州润迈德医疗科技有限公司
Priority to JP2022579902A priority Critical patent/JP7446645B2/ja
Priority to CN202080100602.8A priority patent/CN115769251A/zh
Priority to EP20943564.3A priority patent/EP4174762A1/en
Publication of WO2022000977A1 publication Critical patent/WO2022000977A1/zh
Priority to US18/089,728 priority patent/US20230153998A1/en

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
                • G06N3/00 Computing arrangements based on biological models
                    • G06N3/02 Neural networks
                        • G06N3/08 Learning methods
            • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T7/00 Image analysis
                    • G06T7/0002 Inspection of images, e.g. flaw detection
                        • G06T7/0012 Biomedical image inspection
                    • G06T7/10 Segmentation; Edge detection
                        • G06T7/11 Region-based segmentation
                        • G06T7/12 Edge-based segmentation
                        • G06T7/136 Segmentation involving thresholding
                        • G06T7/174 Segmentation involving the use of two or more images
                    • G06T7/60 Analysis of geometric attributes
                        • G06T7/66 Analysis of image moments or centre of gravity
                • G06T2200/00 Indexing scheme for image data processing or generation, in general
                    • G06T2200/04 involving 3D image data
                • G06T2207/00 Indexing scheme for image analysis or image enhancement
                    • G06T2207/10 Image acquisition modality
                        • G06T2207/10072 Tomographic images
                            • G06T2207/10081 Computed x-ray tomography [CT]
                    • G06T2207/20 Special algorithmic details
                        • G06T2207/20021 Dividing image into blocks, subimages or windows
                        • G06T2207/20081 Training; Learning
                        • G06T2207/20084 Artificial neural networks [ANN]
                    • G06T2207/30 Subject of image; Context of image processing
                        • G06T2207/30004 Biomedical image processing
                            • G06T2207/30008 Bone
                                • G06T2207/30012 Spine; Backbone
                            • G06T2207/30048 Heart; Cardiac
                            • G06T2207/30061 Lung
                            • G06T2207/30101 Blood vessel; Artery; Vein; Vascular
        • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
            • G16H HEALTHCARE INFORMATICS, i.e. ICT SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
                • G16H30/00 ICT specially adapted for the handling or processing of medical images
                    • G16H30/40 ICT specially adapted for processing medical images, e.g. editing

Definitions

  • the present invention relates to the technical field of coronary medicine, in particular to a system for acquiring aortic images based on deep learning.
  • Cardiovascular disease is the leading cause of death in the industrialized world.
  • the major form of cardiovascular disease is caused by the chronic accumulation of fatty substances in the inner tissue layers of the arteries supplying the heart, brain, kidneys and lower extremities.
  • Progressive coronary artery disease restricts blood flow to the heart. Due to the lack of accurate information provided by current non-invasive tests, many patients require invasive catheter procedures to evaluate coronary blood flow. Therefore, there is a need for a non-invasive method for quantifying blood flow in human coronary arteries to assess the functional significance of possible coronary artery disease. A reliable assessment of arterial volume will therefore be important for treatment planning addressing the patient's needs.
  • hemodynamic properties such as fractional flow reserve (FFR) are important indicators for determining optimal treatment for patients with arterial disease. Routine assessment of fractional flow reserve uses invasive catheterization to directly measure blood flow properties, such as pressure and flow rate. However, these invasive measurement techniques present risks to patients and can result in significant costs to the health care system.
  • Computed tomography arterial angiography is a computed tomography technique used to visualize arterial blood vessels.
  • a beam of X-rays is passed from a radiation source through a region of interest in the patient's body to obtain projection images.
  • the present invention provides a system for acquiring aortic images based on deep learning, so as to solve the problems of many human factors, poor consistency and slow extraction speed in the prior art when using empirical values to acquire aortic images.
  • the present application provides a system for acquiring aortic images based on deep learning, including: a database device, a deep learning device, a data extraction device, and an aortic acquisition device;
  • the database device is used to generate a slice database of the aortic layer and a slice database of the non-aortic layer;
  • the deep learning device, connected to the database device, is used to perform deep learning on the slice data of the aortic layer and the slice data of the non-aortic layer to obtain a deep learning model, and to analyze the feature data through the deep learning model to obtain aortic data;
  • the data extraction device is used to extract the characteristic data of the CT sequence image or the three-dimensional data of the CT sequence image to be processed;
  • the aorta acquisition device is connected to the data extraction device and the deep learning device, and is configured to acquire an aortic image from the CT sequence image according to the deep learning model and the feature data.
  • the above-mentioned system for acquiring aortic images based on deep learning further includes: a CT storage device connected to the database device and the data extraction device, for acquiring three-dimensional data of CT sequence images.
  • the database device includes: an image processing structure, a slice data storage structure of the aortic layer, and a slice data storage structure of the non-aortic layer; the slice data storage structure of the aortic layer, the slice data storage structure of the non-aortic layer, and the CT storage device are all connected to the image processing structure;
  • the image processing structure is used to remove the lungs, descending aorta, spine, and ribs from the CT image to obtain a new image;
  • the slice data storage structure of the aortic layer is used to acquire slice data of the aortic layer from the new image;
  • the slice data storage structure of the non-aortic layer is used to obtain, from the new image, the slice data remaining after the slices stored in the aortic layer's slice data storage structure are removed, that is, the slice data of the non-aortic layer.
  • the image processing structure includes: a grayscale histogram unit, a grayscale volume acquisition unit, a lung tissue removal unit, a heart center of gravity extraction unit, a spine center of gravity extraction unit, a descending aorta image extraction unit, and a new image acquisition unit;
  • the grayscale histogram unit, connected to the CT storage device, is used to draw the grayscale histogram of each group of the CT sequence images;
  • the grayscale volume acquisition unit, connected to the grayscale histogram unit, is used to acquire, along the direction from the end point M of the grayscale histogram toward the origin O, the volume of the region from point M to point M-1, then from point M to point M-2, and so on, until the volume of each gray-value region from point M to point O is obtained, together with the ratio V of each region's volume to the total volume from point M to point O;
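The accumulation described above can be sketched in a few lines. This is a hedged illustration only; the function name `grayvalue_volume_ratios` and the reading of "volume" as a voxel count per gray-value region are assumptions, not the patent's implementation.

```python
import numpy as np

# Sketch of the grayscale-volume step: walk the gray histogram from its
# end point M back toward the origin O, accumulating the voxel count
# ("volume") of each region [M-k, M] and its ratio V to the total
# volume between M and O.
def grayvalue_volume_ratios(volume):
    values = volume.ravel()
    m = int(values.max())                       # end point M of the histogram
    hist = np.bincount(values, minlength=m + 1)
    total = hist.sum()                          # total volume from M to O
    cum = np.cumsum(hist[::-1])                 # volumes of [M, M], [M-1, M], ...
    ratios = cum / total                        # ratio V for each region
    return cum, ratios

vol = np.array([[0, 0, 1], [2, 2, 2], [3, 3, 3]])
cum, ratios = grayvalue_volume_ratios(vol)
# gray value 3 occurs three times, so the first accumulated volume is 3
```

The final ratio is always 1.0, since the last region [O, M] covers the whole histogram.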
  • the lung tissue removal unit, connected to the grayscale volume acquisition unit, is used to set the lung grayscale threshold Q_lung according to medical knowledge and CT imaging principles; if a gray value in the grayscale histogram is smaller than Q_lung, the image content corresponding to that gray value is removed to obtain a first image with the lung tissue removed;
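A minimal sketch of this thresholding step, assuming Q_lung is a chosen gray-value cutoff (the patent does not fix a numeric value, so the 500 used below is purely illustrative):

```python
import numpy as np

# Hedged sketch of the lung-removal step: voxels darker than the
# hypothetical threshold Q_lung (air-filled lung is dark on CT) are
# zeroed, leaving the "first image" with lung tissue removed.
def remove_lung_tissue(ct, q_lung):
    first_image = ct.copy()
    first_image[first_image < q_lung] = 0   # drop gray values below Q_lung
    return first_image

ct = np.array([[100, 400], [900, 1200]])
first_image = remove_lung_tissue(ct, q_lung=500)
# the two voxels below 500 are removed
```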
  • the descending aorta image extraction unit, connected to the heart center of gravity extraction unit, the spine center of gravity extraction unit, and the lung tissue removal unit, is used to acquire the descending aorta image of each group of CT sequence images according to the center of gravity of the heart and the center of gravity of the spine;
  • the new image acquisition unit, connected to the descending aorta image extraction unit, the lung tissue removal unit, the slice data storage structure of the aortic layer, and the slice data storage structure of the non-aortic layer, is used to remove the lungs, descending aorta, spine, and ribs from the CT image to obtain a new image.
  • the descending aorta region delineation unit includes: an average gray value acquisition module, a layered slice module, and a binarization processing module;
  • the average gray value acquisition module, connected to the lung tissue removal unit and the grayscale histogram unit, is used to obtain the pixel points PO whose gray values in the first image are greater than the descending aorta gray threshold Q_drop, and to calculate the average gray value of the pixel points PO;
  • the layered slicing module is connected to the average gray value acquisition module and the lung tissue removal unit, and is configured to start layered slicing from the bottom layer of the first image to obtain a first two-dimensional slice image group;
  • the binarization processing module, connected to the layered slice module and the grayscale histogram unit, is used to binarize the slice images according to the gray values Q_k, remove the impurity points in the first image, and obtain a binarized image, where k is a positive integer, Q_k represents the gray value corresponding to the kth pixel PO, and P(k) represents the pixel value corresponding to the kth pixel PO.
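The binarization can be sketched as follows. The patent's exact thresholding formula did not survive extraction, so using the average gray value of the candidate pixels PO as the threshold is an assumption made here for illustration:

```python
import numpy as np

# Sketch of the binarization: P(k) = 1 when the gray value Q_k of pixel
# k reaches the threshold, else 0. Taking the average gray value of the
# candidate pixels PO as that threshold is an assumption.
def binarize(slice_img, threshold):
    return (slice_img >= threshold).astype(np.uint8)

img = np.array([[10, 200], [180, 50]])
binary = binarize(img, threshold=150)
# pixels at or above the threshold become 1, the rest 0
```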
  • the descending aorta region delineation unit further includes: a rough acquisition module and a precise acquisition module;
  • the rough acquisition module, connected to the binarization processing module, is configured to set the radius threshold of the circle formed by the descending aorta at the edge of the heart as r_threshold and, based on the fact that the distance between the descending aorta and the heart is smaller than the distance between the spine and the heart, obtain the approximate area of the spine and the approximate area of the descending aorta;
  • the precise acquisition module, connected to the rough acquisition module, is configured to remove erroneous pixel points according to the approximate area of the descending aorta, yielding the circle corresponding to the descending aorta.
  • the data extraction device includes: a connected domain structure and a feature data acquisition structure;
  • the connected domain structure is connected to the new image acquisition unit, and is used for acquiring multiple binarized images of the CT image to be processed from the new image acquisition unit;
  • the feature data acquisition structure, connected to the connected domain structure, is used to obtain, for each binarized image in turn starting from the top layer, the connected domain of the image, together with the quasi-circle center C_k, area S_k, and quasi-circle radius R_k corresponding to that connected domain; the distance C_k - C_(k-1) between the centers of two adjacent layers; the distance C_k - C_1 from the center C_k of each layer's slice to the top-layer center C_1; and the area M_k and filter area H_k formed by all pixels that are greater than 0 in the current layer but equal to 0 in the previous layer, where k denotes the kth layer slice, k ≥ 1; these quantities constitute the feature data.
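A sketch of the per-layer geometric features, under the assumed definitions that C_k is the centroid of the connected domain, S_k its pixel count, and R_k the radius of a circle of equal area (the patent does not spell these formulas out):

```python
import math
import numpy as np

# Per-layer feature sketch: the quasi-circle centre C_k is the centroid
# of the layer's connected-domain mask, the area S_k is the pixel count,
# and the quasi-circle radius R_k treats the domain as a circle of equal
# area. Distances such as |C_k - C_(k-1)| then reduce to Euclidean
# distances between these centres.
def layer_features(mask):
    ys, xs = np.nonzero(mask)
    area = len(xs)                              # S_k
    center = (float(ys.mean()), float(xs.mean()))  # C_k
    radius = math.sqrt(area / math.pi)          # R_k of an equal-area circle
    return center, area, radius

mask = np.zeros((5, 5), dtype=np.uint8)
mask[1:4, 1:4] = 1                              # a 3x3 square "quasi-circle"
center, area, radius = layer_features(mask)
# centroid (2.0, 2.0), area 9, radius sqrt(9 / pi)
```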
  • a data processing unit is set inside the feature data acquisition structure, together with a circle center acquisition unit, an area acquisition unit, and a radius acquisition unit, each connected to the data processing unit;
  • the data processing unit is used to apply the Hough detection algorithm: starting from the top layer, detect three layers of slices in sequence, obtaining a center and a radius from each layer's slice to form three circles; remove the point with the largest deviation among the three centers to obtain the descending aorta seed point P_1; obtain the connected domain A_1 of the layer where the seed point P_1 is located; take the barycenter of the connected domain A_1 as the quasi-circle center C_1, and obtain the area S_1 of the connected domain A_1 and the quasi-circle radius R_1; taking C_1 as a seed point, obtain the connected domain A_2 of the next layer; expand the connected domain A_1 to obtain the expansion area D_1, and remove from the connected domain A_2 the part overlapping the expansion area D_1 to obtain the connected domain A_2'; set the volume threshold V_threshold of the connected domain, and if the volume V_2 of the connected domain A_2' satisfies V_2 ≥ V_threshold, ...
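The expand-and-subtract step (A_1 dilated to D_1, then removed from A_2) can be sketched as below. A simple 4-neighbour dilation stands in for whatever structuring element the patent actually uses; this is an assumption:

```python
import numpy as np

# Sketch of one step of the layer-by-layer tracking: the previous
# layer's connected domain A1 is dilated to D1, and pixels of the next
# layer's domain A2 falling inside D1 are removed, giving A2'.
def dilate(mask):
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]      # shift down
    out[:-1, :] |= mask[1:, :]      # shift up
    out[:, 1:] |= mask[:, :-1]      # shift right
    out[:, :-1] |= mask[:, 1:]      # shift left
    return out

def remove_overlap(a1, a2):
    d1 = dilate(a1)
    return a2 & ~d1                 # A2' = A2 minus the expanded A1

a1 = np.zeros((4, 4), dtype=bool); a1[0, 0] = True
a2 = np.zeros((4, 4), dtype=bool); a2[0, 1] = True; a2[2, 2] = True
a2p = remove_overlap(a1, a2)
# (0, 1) lies in the dilation of A1 and is removed; (2, 2) survives
```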
  • the circle center acquisition unit is used to store the quasi-circle centers C_1, C_2, ..., C_k.
  • the area acquisition unit is used to store the areas S_1, S_2, ..., S_k, and the filter areas H_1, H_2, ..., H_k.
  • the radius acquisition unit is used to store the quasi-circle radii R_1, R_2, ..., R_k.
  • the aorta acquisition device includes: a gradient edge structure and an aortic image acquisition structure;
  • the gradient edge structure, connected to the deep learning device, is used to expand the aorta data; multiply the expanded aorta data with the original CT image data and calculate the gradient of each pixel to obtain the gradient data; extract gradient edges according to the gradient data; and subtract the gradient edges from the expanded aorta data;
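A sketch of the gradient-edge step, with an assumed gradient operator (`np.gradient`) and an assumed edge threshold, since the patent names neither:

```python
import numpy as np

# Sketch: multiply the expanded aorta mask with the original CT data,
# compute each pixel's gradient magnitude, take strong-gradient pixels
# as the edge, and subtract that edge from the expanded mask.
def strip_gradient_edge(mask, ct, edge_thresh):
    masked = mask.astype(ct.dtype) * ct
    gy, gx = np.gradient(masked.astype(float))
    magnitude = np.hypot(gy, gx)
    edge = magnitude > edge_thresh      # assumed edge criterion
    return mask & ~edge

mask = np.ones((3, 3), dtype=bool)
ct = np.array([[0, 0, 0], [0, 500, 0], [0, 0, 0]])
out = strip_gradient_edge(mask, ct, edge_thresh=100)
# the four edge neighbours of the bright pixel are stripped; the centre
# itself (a local extremum, gradient zero) and the corners remain
```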
  • the aortic image acquisition structure, connected to the new image acquisition unit and the gradient edge structure, is used to generate a seed point list according to the quasi-circle centers, and to extract a connected area according to the seed point list to obtain an aortic image.
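"Extract a connected area according to the seed point list" is, in essence, region growing. A minimal flood-fill sketch (4-connectivity assumed; the patent does not specify the connectivity):

```python
from collections import deque
import numpy as np

# Sketch of seed-list region growing: a breadth-first flood fill that
# collects every foreground pixel 4-connected to any seed point,
# yielding the extracted aortic region.
def grow_from_seeds(mask, seeds):
    region = np.zeros_like(mask, dtype=bool)
    queue = deque(s for s in seeds if mask[s])
    for s in queue:
        region[s] = True
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                    and mask[ny, nx] and not region[ny, nx]):
                region[ny, nx] = True
                queue.append((ny, nx))
    return region

mask = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 1]], dtype=bool)
region = grow_from_seeds(mask, [(0, 0)])
# the isolated foreground pixel at (2, 2) is not reached
```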
  • the present application provides a system for acquiring aortic images based on deep learning: a deep learning model is obtained from the feature data and the database, and the aortic image is obtained through the deep learning model. The system offers good extraction quality, high robustness, and accurate calculation results, and has high promotion value in clinical practice.
  • FIG. 1 is a structural block diagram of an embodiment of a system for acquiring an aortic image based on deep learning of the present application
  • FIG. 2 is a structural block diagram of another embodiment of the system for acquiring an aortic image based on deep learning of the present application
  • FIG. 3 is a structural block diagram of the database device 100 of the present application.
  • FIG. 4 is a structural block diagram of the image processing structure 110 of the present application.
  • FIG. 5 is a structural block diagram of the descending aorta image storage structure 160 of the present application.
  • FIG. 6 is a structural block diagram of the descending aorta region delineation unit 162 of the present application.
  • FIG. 7 is a structural block diagram of the data extraction apparatus 300 of the present application.
  • FIG. 8 is a structural block diagram of the aorta obtaining apparatus 400 of the present application.
  • the present application provides a system for acquiring aortic images based on deep learning, including: a database device 100, a deep learning device 200, a data extraction device 300, and an aorta acquisition device 400; the database device 100 is used to generate the slice database of the aortic layer and the slice database of the non-aortic layer; the deep learning device 200, connected to the database device 100, is used to perform deep learning on the slice data of the aortic layer and the slice data of the non-aortic layer to obtain a deep learning model, and to analyze the feature data through the deep learning model to obtain aortic data; the data extraction device 300 is used to extract the feature data of the CT sequence images to be processed, or the three-dimensional data of the CT sequence images; the aorta acquisition device 400, connected to the data extraction device 300 and the deep learning device 200, is used to acquire the aorta image from the CT sequence images according to the deep learning model and the feature data.
  • an embodiment of the present application further includes: a CT storage device 500 connected to the database device 100 and the data extraction device 300 for acquiring three-dimensional data of CT sequence images.
  • the database apparatus 100 includes: an image processing structure 110, a slice data storage structure 120 of the aortic layer, and a slice data storage structure 130 of the non-aortic layer; the slice data storage structure 120 of the aortic layer, the slice data storage structure 130 of the non-aortic layer, and the CT storage device 500 are all connected to the image processing structure 110; the image processing structure 110 is used to remove the lungs, descending aorta, spine, and ribs from the CT image to obtain a new image; the slice data storage structure 120 of the aortic layer is used to obtain the slice data of the aortic layer from the new image; the slice data storage structure 130 of the non-aortic layer is used to obtain, from the new image, the slice data remaining after the slices in the slice data storage structure 120 of the aortic layer are removed, that is, the slice data of the non-aortic layer.
  • the image processing structure 110 includes: a grayscale histogram unit 111, a grayscale volume acquisition unit 112, a lung tissue removal unit 113, a heart center of gravity extraction unit 114, a spine center of gravity extraction unit 115, a descending aorta image extraction unit 116, and a new image acquisition unit 117; the grayscale histogram unit 111, connected to the CT storage device 500, is used to draw a grayscale histogram of each group of CT sequence images; the grayscale volume acquisition unit 112, connected to the grayscale histogram unit 111, is used to acquire, along the direction from the end point M of the grayscale histogram toward the origin O, the volume from point M to point M-1, then from point M to point M-2, and so on, until the volume of each gray-value region from point M to point O and its ratio V to the total volume are obtained.
  • the descending aorta image extraction unit 116, connected to the heart center of gravity extraction unit 114, the spine center of gravity extraction unit 115, and the lung tissue removal unit 113, is used to obtain the descending aorta images of each group of CT sequence images according to the center of gravity of the heart and the center of gravity of the spine; the new image acquisition unit 117, connected to the descending aorta image extraction unit 116, the lung tissue removal unit 113, the slice data storage structure 120 of the aortic layer, and the slice data storage structure 130 of the non-aortic layer, is used to remove the lungs, descending aorta, spine, and ribs from the CT image and acquire new images.
  • the descending aorta image extraction unit 116 includes: a descending aorta region delineation unit 162 and a descending aorta image acquisition unit 163; the descending aorta region delineation unit 162, connected to the grayscale histogram unit 111, the heart center of gravity extraction unit 114, the spine center of gravity extraction unit 115, and the lung tissue removal unit 113, is used to project the center of gravity P_2 of the heart onto the first image to obtain the center of the heart O_1; set the descending aorta gray threshold Q_drop and binarize the first image; and, according to the distance between the descending aorta and the heart center O_1 and the distance between the spine and the heart center O_1, obtain the circle corresponding to the descending aorta; the descending aorta image acquisition unit 163 is connected to the lung tissue removal unit 113 and the descending aorta region delineation unit 162.
  • the descending aorta region delineation unit 162 includes: an average gray value acquisition module 1621, a layered slice module 1622, and a binarization processing module 1623; the average gray value acquisition module 1621, connected to the lung tissue removal unit 113 and the grayscale histogram unit 111, is used to obtain the pixel points PO whose gray values in the first image are greater than the descending aorta gray threshold Q_drop, and to calculate the average gray value of the pixel points PO; the layered slice module 1622, connected to the average gray value acquisition module 1621 and the lung tissue removal unit 113, is used to start layered slicing from the bottom layer of the first image to obtain the first two-dimensional slice image group; the binarization processing module 1623, connected to the layered slice module 1622 and the grayscale histogram unit 111, is used to binarize the slice images according to the gray values Q_k, remove the impurity points in the first image, and obtain a binarized image, where k is a positive integer, Q_k represents the gray value corresponding to the kth pixel PO, and P(k) represents the pixel value corresponding to the kth pixel PO.
  • the descending aorta region delineation unit 162 further includes: a rough acquisition module 1624 and a precise acquisition module 1625; the rough acquisition module 1624, connected to the binarization processing module 1623, is used to set the radius threshold of the circle formed by the descending aorta at the edge of the heart as r_threshold and, based on the fact that the distance between the descending aorta and the heart is smaller than the distance between the spine and the heart, obtain the approximate area of the spine and the approximate area of the descending aorta; the precise acquisition module 1625, connected to the rough acquisition module 1624, is used to remove the erroneous pixel points according to the approximate area of the descending aorta, yielding the circle corresponding to the descending aorta.
  • a Hough detection element 1626 is set in the rough acquisition module 1624; the Hough detection element 1626 is used to determine the approximate area of the descending aorta according to the following principle: if the radius r of a circle obtained by the Hough detection algorithm satisfies r > r_threshold, the circle is the one corresponding to the spine, its center and radius are not recorded, and it delimits the approximate area of the spine; if r ≤ r_threshold, the circle may be the one corresponding to the descending aorta, its center and radius are recorded, and it delimits the approximate area of the descending aorta.
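The classification rule above reduces to a radius comparison once the Hough circles are in hand. A sketch (r_threshold and the sample circles are illustrative values, not from the patent):

```python
# Sketch of the spine/descending-aorta split: circles found by Hough
# detection whose radius exceeds r_threshold are attributed to the
# spine and discarded; smaller circles are recorded as candidates for
# the descending aorta.
def split_circles(circles, r_threshold):
    aorta, spine = [], []
    for center, radius in circles:
        (spine if radius > r_threshold else aorta).append((center, radius))
    return aorta, spine

circles = [((40, 60), 12.0), ((80, 60), 25.0)]   # (center, radius) pairs
aorta, spine = split_circles(circles, r_threshold=18.0)
# only the small circle is kept as a descending-aorta candidate
```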
  • a seed point acquisition element 1627 is set in the precise acquisition module 1625; the seed point acquisition element 1627, connected to the Hough detection element 1626, is used to screen the centers and radii of the circles in the approximate area of the descending aorta and remove the circles whose centers deviate most between adjacent slices, that is, to remove the erroneous pixel points and form the seed point list of the descending aorta.
  • the data extraction apparatus 300 includes: a connected domain structure 310 and a feature data acquisition structure 320; the connected domain structure 310, connected to the new image acquisition unit 117, is used to acquire multiple binarized images of the CT image to be processed from the new image acquisition unit 117; the feature data acquisition structure 320, connected to the connected domain structure 310, is used to sequentially acquire, starting from the top layer, the connected domain of each binarized image together with the corresponding feature data.
  • a data processing unit 321 is set inside the feature data acquisition structure 320, together with a circle center acquisition unit 322, an area acquisition unit 323, and a radius acquisition unit 324, each connected to the data processing unit 321;
  • the data processing unit 321 is used to apply the Hough detection algorithm: starting from the top layer, detect 3 slices in turn, obtaining a circle center and a radius from each slice to form 3 circles; remove the point with the largest deviation among the 3 centers to obtain the descending aorta seed point P_1; obtain the connected domain A_1 of the layer where the seed point P_1 is located; take the center of gravity of the connected domain A_1 as the quasi-circle center C_1, and obtain the area S_1 of the connected domain A_1 and the quasi-circle radius R_1; take C_1 as the seed point and obtain the connected domain A_2 of the next layer; expand the connected domain A_1 to obtain the expanded region D_1, and remove from the connected domain A_2 the part overlapping D_1.
  • the aorta acquisition device 400 includes: a gradient edge structure 410 and an aortic image acquisition structure 420; the gradient edge structure 410, connected to the deep learning device 200, is used to expand the aorta data; multiply the expanded aorta data with the original CT image data and calculate the gradient of each pixel to obtain the gradient data; extract the gradient edge according to the gradient data; and subtract the gradient edge from the expanded aorta data;
  • the aortic image acquisition structure 420, connected to the CT storage device 500 and the gradient edge structure 410, is used to generate a seed point list according to the quasi-circle centers, and to extract the connected area according to the seed point list to obtain the aortic image.
  • aspects of the present invention may be embodied as a system, method, or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware implementation, an entirely software implementation (including firmware, resident software, microcode, etc.), or an implementation combining software and hardware aspects, all of which may be referred to herein collectively as a "circuit," "module," or "system." Furthermore, in some embodiments, aspects of the present invention may also be implemented as a computer program product embodied on one or more computer-readable media having computer-readable program code embodied thereon. Implementation of the method and/or system of embodiments of the present invention may involve performing or completing selected tasks manually, automatically, or a combination thereof.
  • a data processor such as a computing platform for executing a plurality of instructions.
  • the data processor includes volatile storage for storing instructions and/or data, and/or non-volatile storage for storing instructions and/or data, such as a magnetic hard disk and/or removable media.
  • a network connection is also provided.
  • a display and/or user input device such as a keyboard or mouse, is optionally also provided.
  • the computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium.
  • the computer-readable storage medium can be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the above.
  • a computer-readable storage medium can be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a propagated data signal in baseband or as part of a carrier wave, with computer-readable program code embodied thereon. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium that can transmit, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
  • Program code embodied on a computer-readable medium may be transmitted using any suitable medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • computer program code for performing operations for various aspects of the invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (e.g., through the Internet using an Internet service provider).
  • These computer program instructions can also be stored on a computer-readable medium; the instructions cause a computer, other programmable data processing apparatus, or other device to operate in a particular manner, whereby the instructions stored on the computer-readable medium produce an article of manufacture including instructions implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • Computer program instructions can also be loaded onto a computer (e.g., a coronary artery analysis system) or other programmable data processing device to cause a series of operational steps to be performed on the computer, other programmable data processing device, or other device to produce a computer-implemented process, such that the instructions executing on the computer, other programmable apparatus, or other device provide a process for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Public Health (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Geometry (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

A system for acquiring aortic images based on deep learning, comprising: a database device (100), a deep learning device (200), a data extraction device (300), and an aorta acquisition device (400). The database device (100) is configured to generate a slice database of aortic layers and a slice database of non-aortic layers; the deep learning device (200), connected to the database device (100), is configured to perform deep learning on the slice data and analyze feature data to obtain aortic data; the data extraction device (300) is configured to extract the feature data of the CT sequence images to be processed; the aorta acquisition device (400), connected to the data extraction device (300) and the deep learning device (200), is configured to acquire an aortic image from the CT sequence images according to the deep learning model and the feature data. The system derives a deep learning model from the feature data and the databases and acquires the aortic image through that model; it offers good extraction quality, high robustness, and accurate computational results, and has considerable value for clinical adoption.

Description

System for Acquiring Aortic Images Based on Deep Learning
Technical Field
The present invention relates to the technical field of coronary artery medicine, and in particular to a system for acquiring aortic images based on deep learning.
Background Art
Cardiovascular disease is the leading cause of death in the industrialized world. The major forms of cardiovascular disease are caused by the chronic accumulation of fatty material in the inner tissue layer of the arteries supplying the heart, brain, kidneys, and lower extremities. Progressive coronary artery disease restricts blood flow to the heart. Because of the lack of accurate information from current non-invasive tests, many patients require invasive catheterization procedures to assess coronary blood flow. There is therefore a need for a non-invasive method of quantifying blood flow in the human coronary arteries in order to assess the functional significance of possible coronary artery disease. A reliable assessment of arterial capacity is therefore important for treatment planning that addresses patient needs. Recent studies have shown that hemodynamic properties, such as fractional flow reserve (FFR), are important indicators for determining the optimal treatment for patients with arterial disease. Conventional assessment of fractional flow reserve uses invasive catheterization to directly measure blood-flow characteristics such as pressure and flow velocity. However, these invasive measurement techniques carry risk for the patient and can impose significant costs on the healthcare system.
Computed tomography angiography is a computed tomography technique used to visualize arterial vessels. For this purpose, a beam of X-rays passes from a radiation source through a region of interest in the patient's body to obtain projection images.
In the prior art, acquiring aortic images on the basis of empirical values suffers from heavy dependence on human factors, poor consistency, and slow extraction.
Summary of the Invention
The present invention provides a system for acquiring aortic images based on deep learning, in order to solve the prior-art problems of heavy dependence on human factors, poor consistency, and slow extraction when aortic images are acquired on the basis of empirical values.
To achieve the above object, the present application provides a system for acquiring aortic images based on deep learning, comprising: a database device, a deep learning device, a data extraction device, and an aorta acquisition device;
the database device is configured to generate a slice database of aortic layers and a slice database of non-aortic layers;
the deep learning device, connected to the database device, is configured to perform deep learning on the slice data of the aortic layers and the slice data of the non-aortic layers to obtain a deep learning model, and to analyze feature data through the deep learning model to obtain aortic data;
the data extraction device is configured to extract the feature data of the CT sequence images to be processed or of the three-dimensional data of the CT sequence images;
the aorta acquisition device, connected to the data extraction device and the deep learning device, is configured to acquire an aortic image from the CT sequence images according to the deep learning model and the feature data.
Optionally, the above system for acquiring aortic images based on deep learning further comprises: a CT storage device connected to the database device and the data extraction device, configured to acquire three-dimensional data of the CT sequence images.
Optionally, in the above system for acquiring aortic images based on deep learning, the database device comprises: an image processing structure, a slice data storage structure for the aortic layers, and a slice data storage structure for the non-aortic layers, wherein the slice data storage structure for the aortic layers, the slice data storage structure for the non-aortic layers, and the CT storage device are all connected to the image processing structure;
the image processing structure is configured to produce a new image in which the lungs, the descending aorta, the spine, and the ribs have been removed from the CT image;
the slice data storage structure for the aortic layers is configured to acquire the slice data of the aortic layers from the new image;
the slice data storage structure for the non-aortic layers is configured to acquire, from the new image, the slice data remaining after the slices stored in the slice data storage structure for the aortic layers are removed, namely the slice data of the non-aortic layers.
Optionally, in the above system for acquiring aortic images based on deep learning, the image processing structure comprises: a gray-level histogram unit, a gray-level volume acquisition unit, a lung tissue removal unit, a heart centroid extraction unit, a spine centroid extraction unit, a descending aorta image extraction unit, and a new image acquisition unit;
the gray-level histogram unit, connected to the CT storage device, is configured to plot the gray-level histogram of each group of the CT sequence images;
the gray-level volume acquisition unit, connected to the gray-level histogram structure, is configured to acquire, proceeding from the end point M of the gray-level histogram toward the origin O, the volumes of the gray-value regions from point M to point M-1, from point M to point M-2, and so on until the volume of each gray-value region from point M to point O is obtained; and to obtain the ratio V of the volume of each gray-value region to the volume of the total region from point M to point O;
the lung tissue removal unit, connected to the gray-level volume acquisition unit, is configured to set a lung gray-level threshold Q according to medical knowledge and the imaging principles of CT; if a gray value in the gray-level histogram is smaller than Q, the image corresponding to that gray value is removed, yielding a first image with the lung tissue removed;
the heart centroid extraction unit, connected to the gray-level volume acquisition unit, is configured to acquire the heart centroid P2: if V=b, the starting point corresponding to the gray-value region is picked and projected onto the first image to obtain a three-dimensional image of the heart region, and the physical centroid P2 of that three-dimensional image is picked, where b is a constant, 0.2<b<1.
the spine centroid extraction unit, connected to the CT storage device and the heart centroid extraction unit, is configured to acquire the spine centroid P1: if V=a, the starting point corresponding to the gray-value region is picked and projected onto the three-dimensional CT image to obtain a three-dimensional image of the bone region, and the physical centroid P1 of that three-dimensional image is picked, where a is a constant, 0<a<0.2.
the descending aorta image extraction unit, connected to the heart centroid extraction structure, the spine centroid extraction structure, and the lung tissue removal unit, is configured to acquire the descending aorta image of each group of CT sequence images according to the heart centroid and the spine centroid;
the new image acquisition unit, connected to the descending aorta image extraction unit, the lung tissue removal unit, the slice data storage structure for the aortic layers, and the slice data storage structure for the non-aortic layers, is configured to remove the lungs, the descending aorta, the spine, and the ribs from the CT image to obtain the new image.
Optionally, in the above system for acquiring aortic images based on deep learning, the descending aorta region delimitation unit comprises: an average gray value acquisition module, a layered slicing module, and a binarization module;
the average gray value acquisition module, connected to the lung tissue removal unit and the gray-level histogram structure, is configured to acquire the pixel points PO in the first image whose gray values are greater than the descending aorta gray-level threshold Q, and to compute the average gray value of the pixel points PO
Figure PCTCN2020132798-appb-000001
the layered slicing module, connected to the average gray value acquisition module and the lung tissue removal unit, is configured to slice the first image layer by layer starting from its bottom layer, obtaining a first group of two-dimensional slice images;
the binarization module, connected to the layered slicing module and the gray-level histogram structure, is configured to binarize the slice images according to
Figure PCTCN2020132798-appb-000002
removing the noise points in the first image and obtaining binarized images, where k is a positive integer, Qk denotes the gray value of the k-th pixel point PO, and P(k) denotes the pixel value of the k-th pixel point PO.
Optionally, in the above system for acquiring aortic images based on deep learning, the descending aorta region delimitation unit further comprises: a coarse acquisition module and a precise acquisition module;
the coarse acquisition module, connected to the binarization module, is configured to set the radius threshold of the circle formed from the descending aorta to the edge of the heart to r and, on the basis that the distance between the descending aorta and the heart is smaller than the distance between the spine and the heart, to obtain the approximate region of the spine and the approximate region of the descending aorta;
the precise acquisition module, connected to the coarse acquisition module, is configured to remove erroneous pixel points according to the approximate region of the descending aorta, yielding the circle corresponding to the descending aorta.
Optionally, in the above system for acquiring aortic images based on deep learning, the data extraction device comprises: a connected domain structure and a feature data acquisition structure;
the connected domain structure, connected to the new image acquisition unit, is configured to acquire, from the new image acquisition unit, the plurality of binarized images of the CT image to be processed;
the feature data acquisition structure, connected to the connected domain structure, is configured to acquire, starting from the top layer, the connected domain of each binarized image in turn, together with the pseudo-circle center Ck, the area Sk, and the pseudo-circle radius Rk corresponding to each connected domain, the distance Ck-C(k-1) between the centers of two adjacent layers, the distance Ck-C1 from the center Ck of each slice to the center C1 of the top layer, and the area Mk and filtered area Hk of all pixels whose value is greater than 0 in the current layer and equal to 0 in the layer above, where k denotes the k-th slice, k≥1; these constitute the feature data.
Optionally, in the above system for acquiring aortic images based on deep learning, a data processing unit is provided inside the feature data acquisition structure, together with a circle center acquisition unit, an area acquisition unit, and a radius acquisition unit, each connected to the data processing unit;
the data processing unit is configured to apply the Hough detection algorithm to three slices in turn, starting from the top layer, obtaining one circle center and one radius from each slice and thus forming three circles; remove the center that deviates most from the three centers, obtaining the descending aorta seed point P1; acquire the connected domain A1 of the layer containing the seed point P1; take the centroid of the connected domain A1 as the pseudo-circle center C1 and acquire the area S1 and pseudo-circle radius R1 of the connected domain A1; taking C1 as the seed point, acquire the connected domain A2 of the layer containing the seed point P1; dilate the connected domain A1 to obtain a dilated region D1, and remove from the connected domain A2 the part overlapping the dilated region D1, obtaining a connected domain A2'; set a volume threshold V for connected domains, and if the volume V2 of the connected domain A2' satisfies V2<V, remove the points whose distance from the center C1 of the layer above is too large, obtaining the filtered area Hk; take the centroid of the connected domain A2' as the pseudo-circle center C2 and acquire the area S2 and pseudo-circle radius R2 of the connected domain A2; and, repeating the procedure used for the connected domain A2, acquire in turn the connected domain of each binarized image, together with the corresponding pseudo-circle center Ck, area Sk, and pseudo-circle radius Rk, the distance Ck-C(k-1) between the centers of two adjacent layers, and the distance Ck-C1 from the center Ck of each slice to the center C1 of the top layer;
the circle center acquisition unit is configured to store the pseudo-circle centers C1, C2...Ck...;
the area acquisition unit is configured to store the areas S1, S2...Sk..., and the filtered areas H1, H2...Hk...;
the radius acquisition unit is configured to store the pseudo-circle radii R1, R2...Rk....
Optionally, in the above system for acquiring aortic images based on deep learning, the aorta acquisition device comprises: a gradient edge structure and an aortic image acquisition structure;
the gradient edge structure, connected to the deep learning device, is configured to dilate the aortic data; multiply the dilated aortic data by the original CT image data and compute the gradient of each pixel point to obtain gradient data; extract the gradient edge according to the gradient data; and subtract the gradient edge from the dilated aortic data;
the aortic image acquisition structure, connected to the new image acquisition unit and the gradient edge structure, is configured to generate a seed point list according to the pseudo-circle centers, and to extract connected regions according to the seed point list to obtain the aortic image.
The beneficial effects of the solutions provided by the embodiments of the present application include at least the following:
The present application provides a system for acquiring aortic images based on deep learning, which derives a deep learning model from the feature data and the databases and acquires the aortic image through the deep learning model. It offers good extraction quality and high robustness, its computational results are accurate, and it has considerable value for clinical adoption.
Brief Description of the Drawings
The drawings described here are provided for a further understanding of the present invention and form part of the present invention. The exemplary embodiments of the present invention and their descriptions are intended to explain the present invention and do not unduly limit it. In the drawings:
Fig. 1 is a structural block diagram of an embodiment of the system for acquiring aortic images based on deep learning of the present application;
Fig. 2 is a structural block diagram of another embodiment of the system for acquiring aortic images based on deep learning of the present application;
Fig. 3 is a structural block diagram of the database device 100 of the present application;
Fig. 4 is a structural block diagram of the image processing structure 110 of the present application;
Fig. 5 is a structural block diagram of the descending aorta image storage structure 160 of the present application;
Fig. 6 is a structural block diagram of the descending aorta region delimitation unit 162 of the present application;
Fig. 7 is a structural block diagram of the data extraction device 300 of the present application;
Fig. 8 is a structural block diagram of the aorta acquisition device 400 of the present application.
Detailed Description of the Embodiments
To make the objects, technical solutions, and advantages of the present invention clearer, the technical solutions of the present invention will be described clearly and completely below in conjunction with specific embodiments of the present invention and the corresponding drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art on the basis of the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
Several embodiments of the present invention are disclosed below with reference to the drawings. For the sake of clarity, many practical details are described together in the following account. It should be understood, however, that these practical details are not intended to limit the present invention; that is, in some embodiments of the present invention, these practical details are unnecessary. In addition, to simplify the drawings, some conventional structures and components are shown in the drawings in a simple schematic manner.
In the prior art, acquiring aortic images on the basis of empirical values suffers from heavy dependence on human factors, poor consistency, and slow extraction.
To solve the above problems, as shown in Fig. 1, the present application provides a system for acquiring aortic images based on deep learning, comprising: a database device 100, a deep learning device 200, a data extraction device 300, and an aorta acquisition device 400. The database device 100 is configured to generate a slice database of aortic layers and a slice database of non-aortic layers; the deep learning device 200, connected to the database device 100, is configured to perform deep learning on the slice data of the aortic layers and the slice data of the non-aortic layers to obtain a deep learning model, and to analyze feature data through the deep learning model to obtain aortic data; the data extraction device 300 is configured to extract the feature data of the CT sequence images to be processed or of the three-dimensional data of the CT sequence images; the aorta acquisition device 400, connected to the data extraction device 300 and the deep learning device 200, is configured to acquire an aortic image from the CT sequence images according to the deep learning model and the feature data.
As shown in Fig. 2, one embodiment of the present application further comprises: a CT storage device 500 connected to the database device 100 and the data extraction device 300, configured to acquire three-dimensional data of the CT sequence images.
As shown in Fig. 3, in one embodiment of the present application, the database device 100 comprises: an image processing structure 110, a slice data storage structure 120 for the aortic layers, and a slice data storage structure 130 for the non-aortic layers; the slice data storage structure 120 for the aortic layers, the slice data storage structure 130 for the non-aortic layers, and the CT storage device 500 are all connected to the image processing structure 110. The image processing structure 110 is configured to produce a new image in which the lungs, the descending aorta, the spine, and the ribs have been removed from the CT image; the slice data storage structure 120 for the aortic layers is configured to acquire the slice data of the aortic layers from the new image; the slice data storage structure 130 for the non-aortic layers is configured to acquire, from the new image, the slice data remaining after the slices stored in the slice data storage structure 120 for the aortic layers are removed, namely the slice data of the non-aortic layers.
As shown in Fig. 4, in one embodiment of the present application, the image processing structure 110 comprises: a gray-level histogram unit 111, a gray-level volume acquisition unit 112, a lung tissue removal unit 113, a heart centroid extraction unit 114, a spine centroid extraction unit 115, a descending aorta image extraction unit 116, and a new image acquisition unit 117. The gray-level histogram unit 111, connected to the CT storage device 500, is configured to plot the gray-level histogram of each group of CT sequence images. The gray-level volume acquisition unit 112, connected to the gray-level histogram unit 111, is configured to acquire, proceeding from the end point M of the gray-level histogram toward the origin O, the volumes of the gray-value regions from point M to point M-1, from point M to point M-2, and so on until the volume of each gray-value region from point M to point O is obtained, and to obtain the ratio V of the volume of each gray-value region to the volume of the total region from point M to point O. The lung tissue removal unit 113, connected to the gray-level volume acquisition unit 112, is configured to set a lung gray-level threshold Q according to medical knowledge and the imaging principles of CT; if a gray value in the gray-level histogram is smaller than Q, the image corresponding to that gray value is removed, yielding a first image with the lung tissue removed. The heart centroid extraction unit 114, connected to the gray-level volume acquisition unit 112 and the lung tissue removal unit 113, is configured to acquire the heart centroid P2: if V=b, the starting point corresponding to the gray-value region is picked and projected onto the first image to obtain a three-dimensional image of the heart region, and the physical centroid P2 of that three-dimensional image is picked, where b is a constant, 0.2<b<1. The spine centroid extraction unit 115, connected to the lung tissue removal unit 113 and the heart centroid extraction unit 114, is configured to acquire the spine centroid P1: if V=a, the starting point corresponding to the gray-value region is picked and projected onto the three-dimensional CT image to obtain a three-dimensional image of the bone region, and the physical centroid P1 of that three-dimensional image is picked, where a is a constant, 0<a<0.2. The descending aorta image extraction unit 116, connected to the heart centroid extraction unit 114, the spine centroid extraction unit 115, and the lung tissue removal unit 113, is configured to acquire the descending aorta image of each group of CT sequence images according to the heart centroid and the spine centroid. The new image acquisition unit 117, connected to the descending aorta image extraction unit 116, the lung tissue removal unit 113, the slice data storage structure 120 for the aortic layers, and the slice data storage structure 130 for the non-aortic layers, is configured to remove the lungs, the descending aorta, the spine, and the ribs from the CT image to obtain the new image.
By first screening out the heart centroid and the spine centroid to locate the heart and the spine, and then acquiring the descending aorta image according to their positions, the present application reduces the amount of computation; the algorithm is simple, easy to operate, and fast, the design is sound, and the image processing is precise.
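The histogram walk described above, accumulating voxel volume from the maximum grey value M back toward the origin O until the running ratio V reaches a constant such as a (bone) or b (heart), can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the patent's implementation; the function and variable names are ours.

```python
import numpy as np

def volume_ratio_threshold(volume, target_ratio):
    """Walk the grey-level histogram from the maximum grey value M back
    toward 0 and return the grey value at which the accumulated voxel
    volume first reaches `target_ratio` of the total (the ratio V in
    the text)."""
    vals, counts = np.unique(volume, return_counts=True)
    order = np.argsort(vals)[::-1]                 # from M down to O
    cum = np.cumsum(counts[order]) / counts.sum()  # running volume ratio V
    idx = np.searchsorted(cum, target_ratio)
    return vals[order][min(idx, len(vals) - 1)]

# tiny demo volume: four background voxels, then brighter structures
thr = volume_ratio_threshold(np.array([0, 0, 0, 0, 1, 1, 2, 3]), 0.2)  # -> 2
```

With a target ratio in (0, 0.2) the returned grey value tends to isolate the brightest (bone-like) voxels, and with a larger ratio in (0.2, 1) a broader heart-like region, mirroring how P1 and P2 are located.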
As shown in Fig. 5, in one embodiment of the present application, the descending aorta image extraction unit 116 comprises: a descending aorta region delimitation unit 162 and a descending aorta image acquisition unit 163. The descending aorta region delimitation unit 162, connected to the gray-level histogram unit 111, the heart centroid extraction unit 114, the spine centroid extraction unit 115, and the lung tissue removal unit 113, is configured to project the heart centroid P2 onto the first image to obtain the circle center O1 of the heart; set the descending aorta gray-level threshold Q and binarize the first image; and, according to the distance between the descending aorta and the heart center O1 and the distance between the spine and the heart center O1, obtain the circle corresponding to the descending aorta. The descending aorta image acquisition unit 163, connected to the lung tissue removal unit 113 and the descending aorta region delimitation unit 162, is configured to acquire the descending aorta image from the CT image.
As shown in Fig. 6, in one embodiment of the present application, the descending aorta region delimitation unit 162 comprises: an average gray value acquisition module 1621, a layered slicing module 1622, and a binarization module 1623. The average gray value acquisition module 1621, connected to the lung tissue removal unit 113 and the gray-level histogram unit 111, is configured to acquire the pixel points PO in the first image whose gray values are greater than the descending aorta gray-level threshold Q, and to compute the average gray value of the pixel points PO
Figure PCTCN2020132798-appb-000003
The layered slicing module 1622, connected to the average gray value acquisition module 1621 and the lung tissue removal unit 113, is configured to slice the first image layer by layer starting from its bottom layer, obtaining a first group of two-dimensional slice images. The binarization module 1623, connected to the layered slicing module 1622 and the gray-level histogram unit 111, is configured to binarize the slice images according to
Figure PCTCN2020132798-appb-000004
removing the noise points in the first image and obtaining binarized images, where k is a positive integer, Qk denotes the gray value of the k-th pixel point PO, and P(k) denotes the pixel value of the k-th pixel point PO.
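The binarization formula itself survives here only as an image placeholder, so the exact comparison cannot be recovered from this text. One plausible reading, thresholding the first image against the mean grey value of the candidate pixels PO, can be sketched as follows; all names and the thresholding rule are illustrative assumptions, not the patent's formula.

```python
import numpy as np

def binarize_first_image(first_image, q_thresh):
    """Collect the pixels PO whose grey value exceeds the
    descending-aorta threshold Q, compute their mean grey value,
    and binarize the image against that mean, suppressing
    scattered noise points."""
    po = first_image[first_image > q_thresh]
    q_mean = float(po.mean()) if po.size else 0.0
    return (first_image >= q_mean).astype(np.uint8), q_mean

mask, q_mean = binarize_first_image(np.array([[0, 100], [200, 300]]), 50)
```

Here the pixels above Q are 100, 200, and 300, their mean is 200, and only the two pixels at or above that mean survive the binarization.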
As shown in Fig. 6, in one embodiment of the present application, the descending aorta region delimitation unit 162 further comprises: a coarse acquisition module 1624 and a precise acquisition module 1625. The coarse acquisition module 1624, connected to the binarization module 1623, is configured to set the radius threshold of the circle formed from the descending aorta to the edge of the heart to r and, on the basis that the distance between the descending aorta and the heart is smaller than the distance between the spine and the heart, obtain the approximate region of the spine and the approximate region of the descending aorta. The precise acquisition module 1625, connected to the coarse acquisition module 1624, is configured to remove erroneous pixel points according to the approximate region of the descending aorta, yielding the circle corresponding to the descending aorta. A Hough detection element 1626 is provided inside the coarse acquisition module 1624 and is configured to determine the approximate region of the descending aorta according to the following rule: if the radius r of a circle obtained by the Hough detection algorithm is greater than the radius threshold, the circle corresponds to the spine, and its center and radius are not recorded, which gives the approximate region of the spine; if the radius r is less than or equal to the radius threshold, the circle may correspond to the descending aorta, and its center and radius are recorded, which gives the approximate region of the descending aorta. A seed point acquisition element 1627 is provided inside the precise acquisition module 1625; the seed point acquisition element 1627, connected to the Hough detection element 1626, is configured to screen the centers and radii of the circles within the approximate region of the descending aorta, removing circles whose centers deviate markedly between adjacent slices, i.e. removing erroneous pixel points, to form the seed point list of the descending aorta.
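The seed-point screening performed by the Hough detection element 1626 and the seed point acquisition element 1627, detecting one circle per slice on the top three slices and then dropping the centre that deviates from the other two, can be sketched as follows. The outlier test used here (sum of distances to the other centres) is our own choice; the patent does not specify one.

```python
import numpy as np

def pick_seed(centers):
    """Given the three circle centres detected on the top three
    slices, drop the centre that deviates most from the other two
    and return the mean of the remaining pair as the
    descending-aorta seed point."""
    c = np.asarray(centers, dtype=float)  # shape (3, 2)
    # total distance of each centre to the other two centres
    d = [np.linalg.norm(c[i] - c[(i + 1) % 3])
         + np.linalg.norm(c[i] - c[(i + 2) % 3]) for i in range(3)]
    keep = np.argsort(d)[:2]              # the two mutually closest centres
    return c[keep].mean(axis=0)

seed = pick_seed([(50, 52), (51, 50), (90, 90)])  # third centre is an outlier
```

In this demo the first two centres agree to within a couple of pixels, so the far-off third centre is discarded and the seed lands between the remaining pair.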
As shown in Fig. 7, in one embodiment of the present application, the data extraction device 300 comprises: a connected domain structure 310 and a feature data acquisition structure 320. The connected domain structure 310, connected to the new image acquisition unit 117, is configured to acquire, from the new image acquisition unit 117, the plurality of binarized images of the CT image to be processed. The feature data acquisition structure 320, connected to the connected domain structure 310, is configured to acquire, starting from the top layer, the connected domain of each binarized image in turn, together with the pseudo-circle center Ck, the area Sk, and the pseudo-circle radius Rk corresponding to each connected domain, the distance Ck-C(k-1) between the centers of two adjacent layers, the distance Ck-C1 from the center Ck of each slice to the center C1 of the top layer, and the area Mk and filtered area Hk of all pixels whose value is greater than 0 in the current layer and equal to 0 in the layer above, where k denotes the k-th slice, k≥1; these constitute the feature data.
As shown in Fig. 7, in one embodiment of the present application, a data processing unit 321 is provided inside the feature data acquisition structure 320, together with a circle center acquisition unit 322, an area acquisition unit 323, and a radius acquisition unit 324, each connected to the data processing unit 321. The data processing unit 321 is configured to apply the Hough detection algorithm to three slices in turn, starting from the top layer, obtaining one circle center and one radius from each slice and thus forming three circles; remove the center that deviates most from the three centers, obtaining the aorta seed point P1; acquire the connected domain A1 of the layer containing the seed point P1; take the centroid of the connected domain A1 as the pseudo-circle center C1 and acquire the area S1 and pseudo-circle radius R1 of the connected domain A1; taking C1 as the seed point, acquire the connected domain A2 of the layer containing the seed point P1; dilate the connected domain A1 to obtain a dilated region D1, and remove from the connected domain A2 the part overlapping the dilated region D1, obtaining a connected domain A2'; set a volume threshold V for connected domains, and if the volume V2 of the connected domain A2' satisfies V2<V, remove the points whose distance from the center C1 of the layer above is too large, obtaining the filtered area Hk; take the centroid of the connected domain A2' as the pseudo-circle center C2 and acquire the area S2 and pseudo-circle radius R2 of the connected domain A2; and, repeating the procedure used for the connected domain A2, acquire in turn the connected domain of each binarized image, together with the corresponding pseudo-circle center Ck, area Sk, and pseudo-circle radius Rk, the distance Ck-C(k-1) between the centers of two adjacent layers, and the distance Ck-C1 from the center Ck of each slice to the center C1 of the top layer. The circle center acquisition unit is configured to store the pseudo-circle centers C1, C2...Ck...; the area acquisition unit is configured to store the areas S1, S2...Sk... and the filtered areas H1, H2...Hk...; and the radius acquisition unit is configured to store the pseudo-circle radii R1, R2...Rk....
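The per-slice bookkeeping above, taking the connected domain that contains a seed and recording its centroid as the pseudo-circle centre Ck, its area Sk, and an equivalent pseudo-circle radius Rk, can be sketched with a plain flood fill. This is a simplified 4-connected stand-in that ignores the dilation/overlap filtering; the radius formula Rk = sqrt(Sk/pi) is our assumption of how a "pseudo-circle radius" is derived from an area.

```python
from collections import deque
import numpy as np

def component_features(mask, seed):
    """Flood-fill the connected component of `mask` containing `seed`
    and return its centroid (pseudo-circle centre C_k), its area S_k,
    and the equivalent pseudo-circle radius R_k = sqrt(S_k / pi)."""
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    q = deque([seed])
    seen[seed] = True
    pts = []
    while q:
        y, x = q.popleft()
        pts.append((y, x))
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                seen[ny, nx] = True
                q.append((ny, nx))
    pts = np.array(pts, dtype=float)
    area = len(pts)
    centre = pts.mean(axis=0)
    return centre, area, float(np.sqrt(area / np.pi))

# demo: a 3x3 square component around the seed (2, 2)
mask = np.zeros((5, 5), dtype=np.uint8)
mask[1:4, 1:4] = 1
centre, area, radius = component_features(mask, (2, 2))
```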
As shown in Fig. 8, in one embodiment of the present application, the aorta acquisition device 400 comprises: a gradient edge structure 410 and an aortic image acquisition structure 420. The gradient edge structure 410, connected to the deep learning device 200, is configured to dilate the aortic data; multiply the dilated aortic data by the original CT image data and compute the gradient of each pixel point to obtain gradient data; extract the gradient edge according to the gradient data; and subtract the gradient edge from the dilated aortic data. The aortic image acquisition structure 420, connected to the CT storage device 500 and the gradient edge structure 410, is configured to generate a seed point list according to the pseudo-circle centers, and to extract connected regions according to the seed point list to obtain the aortic image.
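A minimal 2D sketch of what the gradient edge structure 410 does per slice: dilate the aorta mask, multiply it by the original CT data, take the per-pixel gradient magnitude, and subtract the high-gradient rim from the dilated mask. The 3x3 shift-based dilation and the gradient threshold are illustrative choices, not values from the patent.

```python
import numpy as np

def strip_gradient_edge(aorta_mask, ct_slice, grad_thresh):
    """Dilate the mask, multiply with the CT slice, compute the
    per-pixel gradient magnitude, and subtract the high-gradient
    edge from the dilated mask."""
    m = aorta_mask.astype(bool)
    dil = m.copy()
    for dy in (-1, 0, 1):                 # 3x3 binary dilation via shifts
        for dx in (-1, 0, 1):
            dil |= np.roll(np.roll(m, dy, axis=0), dx, axis=1)
    masked = dil * ct_slice               # dilated mask times original CT
    gy, gx = np.gradient(masked.astype(float))
    grad = np.hypot(gy, gx)               # per-pixel gradient magnitude
    edge = grad > grad_thresh             # extracted gradient edge
    return dil & ~edge                    # dilated mask minus its edge

# demo: one seed pixel on a uniform 100-HU slice; the dilated 3x3
# block loses its bright rim and only the interior pixel survives
mask = np.zeros((5, 5), dtype=bool)
mask[2, 2] = True
ct = np.full((5, 5), 100.0)
core = strip_gradient_edge(mask, ct, grad_thresh=10.0)
```

Note the `np.roll` dilation wraps around at the array border, which is fine for interior structures like the aorta but would need padding near image edges.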
Those skilled in the art will appreciate that aspects of the present invention may be implemented as a system, method, or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, microcode, etc.), or an embodiment combining hardware and software aspects, which may collectively be referred to herein as a "circuit", "module", or "system". Furthermore, in some embodiments, aspects of the present invention may also take the form of a computer program product embodied in one or more computer-readable media having computer-readable program code embodied thereon. Implementations of the methods and/or systems of embodiments of the present invention may involve performing or completing selected tasks manually, automatically, or a combination thereof.
For example, hardware for performing selected tasks according to embodiments of the invention may be implemented as a chip or a circuit. As software, selected tasks according to embodiments of the invention may be implemented as a plurality of software instructions executed by a computer using any suitable operating system. In exemplary embodiments of the invention, one or more tasks according to the exemplary embodiments of the methods and/or systems herein are performed by a data processor, such as a computing platform for executing a plurality of instructions. Optionally, the data processor includes volatile storage for storing instructions and/or data and/or non-volatile storage for storing instructions and/or data, for example a magnetic hard disk and/or removable media. Optionally, a network connection is also provided. Optionally, a display and/or a user input device, such as a keyboard or a mouse, is also provided.
Any combination of one or more computer-readable media may be utilized. The computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. A computer-readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of computer-readable storage media would include the following:
an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this document, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer-readable signal medium may include a propagated data signal, in baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
For example, computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It should be understood that each block of the flowcharts and/or block diagrams, and combinations of blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
These computer program instructions may also be stored in a computer-readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instructions that implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
The computer program instructions may also be loaded onto a computer (for example, a coronary artery analysis system) or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer, other programmable apparatus, or other devices to produce a computer-implemented process, such that the instructions executed on the computer, other programmable apparatus, or other devices provide processes for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
The specific examples above further describe in detail the objects, technical solutions, and beneficial effects of the present invention. It should be understood that the above are only specific embodiments of the present invention and are not intended to limit the present invention; any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.

Claims (9)

  1. A system for acquiring aortic images based on deep learning, characterized by comprising: a database device, a deep learning device, a data extraction device, and an aorta acquisition device;
    the database device is configured to generate a slice database of aortic layers and a slice database of non-aortic layers;
    the deep learning device, connected to the database device, is configured to perform deep learning on the slice data of the aortic layers and the slice data of the non-aortic layers to obtain a deep learning model, and to analyze feature data through the deep learning model to obtain aortic data;
    the data extraction device is configured to extract the feature data of the CT sequence images to be processed or of the three-dimensional data of the CT sequence images;
    the aorta acquisition device, connected to the data extraction device and the deep learning device, is configured to acquire an aortic image from the CT sequence images according to the deep learning model and the feature data.
  2. The system for acquiring aortic images based on deep learning according to claim 1, characterized by further comprising: a CT storage device connected to the database device and the data extraction device, configured to acquire three-dimensional data of the CT sequence images.
  3. The system for acquiring aortic images based on deep learning according to claim 2, characterized in that the database device comprises: an image processing structure, a slice data storage structure for the aortic layers, and a slice data storage structure for the non-aortic layers, wherein the slice data storage structure for the aortic layers, the slice data storage structure for the non-aortic layers, and the CT storage device are all connected to the image processing structure;
    the image processing structure is configured to produce a new image in which the lungs, the descending aorta, the spine, and the ribs have been removed from the CT image;
    the slice data storage structure for the aortic layers is configured to acquire the slice data of the aortic layers from the new image;
    the slice data storage structure for the non-aortic layers is configured to acquire, from the new image, the slice data remaining after the slices stored in the slice data storage structure for the aortic layers are removed, namely the slice data of the non-aortic layers.
  4. The system for acquiring aortic images based on deep learning according to claim 3, characterized in that the image processing structure comprises: a gray-level histogram unit, a gray-level volume acquisition unit, a lung tissue removal unit, a heart centroid extraction unit, a spine centroid extraction unit, a descending aorta image extraction unit, and a new image acquisition unit;
    the gray-level histogram unit, connected to the CT storage device, is configured to plot the gray-level histogram of each group of the CT sequence images;
    the gray-level volume acquisition unit, connected to the gray-level histogram structure, is configured to acquire, proceeding from the end point M of the gray-level histogram toward the origin O, the volumes of the gray-value regions from point M to point M-1, from point M to point M-2, and so on until the volume of each gray-value region from point M to point O is obtained; and to obtain the ratio V of the volume of each gray-value region to the volume of the total region from point M to point O;
    the lung tissue removal unit, connected to the gray-level volume acquisition unit, is configured to set a lung gray-level threshold Q according to medical knowledge and the imaging principles of CT; if a gray value in the gray-level histogram is smaller than Q, the image corresponding to that gray value is removed, yielding a first image with the lung tissue removed;
    the heart centroid extraction unit, connected to the gray-level volume acquisition unit and the lung tissue removal unit, is configured to acquire the heart centroid P2: if V=b, the starting point corresponding to the gray-value region is picked and projected onto the first image to obtain a three-dimensional image of the heart region, and the physical centroid P2 of that three-dimensional image is picked, where b is a constant, 0.2<b<1.
    the spine centroid extraction unit, connected to the CT storage device and the heart centroid extraction unit, is configured to acquire the spine centroid P1: if V=a, the starting point corresponding to the gray-value region is picked and projected onto the three-dimensional CT image to obtain a three-dimensional image of the bone region, and the physical centroid P1 of that three-dimensional image is picked, where a is a constant, 0<a<0.2.
    the descending aorta image extraction unit, connected to the heart centroid extraction structure, the spine centroid extraction structure, and the lung tissue removal unit, is configured to acquire the descending aorta image of each group of CT sequence images according to the heart centroid and the spine centroid;
    the new image acquisition unit, connected to the descending aorta image extraction unit, the lung tissue removal unit, the slice data storage structure for the aortic layers, and the slice data storage structure for the non-aortic layers, is configured to remove the lungs, the descending aorta, the spine, and the ribs from the CT image to obtain the new image.
  5. The system for acquiring aortic images based on deep learning according to claim 4, characterized in that the descending aorta region delimitation unit comprises: an average gray value acquisition module, a layered slicing module, and a binarization module;
    the average gray value acquisition module, connected to the lung tissue removal unit and the gray-level histogram structure, is configured to acquire the pixel points PO in the first image whose gray values are greater than the descending aorta gray-level threshold Q, and to compute the average gray value of the pixel points PO
    Figure PCTCN2020132798-appb-100001
    the layered slicing module, connected to the average gray value acquisition module and the lung tissue removal unit, is configured to slice the first image layer by layer starting from its bottom layer, obtaining a first group of two-dimensional slice images;
    the binarization module, connected to the layered slicing module and the gray-level histogram structure, is configured to binarize the slice images according to
    Figure PCTCN2020132798-appb-100002
    removing the noise points in the first image and obtaining binarized images, where k is a positive integer, Qk denotes the gray value of the k-th pixel point PO, and P(k) denotes the pixel value of the k-th pixel point PO.
  6. The system for acquiring aortic images based on deep learning according to claim 5, characterized in that the descending aorta region delimitation unit further comprises: a coarse acquisition module and a precise acquisition module;
    the coarse acquisition module, connected to the binarization module, is configured to set the radius threshold of the circle formed from the descending aorta to the edge of the heart to r and, on the basis that the distance between the descending aorta and the heart is smaller than the distance between the spine and the heart, to obtain the approximate region of the spine and the approximate region of the descending aorta;
    the precise acquisition module, connected to the coarse acquisition module, is configured to remove erroneous pixel points according to the approximate region of the descending aorta, yielding the circle corresponding to the descending aorta.
  7. The system for acquiring aortic images based on deep learning according to claim 6, characterized in that the data extraction device comprises: a connected domain structure and a feature data acquisition structure;
    the connected domain structure, connected to the new image acquisition unit, is configured to acquire, from the new image acquisition unit, the plurality of binarized images of the CT image to be processed;
    the feature data acquisition structure, connected to the connected domain structure, is configured to acquire, starting from the top layer, the connected domain of each binarized image in turn, together with the pseudo-circle center Ck, the area Sk, and the pseudo-circle radius Rk corresponding to each connected domain, the distance Ck-C(k-1) between the centers of two adjacent layers, the distance Ck-C1 from the center Ck of each slice to the center C1 of the top layer, and the area Mk and filtered area Hk of all pixels whose value is greater than 0 in the current layer and equal to 0 in the layer above, where k denotes the k-th slice, k≥1; these constitute the feature data.
  8. The system for acquiring aortic images based on deep learning according to claim 7, characterized in that a data processing unit is provided inside the feature data acquisition structure, together with a circle center acquisition unit, an area acquisition unit, and a radius acquisition unit, each connected to the data processing unit;
    the data processing unit is configured to apply the Hough detection algorithm to three slices in turn, starting from the top layer, obtaining one circle center and one radius from each slice and thus forming three circles; remove the center that deviates most from the three centers, obtaining the descending aorta seed point P1; acquire the connected domain A1 of the layer containing the seed point P1; take the centroid of the connected domain A1 as the pseudo-circle center C1 and acquire the area S1 and pseudo-circle radius R1 of the connected domain A1; taking C1 as the seed point, acquire the connected domain A2 of the layer containing the seed point P1; dilate the connected domain A1 to obtain a dilated region D1, and remove from the connected domain A2 the part overlapping the dilated region D1, obtaining a connected domain A2'; set a volume threshold V for connected domains, and if the volume V2 of the connected domain A2' satisfies V2<V, remove the points whose distance from the center C1 of the layer above is too large, obtaining the filtered area Hk; take the centroid of the connected domain A2' as the pseudo-circle center C2 and acquire the area S2 and pseudo-circle radius R2 of the connected domain A2; and, repeating the procedure used for the connected domain A2, acquire in turn the connected domain of each binarized image, together with the corresponding pseudo-circle center Ck, area Sk, and pseudo-circle radius Rk, the distance Ck-C(k-1) between the centers of two adjacent layers, and the distance Ck-C1 from the center Ck of each slice to the center C1 of the top layer;
    the circle center acquisition unit is configured to store the pseudo-circle centers C1, C2...Ck...;
    the area acquisition unit is configured to store the areas S1, S2...Sk..., and the filtered areas H1, H2...Hk...;
    the radius acquisition unit is configured to store the pseudo-circle radii R1, R2...Rk....
  9. The system for acquiring aortic images based on deep learning according to claim 8, characterized in that the aorta acquisition device comprises: a gradient edge structure and an aortic image acquisition structure;
    the gradient edge structure, connected to the deep learning device, is configured to dilate the aortic data; multiply the dilated aortic data by the original CT image data and compute the gradient of each pixel point to obtain gradient data; extract the gradient edge according to the gradient data; and subtract the gradient edge from the dilated aortic data;
    the aortic image acquisition structure, connected to the new image acquisition unit and the gradient edge structure, is configured to generate a seed point list according to the pseudo-circle centers, and to extract connected regions according to the seed point list to obtain the aortic image.
PCT/CN2020/132798 2020-06-29 2020-11-30 基于深度学习获取主动脉图像的系统 WO2022000977A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
JP2022579902A JP7446645B2 (ja) 2020-06-29 2020-11-30 深層学習に基づいて大動脈画像を取得するシステム
CN202080100602.8A CN115769251A (zh) 2020-06-29 2020-11-30 基于深度学习获取主动脉图像的系统
EP20943564.3A EP4174762A1 (en) 2020-06-29 2020-11-30 Deep learning-based aortic image acquisition system
US18/089,728 US20230153998A1 (en) 2020-06-29 2022-12-28 Systems for acquiring image of aorta based on deep learning

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202010606963.1A CN111815587A (zh) 2020-06-29 2020-06-29 基于ct序列图像拾取主动脉中心线上的点的方法和系统
CN202010606964.6A CN111815588B (zh) 2020-06-29 2020-06-29 基于ct序列图像获取降主动脉的方法和系统
CN202010606963.1 2020-06-29
CN202010606964.6 2020-06-29

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/089,728 Continuation US20230153998A1 (en) 2020-06-29 2022-12-28 Systems for acquiring image of aorta based on deep learning

Publications (1)

Publication Number Publication Date
WO2022000977A1 true WO2022000977A1 (zh) 2022-01-06

Family

ID=79317360

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/CN2020/132796 WO2022000976A1 (zh) 2020-06-29 2020-11-30 基于深度学习获取主动脉的方法和存储介质
PCT/CN2020/132798 WO2022000977A1 (zh) 2020-06-29 2020-11-30 基于深度学习获取主动脉图像的系统

Family Applications Before (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/132796 WO2022000976A1 (zh) 2020-06-29 2020-11-30 基于深度学习获取主动脉的方法和存储介质

Country Status (5)

Country Link
US (2) US20230260133A1 (zh)
EP (2) EP4174762A1 (zh)
JP (2) JP7446645B2 (zh)
CN (2) CN115769251A (zh)
WO (2) WO2022000976A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116645372B (zh) * 2023-07-27 2023-10-10 汉克威(山东)智能制造有限公司 一种制动气室外观图像智能检测方法及系统

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170235915A1 (en) * 2016-02-17 2017-08-17 Siemens Healthcare Gmbh Personalized model with regular integration of data
CN107563983A (zh) * 2017-09-28 2018-01-09 上海联影医疗科技有限公司 图像处理方法以及医学成像设备
CN110264465A (zh) * 2019-06-25 2019-09-20 中南林业科技大学 一种基于形态学和深度学习的主动脉夹层动态检测方法
CN111815583A (zh) * 2020-06-29 2020-10-23 苏州润心医疗器械有限公司 基于ct序列图像获取主动脉中心线的方法和系统
CN111815585A (zh) * 2020-06-29 2020-10-23 苏州润心医疗器械有限公司 基于ct序列图像获取冠脉树和冠脉入口点的方法和系统
CN111815588A (zh) * 2020-06-29 2020-10-23 苏州润心医疗器械有限公司 基于ct序列图像获取降主动脉的方法和系统
CN111815589A (zh) * 2020-06-29 2020-10-23 苏州润心医疗器械有限公司 基于ct序列图像获取无干扰冠脉树图像的方法和系统
CN111815586A (zh) * 2020-06-29 2020-10-23 苏州润心医疗器械有限公司 基于ct图像获取左心房、左心室的连通域的方法和系统
CN111815584A (zh) * 2020-06-29 2020-10-23 苏州润心医疗器械有限公司 基于ct序列图像获取心脏重心的方法和系统
CN111815587A (zh) * 2020-06-29 2020-10-23 苏州润心医疗器械有限公司 基于ct序列图像拾取主动脉中心线上的点的方法和系统

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008142482A (ja) 2006-12-13 2008-06-26 Med Solution Kk 縦隔リンパ節郭清で切除される領域を複数の区域にセグメンテーションする装置およびプログラム
CN106803251B (zh) * 2017-01-12 2019-10-08 西安电子科技大学 由ct影像确定主动脉缩窄处压力差的装置与方法
JP6657132B2 (ja) 2017-02-27 2020-03-04 富士フイルム株式会社 画像分類装置、方法およびプログラム
US10685438B2 (en) * 2017-07-17 2020-06-16 Siemens Healthcare Gmbh Automated measurement based on deep learning
CN109035255B (zh) * 2018-06-27 2021-07-02 东南大学 一种基于卷积神经网络的ct图像中带夹层主动脉分割方法
US11127138B2 (en) * 2018-11-20 2021-09-21 Siemens Healthcare Gmbh Automatic detection and quantification of the aorta from medical images

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170235915A1 (en) * 2016-02-17 2017-08-17 Siemens Healthcare Gmbh Personalized model with regular integration of data
CN107563983A (zh) * 2017-09-28 2018-01-09 上海联影医疗科技有限公司 图像处理方法以及医学成像设备
CN110264465A (zh) * 2019-06-25 2019-09-20 中南林业科技大学 一种基于形态学和深度学习的主动脉夹层动态检测方法
CN111815583A (zh) * 2020-06-29 2020-10-23 苏州润心医疗器械有限公司 基于ct序列图像获取主动脉中心线的方法和系统
CN111815585A (zh) * 2020-06-29 2020-10-23 苏州润心医疗器械有限公司 基于ct序列图像获取冠脉树和冠脉入口点的方法和系统
CN111815588A (zh) * 2020-06-29 2020-10-23 苏州润心医疗器械有限公司 基于ct序列图像获取降主动脉的方法和系统
CN111815589A (zh) * 2020-06-29 2020-10-23 苏州润心医疗器械有限公司 基于ct序列图像获取无干扰冠脉树图像的方法和系统
CN111815586A (zh) * 2020-06-29 2020-10-23 苏州润心医疗器械有限公司 基于ct图像获取左心房、左心室的连通域的方法和系统
CN111815584A (zh) * 2020-06-29 2020-10-23 苏州润心医疗器械有限公司 基于ct序列图像获取心脏重心的方法和系统
CN111815587A (zh) * 2020-06-29 2020-10-23 苏州润心医疗器械有限公司 基于ct序列图像拾取主动脉中心线上的点的方法和系统

Also Published As

Publication number Publication date
JP2023532268A (ja) 2023-07-27
CN115769251A (zh) 2023-03-07
JP7446645B2 (ja) 2024-03-11
US20230260133A1 (en) 2023-08-17
EP4174760A1 (en) 2023-05-03
EP4174762A1 (en) 2023-05-03
JP2023532269A (ja) 2023-07-27
US20230153998A1 (en) 2023-05-18
CN115769252A (zh) 2023-03-07
WO2022000976A1 (zh) 2022-01-06

Similar Documents

Publication Publication Date Title
US20230144795A1 (en) Methods and systems for acquiring centerline of aorta based on ct sequence images
US11896416B2 (en) Method for calculating coronary artery fractional flow reserve on basis of myocardial blood flow and CT images
WO2022000727A1 (zh) 基于ct序列图像获取冠脉树和冠脉入口点的方法和系统
US20190374190A1 (en) System and method for biophysical lung modeling
WO2022000729A1 (zh) 基于ct序列图像获取无干扰冠脉树图像的方法和系统
US8605976B2 (en) System and method of detection of optimal angiography frames for quantitative coronary analysis using wavelet-based motion analysis
EP3753494A1 (en) Calculating a fractional flow reserve
WO2022000726A1 (zh) 基于ct图像获取左心房、左心室的连通域的方法和系统
WO2022109903A1 (zh) 三维血管合成方法、系统及冠状动脉分析系统和存储介质
EP4064181A1 (en) Method and apparatus for acquiring contour lines of blood vessel according to center line of blood vessel
CN108471994B (zh) 移动ffr模拟
WO2022000728A1 (zh) 基于ct序列图像获取降主动脉的方法和系统
WO2022000734A1 (zh) 基于ct序列图像拾取主动脉中心线上的点的方法和系统
US20230153998A1 (en) Systems for acquiring image of aorta based on deep learning
CN112132882A (zh) 从冠状动脉二维造影图像中提取血管中心线的方法和装置
CN111815584B (zh) 基于ct序列图像获取心脏重心的方法和系统
WO2022000731A1 (zh) 基于ct序列图像获取心脏重心和脊椎重心的方法和系统
TW202116253A (zh) 診斷支援程式
EP4203774A1 (en) Explainable deep learning camera-agnostic diagnosis of obstructive coronary artery disease

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20943564

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022579902

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2020943564

Country of ref document: EP

Effective date: 20230130