CN112308813A - Blood vessel state evaluation method and blood vessel state evaluation device - Google Patents


Info

Publication number
CN112308813A
Authority
CN
China
Prior art keywords
learning model
deep learning
image
target
blood vessel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910682829.7A
Other languages
Chinese (zh)
Inventor
谢成典
李爱先
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Acer Inc
Far Eastern Memorial Hospital
Original Assignee
Acer Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Acer Inc filed Critical Acer Inc
Priority to CN201910682829.7A priority Critical patent/CN112308813A/en
Publication of CN112308813A publication Critical patent/CN112308813A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30101Blood vessel; Artery; Vein; Vascular

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Biomedical Technology (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Radiology & Medical Imaging (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Pathology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a blood vessel state evaluation method and a blood vessel state evaluation device. The blood vessel state evaluation method includes the following steps: obtaining at least one angiographic image corresponding to a target user; analyzing the at least one angiographic image through a first deep learning model to select a target image from the at least one angiographic image; analyzing the target image through at least one second deep learning model to determine the blood vessel type of the target user and divide a target blood vessel pattern in the target image into a plurality of scoring segments; and analyzing the output of the at least one second deep learning model through a third deep learning model to obtain the blood vessel state of the target user.

Description

Blood vessel state evaluation method and blood vessel state evaluation device
Technical Field
The present invention relates to a physiological state assessment technique based on deep learning, and more particularly, to a blood vessel state assessment method and a blood vessel state assessment apparatus.
Background
With changes in modern dietary habits, cardiovascular disease is affecting increasingly younger patients. Cardiovascular obstruction can cause myocardial infarction, and acute myocardial infarction is often fatal, so keeping the cardiovascular system unobstructed is of critical importance. Generally, if cardiovascular blockage occurs, in addition to taking medication to control the condition, the blockage can be treated by a cardiac catheterization procedure in the cardiology department, such as balloon dilation or stent placement; for more serious cases, coronary artery bypass surgery performed by cardiac surgery may be chosen. In addition, the SYNTAX score is an evaluation method for deciding between stent placement and bypass surgery by calculating the degree of occlusion of the heart vessels from angiography. However, the SYNTAX scoring mechanism is very complicated, and the doctor or examiner must determine the state of the blood vessels from the angiographic images and perform a complicated scoring procedure.
Disclosure of Invention
The invention provides a blood vessel state evaluation method and a blood vessel state evaluation device, which can effectively improve the evaluation efficiency of the blood vessel state.
An embodiment of the present invention provides a vascular condition assessment method, including: obtaining at least one angiographic image corresponding to a target user; analyzing the at least one angiographic image through a first deep learning model to select a target image from the at least one angiographic image; analyzing the target image through at least one second deep learning model to determine the blood vessel type of the target user and divide a target blood vessel pattern in the target image into a plurality of scoring segments; and analyzing the output of the at least one second deep learning model through a third deep learning model to obtain the blood vessel state of the target user.
An embodiment of the present invention further provides a blood vessel state evaluation device, which includes a storage device and a processor. The storage device is used for storing at least one angiographic image corresponding to a target user. The processor is coupled to the storage device. The processor is configured to analyze the at least one angiographic image through a first deep learning model to select a target image from the at least one angiographic image. The processor is further configured to analyze the target image through at least one second deep learning model to determine a blood vessel type of the target user and divide a target blood vessel pattern in the target image into a plurality of scoring segments. The processor is further configured to analyze an output of the at least one second deep learning model through a third deep learning model to obtain a vascular status of the target user.
Based on the above, after at least one angiographic image corresponding to a target user is obtained, a target image may be selected by analyzing the angiographic image through the first deep learning model. Then, by analyzing the target image through the second deep learning model, the blood vessel type of the target user can be determined and the target blood vessel pattern in the target image can be divided into a plurality of scoring segments. Further, by analyzing the output of the second deep learning model through the third deep learning model, the blood vessel state of the target user can be obtained. Therefore, the evaluation efficiency of the vascular state can be effectively improved.
In order to make the aforementioned and other features and advantages of the invention more comprehensible, embodiments accompanied with figures are described in detail below.
Drawings
Fig. 1 is a schematic view of a blood vessel state evaluation device according to an embodiment of the present invention;
FIG. 2 is a schematic diagram illustrating analysis of an image by a first deep learning model according to an embodiment of the invention;
FIG. 3 is a schematic diagram illustrating analysis of an image by a second deep learning model according to an embodiment of the invention;
FIG. 4 is a diagram illustrating a scoring rule and corresponding scoring segments according to an embodiment of the present invention;
FIG. 5 is a schematic diagram illustrating a segment of a segment score according to an embodiment of the present invention;
FIG. 6 is a schematic diagram illustrating analysis of an image by a third deep learning model according to an embodiment of the invention;
FIG. 7 is a schematic diagram illustrating evaluation information according to an embodiment of the present invention;
fig. 8 is a flowchart illustrating a blood vessel state evaluation method according to an embodiment of the present invention.
Description of the reference numerals
10: vascular condition evaluation device
101: processor with a memory having a plurality of memory cells
102: storage device
103: image processing module
1031 to 1033: deep learning model
21(1) to 21(n), 31, 51: image of a person
22: sequence of
301: left side advantage
302: advantage of right side
303: is unknown
41. 42: scoring rules
501-505: scoring segment
601(R), 601(G), 601 (B): single color image
602(1) -602 (p): masking image
603. 71: evaluating information
S801 to S804: step (ii) of
Detailed Description
Fig. 1 is a schematic view of a blood vessel state evaluation device according to an embodiment of the present invention. Referring to fig. 1, in an embodiment, the device (also referred to as a blood vessel state evaluation device) 10 may be any electronic device or computer device with image analysis and calculation functions. In another embodiment, the device 10 may also be a cardiovascular condition examination device or an image acquisition device for cardiovascular imaging. The device 10 may be used to automatically analyze an angiographic image of a user (also referred to as a target user) and automatically generate evaluation information reflecting the vascular status of the target user. In one embodiment, a contrast agent may be injected into a heart vessel (e.g., a coronary artery) of the target user and imaged to obtain the angiographic image.
The device 10 includes a processor 101, a storage device 102, and an image processing module 103. The processor 101 is coupled to the storage device 102 and the image processing module 103. The processor 101 may be a central processing unit (CPU), a graphics processing unit (GPU), another programmable general-purpose or special-purpose microprocessor, a digital signal processor (DSP), a programmable controller, an application-specific integrated circuit (ASIC), a programmable logic device (PLD), other similar devices, or a combination thereof. The processor 101 may be responsible for the overall or partial operation of the device 10.
The storage device 102 is used for storing images (i.e. angiographic images) and other data. Storage 102 may include volatile storage media and nonvolatile storage media. The volatile storage medium may include a Random Access Memory (RAM), and the non-volatile storage medium may include a Read Only Memory (ROM), a Solid State Disk (SSD), or a conventional hard disk (HDD), etc.
The image processing module 103 is used to perform image recognition on the image stored in the storage device 102 to recognize a pattern in the image through machine vision. The image processing module 103 may be implemented as a software module, a firmware module, or a hardware circuit. For example, in one embodiment, the image processing module 103 may include at least one Graphics Processor (GPU) or similar processing chip to perform the image recognition. Alternatively, in one embodiment, the image processing module 103 is a program code that can be loaded into the storage device 102 and executed by the processor 101. In one embodiment, the image processing module 103 may also be implemented in the processor 101.
It should be noted that the image processing module 103 has an artificial intelligence architecture such as machine learning and/or deep learning and can continuously improve its image recognition performance through training. For example, the image processing module 103 may include a deep learning model (also referred to as a first deep learning model) 1031, a deep learning model (also referred to as a second deep learning model) 1032, and a deep learning model (also referred to as a third deep learning model) 1033. The deep learning models in the image processing module 103 may be independent of each other or communicate with each other. Furthermore, in an embodiment, the device 10 may further include an input/output device such as a mouse, a keyboard, a display, a microphone, a speaker, or a network interface card, and the type of the input/output device is not limited thereto.
FIG. 2 is a schematic diagram illustrating analysis of images by the first deep learning model according to an embodiment of the invention. Referring to fig. 1 and fig. 2, the storage device 102 can store a plurality of images 21(1) to 21(n). The images 21(1) to 21(n) may belong to one or more movie files, and all of them are angiographic images corresponding to the same target user. The processor 101 may analyze the images 21(1) to 21(n) through the deep learning model 1031 to select one or more images (also referred to as target images) from the images 21(1) to 21(n). For example, the deep learning model 1031 may include a neural network model associated with time series, such as a recurrent neural network (RNN) model and/or a long short-term memory (LSTM) model.
According to the analysis results of the images 21(1) to 21(n), the deep learning model 1031 may output a sequence 22 including n probability values P(1) to P(n). The probability values P(1) to P(n) correspond to the images 21(1) to 21(n), respectively. For example, the probability value P(i) corresponds to the image 21(i), where i is between 1 and n. The probability value P(i) is between 0 and 1 and may represent the probability that the image 21(i) will participate in the subsequent operations. The processor 101 may compare the probability values P(1) to P(n) with a predetermined value. If the probability value P(i) is higher than the predetermined value, the processor 101 may determine the image 21(i) corresponding to the probability value P(i) as the target image.
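The threshold comparison described above can be sketched as follows. This is an illustrative fragment only: the function name, the default threshold of 0.5, and the use of plain Python lists are assumptions rather than details taken from the patent.

```python
def select_target_images(images, probabilities, predetermined_value=0.5):
    """Keep each image whose probability value exceeds the threshold.

    `images` and `probabilities` correspond element-wise, mirroring the
    sequence 22 of probability values P(1) to P(n). The default threshold
    of 0.5 is a hypothetical choice; the patent does not specify one.
    """
    return [image for image, p in zip(images, probabilities)
            if p > predetermined_value]
```

For example, with probability values of 0.9, 0.3, and 0.7 and the hypothetical threshold of 0.5, the first and third images would be selected as target images.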
After selecting the target image, the processor 101 may analyze the target image through the deep learning model 1032 to determine the blood vessel type of the target user and divide the blood vessel pattern (also referred to as the target blood vessel pattern) in the target image into a plurality of scoring segments. For example, the division of the scoring segments may conform to the SYNTAX or a similar scoring standard. For example, the deep learning model 1032 may include a convolutional neural network (CNN) model, a fully convolutional network (FCN) model, a region-based CNN (R-CNN) model, and/or a U-Net model, among other neural network models related to encoding and decoding.
FIG. 3 is a diagram illustrating analysis of an image by the second deep learning model according to an embodiment of the invention. Referring to fig. 1 and fig. 3, it is assumed that the image 31 is a target image. The processor 101 may analyze the image 31 through the deep learning model 1032 to determine the blood vessel type of the target user. On the other hand, the processor 101 may divide the blood vessel pattern (also referred to as the target blood vessel pattern) in the image 31 into a plurality of scoring segments through the deep learning model 1032. It should be noted that the operation of determining the blood vessel type of the target user and the operation of dividing the target blood vessel pattern in the target image into a plurality of scoring segments may be performed by one or more sub-deep learning models in the deep learning model 1032.
Based on the analysis result of the image 31, the deep learning model 1032 may determine the blood vessel type of the target user to be one of left dominant 301 and right dominant 302. For example, the left dominant 301 and the right dominant 302 may reflect two different configurations of the coronary arteries. In addition, if the analysis result of the image 31 matches neither the left dominant 301 nor the right dominant 302, the deep learning model 1032 may determine that the blood vessel type of the target user is unknown 303. If the blood vessel type of the target user is unknown 303, the processor 101 may re-perform the operations of FIG. 2 to select a new target image. The operations of FIG. 3 may then be performed on the new target image to again identify the blood vessel type of the target user as either left dominant 301 or right dominant 302.
In an embodiment, a certain sub-deep learning model of the deep learning model 1032 may be used to check the reasonableness of the target image selected by the deep learning model 1031. For example, if the deep learning model 1032 determines from the currently selected target image that the blood vessel type of the target user is unknown 303 of FIG. 3, this sub-deep learning model may determine whether the currently selected target image is reasonable. If the currently selected target image is not reasonable, the processor 101 may reselect another image as the target image through the deep learning model 1031, and the deep learning model 1032 may again determine the reasonableness of this new target image. Alternatively, if the deep learning model 1032 determines from the currently selected target image that the blood vessel type of the target user is the left dominant 301 or the right dominant 302 of FIG. 3, this sub-deep learning model may determine that the currently selected target image is reasonable, and the processor 101 may execute the subsequent procedure according to the determination result.
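The reselection loop described above (unknown vessel type leading to selection of a new target image) can be sketched as control flow. All names here, as well as the retry limit, are assumptions made purely for illustration; the patent does not prescribe this interface.

```python
UNKNOWN = "unknown"

def classify_with_reselection(candidates, select_fn, classify_fn, max_tries=5):
    """Retry target-image selection until a known vessel type is found.

    `select_fn` stands in for the first deep learning model's selection
    step and `classify_fn` for the second model's vessel-type decision;
    both are placeholders, and `max_tries` is a hypothetical safeguard.
    """
    rejected = set()
    for _ in range(max_tries):
        target = select_fn(candidates, rejected)
        if target is None:  # no more candidate images to try
            break
        vessel_type = classify_fn(target)
        if vessel_type != UNKNOWN:
            return target, vessel_type
        rejected.add(target)  # target deemed unreasonable; reselect another
    return None, UNKNOWN
```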
Fig. 4 is a schematic diagram illustrating scoring rules and corresponding scoring segments according to an embodiment of the invention. Referring to fig. 3 and fig. 4, the scoring rules 41 and 42 correspond to the left dominant 301 and the right dominant 302, respectively. If the blood vessel type of the target user is the left dominant 301, the blood vessel occlusion states in a plurality of scoring segments marked with the numerals 1 to 15 in the blood vessel pattern can be scored based on the scoring rule 41. Alternatively, if the blood vessel type of the target user is the right dominant 302, the blood vessel occlusion states in a plurality of scoring segments marked with the numerals 1 to 15, 16, and 16a to 16c in the blood vessel pattern can be scored based on the scoring rule 42. Therefore, according to the analysis result of the image 31, the deep learning model 1032 may divide the target blood vessel pattern in the image 31 into a plurality of scoring segments according to one of the scoring rules 41 and 42.
Fig. 5 is a schematic diagram illustrating the division of scoring segments according to an embodiment of the present invention. Referring to fig. 3 to fig. 5, in one embodiment, it is assumed that the blood vessel type of the target user is the right dominant 302. The deep learning model 1032 may divide the blood vessel pattern (i.e., the target blood vessel pattern) in the image 51 into the scoring segments 501 to 505 according to the right dominant 302. The scoring segment 501 corresponds to segment 1 indicated by the scoring rule 42, the scoring segment 502 corresponds to segment 2, the scoring segment 503 corresponds to segment 3, the scoring segment 504 corresponds to segment 4, and the scoring segment 505 corresponds to segments 16 and 16a to 16c. It should be noted that, in another embodiment, if the blood vessel type of the target user is the left dominant 301, the deep learning model 1032 may also divide the target blood vessel pattern in the target image into a plurality of corresponding scoring segments based on the segments 1 to 15 indicated by the scoring rule 41.
Returning to fig. 1, after dividing the target blood vessel pattern in the target image into a plurality of scoring segments, the processor 101 may analyze the output of the deep learning model 1032 through the deep learning model 1033 to obtain the blood vessel state of the target user. For example, the deep learning model 1033 may include a CNN model (e.g., VGGNet or ResNet) or another suitable learning model.
FIG. 6 is a diagram illustrating analysis of images by the third deep learning model according to an embodiment of the invention. Referring to fig. 6, in an embodiment, the deep learning model 1033 can obtain a plurality of monochrome images 601(R), 601(G), and 601(B) corresponding to the target image. For example, the monochrome images 601(R), 601(G), and 601(B) can be obtained by performing color filtering on the target image to present the target image in single colors (e.g., red, green, and blue), respectively. In some cases, the accuracy of analyzing a monochrome image is higher than that of analyzing a color image.
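One simple way to obtain such monochrome images is to split the color channels of the target image. This is only one possible interpretation of the color filtering step, since the patent does not specify the exact method; the function name is likewise an assumption.

```python
import numpy as np

def split_into_monochrome(rgb_image):
    """Split an H x W x 3 array into three single-channel images,
    corresponding to the red, green, and blue components
    (analogous to 601(R), 601(G), and 601(B) in the text)."""
    red = rgb_image[..., 0]
    green = rgb_image[..., 1]
    blue = rgb_image[..., 2]
    return red, green, blue
```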
The deep learning model 1033 may also obtain a plurality of mask images 602(1) to 602(p) corresponding to the divided scoring segments. For example, the processor 101 of fig. 1 may generate the corresponding mask images 602(1) to 602(p) according to the p scoring segments divided by the deep learning model 1032, where p may be between 2 and 25 (corresponding to the SYNTAX scoring criteria). Taking fig. 5 as an example, the mask image 602(1) may be generated according to the divided scoring segment 501 and used to analyze the blood vessel state in the scoring segment 501, the mask image 602(2) may be generated according to the divided scoring segment 502 and used to analyze the blood vessel state in the scoring segment 502, and so on. In one embodiment, the total number (i.e., the value p) of the mask images 602(1) to 602(p) may differ according to whether the blood vessel type of the target user is left dominant or right dominant.
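Generating one binary mask image per scoring segment could look like the sketch below. Representing each segment as a list of pixel coordinates is an assumption made purely for illustration; in practice the segment geometry would come from the second deep learning model's output.

```python
import numpy as np

def masks_from_segments(image_shape, segment_pixels):
    """Build one binary mask image per scoring segment.

    `segment_pixels` is a list of p entries, each a list of (row, col)
    coordinates belonging to that scoring segment; the resulting masks
    play the role of 602(1) to 602(p) in the text.
    """
    masks = []
    for pixels in segment_pixels:
        mask = np.zeros(image_shape, dtype=np.uint8)
        for row, col in pixels:
            mask[row, col] = 1  # mark pixels inside this scoring segment
        masks.append(mask)
    return masks
```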
The deep learning model 1033 may analyze the monochrome images 601(R), 601(G), and 601(B) and the mask images 602(1) to 602(p) and generate evaluation information 603. The evaluation information 603 may reflect the blood vessel state of the target user. For example, the evaluation information 603 may reflect whether the blood vessels within a certain scoring segment present lesions such as total occlusion, trifurcation lesion, bifurcation lesion, aorto-ostial lesion, severe tortuosity, or heavy calcification. Such lesions are defined, for example, in the SYNTAX scoring criteria.
Fig. 7 is a schematic diagram of evaluation information according to an embodiment of the present invention. Referring to fig. 7, the evaluation information 71 may be stored in the storage device 102 of fig. 1 and may be output through an input/output interface (e.g., presented on a display).
In the present embodiment, the evaluation information 71 may record whether any of the lesions 0 to 19 appear in the blood vessels in the scoring segments 1 to 15. If the analysis result reflects that the blood vessels in a certain scoring segment (e.g., scoring segment 1) present a certain lesion (e.g., lesion 0), the cell at the intersection of that scoring segment and that lesion (e.g., scoring segment 1 and lesion 0) can be recorded as T. Conversely, if the analysis result reflects that the blood vessels in a certain scoring segment (e.g., scoring segment 2) do not present a certain lesion (e.g., lesion 19), the cell at the intersection of that scoring segment and that lesion (e.g., scoring segment 2 and lesion 19) can be recorded as F. Thereby, the evaluation information 71 can clearly reflect the blood vessel state of the target user. For example, the evaluation information 71 may record the scoring results of the blood vessel state for one or more scoring segments.
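The T/F table described above can be represented as a simple mapping from (scoring segment, lesion) pairs to labels. The data layout below is a hypothetical sketch, not the patent's actual storage format.

```python
def build_evaluation_table(segments, lesions, detected):
    """Record T or F for every (scoring segment, lesion) pair.

    `detected` is the set of pairs where the analysis found the lesion
    present in that segment; all other cells are recorded as F.
    """
    return {(segment, lesion): ("T" if (segment, lesion) in detected else "F")
            for segment in segments
            for lesion in lesions}
```

For example, `build_evaluation_table(range(1, 16), range(20), {(1, 0)})` would mark the cell for scoring segment 1 and lesion 0 as T and every other cell as F.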
It is noted that, in one embodiment, the evaluation information 71 may also record the association information between at least one scoring segment and at least one lesion in other forms. In another embodiment, the evaluation information 71 may also record more information describing the vascular status of the target user, such as the probability of a lesion occurring in a score segment, and the like, which is not limited by the invention.
In one embodiment, the input images (e.g., the images 21(1) to 21(n) of FIG. 2) may include images captured from different camera angles. After analyzing the images through a plurality of deep learning models (e.g., the deep learning models 1031 to 1033 of FIG. 1), a plurality of scoring results generated by analyzing the images from different camera angles can be obtained and recorded in the evaluation information 71. If analyzing images from different camera angles produces different scoring results for the same scoring segment (also referred to as a target scoring segment), only part of the scoring results may finally be used to describe the blood vessel state of the target scoring segment. For example, the maximum value (i.e., the highest score) for the target scoring segment among all the scoring results can be used as the final scoring result to describe the blood vessel state of the target scoring segment.
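Taking the maximum score per target scoring segment across camera angles can be sketched as follows; the function name and the dict-based representation are assumptions for illustration.

```python
def merge_scores_across_angles(scores_per_angle):
    """Merge per-angle scoring results by keeping, for each scoring
    segment, the maximum (i.e., highest) score as the final result.

    Each element of `scores_per_angle` is a dict mapping a scoring
    segment identifier to the score obtained from one camera angle.
    """
    final_scores = {}
    for angle_scores in scores_per_angle:
        for segment, score in angle_scores.items():
            if segment not in final_scores or score > final_scores[segment]:
                final_scores[segment] = score
    return final_scores
```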
Fig. 8 is a flowchart illustrating a blood vessel state evaluation method according to an embodiment of the present invention. Referring to fig. 8, in step S801, at least one angiographic image corresponding to a target user is obtained. In step S802, the angiographic image is analyzed by a first deep learning model to select a target image from the angiographic image. In step S803, the target image is analyzed by the second deep learning model to determine the blood vessel type of the target user and to divide the target blood vessel pattern in the target image into a plurality of scoring sections. In step S804, the output of the second deep learning model is analyzed by the third deep learning model to obtain the blood vessel state of the target user.
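Steps S801 to S804 can be summarized as the pipeline sketch below. The model interfaces (`select_target`, `analyze`, `evaluate`) are hypothetical stand-ins for the three deep learning models, not an actual API disclosed by the patent.

```python
def assess_vascular_state(angiographic_images, first_model, second_model, third_model):
    """End-to-end sketch of steps S801 to S804.

    S801: `angiographic_images` are the obtained input images.
    S802: the first model selects the target image.
    S803: the second model determines the vessel type and scoring segments.
    S804: the third model produces the final vascular-state evaluation.
    """
    target_image = first_model.select_target(angiographic_images)
    vessel_type, scoring_segments = second_model.analyze(target_image)
    return third_model.evaluate(target_image, vessel_type, scoring_segments)
```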
However, the steps in fig. 8 have been described in detail above and are not described again here. It should be noted that the steps in fig. 8 can be implemented as a plurality of program codes or circuits, and the invention is not limited thereto. In addition, the method of fig. 8 may be used with the above exemplary embodiments or may be used alone, and the invention is not limited thereto.
In summary, after at least one angiographic image corresponding to a target user is obtained, a target image may be selected by analyzing the angiographic image through the first deep learning model. Then, by analyzing the target image through the second deep learning model, the blood vessel type of the target user can be determined and the target blood vessel pattern in the target image can be divided into a plurality of scoring segments. Further, by analyzing the output of the second deep learning model through the third deep learning model, the blood vessel state of the target user can be obtained. Therefore, the evaluation efficiency of the vascular state can be effectively improved.
Although the present invention has been described with reference to the above embodiments, it should be understood that various changes and modifications can be made therein by those skilled in the art without departing from the spirit and scope of the invention.

Claims (12)

1. A vascular condition assessment method, comprising:
obtaining at least one angiographic image corresponding to a target user;
analyzing the at least one angiographic image through a first deep learning model to select a target image from the at least one angiographic image;
analyzing the target image through at least one second deep learning model to determine the blood vessel type of the target user and dividing a target blood vessel pattern in the target image into a plurality of scoring segments; and
analyzing the output of the at least one second deep learning model through a third deep learning model to obtain the blood vessel state of the target user.
2. The vascular condition evaluation method according to claim 1, wherein the step of analyzing the at least one angiographic image by the first deep learning model to select the target image from the at least one angiographic image comprises:
determining, by the first deep learning model, a probability value corresponding to a first image of the at least one angiographic image; and
if the probability value is higher than a preset value, determining the first image as the target image.
3. The blood vessel state evaluation method of claim 1, wherein the blood vessel type of the target user comprises one of a left dominant side and a right dominant side.
4. The vascular condition assessment method of claim 1, wherein the partitioning of the plurality of scoring segments meets the SYNTAX scoring criteria.
5. The vascular condition assessment method of claim 1, wherein the step of analyzing the output of the at least one second deep learning model by the third deep learning model to obtain the vascular condition of the target user comprises:
obtaining a plurality of mask images corresponding to the plurality of scoring segments and a plurality of monochrome images corresponding to the target image; and
analyzing the plurality of mask images and the plurality of monochrome images through the third deep learning model to obtain the blood vessel state of the target user.
6. The method of claim 1, wherein the at least one angiographic image comprises a plurality of images from different camera angles, and the step of analyzing the output of the at least one second deep learning model by the third deep learning model to obtain the vascular status of the target user comprises:
generating a plurality of scoring results corresponding to the target scoring segments; and
taking the maximum value of the plurality of scoring results for the target scoring segment as a final scoring result to describe the blood vessel state of the target scoring segment.
7. A blood vessel state evaluation device comprising:
a storage device for storing at least one angiographic image corresponding to a target user; and
a processor coupled to the storage device,
wherein the processor is configured to analyze the at least one angiographic image through a first deep learning model to select a target image from the at least one angiographic image,
the processor is further configured to analyze the target image through at least one second deep learning model to determine a blood vessel type of the target user and divide a target blood vessel pattern in the target image into a plurality of scoring segments, and
the processor is further configured to analyze an output of the at least one second deep learning model through a third deep learning model to obtain a blood vessel state of the target user.
8. The blood vessel state evaluation device of claim 7, wherein the operation of the processor analyzing the at least one angiographic image through the first deep learning model to select the target image from the at least one angiographic image comprises:
determining, by the first deep learning model, a probability value corresponding to a first image of the at least one angiographic image; and
if the probability value is higher than a preset value, determining the first image as the target image.
9. The blood vessel state evaluation device of claim 7, wherein the blood vessel type of the target user comprises one of a left dominant side and a right dominant side.
10. The blood vessel state evaluation device of claim 7, wherein the division into the plurality of scoring segments conforms to the SYNTAX scoring criteria.
11. The blood vessel state evaluation device of claim 7, wherein the operation of the processor analyzing the output of the at least one second deep learning model through the third deep learning model to obtain the blood vessel state of the target user comprises:
obtaining a plurality of mask images corresponding to the plurality of scoring segments and a plurality of monochrome images corresponding to the target image; and
analyzing the plurality of mask images and the plurality of monochrome images through the third deep learning model to obtain the blood vessel state of the target user.
12. The blood vessel state evaluation device of claim 7, wherein the at least one angiographic image comprises a plurality of images from different camera angles, and the operation of the processor analyzing the output of the at least one second deep learning model through the third deep learning model to obtain the blood vessel state of the target user comprises:
generating a plurality of scoring results corresponding to a target scoring segment; and
taking the maximum value of the plurality of scoring results for the target scoring segment as a final scoring result to describe the blood vessel state of the target scoring segment.
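The frame-selection step in claims 2 and 8 reduces to a threshold rule: the first deep learning model assigns each angiographic frame a probability value, and a frame is taken as the target image when that value exceeds a preset value. The sketch below is illustrative only; the model is stubbed out and the threshold `PRESET_VALUE` is an arbitrary assumption, since the claims fix neither:

```python
PRESET_VALUE = 0.9  # assumed threshold; the patent does not specify a number

def first_model_probability(frame):
    # Stand-in for the first deep learning model's per-frame probability value.
    # Here we just read a stored score; a real model would run inference.
    return frame["score"]

def select_target_images(angiographic_images, threshold=PRESET_VALUE):
    """Keep the frames whose probability value is higher than the preset value."""
    return [f for f in angiographic_images
            if first_model_probability(f) > threshold]

frames = [{"id": 1, "score": 0.95}, {"id": 2, "score": 0.40}]
targets = select_target_images(frames)  # only frame 1 passes the threshold
```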
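Claims 5 and 11 describe the third model's inputs as mask images (one per scoring segment) together with monochrome images derived from the target image. How these are combined is not spelled out in the claims; a common choice for multi-input image models is to stack them as channels of a single tensor, which this hypothetical sketch assumes:

```python
import numpy as np

def build_third_model_input(mask_images, monochrome_images):
    """Stack per-segment mask images and monochrome channels into one
    channel-first (C, H, W) array, a plausible input layout for the
    third deep learning model (an assumption, not stated in the patent)."""
    channels = list(mask_images) + list(monochrome_images)
    return np.stack(channels, axis=0)

masks = [np.zeros((4, 4)), np.ones((4, 4))]  # two scoring-segment masks
monos = [np.full((4, 4), 0.5)]               # one monochrome channel
x = build_third_model_input(masks, monos)    # shape (3, 4, 4)
```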
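Claims 6 and 12 resolve multiple camera angles by scoring a segment once per angle and keeping the maximum as the final result for that segment. A minimal sketch, with made-up segment names and scores:

```python
def final_scores(scores_by_segment):
    """For each scoring segment, the final scoring result is the maximum
    of the scores obtained from the different camera angles."""
    return {segment: max(scores)
            for segment, scores in scores_by_segment.items()}

# illustrative per-angle scores for two hypothetical segments
per_angle = {"segment_a": [2.0, 3.5, 1.0], "segment_b": [0.0, 1.0, 1.0]}
final = final_scores(per_angle)
```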
CN201910682829.7A 2019-07-26 2019-07-26 Blood vessel state evaluation method and blood vessel state evaluation device Pending CN112308813A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910682829.7A CN112308813A (en) 2019-07-26 2019-07-26 Blood vessel state evaluation method and blood vessel state evaluation device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910682829.7A CN112308813A (en) 2019-07-26 2019-07-26 Blood vessel state evaluation method and blood vessel state evaluation device

Publications (1)

Publication Number Publication Date
CN112308813A true CN112308813A (en) 2021-02-02

Family

ID=74328829

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910682829.7A Pending CN112308813A (en) 2019-07-26 2019-07-26 Blood vessel state evaluation method and blood vessel state evaluation device

Country Status (1)

Country Link
CN (1) CN112308813A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005001661A2 (en) * 2003-06-25 2005-01-06 Schlumberger Technology Corporation Method and apparatus and program storage device including an integrated well planning workflow control system with process dependencies
US20090328239A1 (en) * 2006-07-31 2009-12-31 Bio Tree Systems, Inc. Blood vessel imaging and uses therefor
CN104867147A (en) * 2015-05-21 2015-08-26 Beijing University of Technology Automatic SYNTAX scoring method based on coronary angiography image segmentation
CN108577883A (en) * 2018-04-03 2018-09-28 Shanghai Jiao Tong University Coronary artery disease screening device, screening system, and signal feature extraction method
CN108629773A (en) * 2018-05-10 2018-10-09 Beijing Hongyun Zhisheng Technology Co., Ltd. Method for building a convolutional neural network dataset for training cardiovascular type recognition
CN109658407A (en) * 2018-12-27 2019-04-19 Shanghai United Imaging Healthcare Co., Ltd. Scoring method, device, server, and storage medium for coronary artery lesions

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
NI SEN; FU DONGMEI; DING YE: "Hemorrhage feature extraction based on different color channels of fundus images", no. 10 *
ZHANG QINYI: "Surgical Treatment of Ischemic Cerebrovascular Disease: Carotid Endarterectomy", People's Military Medical Press, pages: 16 - 17 *
HUANG WENBO ET AL.: "Automatic blood vessel detection method for color retinal fundus images", pages: 1378 - 1386 *

Similar Documents

Publication Publication Date Title
TWI698225B (en) Blood vessel status evaluation method and blood vessel status evaluation device
EP3730040A1 (en) Method and apparatus for assisting in diagnosis of cardiovascular disease
US12042247B2 (en) System and method for determining coronary artery tissue type based on an OCT image and using trained engines
US11342078B2 (en) Blood vessel status evaluation method and blood vessel status evaluation device
CN105380598B (en) Method and system for the automatic treatment planning for arteriarctia
US11051779B2 (en) Processing image frames of a sequence of cardiac images
US10997720B2 (en) Medical image classification method and related device
CN115546570A (en) Blood vessel image segmentation method and system based on three-dimensional depth network
KR102217392B1 (en) Apparatus and method for learning coronary artery diagnosis image, diagnosis apparatus and method for stenosed lesion of coronary artery having significant diffrences using the learning model costructed by the same
KR102204371B1 (en) Learning method for generating multiphase collateral image and multiphase collateral imaging method using machine learning
CN114947916A (en) Method and device for calculating SYNTAX score of coronary artery lesion
CN112308813A (en) Blood vessel state evaluation method and blood vessel state evaluation device
CN112381012A (en) Method and device for identifying target region in eye image and electronic equipment
CN112307804B (en) Vascular state evaluation method and vascular state evaluation device
CN115761132A (en) Method and device for automatically reconstructing three-dimensional model of coronary artery
CN113648059B (en) Surgical plan evaluation method, computer device, and storage medium
CN116228781A (en) Coronary vessel segmentation method, device, electronic equipment and storage medium
CN115082442A (en) Method and device for predicting stent implantation effect, electronic equipment and storage medium
CN113610800A (en) Device for assessing collateral circulation, non-diagnostic method and electronic apparatus
CN114119645A (en) Method, system, device and medium for determining image segmentation quality
CN113436709A (en) Image display method and related device and equipment
RU2475833C2 (en) Sample-based filter
CN118154589B (en) Method for detecting blood vessel density of middle cerebral artery based on intracranial CTA image, computer equipment, readable storage medium and program product
US20240306926A1 (en) Method for predicting fractional flow reserve on basis of machine learning
WO2023068049A1 (en) Prediction system, prediction device, and prediction method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210324

Address after: 8F, No. 88, Xintai 5th Road, Xizhi District, New Taipei City, Taiwan

Applicant after: Acer Inc.

Applicant after: FAR EASTERN MEMORIAL Hospital

Address before: 8F, No. 88, Xintai 5th Road, Xizhi District, New Taipei City, Taiwan

Applicant before: Acer Inc.

WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210202