CN113012146B - Vascular information acquisition method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN113012146B
CN113012146B (application CN202110388354.8A)
Authority
CN
China
Prior art keywords
feature
image
blood vessel
fusion
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110388354.8A
Other languages
Chinese (zh)
Other versions
CN113012146A (en)
Inventor
康雁
杨英健
郭英委
冯孟婷
李强
杨超然
柴东
郭嘉琦
张智超
Original Assignee
东北大学 (Northeastern University)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 东北大学 (Northeastern University)
Priority to CN202110388354.8A
Publication of CN113012146A
Application granted
Publication of CN113012146B
Legal status: Active
Anticipated expiration

Classifications

    • G06T 7/0012 Biomedical image inspection (G06T 7/00 Image analysis; G06T 7/0002 Inspection of images, e.g. flaw detection)
    • G06F 18/253 Fusion techniques of extracted features (G06F 18/00 Pattern recognition; G06F 18/25 Fusion techniques)
    • G06N 3/045 Combinations of networks (G06N 3/02 Neural networks; G06N 3/04 Architecture, e.g. interconnection topology)
    • G06N 3/08 Learning methods (G06N 3/02 Neural networks)
    • G06T 7/11 Region-based segmentation (G06T 7/10 Segmentation; edge detection)
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components (G06V 10/40 Extraction of image or video features)
    • G06T 2207/20081 Training; learning (G06T 2207/20 Special algorithmic details)
    • G06T 2207/20084 Artificial neural networks [ANN] (G06T 2207/20 Special algorithmic details)
    • G06T 2207/30101 Blood vessel; artery; vein; vascular (G06T 2207/30 Subject of image; G06T 2207/30004 Biomedical image processing)
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management (Y02D Climate change mitigation technologies in information and communication technologies)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Medical Informatics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

The disclosure relates to a blood vessel information acquisition method and device, an electronic device, and a storage medium, wherein the method comprises the following steps: acquiring a medical image of a tissue or organ, wherein the medical image is a single-mode image or a multi-mode image; performing blood vessel extraction processing on the medical image to obtain blood vessel morphology information, wherein the blood vessel morphology information comprises at least two kinds of information related to blood vessel morphology; and determining the blood vessel morphology information of each partition based on the position information of each partition in the medical image. Embodiments of the disclosure can conveniently obtain the blood vessel morphology information of different partitions.

Description

Vascular information acquisition method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of biomedical technologies, and in particular, to a method and apparatus for acquiring vascular information, an electronic device, and a storage medium.
Background
The blood circulation of the human body delivers nutrients to all parts of the body, and blood vessels are the carriers of that blood. When blood vessels change, blood transport changes as well, affecting the circulation and potentially harming the body. Detection of vascular conditions has therefore long been an active research direction.
In the related art, blood vessel information is usually detected for a tissue or organ as a whole; abnormal conditions of blood vessels, however, usually occur in a local region. Accurately detecting and analyzing the blood vessel parameters of different partitions of a tissue or organ can, on the one hand, reduce unnecessary consumption of computing resources and, on the other hand, allow the parameters of a specific partition to be analyzed in a targeted manner, providing a basis for evaluating the blood vessel parameters of different partitions and bringing convenience to clinical diagnosis and treatment.
Disclosure of Invention
The disclosure provides a blood vessel information acquisition method and device, an electronic device, and a storage medium, with which the blood vessel information of different regions of a tissue or organ can be comprehensively analyzed and detection accuracy can be improved.
According to an aspect of the present disclosure, there is provided a blood vessel information acquisition method including:
acquiring a medical image of a tissue organ, wherein the medical image is a single-mode image or a multi-mode image;
performing blood vessel extraction processing on the medical image to obtain blood vessel morphology information, wherein the blood vessel morphology information comprises at least two kinds of information related to blood vessel morphology;
and determining the blood vessel morphology information of each partition based on the position information of each partition in the medical image.
In some possible embodiments, the performing a blood vessel extraction process on the medical image to obtain blood vessel morphology information includes:
extracting multi-scale image features of the medical image by using a multi-mode blood vessel segmentation model;
performing feature fusion processing on the multi-scale image features to obtain fusion features;
and performing blood vessel extraction based on the fusion characteristics to obtain the blood vessel morphology information.
In some possible embodiments, in a case where the medical image is a single-mode image, performing feature fusion processing on the multi-scale image feature to obtain a fused feature, including:
performing convolution processing on the ith image feature and performing feature updating processing on the (i+1) th image feature according to the order of the scales of the multi-scale image features of the single-mode image from small to large, wherein i is an integer greater than zero and less than or equal to N, and N represents the number of the multi-scale image features;
based on the feature update processing of the last image feature, obtaining the fusion feature;
and/or
Under the condition that the medical image is a multi-mode image, performing feature fusion processing on the multi-scale image features to obtain fusion features, wherein the feature fusion processing comprises the following steps:
for the medical image of any mode in the multi-mode image, performing convolution processing on the ith image feature and performing feature updating processing on the (i+1)th image feature according to the order from small scale to large scale of the multi-scale image features of the image of that mode, wherein i is an integer greater than zero and less than or equal to N, and N represents the number of the multi-scale image features of the medical image of that mode;
based on the feature update processing of the last image feature of any mode image, obtaining the fusion feature of any mode image;
performing connection processing on the fusion characteristics of at least two modal images in the multi-modal images, and performing convolution processing on the connected characteristics to obtain the fusion characteristics of the multi-modal images;
wherein the performing convolution processing on the ith image feature and performing feature update processing on the (i+1) th image feature includes:
performing convolution processing on the ith image feature to obtain a first convolution feature with the same scale as the (i+1) th image feature;
and obtaining the updated (i+1) th image feature by using the weighted sum of the first convolution feature and the (i+1) th image feature.
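The update step above can be sketched minimally in numpy. The nearest-neighbour upsampling and the equal 0.5/0.5 weights are assumptions made for illustration; the text specifies only a convolution producing a first convolution feature at the scale of the (i+1)th feature, followed by a weighted sum.

```python
import numpy as np

def update_feature(f_i, f_next, w=0.5):
    # Stand-in for the convolution that brings the ith feature to the
    # scale of the (i+1)th feature: nearest-neighbour upsampling.
    k = f_next.shape[0] // f_i.shape[0]
    first_conv_feature = np.kron(f_i, np.ones((k, k)))
    # Weighted sum of the first convolution feature and the (i+1)th feature
    # yields the updated (i+1)th feature.
    return w * first_conv_feature + (1 - w) * f_next
```

With `f_i` of shape (2, 2) and `f_next` of shape (4, 4), the result has the (i+1)th scale, as the claim requires.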
In some possible embodiments, the performing blood vessel extraction based on the fusion feature, to obtain the blood vessel morphology information, includes:
inputting the fusion characteristics into a first classifier to obtain a blood vessel position;
obtaining a vascular feature based on the fusion feature and the vascular location;
and inputting the blood vessel features into at least one second classifier to obtain blood vessel morphology information other than the blood vessel position corresponding to the blood vessel features, wherein each second classifier is used for obtaining a different kind of blood vessel morphology information.
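A hedged sketch of this classifier chain follows. The gating step `fusion * position` is an assumption; the text says only that the vessel feature is obtained based on the fusion feature and the vessel position, and the classifier callables here are illustrative stand-ins.

```python
import numpy as np

def extract_morphology(fusion, first_classifier, second_classifiers):
    # First classifier: fusion feature -> vessel position (here a binary mask).
    position = first_classifier(fusion)
    # Vessel feature from the fusion feature and the vessel position
    # (assumed here to be element-wise gating).
    vessel_feature = fusion * position
    # Each second classifier yields one further kind of morphology information.
    return position, [clf(vessel_feature) for clf in second_classifiers]
```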
In some possible embodiments, determining vessel morphology information for each partition based on the location information for each partition within the medical image includes:
acquiring the position information of each partition in the medical image;
determining blood vessel morphology information of each partition based on the position information of each partition;
the obtaining the position information of each partition in the medical image includes at least one of the following modes:
determining the position information of each partition in the medical image by using a classification network;
and determining the position information of each partition in the medical image by using a standard partition template.
According to a second aspect of the present disclosure, there is provided a blood vessel information obtaining apparatus including:
The acquisition module is used for acquiring medical images of tissue and organs, wherein the medical images are single-mode images or multi-mode images;
the segmentation module is used for executing blood vessel extraction processing on the medical image by utilizing a multimode blood vessel segmentation model to obtain blood vessel morphology information, wherein the blood vessel morphology information comprises at least two kinds of information related to blood vessel morphology;
and the determining module is used for determining the blood vessel morphology information of each partition based on the position information of each partition in the medical image.
In some possible embodiments, the segmentation module includes a multi-mode vessel segmentation model, and extracts multi-scale image features of the medical image using the multi-mode vessel segmentation model; performing feature fusion processing on the multi-scale image features to obtain fusion features; and performing blood vessel extraction based on the fusion characteristics to obtain the blood vessel morphology information.
In some possible embodiments, the segmentation module is configured to perform feature fusion processing on the multi-scale image feature to obtain a fused feature when the medical image is a single-mode image, including:
performing convolution processing on the ith image feature and performing feature updating processing on the (i+1) th image feature according to the order of the scales of the multi-scale image features of the single-mode image from small to large, wherein i is an integer greater than zero and less than or equal to N, and N represents the number of the multi-scale image features;
based on the feature update processing of the last image feature, obtaining the fusion feature;
and/or
Under the condition that the medical image is a multi-mode image, performing feature fusion processing on the multi-scale image features to obtain fusion features, wherein the feature fusion processing comprises the following steps:
for the medical image of any mode in the multi-mode image, performing convolution processing on the ith image feature and performing feature updating processing on the (i+1) th image feature according to the order from small scale to large scale of the multi-scale image feature of the image of any mode, wherein i is an integer greater than zero and less than or equal to N, and N represents the number of the multi-scale image features of the medical image of any mode;
based on the feature update processing of the last image feature of any mode image, obtaining the fusion feature of any mode image;
performing connection processing on the fusion characteristics of at least two modal images in the multi-modal images, and performing convolution processing on the connected characteristics to obtain the fusion characteristics of the multi-modal images;
wherein the performing convolution processing on the ith image feature and performing feature update processing on the (i+1) th image feature includes:
performing convolution processing on the ith image feature to obtain a first convolution feature with the same scale as the (i+1)th image feature;
and obtaining the updated (i+1) th image feature by using the weighted sum of the first convolution feature and the (i+1) th image feature.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the instructions stored in the memory to perform the method of any of the first aspects.
According to a fourth aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions, characterized in that the computer program instructions, when executed by a processor, implement the method of any one of the first aspects.
In embodiments of the present disclosure, blood vessel extraction is performed on a medical image of a tissue or organ (such as the brain or kidneys) to obtain blood vessel morphology information, which may be any information related to blood vessel morphology. The blood vessel morphology information of each partition can then be obtained using the partition information of the tissue or organ. On the one hand, determining blood vessel morphology information (such as the blood vessel position, center line, and branch points) by combining image information of multiple modes improves detection accuracy; on the other hand, the blood vessel morphology within multiple partitions of the tissue can be obtained separately, facilitating subsequent analysis of the blood vessel state of each partition.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the technical aspects of the disclosure.
FIG. 1 illustrates a flow chart of vessel information acquisition according to an embodiment of the present disclosure;
FIG. 2 shows a flowchart of step S20 according to an embodiment of the present disclosure;
FIG. 3 illustrates a schematic diagram of a deep learning network model architecture according to an embodiment of the present disclosure;
FIG. 4 shows a flowchart of step S30 in an embodiment according to the present disclosure;
fig. 5 shows a block diagram of a blood vessel information acquisition apparatus according to an embodiment of the present disclosure;
fig. 6 shows a block diagram of an electronic device 800 according to an embodiment of the disclosure;
fig. 7 illustrates a block diagram of another electronic device 1900 in accordance with an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the disclosure will be described in detail below with reference to the drawings. In the drawings, like reference numbers indicate identical or functionally similar elements. Although various aspects of the embodiments are illustrated in the accompanying drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" is herein merely an association relationship describing an associated object, meaning that there may be three relationships, e.g., a and/or B, may represent: a exists alone, A and B exist together, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, numerous specific details are set forth in the following detailed description in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements, and circuits well known to those skilled in the art have not been described in detail in order not to obscure the present disclosure.
The execution subject of the blood vessel information acquisition method may be an image processing apparatus; for example, the method may be executed by a terminal device, a server, or another processing device, wherein the terminal device may be user equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, an in-vehicle device, a wearable device, or the like. In some possible implementations, the method may be implemented by a processor invoking computer-readable instructions stored in a memory.
Fig. 1 shows a flowchart of blood vessel information acquisition according to an embodiment of the present disclosure, as shown in fig. 1, including:
s10: acquiring a medical image of a tissue organ, wherein the medical image is a single-mode image or a multi-mode image;
in some possible embodiments, different images may be acquired for different tissue organs. Tissue organs may include the brain and kidneys, such as for the brain, medical images may be CT (computed tomography), MRI (magnetic resonance imaging), MRA (magnetic resonance angiography), etc., for the kidneys, medical images may be CT, X-rays, ultrasound, etc., which is not specifically limited by the present disclosure. It should be noted that, in the embodiment of the present disclosure, the medical image used may be an image obtained in a non-perfusion manner, and the detection of blood vessel information and the evaluation of state are realized by processing an image that does not cause damage to an organ, etc., so that the method has a higher application value.
In addition, in embodiments of the disclosure the medical image may be a multi-mode image and may include at least one image; fusing the image features of multiple modes can improve the accuracy of subsequent blood vessel extraction. When blood vessel information is acquired from a multi-mode image, registration can first be performed on the multi-mode image to reduce errors between the images.
S20: performing blood vessel extraction processing on the medical image to obtain blood vessel morphology information, wherein the blood vessel morphology information comprises at least two kinds of information related to blood vessel morphology;
In some possible embodiments, the blood vessel morphology information includes at least two of the blood vessel position, branch points, progression (number of stages), and center line; the morphology information corresponding to arteries and veins may further be distinguished, which the present disclosure does not specifically limit. Blood vessel parameters such as position, length, volume, curvature, diameter (inner diameter, outer diameter, wall thickness), branch point positions, and progression can conveniently be obtained from the blood vessel morphology information, which the present disclosure likewise does not specifically limit.
In addition, in embodiments of the disclosure, the vessel morphology information may be extracted with a conventional algorithm or with a deep learning network model. A conventional algorithm may be a threshold method, in which pixels whose values exceed a set threshold are determined to be vessel pixels; this yields the blood vessel positions in the medical image, from which the center line position, vessel progression, and branch point positions can then be determined. The threshold set for different medical images may differ and can be chosen by those skilled in the art as required; for an MRA image, for example, the threshold may be set to a value larger than 270. Embodiments of the present disclosure may also perform vessel extraction with other algorithms, such as vessel enhancement via the Hessian matrix, or extraction with a hidden Markov model, to obtain the vessel position and then detect the center line position, vessel progression, and branch point positions. The center line can be determined as the midline of the lumen formed by the vessel positions, and the vessel progression may be taken as the number of vessel ends. The foregoing is merely exemplary and is not intended to be limiting in any way.
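The threshold method and the counting of vessel ends and branch points described above can be sketched in numpy. The 270 threshold is the MRA example value from the text; the naive 4-neighbourhood rule (3+ neighbours is a branch point, exactly 1 neighbour is a vessel end) is an illustration and assumes a one-pixel-wide, already-skeletonized vessel mask.

```python
import numpy as np

def threshold_vessels(image, thresh=270):
    # Pixels whose values exceed the set threshold are determined to be vessels.
    return image > thresh

def branch_and_end_points(skeleton):
    # 4-neighbourhood count (up, down, left, right) on a boolean skeleton.
    padded = np.pad(skeleton.astype(int), 1)
    nb = (padded[:-2, 1:-1] + padded[2:, 1:-1]
          + padded[1:-1, :-2] + padded[1:-1, 2:])
    branches = skeleton & (nb >= 3)   # 3 or more neighbours: branch point
    ends = skeleton & (nb == 1)       # exactly one neighbour: vessel end
    return branches, ends
```

On a plus-shaped skeleton, this yields one branch point (the crossing) and four vessel ends, the latter serving as a stand-in for the vessel progression count.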
In addition, when the blood vessel extraction process is performed through the deep learning network model, the deep learning network model may include a feature extraction module, a feature fusion module, and a classification module, to implement classification and identification of blood vessel morphology information.
S30: and determining the blood vessel morphology information of each partition based on the position information of each partition in the medical image.
In some possible embodiments, position information of each partition in the medical image may be acquired, where the partitions are different tissue regions in the tissue or organ. When the medical image is a kidney image, the partitions may include a left kidney partition and a right kidney partition; when the medical image is a brain image, the partitions may include a left brain partition, a right brain partition, a white matter region, a grey matter region, a cerebellum region, the brainstem, the temporal lobe, the frontal lobe, and so on. The present disclosure does not specifically limit these partitions, which may be set according to actual needs.
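As a minimal sketch of obtaining per-partition vessel information, the extracted vessel mask can be intersected with each partition mask. The mask intersection and the voxel count are illustrative assumptions standing in for the richer morphology information described in the text.

```python
import numpy as np

def partition_vessel_info(vessel_mask, partition_masks):
    # Intersect the extracted vessel mask with each partition mask and
    # report a per-partition quantity (here: vessel pixel/voxel count).
    return {name: int((vessel_mask & mask).sum())
            for name, mask in partition_masks.items()}
```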
In other embodiments of the present disclosure, the determination of the partition may also be determined according to the received frame selection information, that is, during the actual application, a selection operation on the medical image may be received, and the partition information may be acquired through the selection operation. Wherein the selecting operation may include obtaining the selected area using a rectangular box, a circular box, or a manually drawn closed figure.
With this configuration, embodiments of the present disclosure determine blood vessel morphology information (such as the blood vessel position, center line, and branch points) by combining image information of multiple modes, improving detection accuracy, and can obtain the blood vessel morphology within multiple partitions of a tissue separately, facilitating subsequent analysis of the blood vessel state of each partition.
Specific embodiments of the disclosure are described below with reference to the accompanying drawings.
First, embodiments of the present disclosure acquire a medical image of a tissue or organ, which may be the kidneys or the brain, although the disclosure is not limited thereto and any other tissue or organ is possible. The medical image may be a single-mode image or a multi-mode image. For a single-mode image, CT or MRA is preferred to improve the accuracy of vessel extraction; a multi-mode image may be any combination of image types.
According to the embodiment of the disclosure, the medical image to be subjected to blood vessel information acquisition can be obtained through communication connection with a server or equipment for acquiring the medical image.
Once the medical images are obtained, each image may be preprocessed. Preprocessing may include denoising, normalization, and standardization: denoising may use a Gaussian denoising algorithm, normalization scales the pixel values of the medical image to the range [0, 255], and standardization resizes the medical images to the same size, such as 512×512×600; the present disclosure is not specifically limited in this respect.
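The normalization and standardization steps can be sketched as follows. The nearest-neighbour resizing is a stand-in for whatever resampling is actually used, the Gaussian denoising step is omitted, and the function name `preprocess` is illustrative.

```python
import numpy as np

def preprocess(image, out_shape=(512, 512)):
    # Normalization: scale pixel values to the range [0, 255].
    lo, hi = float(image.min()), float(image.max())
    norm = (image - lo) / (hi - lo + 1e-8) * 255.0
    # Standardization: resize to a fixed size (nearest-neighbour stand-in).
    ys = np.arange(out_shape[0]) * image.shape[0] // out_shape[0]
    xs = np.arange(out_shape[1]) * image.shape[1] // out_shape[1]
    return norm[np.ix_(ys, xs)]
```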
In addition, if the medical image is a single-mode image, vessel extraction can be performed directly on the preprocessed image; if it is a multi-mode image, the images of the different modes can be registered to obtain the transformation relationships between them. Registration may be based on key feature points, using algorithms such as FAST, SUSAN, SIFT, or Harris, which the present disclosure does not specifically limit.
Further, a blood vessel extraction process may be performed on the medical image to acquire blood vessel morphology information. As described above, the embodiments of the present disclosure may segment a blood vessel using a conventional algorithm, and further obtain information such as a blood vessel center line position, a blood vessel branch point position, and a blood vessel progression using the obtained blood vessel position. In addition, embodiments of the present disclosure may also perform vessel extraction through a deep learning network model. Fig. 2 shows a flowchart of step S20 according to an embodiment of the present disclosure, and fig. 3 shows a schematic diagram of a deep learning network model structure according to an embodiment of the present disclosure.
As shown in fig. 2 and 3, the deep learning network model includes a feature extraction module, a feature fusion module, and a classification module. The performing a blood vessel extraction process on the medical image to obtain blood vessel morphology information includes:
S21: extracting multi-scale image features of the medical image by using a multi-mode blood vessel segmentation model;
s22: performing feature fusion processing on the multi-scale image features to obtain fusion features;
s23: and performing blood vessel extraction based on the fusion characteristics to obtain the blood vessel morphology information.
In some possible implementations, the multi-scale features may be extracted using a feature extraction module. Specifically, the feature extraction module may include a forward feature extraction unit that reduces a feature scale by performing a convolution process, and a reverse feature extraction unit that increases a feature scale by performing a convolution process, and convolution kernels employed for the forward feature extraction and the reverse feature extraction may be different.
In one example, as shown in fig. 3, multi-scale features F1, F2, F3, F4, and F5 are extracted by the forward extraction process, where F1 is the initial feature of the image input to the model. Convolution processing is performed on the initial feature to reduce the feature scale; for example, 1×1 convolutions may be applied sequentially to obtain features F2 to F5, where the scale of F2 is one half that of F1, F3 one quarter, F4 one eighth, and F5 one sixteenth. The above is merely exemplary; in other embodiments the number of features obtained is not limited and may be set as needed.
In addition, in the reverse feature extraction process, features obtained by the forward extraction process can be fused at intervals to obtain multi-scale features in the reverse direction. For example, starting with the smallest-scale feature F5, a convolution may be performed on F5 that increases its scale so that the processed feature has the same scale as F3; the convolved F5 is then added to F3 to obtain feature F6. Similarly, feature F4 is convolved so that the resulting feature has the same scale as F2, and the convolved F4 is added to F2 to obtain feature F7. That is, in the reverse extraction process, a convolution (e.g., a 3×3 convolution) may be applied to the features of each scale in order from small scale to large scale so as to obtain features spaced one scale apart from the original scale, and the two features of the same scale are then added to obtain the reverse features, which include at least two features of different scales.
The embodiment of the disclosure takes the multi-scale features after the reverse feature extraction and the initial features of the original image as the obtained multi-scale image features after the feature extraction, such as F1, F6 and F7.
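The forward/reverse extraction above can be sketched as follows. This is an illustrative NumPy sketch and not the patented implementation: plain 2× subsampling and nearest-neighbour upsampling stand in for the scale-changing convolutions, and the 64×64 input size is an arbitrary assumption.

```python
import numpy as np

def forward_extract(f1, levels=4):
    """Forward path sketch: each step halves the spatial scale of the
    previous feature. The patent uses convolutions for this; plain 2x
    subsampling stands in here just to illustrate the scale pyramid."""
    feats = [f1]
    for _ in range(levels):
        feats.append(feats[-1][::2, ::2])
    return feats  # F1 .. F5

def upsample(x, factor):
    """Nearest-neighbour upsampling, standing in for the scale-increasing
    convolution (e.g. 3x3) of the reverse path."""
    return x.repeat(factor, axis=0).repeat(factor, axis=1)

f1 = np.random.rand(64, 64)
F1, F2, F3, F4, F5 = forward_extract(f1)

# Interval fusion: each reverse feature skips one scale when fusing.
F6 = upsample(F5, 4) + F3   # 1/16 scale raised to the 1/4 scale of F3
F7 = upsample(F4, 4) + F2   # 1/8 scale raised to the 1/2 scale of F2

multi_scale = [F1, F6, F7]  # output of the feature extraction module
print([f.shape for f in multi_scale])
```

The skipped-scale additions are the "fusion at intervals" of the text: F5 is paired with F3 and F4 with F2, rather than with their adjacent scales.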
Based on the above configuration, the embodiment of the disclosure adopts both forward and reverse feature extraction when performing feature extraction processing, so that the feature information of the medical image at different scales is fully extracted, providing a basis for the subsequent classification process. In addition, because feature fusion in the reverse process is performed at intervals, the number of output features is reduced while features at the different scales are fully retained; relative to fusing adjacent scales, this speeds up processing, shortens computation time, and reduces the amount of data for the subsequent feature fusion.
In some possible embodiments, the feature fusion module may be used to perform feature fusion processing in the case of obtaining the multi-scale image feature. The feature fusion process of the embodiment of the disclosure can be used as a feature update process, and specifically, feature fusion can be sequentially performed by using features of adjacent scales.
In one example, in order of scale from small to large, convolution processing may be performed on the i-th image feature and feature update processing on the (i+1)-th image feature, where i is an integer greater than zero and less than or equal to N, and N represents the number of multi-scale image features; the fusion feature is obtained based on the feature update processing of the last image feature. Here, performing convolution processing on the i-th image feature and feature update processing on the (i+1)-th image feature includes: performing convolution processing on the i-th image feature to obtain a first convolution feature with the same scale as the (i+1)-th image feature; and obtaining the updated (i+1)-th image feature as a weighted sum of the first convolution feature and the (i+1)-th image feature.
As shown in fig. 3, with the multi-scale image features F7, F6 and F1 processed in order of scale from small to large, when feature fusion is performed, convolution processing may first be performed on F7 to obtain a feature with the same scale as the adjacent F6, which is added to F6 to update it and obtain F8. Convolution processing is then performed on F8 so that its scale matches that of feature F1, and the convolved F8 is added to F1 to update F1 and obtain F9; in the example of fig. 3, F9 serves as the finally obtained fusion feature.
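The small-to-large fusion chain described above can be sketched generically as below. This is a hedged NumPy stand-in: nearest-neighbour upsampling replaces the scale-matching convolution, the weighted sum uses an assumed weight of 0.5, and the feature sizes are illustrative.

```python
import numpy as np

def fuse(features, w=0.5):
    """Fuse multi-scale features from smallest to largest scale: the
    running feature is upsampled to the next scale (a stand-in for the
    patent's convolution) and combined with it by a weighted sum."""
    feats = sorted(features, key=lambda f: f.shape[0])
    out = feats[0]
    for nxt in feats[1:]:
        factor = nxt.shape[0] // out.shape[0]
        up = out.repeat(factor, axis=0).repeat(factor, axis=1)
        out = w * up + (1 - w) * nxt   # feature update of the next scale
    return out

# Illustrative features at full, 1/4 and 1/2 scale
F1 = np.ones((64, 64)); F6 = np.ones((16, 16)); F7 = np.ones((32, 32))
F9 = fuse([F1, F6, F7])   # final fusion feature, same scale as F1
print(F9.shape)
```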
The above embodiment realizes feature fusion for a single-modality image; for multi-modality images, feature fusion can be realized in either of two ways.
In one example, in a case where the medical image is a multi-modal image, performing feature fusion processing on the multi-scale image features to obtain fusion features, including: for the medical image of any mode in the multi-mode image, performing convolution processing on the ith image feature and performing feature updating processing on the (i+1) th image feature according to the order from small scale to large scale of the multi-scale image feature of the image of any mode, wherein i is an integer greater than zero and less than or equal to N, and N represents the number of the multi-scale image features of the medical image of any mode; based on the feature update processing of the last image feature of any mode image, obtaining the fusion feature of any mode image; performing connection processing on the fusion characteristics of at least two modal images in the multi-modal images, and performing convolution processing on the connected characteristics to obtain the fusion characteristics of the multi-modal images; wherein the performing convolution processing on the ith image feature and performing feature update processing on the (i+1) th image feature includes: performing convolution processing on the ith image feature to obtain a first convolution feature with the same scale as the (i+1) th image feature; and obtaining the updated (i+1) th image feature by using the weighted sum of the first convolution feature and the (i+1) th image feature.
That is, for multi-mode medical images, feature fusion processing can be performed on images of any one mode in the multi-mode medical images respectively to obtain fusion features of the medical images of each mode, then the fusion features of the medical images of each mode can be connected, and convolution processing is performed to obtain final fusion features of the multi-mode images.
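The per-modality fusion followed by connection (concatenation) and convolution can be sketched as below; the channel-weighted sum is an illustrative stand-in for the convolution over the connected features, and the two modality features are synthetic.

```python
import numpy as np

def fuse_modalities(per_modality_feats, weights):
    """Connect per-modality fusion features along a channel axis and mix
    them with a 1x1-convolution-like weighted sum across channels. The
    channel weights here are illustrative placeholders."""
    stacked = np.stack(per_modality_feats, axis=0)   # (C, H, W) channels
    w = np.asarray(weights).reshape(-1, 1, 1)
    return (w * stacked).sum(axis=0)                 # (H, W) fused feature

ct_feat = np.full((64, 64), 2.0)   # e.g. a CT-modality fusion feature
mr_feat = np.full((64, 64), 4.0)   # e.g. an MR-modality fusion feature
fused = fuse_modalities([ct_feat, mr_feat], [0.5, 0.5])
print(fused.shape, fused[0, 0])
```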
Alternatively, in another example, the multi-modal medical images may be connected before the feature extraction process is performed; the connected images serve as the input of the feature extraction module, the multi-scale features of the multi-modal images are then obtained, and the fusion features are obtained through the feature fusion process.
Based on this configuration, the embodiment of the disclosure can fully fuse the extracted multi-scale information and improve the feature precision.
Once the fusion feature is obtained, extraction of the blood vessel morphology information, i.e. classification of the blood vessel morphology information, may further be performed by means of the classification module. In the embodiment of the disclosure, the fusion feature can be directly input into a classifier, and the classification of all blood vessel morphology information is obtained using that classifier. In embodiments of the present disclosure, the vessel morphology information may include the vessel position, vessel centerline position, vessel branch point position, vessel progression, and the like.
In addition, in the embodiment of the present disclosure, classification of the blood vessel morphology information may also be implemented by a plurality of classifiers, where each classifier is used for classifying and identifying different blood vessel morphology information. The performing blood vessel extraction based on the fusion feature to obtain the blood vessel morphology information includes: inputting the fusion characteristics into a first classifier to obtain a blood vessel position; obtaining a vascular feature based on the fusion feature and the vascular location; and inputting the blood vessel characteristics into at least one second classifier to obtain blood vessel morphology information except for the blood vessel positions corresponding to the blood vessel characteristics, wherein each second classifier is used for obtaining different blood vessel morphology information.
In one example, the input fusion feature may be classified by the first classifier to identify the vessel positions corresponding to the fusion feature: the first classifier outputs, for each pixel in the fusion feature, the probability that it corresponds to a vessel, and a pixel is determined to be a vessel if its probability is greater than a probability threshold. The pixel positions representing vessels in the fusion feature can be obtained in this way, and the corresponding vessel positions can then be determined in the original medical image; since the fusion feature has the same dimensions as the medical image, the vessel positions coincide.
Where the blood vessel position is obtained through the first classifier, the vessel feature can be obtained as the product of a mask map representing the vessel position and the fusion feature, where the pixel value at vessel positions in the mask map is 1 and zero elsewhere; the product thus yields the vessel feature corresponding to the vessel positions. The vessel feature may then be input into a plurality of second classifiers, each for classifying different vessel morphology information. For example, the second classifier A may classify the vessel centerline, the second classifier B the vessel branch points, and the second classifier C arteries versus veins; a classifier for the vessel progression may also be included, which is not particularly limited in this disclosure.
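The threshold-then-mask step can be illustrated with a small NumPy example; the 0.5 probability threshold and the 2×2 arrays are assumptions for illustration only.

```python
import numpy as np

prob_threshold = 0.5                        # illustrative threshold
prob = np.array([[0.9, 0.2], [0.4, 0.8]])   # first-classifier vessel probabilities
fusion_feat = np.array([[1.0, 2.0], [3.0, 4.0]])

mask = (prob > prob_threshold).astype(float)   # 1 at vessel pixels, 0 elsewhere
vessel_feat = mask * fusion_feat               # product keeps only vessel responses
print(vessel_feat)
```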
In another embodiment of the present disclosure, each second classifier may further classify the blood vessel morphology information based on both the fusion feature and the vessel feature, and fuse the classification results obtained from the two features to obtain optimized blood vessel morphology information. Specifically, the second classifier A may perform classification and identification of the vessel centerline based on the fusion feature and the vessel feature respectively, obtaining a first centerline and a second centerline; according to the embodiment of the disclosure, a least-squares fit of the first and second centerlines then yields the final vessel centerline position, improving the classification precision. Likewise, the second classifier B may identify vessel branch points based on the fusion feature, the vessel feature, and the vessel centerline position respectively, obtaining a first, second, and third branch point position; the optimized branch point position is then the mean of adjacent first, second, and third branch point positions. In addition, the second classifier C may classify arteries and veins based on the fusion feature and the vessel feature respectively, obtaining a first and second arterial position and a first and second venous position; a linear (least-squares) fit of the first and second arterial positions yields the optimized arterial position, and a linear fit of the first and second venous positions yields the optimized venous position.
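The result-fusion step for the centerline and branch points might look as follows. This sketch assumes a straight centerline so a degree-1 polynomial fit suffices (a real centerline would need a curve or spline fit), and all point sets are synthetic.

```python
import numpy as np

# Two centerline estimates (from the fusion feature and the vessel
# feature) as sampled (x, y) points; the point sets here are synthetic.
xs = np.linspace(0, 10, 20)
line1 = np.stack([xs, 2 * xs + 0.1], axis=1)   # first centerline estimate
line2 = np.stack([xs, 2 * xs - 0.1], axis=1)   # second centerline estimate

# Least-squares fit over the pooled points gives the final centerline.
all_pts = np.vstack([line1, line2])
slope, intercept = np.polyfit(all_pts[:, 0], all_pts[:, 1], deg=1)

# Branch points: the optimized position is the mean of adjacent estimates.
branch_estimates = np.array([[5.0, 10.2], [5.1, 9.9], [4.9, 10.0]])
branch_opt = branch_estimates.mean(axis=0)
print(round(slope, 3), branch_opt)
```

Because the two synthetic estimates deviate symmetrically (+0.1 and -0.1), the fit recovers the underlying line y = 2x exactly.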
The classifier of the embodiment of the disclosure can be composed of convolution layers, and can be specifically set according to requirements.
Based on the configuration, the embodiment of the disclosure can realize the identification of various blood vessel morphology information in the medical image, and can also fuse the classification result of the multi-classifier to improve the classification precision.
Next, a training process of the multi-mode vessel segmentation model is described in the embodiments of the present disclosure, where the model may include a feature extraction module, a feature fusion module, and a plurality of classification modules, as shown in fig. 3.
The method for training the multimode vessel segmentation model comprises the following steps: acquiring a training sample of the multimode image; performing feature extraction on the multimode image by utilizing the feature extraction module to obtain a multi-scale image feature; performing feature fusion on the multi-scale image features by using the feature fusion module to obtain fusion features; predicting the blood vessel position of the fusion characteristic by using a first classifier to obtain a first prediction result; predicting blood vessel morphology information except the blood vessel position based on the first prediction result and the fusion characteristic by using at least one second classifier to obtain a second prediction result corresponding to each second classifier; and adjusting the loss weight of the classifier based on the loss of the first prediction result and the second prediction result until the training requirement is met.
In the embodiment of the disclosure, training of the network can be performed with multi-modal images as training samples, so that the vessel segmentation precision of the network model on multi-modal images is satisfied. In addition, when the network model is trained, a classification-loss weight adjustment strategy is introduced to improve classification accuracy. Specifically, the first classifier may predict the vessel positions in the training sample, and a first classification loss of the first classifier is determined using the vessel label positions corresponding to the training sample. The second classifier A may predict the vessel centerline position, and a second classification loss for the centerline is determined based on the centerline label; corresponding second classification losses are obtained for the second classifiers B and C in the same way. Each classification loss may be a mean squared error (MSE) loss. With each classification loss obtained, the loss weight of each classifier is adjusted on the principle that the larger the classification loss, the larger the weight, so as to improve learning on poorly fitted samples. The relationship between classification loss and loss weight may be expressed as W = 1 - 1/L, where W represents the loss weight and L the classification loss. Based on the classification losses, an overall loss of the network may then be determined as the mean of the sum of the products of each loss weight and its corresponding classification loss.
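The loss-weighting rule W = 1 - 1/L and the overall loss can be sketched numerically; the three loss values are illustrative, and note that the formula as given only yields positive weights for losses greater than 1, a detail the text leaves open.

```python
import numpy as np

def loss_weight(L):
    """Loss weight as given in the text: W = 1 - 1/L, so a larger
    classification loss yields a larger weight. Only positive for
    L > 1; any rescaling for smaller losses is left open."""
    return 1 - 1 / L

cls_losses = [2.0, 4.0, 5.0]                    # illustrative MSE losses
weights = [loss_weight(L) for L in cls_losses]  # larger loss, larger weight

# Overall loss: mean of the per-classifier weight-loss products.
overall = np.mean([w * L for w, L in zip(weights, cls_losses)])
print(weights, overall)
```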
The network parameters, such as convolution kernel, network weights, etc., may be adjusted based on the overall loss of the network, which is not specifically limited by the present disclosure. In the event that the resulting overall loss is less than a loss threshold, the training is ended, and the loss threshold may be a value greater than 0.8.
Through the embodiment, training of the multimode image blood vessel segmentation model can be realized, and a model meeting the precision requirement is obtained. And further, the accurate extraction of the vessel morphology information can be performed by using the model.
In the case of obtaining the blood vessel morphology information, the blood vessel morphology information and the blood vessel parameters of each partition may be further determined.
Fig. 4 shows a flowchart of step S30 in an embodiment according to the present disclosure. The determining the vessel morphology information of each partition based on the position information of each partition in the medical image comprises the following steps:
S31: acquiring the position information of the partitions in the medical image;
S32: and determining the blood vessel morphology information of each partition based on the position information of each partition.
In an embodiment of the present disclosure, tissue partitions are determined based on the medical image of the tissue organ. For example, a left kidney partition and a right kidney partition may be determined based on kidney images. A left brain region, right brain region, frontal lobe region, parietal lobe region, temporal lobe region, occipital lobe region, grey matter region, white matter region, brain stem region, cerebellum region, etc., may be determined based on brain images. The foregoing are merely exemplary illustrations, and the present disclosure is not specifically limited thereto.
Wherein determining a partition in the medical image comprises at least one of:
a) Determining a partition of the medical image using a classification network;
the embodiment of the disclosure may realize the division of the partitions through a trained convolutional neural network, which may include a residual network, a U-Net network, a V-Net network, and the like; this is not particularly limited in the disclosure. The position information corresponding to each partition in the medical image can be obtained through the classification network.
Alternatively, the classification module of the deep learning network model shown in fig. 3 may further include a second classifier D to implement division of each tissue and organ partition, and specifically, the fusion feature may be input into the second classifier D to output position information of each partition.
B) Determining a partition of the medical image using a standard partition template;
according to the embodiment of the disclosure, the partition information of the medical image can be obtained from a pre-acquired partition template. The partition template may be obtained first, for example by reading a stored partition template from other devices or locally, such as a kidney partition template or a brain partition template. The partition template may be a standard template in common use, such as the MNI152 template or another atlas template, or may be obtained by performing partition identification on the tissue organs of healthy subjects and fitting the positions of each partition to obtain a mean.
With the partition template obtained, it is registered to the medical image, so that the position information of each partition in the medical image can be obtained. The registration algorithm may include SUSAN, SIFT, Harris, and the like, and is not particularly limited here.
C) Determining position information of each partition based on the selection operation;
in some possible embodiments, a selection operation for the medical image may be acquired, where the selection operation is used to determine at least one closed area in the medical image, and the closed area may be determined as a partition, and a position of the closed area is a position of the corresponding partition. Embodiments of the present disclosure may provide an identification for each partition to distinguish the partitions.
By the method, the position information of each partition in the medical image can be determined, and then the blood vessel morphology information of each partition can be obtained by combining the position information of each partition and the whole blood vessel morphology information of the medical image.
According to the embodiment of the disclosure, the vascular parameters corresponding to all vessel morphology information in the medical image can be obtained; that is, vascular parameters can be derived from the vessel morphology information. The vascular parameters may include vessel length, vessel area, vessel volume, vessel progression (number of branch stages), average vessel caliber (inner diameter, outer diameter, and wall thickness), vessel centerline position, vessel branch point position, arterial length, venous length, and the like. In the embodiment of the disclosure, the vessel length, caliber, volume, centerline position, branch point position, progression, and so on can be conveniently obtained from the vessel positions in the vessel morphology information, and arterial and venous parameters can be obtained separately according to the artery and vein labels in the vessel positions; each parameter can be determined by conventional statistical methods, which are not particularly limited here.
In addition, the embodiment of the disclosure can also determine the blood vessel morphology information in each partition according to the position information and the blood vessel morphology information of each partition. For example, the blood vessel region in each partition can be determined according to the blood vessel position and the partition position, the blood vessel morphology information corresponding to the blood vessel region in each partition can be further obtained based on the determined blood vessel region, and then the blood vessel parameters in each partition can be further determined based on the blood vessel morphology information in each partition, namely the blood vessel parameter information of different partitions can be obtained.
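The per-partition lookup can be sketched with label masks; the 4×4 vessel mask, the partition label map, and the voxel-count "volume" proxy below are all illustrative assumptions.

```python
import numpy as np

vessel = np.array([[1, 1, 0, 0],
                   [0, 1, 1, 0],
                   [0, 0, 1, 1],
                   [0, 0, 0, 1]])       # binary vessel mask
partitions = np.array([[1, 1, 2, 2],
                       [1, 1, 2, 2],
                       [3, 3, 4, 4],
                       [3, 3, 4, 4]])   # partition label map, one id per region

# Vessel region of each partition and a simple parameter (voxel count as
# a volume/area proxy); richer parameters follow the same masking pattern.
per_partition = {int(pid): int((vessel * (partitions == pid)).sum())
                 for pid in np.unique(partitions)}
print(per_partition)
```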
Based on the configuration, the embodiment of the disclosure can obtain the blood vessel morphology information and the blood vessel parameters of each partition, and is convenient for evaluating and clinically diagnosing the blood vessel parameters of each partition.
In the embodiment of the disclosure, when the blood vessel morphology information or the blood vessel parameters of each partition are obtained, the blood vessel state may be evaluated as a whole based on the blood vessel parameters of each partition. That is, the embodiment of the disclosure may further perform a comprehensive state evaluation of the blood vessels according to the obtained vessel information of each partition to obtain an evaluation result. The embodiments of the present disclosure thus also provide a method and apparatus for comprehensively evaluating the vascular state, where the method includes obtaining the vessel information of each partition through the above embodiments and comprehensively evaluating the vascular state using the vessel morphology information in the different tissue partitions.
In some possible embodiments, where vessel morphology information is obtained, vessel morphology information within different tissue regions may be utilized to comprehensively evaluate vessel status. The method can comprise symmetry evaluation, comparison evaluation with standard parameters and difference influence evaluation between different partitions, and a high-precision blood vessel state evaluation result is obtained through evaluation in various modes. In one example, the symmetry-assessment results may be derived based on differences between the vessel parameters of the symmetric partitions. Specifically, the left and right kidneys or the left and right brains are symmetrical, the distribution and morphology of the blood vessels of the symmetrical partition of the normal healthy organ should be symmetrical, and the embodiments of the present disclosure can perform symmetry assessment according to the symmetry between the morphology of the blood vessels of the symmetrical partition.
Differences between the blood vessel parameters of the symmetric partitions can be obtained respectively, such as the difference between corresponding vessel lengths, the difference between vessel areas at the same feature layers, the difference between vessel volumes, the difference between vessel progressions, the difference between average vessel calibers, the mean distance between vessel centerlines, the mean distance between branch point positions, and the like, and the difference rates are then computed from these differences. The difference rate of the vessel length equals the ratio of the vessel-length difference to the sum of the vessel lengths; the difference rate of the vessel areas is the ratio of the sum of the per-layer area differences to the sum of the per-layer areas; the difference rate of the vessel volumes is the ratio of the volume difference to the sum of the volumes; the difference rate of the vessel progression equals the ratio of the progression difference to the sum of the progressions; the difference rate of the average vessel caliber equals the ratio of the caliber difference to the sum of the calibers; the difference rate of the vessel centerlines equals the ratio of the mean of the distance differences between corresponding centerline points to their sum; and the difference rate of the branch points is likewise derived from the mean of the distance differences between corresponding branch points.
Based on this, the difference rates of the arterial length, the venous length, and the arterial and venous average calibers, areas, and volumes, and the like, can further be obtained, which is not particularly limited in the present disclosure.
With the difference rates between the blood vessel parameters of each pair of symmetric partitions obtained, an average difference rate can be computed, i.e. the sum of the difference rates divided by the number of difference-rate types; this average difference rate is determined as the symmetry evaluation result. Alternatively, a specific weight may be set for each difference rate, each difference rate updated as the product of its weight and its value, and the updated difference rates summed and divided by the number of parameter types to obtain the symmetry evaluation result.
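The symmetry evaluation can be sketched in plain Python; the left/right parameter triples and the per-rate weights below are illustrative assumptions.

```python
def diff_rate(a, b):
    """Difference rate of one parameter: |difference| / sum."""
    return abs(a - b) / (a + b)

# Illustrative left/right-kidney parameters: (length, volume, progression)
left = (120.0, 55.0, 8)
right = (100.0, 45.0, 8)

rates = [diff_rate(l, r) for l, r in zip(left, right)]
symmetry = sum(rates) / len(rates)          # average difference rate

# Weighted variant: each rate is scaled by its own weight first.
weights = [0.5, 0.3, 0.2]                   # illustrative weights in [0, 1]
weighted = sum(w * r for w, r in zip(weights, rates)) / len(rates)
print(round(symmetry, 4), round(weighted, 4))
```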
In addition, embodiments of the present disclosure may also determine a comparison evaluation result based on the difference between the vascular parameters and standard parameters. The standard parameters represent the standard distribution of blood vessels in tissue organs, and may be values obtained by statistics over the vessel distribution in the tissue organs of a large number of healthy people; for example, standard vessel length, standard vessel volume, standard progression, standard vessel position, standard centerline position, etc., may be determined from the vessel distribution in the tissue organs and partitions of thousands of healthy volunteers. The vascular parameters of each partition likewise have corresponding standard values, which are not enumerated one by one.
In the case of obtaining the determined blood vessel parameter and the standard parameter, the average value of the difference rate between the corresponding parameters may be determined as the comparison evaluation result. Specifically, the difference between the vascular parameter and the standard parameter of each partition may be obtained respectively, and then the difference rate may be obtained (the manner of determining the difference rate is the same as the above-mentioned symmetry evaluation process, and a repeated description is not given here). The average difference rate of each partition relative to the standard parameter can be obtained through the method. According to the embodiment of the disclosure, different weights can be set for each partition, products are obtained between the corresponding weights and the average difference rate, then the products are added to obtain an addition value, and the ratio of the addition value to the number of the partitions is determined to be a comparison evaluation result.
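A minimal sketch of the comparison evaluation, assuming hypothetical per-partition average difference rates (each already computed against the standard parameters) and hypothetical partition weights:

```python
def comparison_result(avg_rates, weights):
    """Comparison evaluation: weighted per-partition average difference
    rates against the standard parameters, summed and divided by the
    number of partitions. The weights are illustrative."""
    return sum(w * r for w, r in zip(weights, avg_rates)) / len(avg_rates)

avg_rates = [0.10, 0.05, 0.20, 0.08]   # one average difference rate per partition
weights = [1.0, 1.0, 1.5, 1.0]         # e.g. emphasise a clinically key partition
print(round(comparison_result(avg_rates, weights), 4))
```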
In addition, the embodiment of the disclosure can also determine a relative evaluation result based on the difference of a partition's vascular parameters relative to those of the other partitions. Specifically, for normal cerebral blood vessels, the difference between each partition and the standard parameters is small; but if the vessel distribution of a certain partition is abnormal, that partition's difference relative to the standard parameters is higher than that of the other partitions. Based on this phenomenon, embodiments of the present disclosure determine the influence of each partition on the overall vessel layout to obtain the relative evaluation result.
Specifically, embodiments of the present disclosure determine the relative evaluation result for each partition with an elimination method. Assuming the tissue organ includes M partitions, one partition is selected from the M partitions, the average difference rate between that partition and its corresponding standard parameters is determined, and the average difference rate between the remaining M-1 partitions and their standard parameters is determined. The ratio of the difference between these two average difference rates to their sum is determined as the relative difference rate of the selected partition.
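The elimination (leave-one-out) step can be sketched as follows, with hypothetical average difference rates in which one partition is deliberately abnormal:

```python
def relative_diff_rate(avg_rates, k):
    """Leave-one-out relative difference rate of partition k: compare its
    average difference rate (vs. its standard parameters) with the mean
    over the remaining M-1 partitions; ratio of difference to sum."""
    d_k = avg_rates[k]
    rest = (sum(avg_rates) - d_k) / (len(avg_rates) - 1)
    return abs(d_k - rest) / (d_k + rest)

avg_rates = [0.05, 0.06, 0.30, 0.04]   # partition index 2 is clearly abnormal
rel = [relative_diff_rate(avg_rates, k) for k in range(len(avg_rates))]
print([round(r, 3) for r in rel])      # the abnormal partition scores highest
```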
When the relative difference rate of each partition is obtained, the product of each partition's relative difference rate and its weight is computed, and the ratio of the sum of these products to the number of partitions is determined as the relative evaluation result.
In the case of obtaining the symmetrical evaluation result, the comparison evaluation result, and the relative evaluation result, a weighted sum of at least two evaluation results among the respective evaluation results may be utilized as the evaluation result of the vascular state.
It should be noted that each weight value set in the embodiments of the present disclosure may be set as required and is not specifically limited here; it may be any value in [0, 1].
In the embodiment of the disclosure, the larger the value of the obtained evaluation result, the higher the probability that a vessel abnormality exists, and the smaller the value, the lower that probability. The embodiment of the disclosure can be configured with a correspondence between evaluation results and risk levels, through which the risk level corresponding to an evaluation result can be conveniently looked up. The risk levels may include three levels, high, medium and low, with the low risk level corresponding to an evaluation result of 0-35%, the medium risk level to 36-65%, and the high risk level to 66-100%. The foregoing is illustrative and is not to be construed as limiting the present disclosure. Through the evaluation result, whether the morphology of the blood vessels in the tissue organ is abnormal can be indicated, thereby indicating the degree of risk of vascular disease and providing assistance for clinical work.
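The final weighting and risk-band lookup might be sketched as follows; the three sub-results, their weights, and the band thresholds follow the illustrative values in the text.

```python
def overall_evaluation(results, weights):
    """Weighted sum of the symmetry, comparison and relative evaluation
    results; all weights are illustrative values in [0, 1]."""
    return sum(w * r for w, r in zip(weights, results))

def risk_level(score):
    """Map an evaluation score in [0, 1] to the example risk bands of the
    text: 0-35% low, 36-65% medium, 66-100% high."""
    if score <= 0.35:
        return "low"
    if score <= 0.65:
        return "medium"
    return "high"

# symmetry, comparison and relative results with assumed weights
score = overall_evaluation([0.30, 0.50, 0.40], [0.4, 0.3, 0.3])
print(round(score, 2), risk_level(score))
```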
In addition, in the embodiments of the present disclosure, considering that the vascular states of subjects in different age groups differ, the accuracy of the evaluation result can be improved by distinguishing the standard parameters according to age intervals. That is, the embodiments of the present disclosure may set different standard parameters for subjects in different age intervals; the determination of these standard parameters is similar to the above, except that the subjects from whom the corresponding tissue organs are obtained fall in different age intervals. When the vascular state is evaluated, the standard parameters of the corresponding age interval can be selected for evaluation, improving evaluation accuracy.
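Selecting standard parameters by age interval can be sketched as a table lookup (the interval boundaries and parameter values below are hypothetical placeholders, not values from the disclosure; the actual standard parameters would be determined as described above):

```python
import bisect

# Hypothetical age-interval boundaries and per-interval standard parameters.
AGE_BOUNDS = [18, 40, 60]  # intervals: <18, 18-39, 40-59, >=60
STANDARD_PARAMS = [
    {"diameter_mm": 2.6},
    {"diameter_mm": 2.8},
    {"diameter_mm": 3.0},
    {"diameter_mm": 3.2},
]

def standard_params_for_age(age):
    """Select the standard parameters matching the subject's age interval."""
    return STANDARD_PARAMS[bisect.bisect_right(AGE_BOUNDS, age)]
```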
In summary, in the embodiments of the present disclosure, blood vessel extraction processing may be performed on a medical image of a tissue organ (such as the brain or kidneys) to obtain blood vessel morphology information, where the blood vessel morphology information may be any information related to blood vessel morphology. The blood vessel morphology information within each partition can further be obtained by using the positions of the plurality of partitions of the tissue organ, so that the blood vessel state of each partition can be conveniently determined. In addition, the embodiments of the present disclosure can also evaluate the comprehensive state of the blood vessels by combining the blood vessel morphology information of the partitions to obtain an evaluation result. Because the blood vessel state is comprehensively evaluated in a plurality of ways, the obtained evaluation result can reflect the blood vessel state more comprehensively, and the evaluation precision is improved.
It will be appreciated by those skilled in the art that, in the above methods of the specific embodiments, the written order of the steps does not imply a strict order of execution; the specific execution order of each step should be determined by its function and possible inherent logic.
It will be appreciated that the above-mentioned method embodiments of the present disclosure may be combined with each other to form combined embodiments without departing from the principle logic; for the sake of brevity, such combinations are not described in detail in the present disclosure.
In addition, the present disclosure further provides a blood vessel state evaluation device, an electronic device, a computer readable storage medium, and a program, each of which may be used to implement any one of the blood vessel information acquisition methods provided by the present disclosure. For the corresponding technical solutions and descriptions, refer to the corresponding descriptions in the method sections, which are not repeated here.
Fig. 5 shows a block diagram of a blood vessel information acquisition device according to an embodiment of the present disclosure. As shown in Fig. 5, the device includes:
an acquisition module 100, configured to acquire a medical image of a tissue organ, where the medical image is a single-mode image or a multi-mode image;
a segmentation module 200, configured to perform a blood vessel extraction process on the medical image by using a multimode blood vessel segmentation model, so as to obtain blood vessel morphology information, where the blood vessel morphology information includes at least two kinds of information related to blood vessel morphology;
the determining module 300 is configured to determine vessel morphology information of each partition based on the location information of each partition in the medical image.
In some possible embodiments, the segmentation module includes a multi-mode vessel segmentation model, and extracts multi-scale image features of the medical image using the multi-mode vessel segmentation model; performing feature fusion processing on the multi-scale image features to obtain fusion features; and performing blood vessel extraction based on the fusion characteristics to obtain the blood vessel morphology information.
In some possible embodiments, the segmentation module is configured to perform feature fusion processing on the multi-scale image feature to obtain a fused feature when the medical image is a single-mode image, including:
performing convolution processing on the ith image feature and performing feature updating processing on the (i+1) th image feature according to the order of the scales of the multi-scale image features of the single-mode image from small to large, wherein i is an integer greater than zero and less than or equal to N, and N represents the number of the multi-scale image features;
based on the feature update processing of the last image feature, obtaining the fusion feature;
and/or
Under the condition that the medical image is a multi-mode image, performing feature fusion processing on the multi-scale image features to obtain fusion features, wherein the feature fusion processing comprises the following steps:
for the medical image of any mode in the multi-mode image, performing convolution processing on the ith image feature and performing feature updating processing on the (i+1) th image feature according to the order from small scale to large scale of the multi-scale image feature of the image of any mode, wherein i is an integer greater than zero and less than or equal to N, and N represents the number of the multi-scale image features of the medical image of any mode;
Based on the feature update processing of the last image feature of any mode image, obtaining the fusion feature of any mode image;
performing connection processing on the fusion characteristics of at least two modal images in the multi-modal images, and performing convolution processing on the connected characteristics to obtain the fusion characteristics of the multi-modal images;
wherein the performing convolution processing on the ith image feature and performing feature update processing on the (i+1) th image feature includes:
performing convolution processing on the ith image feature to obtain a first convolution feature with the same scale as the (i+1) th image feature;
and obtaining the updated (i+1) th image feature by using the weighted sum of the first convolution feature and the (i+1) th image feature.
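The small-to-large feature fusion described above can be sketched as follows (an illustrative Python sketch, not the disclosed implementation: nearest-neighbor upsampling stands in for the learned convolution that rescales the i-th feature, averaging stands in for the fusing convolution of the multi-modal case, and `alpha` is an assumed fusion weight):

```python
import numpy as np

def upsample(feat, target_shape):
    """Stand-in for the learned convolution mapping the i-th image feature
    to the scale of the (i+1)-th feature (nearest-neighbor for illustration)."""
    reps = (target_shape[0] // feat.shape[0], target_shape[1] // feat.shape[1])
    return np.kron(feat, np.ones(reps))

def fuse_multiscale(features, alpha=0.5):
    """Fuse multi-scale features in order of increasing scale: each update is
    a weighted sum of the rescaled previous feature (the "first convolution
    feature") and the current feature; the last update is the fusion feature."""
    fused = features[0]
    for nxt in features[1:]:
        conv = upsample(fused, nxt.shape)
        fused = alpha * conv + (1.0 - alpha) * nxt  # weighted-sum update
    return fused

def fuse_multimodal(per_modal_features, alpha=0.5):
    """Multi-modal case: fuse each modality separately, connect (concatenate)
    the per-modality fusion features, then reduce them; averaging stands in
    for the convolution applied to the connected features."""
    fused = [fuse_multiscale(f, alpha) for f in per_modal_features]
    stacked = np.stack(fused, axis=0)  # connection processing
    return stacked.mean(axis=0)
```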
In some embodiments, functions or modules included in an apparatus provided by the embodiments of the present disclosure may be used to perform a method described in the foregoing method embodiments, and specific implementations thereof may refer to descriptions of the foregoing method embodiments, which are not repeated herein for brevity.
The disclosed embodiments also provide a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method. The computer readable storage medium may be a non-volatile computer readable storage medium.
The embodiment of the disclosure also provides an electronic device, including: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to perform the method described above.
The electronic device may be provided as a terminal, server or other form of device.
Fig. 6 shows a block diagram of an electronic device 800 according to an embodiment of the disclosure. For example, electronic device 800 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, exercise device, personal digital assistant, or the like.
Referring to fig. 6, an electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interactions between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen providing an output interface between the electronic device 800 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may sense not only the boundary of a touch or swipe action, but also the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operational mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 further includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, click wheel, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 814 includes one or more sensors for providing status assessments of various aspects of the electronic device 800. For example, the sensor assembly 814 may detect an on/off state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800. The sensor assembly 814 may also detect a change in position of the electronic device 800 or a component thereof, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in temperature of the electronic device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 816 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as memory 804 including computer program instructions executable by processor 820 of electronic device 800 to perform the above-described methods.
Fig. 7 illustrates a block diagram of another electronic device 1900 in accordance with an embodiment of the disclosure. For example, electronic device 1900 may be provided as a server. Referring to FIG. 7, electronic device 1900 includes a processing component 1922 that further includes one or more processors and memory resources represented by memory 1932 for storing instructions, such as application programs, that can be executed by processing component 1922. The application programs stored in memory 1932 may include one or more modules each corresponding to a set of instructions. Further, processing component 1922 is configured to execute instructions to perform the methods described above.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as memory 1932, including computer program instructions executable by processing component 1922 of electronic device 1900 to perform the methods described above.
The present disclosure may be a system, method, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for causing a processor to implement aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include the following: a portable computer disk, a hard disk, Random Access Memory (RAM), Read-Only Memory (ROM), Erasable Programmable Read-Only Memory (EPROM or flash memory), Static Random Access Memory (SRAM), portable Compact Disk Read-Only Memory (CD-ROM), Digital Versatile Disks (DVD), memory sticks, floppy disks, mechanical encoding devices such as punch cards or raised structures in grooves having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media, as used herein, are not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., light pulses through fiber optic cables), or electrical signals transmitted through wires.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a respective computing/processing device or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. The network interface card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in the respective computing/processing device.
Computer program instructions for performing the operations of the present disclosure may be assembly instructions, Instruction Set Architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including object oriented programming languages such as Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer readable program instructions may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present disclosure are implemented by personalizing electronic circuitry, such as programmable logic circuitry, Field Programmable Gate Arrays (FPGAs), or Programmable Logic Arrays (PLAs), with state information of the computer readable program instructions, such that the electronic circuitry can execute the computer readable program instructions.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing description of the embodiments of the present disclosure has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the technical improvement of the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (12)

1. A blood vessel information acquisition method, characterized by comprising:
acquiring a medical image of a tissue organ, the medical image configured as a single-mode image;
performing blood vessel extraction processing on the medical image to obtain blood vessel morphology information, wherein the blood vessel morphology information comprises at least two kinds of information related to blood vessel morphology;
determining blood vessel morphology information of each partition based on the position information of each partition in the medical image;
the performing the blood vessel extraction processing on the medical image to obtain blood vessel morphology information includes:
extracting multi-scale image features of the medical image by using a multi-mode blood vessel segmentation model;
Performing feature fusion processing on the multi-scale image features to obtain fusion features;
performing blood vessel extraction based on the fusion characteristics to obtain the blood vessel morphology information;
under the condition that the medical image is a single-mode image, performing feature fusion processing on the multi-scale image features to obtain fusion features, wherein the feature fusion processing comprises the following steps:
performing convolution processing on the ith image feature and performing feature updating processing on the (i+1) th image feature according to the order of the scales of the multi-scale image features of the single-mode image from small to large; wherein i is an integer greater than zero and less than or equal to N, N representing the number of multi-scale image features;
based on the feature update processing of the last image feature, obtaining the fusion feature; wherein the performing convolution processing on the ith image feature and performing feature update processing on the (i+1) th image feature includes:
performing convolution processing on the ith image feature to obtain a first convolution feature with the same scale as the (i+1) th image feature;
and obtaining the updated (i+1) th image feature by using the weighted sum of the first convolution feature and the (i+1) th image feature.
2. The method of claim 1, wherein the performing blood vessel extraction based on the fusion feature to obtain the blood vessel morphology information comprises:
inputting the fusion characteristics into a first classifier to obtain a blood vessel position;
obtaining a vascular feature based on the fusion feature and the vascular location;
and inputting the blood vessel characteristics into at least one second classifier to obtain blood vessel morphology information except for the blood vessel positions corresponding to the blood vessel characteristics, wherein each second classifier is used for obtaining different blood vessel morphology information.
3. The method according to any one of claims 1-2, wherein determining vessel morphology information for each partition based on the location information for each partition within the medical image comprises:
acquiring the position information of each partition in the medical image;
determining blood vessel morphology information of each partition based on the position information of each partition;
the obtaining the position information of each partition in the medical image includes at least one of the following modes:
determining the position information of each partition in the medical image by using a classification network;
and determining the position information of each partition in the medical image by using a standard partition template.
4. A blood vessel information acquisition method, characterized by comprising:
acquiring a medical image of a tissue organ, the medical image configured as a multi-modal image;
performing blood vessel extraction processing on the medical image to obtain blood vessel morphology information, wherein the blood vessel morphology information comprises at least two kinds of information related to blood vessel morphology;
determining blood vessel morphology information of each partition based on the position information of each partition in the medical image;
the performing the blood vessel extraction processing on the medical image to obtain blood vessel morphology information includes:
extracting multi-scale image features of the medical image by using a multi-mode blood vessel segmentation model;
performing feature fusion processing on the multi-scale image features to obtain fusion features;
performing blood vessel extraction based on the fusion characteristics to obtain the blood vessel morphology information;
under the condition that the medical image is a multi-mode image, performing feature fusion processing on the multi-scale image features to obtain fusion features, wherein the feature fusion processing comprises the following steps:
aiming at the medical image of any mode in the multi-mode image, performing convolution processing on the ith image feature and performing feature updating processing on the (i+1) th image feature according to the order of the scale of the multi-scale image feature of any mode image from small to large; wherein i is an integer greater than zero and less than or equal to N, N representing the number of multi-scale image features of the medical image of any modality;
Based on the feature update processing of the last image feature of any mode image, obtaining the fusion feature of any mode image;
performing connection processing on the fusion characteristics of at least two modal images in the multi-modal images, and performing convolution processing on the connected characteristics to obtain the fusion characteristics of the multi-modal images;
wherein the performing convolution processing on the ith image feature and performing feature update processing on the (i+1) th image feature includes:
performing convolution processing on the ith image feature to obtain a first convolution feature with the same scale as the (i+1) th image feature;
and obtaining the updated (i+1) th image feature by using the weighted sum of the first convolution feature and the (i+1) th image feature.
5. The method of claim 4, wherein the performing blood vessel extraction based on the fusion features to obtain the blood vessel morphology information comprises:
inputting the fusion characteristics into a first classifier to obtain a blood vessel position;
obtaining a vascular feature based on the fusion feature and the vascular location;
and inputting the blood vessel characteristics into at least one second classifier to obtain blood vessel morphology information except for the blood vessel positions corresponding to the blood vessel characteristics, wherein each second classifier is used for obtaining different blood vessel morphology information.
6. The method of any of claims 4-5, wherein determining vessel morphology information for each partition based on location information for each partition within the medical image comprises:
acquiring the position information of each partition in the medical image;
determining blood vessel morphology information of each partition based on the position information of each partition;
the obtaining the position information of each partition in the medical image includes at least one of the following modes:
determining the position information of each partition in the medical image by using a classification network;
and determining the position information of each partition in the medical image by using a standard partition template.
7. A blood vessel information acquisition apparatus, comprising:
the acquisition module is used for acquiring medical images of tissue and organs, and the medical images are configured into single-mode images;
the segmentation module is used for executing blood vessel extraction processing on the medical image by utilizing a multimode blood vessel segmentation model to obtain blood vessel morphology information, wherein the blood vessel morphology information comprises at least two kinds of information related to blood vessel morphology; the segmentation module comprises a multimode vascular segmentation model, and extracts multiscale image characteristics of the medical image by utilizing the multimode vascular segmentation model; performing feature fusion processing on the multi-scale image features to obtain fusion features; performing blood vessel extraction based on the fusion characteristics to obtain the blood vessel morphology information; the segmentation module is configured to perform feature fusion processing on the multi-scale image features to obtain fusion features under the condition that the medical image is a single-mode image, and the feature fusion processing comprises: performing convolution processing on the ith image feature and performing feature updating processing on the (i+1) th image feature according to the order of the scales of the multi-scale image features of the single-mode image from small to large; wherein i is an integer greater than zero and less than or equal to N, N representing the number of multi-scale image features; based on the feature update processing of the last image feature, obtaining the fusion feature; wherein the performing convolution processing on the ith image feature and performing feature update processing on the (i+1) th image feature includes: performing convolution processing on the ith image feature to obtain a first convolution feature with the same scale as the (i+1) th image feature; obtaining the (i+1) th image feature after updating by using the weighted sum of the first convolution feature and the (i+1) th image feature;
and the determining module is used for determining the blood vessel morphology information of each partition based on the position information of each partition in the medical image.
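The coarse-to-fine fusion in the claim above can be sketched minimally as follows. This is an illustration only, not the patented implementation: 2-D feature maps whose side lengths double between scales are assumed, nearest-neighbour upsampling stands in for the claimed convolution to the (i+1)-th scale, and the fixed scalar `alpha` is a hypothetical stand-in for learned fusion weights.

```python
import numpy as np

def upsample2x(feat):
    # Nearest-neighbour upsampling; stands in for the claimed
    # convolution that maps the i-th feature to the (i+1)-th scale.
    return feat.repeat(2, axis=0).repeat(2, axis=1)

def fuse_single_modality(features, alpha=0.5):
    # features: 2-D arrays ordered from smallest to largest scale,
    # each side length twice the previous (an assumption of this sketch).
    fused = features[0]
    for nxt in features[1:]:
        projected = upsample2x(fused)                  # "first convolution feature"
        fused = alpha * projected + (1 - alpha) * nxt  # weighted-sum update
    return fused  # the update of the last image feature is the fusion feature
```

The loop mirrors the claim: each smaller-scale feature is projected to the next scale and merged into the next feature by a weighted sum, and the final updated feature serves as the fusion feature.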
8. A blood vessel information acquisition apparatus, comprising:
the acquisition module is used for acquiring a medical image of a tissue or organ, wherein the medical image is configured as a multimodal image;
the segmentation module is used for performing blood vessel extraction processing on the medical image by using a multimodal blood vessel segmentation model to obtain blood vessel morphology information, wherein the blood vessel morphology information comprises at least two kinds of information related to blood vessel morphology; the segmentation module comprises the multimodal blood vessel segmentation model, and uses the multimodal blood vessel segmentation model to extract multi-scale image features of the medical image, perform feature fusion processing on the multi-scale image features to obtain a fusion feature, and perform blood vessel extraction based on the fusion feature to obtain the blood vessel morphology information; in the case that the medical image is a multimodal image, the segmentation module is configured to perform the feature fusion processing on the multi-scale image features to obtain the fusion feature by: for the image of any modality in the multimodal image, performing convolution processing on the i-th image feature and feature update processing on the (i+1)-th image feature in ascending order of scale of the multi-scale image features of that modality image, wherein i is an integer greater than zero and less than or equal to N, and N represents the number of multi-scale image features of the medical image of that modality; obtaining the fusion feature of that modality image based on the feature update processing of the last image feature of that modality image; and performing connection processing on the fusion features of at least two modality images in the multimodal image and performing convolution processing on the connected features to obtain the fusion feature of the multimodal image; wherein performing the convolution processing on the i-th image feature and the feature update processing on the (i+1)-th image feature comprises: performing convolution processing on the i-th image feature to obtain a first convolution feature having the same scale as the (i+1)-th image feature; and obtaining the updated (i+1)-th image feature as a weighted sum of the first convolution feature and the (i+1)-th image feature;
and the determining module is used for determining the blood vessel morphology information of each partition based on the position information of each partition in the medical image.
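The multimodal branch of the claim above (per-modality fusion, then connection processing and convolution across modalities) can be sketched similarly. This is an illustration only: stacking plus a channel-wise weighted sum stands in for the claimed connection processing and subsequent convolution, and `weights` is a hypothetical stand-in for learned kernel values.

```python
import numpy as np

def fuse_multimodal(per_modality_fused, weights=None):
    # per_modality_fused: one already-fused 2-D feature map per modality,
    # all at the same scale (an assumption of this sketch).
    stacked = np.stack(per_modality_fused, axis=0)  # "connection processing": (M, H, W)
    m = stacked.shape[0]
    if weights is None:
        weights = np.full(m, 1.0 / m)  # hypothetical stand-in for a learned 1x1 kernel
    # A 1x1 convolution across the modality channel reduces to a
    # weighted sum over modalities at every spatial position.
    return np.tensordot(weights, stacked, axes=1)   # (H, W)
```

Each modality is first fused on its own (as in the single-modality case), and only the resulting per-modality fusion features are connected and mixed into the multimodal fusion feature.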
9. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the instructions stored in the memory to perform the blood vessel information acquisition method of any one of claims 1 to 3.
10. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the instructions stored in the memory to perform the blood vessel information acquisition method of any one of claims 4 to 6.
11. A computer-readable storage medium, on which computer program instructions are stored, characterized in that the computer program instructions, when executed by a processor, implement the blood vessel information acquisition method of any one of claims 1 to 3.
12. A computer-readable storage medium, on which computer program instructions are stored, characterized in that the computer program instructions, when executed by a processor, implement the blood vessel information acquisition method according to any one of claims 4 to 6.
CN202110388354.8A 2021-04-12 2021-04-12 Vascular information acquisition method and device, electronic equipment and storage medium Active CN113012146B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110388354.8A CN113012146B (en) 2021-04-12 2021-04-12 Vascular information acquisition method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110388354.8A CN113012146B (en) 2021-04-12 2021-04-12 Vascular information acquisition method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113012146A CN113012146A (en) 2021-06-22
CN113012146B true CN113012146B (en) 2023-10-24

Family

ID=76388309

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110388354.8A Active CN113012146B (en) 2021-04-12 2021-04-12 Vascular information acquisition method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113012146B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113744223A * 2021-08-26 2021-12-03 United Imaging Intelligence (Beijing) Co., Ltd. Blood vessel risk assessment method, computer device, and storage medium
CN114359284B * 2022-03-18 2022-06-21 Beijing Airdoc Technology Co., Ltd. Method for analyzing retinal fundus images and related products
CN116863146B * 2023-06-09 2024-03-08 UnionStrong (Beijing) Technology Co., Ltd. Method, apparatus and storage medium for extracting blood vessel features

Citations (13)

Publication number Priority date Publication date Assignee Title
US6842638B1 (en) * 2001-11-13 2005-01-11 Koninklijke Philips Electronics N.V. Angiography method and apparatus
CN110321946A * 2019-06-27 2019-10-11 First Affiliated Hospital of Zhengzhou University Multimodal medical image recognition method and device based on deep learning
CN110647889A * 2019-08-26 2020-01-03 Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences Medical image recognition method, medical image recognition apparatus, terminal device, and medium
CN111161270A * 2019-12-24 2020-05-15 Shanghai United Imaging Intelligence Co., Ltd. Blood vessel segmentation method for medical image, computer device and readable storage medium
CN111325729A * 2020-02-19 2020-06-23 Qingdao Hisense Medical Equipment Co., Ltd. Biological tissue segmentation method based on biomedical images and communication terminal
CN111369542A * 2020-03-06 2020-07-03 Shanghai United Imaging Intelligence Co., Ltd. Blood vessel marking method, image processing system and storage medium
CN111462116A * 2020-05-13 2020-07-28 First Hospital of Jilin University Multimodal parameter model optimization fusion method based on radiomics features
CN111680447A * 2020-04-21 2020-09-18 Shenzhen Raysight Intelligent Medical Technology Co., Ltd. Blood flow characteristic prediction method, blood flow characteristic prediction device, computer equipment and storage medium
CN111754511A * 2020-07-06 2020-10-09 Suzhou Liulian Technology Co., Ltd. Liver blood vessel segmentation method and device based on deep learning and storage medium
CN111754510A * 2020-07-06 2020-10-09 Suzhou Liulian Technology Co., Ltd. Blood supply analysis method, device and storage medium
CN111784762A * 2020-06-01 2020-10-16 Beijing Institute of Technology Method and device for extracting blood vessel center line of X-ray contrast image
CN111832644A * 2020-07-08 2020-10-27 Beijing University of Technology Brain medical image report generation method and system based on sequence level
CN112001925A * 2020-06-24 2020-11-27 Shanghai United Imaging Healthcare Co., Ltd. Image segmentation method, radiation therapy system, computer device and storage medium

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
WO2018001099A1 * 2016-06-30 2018-01-04 Shanghai United Imaging Healthcare Co., Ltd. Method and system for extracting blood vessel


Non-Patent Citations (5)

Title
Chao-Chuan Chang; Pei-Yu Chen; Chih-Chung Huang. "3D blood vessel mapping of adult zebrafish using high frequency ultrasound ultrafast doppler imaging". 2017 IEEE International Ultrasonics Symposium (IUS), 2017, full text. *
Automatic retinal blood vessel segmentation based on multi-model fusion and iterative region growing; Lai Xiaobo; Xu Maosheng; Xu Xiaomei; Acta Electronica Sinica (Issue 12); full text *
Research on a cerebrovascular extraction method based on multimodal convolutional neural networks; Qin Zhiguang; Chen Hao; Ding Yi; Lan Tian; Chen Yuan; Shen Guangyu; Journal of University of Electronic Science and Technology of China (Issue 04); full text *
Research on key techniques of pulmonary vascular tree segmentation based on chest CT images; Yang Zhiyong; Xiao Hongxu; Li Yuze; Jiang Haisong; Jiang Shan; Journal of Tianjin University (Science and Technology) (Issue 02); full text *
Research progress in multimodal medical image fusion and recognition technology; Zhou Tao; Lu Huiling; Chen Zhiqiang; Ma Jingxian; Journal of Biomedical Engineering (Issue 05); full text *

Also Published As

Publication number Publication date
CN113012146A (en) 2021-06-22

Similar Documents

Publication Publication Date Title
CN113012146B (en) Vascular information acquisition method and device, electronic equipment and storage medium
CN111310764B (en) Network training method, image processing device, electronic equipment and storage medium
CN113012816B (en) Brain partition risk prediction method and device, electronic equipment and storage medium
CN111310616B (en) Image processing method and device, electronic equipment and storage medium
CN108256555B (en) Image content identification method and device and terminal
EP2977956B1 (en) Method, apparatus and device for segmenting an image
CN112767329B (en) Image processing method and device and electronic equipment
EP3133532A1 (en) Method and device for training classifier and recognizing a type of information
US11455491B2 (en) Method and device for training image recognition model, and storage medium
TW202022561A (en) Method, device and electronic equipment for image description statement positioning and storage medium thereof
CN112541928A (en) Network training method and device, image segmentation method and device and electronic equipment
CN112075927B (en) Etiology classification method and device for cerebral apoplexy
CN109360197B (en) Image processing method and device, electronic equipment and storage medium
CN106557759B (en) Signpost information acquisition method and device
JP2022522551A (en) Image processing methods and devices, electronic devices and storage media
WO2022156235A1 (en) Neural network training method and apparatus, image processing method and apparatus, and electronic device and storage medium
CN113128520B (en) Image feature extraction method, target re-identification method, device and storage medium
EP3767488A1 (en) Method and device for processing untagged data, and storage medium
CN111582383B (en) Attribute identification method and device, electronic equipment and storage medium
CN113034491B (en) Coronary calcified plaque detection method and device
CN114820584A Lung lesion positioning device
KR20220034844A (en) Image processing method and apparatus, electronic device, storage medium and program product
CN112884040B (en) Training sample data optimization method, system, storage medium and electronic equipment
CN114418931A (en) Method and device for extracting residual lung lobes after operation, electronic equipment and storage medium
CN115565666A (en) Cerebral infarction assessment method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant