CN116485820A - Method and device for extracting artery and vein image and nonvolatile storage medium - Google Patents

Method and device for extracting artery and vein image and nonvolatile storage medium

Info

Publication number
CN116485820A
CN116485820A (application No. CN202310742187.1A)
Authority
CN
China
Prior art keywords
image
blood vessel
target
feature
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310742187.1A
Other languages
Chinese (zh)
Other versions
CN116485820B (en)
Inventor
李延祥
李楠宇
陈日清
苏晨晖
余坤璋
徐宏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Kunbo Biotechnology Co Ltd
Original Assignee
Hangzhou Kunbo Biotechnology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Kunbo Biotechnology Co Ltd filed Critical Hangzhou Kunbo Biotechnology Co Ltd
Priority to CN202310742187.1A priority Critical patent/CN116485820B/en
Publication of CN116485820A publication Critical patent/CN116485820A/en
Application granted granted Critical
Publication of CN116485820B publication Critical patent/CN116485820B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T7/11 — Image analysis; Segmentation; Edge detection; Region-based segmentation
    • G06N3/0464 — Computing arrangements based on biological models; Neural networks; Convolutional networks [CNN, ConvNet]
    • G06N3/082 — Neural networks; Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • G06T7/0012 — Image analysis; Inspection of images; Biomedical image inspection
    • G06V10/764 — Image or video recognition or understanding using pattern recognition or machine learning; classification, e.g. of video objects
    • G06V10/7715 — Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; mappings, e.g. subspace methods
    • G06V10/774 — Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V10/806 — Fusion, i.e. combining data from various sources, of extracted features
    • G06V10/82 — Image or video recognition or understanding using neural networks
    • G06T2207/20021 — Special algorithmic details; dividing image into blocks, subimages or windows
    • G06T2207/30101 — Subject of image; biomedical image processing; blood vessel; artery; vein; vascular
    • Y02A90/10 — Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation


Abstract

The application discloses an arteriovenous image extraction method, an arteriovenous image extraction device and a nonvolatile storage medium. The method comprises the following steps: extracting and fusing image features of different feature sizes from a scanned image by means of a target neural network to obtain a first segmentation result, wherein the first segmentation result identifies the voxels in the scanned image that belong to different blood vessel types; determining the HU value ranges corresponding to the blood vessel image regions of the different blood vessel types contained in the scanned image, and determining the voxel set corresponding to those regions according to the HU value ranges; classifying the voxels in the voxel set by blood vessel type according to the first segmentation result; and extracting an arteriovenous image from the scanned image according to the blood vessel types of the voxels in the voxel set and the first segmentation result. The application thereby addresses the low extraction efficiency of the related-art approach, in which arteriovenous images are extracted from medical scan images by threshold segmentation combined with a region-growing algorithm.

Description

Method and device for extracting artery and vein image and nonvolatile storage medium
Technical Field
The present application relates to the field of medical image processing, and in particular, to an arteriovenous image extraction method, an arteriovenous image extraction device, and a nonvolatile storage medium.
Background
In the related art, arteriovenous images are usually extracted from a medical scan image by applying threshold segmentation directly to the image and combining it with a region-growing algorithm. This approach, however, has a cumbersome workflow and takes too long, so the extraction efficiency is low.
In view of the above problems, no effective solution has been proposed at present.
Disclosure of Invention
The embodiments of the application provide an arteriovenous image extraction method, an arteriovenous image extraction device and a nonvolatile storage medium, which at least solve the technical problem in the related art of low efficiency when arteriovenous images are extracted from medical scan images by threshold segmentation combined with a region-growing algorithm.
According to one aspect of the embodiments of the application, an arteriovenous image extraction method is provided, including: extracting and fusing image features of different feature sizes from a scanned image of a target physiological tissue by means of a target neural network to obtain a first segmentation result, wherein the first segmentation result identifies the voxels in the scanned image that belong to different blood vessel types, the target neural network includes a feature extraction module and a feature fusion module connected through a global feature extraction module, and the blood vessel types include artery, vein and aorta; determining the HU value ranges corresponding to the blood vessel image regions of the different blood vessel types contained in the scanned image, and determining the voxel set corresponding to those regions according to the HU value ranges; classifying the voxels in the voxel set by blood vessel type according to the first segmentation result; and extracting an arteriovenous image from the scanned image according to the blood vessel types of the voxels in the voxel set and the first segmentation result, wherein the arteriovenous image includes an arterial vessel region, a venous vessel region and an aortic vessel region.
Optionally, classifying the voxels in the voxel set by blood vessel type according to the first segmentation result includes: determining the voxels to be classified in the voxel set, i.e. the voxels in the blood vessel image regions that have not yet been assigned a blood vessel type; counting, within a neighborhood of each voxel to be classified, the number of voxels corresponding to each blood vessel type, wherein the neighborhood includes all voxels adjacent to the voxel to be classified in the scanned image; and assigning to the voxel to be classified the blood vessel type with the largest voxel count.
Optionally, extracting and fusing image features of different feature sizes from the scanned image of the target physiological tissue by means of the target neural network to obtain the first segmentation result includes: dividing the scanned image into a plurality of image blocks, each of a preset size; for each image block, extracting and fusing image features of different feature sizes by means of the target neural network to obtain a second segmentation result for that block, wherein the second segmentation result identifies the voxels in the block that belong to different blood vessel types; and stitching the second segmentation results of all image blocks together to obtain the first segmentation result.
Optionally, extracting and fusing image features of different feature sizes from each image block by means of the target neural network to obtain the second segmentation result for that block includes: extracting the image features of the block through the feature extraction module of the target neural network, wherein the feature extraction module includes a plurality of first feature extraction units, different first feature extraction units extract image features of different feature sizes and feature dimensions, and the first feature extraction units are connected through a fully connected layer; processing a first target image feature among the image features of the block through the global feature extraction module to obtain a second target image feature, wherein the first target image feature is the image feature with the highest feature dimension and the smallest feature size among the image features of the block; performing convolution processing on the second target image feature through the feature fusion module of the target neural network, and fusing, during that processing, the image features of the block other than the first target image feature to obtain a target fused image feature, wherein the feature fusion module includes a plurality of second feature extraction units connected through up-sampling modules; and performing dimension-reducing convolution on the target fused image feature of the block to obtain the second segmentation result.
Optionally, the target neural network further includes a convolutional attention module, which includes a channel attention module and a spatial attention module; fusing the image features of the block other than the first target image feature during processing includes: compressing a to-be-processed image feature in the spatial dimension through the channel attention module to obtain a channel attention feature map of that feature, wherein the to-be-processed image features are the image features of the block other than the first target image feature; performing global max pooling on the channel attention feature map in the channel dimension through the spatial attention module to obtain a first processing result, and performing global average pooling in the channel dimension to obtain a second processing result; merging the first and second processing results to obtain a spatial attention feature map of the to-be-processed image feature; and fusing, during processing, that spatial attention feature map with the processing result output by the corresponding second feature extraction unit of the feature fusion module.
Optionally, the target neural network is trained as follows: acquiring first-type training scan images, which are contrast-enhanced scan images annotated with the image regions of the different blood vessel types; determining, in the first-type training scan images, the regional mean HU value of the image region of each blood vessel type and the mean window width corresponding to each blood vessel type, and determining the enhancement factor of the image region of each blood vessel type from the regional mean HU value and the mean window width of that type; reducing the HU values of the voxels in the image regions of the different blood vessel types by the corresponding enhancement factors to obtain second-type training scan images, which are plain (non-contrast) scan images; selecting target training images from the first-type and second-type training scan images to obtain a target training data set, in which the number of second-type training scan images is larger than the number of first-type training scan images; and training the neural network to be trained on the target training data set to obtain the target neural network.
Optionally, before training the neural network to be trained on the target training data set, the arteriovenous image extraction method further includes: determining the physiological tissue region in each target training image, i.e. the image region corresponding to the target physiological tissue in that image; and cropping the target training image according to that physiological tissue region.
Optionally, before training the neural network to be trained on the target training data set, the arteriovenous image extraction method further includes: collecting the volume information of the voxels in each target training image of the target training data set, wherein the volume information is the actual physical size covered by a voxel; determining target volume information from the volume information of the voxels in the target training images; and resampling each target training image in the target training data set according to the target volume information, so that after resampling the voxels of all target training images carry the same volume information.
According to another aspect of the embodiments of the application, an arteriovenous image extraction apparatus is also provided, including: a first processing module configured to extract and fuse image features of different feature sizes from a scanned image of a target physiological tissue by means of a target neural network to obtain a first segmentation result, wherein the first segmentation result identifies the voxels in the scanned image that belong to different blood vessel types, the target neural network includes a feature extraction module and a feature fusion module connected through a global feature extraction module, and the blood vessel types include artery, vein and aorta; a second processing module configured to determine the HU value ranges corresponding to the blood vessel image regions of the different blood vessel types contained in the scanned image, and to determine the voxel set corresponding to those regions according to the HU value ranges; a third processing module configured to classify the voxels in the voxel set by blood vessel type according to the first segmentation result; and a fourth processing module configured to extract an arteriovenous image from the scanned image according to the blood vessel types of the voxels in the voxel set and the first segmentation result, wherein the arteriovenous image includes an arterial vessel region, a venous vessel region and an aortic vessel region.
According to another aspect of the embodiments of the application, an arteriovenous image extraction device is further provided, including a scanning apparatus, a processor and a display. The scanning apparatus is configured to scan a target portion of an object to be scanned so as to obtain a scanned image of the target physiological tissue of that object. The processor is configured to extract and fuse image features of different feature sizes from the scanned image by means of a target neural network to obtain a first segmentation result, wherein the first segmentation result identifies the voxels in the scanned image that belong to different blood vessel types, the target neural network includes a feature extraction module and a feature fusion module connected through a global feature extraction module, and the blood vessel types include artery, vein and aorta; to determine the HU value ranges corresponding to the blood vessel image regions of the different blood vessel types contained in the scanned image, and to determine the voxel set corresponding to those regions according to the HU value ranges; to classify the voxels in the voxel set by blood vessel type according to the first segmentation result; and to extract an arteriovenous image from the scanned image according to the blood vessel types of the voxels in the voxel set and the first segmentation result, wherein the arteriovenous image includes an arterial vessel region, a venous vessel region and an aortic vessel region. The display is configured to display the arteriovenous image.
According to another aspect of the embodiments of the application, a nonvolatile storage medium is further provided, in which a program is stored, wherein the arteriovenous image extraction method is performed when the program runs.
According to another aspect of the embodiments of the application, an electronic device is further provided, including a memory and a processor for running a program stored in the memory, wherein the program, when run, executes the arteriovenous image extraction method.
In the embodiments of the application, a target neural network is used to extract and fuse image features of different feature sizes from a scanned image of a target physiological tissue to obtain a first segmentation result that identifies the voxels belonging to the different blood vessel types (artery, vein and aorta), the network comprising a feature extraction module and a feature fusion module connected through a global feature extraction module; the HU value ranges corresponding to the blood vessel image regions of the different blood vessel types are determined, and the voxel set corresponding to those regions is determined from the HU value ranges; the voxels in the voxel set are classified by blood vessel type according to the first segmentation result; and an arteriovenous image containing an arterial vessel region, a venous vessel region and an aortic vessel region is extracted from the scanned image according to those blood vessel types and the first segmentation result. Because the neural network first determines the voxels belonging to the different blood vessel types, the HU value ranges then delimit the blood vessel image regions, and the voxels of those regions are classified with the help of the first segmentation result, the first segmentation result can be extended to obtain more accurate and complete arterial, venous and aortic vessel regions. This achieves fast and accurate arteriovenous image extraction and thereby solves the technical problem of the low efficiency of the related-art approach, in which arteriovenous images are extracted from medical scan images by threshold segmentation combined with a region-growing algorithm.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute an undue limitation to the application. In the drawings:
fig. 1 is a schematic structural diagram of a computer device (mobile terminal) according to an embodiment of the present application;
fig. 2 is a schematic flow chart of an arteriovenous image extraction method according to an embodiment of the present application;
FIG. 3 is a schematic architecture diagram of a target neural network according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an attention module provided according to an embodiment of the present application;
FIG. 5 is a schematic architecture diagram of another target neural network provided in accordance with an embodiment of the present application;
FIG. 6 is a schematic architecture diagram of a global feature extraction module provided according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a data conversion process according to an embodiment of the present application;
FIG. 8 is a schematic diagram of an image cropping process according to an embodiment of the present application;
FIG. 9a is a schematic diagram of a model training process provided in accordance with an embodiment of the present application;
FIG. 9b is a schematic diagram of vein tags in a model training procedure provided in accordance with an embodiment of the present application;
FIG. 9c is a schematic diagram of an arterial label during model training provided in accordance with an embodiment of the present application;
FIG. 9d is a schematic illustration of an aortic label during model training provided in accordance with embodiments of the present application;
FIG. 9e is a schematic diagram of a merged label in a model training process provided in accordance with an embodiment of the present application;
FIG. 10 is a schematic illustration of a terminal vascular image restoration procedure provided in accordance with an embodiment of the present application;
fig. 11 is a schematic structural view of an arteriovenous image extraction apparatus provided according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of an arteriovenous image extraction device according to an embodiment of the present application.
Detailed Description
In order to make the solution of the present application better understood by those skilled in the art, the technical solutions in the embodiments of the application are described below in detail with reference to the accompanying drawings. Clearly, the described embodiments are only some, not all, of the embodiments of the application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments herein without inventive effort shall fall within the scope of protection of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In recent years, the incidence and mortality of diseases such as chronic obstructive pulmonary disease (COPD) have been rising, making them a serious, life-threatening health problem. When examining pulmonary diseases, chest CT scanning can provide lung images of high density resolution and high contrast, intuitively presenting lung information to doctors at the clinical diagnosis stage. To further determine whether the lung suffers from diseases such as COPD, an arteriovenous image can be extracted from a scanned image of the lung and an arteriovenous trunk model constructed, so that the lung segment information is presented more intuitively and the doctor is helped, for example, to plan an ablation therapy. In the prior art, blood vessel information is commonly extracted from a medical scan image by threshold segmentation combined with a region-growing algorithm, but this method has a cumbersome workflow, usually requires a large number of iterative operations, and incurs a high computation time cost.
In order to solve this problem, related solutions are provided in the embodiments of the present application, and are described in detail below.
According to an embodiment of the present application, a method embodiment of an arteriovenous image extraction method is provided. It should be noted that the steps illustrated in the flowchart of the drawings may be performed in a computer system, for example as a set of computer-executable instructions, and that although a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in a different order.
The method embodiments provided by the embodiments of the present application may be performed in a mobile terminal, a computer terminal or a similar computing device. Fig. 1 shows a hardware block diagram of a computer terminal (or mobile device) for implementing the arteriovenous image extraction method. As shown in fig. 1, the computer terminal 10 (or mobile device 10) may include one or more processors 102 (shown as 102a, 102b, ..., 102n), which may include but are not limited to a microprocessor (MCU) or a programmable logic device (FPGA), a memory 104 for storing data, and a transmission module 106 for communication functions. In addition, it may further include: a display, an input/output interface (I/O interface), a universal serial bus (USB) port (which may be included as one of the ports of the bus), a network interface, a power supply and/or a camera. It will be appreciated by those of ordinary skill in the art that the configuration shown in fig. 1 is merely illustrative and does not limit the configuration of the electronic device described above. For example, the computer terminal 10 may also include more or fewer components than shown in fig. 1, or have a different configuration from that shown in fig. 1.
It should be noted that the one or more processors 102 and/or other data processing circuits described above may be referred to generally herein as "data processing circuits". A data processing circuit may be embodied in whole or in part as software, hardware, firmware or any other combination. Furthermore, the data processing circuit may be a single stand-alone processing module, or incorporated, in whole or in part, into any of the other elements in the computer terminal 10 (or mobile device). As referred to in the embodiments of the application, the data processing circuit acts as a kind of processor control (for example, selection of a variable-resistance termination path connected to an interface).
The memory 104 may be used to store software programs and modules of application software, such as the program instructions/data storage device corresponding to the arteriovenous image extraction method in the embodiments of the application. The processor 102 executes the software programs and modules stored in the memory 104 so as to perform various functional applications and data processing, that is, to implement the arteriovenous image extraction method of the application. The memory 104 may include high-speed random access memory, and may also include nonvolatile memory, such as one or more magnetic storage devices, flash memory or other nonvolatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the computer terminal 10 via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is configured to receive or transmit data via a network. Specific examples of the network may include a wireless network provided by a communication provider of the computer terminal 10. In one example, the transmission device 106 includes a network interface controller (NIC) that can connect to other network devices through a base station so as to communicate with the Internet. In one example, the transmission device 106 may be a radio frequency (RF) module used to communicate with the Internet wirelessly.
The display may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of the computer terminal 10 (or mobile device).
In the above operating environment, the embodiment of the present application provides an arteriovenous image extraction method, as shown in fig. 2, including the following steps:
step S202, extracting and fusing image features with different feature sizes from a scanned image of a target physiological tissue by using a target neural network to obtain a first segmentation result, wherein the first segmentation result comprises voxels belonging to different blood vessel types in the scanned image, the target neural network comprises a feature extraction module and a feature fusion module, the feature extraction module and the feature fusion module are connected through a global feature extraction module, and the blood vessel types comprise arteries, veins and aorta;
The target physiological tissue may be any tissue of the human body, for example the lung, the liver or the kidney.
In the technical solution provided in step S202, in order to reduce the hardware requirements on the device running the arteriovenous image extraction method provided in the application, extracting and fusing image features of different feature sizes from the scanned image of the target physiological tissue to obtain the first segmentation result includes: dividing the scanned image into a plurality of image blocks, each of a preset size; for each image block, extracting and fusing image features of different feature sizes by means of the target neural network to obtain a second segmentation result for that block, wherein the second segmentation result identifies the voxels in the block that belong to different blood vessel types; and stitching the second segmentation results of all image blocks together to obtain the first segmentation result.
Specifically, the acquired plain-scan CT image data of the target physiological tissue (such as the lung) of a patient can be divided into image blocks of equal size, the blocks fed into the target neural network model in sequence, the second segmentation result output for each block collected, and the second segmentation results stitched together to obtain the first segmentation result. When obtaining a second segmentation result, only the voxels belonging to vessel regions in the image block may be retained, with the HU values of all voxels not belonging to a vessel region set to the same value as the picture background. The same applies to the stitched first segmentation result; that is, the first segmentation result may be an image in which the HU values of all non-vessel voxels in the plain-scan CT image are set to the background HU value. Here, the background of an image refers to the image regions that do not belong to the target physiological tissue.
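As an illustrative sketch only (not the patent's reference implementation), the block-wise inference and stitching just described might be organized as follows in Python; the patch size, the model interface, the label convention and the background HU value are all assumptions:

    import numpy as np

    def blockwise_segment(ct_volume, model, patch=(64, 64, 64), background_hu=-1024):
        # Split the CT volume into fixed-size blocks, segment each block with the
        # target neural network, and stitch the per-block results together.
        depth, height, width = ct_volume.shape
        labels = np.zeros(ct_volume.shape, dtype=np.uint8)
        for z in range(0, depth, patch[0]):
            for y in range(0, height, patch[1]):
                for x in range(0, width, patch[2]):
                    block = ct_volume[z:z + patch[0], y:y + patch[1], x:x + patch[2]]
                    # assumed model interface: returns a per-voxel class map with
                    # 0 = background, 1 = artery, 2 = vein, 3 = aorta
                    labels[z:z + patch[0], y:y + patch[1], x:x + patch[2]] = model(block)
        # keep only vessel voxels; set every non-vessel voxel to the background HU
        masked = np.where(labels > 0, ct_volume, background_hu)
        return labels, masked

In practice the edge blocks would be padded to the preset size before being fed to the network.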
In some embodiments of the application, in order to make full use of image features of different scales and thereby obtain a more accurate segmentation result, extracting and fusing image features of different feature sizes from each image block by means of the target neural network to obtain the second segmentation result includes: extracting the image features of the block through the feature extraction module of the target neural network, wherein the feature extraction module includes a plurality of first feature extraction units, different first feature extraction units extract image features of different feature sizes and feature dimensions, and the first feature extraction units are connected through a fully connected layer; extracting features from a first target image feature among those image features through the global feature extraction module to obtain a second target image feature, wherein the first target image feature is the image feature with the highest feature dimension and the smallest feature size among the image features of the block; performing convolution processing on the second target image feature through the feature fusion module of the target neural network, and fusing the remaining image features of the block during that processing to obtain a target fused image feature, wherein the feature fusion module includes a plurality of second feature extraction units connected through up-sampling modules; and performing dimension-reducing convolution on the target fused image feature to obtain the second segmentation result.
As an alternative implementation, the backbone architecture of the target neural network in the embodiments of the application is shown in fig. 3. As can be seen from fig. 3, the backbone comprises a feature extraction module and a feature fusion module; the feature extraction module includes a plurality of first feature extraction units, which are connected through max pooling layers of size 2×2, reducing the feature size and thereby achieving downsampling.
Each first feature extraction unit consists of two convolution layers, each with a Conv+BN+ReLU structure, where every Conv uses 3×3 convolution kernels.
A second feature extraction unit in the feature fusion module adds an image fusion layer to the structure of a first feature extraction unit, used to fuse the image features of the corresponding scale. The image fusion layer performs a channel concatenation (Concat) operation on the image features fed into it. The second feature extraction units are connected through convolution layers of size 2×2, which here serve for up-sampling. The output of the last second feature extraction unit in the feature fusion module undergoes dimension-reducing convolution with a 1×1 kernel to yield the second segmentation result, which for each image block contains the voxels belonging to the arterial vessel region, the venous vessel region and the aortic vessel region.
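A minimal PyTorch sketch of one such first feature extraction unit, under the assumptions that the network operates on 3-D volumes and that the channel counts are free parameters (neither is stated explicitly above):

    import torch.nn as nn

    class FirstFeatureExtractionUnit(nn.Module):
        # two convolution layers, each with the Conv+BN+ReLU structure and
        # 3x3 kernels, as described above
        def __init__(self, in_channels, out_channels):
            super().__init__()
            self.block = nn.Sequential(
                nn.Conv3d(in_channels, out_channels, kernel_size=3, padding=1),
                nn.BatchNorm3d(out_channels),
                nn.ReLU(inplace=True),
                nn.Conv3d(out_channels, out_channels, kernel_size=3, padding=1),
                nn.BatchNorm3d(out_channels),
                nn.ReLU(inplace=True),
            )

        def forward(self, x):
            return self.block(x)

    # consecutive units are joined by max pooling for downsampling
    downsample = nn.MaxPool3d(kernel_size=2)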
In addition, as can be seen from fig. 3, the target neural network further includes a convolutional attention module which, as shown in fig. 4, comprises a channel attention module and a spatial attention module. Each first feature extraction unit is connected with the corresponding second feature extraction unit through a convolutional attention module, and the bottom-level first and second feature extraction units are connected through two convolution layers whose structure is the same as that of the convolution layers inside the feature extraction units. The convolutional attention module can process the image features extracted by a first feature extraction unit along both the spatial and the channel dimension, thereby obtaining a spatial attention feature map of those image features.
Specifically, fusing the image features of each image block other than the first target image feature during processing includes: compressing a to-be-processed image feature in the spatial dimension through the channel attention module to obtain its channel attention feature map, wherein the to-be-processed image features are the image features of the block other than the first target image feature; performing global max pooling on the channel attention feature map in the channel dimension through the spatial attention module to obtain a first processing result, and performing global average pooling in the channel dimension to obtain a second processing result; merging the two processing results to obtain the spatial attention feature map of the to-be-processed image feature; and fusing, during processing, that spatial attention feature map with the processing result output by the corresponding second feature extraction unit of the feature fusion module.
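The channel-then-spatial attention described above follows the well-known CBAM pattern; a hedged PyTorch sketch (the reduction ratio, the 7×7 kernel and the 3-D layout are assumptions not fixed by the description) could look like this:

    import torch
    import torch.nn as nn

    class ConvolutionalAttention(nn.Module):
        def __init__(self, channels, reduction=8):
            super().__init__()
            # channel attention: compress the spatial dimensions, then weight channels
            self.mlp = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
            )
            # spatial attention: fuse channel-wise max- and average-pooled maps
            self.spatial_conv = nn.Conv3d(2, 1, kernel_size=7, padding=3)

        def forward(self, x):                      # x: (batch, channels, D, H, W)
            b, c = x.shape[:2]
            avg = self.mlp(x.mean(dim=(2, 3, 4)))  # spatial average pooling
            mx = self.mlp(x.amax(dim=(2, 3, 4)))   # spatial max pooling
            x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1, 1)
            mx_map = x.amax(dim=1, keepdim=True)   # global max pooling over channels
            avg_map = x.mean(dim=1, keepdim=True)  # global average pooling over channels
            attn = torch.sigmoid(self.spatial_conv(torch.cat([mx_map, avg_map], dim=1)))
            return x * attn                        # spatial attention feature map applied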
In some embodiments of the application, as shown in fig. 5, the encoding module and the decoding module in the backbone architecture of the target neural network are connected through a global feature extraction module (a Transformer), which encodes the high-dimensional features to obtain global features. The structure of the global feature extraction module is shown in fig. 6: it comprises a linear layer, a multi-head attention module, normalization layers and a feed-forward neural network. Specifically, the image features input to the module pass through a normalization layer into the multi-head attention module, which fuses the outputs of multiple attention heads and further prevents the model from overfitting; feature fusion is then performed, and normalization accelerates model convergence. The feed-forward neural network comprises two fully connected layers, which can project the features to a higher dimension and thereby obtain feature information that is easier to discriminate.
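A compact PyTorch sketch of such a Transformer-style global feature extraction module; the embedding dimension, head count and feed-forward width are assumptions, since the description above fixes only the layer types:

    import torch.nn as nn

    class GlobalFeatureExtractionModule(nn.Module):
        def __init__(self, dim=512, num_heads=8, ff_dim=2048):
            super().__init__()
            self.norm1 = nn.LayerNorm(dim)
            self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
            self.norm2 = nn.LayerNorm(dim)
            # feed-forward network with two fully connected layers, projecting
            # the features to a higher dimension and back
            self.ffn = nn.Sequential(
                nn.Linear(dim, ff_dim),
                nn.ReLU(inplace=True),
                nn.Linear(ff_dim, dim),
            )

        def forward(self, tokens):                 # tokens: (batch, sequence, dim)
            h = self.norm1(tokens)
            attn_out, _ = self.attn(h, h, h)       # multi-head attention
            tokens = tokens + attn_out             # residual connection
            return tokens + self.ffn(self.norm2(tokens))

The tokens would be produced from the bottleneck feature map by the linear (embedding) layer mentioned above.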
By adding the global feature extraction module to the target neural network, the global features of the scanned image are additionally captured, resulting in a better overall segmentation effect.
In some embodiments of the application, the target neural network is trained as follows: acquiring first-type training scan images, which are contrast-enhanced scan images annotated with the image regions of the different blood vessel types; determining, in the first-type training scan images, the regional mean HU value of the image region of each blood vessel type and the mean window width corresponding to each blood vessel type, and determining the enhancement factor of each region from the regional mean HU value and the mean window width of the same blood vessel type; reducing the HU values of the voxels in those regions by the corresponding enhancement factors to obtain second-type training scan images, which are plain scan images; selecting target training images from the first-type and second-type training scan images to obtain a target training data set in which the second-type images outnumber the first-type images; and training the neural network to be trained on the target training data set to obtain the target neural network.
When the first-type training scan images are acquired, they can be put through image segmentation to determine the image region corresponding to each blood vessel type. Because the first-type training scan images are contrast-enhanced, the regions of the various blood vessel types are easy to distinguish and can be delineated accurately, which in turn guarantees the accuracy of the corresponding regions in the second-type training scan images converted from them. At the same time, no large number of plain scan images needs to be acquired, which improves processing efficiency and convenience while ensuring the accuracy and reliability of the target neural network obtained by subsequent training.
Specifically, when the target training data set is constructed, the enhanced scan images may be reconstructed into plain scan images by data conversion, and these may be combined with a small number of enhanced scan images to form the target training data set. The data conversion process for an enhanced scan image is shown in fig. 7 and proceeds as follows. A deep learning data set is built from annotated enhanced scan images, in each of which the arteries, veins and aorta are labeled, and a neural network model is trained on it; subsequent enhanced scan images are then segmented with the trained model to obtain their annotations. The trained model thus recognizes enhanced scan images automatically and quickly yields the artery, vein and aorta regions of each image, which markedly speeds up subsequent processing. Next, for any enhanced scan image, the enhancement factor of each target region is determined from the HU mean of that region and the typical HU value of the corresponding region, and every voxel of the target region is reduced by the corresponding enhancement factor, yielding a reconstructed plain scan image. Finally, the regions of the enhanced scan image that were not reconstructed are extracted, a uniform enhancement factor is set, and every voxel of those regions is reduced by that uniform factor to obtain the final plain scan image. The target regions may include the arteries, the veins and the aorta. Of course, each enhanced scan image in the deep learning data set may additionally be labeled with other regions it contains, such as the heart, which is not limited here. In fig. 7, each square corresponds to a voxel, and the shade of a square represents its HU value; assuming the first enhancement region marked "1" represents an arterial region and the second enhancement region marked "2" a venous region, it can be seen that after processing the HU values of the voxels in the enhancement regions are significantly smaller and close to those of the voxels in the non-enhanced regions.
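A sketch of the HU reduction step under stated assumptions: region_masks maps each labeled vessel type to a boolean mask, typical_hu gives the typical HU mean of that type, and the uniform factor for non-reconstructed regions is arbitrary — none of these names or values come from the patent itself:

    import numpy as np

    def reconstruct_plain_scan(enhanced_ct, region_masks, typical_hu, uniform_factor=1.5):
        # Divide the HU value of every voxel in each labeled vessel region by the
        # enhancement factor of that region (region HU mean / typical HU mean),
        # then reduce all remaining voxels by one uniform factor.
        plain = enhanced_ct.astype(np.float32)
        covered = np.zeros(enhanced_ct.shape, dtype=bool)
        for vessel_type, mask in region_masks.items():   # e.g. 'artery', 'vein', 'aorta'
            factor = enhanced_ct[mask].mean() / typical_hu[vessel_type]
            plain[mask] = enhanced_ct[mask] / factor
            covered |= mask
        plain[~covered] = plain[~covered] / uniform_factor
        return plain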
As an alternative implementation, before training the neural network to be trained on the target training data set, and in order to avoid interference from background information, the physiological tissue region in each target training image may additionally be determined, i.e. the image region corresponding to the target physiological tissue in that image; the target training image is then cropped according to that physiological tissue region.
Specifically, as shown in fig. 8, the cropping process first determines the mask of the physiological tissue region in the target training image, then determines the minimal three-dimensional bounding box of the mask, and crops the target training image to the size of that bounding box.
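A minimal NumPy sketch of this cropping step, assuming the mask is a boolean volume aligned with the training image:

    import numpy as np

    def crop_to_tissue(image, tissue_mask):
        # minimal 3-D bounding box of the physiological tissue mask
        coords = np.argwhere(tissue_mask)
        lo = coords.min(axis=0)
        hi = coords.max(axis=0) + 1
        return image[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]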
It should be noted that, because a voxel corresponds to different actual volumes in different scanned images, before training on the target training data set the volume information of the voxels in each target training image can be collected, where the volume information is the actual physical size covered by a voxel; target volume information is then determined from the volume information of the voxels in the target training images; and each target training image in the target training data set is resampled according to the target volume information, so that after resampling the voxels of all target training images carry the same volume information.
Specifically, the target volume information may be determined from the volume information of the voxels in the target training images, for example as the median of the voxel volume information over all, or a portion (e.g. 90%), of the images in the training data set.
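A hedged sketch of this resampling step with SciPy; linear interpolation and the per-axis median are assumptions consistent with, but not mandated by, the description:

    import numpy as np
    from scipy.ndimage import zoom

    def median_spacing(spacings):
        # per-axis median voxel spacing over the (sub)set of training images
        return np.median(np.asarray(spacings, dtype=float), axis=0)

    def resample_to_spacing(volume, spacing, target_spacing):
        # rescale so that every voxel covers the target physical size
        factors = np.asarray(spacing, dtype=float) / np.asarray(target_spacing, dtype=float)
        return zoom(volume, factors, order=1)   # order=1: linear interpolation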
In order to give the training images the same gray-level distribution, they can be standardized, for example by z-score standardization: the mean and standard deviation of the HU values over the images in the training data set are computed, and for each training image the mean is subtracted from the HU value of every voxel and the result is divided by the standard deviation.
As an alternative implementation, in order to expand the data set and increase its diversity, and thereby improve the generalization ability of the target neural network, the training images can be randomly rotated by 0-45 degrees, scaled within a preset scaling interval, perturbed with added Gaussian noise, and subjected to brightness transformation.
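The normalization and augmentation steps above might be sketched as follows; the rotation axes, scaling interval, noise level and brightness gain are illustrative assumptions:

    import numpy as np
    from scipy.ndimage import rotate, zoom

    def zscore_normalize(volume, mean_hu, std_hu):
        # z-score standardization with dataset-level HU statistics
        return (volume.astype(np.float32) - mean_hu) / std_hu

    def augment(volume, rng):
        angle = rng.uniform(0.0, 45.0)                  # random 0-45 degree rotation
        volume = rotate(volume, angle, axes=(1, 2), reshape=False, order=1)
        scale = rng.uniform(0.85, 1.15)                 # assumed scaling interval
        volume = zoom(volume, scale, order=1)
        noise = rng.normal(0.0, 0.05, volume.shape)     # additive Gaussian noise
        gain = rng.uniform(0.9, 1.1)                    # brightness transformation
        return volume * gain + noise

A call such as augment(volume, np.random.default_rng(0)) would produce one randomized variant of a training volume.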
As an alternative embodiment, the complete training flow of the target neural network is shown in fig. 9a. First, the plain scan images are combined with a small number of enhanced scan images, and the labels of the different blood vessel types are merged into a combined label, yielding the training data set. The target neural network is then trained on this data set, with its weights adjusted during training. The first segmentation result output by the final target neural network contains all voxels of the input scanned image that correspond to the different blood vessel types. The labels of the different vessel types include the vein label shown in fig. 9b, the artery label shown in fig. 9c and the aorta label shown in fig. 9d; the resulting combined label is shown in fig. 9e.
Step S204, determining the HU value ranges corresponding to the blood vessel image regions of the different blood vessel types contained in the scanned image, and determining the voxel set corresponding to those regions in the scanned image according to the HU value ranges;
In the technical solution provided in step S204, the voxel set determined from the HU value ranges does not yet distinguish the blood vessel types of its voxels, so the voxels in the set must be classified with the help of the first segmentation result in order to determine the blood vessel type of each voxel in the set.
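For illustration, selecting the voxel set by HU range could be a single vectorized comparison; the numeric range shown is an assumption, not a value disclosed in the patent:

    import numpy as np

    def vessel_voxel_set(ct_volume, hu_range=(-200, 300)):
        # boolean mask of all voxels whose HU value falls in the vessel range
        low, high = hu_range
        return (ct_volume >= low) & (ct_volume <= high)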
Step S206, classifying the voxels in the voxel set by blood vessel type according to the first segmentation result;
step S208, extracting an arteriovenous image from the scanned image according to the blood vessel type corresponding to the voxels contained in the obtained voxel set and the first segmentation result, wherein the arteriovenous image comprises an arterial blood vessel region, a venous blood vessel region and an aortic blood vessel region.
In the solution provided in step S208, although the voxels in the voxel set are classified in step S206, some voxels may remain unclassifiable; for example, some voxels at the ends of vessels cannot be classified, so that the vessel ends appear broken in the image. To determine the vessel type of these unclassified voxels and extend the vessel ends so that they connect, the segmentation result can be repaired with the image restoration procedure shown in fig. 10, which comprises the following steps: determining the voxels to be classified in the voxel set, i.e. the voxels in the blood vessel image regions that have not been assigned a blood vessel type; counting, within a neighborhood of each voxel to be classified, the number of voxels of each blood vessel type, the neighborhood comprising all voxels adjacent to the voxel to be classified in the scanned image; and assigning to the voxel to be classified the blood vessel type with the largest voxel count. The voxels to be classified are those voxels in the voxel set that have no counterpart in the first segmentation result. In some embodiments of the application, when a voxel belongs both to the voxel set and to the first segmentation result, its blood vessel type is determined by the first segmentation result; a voxel that exists only in the voxel set and is not covered by the first segmentation result is regarded as a voxel to be classified.
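A sketch of this neighborhood-voting repair, assuming label 0 marks unclassified voxels, labels 1-3 the vessel types, and full 3-D adjacency; in practice the pass would be iterated until no voxel changes, so that the vessel ends grow outward:

    import numpy as np

    def classify_remaining_voxels(labels, voxel_set, num_classes=3):
        # assign each unclassified voxel in the voxel set the vessel type that
        # occurs most often among its adjacent voxels
        out = labels.copy()
        for z, y, x in np.argwhere(voxel_set & (labels == 0)):
            z0, y0, x0 = max(z - 1, 0), max(y - 1, 0), max(x - 1, 0)
            neighborhood = labels[z0:z + 2, y0:y + 2, x0:x + 2]
            counts = np.bincount(neighborhood.ravel(), minlength=num_classes + 1)[1:]
            if counts.max() > 0:
                out[z, y, x] = counts.argmax() + 1   # most frequent vessel type
        return out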
As an alternative embodiment, as shown in fig. 10, the voxel to be classified is the voxel with label value X; its neighborhood may be the image region within a 3×3 range centered on that voxel, and the numbers 1 and 2 in the squares, together with the shading of the squares, represent different label values.
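By way of illustration, a minimal sketch of this majority-vote repair on a 3D label volume, reusing the illustrative label codes above; the iteration scheme, which lets labels grow outward along broken vessel ends, is an editor's assumption rather than a procedure stated in the disclosure:

```python
import numpy as np

def repair_unclassified(labels, voxel_set, num_classes=4, max_iters=10):
    """Assign each unlabeled voxel of the HU-derived voxel set the
    majority vessel label of its 3x3x3 neighborhood, iterating so that
    labels can propagate along broken vessel ends."""
    lbl = labels.copy()
    for _ in range(max_iters):
        todo = np.argwhere(voxel_set & (lbl == 0))
        if todo.size == 0:
            break
        changed = False
        for z, y, x in todo:
            patch = lbl[max(z - 1, 0):z + 2,
                        max(y - 1, 0):y + 2,
                        max(x - 1, 0):x + 2]
            votes = np.bincount(patch.ravel(), minlength=num_classes)
            votes[0] = 0                       # background does not vote
            if votes.max() > 0:
                lbl[z, y, x] = votes.argmax()  # majority vessel type wins
                changed = True
        if not changed:
            break          # remaining voxels have no labeled neighbor yet
    return lbl
```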
In summary, the above embodiments extract and fuse image features of different feature sizes from a scanned image of a target physiological tissue using a target neural network to obtain a first segmentation result, the first segmentation result containing the voxels belonging to different blood vessel types in the scanned image, where the target neural network comprises a feature extraction module and a feature fusion module connected through a global feature extraction module, and the blood vessel types include arteries, veins and the aorta. The HU value ranges corresponding to the blood vessel image regions of the different blood vessel types in the scanned image are determined, and the voxel set corresponding to those regions is determined from the HU value ranges. The voxels in the voxel set are then divided by blood vessel type according to the first segmentation result, and an arteriovenous image comprising an arterial, a venous and an aortic blood vessel region is extracted from the scanned image according to the blood vessel types of the voxels in the voxel set and the first segmentation result. Because a neural network determines which voxels belong to blood vessel images, while the HU value ranges determine the voxels of the different blood vessel types, the arteriovenous image is extracted quickly and accurately, achieving the technical effect of efficient arteriovenous image extraction and solving the low-efficiency problem of extracting arteriovenous images from medical scan images by threshold segmentation and region-growing algorithms in the related art.
The embodiment of the application provides an arteriovenous image extraction device, and fig. 11 is a schematic structural diagram of the device. As shown in fig. 11, the device includes: a scanning device 110, a processor 112, and a display 114. The scanning device 110 is configured to scan a target portion of an object to be scanned, thereby obtaining a scanned image of a target physiological tissue of the object to be scanned. The processor 112 is configured to extract and fuse image features of different feature sizes from the scanned image of the target physiological tissue using a target neural network to obtain a first segmentation result, wherein the first segmentation result contains voxels belonging to different blood vessel types in the scanned image, the target neural network comprises a feature extraction module and a feature fusion module connected through a global feature extraction module, and the blood vessel types include arteries, veins and the aorta; determine the HU value ranges corresponding to the blood vessel image regions of the different blood vessel types in the scanned image, and determine the voxel set corresponding to the blood vessel image regions according to the HU value ranges; divide the voxels in the voxel set by blood vessel type according to the first segmentation result; and extract an arteriovenous image from the scanned image according to the blood vessel types of the voxels contained in the voxel set and the first segmentation result, wherein the arteriovenous image comprises an arterial blood vessel region, a venous blood vessel region and an aortic blood vessel region. The display 114 is configured to display the arteriovenous image.
An embodiment of the present application provides an arteriovenous image extraction device, and fig. 12 is a schematic structural diagram of the device. As shown in fig. 12, the device includes: a first processing module 120 configured to extract and fuse image features of different feature sizes from a scanned image of a target physiological tissue using a target neural network to obtain a first segmentation result, wherein the first segmentation result contains voxels belonging to different blood vessel types in the scanned image, the target neural network comprises a feature extraction module and a feature fusion module connected through a global feature extraction module, and the blood vessel types include arteries, veins and the aorta; a second processing module 122 configured to determine the HU value ranges corresponding to the blood vessel image regions of the different blood vessel types in the scanned image, and determine the voxel set corresponding to the blood vessel image regions according to the HU value ranges; a third processing module 124 configured to divide the voxels in the voxel set by blood vessel type according to the first segmentation result; and a fourth processing module 126 configured to extract an arteriovenous image from the scanned image according to the blood vessel types of the voxels contained in the voxel set and the first segmentation result, wherein the arteriovenous image comprises an arterial blood vessel region, a venous blood vessel region and an aortic blood vessel region.
In some embodiments of the present application, the step of determining, by the third processing module 124, a vessel type corresponding to a voxel in the set of voxels according to the first segmentation result includes: determining voxels to be classified in the voxel set, wherein the voxels to be classified are voxels which are not classified into corresponding blood vessel types in the blood vessel image area; counting the number of voxels corresponding to each blood vessel type in a neighborhood of the voxels to be classified, wherein the neighborhood comprises all voxels adjacent to the voxels to be classified in the scanned image; and determining the blood vessel type with the largest corresponding voxel number as the blood vessel type corresponding to the voxel to be classified.
In some embodiments of the present application, the step of the first processing module 120 extracting and fusing image features of different feature sizes from the scanned image of the target physiological tissue using the target neural network to obtain the first segmentation result includes: dividing the scanned image into a plurality of image blocks, wherein each of the plurality of image blocks has a preset size; for each of the plurality of image blocks, extracting and fusing image features of different feature sizes from the image block using the target neural network to obtain a second segmentation result corresponding to the image block, wherein the second segmentation result contains the voxels belonging to a blood vessel region in the image block; and stitching the second segmentation results corresponding to the image blocks to obtain the first segmentation result.
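By way of illustration, a minimal sketch of the block-wise inference, assuming `net` is a trained PyTorch model that maps a (1, 1, D, H, W) float tensor to per-class scores; the block size, zero padding, and absence of block overlap are illustrative choices, not requirements of the disclosure:

```python
import numpy as np
import torch

def segment_by_blocks(scan, net, block=(64, 64, 64)):
    """Pad the volume to a multiple of the block size, segment each
    block with `net`, and stitch the block results back together."""
    pads = [(0, -dim % b) for dim, b in zip(scan.shape, block)]
    vol = np.pad(scan, pads).astype(np.float32)
    out = np.zeros(vol.shape, dtype=np.uint8)
    bz, by, bx = block
    net.eval()
    with torch.no_grad():
        for z in range(0, vol.shape[0], bz):
            for y in range(0, vol.shape[1], by):
                for x in range(0, vol.shape[2], bx):
                    patch = torch.from_numpy(vol[z:z + bz, y:y + by, x:x + bx])
                    logits = net(patch[None, None])          # (1, C, bz, by, bx)
                    # Per-block ("second") segmentation result:
                    out[z:z + bz, y:y + by, x:x + bx] = logits.argmax(1)[0].numpy()
    # Stitched ("first") segmentation result, cropped back to the input shape.
    return out[:scan.shape[0], :scan.shape[1], :scan.shape[2]]
```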
In some embodiments of the present application, the step of the first processing module 120 extracting and fusing image features of different feature sizes from each image block using the target neural network to obtain the second segmentation result corresponding to the image block includes: extracting the image features of the image block through the feature extraction module of the target neural network, wherein the feature extraction module comprises a plurality of first feature extraction units, different first feature extraction units extract image features of different feature sizes and feature dimensions, and the plurality of first feature extraction units are connected through a fully connected layer; processing a first target image feature among the image features of the image block through the global feature extraction module to obtain a second target image feature, wherein the first target image feature is the image feature with the highest feature dimension and the smallest feature size among the image features of the image block; performing convolution processing on the second target image feature through the feature fusion module of the target neural network, and fusing the image features of the image block other than the first target image feature during the processing to obtain a target fusion image feature, wherein the feature fusion module comprises a plurality of second feature extraction units connected through an up-sampling module; and performing dimension-reducing convolution processing on the target fusion image feature of the image block to obtain the second segmentation result.
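By way of illustration, a heavily simplified sketch of this topology: three encoder units stand in for the first feature extraction units, a single convolution unit stands in for the global feature extraction module, and upsampling plus concatenation stands in for the fusion path. Channel counts, normalization, and unit depth are editor's assumptions, not values from the disclosure:

```python
import torch
import torch.nn as nn

def conv_unit(cin, cout):
    """One convolutional feature extraction unit (stand-in)."""
    return nn.Sequential(nn.Conv3d(cin, cout, 3, padding=1),
                         nn.InstanceNorm3d(cout),
                         nn.ReLU(inplace=True))

class VesselSegNet(nn.Module):
    def __init__(self, num_classes=4):
        super().__init__()
        self.enc1, self.enc2, self.enc3 = conv_unit(1, 16), conv_unit(16, 32), conv_unit(32, 64)
        self.pool = nn.MaxPool3d(2)
        self.global_feat = conv_unit(64, 64)  # stand-in for the global feature extraction module
        self.up = nn.Upsample(scale_factor=2, mode="trilinear", align_corners=False)
        self.dec2, self.dec1 = conv_unit(64 + 32, 32), conv_unit(32 + 16, 16)
        self.head = nn.Conv3d(16, num_classes, 1)  # dimension-reducing convolution

    def forward(self, x):                      # x: (B, 1, D, H, W), D/H/W divisible by 4
        f1 = self.enc1(x)                      # largest feature size, lowest feature dimension
        f2 = self.enc2(self.pool(f1))
        f3 = self.enc3(self.pool(f2))          # "first target image feature"
        g = self.global_feat(f3)               # "second target image feature"
        d2 = self.dec2(torch.cat([self.up(g), f2], 1))   # fuse non-deepest encoder features
        d1 = self.dec1(torch.cat([self.up(d2), f1], 1))
        return self.head(d1)                   # per-voxel class scores
```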
In some embodiments of the present application, the target neural network further includes a convolution attention module, and the convolution attention module includes a channel attention module and a spatial attention module. The step of the first processing module 120 fusing the image features of each image block other than the first target image feature during processing includes: compressing the image feature to be processed in the spatial dimension through the channel attention module to obtain a channel attention feature map of the image feature to be processed, wherein the image feature to be processed is an image feature of the image block other than the first target image feature; performing global maximum pooling on the channel attention feature map in the channel dimension through the spatial attention module to obtain a first processing result, and performing global average pooling in the channel dimension to obtain a second processing result; combining the first processing result and the second processing result to obtain a spatial attention feature map of the image feature to be processed; and fusing, during the processing, the spatial attention feature map corresponding to the image feature to be processed with the processing result output by the second feature extraction unit contained in the feature fusion module.
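By way of illustration, a minimal sketch of such a convolution attention module in the CBAM style the description suggests, shown in isolation (the fusion with the second feature extraction unit's output is omitted); the reduction ratio, the 7×7×7 spatial kernel, and the shared MLP are assumptions:

```python
import torch
import torch.nn as nn

class ConvAttention3d(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(              # shared MLP for channel attention
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv3d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c = x.shape[:2]
        # Channel attention: compress the spatial dimensions by average and max pooling.
        avg = self.mlp(x.mean(dim=(2, 3, 4)))
        mx = self.mlp(x.amax(dim=(2, 3, 4)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1, 1)  # channel attention map applied
        # Spatial attention: global max / average pooling over the channel dimension.
        mx_map = x.amax(dim=1, keepdim=True)   # first processing result
        avg_map = x.mean(dim=1, keepdim=True)  # second processing result
        attn = torch.sigmoid(self.spatial(torch.cat([mx_map, avg_map], dim=1)))
        return x * attn                        # spatial attention feature map applied
```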
In some embodiments of the present application, the target neural network is trained as follows: acquiring first type training scan images, wherein the image type of the first type training scan images is enhanced scan images and the first type training scan images carry labeling information for the image regions of the different blood vessel types; determining, in the first type training scan images, the regional HU value mean of the image regions of the different blood vessel types and the window width mean corresponding to the different blood vessel types, and determining the enhancement factors corresponding to the image regions of the different blood vessel types according to the regional HU value mean and the window width mean of the same blood vessel type; reducing the HU values of the voxels in the image regions of the different blood vessel types in the first type training scan images by the corresponding enhancement factors to obtain second type training scan images, wherein the image type of the second type training scan images is plain scan images; screening target training images from the first type training scan images and the second type training scan images to obtain a target training data set, wherein the number of second type training scan images in the target training data set is larger than the number of first type training scan images; and training the neural network to be trained with the target training data set to obtain the target neural network.
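By way of illustration, a minimal sketch of synthesizing the pseudo plain-scan images; the exact enhancement-factor formula is not given in the disclosure, so the ratio used below is only one assumed reading of "determined from the regional HU value mean and the window width mean":

```python
import numpy as np

def to_plain_scan(enhanced_hu, region_masks, window_width_means):
    """Synthesize a pseudo plain-scan volume from an enhanced scan by
    reducing the HU values inside each labeled vessel region by that
    region's enhancement factor."""
    plain = enhanced_hu.astype(np.float32)
    for vessel, mask in region_masks.items():
        region_mean = enhanced_hu[mask].mean()
        # Assumed factor: ratio of the regional HU mean to the window
        # width mean for the same vessel type (the claim names the two
        # inputs but not the exact formula).
        factor = region_mean / window_width_means[vessel]
        plain[mask] = enhanced_hu[mask] / max(factor, 1.0)
    return plain
```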
In some embodiments of the present application, before the step of training the neural network to be trained with the target training data set, the arteriovenous image extraction device is further configured to: determine the physiological tissue region in each target training image, wherein the physiological tissue region is the image region corresponding to the target physiological tissue in the target training image; and crop the target training image according to the physiological tissue region.
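By way of illustration, a minimal sketch of the cropping step, assuming a precomputed boolean tissue mask (for example a lung mask) is available; the safety margin is an illustrative choice:

```python
import numpy as np

def crop_to_tissue(volume, tissue_mask, margin=4):
    """Crop a training volume to the bounding box of the target
    physiological tissue, keeping a small safety margin."""
    coords = np.argwhere(tissue_mask)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + margin + 1, volume.shape)
    return volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
```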
In some embodiments of the present application, before the step of training the neural network to be trained with the target training data set, the arteriovenous image extraction device is further configured to: count the volume information corresponding to the voxels in each target training image in the target training data set, wherein the volume information is the actual spatial size covered by a voxel; determine target volume information according to the volume information of each target training image; and resample each target training image in the target training data set according to the target volume information, so that the voxels of all resampled target training images correspond to the same volume information.
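By way of illustration, a minimal sketch of the resampling step; scipy's `zoom` is used here as a convenient interpolator, not as the method prescribed by the disclosure:

```python
from scipy.ndimage import zoom

def resample_to_spacing(volume, spacing, target_spacing):
    """Resample a volume so its voxels match the target spacing, i.e.
    every voxel covers the same physical volume across the data set."""
    factors = [s / t for s, t in zip(spacing, target_spacing)]
    return zoom(volume, factors, order=1)  # use order=0 for label volumes
```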
Note that each module in the above arteriovenous image extraction device may be a program module (for example, a set of program instructions implementing a specific function) or a hardware module; the latter may take, but is not limited to, the following form: each module is embodied as a processor, or the functions of the modules are implemented by one processor.
The embodiment of the application provides a nonvolatile storage medium, wherein a program is stored in the nonvolatile storage medium, and the following arteriovenous image extraction method is executed when the program runs: extracting and fusing image features with different feature sizes from a scanned image of a target physiological tissue by adopting a target neural network to obtain a first segmentation result, wherein the first segmentation result comprises voxels belonging to different blood vessel types in the scanned image, the target neural network comprises a feature extraction module and a feature fusion module, the feature extraction module and the feature fusion module are connected through a global feature extraction module, and the blood vessel types comprise arteries, veins and aorta; determining HU value ranges corresponding to blood vessel image areas containing different blood vessel types in a scanned image, and determining a voxel set corresponding to the blood vessel image areas in the scanned image according to the HU value ranges; dividing the blood vessel type of the voxel in the voxel set according to the first segmentation result; and extracting an arteriovenous image from the scanning image according to the blood vessel type corresponding to the voxels contained in the obtained voxel set and the first segmentation result, wherein the arteriovenous image comprises an arterial blood vessel region, a venous blood vessel region and an aortic blood vessel region.
The embodiment of the application provides electronic equipment, which comprises: the system comprises a memory and a processor, wherein the processor is used for running a program stored in the memory, and the following arteriovenous image extraction method is executed when the program runs: extracting and fusing image features with different feature sizes from a scanned image of a target physiological tissue by adopting a target neural network to obtain a first segmentation result, wherein the first segmentation result comprises voxels belonging to different blood vessel types in the scanned image, the target neural network comprises a feature extraction module and a feature fusion module, the feature extraction module and the feature fusion module are connected through a global feature extraction module, and the blood vessel types comprise arteries, veins and aorta; determining HU value ranges corresponding to blood vessel image areas containing different blood vessel types in a scanned image, and determining a voxel set corresponding to the blood vessel image areas in the scanned image according to the HU value ranges; dividing the blood vessel type of the voxel in the voxel set according to the first segmentation result; and extracting an arteriovenous image from the scanning image according to the blood vessel type corresponding to the voxels contained in the obtained voxel set and the first segmentation result, wherein the arteriovenous image comprises an arterial blood vessel region, a venous blood vessel region and an aortic blood vessel region.
Embodiments of the present application provide a computer program product which, when executed by an electronic device or processor, performs an arteriovenous image extraction method of: extracting and fusing image features with different feature sizes from a scanned image of a target physiological tissue by adopting a target neural network to obtain a first segmentation result, wherein the first segmentation result comprises voxels belonging to different blood vessel types in the scanned image, the target neural network comprises a feature extraction module and a feature fusion module, the feature extraction module and the feature fusion module are connected through a global feature extraction module, and the blood vessel types comprise arteries, veins and aorta; determining HU value ranges corresponding to blood vessel image areas containing different blood vessel types in a scanned image, and determining a voxel set corresponding to the blood vessel image areas in the scanned image according to the HU value ranges; dividing the blood vessel type of the voxel in the voxel set according to the first segmentation result; and extracting an arteriovenous image from the scanning image according to the blood vessel type corresponding to the voxels contained in the obtained voxel set and the first segmentation result, wherein the arteriovenous image comprises an arterial blood vessel region, a venous blood vessel region and an aortic blood vessel region.
The embodiment of the application provides a diagnosis and treatment method, which comprises the following steps: extracting and fusing image features with different feature sizes from a scanned image of a target physiological tissue by adopting a target neural network to obtain a first segmentation result, wherein the first segmentation result comprises voxels belonging to different blood vessel types in the scanned image, the target neural network comprises a feature extraction module and a feature fusion module, the feature extraction module and the feature fusion module are connected through a global feature extraction module, and the blood vessel types comprise arteries, veins and aorta; determining HU value ranges corresponding to blood vessel image areas containing different blood vessel types in the scanned image, and determining a voxel set corresponding to the blood vessel image areas in the scanned image according to the HU value ranges; dividing the blood vessel types of the voxels in the voxel set according to the first segmentation result; extracting an arteriovenous image from the scanned image according to the blood vessel types corresponding to the voxels contained in the voxel set and the first segmentation result, wherein the arteriovenous image comprises an arterial blood vessel region, a venous blood vessel region and an aortic blood vessel region; and formulating an ablation scheme according to the arteriovenous image.
In the foregoing embodiments of the present application, the description of each embodiment has its own emphasis; for portions not detailed in one embodiment, reference is made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technical content may be implemented in other manners. The apparatus embodiments described above are merely exemplary; for example, the division of the units may be a logical functional division, and other divisions are possible in actual implementation: a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings or direct couplings or communication connections shown or discussed may be through some interfaces, units or modules, and may be electrical or take other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the related art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other media capable of storing program code.
The foregoing is merely a preferred embodiment of the present application. It should be noted that those skilled in the art may make several improvements and modifications without departing from the principles of the present application, and such improvements and modifications shall also fall within the protection scope of the present application.

Claims (10)

1. An arteriovenous image extraction method is characterized by comprising the following steps:
extracting and fusing image features with different feature sizes from a scanned image of a target physiological tissue by adopting a target neural network to obtain a first segmentation result, wherein the first segmentation result comprises voxels belonging to different blood vessel types in the scanned image, the target neural network comprises a feature extraction module and a feature fusion module, the feature extraction module and the feature fusion module are connected through a global feature extraction module, and the blood vessel types comprise arteries, veins and aorta;
determining HU value ranges corresponding to blood vessel image areas of different blood vessel types contained in the scanned image, and determining a voxel set corresponding to the blood vessel image areas in the scanned image according to the HU value ranges;
dividing the blood vessel type of the voxel in the voxel set according to the first segmentation result;
and extracting an arteriovenous image from the scanned image according to the blood vessel type corresponding to the voxels contained in the obtained voxel set and the first segmentation result, wherein the arteriovenous image comprises an arterial blood vessel region, a venous blood vessel region and an aortic blood vessel region.
2. The method of claim 1, wherein the step of determining a vessel type corresponding to a voxel in the set of voxels from the first segmentation result comprises:
determining voxels to be classified in the voxel set, wherein the voxels to be classified are voxels which are not classified into corresponding blood vessel types in the blood vessel image region;
counting the number of voxels corresponding to each blood vessel type in a neighborhood of the voxel to be classified, wherein the neighborhood comprises all voxels adjacent to the voxel to be classified in the scanning image;
and determining the blood vessel type with the largest corresponding voxel number as the blood vessel type corresponding to the voxel to be classified.
3. The method for extracting an arteriovenous image according to claim 1, wherein the step of extracting and fusing image features of different feature sizes from the scanned image of the target physiological tissue by using the target neural network to obtain the first segmentation result comprises:
dividing the scanned image into a plurality of image blocks, wherein the size of each image block in the plurality of image blocks is a preset size;
for each image block in the plurality of image blocks, extracting and fusing image features with different feature sizes from each image block by adopting the target neural network to obtain a second segmentation result corresponding to each image block, wherein the second segmentation result comprises voxels belonging to different blood vessel types in each image block;
and splicing the second segmentation results corresponding to each image block to obtain the first segmentation results.
4. The method for extracting an arteriovenous image according to claim 3, wherein the step of extracting and fusing image features with different feature sizes from each image block by using the target neural network to obtain the second segmentation result corresponding to each image block comprises the steps of:
extracting image features of each image block through a feature extraction module of the target neural network, wherein the feature extraction module comprises a plurality of first feature extraction units, the feature sizes and feature dimensions of the image features extracted by different first feature extraction units are different, and the plurality of first feature extraction units are connected through a fully connected layer;
processing a first target image feature in the image features of each image block through the global feature extraction module to obtain a second target image feature, wherein the first target image feature is the image feature with the highest feature dimension and the smallest feature size in the image features of each image block;
performing convolution processing on the second target image feature through a feature fusion module of the target neural network, and fusing the image features of each image block except the first target image feature in the processing process to obtain a target fusion image feature, wherein the feature fusion module comprises a plurality of second feature extraction units, and the second feature extraction units are connected through an up-sampling module;
and performing dimension reduction convolution processing on the target fusion image characteristics of each image block to obtain the second segmentation result.
5. The method for extracting an arteriovenous image according to claim 4, wherein the target neural network further comprises a convolution attention module, and the convolution attention module comprises a channel attention module and a spatial attention module; the step of fusing image features of each image block other than the first target image feature during processing comprises:
compressing the image feature to be processed in the spatial dimension through the channel attention module to obtain a channel attention feature map of the image feature to be processed, wherein the image feature to be processed is an image feature of each image block except the first target image feature;
performing global maximum pooling on the channel attention feature map in the channel dimension through the spatial attention module to obtain a first processing result, and performing global average pooling in the channel dimension to obtain a second processing result;
combining the first processing result and the second processing result to obtain a spatial attention characteristic diagram of the image characteristic to be processed;
and fusing the spatial attention characteristic diagram corresponding to the image characteristic to be processed and the processing result output by the second characteristic extraction unit contained in the characteristic fusion module in the processing process.
6. The arteriovenous image extraction method of claim 1, wherein the target neural network is trained by:
acquiring a first type of training scanning image, wherein the image type of the first type of training scanning image is an enhanced scanning image, and the first type of training scanning image comprises labeling information of image areas with different blood vessel types;
determining the regional HU value mean of the image regions of the different blood vessel types and the window width mean corresponding to the different blood vessel types in the first type training scan image, and determining the enhancement factors corresponding to the image regions of the different blood vessel types according to the regional HU value mean and the window width mean corresponding to the same blood vessel type;
reducing HU values of voxels in image areas of different blood vessel types in the first type of training scanning images by corresponding enhancement factors to obtain a second type of training scanning images, wherein the image types of the second type of training scanning images are plain scanning images;
screening target training images from the first type training scanning images and the second type training scanning images to obtain a target training data set, wherein the number of the second type training scanning images in the target training data set is larger than that of the first type training scanning images;
and training the neural network to be trained through the target training data set to obtain the target neural network.
7. An arteriovenous image extraction device, characterized by comprising:
a first processing module, configured to extract and fuse image features with different feature sizes from a scanned image of a target physiological tissue by adopting a target neural network to obtain a first segmentation result, wherein the first segmentation result comprises voxels belonging to different blood vessel types in the scanned image, the target neural network comprises a feature extraction module and a feature fusion module, the feature extraction module and the feature fusion module are connected through a global feature extraction module, and the blood vessel types comprise arteries, veins and aorta;
a second processing module, configured to determine HU value ranges corresponding to blood vessel image areas containing different blood vessel types in the scanned image, and determine a voxel set corresponding to the blood vessel image areas in the scanned image according to the HU value ranges;
a third processing module, configured to divide the blood vessel types of the voxels in the voxel set according to the first segmentation result;
and a fourth processing module, configured to extract an arteriovenous image from the scanned image according to the blood vessel types corresponding to the voxels contained in the obtained voxel set and the first segmentation result, wherein the arteriovenous image comprises an arterial blood vessel region, a venous blood vessel region and an aortic blood vessel region.
8. An arteriovenous image extraction device is characterized by comprising a scanning device, a processor and a display, wherein,
the scanning equipment is used for scanning a target part of an object to be scanned so as to obtain a scanning image of a target physiological tissue of the object to be scanned;
the processor is used for extracting and fusing image features with different feature sizes from a scanned image of a target physiological tissue by adopting a target neural network to obtain a first segmentation result, wherein the first segmentation result comprises voxels belonging to different blood vessel types in the scanned image, the target neural network comprises a feature extraction module and a feature fusion module, the feature extraction module and the feature fusion module are connected through a global feature extraction module, and the blood vessel types comprise arteries, veins and aorta; determining HU value ranges corresponding to blood vessel image areas containing different blood vessel types in the scanned image, and determining a voxel set corresponding to the blood vessel image areas in the scanned image according to the HU value ranges; dividing the blood vessel types of the voxels in the voxel set according to the first segmentation result; and extracting an arteriovenous image from the scanned image according to the blood vessel type corresponding to the voxels contained in the obtained voxel set and the first segmentation result, wherein the arteriovenous image comprises an arterial blood vessel region, a venous blood vessel region and an aortic blood vessel region;
The display is used for displaying the arteriovenous images.
9. A nonvolatile storage medium, wherein a program is stored in the nonvolatile storage medium, and wherein the program, when executed, controls a device in which the nonvolatile storage medium is located to execute the arteriovenous image extraction method according to any one of claims 1 to 6.
10. An electronic device, comprising: a memory and a processor for executing a program stored in the memory, wherein the program is executed to perform the arteriovenous image extraction method as set forth in any one of claims 1 to 6.