CN113256670A - Image processing method and device, and network model training method and device - Google Patents

Image processing method and device, and network model training method and device

Info

Publication number
CN113256670A
CN113256670A
Authority
CN
China
Prior art keywords
medical image
feature map
neural network
network
sample
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110566138.8A
Other languages
Chinese (zh)
Inventor
简伟健
陈宽
王少康
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Infervision Medical Technology Co Ltd
Original Assignee
Infervision Medical Technology Co Ltd
Application filed by Infervision Medical Technology Co Ltd
Priority to CN202110566138.8A
Publication of CN113256670A
Legal status: Pending

Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06F ELECTRIC DIGITAL DATA PROCESSING
          • G06F 18/00 Pattern recognition
            • G06F 18/20 Analysing
              • G06F 18/24 Classification techniques
        • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
          • G06N 3/00 Computing arrangements based on biological models
            • G06N 3/02 Neural networks
              • G06N 3/04 Architecture, e.g. interconnection topology
                • G06N 3/045 Combinations of networks
        • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 7/00 Image analysis
            • G06T 7/10 Segmentation; Edge detection
              • G06T 7/136 involving thresholding
              • G06T 7/187 involving region growing; involving region merging; involving connected component labelling
              • G06T 7/194 involving foreground-background segmentation
          • G06T 2207/00 Indexing scheme for image analysis or image enhancement
            • G06T 2207/10 Image acquisition modality
              • G06T 2207/10072 Tomographic images
                • G06T 2207/10081 Computed x-ray tomography [CT]
            • G06T 2207/20 Special algorithmic details
              • G06T 2207/20081 Training; Learning
              • G06T 2207/20084 Artificial neural networks [ANN]
            • G06T 2207/30 Subject of image; Context of image processing
              • G06T 2207/30004 Biomedical image processing
                • G06T 2207/30101 Blood vessel; Artery; Vein; Vascular
        • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
          • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
            • G06V 2201/03 Recognition of patterns in medical or anatomical images


Abstract

The application discloses an image processing method and apparatus and a network model training method and apparatus. The method comprises the following steps: acquiring a first feature map of a medical image through a first neural network according to the medical image; acquiring a second feature map of the medical image through a graph network according to the first feature map; and acquiring a segmentation result of the arteries and veins of the medical image through a second neural network according to the second feature map. By adopting the graph network, global information of the medical image can be obtained; meanwhile, by adopting the second neural network, fine local segmentation of the medical image can be performed on the basis of that global information, so that a more refined segmentation result of the arteries and veins is obtained.

Description

Image processing method and device, and network model training method and device
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, and a network model training method and apparatus.
Background
Blood vessel segmentation is useful in the imaging diagnosis of blood vessels. Cardiovascular disease, for example, is one of the most common diseases in the world today. For segmentation of the coronary arteries of the heart, traditional methods segment the vessels based on region growing or based on deep learning. However, because the coronary arteries and the veins have similar shapes and very close gray levels, traditional methods have difficulty distinguishing the coronary arteries from the veins, so the segmentation accuracy of the blood vessels is not high.
Disclosure of Invention
In view of the above, embodiments of the present application are directed to providing an image processing method and apparatus, and a network model training method and apparatus, which can improve the segmentation accuracy of a blood vessel.
According to a first aspect of embodiments of the present application, there is provided an image processing method, including: acquiring a first feature map of a medical image through a first neural network according to the medical image; acquiring a second feature map of the medical image through a graph network according to the first feature map; and acquiring a segmentation result of the arteries and veins of the medical image through a second neural network according to the second feature map.
In one embodiment, the method further comprises: performing midline extraction on the artery and the vein to obtain corresponding midlines of the artery and the vein; performing point sampling on the middle line to obtain a plurality of key points; and acquiring an adjacency matrix corresponding to the artery and the vein according to the connection relation among the plurality of key points.
In one embodiment, the obtaining a second feature map of the medical image through a graph network according to the first feature map comprises: acquiring the second feature map through the graph network according to the first feature map and the adjacency matrix.
In one embodiment, the line width of the central line is a single pixel point, and the method further includes: traversing each of the plurality of pixel points on the central line to count the number of its adjacent pixel points.
In one embodiment, the performing point sampling on the middle line to obtain a plurality of key points includes: and sampling pixel points with two and/or one adjacent pixel point on the central line according to a preset distance to obtain the plurality of key points.
In one embodiment, the medical image is obtained by cutting a block of a preset size from the original medical image centering on any of the plurality of key points.
In one embodiment, the acquiring, from the medical image, a first feature map of the medical image through a first neural network includes: and acquiring the first feature map through the first neural network according to the medical image and the segmentation result of the blood vessel and the background, wherein the blood vessel comprises the vein and the artery.
In one embodiment, when the artery is a coronary artery, the method further comprises: acquiring a connected domain of the vein in a segmentation result of the blood vessel and the background according to a connected domain algorithm; and removing the veins of which the size of the connected domain exceeds a preset threshold value from the segmentation results of the artery and the veins to obtain the segmentation result of the coronary artery.
In one embodiment, the second feature map is a feature map output by a preset layer of the graph network.
According to a second aspect of the embodiments of the present application, there is provided a method for training a network model, including: determining a sample medical image, wherein the sample medical image comprises annotated veins and arteries; training a cascade neural network based on the sample medical image to generate the network model, wherein the cascade neural network comprises a first neural network, a graph network and a second neural network which are connected in series, the first neural network is used for outputting a first sample feature map of the sample medical image, the graph network is used for outputting a second sample feature map of the sample medical image based on the first sample feature map, and the second neural network is used for outputting a segmentation result of veins and arteries of the sample medical image based on the second sample feature map.
In one embodiment, the training a cascaded neural network based on the sample medical image to generate the network model comprises: obtaining the first sample feature map from the sample medical image and the first neural network; obtaining a first loss function value of the network model according to the first sample feature map and the graph network; obtaining a second loss function value of the network model according to the second sample feature map and the second neural network; and updating parameters of the network model based on the first loss function value and the second loss function value.
In one embodiment, the method further comprises: performing midline extraction on the artery and the vein to obtain corresponding midlines of the artery and the vein; performing point sampling on the middle line to obtain a plurality of key points; and acquiring an adjacency matrix corresponding to the artery and the vein according to the connection relation among the plurality of key points.
In one embodiment, the obtaining a first loss function value of the network model according to the first sample feature map and the graph network comprises: obtaining the classification result of the arteries and veins through the graph network according to the first sample feature map and the adjacency matrix; and determining the first loss function value of the network model according to the classification result and the annotated veins and arteries.
According to a third aspect of embodiments of the present application, there is provided an image processing apparatus including: a first acquisition module configured to acquire a first feature map of a medical image through a first neural network according to the medical image; a second acquisition module configured to acquire a second feature map of the medical image through a graph network according to the first feature map; and a third acquisition module configured to acquire the segmentation result of the arteries and veins of the medical image through a second neural network according to the second feature map.
In one embodiment, the apparatus further comprises: a module for executing each step in the image processing method mentioned in the above embodiments.
According to a fourth aspect of the embodiments of the present application, there is provided a training apparatus for a network model, including: a determination module configured to determine a sample medical image, wherein the sample medical image comprises annotated veins and arteries; a training module configured to train a cascaded neural network based on the sample medical image to generate the network model, wherein the cascaded neural network includes a first neural network, a graph network and a second neural network in series, the first neural network is used for outputting a first sample feature map of the sample medical image, the graph network is used for outputting a second sample feature map of the sample medical image based on the first sample feature map, and the second neural network is used for outputting a segmentation result of veins and arteries of the sample medical image based on the second sample feature map.
In one embodiment, the apparatus further comprises: and a module for executing each step in the network model training method mentioned in the above embodiments.
According to a fifth aspect of embodiments of the present application, there is provided an electronic apparatus, including: a processor; a memory for storing the processor-executable instructions; the processor is configured to perform the method according to any of the above embodiments.
According to a sixth aspect of embodiments of the present application, there is provided a computer-readable storage medium storing a computer program for executing the method of any of the above embodiments.
An image processing method provided by the embodiments of the application obtains the segmentation result of the arteries and veins of a medical image by inputting the medical image into a cascaded neural network (i.e., a first neural network, a graph network and a second neural network in series). By adopting the graph network, global information of the medical image can be obtained; meanwhile, by adopting the second neural network, fine local segmentation of the medical image can be performed on the basis of that global information, so that a more refined segmentation result of the arteries and veins is obtained.
Drawings
The above and other objects, features and advantages of the present application will become more apparent by describing in more detail embodiments of the present application with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the principles of the application. In the drawings, like reference numbers generally represent like parts or steps.
Fig. 1 is a schematic diagram illustrating an implementation environment provided by an embodiment of the present application.
Fig. 2 is a schematic flowchart illustrating an image processing method according to an embodiment of the present application.
Fig. 3 is a schematic diagram illustrating key points provided in an embodiment of the present application.
Fig. 4 is a schematic process diagram illustrating an image processing method according to an embodiment of the present application.
Fig. 5 is a schematic flowchart illustrating a method for training a network model according to an embodiment of the present application.
Fig. 6 is a schematic flowchart illustrating a network model training method according to another embodiment of the present application.
Fig. 7 is a block diagram illustrating an image processing apparatus according to an embodiment of the present application.
Fig. 8 is a block diagram illustrating an image processing apparatus according to another embodiment of the present application.
Fig. 9 is a block diagram illustrating a training apparatus for a network model according to an embodiment of the present application.
Fig. 10 is a block diagram illustrating a training apparatus for a network model according to another embodiment of the present application.
Fig. 11 is a block diagram illustrating an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Summary of the application
A neural network is a computational model formed by a large number of interconnected nodes (or neurons). Each node corresponds to a policy function, and the connection between every two nodes carries a weighted value, called a weight, for the signal passing through that connection. A neural network generally comprises a plurality of neural network layers cascaded one after another: the output of the i-th neural network layer is connected to the input of the (i+1)-th layer, the output of the (i+1)-th layer is connected to the input of the (i+2)-th layer, and so on. After a training sample is input into the cascaded neural network layers, each layer produces an output that serves as the input of the next layer, so the final output is obtained through the computation of multiple layers. The prediction of the output layer is then compared with the real target value, and the weight matrices and policy functions of each layer are adjusted according to the difference between the prediction and the target. Using the training samples, the neural network repeats this adjustment process, tuning parameters such as the weights, until the prediction output by the neural network agrees with the real target; this process is called the training process of the neural network. After the neural network is trained, a neural network model is obtained.
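As an illustration of this training process, a minimal sketch follows. PyTorch, the Adam optimizer and a cross-entropy loss are assumptions of this sketch; the embodiments do not prescribe a framework, optimizer, or loss.

```python
import torch
import torch.nn as nn

def train(model, data_loader, epochs=10, lr=1e-3):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for inputs, targets in data_loader:
            preds = model(inputs)           # forward pass through the cascaded layers
            loss = loss_fn(preds, targets)  # compare prediction with the real target
            optimizer.zero_grad()
            loss.backward()                 # propagate the difference backward
            optimizer.step()                # adjust weights according to the difference
```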
In addition, Graph Networks (GNs) are a deep learning method based on graph-domain analysis, and can be understood as a set of functions that perform relational reasoning in a topological space organized in graph structures. A graph network may adopt a network structure such as a Graph Convolutional Network (GCN) or a Graph Attention Network (GAT), and contains no operators, such as pooling, that change the graph structure; that is, the topology of the graph stays consistent after every convolution.
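As an illustration, a minimal graph-convolution layer of the kind such a graph network might stack is sketched below. The GCN-style propagation rule H' = σ(ÂHW) and the use of PyTorch are assumptions of this sketch; the embodiments do not fix the graph-network architecture.

```python
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    """One GCN-style layer; the graph topology is unchanged (no pooling)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, h, adj):
        # adj: (N, N) adjacency matrix with self-loops, row-normalized
        # h:   (N, in_dim) node features, one row per key point
        return torch.relu(self.linear(adj @ h))
```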
Blood vessel segmentation is useful in the imaging diagnosis of blood vessels. For example, segmentation of the coronary arteries of the heart is difficult for the following reasons: 1) the coronary arteries have a complex structure with many small branches; 2) the coronary arteries have uneven gray levels, low contrast with surrounding tissue, and blurred boundaries at the distal parts of the vessels; 3) the coronary arteries contain various complex lesions; 4) motion artifacts of the heart affect the imaging of the coronary arteries; 5) many veins of the heart interleave with the coronary arteries, and when image quality is low the veins and coronary arteries may appear connected, so vein false positives easily appear in the segmentation.
For blood vessel segmentation based on region growing, the seed points of the vessel tree are first extracted, generally taking the root region of the vessel tree as the seed points. Features of the root region (such as gray level and texture) are then extracted, the vessel tree is grown into the neighborhood to find similar vessels, and the process stops when the growth condition is no longer met. However, with this segmentation method, if a plaque or an artifact is encountered, region growing stops because the feature condition is not satisfied, so the robustness is low.
For blood vessel segmentation based on deep learning, a DICOM image is input into a deep learning network, a loss function value is calculated from the output of the network and the gold standard, the gradient of the loss function value is back-propagated to correct the parameters of the model, and finally a segmentation model with high robustness is output. However, because the coronary arteries and the veins have similar shapes and very close gray levels, the segmentation model has difficulty distinguishing the coronary arteries from the veins, which leads to many vein false positives.
Therefore, when the coronary arteries are segmented using conventional blood vessel segmentation methods, the segmentation accuracy of the obtained blood vessels is not high.
Having described the general principles of the present application, various non-limiting embodiments of the present application will now be described with reference to the accompanying drawings.
Exemplary System
FIG. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application. The implementation environment includes a CT scanner 130, a server 120, and a computer device 110. The computer device 110 may acquire medical images from the CT scanner 130, and the computer device 110 may be connected to the server 120 via a communication network. Optionally, the communication network is a wired network or a wireless network.
The CT scanner 130 is used for performing X-ray scanning on human tissue to obtain a CT image of that tissue. In one embodiment, a frontal chest image, i.e., a medical image in the present application, can be obtained by scanning the lungs with the CT scanner 130.
The computer device 110 may be a general-purpose computer or a computer device composed of application-specific integrated circuits, which is not limited in this embodiment. For example, the computer device 110 may be a mobile terminal device such as a tablet computer, or a personal computer (PC) such as a laptop or desktop computer. Those skilled in the art will appreciate that the number of computer devices 110 may be one or more, and their types may be the same or different. For example, there may be one computer device 110, or several tens or hundreds of them, or more. The number and type of the computer devices 110 are not limited in the embodiments of the present application.
The server 120 is a server, or consists of several servers, or is a virtualization platform, or a cloud computing service center.
In some alternative embodiments, a network model (consisting of a first neural network, a graph network, and a second neural network in series) may be deployed in the computer device 110 for segmenting the medical image. The computer device 110 may segment the medical image acquired from the CT scanner 130 using the network model deployed thereon, and specifically, first, input the medical image into a first neural network, acquire a first feature map of the medical image, then, acquire a second feature map of the medical image through a graph network according to the first feature map, and finally, input the second feature map into a second neural network, and acquire a segmentation result of arteries and veins of the medical image.
In some alternative embodiments, the server 120 receives the training images collected by the computer device 110 and trains the cascaded neural networks (i.e., the first neural network, the graph network, and the second neural network in series) with the training images to obtain the network model. The computer device 110 may send the medical image acquired from the CT scanner 130 to the server 120, and the server 120 performs segmentation using the network model trained thereon, specifically, first, the medical image is input into a first neural network to acquire a first feature map of the medical image, then, a second feature map of the medical image is acquired through a graph network according to the first feature map, and finally, the second feature map is input into a second neural network to acquire a segmentation result of an artery and a vein of the medical image. Server 120 sends the segmentation results to computer device 110 for review by healthcare workers.
By adopting the graph network, global information of the medical image can be obtained; meanwhile, by adopting the second neural network, fine local segmentation of the medical image can be performed on the basis of that global information, so that a more refined segmentation result of the arteries and veins is obtained.
Exemplary method
Fig. 2 is a flowchart illustrating an image processing method according to an embodiment of the present application. The method described in fig. 2 is performed by a computing device (e.g., a server), but the embodiments of the present application are not limited thereto. The server may be one server, or may be composed of a plurality of servers, or may be a virtualization platform, or a cloud computing service center, which is not limited in this embodiment of the present application. As shown in fig. 2, the method includes the following.
S210: according to the medical image, a first feature map of the medical image is acquired through a first neural network.
In one embodiment, the medical image may refer to an original medical image, i.e., an image directly acquired by computed tomography (CT), computed radiography (CR), digital radiography (DR), magnetic resonance imaging, ultrasound, or other techniques.
In an embodiment, the medical image may also be a preprocessed image, and the preprocessed image may be a medical image obtained by preprocessing an original medical image. However, the embodiment of the present application does not specifically limit a specific implementation manner of the preprocessing, and the preprocessing may refer to gray scale normalization, denoising processing, image enhancement processing, or the like.
In an embodiment, the medical image may be a three-dimensional CT image, a part of the three-dimensional medical image in the three-dimensional CT image (i.e., a block obtained by cutting out a patch from the three-dimensional CT image), or a layer of two-dimensional medical image in the three-dimensional CT image, which is not limited in this embodiment of the present invention.
In an embodiment, the first neural network may be any type of neural network. Optionally, the first Neural Network may be a Convolutional Neural Network (CNN), a Deep Neural Network (DNN), a Recurrent Neural Network (RNN), or the like, and the specific type of the first Neural Network is not limited in this embodiment of the application. The first neural network may include neural network layers such as an input layer, a convolutional layer, a pooling layer, and a connection layer, which is not limited in this embodiment. In addition, the number of each neural network layer is not limited in the embodiments of the present application. The first neural network may be understood as an encoder for performing feature extraction and data dimension reduction.
In an embodiment, the medical image is directly input into the first neural network, and a first feature map of the medical image can be obtained.
In another embodiment, the first feature map can be obtained through the first neural network according to the medical image and the segmentation result of the background and the blood vessel. That is, the medical image and the segmentation result of the background and the blood vessel may be input directly into the first neural network to obtain the first feature map of the medical image, or they may first be combined and then input into the first neural network. The first feature map may be one-dimensional.

The segmentation result of the background and the blood vessel may be obtained by deep learning or machine learning, which is not specifically limited in this embodiment of the present application. In this segmentation result, the background has a value of 0 and the foreground (including arteries and veins) has a value of 1.

Compared with acquiring the first feature map based on the medical image only, acquiring it based on both the medical image and the segmentation result of the background and the blood vessel makes the extracted first feature map more accurate and richer in hierarchical information.
In an embodiment, the first feature map may not be one-dimensional, and the global pooling operation is performed on the first feature map to obtain a dimension-reduced first feature map, where the dimension-reduced first feature map is one-dimensional.
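As an illustration of the encoder and the global pooling operation, a minimal sketch follows. The 3-D convolutional architecture, the two-channel input that stacks the image patch with the vessel/background mask, and all layer sizes are assumptions of this sketch; the embodiments do not fix the encoder structure.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, in_ch=2, feat_dim=64):
        # in_ch=2 assumes the image patch and the vessel/background mask
        # are stacked as channels (one possible way to combine them).
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv3d(32, feat_dim, 3, padding=1), nn.ReLU(),
        )
        self.pool = nn.AdaptiveAvgPool3d(1)    # global pooling for dimension reduction

    def forward(self, x):                      # x: (B, in_ch, D, H, W)
        f = self.conv(x)                       # spatial feature map
        return f, self.pool(f).flatten(1)      # (B, feat_dim) one-dimensional vector
```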
S220: and acquiring a second feature map of the medical image through the map network according to the first feature map.
In one embodiment, the arteries and veins are subjected to midline extraction to obtain corresponding midlines of the arteries and veins; performing point sampling on the central line to obtain a plurality of key points; and acquiring an adjacency matrix corresponding to the artery and the vein according to the connection relation among the plurality of key points.
In an embodiment, the first feature map and the adjacency matrix are input into a graph network, and a second feature map of the medical image can be obtained. The graph network is a deep learning method based on graph-domain analysis, and can be understood as a set of functions that perform relational reasoning in a topological space organized according to graph structures. The graph network can adopt a network structure such as a graph convolutional network or a graph attention network, and contains no operators, such as pooling, that change the graph structure; that is, the topology of the graph stays consistent after every convolution.
In one embodiment, the second feature map may be a feature map output by a predetermined layer of the graph network. The predetermined layer may be a convolutional layer before the full connection layer and the classification layer, which is not specifically limited in this embodiment.
A simulated-burning (grassfire) method or a maximal inscribed sphere method can be adopted to extract the central lines of the arteries and veins on the medical image, or of the arteries and veins in the segmentation result of the background and the blood vessel, so that the central lines of the veins and arteries reflect the topological structure of the original blood vessels.
In one embodiment, to better distinguish the middle pixel point, the branch pixel point, and the top pixel point of the blood vessel, the line width of the central line of the vein and the artery is a single pixel point. The middle pixel point is a pixel point between two end points on one blood vessel, and the number of the pixel points adjacent to the middle pixel point is only two; the branched pixel points are pixel points at the intersection of at least two blood vessels, and the number of the pixel points adjacent to the branched pixel points is more than two; the top pixel point is a terminal pixel point on a blood vessel, and the number of the pixel points adjacent to the top pixel point is only one.
Therefore, a depth-first search method can be adopted to traverse the number of adjacent pixels of each pixel in the plurality of pixels on the central line, and the intermediate pixels, the branched pixels and the top pixels in the plurality of pixels can be distinguished.
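As an illustration, the following sketch labels middle, bifurcation, and top pixel points on a single-pixel-wide central line by counting neighbors. Counting via a convolution over the 26-neighborhood is an implementation choice of this sketch; the embodiment mentions depth-first search, which yields the same point labels.

```python
import numpy as np
from scipy.ndimage import convolve

def classify_skeleton_points(skeleton):
    """skeleton: 3-D 0/1 array whose central line is one pixel point wide."""
    kernel = np.ones((3, 3, 3), dtype=int)
    kernel[1, 1, 1] = 0                        # exclude the point itself (26-neighborhood)
    n_neighbors = convolve(skeleton.astype(int), kernel, mode="constant")
    on = skeleton.astype(bool)
    tips = on & (n_neighbors == 1)             # top pixel points (one neighbor)
    middles = on & (n_neighbors == 2)          # middle pixel points (two neighbors)
    branches = on & (n_neighbors > 2)          # bifurcation pixel points (more than two)
    return tips, middles, branches
```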
In an embodiment, in order to enhance the robustness of the graph network, the pixel points having two and/or one adjacent pixel point on the central line may be sampled according to a preset distance to obtain a plurality of key points. The coordinates of the plurality of key points are extracted; as shown in Fig. 3, points A to N on the blood vessel are the plurality of key points.
That is, the types of the plurality of key points are middle pixel points or top pixel points, not bifurcation pixel points. Because bifurcation pixel points may contain vein false positives, determining the plurality of key points as middle or top pixel points avoids the situation, during training, where the graph network cannot judge well whether a bifurcation pixel point belongs to an artery or a vein, and thus enhances the robustness of the trained graph network.
However, it should be noted that the embodiment of the present application does not specifically limit the value of the preset distance. The distance between any two key points is not limited, and the pixel points with two and/or one adjacent pixel point on the central line can be sampled equidistantly according to the preset distance, so that the distances between the key points are equal. The specific number of the plurality of key points is not limited in the embodiment of the application, and the number of the key points can be different along with the difference of values of the preset distance.
In one embodiment, a directed graph or an undirected graph, i.e., an adjacency matrix corresponding to the arteries and veins, may be constructed according to the connection relationships of the plurality of key points on the central line. If the number of key points is N (i.e., the graph has N nodes), the size of the adjacency matrix is N × N. The adjacency matrix is a matrix representing the adjacency relationships between the key points.
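As an illustration, an undirected adjacency matrix can be assembled from the key-point connections roughly as follows. The `edges` input (index pairs of key points connected along the central line), the self-loops, and the row normalization are assumptions of this sketch.

```python
import numpy as np

def build_adjacency(num_points, edges, normalize=True):
    A = np.eye(num_points)           # N nodes, self-loops on the diagonal
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0      # undirected connection between key points
    if normalize:
        d = A.sum(axis=1)
        A = A / d[:, None]           # simple row normalization
    return A                         # (N, N) adjacency matrix
```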
The graph network extracts features based on the first feature map and the adjacency matrix, and uses these features to perform node classification, graph classification, edge prediction (link prediction), and the like. That is, the graph network may output classification results (two-class results) for the N nodes, i.e., determine whether each of the N key points is an artery or a vein. It should be noted that the binary classification result of the N key points may also be used as an output of the cascaded neural network.
In one embodiment, the medical image is obtained by cutting a patch of a preset size from the original medical image with any key point of the plurality of key points as a center, that is, cutting the patch from the original medical image with each key point as a center, and obtaining N patches (patches). However, the embodiment of the present application does not specifically limit the value of the preset dimension, and those skilled in the art may make different selections according to actual requirements.
The medical image in the present application may refer to one patch; one patch corresponds to one first feature map, and according to the first feature maps corresponding to all the patches, a second feature map of the medical image can be obtained through the graph network. That is, each patch is separately input into the first neural network of the cascaded neural network, the first feature maps corresponding to all the patches are then input into the graph network of the cascaded neural network, and finally the second feature map corresponding to each patch is separately input into the second neural network, which effectively reduces GPU memory usage.
Since the segmentation result of the background and the blood vessel can be input into the first neural network as described in step S210, when the medical image is a block with a preset size, the segmentation result of the background and the blood vessel can be obtained by cutting the block with the preset size from the original segmentation result of the background and the blood vessel with any one of the plurality of key points as the center.
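As an illustration, cutting a patch of a preset size around each key point might look like the following sketch; the same function applies equally to the segmentation result of the background and the blood vessel. The clamping of patches at the volume border is an assumption of this sketch, since the embodiments do not specify boundary handling.

```python
import numpy as np

def crop_patches(volume, keypoints, size=32):
    """Cut a (size, size, size) block centered on each key point."""
    half, patches = size // 2, []
    for z, y, x in keypoints:
        z0 = int(np.clip(z - half, 0, volume.shape[0] - size))
        y0 = int(np.clip(y - half, 0, volume.shape[1] - size))
        x0 = int(np.clip(x - half, 0, volume.shape[2] - size))
        patches.append(volume[z0:z0+size, y0:y0+size, x0:x0+size])
    return np.stack(patches)         # (N, size, size, size), one patch per key point
```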
S230: and according to the second characteristic diagram, obtaining the artery and vein segmentation result of the medical image through a second neural network.
In an embodiment, the type of the second neural network may be the same as or different from the first neural network, which is not limited in this application. The second neural network may be understood as a decoder for decoding the segmentation result from the features. A skip connection (UNet-like structure) can be established between the first neural network (encoder) and the second neural network (decoder).
In an embodiment, the second feature map is directly input into the second neural network, and the segmentation result of the artery and vein of the medical image can be obtained.
In another embodiment, the second feature map is up-sampled and input into a second neural network, and the segmentation result of the artery and vein of the medical image can be obtained.
In another embodiment, the segmentation result of the artery and vein of the medical image can be obtained through the second neural network according to the first feature map and the second feature map, that is, the segmentation result can be obtained by directly inputting the first feature map and the second feature map into the second neural network, or the segmentation result can be obtained by inputting the first feature map and the second feature map into the second neural network after up-sampling.
Compared with the method for acquiring the segmentation result based on the second feature map only, the method for acquiring the segmentation result based on the first feature map and the second feature map has the advantage that the obtained segmentation result is more accurate. The segmentation result is a binary result, i.e., the value of the artery in the segmentation result is 1 and the value of the vein in the segmentation result is 0.
When the medical image is obtained by cutting blocks with preset sizes from the original medical image, the segmentation results of the artery and the vein are combined by the segmentation results corresponding to the blocks, that is, the segmentation results are the segmentation results of the artery and the vein of the full map.
By adopting the graph network, global information of the medical image can be obtained; meanwhile, by adopting the second neural network, fine local segmentation of the medical image can be performed on the basis of that global information, so that a more refined segmentation result of the arteries and veins is obtained.
Through the graph network, a global topological relation is constructed; the local deep learning network performs the segmentation, and the overall prediction result is corrected. Segmentation false positives can thus be effectively removed, with strong robustness and high segmentation accuracy.
In another embodiment of the present application, when the artery is a coronary artery, the method further comprises: acquiring a connected domain of a vein in a segmentation result of a background and a blood vessel according to a connected domain algorithm; and removing the veins of which the size of the connected domain exceeds a preset threshold value from the segmentation results of the arteries and the veins to obtain the segmentation result of the coronary arteries.
Connected domains of the veins are extracted from the segmentation result of the background and the blood vessel according to a connected-domain algorithm, obtaining the connected domain corresponding to each vein. Connected-domain extraction algorithms fall into two categories. One is the local-neighborhood algorithm, which works from local to global: it checks each connected component one by one, determines a "starting point", and then expands the label into the surrounding neighborhood. The other works from global to local: it first determines the different connected components and then fills each one with a label using a region-filling method. The final purpose of both is the same: to extract, from a binary dot-matrix image composed of white and black pixels, the sets of mutually adjacent target "1"-valued pixels, and to fill the different connected domains in the image with distinct numeric labels.
For example, the segmentation result of the background and the blood vessel may include a plurality of veins, and then after the connected component is extracted, the first connected component is labeled as 1, the second connected component is labeled as 2, the third connected component is labeled as 3, and so on, so as to obtain the connected components corresponding to the plurality of veins.
The connected domains corresponding to the plurality of veins differ in size. In the segmentation result of the background and the blood vessel, a vein whose corresponding connected domain is smaller than the preset threshold is most likely not a true vein but a vein false positive, i.e., a coronary segment that was mistakenly segmented as a vein.
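As an illustration of this connected-domain post-processing, a sketch follows. The use of scipy.ndimage.label and the exact mask inputs are assumptions of this sketch; it implements the stated rule of removing veins whose connected domain exceeds the preset threshold.

```python
import numpy as np
from scipy.ndimage import label

def remove_large_veins(artery_vein, vein_mask, threshold):
    """Keep the coronary arteries by removing large vein connected domains."""
    labeled, n = label(vein_mask)            # mark connected domains 1..n
    result = artery_vein.copy()
    for k in range(1, n + 1):
        component = labeled == k
        if component.sum() > threshold:      # large domain: a true vein
            result[component] = 0            # remove it from the artery-vein result
    return result                            # segmentation result of the coronary artery
```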
Fig. 4 is a schematic diagram illustrating a process of obtaining a segmentation result of an artery and a vein according to an embodiment of the present application.
Taking each of the N key points as a center, patches A to N of a preset size are cut from the original medical image. The patches A to N are input into the encoder (the first neural network), and after global pooling, N first feature maps 41 corresponding to the patches A to N are obtained. The N first feature maps 41 are input into a plurality of graph convolution layers to obtain a second feature map 42, which then passes sequentially through a fully connected layer and an activation layer to output the classification results 43 of the N key points (i.e., whether each key point belongs to an artery or a vein). At the same time, the second feature map 42 is input into the decoder (the second neural network) to obtain the segmentation result 44 of the veins and arteries.
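As an illustration of the flow in Fig. 4, a sketch of the cascaded forward pass follows. It reuses the Encoder and GraphConv sketches above; the decoder interface and the layer sizes are assumptions, as the embodiments do not fix them.

```python
import torch.nn as nn

class CascadeNet(nn.Module):
    def __init__(self, encoder, gcn_layers, decoder, feat_dim=64):
        super().__init__()
        self.encoder = encoder
        self.gcn_layers = nn.ModuleList(gcn_layers)   # a few GraphConv layers
        self.decoder = decoder
        self.classify = nn.Linear(feat_dim, 2)        # artery / vein per key point

    def forward(self, patches, adj):
        feats, vecs = self.encoder(patches)   # N patches -> N pooled 1-D vectors (41)
        h = vecs
        for gcn in self.gcn_layers:
            h = gcn(h, adj)                   # second feature map (42), shape (N, feat_dim)
        node_logits = self.classify(h)        # classification results (43)
        seg = self.decoder(feats, h)          # segmentation result (44); assumed interface
        return node_logits, seg
```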
Fig. 5 is a schematic flowchart illustrating a method for training a network model according to an embodiment of the present application. The method described in fig. 5 is performed by a computing device (e.g., a server), but the embodiments of the present application are not limited thereto. The server may be one server, or may be composed of a plurality of servers, or may be a virtualization platform, or a cloud computing service center, which is not limited in this embodiment of the present application. The trained network model can be used for carrying out vein and artery segmentation on any medical image. As shown in fig. 5, the training method is as follows.
S510: a sample medical image is determined, wherein the sample medical image includes the labeled veins and arteries.
The sample medical image mentioned in this embodiment is the same type of image as the medical image in step S210 above. The sample medical image is manually annotated to obtain labels for the arteries and veins.
S520: training a cascade neural network based on the sample medical image to generate a network model, wherein the cascade neural network comprises a first neural network, a graph network and a second neural network which are connected in series, the first neural network is used for outputting a first sample feature map of the sample medical image, the graph network is used for outputting a second sample feature map of the sample medical image based on the first sample feature map, and the second neural network is used for outputting the segmentation result of veins and arteries of the sample medical image based on the second sample feature map.
The cascaded neural network comprises a first neural network, a graph network and a second neural network in series, as shown in fig. 4, the output (first sample feature map) of the encoder (first neural network) is used as the input of the graph network, and the output (second sample feature map) of the graph network is used as the input of the decoder (second neural network).
By adopting the graph network, global information of the sample medical image can be obtained; meanwhile, by adopting the second neural network, fine local segmentation of the sample medical image can be performed on the basis of that global information, so that a more refined segmentation result of the arteries and veins is obtained.
As shown in fig. 6, in the training method of a network model provided in the embodiment of the present application, step S520 includes steps S610 to S640.
S610: a first sample feature map is obtained from the sample medical image and the first neural network.
In one embodiment, a first neural network is used for feature extraction, and a first sample feature map can be obtained by inputting the sample medical image into the first neural network.
Of course, the sample medical image and the segmentation result of the background and the blood vessel may also be input into the first neural network as described in step S210 to obtain the first sample feature map.
For details of the implementation in this embodiment, please refer to step S210, which is not described herein again.
S620: and acquiring a first loss function value of the network model according to the first sample characteristic diagram and the graph network.
In one embodiment, the graph network employs a first loss function, it being understood that the first loss function may be any type of loss function. Optionally, the first loss function may be a cross entropy loss function, and a user may select different loss functions according to different application scenarios, and the embodiment of the present application does not limit the type of the first loss function.
In an embodiment, central-line extraction is performed on the arteries and veins to obtain the central lines corresponding to the arteries and veins; point sampling is performed on the central lines to obtain a plurality of key points; an adjacency matrix corresponding to the arteries and veins is obtained according to the connection relationships among the plurality of key points; the classification result of the arteries and veins is obtained through the graph network according to the first sample feature map and the adjacency matrix; and the first loss function value of the network model is determined according to the classification result and the annotated veins and arteries.
After the first sample feature map and the adjacency matrix are input into the graph network, the classification result of the artery and the vein can be output, and the classification result is the binary classification result of the artery and the vein. Using the first loss function, a similarity loss between the classification result and the labeled veins and arteries (i.e., the target result) is calculated, and a first loss function value of the network model can be obtained. The smaller the first loss function value is, the closer the predicted classification result is to the target result, and the higher the accuracy of prediction is. Conversely, the greater the first loss function value, the lower the accuracy of the representation of the prediction.
For details of the implementation in this embodiment, please refer to step S220, which is not described herein again.
S630: and acquiring a second loss function value of the network model according to the second sample characteristic diagram and the second neural network.
In one embodiment, the second neural network employs a second loss function, it being understood that the second loss function may be any type of loss function. Optionally, the second loss function may be a cross entropy loss function, and a user may select different loss functions according to different application scenarios, and the embodiment of the present application does not limit the type of the second loss function.
In one embodiment, the second sample feature map is input into a second neural network, and the results of the segmentation of the arteries and veins may be obtained.
Of course, the first sample feature map and the second sample feature map may also be input into the second neural network as described in step S210 to obtain the segmentation results of the arteries and veins.
Using the second loss function, the similarity loss between the segmentation result and the labeled veins and arteries (i.e., the target result) is calculated, and a second loss function value of the network model can be obtained. The smaller the second loss function value is, the closer the predicted segmentation result is to the target result, and the higher the accuracy of prediction is. Conversely, the greater the second loss function value, the lower the accuracy of the representation of the prediction.
For details of the implementation in this embodiment, please refer to step S230, which is not described herein again.
S640: parameters of the network model are updated based on the first loss function value and the second loss function value.
In an embodiment, the overall loss function value L of the network model is a weighted sum of the first loss function value Lgraph and the second loss function value Lsegmentation, i.e., L = Lgraph + α × Lsegmentation, where α is a hyper-parameter.
In an embodiment, gradient back-propagation may be performed on the overall loss function value to update the parameters of the network model, such as weights and bias values, which is not limited in this application.
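As an illustration of S640 together with the overall loss above, a sketch of one training step follows. Cross-entropy for both losses is one option the text names; the optimizer and the model interface follow the earlier sketches and are assumptions.

```python
import torch.nn.functional as F

def training_step(model, optimizer, patches, adj, node_labels, seg_labels, alpha=1.0):
    node_logits, seg_logits = model(patches, adj)
    l_graph = F.cross_entropy(node_logits, node_labels)   # first loss function value
    l_seg = F.cross_entropy(seg_logits, seg_labels)       # second loss function value
    loss = l_graph + alpha * l_seg                        # L = Lgraph + alpha * Lsegmentation
    optimizer.zero_grad()
    loss.backward()                                       # gradient back-propagation
    optimizer.step()                                      # update weights, bias values, etc.
    return float(loss)
```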
Exemplary devices
The embodiment of the device can be used for executing the embodiment of the method. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Fig. 7 is a block diagram illustrating an image processing apparatus according to an embodiment of the present application. As shown in fig. 7, the apparatus 700 includes:
a first obtaining module 710 configured to obtain a first feature map of the medical image through a first neural network according to the medical image;
a second obtaining module 720, configured to obtain a second feature map of the medical image through the graph network according to the first feature map;
a third obtaining module 730 configured to obtain a segmentation result of the artery and vein of the medical image through the second neural network according to the second feature map.
By adopting the graph network, global information of the medical image can be obtained; meanwhile, by adopting the second neural network, fine local segmentation of the medical image can be performed on the basis of that global information, so that a more refined segmentation result of the arteries and veins is obtained.
In another embodiment of the present application, as shown in fig. 8, the apparatus 700 shown in fig. 7 further comprises:
a midline extraction module 740 configured to perform midline extraction on the artery and vein to obtain a midline corresponding to the artery and vein;
a sampling module 750 configured to perform point sampling on the central line to obtain a plurality of key points;
a fourth obtaining module 760 configured to obtain an adjacency matrix corresponding to the artery and the vein according to a connection relationship between the plurality of key points.
In another embodiment of the present application, the second obtaining module 720 is further configured to: acquire the second feature map through the graph network according to the first feature map and the adjacency matrix.
In another embodiment of the present application, the line width of the central line is a single pixel point, and the apparatus 700 further includes: and the traversal module is configured to traverse the number of adjacent pixels of each pixel in the plurality of pixels on the central line.
In another embodiment of the present application, the sampling module 750 is further configured to: and sampling pixel points with two and/or one adjacent pixel point on the central line according to a preset distance to obtain a plurality of key points.
In another embodiment of the present application, the medical image is obtained by cutting a block of a preset size from the original medical image centering on any of the plurality of key points.
In another embodiment of the present application, the first obtaining module 710 is further configured to: and acquiring a first feature map through a first neural network according to the medical image and the segmentation result of the blood vessel and the background, wherein the blood vessel comprises a vein and an artery.
In another embodiment of the present application, when the artery is a coronary artery, the apparatus 700 further comprises: the connected domain module is configured to acquire a connected domain of a vein in a segmentation result of the blood vessel and the background according to a connected domain algorithm; and the removing module is configured to remove the vein of which the size of the connected domain exceeds a preset threshold value in the segmentation result of the artery and the vein so as to obtain the segmentation result of the coronary artery.
In another embodiment of the present application, the second feature map is a feature map output by a predetermined layer of the graph network.
Fig. 9 is a block diagram illustrating a training apparatus for a network model according to an embodiment of the present application. As shown in fig. 9, the training apparatus 900 includes:
a determination module 910 configured to determine a sample medical image, wherein the sample medical image comprises annotated veins and arteries;
a training module 920 configured to train a cascaded neural network based on the sample medical image to generate a network model, wherein the cascaded neural network includes a first neural network, a graph network and a second neural network in series, the first neural network is used for outputting a first sample feature map of the sample medical image, the graph network is used for outputting a second sample feature map of the sample medical image based on the first sample feature map, and the second neural network is used for outputting a segmentation result of veins and arteries of the sample medical image based on the second sample feature map.
By adopting the graph network, global information of the sample medical image can be obtained; meanwhile, by adopting the second neural network, fine local segmentation of the sample medical image can be performed on the basis of that global information, so that a more refined segmentation result of the arteries and veins is obtained.
In another embodiment of the present application, the training module 920 is further configured to: obtain the first sample feature map according to the sample medical image and the first neural network; obtain the first loss function value of the network model according to the first sample feature map and the graph network; obtain the second loss function value of the network model according to the second sample feature map and the second neural network; and update the parameters of the network model based on the first loss function value and the second loss function value.
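A hedged sketch of one such update (the cross-entropy losses and equal weights are assumptions, not taken from this application; the graph network is assumed to return both its node classifications and the second sample feature map):

import torch.nn.functional as F

def train_step(model, image, adj, targets, optimizer, w1=1.0, w2=1.0):
    f1 = model.first_nn(image)
    node_logits, f2 = model.graph_net(f1, adj)                        # assumed dual output
    seg_logits = model.second_nn(f2)
    loss1 = F.cross_entropy(node_logits, targets["keypoint_labels"])  # first loss value
    loss2 = F.cross_entropy(seg_logits, targets["mask"])              # second loss value
    total = w1 * loss1 + w2 * loss2
    optimizer.zero_grad()
    total.backward()      # one backward pass updates all three cascaded networks
    optimizer.step()
    return loss1.item(), loss2.item()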
In another embodiment of the present application, as shown in Fig. 10, the training apparatus 900 shown in Fig. 9 further comprises:
a centerline extraction module 930 configured to perform centerline extraction on the artery and the vein to obtain the centerlines corresponding to the artery and the vein;
a sampling module 940 configured to perform point sampling on the centerlines to obtain a plurality of key points;
an obtaining module 950 configured to obtain an adjacency matrix corresponding to the artery and the vein according to the connection relationships among the plurality of key points.
In another embodiment of the present application, when obtaining the first loss function value of the network model according to the first sample feature map and the graph network, the training module 920 is further configured to: obtain the classification results of the arteries and veins through the graph network according to the first sample feature map and the adjacency matrix; and determine the first loss function value of the network model according to the classification results and the annotated veins and arteries.
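The first loss could be computed roughly as follows (the encoding 0 = vein, 1 = artery is an assumption for illustration):

import torch
import torch.nn.functional as F

def first_loss(node_logits, key_points, annotation):
    # read the annotated class at each key point location
    targets = torch.as_tensor([int(annotation[tuple(p)]) for p in key_points])
    return F.cross_entropy(node_logits, targets)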
Exemplary electronic device
Next, an electronic device according to an embodiment of the present application is described with reference to Fig. 11, which illustrates a block diagram of the electronic device.
As shown in Fig. 11, the electronic device 1100 includes one or more processors 1110 and a memory 1120.
The processor 1110 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 1100 to perform desired functions.
The memory 1120 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM) and/or cache memory. The non-volatile memory may include, for example, Read-Only Memory (ROM), a hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 1110 to implement the image processing methods, the network model training methods, and/or other desired functions of the various embodiments of the present application described above. Various contents, such as segmentation results, may also be stored on the computer-readable storage medium.
In one example, the electronic device 1100 may further include an input device 1130 and an output device 1140, interconnected by a bus system and/or another form of connection mechanism (not shown).
For example, the input device 1130 may be a microphone or a microphone array for capturing an input signal of a sound source. When the electronic device is a stand-alone device, the input device 1130 may be a communication network connector.
The input devices 1130 may also include, for example, a keyboard, a mouse, and the like.
The output device 1140 may output various information, including the segmentation result, to the outside. The output device 1140 may include, for example, a display, a speaker, a printer, a communication network, and remote output devices connected thereto.
Of course, for simplicity, only some of the components of the electronic device 1100 relevant to the present application are shown in Fig. 11, and components such as buses and input/output interfaces are omitted. In addition, the electronic device 1100 may include any other suitable components depending on the particular application.
Exemplary computer program product and computer-readable storage medium
In addition to the above-described methods and apparatus, embodiments of the present application may also take the form of a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform the steps of the image processing method and the network model training method according to the various embodiments of the present application described in the "Exemplary methods" section of this specification.
The computer program product may be written with program code for performing the operations of embodiments of the present application in any combination of one or more programming languages, including object-oriented programming languages such as Java or C++, as well as conventional procedural programming languages such as the "C" programming language or similar languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present application may also take the form of a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform the steps of the image processing method and the network model training method according to the various embodiments of the present application described in the "Exemplary methods" section above.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit embodiments of the application to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (14)

1. An image processing method, comprising:
acquiring a first feature map of a medical image through a first neural network according to the medical image;
acquiring a second feature map of the medical image through a graph network according to the first feature map; and
acquiring a segmentation result of an artery and a vein of the medical image through a second neural network according to the second feature map.
2. The method of claim 1, further comprising:
performing centerline extraction on the artery and the vein to obtain centerlines corresponding to the artery and the vein;
performing point sampling on the centerlines to obtain a plurality of key points; and
acquiring an adjacency matrix corresponding to the artery and the vein according to connection relationships among the plurality of key points,
wherein the acquiring a second feature map of the medical image through a graph network according to the first feature map comprises:
acquiring the second feature map through the graph network according to the first feature map and the adjacency matrix.
3. The method of claim 2, wherein the centerline is a single pixel wide, and wherein the method further comprises:
traversing the pixel points on the centerline to count the number of adjacent pixel points of each pixel point,
wherein the performing point sampling on the centerlines to obtain a plurality of key points comprises:
sampling, at a preset interval along the centerline, the pixel points having one or two adjacent pixel points, to obtain the plurality of key points.
4. The method according to claim 2, wherein the medical image is obtained by cropping a block of a preset size from an original medical image, centered on any one of the plurality of key points.
5. The method according to any one of claims 1 to 4, wherein the acquiring a first feature map of the medical image through a first neural network according to the medical image comprises:
acquiring the first feature map through the first neural network according to the medical image and a segmentation result of blood vessels and background, wherein the blood vessels comprise the vein and the artery.
6. The method of any one of claims 1 to 4, wherein when the artery is a coronary artery, the method further comprises:
acquiring connected domains of the vein in the segmentation result of the blood vessels and the background according to a connected-domain algorithm; and
removing, from the segmentation result of the artery and the vein, the veins whose connected domains exceed a preset size threshold, to obtain a segmentation result of the coronary artery.
7. The method according to any one of claims 1 to 4, wherein the second feature map is a feature map output by a predetermined layer of the graph network.
8. A method for training a network model, comprising:
determining a sample medical image, wherein the sample medical image comprises annotated veins and arteries;
training a cascaded neural network based on the sample medical image to generate the network model, wherein the cascaded neural network comprises a first neural network, a graph network, and a second neural network connected in series; the first neural network is configured to output a first sample feature map of the sample medical image, the graph network is configured to output a second sample feature map of the sample medical image based on the first sample feature map, and the second neural network is configured to output a segmentation result of veins and arteries of the sample medical image based on the second sample feature map.
9. The training method of claim 8, wherein the training a cascaded neural network based on the sample medical image to generate the network model comprises:
obtaining the first sample feature map from the sample medical image and the first neural network;
obtaining a first loss function value of the network model according to the first sample feature map and the graph network;
obtaining a second loss function value of the network model according to the second sample feature map and the second neural network;
updating parameters of the network model based on the first loss function value and the second loss function value.
10. The training method of claim 9, further comprising:
performing centerline extraction on the artery and the vein to obtain centerlines corresponding to the artery and the vein;
performing point sampling on the centerlines to obtain a plurality of key points; and
acquiring an adjacency matrix corresponding to the artery and the vein according to connection relationships among the plurality of key points,
wherein the obtaining a first loss function value of the network model according to the first sample feature map and the graph network comprises:
obtaining classification results of the artery and the vein through the graph network according to the first sample feature map and the adjacency matrix; and
determining the first loss function value of the network model according to the classification results and the annotated veins and arteries.
11. An image processing apparatus, comprising:
the first acquisition module is configured to acquire a first feature map of a medical image through a first neural network according to the medical image;
a second acquisition module configured to acquire a second feature map of the medical image through a graph network according to the first feature map; and
a third acquisition module configured to acquire a segmentation result of an artery and a vein of the medical image through a second neural network according to the second feature map.
12. An apparatus for training a network model, comprising:
a determination module configured to determine a sample medical image, wherein the sample medical image comprises annotated veins and arteries;
a training module configured to train a cascaded neural network based on the sample medical image to generate the network model, wherein the cascaded neural network comprises a first neural network, a graph network, and a second neural network connected in series; the first neural network is configured to output a first sample feature map of the sample medical image, the graph network is configured to output a second sample feature map of the sample medical image based on the first sample feature map, and the second neural network is configured to output a segmentation result of veins and arteries of the sample medical image based on the second sample feature map.
13. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to perform the method of any one of claims 1 to 10.
14. A computer-readable storage medium storing a computer program for executing the method of any one of claims 1 to 10.
CN202110566138.8A 2021-05-24 2021-05-24 Image processing method and device, and network model training method and device Pending CN113256670A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110566138.8A CN113256670A (en) 2021-05-24 2021-05-24 Image processing method and device, and network model training method and device

Publications (1)

Publication Number Publication Date
CN113256670A true CN113256670A (en) 2021-08-13

Family

ID=77183970

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110566138.8A Pending CN113256670A (en) 2021-05-24 2021-05-24 Image processing method and device, and network model training method and device

Country Status (1)

Country Link
CN (1) CN113256670A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210049467A1 (en) * 2018-04-12 2021-02-18 Deepmind Technologies Limited Graph neural networks representing physical systems
CN109740407A (en) * 2018-08-27 2019-05-10 广州麦仑信息科技有限公司 A kind of vena metacarpea feature extracting method based on figure network
US20200074638A1 (en) * 2018-09-05 2020-03-05 Htc Corporation Image segmentation method, apparatus and non-transitory computer readable medium of the same
WO2021017006A1 (en) * 2019-08-01 2021-02-04 京东方科技集团股份有限公司 Image processing method and apparatus, neural network and training method, and storage medium
CN110555399A (en) * 2019-08-23 2019-12-10 北京智脉识别科技有限公司 Finger vein identification method and device, computer equipment and readable storage medium
CN111062963A (en) * 2019-12-16 2020-04-24 上海联影医疗科技有限公司 Blood vessel extraction method, system, device and storage medium
CN111275749A (en) * 2020-01-21 2020-06-12 沈阳先进医疗设备技术孵化中心有限公司 Image registration and neural network training method and device
CN111899244A (en) * 2020-07-30 2020-11-06 北京推想科技有限公司 Image segmentation method, network model training method, device and electronic equipment
CN111932554A (en) * 2020-07-31 2020-11-13 青岛海信医疗设备股份有限公司 Pulmonary blood vessel segmentation method, device and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
SEUNG YEON SHIN et al.: "Deep vessel segmentation by learning graphical connectivity", Medical Image Analysis, pages 1-2 *
ZHIWEI ZHAI et al.: "Linking Convolutional Neural Networks with Graph Convolutional Networks: Application in Pulmonary Artery-Vein Separation", Graph Learning in Medical Imaging, pages 1-2 *
QIU HONGYAN; ZHANG HAIGANG; YANG JINFENG: "Research on Finger Vein Recognition Method Based on Graph Convolutional Networks", Signal Processing, no. 03 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113837192A (en) * 2021-09-22 2021-12-24 推想医疗科技股份有限公司 Image segmentation method and device and neural network training method and device
CN113837192B (en) * 2021-09-22 2024-04-19 推想医疗科技股份有限公司 Image segmentation method and device, and neural network training method and device
CN114972859A (en) * 2022-05-19 2022-08-30 推想医疗科技股份有限公司 Pixel classification method, model training method, device, equipment and medium
CN116226822A (en) * 2023-05-05 2023-06-06 深圳市魔样科技有限公司 Intelligent finger ring identity data acquisition method and system
CN116226822B (en) * 2023-05-05 2023-07-14 深圳市魔样科技有限公司 Intelligent finger ring identity data acquisition method and system

Similar Documents

Publication Publication Date Title
EP3553742B1 (en) Method and device for identifying pathological picture
CN111899245B (en) Image segmentation method, image segmentation device, model training method, model training device, electronic equipment and storage medium
CN111161275B (en) Method and device for segmenting target object in medical image and electronic equipment
CN111325739B (en) Method and device for detecting lung focus and training method of image detection model
CN110706246B (en) Blood vessel image segmentation method and device, electronic equipment and storage medium
US20200320685A1 (en) Automated classification and taxonomy of 3d teeth data using deep learning methods
CN109003267B (en) Computer-implemented method and system for automatically detecting target object from 3D image
CN111862044B (en) Ultrasonic image processing method, ultrasonic image processing device, computer equipment and storage medium
CN113256670A (en) Image processing method and device, and network model training method and device
CN113066090B (en) Training method and device, application method and device of blood vessel segmentation model
US11972571B2 (en) Method for image segmentation, method for training image segmentation model
US20220383661A1 (en) Method and device for retinal image recognition, electronic equipment, and storage medium
US20230177698A1 (en) Method for image segmentation, and electronic device
Liu et al. A fully automatic segmentation algorithm for CT lung images based on random forest
CN112991346B (en) Training method and training system for learning network for medical image analysis
CN110321943A (en) CT image classification method, system, device based on semi-supervised deep learning
CN111524109A (en) Head medical image scoring method and device, electronic equipment and storage medium
CN114972211B (en) Training method, segmentation method, device, equipment and medium for image segmentation model
CN112767422B (en) Training method and device of image segmentation model, segmentation method and device, and equipment
CN113724185B (en) Model processing method, device and storage medium for image classification
CN113850796A (en) Lung disease identification method and device based on CT data, medium and electronic equipment
CN112750110A (en) Evaluation system for evaluating lung lesion based on neural network and related products
CN113240699A (en) Image processing method and device, model training method and device, and electronic equipment
CN117522892A (en) Lung adenocarcinoma CT image focus segmentation method based on space channel attention enhancement
CN116883341A (en) Liver tumor CT image automatic segmentation method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination