CN112242193B - Automatic blood vessel puncture method based on deep learning - Google Patents

Automatic blood vessel puncture method based on deep learning

Info

Publication number
CN112242193B
CN112242193B (application CN202011281045.2A)
Authority
CN
China
Prior art keywords
blood vessel
picture
path
neural network
network model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011281045.2A
Other languages
Chinese (zh)
Other versions
CN112242193A (en)
Inventor
齐鹏
张浩雨
陈禹
葛坦谛
程黎明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tongji University
Original Assignee
Tongji University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tongji University filed Critical Tongji University
Priority to CN202011281045.2A priority Critical patent/CN112242193B/en
Publication of CN112242193A publication Critical patent/CN112242193A/en
Application granted granted Critical
Publication of CN112242193B publication Critical patent/CN112242193B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 - ICT specially adapted for the handling or processing of medical images
    • G16H30/20 - ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Epidemiology (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an automatic blood vessel puncture method based on deep learning, which comprises: obtaining a blood vessel picture of the site to be punctured, performing graying processing, loading the picture into a convolutional neural network model to obtain the background, the puncturable region within the blood vessel and the non-puncturable region within the blood vessel in the picture, and puncturing the blood vessel according to the puncturable region within the blood vessel. The convolutional neural network model comprises a contraction path, an expansion path and an activation function connected in sequence: the contraction path performs down-sampling; the expansion path performs up-sampling, splices its feature maps with the pictures of corresponding size from the contraction path, and outputs a picture with a depth of 3; the activation function performs a three-class classification along the third dimension of the image output by the expansion path and selects the maximum value among the three classification results as the final output. Compared with the prior art, the method identifies the puncturable region within the blood vessel through three-class image segmentation, which facilitates automatic blood vessel puncture and thus avoids contact between nurses and patients during puncture.

Description

Automatic blood vessel puncture method based on deep learning
Technical Field
The invention relates to the field of blood vessel puncture, in particular to an automatic blood vessel puncture method based on deep learning.
Background
Against the background of an increasingly serious global epidemic, avoiding person-to-person contact limits the spread of the virus to the greatest extent and helps end the epidemic as early as possible. In daily medical care, contact between a nurse and a patient is unavoidable during venipuncture; a robot capable of performing puncture automatically can avoid this situation, and finding blood vessels is an indispensable step in realizing such an automatic puncture robot.
Existing blood vessel image segmentation techniques generally adopt the SIFT algorithm or boundary- and region-based segmentation. However, these techniques, which are based on conventional image processing, suffer from the following problems: 1. The models rely on texture features around feature points and have insufficient expressive power; in clinical practice, ultrasonic imaging of blood vessels is disturbed by heavy noise, and it is difficult to form effective textures for the model to examine. 2. Without the support of deep learning, these methods cannot benefit from the power of neural networks in terms of precision, accuracy and generalization. 3. Conventional algorithms are typically only able to distinguish the vessel portion from the background portion.
Chinese patent CN201710145321.4 provides a PCNN-based retinal vessel image segmentation method and system, in which seed region growing and thresholding are used to pre-process the image before the retinal vessels are segmented. Thresholding is a relatively crude operation: the image fed to the PCNN loses details that a neural network could otherwise capture and exploit. Moreover, such a multi-stage pipeline lacks the easy trainability of an end-to-end model and is difficult to transfer quickly to other scenarios.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide an automatic blood vessel puncture method based on deep learning, which is favorable for realizing automatic blood vessel puncture.
The purpose of the invention can be realized by the following technical scheme:
A method for automatic blood vessel puncture based on deep learning comprises: obtaining a blood vessel picture of the site to be punctured, performing graying processing on the blood vessel picture, loading the grayed blood vessel picture into a pre-established and trained convolutional neural network model to obtain the background, the puncturable region within the blood vessel and the non-puncturable region within the blood vessel in the blood vessel picture, and performing blood vessel puncture according to the puncturable region within the blood vessel;
the convolutional neural network model comprises a contraction path, an expansion path and an activation function which are connected in sequence,
the contraction path is used to downsample the input picture;
the expansion path is used for up-sampling the picture output by the contraction path and splicing the picture with the corresponding size in the contraction path, and the expansion path outputs the picture with the depth of 3;
and the activation function performs a three-class classification along the third dimension of the image output by the expansion path, and selects the maximum value among the three classification results as the final output.
Further, the size of the input picture is 256 × 256 × 1, and the contraction path includes performing two convolution operations on the input picture in sequence, so that the hidden layer size is 256 × 256 × 64, and then repeatedly performing one downsampling operation and two convolution operations until the size of the picture is 32 × 32 × 512.
Further, the expansion path repeatedly performs one upsampling and one stitching on the pictures output by the contraction path until the image size is 256 × 256 × 3.
Further, the processing procedure of the activation function specifically comprises performing a three-class classification on the third dimension of the image output by the expansion path to obtain the three classification scores [V0, V1, V2] of each pixel point, and selecting the position of the maximum of V0, V1, V2 as the final output: if V0 is the maximum, the classification value of the corresponding pixel is 0; if V1 is the maximum, the classification value of the corresponding pixel is 1; if V2 is the maximum, the classification value of the corresponding pixel is 2.
Further, the loss function of the convolutional neural network model adopts Logistic regression.
Further, the blood vessel picture is a blood vessel near-infrared picture obtained by a near-infrared camera.
Further, the calculation expression of the graying processing is as follows:
[Graying formula for Grey as a function of R, G and B, given as an image in the original document]
wherein, grey is a graying processing result, R is an R-dimensional value of the blood vessel picture, G is a G-dimensional value of the blood vessel picture, and B is a B-dimensional value of the blood vessel picture.
Further, the automatic blood vessel puncture method further comprises the step of standardizing the blood vessel picture after the graying processing, and then loading the blood vessel picture into the convolutional neural network model.
Further, the training process of the convolutional neural network model comprises the following steps: randomly dividing a pre-collected and labeled blood vessel near-infrared data set into a training set and a testing set, loading the training set into a convolutional neural network model for training, and verifying the convolutional neural network model through the testing set until a preset training standard is reached;
and updating parameters by adopting a back propagation algorithm in the training process of the convolutional neural network model.
Furthermore, the blood vessel near-infrared data set comprises blood vessel near-infrared pictures subjected to graying processing and standardization processing and Mask pictures corresponding to the blood vessel near-infrared pictures one by one, the size of each Mask picture is the same as that of the blood vessel near-infrared picture in the blood vessel near-infrared data set, and the Mask pictures are used for distinguishing a background, a puncture region in the blood vessel and a non-puncture region in the blood vessel through three different numerical values.
Compared with the prior art, the invention has the following advantages:
(1) The method adopts a convolutional neural network model to identify the background, the puncturable region within the blood vessel and the non-puncturable region within the blood vessel from the blood vessel picture, and performs blood vessel puncture according to the puncturable region within the blood vessel.
The convolutional neural network model adopts a U-net-like structure, so every pixel of the image can be classified and a higher segmentation accuracy is obtained. Owing to the structure of U-net, the model contains a contraction path and an expansion path, so it can capture context information and localize precisely at the same time. In addition, data enhancement (splicing with the contraction path) is used and elastic deformation is applied to the collected pictures, which alleviates the shortage of training data and improves the robustness of the algorithm.
(2) The invention uses a near-infrared camera to acquire images, obtaining a clearer blood vessel image near the puncture position, so that the background and the non-puncturable part of the blood vessel can be identified and avoided when the puncture point is selected.
Drawings
FIG. 1 is a schematic structural diagram of a convolutional neural network model according to an embodiment of the present invention;
FIG. 2 is a diagram of a ReLU activation function;
FIG. 3 is a flow chart of the steps of the automatic blood vessel puncture method based on deep learning according to the embodiment of the present invention;
fig. 4 is a schematic diagram of a blood vessel near-infrared image after graying processing.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. The present embodiment is implemented on the premise of the technical solution of the present invention, and a detailed implementation manner and a specific operation process are given, but the scope of the present invention is not limited to the following embodiments.
Example 1
The embodiment provides an automatic blood vessel puncture method based on deep learning, which comprises: acquiring a blood vessel picture of the site to be punctured, performing graying processing on the blood vessel picture, loading the grayed blood vessel picture into a pre-established and trained convolutional neural network model to obtain the background, the puncturable region within the blood vessel and the non-puncturable region within the blood vessel in the blood vessel picture, and performing blood vessel puncture according to the puncturable region within the blood vessel;
the convolutional neural network model comprises a contraction path, an expansion path and an activation function which are connected in sequence,
the contraction path is used for down-sampling the input picture;
the expansion path is used for up-sampling the picture output by the contraction path and splicing the picture with the corresponding size in the contraction path, and the expansion path outputs a picture with the depth of 3;
and the activation function performs a three-class classification along the third dimension of the image output by the expansion path, and selects the maximum value among the three classification results as the final output.
Specifically, the size of the input picture is 256 × 256 × 1, and the contraction path includes performing two convolution operations on the input picture in sequence so that the hidden layer size is 256 × 256 × 64, and then repeatedly performing one downsampling and two convolution operations until the size of the picture is 32 × 32 × 512.
The expansion path repeatedly performs one upsampling and one stitching on the pictures output by the contraction path until the image size is 256 × 256 × 3.
Specifically, the processing procedure of the activation function is to perform a three-class classification on the third dimension of the picture output by the expansion path to obtain the three classification scores [V0, V1, V2] of each pixel point, and to select the position of the maximum of V0, V1, V2 as the final output: if V0 is the maximum, the classification value of the corresponding pixel is 0; if V1 is the maximum, the classification value of the corresponding pixel is 1; if V2 is the maximum, the classification value of the corresponding pixel is 2.
As a preferred embodiment, logistic regression is used as the loss function of the convolutional neural network model.
In a preferred embodiment, the parameters are updated by using a back propagation algorithm in the training process of the convolutional neural network model.
In a preferred embodiment, the blood vessel picture is a blood vessel near-infrared picture obtained by a near-infrared camera.
The calculation expression of the graying processing is as follows:
[Graying formula for Grey as a function of R, G and B, given as an image in the original document]
in the formula, grey is a graying processing result, R is an R-dimensional value of the blood vessel picture, G is a G-dimensional value of the blood vessel picture, and B is a B-dimensional value of the blood vessel picture.
As a preferred embodiment, the automatic blood vessel puncturing method further includes normalizing the grayed blood vessel picture, and then loading the blood vessel picture into the convolutional neural network model.
The training process of the convolutional neural network model comprises the following steps: randomly dividing a pre-collected and labeled blood vessel near-infrared data set into a training set and a testing set, loading the training set into a convolutional neural network model for training, and verifying the convolutional neural network model through the testing set until a preset training standard is reached;
the blood vessel near-infrared data set comprises blood vessel near-infrared pictures subjected to graying processing and standardization processing and Mask pictures corresponding to the blood vessel near-infrared pictures one by one, the size of each Mask picture is the same as that of the blood vessel near-infrared picture in the blood vessel near-infrared data set, and the Mask pictures are used for distinguishing a background, a puncture region in the blood vessel and a puncture-free region in the blood vessel through three different numerical values.
The above preferred embodiments are combined to obtain an optimal embodiment, and specific implementation procedures of the optimal embodiment are described in detail below.
As shown in fig. 3, the automatic blood vessel puncturing method based on deep learning in this embodiment includes the following steps:
s1: clear and complete blood vessel pictures near the acupuncture position are obtained through the near-infrared camera.
S2: a vessel surveillance data set with three classification markers is established.
S3: a convolutional neural network model as shown in figure 1 was built.
S4: and (4) training the convolution neural network model built by the model pair S3 by using a near-infrared blood vessel image data set.
S5: and testing the trained model by using the test set.
S6: after the test is finished, each time the image obtained by the near-infrared camera is sent to the convolutional neural network model, the convolutional neural network model divides the near-infrared blood vessel image into a background, a part which can be punctured in the blood vessel and a part which can not be punctured in the blood vessel.
S7: and performing vascular puncture according to the puncture area in the blood vessel.
The following is a detailed description of the key parts of the above steps:
1. collecting a data set
101) Blood vessel near-infrared pictures of 256 × 256 × 3 pixels at different puncture sites were acquired by the near-infrared camera.
102) The pictures are converted to gray scale by the conversion formula:

[Graying formula for Grey as a function of R, G and B, given as an image in the original document]

where Grey is the graying result, and R, G and B are the R-, G- and B-channel values of the blood vessel picture.
As shown in Fig. 4, the picture is converted from an RGB map to a gray-scale map, each pixel of the gray-scale map is normalized by the following equation, and a 256 × 256 × 1 Mask picture is created to specify the category of each pixel.

[Per-pixel normalization formula over A_pq, given as an image in the original document]

where A_pq is the pixel value in the p-th row and q-th column, A_ij is the pixel value in the i-th row and j-th column, and n is the number of rows or columns of the picture.
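Since both expressions above are rendered as images in the original document, the following NumPy sketch of the preprocessing is only illustrative: it assumes a plain channel average for graying and a zero-mean, unit-variance standardization over all pixels, which may differ from the patent's exact formulas.

```python
import numpy as np

def to_gray(img_rgb: np.ndarray) -> np.ndarray:
    """Convert an HxWx3 RGB vessel picture to a single-channel gray picture.
    The patent's exact Grey(R, G, B) expression is given as an image, so a
    simple channel average is assumed here."""
    r, g, b = img_rgb[..., 0], img_rgb[..., 1], img_rgb[..., 2]
    return (r.astype(np.float32) + g + b) / 3.0

def standardize(gray: np.ndarray) -> np.ndarray:
    """Per-picture standardization of the grayed image over all pixels A_ij
    (assumed to be zero-mean, unit-variance normalization)."""
    mean = gray.mean()
    std = gray.std() + 1e-8   # guard against division by zero
    return (gray - mean) / std
```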
The above operations are repeated until the data set can support the training of the model. How to create Mask pictures of near-infrared images is described below.
2. Mask picture establishment
Each Mask image has the same size as the input picture. Assuming the blood vessel near-infrared image has size HxWx3, where 3 is the number of RGB channels, the Mask is an image of size HxWx1.
201) A Mask of size HxWx1 is established, in which the value of each pixel is 0.
202) In the Mask, the values at the coordinates (H0, W0) of pixel points belonging to the puncturable portion of the blood vessel are changed to 1.
203) In the Mask, the values at the coordinates (H1, W1) of pixel points belonging to the non-puncturable portion of the blood vessel are changed to 2.
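As an illustration of steps 201) to 203), a minimal NumPy sketch of Mask construction follows; the coordinate lists `puncturable_px` and `non_puncturable_px` are hypothetical outputs of manual labeling, not part of the patent.

```python
import numpy as np

def build_mask(height, width, puncturable_px, non_puncturable_px):
    """Create an HxWx1 Mask initialized to 0 (background), then set pixels of
    the puncturable vessel portion to 1 and pixels of the non-puncturable
    vessel portion to 2. The coordinate lists contain (row, col) pairs."""
    mask = np.zeros((height, width, 1), dtype=np.uint8)   # step 201): all zeros
    for h0, w0 in puncturable_px:
        mask[h0, w0, 0] = 1                               # step 202): puncturable
    for h1, w1 in non_puncturable_px:
        mask[h1, w1, 0] = 2                               # step 203): non-puncturable
    return mask
```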
3. Modeling
301) Build the model: the input is a 256 × 256 × 1 gray-scale image.
302) Establish the contraction path: after the input picture is convolved twice (padding, conv3x3, ReLU), the hidden layer size is 256 × 256 × 64; then one downsampling (2x2 max_pooling) and two convolutions are repeated until the image size is 32 × 32 × 512. At this point the extraction of information from the original image is complete, and the information contained in the 32 × 32 × 512 matrix will be used for the three-class classification.
Here, padding denotes border padding, conv3x3 is a convolution with a 3x3 kernel, ReLU is the ReLU activation function, and 2x2 max_pooling is max pooling with a 2x2 window.
303) Establish the expansion path: the 32 × 32 × 512 picture is repeatedly upsampled once (2x2 up_conv) and spliced once (Concat Feature_Map) until its size is 256 × 256 × 3, the same size as the original picture. The third dimension of size 3 is used for classification: SoftMax is applied along this dimension so that each HxW position is classified as 0, 1 or 2, corresponding to the background, the puncturable portion of the blood vessel and the non-puncturable portion of the blood vessel.
Here, up_conv is upsampling based on a convolution layer, and Concat Feature_Map is the channel-wise merging of feature maps.
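For illustration only, the following is a compact PyTorch sketch of a U-net-style network matching the sizes described above (256 × 256 × 1 input, 32 × 32 × 512 at the bottom of the contraction path, 256 × 256 × 3 output). Layer choices such as ConvTranspose2d for the up_conv step are assumptions, not the patented implementation.

```python
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    """Two padded conv3x3 + ReLU blocks, one stage of either path."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class VesselUNet(nn.Module):
    """U-net-style model: 1x256x256 gray input -> 3x256x256 per-pixel scores."""
    def __init__(self):
        super().__init__()
        self.enc1 = double_conv(1, 64)      # 256x256x64
        self.enc2 = double_conv(64, 128)    # 128x128x128
        self.enc3 = double_conv(128, 256)   # 64x64x256
        self.enc4 = double_conv(256, 512)   # 32x32x512 (bottom of the path)
        self.pool = nn.MaxPool2d(2)         # 2x2 max_pooling
        self.up3 = nn.ConvTranspose2d(512, 256, 2, stride=2)  # 2x2 up_conv
        self.dec3 = double_conv(512, 256)
        self.up2 = nn.ConvTranspose2d(256, 128, 2, stride=2)
        self.dec2 = double_conv(256, 128)
        self.up1 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec1 = double_conv(128, 64)
        self.head = nn.Conv2d(64, 3, 1)     # depth-3 output for the 3 classes

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        e4 = self.enc4(self.pool(e3))
        d3 = self.dec3(torch.cat([self.up3(e4), e3], dim=1))  # Concat Feature_Map
        d2 = self.dec2(torch.cat([self.up2(d3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                # Nx3x256x256 (channels-first layout)
```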
304) Activation function Softmax: the final output image of size 256 × 256 × 3 is passed through a Softmax function along the third dimension of the matrix to obtain the three classification scores [V0, V1, V2] of each pixel point, and the position of the maximum of V0, V1, V2 is taken as the final output. For example, if V0 > V1 > V2 at position (H0, W0), then 0 is selected as the output of point (H0, W0), indicating that this pixel belongs to the background class.
Softmax activation function:
Softmax(z_i) = exp(z_i) / Σ_{c=1..C} exp(z_c)

where z_i is the input of neuron i, C is the total number of neurons, and Softmax(z_i) is the output of the Softmax activation function.
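A short PyTorch sketch of step 304), assuming the hypothetical network and input tensor from the sketch above: Softmax along the class dimension yields [V0, V1, V2] for every pixel, and the position of the maximum gives the final label.

```python
import torch

model = VesselUNet()                    # hypothetical model from the sketch above
x = torch.randn(1, 1, 256, 256)         # placeholder for a preprocessed picture
scores = model(x)                       # 1x3x256x256 raw class scores
probs = torch.softmax(scores, dim=1)    # [V0, V1, V2] for every pixel
labels = probs.argmax(dim=1)            # 1x256x256 map of {0, 1, 2}
```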
305) Select the loss function: the model adopts Logistic regression, and the loss function is:

J(θ) = -(1/m) · Σ_{i=1..m} y_i^T · log(ŷ_i)

where m is the number of samples contained in the data set, y_i is the one-hot encoded label of sample i (a column vector with a 1 at the true class and 0 elsewhere), ŷ_i is the Softmax output of the model for sample i, T denotes transposition, and log(·) is the element-wise logarithm.
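A hedged PyTorch sketch of this loss: `F.cross_entropy` combines the Softmax and the negative log-likelihood of the one-hot labels, which matches the expression above when the Mask stores the class indices {0, 1, 2} directly.

```python
import torch
import torch.nn.functional as F

def three_class_loss(scores: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Cross-entropy J = -(1/m) * sum_i y_i^T log(softmax(scores_i)).
    `scores` is the raw Nx3xHxW model output, `mask` is NxHxW with values {0,1,2}."""
    return F.cross_entropy(scores, mask.long())
```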
Finally, the output of the model is a 256 × 256 × 1 image: a pixel value of 0 represents the background, 1 represents the puncturable portion of the blood vessel, and 2 represents the non-puncturable portion of the blood vessel.
4. Training and testing model
401) Train the model using the collected and labeled blood vessel near-infrared data set: 80% of the pictures are randomly selected as the training set and the remaining 20% as the test set, and the parameter θ is updated with the back-propagation algorithm, namely:

θ ← θ - α · ∂J(θ)/∂θ

where α is the learning rate.
Since the gradient direction is the direction in which the function rises fastest, updating the parameters in the direction opposite to the gradient decreases the value of the function. Here the function is the loss function of the model, i.e. the measure of the error, so decreasing its value means reducing the model error.
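A minimal training-loop sketch under the assumptions above, reusing the hypothetical `VesselUNet`, `three_class_loss` and `VesselNIRDataset` helpers from the earlier sketches; plain SGD stands in for the gradient-descent update θ ← θ - α·∂J/∂θ, and the data variables and hyperparameters are illustrative only.

```python
import torch
from torch.utils.data import DataLoader

model = VesselUNet()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)    # α = 1e-3 (assumed)
train_loader = DataLoader(VesselNIRDataset(train_images, train_masks),
                          batch_size=4, shuffle=True)       # hypothetical 80% split
num_epochs = 50                                              # assumed hyperparameter

for epoch in range(num_epochs):
    for images, masks in train_loader:
        optimizer.zero_grad()
        loss = three_class_loss(model(images), masks)
        loss.backward()      # back-propagation of the error
        optimizer.step()     # step opposite to the gradient direction
```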
402) The test-set data are fed into the trained model for testing, and the accuracy of the three-class classification is calculated.
5. Application model
501) Before automatic puncture is carried out, a near-infrared camera collects blood vessel images near the puncture point; the images are fed into the model for three-class classification of the near-infrared blood vessel image, and the model returns a picture in which each pixel is marked as background / puncturable part of the blood vessel / non-puncturable part of the blood vessel.
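A hypothetical inference sketch of step 501), reusing the preprocessing helpers assumed earlier; the returned 256 × 256 label map marks each pixel as 0 (background), 1 (puncturable part of the blood vessel) or 2 (non-puncturable part of the blood vessel).

```python
import torch

@torch.no_grad()
def segment_frame(model, frame_rgb):
    """Classify one 256x256x3 near-infrared camera frame into the three classes.
    `to_gray` and `standardize` are the hypothetical helpers sketched above."""
    model.eval()
    gray = standardize(to_gray(frame_rgb))                 # 256x256 float image
    x = torch.from_numpy(gray).float()[None, None]         # 1x1x256x256 tensor
    labels = model(x).softmax(dim=1).argmax(dim=1)[0]      # 256x256 label map
    return labels.numpy()
```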
The foregoing detailed description of the preferred embodiments of the invention has been presented. It should be understood that numerous modifications and variations can be devised by those skilled in the art in light of the above teachings. Therefore, the technical solutions available to those skilled in the art through logic analysis, reasoning and limited experiments based on the prior art according to the concept of the present invention should be within the scope of protection defined by the claims.

Claims (9)

1. A method for automatic blood vessel puncture based on deep learning is characterized in that the method specifically comprises the steps of obtaining a blood vessel picture of a part to be punctured, carrying out graying processing on the blood vessel picture, loading the blood vessel picture after graying processing into a pre-established and trained convolution neural network model, obtaining a background, a region capable of puncturing in a blood vessel and a region incapable of puncturing in the blood vessel picture, and carrying out blood vessel puncture according to the region capable of puncturing in the blood vessel;
the convolutional neural network model comprises a contraction path, an expansion path and an activation function which are connected in sequence,
the contraction path is used for down-sampling an input picture;
the expansion path is used for up-sampling the picture output by the contraction path and splicing the picture with the corresponding size in the contraction path, and the expansion path outputs the picture with the depth of 3;
the activation function performs a three-class classification along the third dimension of the picture output by the expansion path, and selects the maximum value among the three classification results as the final output;
the processing procedure of the activation function specifically comprises performing a three-class classification on the third dimension of the picture output by the expansion path to obtain the three classification scores [V0, V1, V2] of each pixel point, and selecting the position of the maximum of V0, V1, V2 as the final output: if V0 is the maximum, the classification value of the corresponding pixel is 0; if V1 is the maximum, the classification value of the corresponding pixel is 1; if V2 is the maximum, the classification value of the corresponding pixel is 2;
a pixel value of 0 represents the background, a pixel value of 1 represents the puncturable part of the blood vessel, and a pixel value of 2 represents the non-puncturable part of the blood vessel.
2. The method according to claim 1, wherein the input picture has a size of 256 × 256 × 1, and the contraction path comprises performing two convolution operations on the input picture in sequence to make the hidden layer size 256 × 256 × 64, and then repeating the downsampling and the two convolution operations until the picture size is 32 × 32 × 512.
3. The method of claim 2, wherein the expansion path repeatedly performs an upsampling and a stitching on the output pictures of the contraction path until the image size is 256 × 256 × 3.
4. The automatic vessel puncture method based on deep learning of claim 1, wherein the loss function of the convolutional neural network model adopts Logistic regression.
5. The automatic blood vessel puncturing method based on deep learning of claim 1, wherein the blood vessel picture is a blood vessel near-infrared picture obtained by a near-infrared camera.
6. The automatic blood vessel puncture method based on deep learning of claim 5, wherein the calculation expression of the graying processing is as follows:
[Graying formula for Grey as a function of R, G and B, given as an image in the original document]
in the formula, grey is a graying processing result, R is an R-dimensional value of the blood vessel picture, G is a G-dimensional value of the blood vessel picture, and B is a B-dimensional value of the blood vessel picture.
7. The automatic blood vessel puncturing method based on deep learning of claim 6, wherein the automatic blood vessel puncturing method further comprises normalizing the grayed blood vessel picture and then loading the blood vessel picture into the convolutional neural network model.
8. The automatic vessel puncture method based on deep learning of claim 1, wherein the training process of the convolutional neural network model comprises: randomly dividing a pre-collected and labeled blood vessel near-infrared data set into a training set and a testing set, loading the training set into a convolutional neural network model for training, and verifying the convolutional neural network model through the testing set until a preset training standard is reached;
and updating parameters by adopting a back propagation algorithm in the training process of the convolutional neural network model.
9. The automatic blood vessel puncturing method based on deep learning of claim 8, wherein the blood vessel near-infrared data set comprises blood vessel near-infrared pictures subjected to graying processing and standardization processing and Mask pictures corresponding to the blood vessel near-infrared pictures one by one, the size of the Mask pictures is the same as that of the blood vessel near-infrared pictures in the blood vessel near-infrared data set, and the Mask pictures are used for distinguishing a background, a punctured region in the blood vessel and a non-punctured region in the blood vessel through three different numerical values.
CN202011281045.2A 2020-11-16 2020-11-16 Automatic blood vessel puncture method based on deep learning Active CN112242193B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011281045.2A CN112242193B (en) 2020-11-16 2020-11-16 Automatic blood vessel puncture method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011281045.2A CN112242193B (en) 2020-11-16 2020-11-16 Automatic blood vessel puncture method based on deep learning

Publications (2)

Publication Number Publication Date
CN112242193A CN112242193A (en) 2021-01-19
CN112242193B true CN112242193B (en) 2023-03-31

Family

ID=74166953

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011281045.2A Active CN112242193B (en) 2020-11-16 2020-11-16 Automatic blood vessel puncture method based on deep learning

Country Status (1)

Country Link
CN (1) CN112242193B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113516624A (en) * 2021-04-28 2021-10-19 武汉联影智融医疗科技有限公司 Determination of puncture forbidden zone, path planning method, surgical system and computer equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110334656A (en) * 2019-07-08 2019-10-15 中国人民解放军战略支援部队信息工程大学 Multi-source Remote Sensing Images Clean water withdraw method and device based on information source probability weight
CN110852987A (en) * 2019-09-24 2020-02-28 西安交通大学 Vascular plaque detection method and device based on deep morphology and storage medium
WO2020177217A1 (en) * 2019-03-04 2020-09-10 东南大学 Method of segmenting pedestrians in roadside image by using convolutional network fusing features at different scales
CN111931626A (en) * 2020-08-03 2020-11-13 天津理工大学 Automatic operation method of vascular intervention robot based on deep learning

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020177217A1 (en) * 2019-03-04 2020-09-10 东南大学 Method of segmenting pedestrians in roadside image by using convolutional network fusing features at different scales
CN110334656A (en) * 2019-07-08 2019-10-15 中国人民解放军战略支援部队信息工程大学 Multi-source Remote Sensing Images Clean water withdraw method and device based on information source probability weight
CN110852987A (en) * 2019-09-24 2020-02-28 西安交通大学 Vascular plaque detection method and device based on deep morphology and storage medium
CN111931626A (en) * 2020-08-03 2020-11-13 天津理工大学 Automatic operation method of vascular intervention robot based on deep learning

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Image recognition system based on convolutional neural network; 李航 et al.; Computer Knowledge and Technology (《电脑知识与技术》); 2020-04-05 (No. 10); full text *
Research on finger vein recognition based on deep learning; 吴超 et al.; Computer Technology and Development (《计算机技术与发展》); 2018-02-28; full text *
Development of an automatic venipuncture robot based on deep learning; 董丽丽 et al.; Medical and Health Equipment (《医疗卫生装备》); 2020-06-30; full text *
Intelligent localization method for puncture points of venous indwelling needles based on deep learning; 贾海晶 et al.; Journal of Biomedical Engineering Research (《生物医学工程研究》); 2018-01-19; full text *

Also Published As

Publication number Publication date
CN112242193A (en) 2021-01-19

Similar Documents

Publication Publication Date Title
CN111145170B (en) Medical image segmentation method based on deep learning
Dharmawan et al. A new hybrid algorithm for retinal vessels segmentation on fundus images
Nasr-Esfahani et al. Vessel extraction in X-ray angiograms using deep learning
Zhu et al. Detection of the optic disc in images of the retina using the Hough transform
CN115205300B (en) Fundus blood vessel image segmentation method and system based on cavity convolution and semantic fusion
KR102214123B1 (en) Method and system for extracting and integratin and evaluating lesion using pwi-dwi mismatch based on artificial intelligence
CN110751636B (en) Fundus image retinal arteriosclerosis detection method based on improved coding and decoding network
CN115496771A (en) Brain tumor segmentation method based on brain three-dimensional MRI image design
CN113223005B (en) Thyroid nodule automatic segmentation and grading intelligent system
Xu et al. Dual-channel asymmetric convolutional neural network for an efficient retinal blood vessel segmentation in eye fundus images
CN113012163A (en) Retina blood vessel segmentation method, equipment and storage medium based on multi-scale attention network
CN114511502A (en) Gastrointestinal endoscope image polyp detection system based on artificial intelligence, terminal and storage medium
CN112242193B (en) Automatic blood vessel puncture method based on deep learning
CN116758336A (en) Medical image intelligent analysis system based on artificial intelligence
CN113379741B (en) Retinal blood vessel segmentation method, device and storage medium based on blood vessel characteristics
CN114821682A (en) Multi-sample mixed palm vein identification method based on deep learning algorithm
CN112741651B (en) Method and system for processing ultrasonic image of endoscope
KR20200083303A (en) Apparatus and method for increasing learning data using patch matching
CN113539402B (en) Multi-mode image automatic sketching model migration method
CN113506274A (en) Detection system for human cognitive condition based on visual saliency difference map
CN117218453A (en) Incomplete multi-mode medical image learning method
Mookiah et al. Computer aided diagnosis of diabetic retinopathy using multi-resolution analysis and feature ranking frame work
CN110634119B (en) Method, device and computing equipment for segmenting vein blood vessel in magnetic sensitivity weighted image
CN116206160A (en) Automatic identification network model and automatic sketching network model construction method for nasopharyngeal carcinoma lesion tissues based on convolutional neural network model
CN112614092A (en) Spine detection method and device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant