CN114176514A - Vein identification and positioning method and system based on near-infrared imaging - Google Patents

Vein identification and positioning method and system based on near-infrared imaging Download PDF

Info

Publication number
CN114176514A
CN114176514A CN202111353446.9A CN202111353446A
Authority
CN
China
Prior art keywords
infrared imaging
vein
infrared
vessel
blood vessel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111353446.9A
Other languages
Chinese (zh)
Other versions
CN114176514B (en)
Inventor
齐鹏
季嘉蕊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tongji University
Original Assignee
Tongji University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tongji University filed Critical Tongji University
Priority to CN202111353446.9A priority Critical patent/CN114176514B/en
Publication of CN114176514A publication Critical patent/CN114176514A/en
Application granted granted Critical
Publication of CN114176514B publication Critical patent/CN114176514B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/48Other medical applications
    • A61B5/4887Locating particular structures in or on the body
    • A61B5/489Blood vessels
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/0033Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/0059Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B5/0062Arrangements for scanning
    • A61B5/0064Body surface scanning
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/0059Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B5/0075Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence by spectroscopy, i.e. measuring spectra, e.g. Raman spectroscopy, infrared absorption spectroscopy
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235Details of waveform analysis
    • A61B5/7264Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Pathology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Vascular Medicine (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physiology (AREA)
  • Psychiatry (AREA)
  • Signal Processing (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Image Input (AREA)

Abstract

The invention relates to a vein blood vessel identification and positioning method and system based on near-infrared imaging. The method comprises: capturing infrared imaging pictures at a first infrared wavelength and a second infrared wavelength with a near-infrared imaging camera, obtaining a first infrared imaging picture and a second infrared imaging picture respectively; preprocessing the first infrared imaging picture to enhance the vein features and obtain an enhanced image; inputting the enhanced image into a first vein vessel segmentation network model to identify the vessel distribution and obtain a segmentation image; inputting the segmentation image into a second vein vessel segmentation network model to label the vessels suitable for puncture, obtaining labeled vessels; extracting connected-domain positions to obtain the target puncture point and the optimal puncture angle; and calculating the depth information of the labeled vessels from the change in light intensity of the labeled vessels between the first and second infrared imaging pictures. Compared with the prior art, the method has the advantages of high identification accuracy and good robustness.

Description

Vein identification and positioning method and system based on near-infrared imaging
Technical Field
The invention relates to the technical field of vein vessel identification, in particular to a vein vessel identification and positioning method and system based on near-infrared imaging.
Background
Venipuncture is one of the most frequent medical procedures and is still performed manually. Because of the limits of human vision, manual puncture often requires multiple attempts due to mislocated puncture sites, especially for obese patients with a high BMI and for infants with thin blood vessels; in addition, the puncture angle and puncture pose are difficult to judge consistently and stably. At the same time, training a technically proficient nurse is costly for hospitals.
With the rapid development of intelligent healthcare, venipuncture blood-collection/injection robots have attracted increasing market attention. Existing automatic vein identification techniques usually identify the 2D vein network with near-infrared imaging and rely on ultrasound to determine vessel depth. However, because the prior art combines two independent types of sensor data, ultrasonic and infrared, to describe the spatial information of the vessel, the accuracy of vessel localization tends to degrade during the subsequent fusion processing, and the error is most pronounced in the identification of the vessel's spatial angle.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a vein identification and positioning method and system based on near infrared imaging.
The purpose of the invention can be realized by the following technical scheme:
a vein vessel identification and positioning method based on near infrared imaging comprises the following steps:
s1, respectively shooting infrared imaging pictures under the first infrared wavelength and the second infrared wavelength through a near-infrared imaging camera, wherein the infrared imaging pictures are respectively a first infrared imaging picture and a second infrared imaging picture;
s2, preprocessing the first infrared imaging picture, and enhancing the characteristics of the vein to obtain an enhanced image;
s3, inputting the enhanced image into the first vein vessel segmentation network model, and performing vessel distribution identification to obtain a segmentation image;
s4, inputting the segmentation image into a second vein segmentation network model, and carrying out blood vessel labeling suitable for puncture to obtain a labeled blood vessel; meanwhile, extracting the position of the connected domain of the marked blood vessel to obtain the central point of each connected domain in the marked blood vessel, rotating by taking each central point as a center to form a plurality of straight lines which pass through the connected domain at different angles, and selecting the straight line which is most intersected with the connected domain from the straight lines, wherein the central point corresponding to the selected straight line is the target puncture point, and the slope of the straight line is the optimal puncture angle;
s5, calculating the depth of the body surface fat layer of the human body marked on the blood vessel according to the light intensity change of the marked blood vessel in the first infrared imaging picture and the second infrared imaging picture, namely the depth information of the marked blood vessel.
Further, the first vein segmentation network model and the second vein segmentation network model both adopt a TransUNet neural network model.
Further, the depth information calculation expression of the labeled blood vessel is as follows:
[Formula image BDA0003356742490000021: depth d of the body-surface fat layer expressed in terms of χ, I₁, ΔI₁, I₂, ΔI₂, η, μ₁ and μ₂]
where χ denotes a deformation coefficient; I₁ denotes the light intensity at the first infrared wavelength and ΔI₁ the fat absorption intensity at the first infrared wavelength, i.e. the difference between the emitted intensity and the reflected intensity at the first infrared wavelength; I₂ denotes the light intensity at the second infrared wavelength and ΔI₂ the fat absorption intensity at the second infrared wavelength, i.e. the difference between the emitted intensity and the reflected intensity at the second infrared wavelength; η is the light-intensity absorption correction coefficient; and μ₁ and μ₂ denote the absorption coefficients of fat at the two light intensities I₁ and I₂.
Further, μ₁ and μ₂ are both expressed by a general formula μₐ:
[Formula image BDA0003356742490000022: expression for μₐ in terms of k, φ and λ]
where k is the imaginary part of the refractive index of water, φ is the volume fraction of water in the tissue, and λ is the wavelength of the incident light.
Further, the preprocessing comprises the steps of carrying out gray level processing on the image, then carrying out binarization processing through a maximum inter-class variance method, and finally enhancing the characteristics of the vein vessels in the image by adopting a multiscale filtering method based on a Hessian matrix.
Further, the first infrared wavelength is 850 nm, and the second infrared wavelength is 940 nm.
Further, in step S1, the infrared imaging camera respectively takes a plurality of infrared imaging pictures at the first infrared wavelength and the second infrared wavelength, and selects the clearest image as the first infrared imaging picture and the second infrared imaging picture.
A vein vessel identification and positioning system based on near-infrared imaging comprises a processor and a memory, wherein the processor calls a program in the memory to execute the following steps:
s1, respectively shooting infrared imaging pictures under the first infrared wavelength and the second infrared wavelength through a near-infrared imaging camera, wherein the infrared imaging pictures are respectively a first infrared imaging picture and a second infrared imaging picture;
s2, preprocessing the first infrared imaging picture, and enhancing the characteristics of the vein to obtain an enhanced image;
s3, inputting the enhanced image into the first vein vessel segmentation network model, and performing vessel distribution identification to obtain a segmentation image;
s4, inputting the segmentation image into a second vein segmentation network model, and carrying out blood vessel labeling suitable for puncture to obtain a labeled blood vessel; meanwhile, extracting the position of the connected domain of the marked blood vessel to obtain the central point of each connected domain in the marked blood vessel, rotating by taking each central point as a center to form a plurality of straight lines which pass through the connected domain at different angles, and selecting the straight line which is most intersected with the connected domain from the straight lines, wherein the central point corresponding to the selected straight line is the target puncture point, and the slope of the straight line is the optimal puncture angle;
s5, calculating the depth of the body surface fat layer of the human body marked on the blood vessel according to the light intensity change of the marked blood vessel in the first infrared imaging picture and the second infrared imaging picture, namely the depth information of the marked blood vessel.
Further, the depth information of the labeled blood vessel is calculated by the following expression:
[Formula image BDA0003356742490000031: depth d of the body-surface fat layer expressed in terms of χ, I₁, ΔI₁, I₂, ΔI₂, η, μ₁ and μ₂]
where χ denotes a deformation coefficient; I₁ denotes the light intensity at the first infrared wavelength and ΔI₁ the fat absorption intensity at the first infrared wavelength, i.e. the difference between the emitted intensity and the reflected intensity at the first infrared wavelength; I₂ denotes the light intensity at the second infrared wavelength and ΔI₂ the fat absorption intensity at the second infrared wavelength, i.e. the difference between the emitted intensity and the reflected intensity at the second infrared wavelength; η is the light-intensity absorption correction coefficient; and μ₁ and μ₂ denote the absorption coefficients of fat at the two light intensities I₁ and I₂.
Further, μ₁ and μ₂ are both expressed by a general formula μₐ:
[Formula image BDA0003356742490000032: expression for μₐ in terms of k, φ and λ]
where k is the imaginary part of the refractive index of water, φ is the volume fraction of water in the tissue, and λ is the wavelength of the incident light.
Compared with the prior art, the invention has the following beneficial effects:
1. Vessel position and depth can be identified using near-infrared data alone, without additional ultrasound equipment; because images of the same location are captured by the same camera, the depth information and the vessel position are highly consistent, which significantly improves identification accuracy and robustness.
2. Compared with the traditional approach of using a single neural network for identification, actual experiments show that the detection accuracy of the dual-input, dual-output network is far superior to that of a single-input, single-output network, which gives higher reliability and accuracy when applied to medical identification.
3. The vein segmentation network models adopt the TransUNet neural network model, which combines a CNN and a Transformer as the encoder: the CNN focuses on local details, while the Transformer encodes global information. In addition, the skip connections of the UNet architecture are retained: the feature maps generated by the encoder are concatenated channel-wise with the corresponding feature maps restored by the decoder, avoiding the loss of feature information caused by the increasing number of network layers in the downsampling path.
Drawings
Fig. 1 is a general flowchart of a blood vessel image recognition method based on near infrared imaging according to the present invention.
Fig. 2 is a training flowchart of the vessel segmentation network according to the present invention.
Fig. 3 is a diagram showing the preprocessing effect of the blood vessel image according to the present invention.
Fig. 4 is a diagram showing the effect of the vessel segmentation network according to the present invention.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. The present embodiment is implemented on the premise of the technical solution of the present invention, and a detailed implementation manner and a specific operation process are given, but the scope of the present invention is not limited to the following embodiments.
As shown in fig. 1, the present embodiment provides a vein identification and location method based on near-infrared imaging, including the following steps:
step S1, respectively shooting infrared imaging pictures under a first infrared wavelength and a second infrared wavelength through a near-infrared imaging camera, wherein the infrared imaging pictures are a first infrared imaging picture and a second infrared imaging picture;
s2, preprocessing the first infrared imaging picture, and enhancing the characteristics of the vein to obtain an enhanced image;
step S3, inputting the enhanced image into a first vein vessel segmentation network model, and carrying out vessel distribution identification to obtain a segmentation image;
step S4, inputting the segmentation image into a second vein segmentation network model, and carrying out blood vessel labeling suitable for puncture to obtain a labeled blood vessel;
and step S5, calculating the depth of the body surface fat layer of the human body marked on the blood vessel according to the light intensity change of the marked blood vessel in the first infrared imaging picture and the second infrared imaging picture, namely the depth information of the marked blood vessel.
As shown in fig. 4, the acquired infrared images are processed through the above five steps, and a suitable puncture point, puncture angle and puncture position are finally output.
The following are specific developments:
step S1
When shooting, the near-infrared imaging camera captures several infrared imaging pictures at the first infrared wavelength and at the second infrared wavelength, and the sharpest image at each wavelength is selected as the first and the second infrared imaging picture respectively. In this embodiment, the first infrared wavelength is 850 nm and the second infrared wavelength is 940 nm; infrared absorption peaks near the 940 nm wavelength and is much weaker at 850 nm, which makes these two wavelengths the most suitable for estimating fat depth.
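The patent does not state how the sharpest image is chosen; as one possibility (an assumption, not something specified in the text), a variance-of-Laplacian focus measure is a common way to rank image sharpness in OpenCV:

import cv2

def pick_sharpest(paths):
    # Return the file whose image has the largest variance of the Laplacian,
    # a common (assumed, not patent-specified) proxy for image sharpness.
    def sharpness(p):
        img = cv2.imread(p, cv2.IMREAD_GRAYSCALE)
        return cv2.Laplacian(img, cv2.CV_64F).var()
    return max(paths, key=sharpness)

Running pick_sharpest over the 850 nm shots and again over the 940 nm shots would yield the first and second infrared imaging pictures respectively.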
Step S2
The first infrared imaging picture is preprocessed as follows:
S21, cropping out the black border area and machine information from the first infrared imaging picture;
S22, graying the picture: if the input image is a color image, it is converted to a grayscale image whose gray values range from 0 to 255, from dark (black) to light (white); the picture pixels are then normalized;
the specific formula of normalization is:
[Formula image BDA0003356742490000051: pixel normalization expression in terms of the pixel value g]
wherein g represents the pixel value of each pixel point in the picture;
s23, carrying out binarization processing on the gray level image obtained in S22 through an OTSU algorithm (maximum inter-class variance method);
S24, performing vessel enhancement on the image using a multi-scale filtering method based on the Hessian matrix.
The binarization and filtering of the first infrared imaging picture provide a cleaner sample input for the subsequent vein network segmentation and therefore a better segmentation result.
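The following is a minimal Python sketch of this preprocessing pipeline using OpenCV and scikit-image. The min-max normalization form, the use of the Frangi filter as the Hessian-based multi-scale vesselness filter, and all parameter values are assumptions for illustration, not the patent's exact implementation.

import cv2
import numpy as np
from skimage.filters import frangi  # Hessian-based multi-scale vesselness filter

def preprocess(image_path):
    # S21/S22: read as grayscale and min-max normalize (normalization form assumed)
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE).astype(np.float32)
    norm = (gray - gray.min()) / max(float(gray.max() - gray.min()), 1.0)

    # S23: binarization by the OTSU (maximum inter-class variance) algorithm
    _, binary = cv2.threshold((norm * 255).astype(np.uint8), 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # S24: multi-scale Hessian-based vessel enhancement (veins appear as dark ridges)
    enhanced = frangi(norm, sigmas=range(1, 6), black_ridges=True)
    return binary, enhanced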
Step S3 and step S4
In the two steps, the first vein segmentation network model and the second vein segmentation network model both adopt a TransUNet neural network model.
As shown in fig. 2, the first vein vessel segmentation network model aims to segment the contour of the vessel. The process comprises data processing, model building, model training and result output.
The construction process of the first vein vessel segmentation network model comprises the following steps:
a series of pictures are taken by the near infrared imaging camera and are preprocessed in step S2 to obtain a training data set. Then, the ratio of 4: 3: 3 dividing the training data set into a target training set, a target verification set and a target test set. In order to increase the number of miss-trainers in the network and avoid the overfitting of the network, the target training set is rotated, translated, randomly scaled, shielded and the like. In addition, the division ratio is only listed in the embodiment and is not limited; in this embodiment, the data set is subjected to labeling of the vein vessel edges by a medical professional.
Constructing a structure of a TransUNet neural network model;
inputting the target training set into a TransUNet network, and training the target training set;
The target verification set and the target test set are then input into the TransUNet network to evaluate the segmentation effect achieved by the trained network. Optionally, this embodiment uses the Dice value and the Hausdorff distance to evaluate the vein segmentation performance, and the network is further modified and improved according to the achieved results.
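A short sketch of the 4:3:3 split described above (the shuffling and the random seed are assumptions; data augmentation is omitted):

import numpy as np

def split_dataset(samples, ratios=(0.4, 0.3, 0.3), seed=0):
    # Shuffle and split a list of (image, label) pairs into train/val/test sets.
    idx = np.random.default_rng(seed).permutation(len(samples))
    n_train = int(ratios[0] * len(samples))
    n_val = int(ratios[1] * len(samples))
    train = [samples[i] for i in idx[:n_train]]
    val = [samples[i] for i in idx[n_train:n_train + n_val]]
    test = [samples[i] for i in idx[n_train + n_val:]]
    return train, val, test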
In this embodiment, the TransUNet neural network model is a U-shaped network with five stages in both the encoding part and the decoding part.
Specifically, the improved encoder of the U-shaped network in this model is as follows:
The encoder of the network combines a CNN and a Transformer: the CNN focuses on local information, while the Transformer encodes global information.
Because the input of the Transformer must be a sequence, the input image information has to be serialized. Suppose a near-infrared picture P is obtained by shooting; through preprocessing operations such as graying it becomes a single-channel gray-scale image P of size W×H. The picture is sliced into patches of resolution a×a, so the original picture can be decomposed into
a sequence of picture patches of length N = (W×H)/a². After the serialization of the picture is completed, this sequence of near-infrared picture slices is compressed to M dimensions by one fully connected layer, forming the patch embedding T.
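As an illustration of this serialization step, the sketch below slices a grayscale array into a×a patches and flattens each into a vector; the patch size a = 16 and the cropping behaviour are assumptions made for illustration.

import numpy as np

def to_patch_sequence(gray, a=16):
    # Slice a single-channel H x W image into a x a patches and flatten each
    # patch into a vector, giving a sequence of length (H*W)/(a*a).
    h, w = gray.shape
    gray = gray[: h - h % a, : w - w % a]            # crop to a multiple of a
    patches = gray.reshape(gray.shape[0] // a, a, gray.shape[1] // a, a)
    patches = patches.transpose(0, 2, 1, 3).reshape(-1, a * a)
    return patches                                   # shape: (N, a*a)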
Meanwhile, following the position encoding used in ViT, the absolute position of each patch is encoded to form a one-dimensional position code T_pos.
Thus, for a specific i-th picture patch, the encoding function is
z₀ = T + T_pos,
i.e. the patch embedding plus its position code.
The output vector z₀ is then input into the final encoding stage, which comprises L layers, each consisting of a multi-head self-attention (MSA) module and a multi-layer perceptron (MLP) module; the specific functions are as follows:
y'_j = MSA(LN(y_{j-1})) + y_{j-1}
y_j = MLP(LN(y'_j)) + y'_j
where y_j denotes the encoded output vector of the j-th layer and LN denotes layer normalization, which is used to reduce the influence of highly correlated sample variation;
and meanwhile, coding sequences obtained by CNN corresponding to transformers in different layers are adopted to generate characteristic images with corresponding dimensionality, and the characteristic images are cascaded to form a CNN-transformer encoder.
In this network, the near-infrared picture sequence is encoded through five downsampling stages. Meanwhile, the network retains the skip-connection structure of UNet and correspondingly performs five upsampling stages during decoding, i.e. a five-layer decoder is used to progressively obtain the final decoded image.
specifically, the U-type network decoder of the present model is as follows:
the network adopts a CUP cascade decoder which can hide the characteristics by performing up-sampling for a plurality of times and cascading a plurality of decoders
Figure BDA0003356742490000071
By cascaded upsampling blocks CUP, to
Figure BDA0003356742490000072
Full resolution of (3).
Each upsampling block comprises a 2× upsampling operator, a 3×3 convolutional layer and a ReLU activation function, ReLU(x) = max(0, x).
and the decoded sequence is connected with the characteristic diagram acquired by the CNN network in a long distance, so that the characteristic diagram is aggregated under the aggregation resolution, the low level characteristic is maintained, and a better blood vessel contour identification effect is realized.
As shown in fig. 4, the first vein vessel segmentation network model can be trained with the labeled vessel contour data set; a near-infrared picture obtained from the data preprocessing is then input into the trained model to segment the vein contour.
As shown in fig. 3, the second vein segmentation network model aims to obtain a blood vessel region suitable for puncture, and uses connected domain calculation to select an optimal puncture point and a corresponding puncture angle.
The construction process of the second vein vessel segmentation network model comprises the following steps:
after the contour recognition of the vessel edge is completed, the vessel segment with too high or too thin edge curvature needs to be removed. Specifically, the positions which are not suitable for puncturing, such as vein bifurcations, large-curvature vein sections, vein sections close to the imaging edge of the near-infrared camera, short vein sections and the like, are manually erased, and the accurate positions which are suitable for puncturing are manually marked to be used as a training data set. Then, the ratio of 4: 3: 3 dividing the training data set into a target training set, a target verification set and a target test set.
To enlarge the effective training data and avoid overfitting of the network, the target training set is augmented by rotation, translation, random scaling, occlusion and similar operations. The division ratio listed here is only an example and is not limiting; in this embodiment, the vein vessel edges in the data set are labeled by a medical professional.
Constructing a structure of a TransUNet neural network model;
inputting the target training set into a TransUNet network, and training the target training set;
and inputting the target verification set and the target test set into a TransUNet network, and training the TransUNet network to obtain a target segmentation effect of the network. Optionally, the implementation example uses the dice value and the Hausdorff distance to evaluate the implementation effect of vein segmentation. Further modifications and improvements are made in accordance with the implementation of the object.
As shown in fig. 4, the second vein vessel segmentation network model can be trained by labeled vessel data sets suitable for puncture; the output result of the first vein vessel segmentation network model can be used as the input of the second vein vessel segmentation network model, a blood vessel region suitable for puncture is identified, and further the optimal puncture point and puncture angle are calculated through a connected domain algorithm.
The result of the first vein vessel segmentation network model is input into the second vein vessel segmentation network model to obtain the vessel regions suitable for puncture; let the output picture be Q. Connected-domain position extraction is then performed on Q using:
cv2.findContours(Q, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
After the connected components are obtained, let the set of connected domains be S = {s₁, s₂, …, s_n}, where n is the number of connected domains.
For each connected domain, the corresponding center point is calculated using the image central-moment function in OpenCV:
cv2.moments(contours[j], binaryImage=True)
let the set of center points corresponding to each connected domain be P ═ { P ═ P1,p2,……,pnAnd n is the number of communication domains, and the central points are the candidate set of the optimal puncture points.
For each center point pᵢ, straight lines are rotated about pᵢ, giving a family of lines that pass through the connected domain at different angles; among these lines, the line lᵢ that intersects the connected domain sᵢ the most is selected, and the slope angle θᵢ of lᵢ is taken as the optimal puncture angle for that connected domain.
Through these steps, a line lᵢ is obtained for each connected domain. Selecting l₀ = max{lᵢ, i ∈ [1, n]} (the line with the largest intersection), the target puncture point in the original image is the center point p₀ of the connected domain corresponding to this line, and the angle of this line is taken as the puncture angle θ₀.
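A minimal Python sketch of this connected-domain search is given below; the angle sampling step, the function name and the use of a filled contour mask are assumptions made for illustration.

import cv2
import numpy as np

def select_puncture_point(mask, num_angles=36):
    # Pick the target puncture point and angle from a binary mask Q of
    # puncture-suitable vessel regions.
    contours, _ = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
    best = (None, None, -1)                      # (center point, angle, overlap)
    diag = int(np.hypot(*mask.shape))
    for c in contours:
        m = cv2.moments(c, binaryImage=True)
        if m["m00"] == 0:
            continue
        cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
        region = np.zeros_like(mask)
        cv2.drawContours(region, [c], -1, 255, thickness=cv2.FILLED)
        for theta in np.linspace(0, np.pi, num_angles, endpoint=False):
            dx, dy = np.cos(theta), np.sin(theta)
            p1 = (int(cx - diag * dx), int(cy - diag * dy))
            p2 = (int(cx + diag * dx), int(cy + diag * dy))
            line = np.zeros_like(mask)
            cv2.line(line, p1, p2, 255, 1)
            overlap = int(np.count_nonzero((line > 0) & (region > 0)))
            if overlap > best[2]:
                best = ((cx, cy), float(theta), overlap)
    return best                                   # ((x, y), angle in radians, pixels)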
Step S5
In the embodiment, the first infrared imaging picture and the second infrared imaging picture are used for obtaining the depth information of the identified blood vessel, and the principle is as follows:
since the depth of veins in the human body is generally determined by the thickness of the epidermis, the thickness of the dermis and the thickness of the fat layer overlying the veins. The thickness of the epidermis layer of the skin of a human arm is usually about 0.15 mm, the total thickness of the skin is usually about 2 mm, and the thickness of the fat layer varies from individual to individual and from site to site. The arm vein depth is determined primarily by the thickness of the fat layer.
The human epidermis transmits infrared light well: most of the near-infrared light irradiating the skin passes through the epidermis into the subcutaneous adipose tissue, and most of the light backscattered by the fat can likewise pass back out through the skin. Both scattering and absorption occur in fat; however, the fat absorption coefficient is small and stable, and it differs between the two near-infrared wavelengths, with an absorption peak near 940 nm and a much weaker absorption effect at 850 nm.
Since the output brightness of the infrared image is non-linear in the input signal amplitude, gamma correction is used here to compute the brightness of the image. Assuming a pixel of the picture has the three components R, G and B, its brightness L can be calculated as:
[Formula image BDA0003356742490000091: gamma-corrected brightness L of a pixel as a function of its R, G and B components]
since the infrared light is uniformly radiated outward in a wave pattern, it is assumed here that the light travels a distance d when radiated to the blood vessel. Being uniformly divergent, the brightness should exhibit a linear relationship with the light intensity reflected by the picture. Assuming that the reflected infrared light intensity of the picture is I', the following formula can be obtained:
I' = γL
where the coefficient γ is the correction coefficient of this linear relation and can be obtained by linear regression; the regression can be performed experimentally in advance.
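A small numerical sketch of obtaining γ by linear regression follows; the calibration brightness/intensity pairs are made-up illustrative values, not measured data from the patent.

import numpy as np

# Hypothetical calibration data: gamma-corrected image brightness L and the
# corresponding measured reflected intensity I' for a set of reference shots.
L_cal = np.array([12.0, 25.0, 40.0, 63.0, 90.0])
I_cal = np.array([30.1, 62.4, 99.8, 158.0, 224.9])

# Least-squares fit of the linear relation I' = gamma * L (a line through the origin).
gamma = float(L_cal @ I_cal) / float(L_cal @ L_cal)
I_pred = gamma * L_cal   # reflected intensity estimated from picture brightness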
In this way, the reflected light intensity I₁' of an infrared picture can be obtained from its brightness, and the difference in the energy absorbed by the infrared light at the two wavelengths is then used to calculate the fat thickness. Since fat cells are much larger than the near-infrared wavelength, backscattering in fat is strong, so the difference in backscattered light can be used to estimate the fat thickness. First, for the body-surface fat layer, the absorption coefficients for the two kinds of light are as follows:
[Formula image BDA0003356742490000101: absorption coefficient μₐ expressed in terms of k, φ and λ]
where k is the imaginary part of the refractive index of water, φ is the volume fraction of water in the tissue (taken as the fixed constant 0.81), and λ is the wavelength of the incident light.
Using infrared light at these two wavelengths, the depth of the body-surface fat layer can be calculated, specifically:
[Formula image BDA0003356742490000103: depth d of the body-surface fat layer in terms of I₁, ΔI₁, I₂, ΔI₂, I₁', I₂', η, μ₁ and μ₂]
where I₁ denotes the initial light intensity of the infrared wave at 850 nm and ΔI₁ its fat absorption intensity (the difference between the emitted light intensity and the reflected light intensity); I₂ denotes the initial light intensity of the infrared wave at 940 nm and ΔI₂ its fat absorption intensity (again the emitted intensity minus the reflected intensity); I₁' denotes the reflected light intensity of the first infrared imaging picture and I₂' that of the second infrared imaging picture; η is the light-intensity absorption correction coefficient; and μ₁, μ₂ denote the absorption coefficients of fat at the two light intensities. The specific correction coefficients in the formula can be determined by regression.
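The exact closed-form depth expression appears only as a formula image in the original text. As a non-authoritative sketch of the underlying physics (an assumed Beer-Lambert model, not the patent's actual formula), light that passes twice through a fat layer of depth d satisfies, at each wavelength i:

ΔI_i = I_i − I_i' ≈ I_i · (1 − e^(−2·μ_i·d)),  so  d ≈ (1 / (2·μ_i)) · ln( I_i / (I_i − ΔI_i) ),  i = 1, 2

Combining the 850 nm and 940 nm estimates, whose absorption coefficients μ₁ and μ₂ differ, together with the correction factors η and χ, would then allow scattering and calibration effects common to both wavelengths to cancel, which is presumably the role of the two-wavelength expression referred to above.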
The foregoing detailed description of the preferred embodiments of the invention has been presented. It should be understood that numerous modifications and variations could be devised by those skilled in the art in light of the present teachings without departing from the inventive concepts. Therefore, the technical solutions available to those skilled in the art through logic analysis, reasoning and limited experiments based on the prior art according to the concept of the present invention should be within the scope of protection defined by the claims.

Claims (10)

1. A vein blood vessel identification and positioning method based on near infrared imaging is characterized by comprising the following steps:
s1, respectively shooting infrared imaging pictures under the first infrared wavelength and the second infrared wavelength through a near-infrared imaging camera, wherein the infrared imaging pictures are respectively a first infrared imaging picture and a second infrared imaging picture;
s2, preprocessing the first infrared imaging picture, and enhancing the characteristics of the vein to obtain an enhanced image;
s3, inputting the enhanced image into the first vein vessel segmentation network model, and performing vessel distribution identification to obtain a segmentation image;
s4, inputting the segmentation image into a second vein segmentation network model, and carrying out blood vessel labeling suitable for puncture to obtain a labeled blood vessel; meanwhile, extracting the position of the connected domain of the marked blood vessel to obtain the central point of each connected domain in the marked blood vessel, rotating by taking each central point as a center to form a plurality of straight lines which pass through the connected domain at different angles, and selecting the straight line which is most intersected with the connected domain from the straight lines, wherein the central point corresponding to the selected straight line is the target puncture point, and the slope of the straight line is the optimal puncture angle;
s5, calculating the depth of the body surface fat layer of the human body marked on the blood vessel according to the light intensity change of the marked blood vessel in the first infrared imaging picture and the second infrared imaging picture, namely the depth information of the marked blood vessel.
2. A vein identification and location method based on near-infrared imaging according to claim 1, wherein the first vein segmentation network model and the second vein segmentation network model both adopt a TransUNet neural network model.
3. The vein vessel identification and positioning method based on near-infrared imaging according to claim 1, wherein the depth information calculation expression of the labeled vessel is as follows:
[Formula image FDA0003356742480000011: depth d of the body-surface fat layer expressed in terms of χ, I₁, ΔI₁, I₂, ΔI₂, η, μ₁ and μ₂]
where χ denotes a deformation coefficient; I₁ denotes the light intensity at the first infrared wavelength and ΔI₁ the fat absorption intensity at the first infrared wavelength, i.e. the difference between the emitted intensity and the reflected intensity at the first infrared wavelength; I₂ denotes the light intensity at the second infrared wavelength and ΔI₂ the fat absorption intensity at the second infrared wavelength, i.e. the difference between the emitted intensity and the reflected intensity at the second infrared wavelength; η is the light-intensity absorption correction coefficient; and μ₁ and μ₂ denote the absorption coefficients of fat at the two light intensities I₁ and I₂.
4. The vein identification and positioning method based on near infrared imaging according to claim 3, wherein μ₁ and μ₂ are both expressed by a general formula μₐ:
[Formula image FDA0003356742480000021: expression for μₐ in terms of k, φ and λ]
where k is the imaginary part of the refractive index of water, φ is the volume fraction of water in the tissue, and λ is the wavelength of the incident light.
5. The vein vessel identification and positioning method based on near-infrared imaging as claimed in claim 1, wherein the preprocessing comprises performing gray processing on the image, then performing binarization processing by a maximum inter-class variance method, and finally enhancing the characteristics of the vein vessel in the image by adopting a multiscale filtering method based on a Hessian matrix.
6. The vein identification and positioning method based on near infrared imaging according to claim 1, wherein the first infrared wavelength is 850nm, and the second infrared wavelength is 940 nm.
7. The vein identification and positioning method based on near infrared imaging according to claim 1, wherein in step S1, the infrared imaging camera respectively takes a plurality of infrared imaging pictures at the first infrared wavelength and the second infrared wavelength, and selects the clearest image as the first infrared imaging picture and the second infrared imaging picture.
8. A vein vessel identification and positioning system based on near infrared imaging comprises a processor and a memory, and is characterized in that the processor calls a program in the memory to execute the following steps:
s1, respectively shooting infrared imaging pictures under the first infrared wavelength and the second infrared wavelength through a near-infrared imaging camera, wherein the infrared imaging pictures are respectively a first infrared imaging picture and a second infrared imaging picture;
s2, preprocessing the first infrared imaging picture, and enhancing the characteristics of the vein to obtain an enhanced image;
s3, inputting the enhanced image into the first vein vessel segmentation network model, and performing vessel distribution identification to obtain a segmentation image;
s4, inputting the segmentation image into a second vein segmentation network model, and carrying out blood vessel labeling suitable for puncture to obtain a labeled blood vessel; meanwhile, extracting the position of the connected domain of the marked blood vessel to obtain the central point of each connected domain in the marked blood vessel, rotating by taking each central point as a center to form a plurality of straight lines which pass through the connected domain at different angles, and selecting the straight line which is most intersected with the connected domain from the straight lines, wherein the central point corresponding to the selected straight line is the target puncture point, and the slope of the straight line is the optimal puncture angle;
s5, calculating the depth of the body surface fat layer of the human body marked on the blood vessel according to the light intensity change of the marked blood vessel in the first infrared imaging picture and the second infrared imaging picture, namely the depth information of the marked blood vessel.
9. The vein identification and positioning system based on near infrared imaging as claimed in claim 8, wherein the depth information of the labeled vessel is calculated by the following expression:
[Formula image FDA0003356742480000031: depth d of the body-surface fat layer expressed in terms of χ, I₁, ΔI₁, I₂, ΔI₂, η, μ₁ and μ₂]
where χ denotes a deformation coefficient; I₁ denotes the light intensity at the first infrared wavelength and ΔI₁ the fat absorption intensity at the first infrared wavelength, i.e. the difference between the emitted intensity and the reflected intensity at the first infrared wavelength; I₂ denotes the light intensity at the second infrared wavelength and ΔI₂ the fat absorption intensity at the second infrared wavelength, i.e. the difference between the emitted intensity and the reflected intensity at the second infrared wavelength; η is the light-intensity absorption correction coefficient; and μ₁ and μ₂ denote the absorption coefficients of fat at the two light intensities I₁ and I₂.
10. The vein identification and positioning system based on near infrared imaging according to claim 9, wherein μ₁ and μ₂ are both expressed by a general formula μₐ:
[Formula image FDA0003356742480000032: expression for μₐ in terms of k, φ and λ]
where k is the imaginary part of the refractive index of water, φ is the volume fraction of water in the tissue, and λ is the wavelength of the incident light.
CN202111353446.9A 2021-11-16 2021-11-16 Vein blood vessel identification positioning method and system based on near infrared imaging Active CN114176514B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111353446.9A CN114176514B (en) 2021-11-16 2021-11-16 Vein blood vessel identification positioning method and system based on near infrared imaging

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111353446.9A CN114176514B (en) 2021-11-16 2021-11-16 Vein blood vessel identification positioning method and system based on near infrared imaging

Publications (2)

Publication Number Publication Date
CN114176514A true CN114176514A (en) 2022-03-15
CN114176514B CN114176514B (en) 2023-08-29

Family

ID=80540225

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111353446.9A Active CN114176514B (en) 2021-11-16 2021-11-16 Vein blood vessel identification positioning method and system based on near infrared imaging

Country Status (1)

Country Link
CN (1) CN114176514B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116746926A (en) * 2023-08-16 2023-09-15 深圳市益心达医学新技术有限公司 Automatic blood sampling method, device, equipment and storage medium based on image recognition

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100920251B1 (en) * 2008-12-30 2009-10-05 동국대학교 산학협력단 A method for restoring infrared vein image blurred by skin scattering
CN102871645A (en) * 2011-07-11 2013-01-16 浙江大学 Near-infrared imaging ultrasonic vascular therapeutic apparatus
CN107748872A (en) * 2017-10-27 2018-03-02 孙洪军 A kind of IMAQ is clear and comprehensive intelligent palm vein identification device
CN107812283A (en) * 2017-10-18 2018-03-20 北京工商大学 A kind of method for automatically determining point of puncture position
CN109171905A (en) * 2018-10-11 2019-01-11 青岛浦利医疗技术有限公司 Guiding puncture equipment based on infrared imaging
CN112022346A (en) * 2020-08-31 2020-12-04 同济大学 Control method of full-automatic venipuncture recognition integrated robot
CN113303771A (en) * 2021-07-30 2021-08-27 天津慧医谷科技有限公司 Pulse acquisition point determining method and device and electronic equipment

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100920251B1 (en) * 2008-12-30 2009-10-05 동국대학교 산학협력단 A method for restoring infrared vein image blurred by skin scattering
CN102871645A (en) * 2011-07-11 2013-01-16 浙江大学 Near-infrared imaging ultrasonic vascular therapeutic apparatus
CN107812283A (en) * 2017-10-18 2018-03-20 北京工商大学 A kind of method for automatically determining point of puncture position
CN107748872A (en) * 2017-10-27 2018-03-02 孙洪军 A kind of IMAQ is clear and comprehensive intelligent palm vein identification device
CN109171905A (en) * 2018-10-11 2019-01-11 青岛浦利医疗技术有限公司 Guiding puncture equipment based on infrared imaging
CN112022346A (en) * 2020-08-31 2020-12-04 同济大学 Control method of full-automatic venipuncture recognition integrated robot
CN113303771A (en) * 2021-07-30 2021-08-27 天津慧医谷科技有限公司 Pulse acquisition point determining method and device and electronic equipment

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116746926A (en) * 2023-08-16 2023-09-15 深圳市益心达医学新技术有限公司 Automatic blood sampling method, device, equipment and storage medium based on image recognition
CN116746926B (en) * 2023-08-16 2023-11-10 深圳市益心达医学新技术有限公司 Automatic blood sampling method, device, equipment and storage medium based on image recognition

Also Published As

Publication number Publication date
CN114176514B (en) 2023-08-29

Similar Documents

Publication Publication Date Title
US9317761B2 (en) Method and an apparatus for determining vein patterns from a colour image
CN110223280B (en) Venous thrombosis detection method and venous thrombosis detection device
CN110675335B (en) Superficial vein enhancement method based on multi-resolution residual error fusion network
CN110751636B (en) Fundus image retinal arteriosclerosis detection method based on improved coding and decoding network
Jiang et al. Skin lesion segmentation based on multi-scale attention convolutional neural network
CN114066884B (en) Retinal blood vessel segmentation method and device, electronic device and storage medium
Cheng et al. High-resolution photoacoustic microscopy with deep penetration through learning
Sobhaninia et al. Localization of fetal head in ultrasound images by multiscale view and deep neural networks
CN114494296A (en) Brain glioma segmentation method and system based on fusion of Unet and Transformer
Rajathi et al. Varicose ulcer (C6) wound image tissue classification using multidimensional convolutional neural networks
CN114176514B (en) Vein blood vessel identification positioning method and system based on near infrared imaging
CN116503607B (en) CT image segmentation method and system based on deep learning
CN114511581B (en) Multi-task multi-resolution collaborative esophageal cancer lesion segmentation method and device
CN116645380A (en) Automatic segmentation method for esophageal cancer CT image tumor area based on two-stage progressive information fusion
CN114581474A (en) Automatic clinical target area delineation method based on cervical cancer CT image
Bhattacharya et al. A new approach to automated retinal vessel segmentation using multiscale analysis
CN110443217A (en) One kind being based on multispectral fingerprint method for anti-counterfeit and system
US20230225702A1 (en) Real-time image analysis for vessel detection and blood flow differentiation
CN112242193B (en) Automatic blood vessel puncture method based on deep learning
Taha et al. Digital Vein Mapping Using Augmented Reality.
Zhan et al. Recognition of angiographic atherosclerotic plaque development based on deep learning
CN116309593B (en) Liver puncture biopsy B ultrasonic image processing method and system based on mathematical model
CN109447956A (en) A kind of blood vessel relative width calculation method and system
CN116309558B (en) Esophageal mucosa IPCLs vascular region segmentation method, equipment and storage medium
Singh Diagnosis of skin cancer using novel computer vision and deep learning techniques

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant