CN117398065A - Blood vessel body surface real-time naked eye visualization method and device based on photoacoustic imaging


Info

Publication number
CN117398065A
Authority
CN
China
Prior art keywords
photoacoustic
convolution layer
photoacoustic imaging
imaging
image
Prior art date
Legal status
Pending
Application number
CN202311185857.0A
Other languages
Chinese (zh)
Inventor
杨思华
潘树
罗雨溪
Current Assignee
South China Normal University
Original Assignee
South China Normal University
Priority date
Filing date
Publication date
Application filed by South China Normal University
Priority to CN202311185857.0A
Publication of CN117398065A


Classifications

    • A61B 5/0095 Detecting, measuring or recording by applying one single type of energy and measuring its conversion into another type of energy by applying light and detecting acoustic waves, i.e. photoacoustic measurements
    • A61B 5/0033 Features or image-related aspects of imaging apparatus classified in A61B 5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room
    • A61B 5/4887 Locating particular structures in or on the body
    • A61B 5/489 Blood vessels
    • A61B 90/361 Image-producing devices, e.g. surgical cameras
    • A61B 90/37 Surgical systems with images on a monitor during operation
    • G06N 3/045 Combinations of networks
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G06T 17/00 Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 7/0012 Biomedical image inspection
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/33 Determination of transform parameters for the alignment of images (image registration) using feature-based methods
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06V 10/82 Arrangements for image or video recognition or understanding using neural networks
    • A61B 2090/373 Surgical systems with images on a monitor during operation using light, e.g. by using optical scanners
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30101 Blood vessel; Artery; Vein; Vascular
    • G06V 2201/03 Recognition of patterns in medical or anatomical images

Abstract

The invention discloses a method and device for real-time naked-eye visualization of blood vessels on the body surface based on photoacoustic imaging. The method comprises: generating pulsed laser light and focusing it into a line spot on the tissue surface; collecting the generated photoacoustic signals and processing and reconstructing them on a computer to obtain a photoacoustic vessel image; projecting the reconstructed photoacoustic vessel image onto the tissue surface with a projector combined with an approximate-ellipse-fitting curved-surface projection algorithm; and acquiring the projected image on the tissue surface in real time with an RGBD camera, transmitting it to the computer, solving the pose of the tissue surface in real time with a learning-based target tracking algorithm, and performing tracked projection of the vessel image when the tissue surface moves involuntarily. The invention realizes real-time naked-eye visualization of blood vessels on the body surface through photoacoustic imaging, with large imaging depth, high resolution, and a large and flexible scanning range; by adopting a target tracking algorithm, the vessel image can be projected in situ on the tissue surface in real time while the tissue moves, making the real-time visualization more accurate.

Description

Blood vessel body surface real-time naked eye visualization method and device based on photoacoustic imaging
Technical Field
The invention belongs to the technical fields of photoacoustic imaging and computer vision, and particularly relates to a method and device for real-time naked-eye visualization of blood vessels on the body surface based on photoacoustic imaging.
Background
Accurate localization of blood vessels is critical in vascular surgery. Many vessel imaging modalities are currently available, such as Doppler ultrasound, computed tomography, magnetic resonance angiography, and transmission near-infrared imaging. Among these, Doppler ultrasound has low sensitivity to microvasculature and cannot produce high-resolution microvascular images. Computed tomography requires intravenous injection of a contrast agent, to which the patient may be allergic, causing complications, and the modality uses ionizing radiation that may harm the human body. Magnetic resonance angiography likewise requires intravenous contrast agents, which can cause complications. Transmission near-infrared imaging can image vessels non-destructively, but its imaging depth is shallow and it is mostly used for superficial skin veins. Multi-element photoacoustic computed tomography can image blood vessels non-destructively: because blood vessels specifically absorb light, photoacoustic imaging has an inherent advantage for vessels and can image them at high resolution. Moreover, photoacoustic computed tomography uses a line spot to excite photoacoustic signals, giving deep penetration, and its multi-element signal-reception scheme provides a large imaging range, so it combines large imaging depth with high resolution.
In traditional image navigation, the image is displayed on a 2D screen, separated from the real tissue surface, and combining the vessel image with the real tissue depends entirely on the experience of the medical staff. As technology has developed, several devices have been built that combine the vessel image directly with the tissue surface so that vessels can be visualized on the body surface. However, current vessel body-surface visualization devices suffer from shallow imaging depth and low resolution. Moreover, for vessel projection onto curved tissue, even devices that use a camera for positioning do not handle curved-surface projection, so the vessel projection on curved surfaces is not accurate enough.
Patent CN 104665766 A discloses a portable venous vessel visualizer. It uses an infrared imaging device to image venous vessels and a projector to project them onto the skin surface for visualization. Although the instrument is convenient to use, its imaging method is infrared imaging; because of its flood illumination and diffuse-reflection reception, this modality has shallow imaging depth and low resolution, so the device can only image superficial skin veins. Furthermore, without a target positioning device, the accuracy of the vessel projection may be low.
Patent CN 112773333 A discloses a portable vessel visualization device. It uses near-infrared imaging to image superficial skin veins and a projector to project the vessel image onto the skin surface, while a camera module is used for positioning. The near-infrared imaging it adopts likewise has shallow imaging depth and poor resolution; in addition, although a camera is used for positioning, the device does not adopt target tracking or a curved-surface projection strategy to improve projection accuracy.
Patent CN 104116496 A discloses a medical three-dimensional venous vessel augmented-reality device and method. It designs a pair of glasses that fuse the vein image with the tissue surface, so the vein image can be visualized on the tissue by wearing the glasses. One disadvantage of this approach is that the augmented-reality image is not visible to the naked eye: the vessel overlay is presented only to the wearer and cannot be shared with other members of the surgical team. In addition, the patent addresses only venous vessel visualization and says nothing about deep microvascular networks.
Patent CN 210842997 U discloses a clinical blood-sampling pad, and patent CN 214632123 U discloses a near-infrared vessel projector. Both also employ near-infrared imaging, with its disadvantages of shallow imaging depth and low resolution.
Disclosure of Invention
The main object of the present invention is to provide a method and device for real-time naked-eye visualization of blood vessels on the body surface based on photoacoustic imaging, addressing the shallow imaging depth, poor imaging resolution, and inaccurate naked-eye vessel visualization of current body-surface vessel visualization technologies.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
the invention provides a blood vessel body surface real-time naked eye visualization method based on photoacoustic imaging, which comprises the following steps of:
generating pulsed laser light and focusing it through a photoacoustic imaging probe to form a light spot on the tissue surface;
collecting the generated photoacoustic signals through the photoacoustic imaging probe, and processing and reconstructing them on a computer to obtain a photoacoustic vessel image;
projecting the reconstructed photoacoustic vessel image onto the tissue surface by a projector combined with an approximate-ellipse-fitting curved-surface projection algorithm;
acquiring the projected image on the tissue surface in real time through an RGBD camera and transmitting it to the computer, solving the pose of the tissue surface in real time with a learning-based target tracking algorithm, and performing tracked projection of the vessel image when the tissue surface moves involuntarily.
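The text does not name the reconstruction algorithm used to turn the photoacoustic signals into a vessel image; for multi-element photoacoustic tomography, delay-and-sum beamforming is a common baseline. The following is a minimal sketch under assumed parameters (a linear array at z = 0 and a homogeneous speed of sound), not the patent's actual method:

```python
import numpy as np

def delay_and_sum(signals, elem_x, grid_x, grid_z, fs, c=1540.0):
    """Minimal delay-and-sum reconstruction for a linear transducer array.

    signals: (n_elements, n_samples) photoacoustic RF data
    elem_x:  (n_elements,) lateral element positions in meters (array at z = 0)
    grid_x, grid_z: 1-D pixel coordinates in meters
    fs: sampling rate in Hz; c: assumed speed of sound in m/s
    """
    n_elem, n_samp = signals.shape
    image = np.zeros((grid_z.size, grid_x.size))
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            # one-way time of flight from pixel (x, z) to each element
            dist = np.sqrt((elem_x - x) ** 2 + z ** 2)
            idx = np.round(dist / c * fs).astype(int)
            valid = idx < n_samp
            # coherent sum of the delayed samples across elements
            image[iz, ix] = signals[np.arange(n_elem)[valid], idx[valid]].sum()
    return image
```

A synthetic point absorber reconstructs to a bright spot at its true location, which is a quick sanity check for the delay calculation.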
As a preferable technical scheme, generating the pulsed laser and focusing it through the photoacoustic imaging probe to form a light spot on the tissue surface specifically comprises:
the computer controls the pulse controller to emit a pulse signal that drives the pulsed laser to generate pulsed laser light; the generated light is coupled into an optical fiber bundle through a fiber coupler, the fiber bundle is connected to the photoacoustic imaging probe, and after passing through the probe the pulsed laser forms a focused line spot on the tissue surface.
As a preferable technical scheme, the approximate-ellipse-fitting curved-surface projection algorithm specifically comprises:
performing three-dimensional surface reconstruction of the tissue surface with the RGBD camera;
mathematically modeling the three-dimensional surface model;
solving the coordinates of the edge vertex and of the highest point of the imaging area of the three-dimensional surface model, and from these coordinates computing the per-axis distances x, y and z between the two points in three-dimensional space; taking x as the semi-major axis and z as the semi-minor axis of an ellipse to establish an ellipse model; fitting the perimeter of the ellipse by an approximate ellipse-fitting method, the fitted arc being the true curved-surface distance from the imaging-area edge vertex to the highest point; and scaling the projected vessel image by the ratio of the elliptical arc length to the major axis, thereby realizing the projection of the two-dimensional vessel image onto the curved tissue surface.
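The arc-length correction above can be sketched as follows, assuming Ramanujan's approximation for the ellipse perimeter and taking a quarter of the perimeter as the arc from the edge vertex to the highest point; the patent does not specify which approximation formula it uses, so both choices are assumptions:

```python
import math

def curved_scale_factor(a, b):
    """Scale factor for projecting a flat vessel image onto a curved surface.

    a: the patent's x, treated here as the ellipse semi-major axis (assumption)
    b: the patent's z, treated here as the semi-minor axis (assumption)
    Uses Ramanujan's approximation of the ellipse perimeter; one quarter of
    the perimeter stands in for the true arc over the tissue surface.
    """
    h = ((a - b) / (a + b)) ** 2
    perimeter = math.pi * (a + b) * (1 + 3 * h / (10 + math.sqrt(4 - 3 * h)))
    arc = perimeter / 4.0   # quarter arc from edge vertex to highest point
    return arc / a          # ratio of curved edge length to the axis length

def stretch_width(width_px, a, b):
    # widen the projected image so it covers the true curved distance
    return int(round(width_px * curved_scale_factor(a, b)))
```

For a flat surface (b approaching 0) the factor tends to 1, i.e. no stretching, while a hemispherical bulge (a = b) gives pi/2, matching the quarter-circle arc over its radius.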
As a preferable technical solution, the learning-based target tracking algorithm specifically includes:
an input layer for inputting an image;
a target recognition network that extracts features through a MobileNet network, specifically: three additional convolution layers of 1x1, 3x3 and 5x5 are added before the original 1x1 convolution layer of the MobileNet network, and one pooling layer is added after the original 1x1 convolution layer to adjust the size and channel number of the feature map;
a feature extraction network that extracts feature points through a CNN, specifically: on the basis of the original 3x3 convolution layer of the CNN, a 3x3 convolution layer, a 5x5 convolution layer, and two pooling layers are added in parallel. One output path of the original 3x3 convolution layer is connected to a fully connected layer through its 3x3 convolution layer, and another output path is connected to the fully connected layer through the newly added 3x3 convolution layer and a pooling layer; the output of the newly added 3x3 convolution layer is also connected to the fully connected layer through the newly added 5x5 convolution layer and a pooling layer;
an attention mechanism in which the output feature maps of the target recognition network and the feature extraction network are fed respectively into two attention modules; the attention mechanism uses a gating mechanism to adjust the importance of the feature maps.
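The gating step is not specified in detail; a squeeze-and-excitation-style channel gate is one plausible reading of "a gating mechanism to adjust the importance of the feature map". A minimal NumPy sketch, where the gate weights `w` and bias `b` stand in for parameters that would be learned in practice:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_attention(feat, w, b):
    """SE-style channel gating, sketched as one reading of the patent's
    gating mechanism (the exact formulation is not given in the text).

    feat: (C, H, W) feature map; w: (C, C) gate weights; b: (C,) gate bias.
    """
    s = feat.mean(axis=(1, 2))       # global average pool -> (C,)
    g = sigmoid(w @ s + b)           # per-channel gate in (0, 1)
    return feat * g[:, None, None]   # reweight channel importance
```

Because the gate is bounded in (0, 1), each channel of the feature map is attenuated rather than amplified, which keeps the downstream pose-solving inputs well scaled.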
As a preferable technical scheme, the device calibration method for the projector and the RGBD camera specifically comprises:
using a custom standard photoacoustic imaging phantom, extracting feature points separately from the sample image acquired by the RGBD camera and from the photoacoustic image reconstructed after imaging, and completing calibration by solving the transformation relation between the corresponding feature points of the two images; the custom standard photoacoustic imaging phantom is made by placing carbon rods in agar in a checkerboard pattern.
As a preferable technical scheme, no markers are used for registration after photoacoustic imaging: the RGBD camera acquires the tissue surface in an arbitrary pose and transmits it to the computer, where feature points of the tissue surface in that pose are extracted, the registration relation between the vessel image and the tissue surface is estimated, an affine transformation is applied to the projected vessel image, and the image is finally projected by the projector.
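The affine transformation applied to the projected vessel image could be estimated from matched feature points by ordinary least squares; a sketch under that assumption (the feature detector and matcher themselves are out of scope here):

```python
import numpy as np

def fit_affine_2d(src, dst):
    """Least-squares 2-D affine transform A (2x3) with dst ~ A @ [src; 1].

    src, dst: (N, 2) matched feature points, N >= 3. Only the estimation
    step is shown; the patent does not specify how matches are obtained.
    """
    n = len(src)
    X = np.hstack([src, np.ones((n, 1))])          # (N, 3) homogeneous points
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)    # solve X @ A.T = dst
    return A.T                                     # (2, 3) affine matrix

def warp_points(A, pts):
    return np.hstack([pts, np.ones((len(pts), 1))]) @ A.T
```

Applying the recovered affine matrix to the vessel image (e.g. per-pixel or via any image-warping routine) re-registers the projection to the tissue surface in its current pose.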
The invention also provides a device for real-time naked-eye visualization of blood vessels on the body surface based on photoacoustic imaging, comprising a pulsed-laser generating device, a photoacoustic imaging probe, an acquisition and amplification circuit, a computer, a projector, and an RGBD camera;
the pulsed-laser generating device is connected to the photoacoustic imaging probe and is used for generating pulsed laser light and focusing it through the probe to form a light spot on the tissue surface;
the photoacoustic imaging probe is also used for collecting the generated photoacoustic signals and transmitting them to the computer through the acquisition and amplification circuit;
the computer is used for processing and reconstructing the photoacoustic vessel image and is loaded with the approximate-ellipse-fitting curved-surface projection algorithm and the learning-based target tracking algorithm;
the projector is used for projecting the reconstructed photoacoustic vessel image onto the tissue surface in combination with the approximate-ellipse-fitting curved-surface projection algorithm;
the RGBD camera is used for registering the projected photoacoustic vessel image with the tissue surface in combination with the learning-based target tracking algorithm, and for re-projecting when the tissue surface moves involuntarily.
As a preferable technical scheme, the pulsed-laser generating device comprises a pulse controller, a pulsed laser, a fiber coupler, and an optical fiber bundle connected in sequence; the computer controls the pulse controller to emit a pulse signal that drives the pulsed laser to generate pulsed laser light; the generated light is coupled into the fiber bundle through the fiber coupler, the bundle is connected to the photoacoustic imaging probe, and after passing through the probe the pulsed laser is focused on the tissue surface to form a line spot.
As a preferable technical scheme, the photoacoustic imaging probe comprises a cylindrical lens, reflecting mirrors, and a multi-element ultrasonic transducer array. Light from the fiber bundle is focused into a line spot by the cylindrical lens and, after two successive 45-degree reflections by the mirrors, enters the tissue surface vertically; the multi-element transducer array sits beside the cylindrical lens, directly above the focused line spot, to receive the generated photoacoustic signals. The cylindrical lens is mounted in a precision slide-rail bracket, and the spot size of the focused line is adjusted by changing the distance between the slide rail and the fiber bundle, thereby tuning the imaging depth and resolution. The multi-element transducer array consists of 128 ultrasonic transducers with a center frequency of 10 MHz.
As a preferable technical scheme, the approximate-ellipse-fitting curved-surface projection algorithm specifically comprises:
performing three-dimensional surface reconstruction of the tissue surface with the RGBD camera;
mathematically modeling the three-dimensional surface model;
solving the coordinates of the edge vertex and of the highest point of the imaging area of the three-dimensional surface model, and from these coordinates computing the per-axis distances x, y and z between the two points in three-dimensional space; taking x as the semi-major axis and z as the semi-minor axis of an ellipse to establish an ellipse model; fitting the perimeter of the ellipse by an approximate ellipse-fitting method, the fitted arc being the true curved-surface distance from the imaging-area edge vertex to the highest point; and scaling the projected vessel image by the ratio of the elliptical arc length to the major axis, thereby realizing the projection of the two-dimensional vessel image onto the curved tissue surface;
the learning-based target tracking algorithm includes:
an input layer for inputting an image;
a target recognition network that extracts features through a MobileNet network, specifically: three additional convolution layers of 1x1, 3x3 and 5x5 are added before the original 1x1 convolution layer of the MobileNet network, and one pooling layer is added after the original 1x1 convolution layer to adjust the size and channel number of the feature map;
a feature extraction network that extracts feature points through a CNN, specifically: on the basis of the original 3x3 convolution layer of the CNN, a 3x3 convolution layer, a 5x5 convolution layer, and two pooling layers are added in parallel. One output path of the original 3x3 convolution layer is connected to a fully connected layer through its 3x3 convolution layer, and another output path is connected to the fully connected layer through the newly added 3x3 convolution layer and a pooling layer; the output of the newly added 3x3 convolution layer is also connected to the fully connected layer through the newly added 5x5 convolution layer and a pooling layer;
an attention mechanism in which the output feature maps of the target recognition network and the feature extraction network are fed respectively into two attention modules; the attention mechanism uses a gating mechanism to adjust the importance of the feature maps.
Compared with the prior art, the invention has the following advantages and beneficial effects:
(1) Deeper imaging depth and higher imaging resolution: photoacoustic computed tomography uses large-spot irradiation with deep penetration, and by exploiting the specific light absorption of blood vessels it achieves high detection sensitivity to vessels and microvessels;
(2) Larger and more flexible scanning range: the probe can be scanned either by a mechanical scanning structure or by hand, so the imaging mode is flexible and the imaging range is not limited;
(3) More accurate real-time visualization of vessels on the body surface: the tissue surface is recognized by a deep-learning-based target tracking algorithm and its features are extracted by a customized feature extraction network, so pose solving is more accurate when the tissue surface moves involuntarily and the vessel image is projected in situ on the moving tissue in real time; combining the approximate-ellipse curved-surface algorithm further improves the accuracy of vessel projection on curved tissue.
(4) Vessel localization with three-dimensional depth information: fusing the three-dimensional model of the surgical surface with the three-dimensional photoacoustic vessels provides the positional and structural relation between the surgical surface and the arm vessels, as well as the depth of subcutaneous vessels.
(5) Simpler installation and deployment: real-time vessel visualization on the body surface requires only one commercial projector and one commercial RGBD camera.
(6) Freer vessel visualization: the vessel image can be reused after a single completed photoacoustic imaging session, real-time visualization needs no additional fixation device, and because the target tracking algorithm recognizes any patient pose, the vessel image can be accurately visualized on the body surface in real time with the naked eye.
Drawings
Fig. 1 is a schematic structural diagram of a blood vessel body surface real-time naked eye visualization device based on photoacoustic imaging according to an embodiment of the present invention;
fig. 2 is a schematic structural view of a photoacoustic imaging probe according to an embodiment of the present invention;
FIG. 3 is a flow chart of a target tracking algorithm according to an embodiment of the present invention;
FIG. 4 is a schematic flow chart of an approximate elliptic surface projection algorithm according to an embodiment of the present invention;
FIG. 5 is a schematic view of an elliptical model according to an embodiment of the present invention;
fig. 6 is a schematic diagram of a neural network structure of a target tracking algorithm according to an embodiment of the present invention.
Reference numerals: 1. pulse controller; 2. pulsed laser; 3. pulsed laser beam; 4. optical fiber coupler; 5. optical fiber bundle; 6. photoacoustic imaging probe; 6-1. cylindrical lens; 6-2. reflecting mirror; 6-3. laser beam; 6-4. multi-element ultrasonic transducer; 6-5. precision slide rail; 7. focused line spot; 8. tissue surface; 9. amplifying circuit; 10. data acquisition system; 11. computer; 12. projector; 13. RGBD camera.
Detailed Description
In order to enable those skilled in the art to better understand the present application, the following description will make clear and complete descriptions of the technical solutions in the embodiments of the present application with reference to the accompanying drawings in the embodiments of the present application. It will be apparent that the described embodiments are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
Example 1
As shown in fig. 1, this embodiment provides a device for real-time naked-eye visualization of blood vessels on the body surface based on photoacoustic imaging, comprising a pulse controller 1, a pulsed laser 2, a pulsed laser beam 3, an optical fiber coupler 4, an optical fiber bundle 5, a photoacoustic imaging probe 6, a focused line spot 7, a tissue surface 8, an amplifying circuit 9, a data acquisition system 10, a computer 11, a projector 12, and an RGBD camera 13. The computer 11 communicates with the pulse controller 1 and controls it to generate pulse signals that drive the pulsed laser 2 to emit the pulsed laser beam 3. The beam is coupled into the optical fiber bundle 5 through the optical fiber coupler 4; the bundle is connected to the photoacoustic imaging probe 6, forming the focused line spot 7 on the tissue surface 8 to excite photoacoustic signals. The excited signals are received by the probe, amplified by the amplifying circuit 9, collected and stored by the data acquisition system 10, and finally reconstructed into a photoacoustic vessel image in the computer 11. After one complete imaging pass, the projector 12 combined with the approximate-ellipse-fitting algorithm can project the photoacoustic vessel image onto the tissue surface 8 in any pose, while the RGBD camera 13 combined with the deep-learning-based target tracking algorithm performs vessel-image registration and target tracking, so that the vessel image remains accurately projected on the tissue surface 8 even when it moves involuntarily, achieving real-time naked-eye visualization.
Further, as shown in fig. 2, the photoacoustic imaging probe 6 comprises a cylindrical lens 6-1, two reflectors 6-2 and a 128-element multi-array-element ultrasonic transducer 6-4. The laser beam 6-3 emerging from the optical fiber bundle is focused by the cylindrical lens 6-1 and then reflected by the reflectors 6-2, so that the focused line light spot 7 lies directly under the multi-array-element ultrasonic transducer 6-4. The cylindrical lens 6-1 is mounted on a precision slide rail 6-5; adjusting the slide rail 6-5 changes the distance between the cylindrical lens 6-1 and the optical fiber bundle and thereby the spot size of the focused line light spot 7, allowing different imaging depths and imaging resolutions.
Further, the multi-array-element ultrasonic transducer array consists of 128 ultrasonic transducers with a center frequency of 10 MHz.
Further, when the photoacoustic imaging probe 6 performs photoacoustic imaging of tissue, different imaging modes can be selected according to the scene and requirements: the tissue can be scanned in a handheld mode, or the photoacoustic imaging probe 6 can be mounted on a two-dimensional motorized stage for mechanical scanning imaging.
Further, the deep-learning-based target tracking algorithm comprises two networks, a target recognition network and a feature extraction network; its flow is shown in fig. 3. The RGBD camera 13 first captures images of the tissue surface 8 at a frame rate of 30 fps and inputs them into the target recognition network, which is obtained by training an improved lightweight MobileNet network on tissue-surface and non-tissue-surface samples. This network detects and identifies the tissue surface 8, focusing attention on the tissue surface area and excluding interference from non-tissue surfaces. The camera image is then input into the feature extraction network to extract features of the tissue surface 8 region; this network is obtained by customized training of an improved convolutional neural network (CNN) on feature points of the tissue surface 8, with the aim of overcoming the weak texture of skin tissue. After feature extraction, the pose is solved and evaluated: if the pose has changed, the projected image is transformed so that the vessel image remains accurately registered on the tissue surface 8.
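The detect → extract → solve-pose → re-project flow described above can be sketched as a per-frame loop. All callables below are hypothetical stand-ins for the trained networks and the pose solver, not the actual implementation, and the pose-change threshold is an illustrative assumption:

```python
import numpy as np

def track_and_project(frames, detect_tissue, extract_features, solve_pose,
                      warp_projection, pose_threshold=1e-3):
    """Per-frame tracking loop: detect the tissue surface, extract features,
    solve the pose, and re-warp the projected vessel image only when the
    pose has changed by more than pose_threshold."""
    last_pose = None
    for frame in frames:
        roi = detect_tissue(frame)        # target recognition network
        if roi is None:
            continue                      # no tissue surface in this frame
        feats = extract_features(roi)     # feature extraction network
        pose = solve_pose(feats)          # pose estimation from features
        if last_pose is None or np.linalg.norm(pose - last_pose) > pose_threshold:
            warp_projection(pose)         # transform the projected image
            last_pose = pose
```

With stub callables this reproduces the decision logic: the projection is registered on the first frame and thereafter re-warped only when the surface actually moves.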
The improved lightweight MobileNet network is shown in the left dashed box of fig. 6. Specifically, three additional convolution layers of 1x1, 3x3 and 5x5 are added in front of the original 1x1 convolution layer of the MobileNet network, and one pooling layer is added behind it (both shown as dark gray boxes on the left side of fig. 6) to adjust the size and channel number of the feature map. The three newly added convolution layers enhance feature extraction and target recognition performance, and the pooling layer helps prevent overfitting.
the improved convolutional neural network CNN is shown in the right dashed box of fig. 6, specifically: on the basis of the original 3x3 convolution layer of the CNN network, a 3x3 convolution layer, a 5x5 convolution layer and two pooling layers (shown as dark gray boxes on the right side of fig. 6) are added in parallel, namely one path of output of the original 3x3 convolution layer is connected to the full connection layer through the 3x3 convolution layer, and the other path of output of the original 3x3 convolution layer is connected to the full connection layer through the newly added 3x3 convolution layer and pooling layers; the output of the newly added 3x3 convolution layer is also connected to the full connection layer through the newly added 5x5 convolution layer and the pooling layer; the newly added 2 convolution layers can better extract the characteristics, and the newly added 2 pooling layers can promote the maintenance of scale invariance and rotation invariance when extracting the characteristics from the target.
Further, the approximate-ellipse curved-surface projection algorithm comprises three-dimensional surface imaging of the tissue surface 8, accurate mathematical modeling and approximate ellipse fitting; its flow chart is shown in fig. 4. First, the RGBD camera 13 acquires RGB images and depth maps, from which a dense three-dimensional surface reconstruction of the tissue surface 8 is performed. From the reconstructed model, the three-dimensional coordinates of the edge vertex A of the imaging area and the highest point B of the model are readily obtained, and the x, y and z distances between A and B in three-dimensional space are computed from these coordinates. An ellipse model is then built with x as the semi-major axis and z as the semi-minor axis; the specific geometric model is shown in fig. 5. The perimeter of the ellipse is solved with an approximate ellipse-fitting method, and one quarter of this perimeter is the curve distance from A to B in the two-dimensional vertical plane. Scaling the projected image by the ratio of the quarter-ellipse arc length to the semi-major axis then yields an accurate projection onto the curved tissue.
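The quarter-perimeter scaling step can be illustrated with Ramanujan's closed-form approximation of an ellipse perimeter; the patent does not specify which approximate fitting formula is used, so this is one plausible choice, and the distances are hypothetical values in millimetres:

```python
import math

def quarter_ellipse_arc(a: float, b: float) -> float:
    """Quarter-perimeter of an ellipse with semi-axes a and b, using
    Ramanujan's second approximation to the full perimeter."""
    h = ((a - b) / (a + b)) ** 2
    perimeter = math.pi * (a + b) * (1 + 3 * h / (10 + math.sqrt(4 - 3 * h)))
    return perimeter / 4

def projection_scale(a: float, b: float) -> float:
    """Ratio of the quarter-ellipse arc length (curved distance from A to B)
    to the semi-major axis a (flat distance): the stretch factor applied to
    the projected image so it lands correctly on the curved tissue."""
    return quarter_ellipse_arc(a, b) / a

# Hypothetical distances (mm) between edge vertex A and apex B:
x, z = 40.0, 10.0
scale = projection_scale(x, z)   # slightly above 1 for a gently curved surface
```

For a circular cross-section (a = b) the scale factor reduces to pi/2, the familiar arc-to-radius ratio of a quarter circle.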
Furthermore, the blood vessel body surface real-time naked eye visualization device based on photoacoustic imaging can additionally reconstruct the three-dimensional surface of the operative field through the RGBD camera 13. The reconstructed three-dimensional surface model can be fused with the three-dimensional photoacoustic vessel image to view, in the three-dimensional point-cloud space, the vessel image, the subcutaneous vessel positions, the arm surface position, their structural relationship and the vessel depth information, thereby assisting preoperative planning.
Further, the projector 12 and the RGBD camera 13 form a visual projection tracking device that can be freely placed at any height directly above the tissue surface 8 and is connected to the computer 11. The extent of the vascular body-surface visualization can be adjusted by changing the height above the tissue surface 8. After each height adjustment, configuration only requires a simple device calibration on the computer 11. The device calibration method is as follows:
the customized standard photoacoustic imaging sample and the reconstructed image after the photoacoustic imaging of the sample are adopted, the sample image acquired by the RGBD camera 13 and the characteristic points in the reconstructed photoacoustic image after the imaging are respectively extracted, and the calibration is completed by solving the transformation relation of the corresponding characteristic points on the sample image and the photoacoustic image; the customized standard photoacoustic imaging sample is prepared by placing 20 carbon rods with the length of 30mm and the diameter of 0.5mm in agar in a checkerboard shape.
Further, after photoacoustic imaging, no markers are needed for registration: the tissue surface 8 can be acquired by the RGBD camera 13 in any posture and transmitted to the computer 11, where feature points of the tissue surface 8 are extracted and the registration relationship between the vessel image and the tissue surface 8 is accurately estimated; an affine transformation is then applied to the projected vessel image, which is finally projected by the projector for accurate in-situ naked eye visualization on the skin surface.
It should be noted that the apparatus provided in the foregoing embodiment is described only in terms of the above division of functional modules; in practical applications, the functions may be allocated to different functional modules as needed, that is, the internal structure may be divided into different functional modules to perform all or part of the functions described above. The apparatus carries out the blood vessel body surface real-time naked eye visualization method based on photoacoustic high-depth high-resolution imaging described in the following embodiment.
Example 2
This embodiment provides a blood vessel body surface real-time naked eye visualization method based on photoacoustic imaging, applicable to the blood vessel body surface real-time naked eye visualization device of the above embodiment, comprising the following steps:
S1, generating a pulse laser beam 3 and focusing it through the photoacoustic imaging probe 6 onto the tissue surface 8 to form a line light spot, specifically:
the pulse controller 1 is controlled by the computer 11 to send a pulse signal that drives the pulse laser 2 to generate the pulse laser beam 3; the beam is coupled into the optical fiber bundle 5 through the optical fiber coupler 4, the bundle is connected to the photoacoustic imaging probe 6, and after passing through the probe the beam forms the focused line light spot 7 on the tissue surface 8;
S2, acquiring the generated photoacoustic signals through the photoacoustic imaging probe 6, and processing and reconstructing them on the computer 11 to obtain a photoacoustic vessel image;
S3, projecting the reconstructed photoacoustic vessel image onto the tissue surface 8 through the projector 12 in combination with the approximate-ellipse-fitting curved-surface projection algorithm;
further, the approximate ellipse fitting curved surface projection algorithm specifically comprises:
three-dimensional surface reconstruction of the tissue surface 8 using the RGBD camera 13;
mathematical modeling is carried out on the three-dimensional surface model;
solving the coordinates of the edge vertex and of the highest point of the imaging area of the three-dimensional surface model, computing from these coordinates the distances x, y and z between the two points in three-dimensional space, and building an ellipse model with x as the semi-major axis and z as the semi-minor axis; fitting the ellipse perimeter by an approximate ellipse-fitting method, the fitted curved edge being the true curved-edge distance from the edge vertex of the imaging area to the highest point; and scaling the projected vessel image by the ratio of the elliptical curved-edge length to the semi-major axis, thereby realizing the projection of the two-dimensional vessel image onto the curved tissue;
S4, acquiring projection images of the tissue surface 8 in real time through the RGBD camera 13 and transmitting them to the computer 11, solving the pose of the tissue surface 8 in real time with the learning-based target tracking algorithm, and performing vessel image tracking projection when the tissue surface 8 moves involuntarily.
Further, the learning-based target tracking algorithm specifically includes:
(1) An input layer for inputting an image;
(2) The target recognition network performs feature extraction through the MobileNet network, as shown in the left dashed box of fig. 6. Specifically, three additional convolution layers of 1x1, 3x3 and 5x5 are added in front of the original 1x1 convolution layer of the MobileNet network, and one pooling layer is added behind it (both shown as dark gray boxes on the left side of fig. 6) to adjust the size and channel number of the feature map; the three newly added convolution layers enhance feature extraction and target recognition performance, and the pooling layer helps prevent overfitting;
(3) The feature extraction network performs feature point extraction through the CNN network, as shown in the right dashed box of fig. 6. Specifically, a 3x3 convolution layer, a 5x5 convolution layer and two pooling layers (shown as dark gray boxes on the right side of fig. 6) are added in parallel to the original 3x3 convolution layer of the CNN: one output path of the original 3x3 convolution layer is connected to the fully connected layer, while the other path is connected to the fully connected layer through the newly added 3x3 convolution layer and a pooling layer; the output of the newly added 3x3 convolution layer is also connected to the fully connected layer through the newly added 5x5 convolution layer and a pooling layer. The two newly added convolution layers extract features more effectively, and the two newly added pooling layers help maintain scale and rotation invariance during feature extraction.
(4) The attention mechanism inputs the output feature maps of the target recognition network and the feature extraction network into two attention modules, respectively; the attention mechanism uses a gating mechanism to adjust the importance of the feature maps.
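A minimal sketch of channel gating, one common way to realize the gating-based attention described above; the actual module structure is not specified in the patent, and the gate parameters w and b are hypothetical learned values:

```python
import numpy as np

def gated_attention(feat: np.ndarray, w: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Gate a (C, H, W) feature map: pool each channel to a scalar, pass it
    through a learned linear gate plus sigmoid, and rescale the channel by
    the resulting importance weight in (0, 1)."""
    pooled = feat.mean(axis=(1, 2))                  # (C,) channel descriptors
    gate = 1.0 / (1.0 + np.exp(-(w * pooled + b)))   # sigmoid gate per channel
    return feat * gate[:, None, None]                # reweighted feature map
```

Channels whose gate saturates near 1 pass through unchanged, while low-gate channels are suppressed, which is the "importance adjustment" role the text assigns to the gating mechanism.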
Further, the projector 12 and the RGBD camera 13 form a visual projection tracking device that can be freely placed at any height directly above the tissue surface 8 and is connected to the computer 11. The extent of the vascular body-surface visualization can be adjusted by changing the height above the tissue surface 8. After each height adjustment, configuration only requires a simple device calibration on the computer 11. The device calibration method is as follows:
the customized standard photoacoustic imaging sample and the reconstructed image after the photoacoustic imaging of the sample are adopted, the sample image acquired by the RGBD camera 13 and the characteristic points in the reconstructed photoacoustic image after the imaging are respectively extracted, and the calibration is completed by solving the transformation relation of the corresponding characteristic points on the sample image and the photoacoustic image; the customized standard photoacoustic imaging sample is prepared by placing 20 carbon rods with the length of 30mm and the diameter of 0.5mm in agar in a checkerboard shape.
Further, after photoacoustic imaging, without using any marker for registration, the tissue surface 8 is acquired in an arbitrary posture by the RGBD camera 13 and transmitted to the computer 11; feature points of the tissue surface 8 in that posture are extracted in the computer 11, the registration relationship between the vessel image and the tissue surface 8 is estimated, an affine transformation is applied to the projected vessel image, and projection is finally performed by the projector.
Furthermore, only one complete photoacoustic imaging pass is required: the RGBD camera 13, combined with the target tracking algorithm, then continuously locates and tracks the tissue surface 8, so that the imaging result can always be used for naked eye visualization of body-surface blood vessels.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, they may be implemented using any one or a combination of the following techniques well known in the art: discrete logic circuits with logic gates for implementing logic functions on data signals, application-specific integrated circuits with suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and the like.
The above examples are preferred embodiments of the present invention, but the embodiments of the present invention are not limited thereto; any change, modification, substitution, combination or simplification made without departing from the spirit and principle of the present invention shall be regarded as an equivalent replacement and is included in the protection scope of the present invention.

Claims (10)

1. The blood vessel body surface real-time naked eye visualization method based on photoacoustic imaging is characterized by comprising the following steps of:
generating pulse laser and focusing the pulse laser on the surface of the tissue through a photoacoustic imaging probe to form a light spot;
acquiring generated photoacoustic signals through a photoacoustic imaging probe, and processing and reconstructing the photoacoustic signals through a computer to obtain photoacoustic vessel images;
projecting the reconstructed photoacoustic vessel image to the tissue surface by combining a projector with an approximate ellipse fitting curved surface projection algorithm;
the projection image of the tissue surface is acquired in real time through the RGBD camera and transmitted to the computer, the pose of the tissue surface is solved in real time by utilizing a target tracking algorithm based on learning, and when the tissue surface moves involuntarily, the vascular image tracking projection is carried out.
2. The method for visualizing a vascular body surface in real time with naked eyes based on photoacoustic imaging according to claim 1, wherein the generating pulsed laser and focusing the pulsed laser on the tissue surface via a photoacoustic imaging probe to form a light spot comprises:
the pulse controller is controlled by a computer to send out a pulse signal, so that the pulse laser is driven to generate pulse laser, the generated pulse laser is coupled into an optical fiber bundle through an optical fiber coupler, the optical fiber bundle is connected with the photoacoustic imaging probe, and a focusing line light spot is formed on the surface of tissue after the pulse laser passes through the photoacoustic imaging probe.
3. The photoacoustic imaging-based vessel body surface real-time naked eye visualization method according to claim 1, wherein the approximate ellipse fitting curved surface projection algorithm specifically comprises:
performing three-dimensional surface reconstruction on the tissue surface by using the RGBD camera;
mathematical modeling is carried out on the three-dimensional surface model;
solving the edge vertex coordinates and the highest-point coordinates of the imaging area of the three-dimensional surface model, solving from these coordinates the distances x, y and z between the two points in three-dimensional space, and establishing an ellipse model with x as the semi-major axis and z as the semi-minor axis; fitting the perimeter of the ellipse by an approximate ellipse-fitting method, the fitted curved edge being the true curved-edge distance from the edge vertex of the imaging area to the highest point; and transforming the projected blood vessel image in corresponding proportion by the ratio of the elliptical curved-edge length to the semi-major axis of the ellipse, thereby realizing the projection of the two-dimensional blood vessel image on the curved surface tissue.
4. The photoacoustic imaging-based vessel body surface real-time naked eye visualization method according to claim 1, wherein the learning-based target tracking algorithm specifically comprises:
an input layer for inputting an image;
the target recognition network for extracting the characteristics through the MobileNet network specifically comprises the following components: adding three additional convolution layers of 1x1, 3x3 and 5x5 before the original 1x1 convolution layer of the MobileNet network, and adding 1 pooling layer after the original 1x1 convolution layer to adjust the size and channel number of the feature map;
the feature extraction network for extracting feature points through the CNN network, specifically: a 3x3 convolution layer, a 5x5 convolution layer and two pooling layers are added in parallel to the original 3x3 convolution layer of the CNN network, wherein one output path of the original 3x3 convolution layer is connected to the fully connected layer, and the other path is connected to the fully connected layer through the newly added 3x3 convolution layer and a pooling layer; the output of the newly added 3x3 convolution layer is also connected to the fully connected layer through the newly added 5x5 convolution layer and a pooling layer;
the attention mechanism is used for respectively inputting the output feature graphs of the target recognition network and the feature extraction network into the two attention modules; the attention mechanism uses a gating mechanism approach to adjust the importance of the feature map.
5. The photoacoustic imaging-based vessel body surface real-time naked eye visualization method according to claim 1, wherein the projector and RGBD camera device calibration method specifically comprises:
the method comprises the steps of respectively extracting characteristic points in a sample image acquired by an RGBD camera and a photo-acoustic image reconstructed after imaging by adopting a customized standard photo-acoustic imaging sample and a photo-acoustic image reconstructed after imaging, and completing calibration by solving the transformation relation of the corresponding characteristic points on the sample image and the photo-acoustic image; the customized standard photoacoustic imaging sample is made by placing carbon rods in agar in a checkerboard shape.
6. The photoacoustic imaging-based vessel body surface real-time naked eye visualization method according to claim 1, wherein no marker is used as registration after photoacoustic imaging, the tissue surface is acquired by the RGBD camera in any posture and transmitted to a computer, characteristic points of the tissue surface in any posture are extracted in the computer, registration relation between the vessel image and the tissue surface is estimated, affine transformation is performed on the projected vessel image, and finally projection is performed through a projector.
7. The blood vessel body surface real-time naked eye visualization device based on photoacoustic imaging is characterized by comprising a pulse laser generating device, a photoacoustic imaging probe, an acquisition amplifying circuit, a computer, a projector and an RGBD camera;
the pulse laser generating device is connected with the photoacoustic imaging probe and is used for generating pulse laser and focusing the pulse laser on the tissue surface through the photoacoustic imaging probe to form a light spot;
the photoacoustic imaging probe is also used for collecting generated photoacoustic signals and transmitting the photoacoustic signals to a computer through a collecting and amplifying circuit;
the computer is used for processing and reconstructing the photoacoustic vessel image and is loaded with an approximate ellipse fitting curved surface projection algorithm and a learning-based target tracking algorithm;
the projector is used for projecting the reconstructed photoacoustic vessel image to the tissue surface by combining an approximate ellipse fitting curved surface projection algorithm;
the RGBD camera is used for registering the projected photoacoustic vessel image and the tissue surface in combination with a learning-based target tracking algorithm, and carrying out re-projection when the tissue surface moves involuntarily.
8. The photoacoustic imaging-based vessel body surface real-time naked eye visualization device according to claim 7, wherein the pulse laser generating device comprises a pulse controller, a pulse laser, an optical fiber coupler and an optical fiber bundle which are connected in sequence; the pulse controller is controlled by a computer to send out a pulse signal, so that the pulse laser is driven to generate pulse laser, the generated pulse laser is coupled into an optical fiber bundle through an optical fiber coupler, the optical fiber bundle is connected with the photoacoustic imaging probe, and the pulse laser is focused on the tissue surface to form a linear light spot after passing through the photoacoustic imaging probe.
9. The photoacoustic imaging-based vessel body surface real-time naked eye visualization device according to claim 7, wherein the photoacoustic imaging probe comprises a cylindrical lens, a reflecting mirror and a multi-array element ultrasonic transducer array, wherein the optical fiber bundle is focused into a linear light spot through the cylindrical lens, and vertically enters the tissue surface after being respectively reflected by 45 degrees through the two reflecting mirrors, and the multi-array element ultrasonic transducer array is positioned beside the cylindrical lens and right above the focusing linear light spot and is used for receiving the generated photoacoustic signals; the cylindrical lens is arranged in a precise slide rail bracket, and the spot size of a focus line spot is adjusted by adjusting the distance between the precise slide rail and the optical fiber bundle, so that the imaging depth and the imaging resolution are adjusted; the multi-array element ultrasonic transducer array consists of 128 ultrasonic transducers with main frequency of 10 MHz.
10. The photoacoustic imaging-based vessel body surface real-time naked eye visualization device according to claim 7, wherein the approximate ellipse fitting curved surface projection algorithm specifically comprises:
performing three-dimensional surface reconstruction on the tissue surface by using the RGBD camera;
mathematical modeling is carried out on the three-dimensional surface model;
solving the edge vertex coordinates and the highest-point coordinates of the imaging area of the three-dimensional surface model, solving from these coordinates the distances x, y and z between the two points in three-dimensional space, and establishing an ellipse model with x as the semi-major axis and z as the semi-minor axis; fitting the perimeter of the ellipse by an approximate ellipse-fitting method, the fitted curved edge being the true curved-edge distance from the edge vertex of the imaging area to the highest point; and transforming the projected blood vessel image in corresponding proportion by the ratio of the elliptical curved-edge length to the semi-major axis of the ellipse, thereby realizing the projection of the two-dimensional blood vessel image on the curved surface tissue;
the learning-based target tracking algorithm includes:
an input layer for inputting an image;
the target recognition network for extracting the characteristics through the MobileNet network specifically comprises the following components: adding three additional convolution layers of 1x1, 3x3 and 5x5 before the original 1x1 convolution layer of the MobileNet network, and adding 1 pooling layer after the original 1x1 convolution layer to adjust the size and channel number of the feature map;
the feature extraction network for extracting feature points through the CNN network, specifically: a 3x3 convolution layer, a 5x5 convolution layer and two pooling layers are added in parallel to the original 3x3 convolution layer of the CNN network, wherein one output path of the original 3x3 convolution layer is connected to the fully connected layer, and the other path is connected to the fully connected layer through the newly added 3x3 convolution layer and a pooling layer; the output of the newly added 3x3 convolution layer is also connected to the fully connected layer through the newly added 5x5 convolution layer and a pooling layer;
the attention mechanism is used for respectively inputting the output feature graphs of the target recognition network and the feature extraction network into the two attention modules; the attention mechanism uses a gating mechanism approach to adjust the importance of the feature map.
CN202311185857.0A 2023-09-14 2023-09-14 Blood vessel body surface real-time naked eye visualization method and device based on photoacoustic imaging Pending CN117398065A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311185857.0A CN117398065A (en) 2023-09-14 2023-09-14 Blood vessel body surface real-time naked eye visualization method and device based on photoacoustic imaging


Publications (1)

Publication Number Publication Date
CN117398065A true CN117398065A (en) 2024-01-16

Family

ID=89497004




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination