CN113303905A - Interventional operation simulation method based on video image feedback - Google Patents

Interventional operation simulation method based on video image feedback

Info

Publication number
CN113303905A
CN113303905A (application CN202110577825.XA)
Authority
CN
China
Prior art keywords
ablation
simulation
image
video
overflow
Prior art date
Legal status
Granted
Application number
CN202110577825.XA
Other languages
Chinese (zh)
Other versions
CN113303905B (en)
Inventor
肖煜东
刘军
冷浩群
Current Assignee
Second Xiangya Hospital of Central South University
Original Assignee
Second Xiangya Hospital of Central South University
Priority date
Filing date
Publication date
Application filed by Second Xiangya Hospital of Central South University filed Critical Second Xiangya Hospital of Central South University
Priority to CN202110577825.XA priority Critical patent/CN113303905B/en
Publication of CN113303905A publication Critical patent/CN113303905A/en
Application granted granted Critical
Publication of CN113303905B publication Critical patent/CN113303905B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/10 Computer-aided planning, simulation or modelling of surgical operations
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B23/00 Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes
    • G09B23/28 Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes for medicine
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/10 Computer-aided planning, simulation or modelling of surgical operations
    • A61B2034/101 Computer-aided simulation of surgical operations

Abstract

The invention provides an interventional operation simulation method based on video image feedback, which comprises the following steps: (1) constructing a human body simulation model, wherein the human body simulation model at least comprises a simulated tissue layer and a simulated organ, and at least the simulated organ is made of a transparent material; (2) arranging a video monitoring device for the ablation process at one side of the simulated organ, wherein the video monitoring device can acquire video images in at least two wavebands; (3) during a surgical simulation with the ablation device, the video monitoring device is triggered by an ablation switch of the ablation device to perform video acquisition in at least one of the two acquisition bands.

Description

Interventional operation simulation method based on video image feedback
Technical field:
the invention relates to the technical field of medical auxiliary training, in particular to an interventional operation simulation method based on video image feedback.
Background art:
Chronic viral hepatitis and cirrhosis both tend to progress eventually to primary liver cancer, for which surgical operation has long been the principal curative treatment. In recent years, with the development of multidisciplinary techniques, ablation therapy has become one of the important options for treating liver cancer. For early, solitary liver cancer, especially in patients in good physical condition, surgery remains the preferred approach; ablation therapy should be considered when a patient's condition does not permit surgery or when repeated surgery on a tumor is difficult. Ablation is a non-vascular minimally invasive treatment that eradicates or substantially destroys the tumor by chemical means or thermal energy, and for some liver cancers it achieves a therapeutic effect comparable to surgical resection or liver transplantation. The most common clinical applications are radiofrequency ablation and microwave ablation, which are suitable for tumors with a maximum diameter of no more than 3 cm. Ablation therapy is also suitable for patients who relapse after liver cancer resection, whose liver function is unsuitable for surgical resection, or whose tumor location makes removal particularly difficult.
Percutaneous tumor ablation induces tumor-cell necrosis and local inactivation of tumor tissue by percutaneous puncture combined with chemical ablation, thermal ablation or cryoablation under the guidance of medical imaging equipment such as ultrasound, CT or nuclear magnetic resonance. During treatment the imaging equipment provides operation navigation, so the tumor can be located precisely and destroyed while organ and tissue function is protected to the greatest extent; the procedure therefore features a small wound, good curative effect, short recovery period and mild complications. For liver cancer with a tumor diameter of less than 3 cm, the curative effect of ablation is well established; the wound is small, no laparotomy is needed, and the trauma of major surgery is avoided. Postoperative recovery is fast, patients can be discharged 1 to 2 days after the procedure, the impact on quality of life is small, safety is high, and the incidence of postoperative complications is lower than that of open abdominal surgery.
Although ablation has many advantages, it challenges the doctor: because the procedure cannot be observed directly with the naked eye, an auxiliary visualization tool is needed, yet there is no systematic practice tool for ablation operations. Some high-end manufacturers have introduced virtual training systems, but such systems can never fully substitute for a real object: they cannot provide a real operation target, a real hands-on experience or real feedback, and compared with practical practice they do little to train qualities such as the trainee's composure. Virtual training systems also usually require various body-sensing devices, sensors and the like, often supplied by foreign medical-equipment manufacturers, which are expensive and difficult to popularize.
In practice, doctors therefore often accumulate experience directly in the clinic, which not only undermines patients' confidence in their doctors but also easily leads to doctor-patient disputes once clinical problems or recurrence occur.
Therefore, a systematic ablation operation training method would effectively improve doctors' operating skill and allow them to accumulate sufficient experience before clinical operations.
Summary of the invention:
In view of the above problems in the prior art, the present invention provides a physical-object-based operation training method for microwave (or radiofrequency, etc.) ablation procedures that can realize direct video feedback. In the image processing, the invention adopts coaxial imaging, overflow correction, thermal-conductivity correction and other means, so that the simulated environment approximates the real environment as closely as possible.
In one aspect, the present invention provides a method for simulating an interventional operation based on video image feedback (or may also be referred to as a method for constructing an operation simulation environment), where the method includes:
(1) Constructing a human body simulation model, wherein the human body simulation model at least comprises a simulation tissue layer and a simulation organ, and at least the simulation organ is made of a transparent material;
(2) arranging a video monitoring device for an ablation process at one side of the simulated visceral organ, wherein the video monitoring device has a video image acquisition function of at least two wave bands;
(3) during a surgical simulation with the ablation device, the video monitoring device is triggered by an ablation switch of the ablation device to perform video acquisition in at least one of the two acquisition bands.
Preferably, a first band of the two bands is a visible light band or a shorter wavelength light band, and the second band is an infrared band.
Preferably, the simulation process includes a puncturing stage and an ablation stage, an ablation switch of the ablation device is in communication connection with the video monitoring device for mode switching, the puncturing stage adopts a first wave band for video acquisition, the ablation stage adopts a second wave band for video acquisition or acquires a fusion image of the first and second wave bands, and the fusion image is presented to a user.
Preferably, the fused image is formed as follows:
(3.1) extracting RGB values of the visible light image, and weighting the RGB values respectively, wherein the weighting coefficient of each value is less than or equal to 45% or 50%;
(3.2) acquiring an infrared image by using infrared camera equipment, and weighting the RGB value of the infrared image, wherein the weighting coefficient is more than or equal to 45% or 50%;
and (3.3) performing pixel-by-pixel fusion on each frame of the infrared image and the visible light image according to the synchronous signals, and adding the RGB values of the two images respectively.
Preferably, the R_fused, G_fused and B_fused values of each pixel in the fused image are compared with corresponding overflow thresholds respectively to determine the proportion of overflowing pixels among all pixels; when the proportion of overflowing pixels exceeds a preset threshold, the pixel values of the corresponding R, G or B item are multiplied by an anti-overflow coefficient, wherein the anti-overflow coefficient is:
[Formula image in the original: the anti-overflow coefficient, a function of the overflow threshold Q, the average pixel value A and the overflow ratio Y.]
where Q is the overflow threshold, A is the average pixel value, and Y is the overflow ratio. This coefficient reduces the pixel value of the corresponding item: the higher the overflow proportion, the larger the reduction, so overflow is effectively avoided. The overflow proportion is then recalculated iteratively; if overflow still occurs, the coefficient is reduced by 0.05-0.1 (i.e. 5%-10%) each time. If the overflow proportion still exceeds the limit after three reductions, the frame is judged abnormal and discarded, and the pixel value is replaced by an arithmetic-progression value extrapolated from the corresponding pixels of the preceding three frames. For example, with a pixel value range of [0, 255], if a pixel value is abnormal and the G values of that pixel in the preceding three frames are 172, 176 and 184, the average of the successive differences (stated as 7 in this example) is added and the current pixel value is set to 191. In this way the trend of the image is preserved and excessive distortion is avoided.
Preferably, the method further comprises: establishing a mapping between the pixels of the ablation monitoring device and temperature, and highlighting or marking, based on that mapping, the specific temperature values of interest to the doctor during the ablation process.
Preferably, the method further comprises: measuring the heat conductivity coefficient of the simulated organ material and the heat conductivity coefficient of the animal liver material, firstly weighting the infrared image based on the material, and then fusing, wherein the weighting mode is as follows:
[Formula image in the original: the corrected pixel value P' expressed in terms of the uncorrected pixel value P and the thermal properties λ1, λ2, c1, c2, ρ1, ρ2 defined below.]
where P' is the pixel value after correction, P is the pixel value before correction, λ1 and λ2 are the thermal conductivity of the measured animal tissue and of the simulation material, c1 and c2 are the specific heat of the measured animal tissue and of the simulation material, and ρ1 and ρ2 are the density of the measured animal tissue (such as liver) and of the simulation material, respectively.
Preferably, the method further comprises repairing the punctured simulated tissue layer with a repair patch, and removing and replacing the simulated organ by separating the detachable upper-body model and lower-body model.
Preferably, the video monitoring device adopts a coaxial light beam splitting collection mode to collect images of different wave bands.
In another aspect, the present invention provides an operation training system for minimally invasive ablation procedures. The operation training system comprises: a human body simulation model, an ablation device, an operating table, an ablation monitoring device and an ablation-needle state display device. The human body simulation model is arranged on the operating table and comprises at least an upper-body model; the ablation monitoring device is arranged on the side of the upper-body model close to the viscera, and the upper-body model contains a simulated organ made of a transparent material. The ablation monitoring device comprises a visible-light imaging device and an infrared thermal-imaging device, both directed at the target organ; the imaging devices, the thermal imager and the ablation-needle state display device are used to convey different types of images based on the control signals of the ablation device.
By adopting this operation training simulation method, the user can practice the procedure repeatedly. Through the video-based insertion-depth feedback, the user learns the correspondence between hand movements and the advance of the puncture needle, which makes it easier to control the amplitude of the movements, to understand the instrument motion those movements produce, and to develop a better feel for control. By triggering the ablation switch, the display switches between the visible image and the infrared image of the target organ, or between the visible image and the fused image, so that the ablation time and the temperature change of the ablation region are displayed visually and the user better understands the effect of the operation on the target tumor. Existing practice usually distinguishes the ablation state by a color change, but a color change cannot accurately reflect the ablation temperature; the thermal-conductivity correction, the temperature measurement with a temperature sensor and the mapping described here allow the temperature change caused by ablation to be reflected effectively and realistically.
Description of the drawings:
FIG. 1 is a schematic flow chart of the operation training method of the present invention.
FIG. 2 is a schematic diagram of the arrangement of the various devices in practice of the operation training method of the present invention.
Fig. 3 is a schematic view of an optical path configuration of an ablation monitoring apparatus used in the present invention.
Fig. 4 is a schematic structural view of a repair patch used in the present invention.
FIG. 5 is a schematic view of a preferred simulated tissue layer and inner substrate construction.
Detailed description of the embodiments:
as shown in fig. 1, the method for simulating an interventional operation based on video image feedback of the present invention comprises:
(1) constructing a human body simulation model, wherein the human body simulation model at least comprises a simulation tissue layer and a simulation organ, and at least the simulation organ is made of a transparent material;
(2) arranging a video monitoring device for an ablation process at one side of the simulated visceral organ, wherein the video monitoring device has a video image acquisition function of at least two wave bands;
(3) during a surgical simulation with the ablation device, the video monitoring device is controlled by an ablation switch of the ablation device to perform video acquisition in at least one of the two acquisition bands. As shown, before ablation starts, i.e. during the puncture stage, images are acquired in visible light; after ablation starts, images are acquired and displayed in infrared light, or a fused image is displayed.
The fused image is formed as follows:
(3.1) extracting RGB values of the visible light image, and weighting the RGB values respectively, wherein the weighting coefficient of each value is less than or equal to 45% or 50%;
(3.2) acquiring an infrared image by using infrared camera equipment, and weighting the RGB value of the infrared image, wherein the weighting coefficient is more than or equal to 45% or 50%;
and (3.3) performing pixel-by-pixel fusion on each frame of the infrared image and the visible light image according to the synchronous signals, and adding the RGB values of the two images respectively.
As shown in fig. 2, the system used in the training method for minimally invasive ablation operation in this embodiment includes: the manikin 100, the ablation device 200, the console 300, the ablation monitoring device 400, and the ablation-needle state display device 500.
The console 300 is constructed substantially the same as, or similar to, a real operating table in order to give the trainee as realistic an experience as possible.
For example, for liver cancer treatment training, the manikin 100 may be divided into two parts: an upper-body part containing the visceral tissues, made as a more realistic, higher-quality model, and a lower-body part (which may be omitted), made as a common, lower-cost model, thereby further reducing cost. The upper-body part and the lower-body part are assembled so that they can be disassembled; when the upper-body part needs to be replaced, the lower-body part does not have to be replaced at the same time. More preferably, a head model is also included (it can be omitted), detachably assembled to the upper-body model.
A hollow region is provided below the phantom 100 (the lower portion of the upper phantom), and the hollow region is located in a non-visceral region (particularly, a non-liver region) near the liver portion. The purpose of the hollow region is to position ablation monitoring device 400, with the height of ablation monitoring device 400 being proportional to the height of the target monitoring region. At least the inner part (particularly, the internal organs) of the upper body model of the manikin 100 is made of a transparent material, such as transparent silica gel.
The ablation monitoring device 400 includes a visible-light imaging device 401 and/or an infrared thermal-imaging device 402, both directed at the target organ. The target organ is a silica-gel model or is made of another transparent or semitransparent material; more preferably, the material is chosen so that its heat capacity and thermal conductivity are similar to those of human tissue, and simulated tumors of various shapes and structures can be inserted into the target organ as required.
The imaging focal points of the visible light imaging device 401 and the infrared thermal imaging device 402 are directed to a target simulated lesion region of a target organ.
In a preferred implementation, the visible-light imaging device 401 and the infrared thermal-imaging device 402 are arranged side by side, each capturing images of the target organ and the simulated tumor from its own angle and in its own waveband. This creates a problem: the user needs to observe the state shown in both images at the same time, but because the devices shoot side by side their viewing angles are not identical and are offset by some distance horizontally or vertically, so the two images do not show exactly the same region. This easily places an observation burden on the user.
To address this problem, another preferred implementation adopts the optical-path structure shown in fig. 3 for coaxial, beam-split image acquisition. As shown, the ablation monitoring device 400 includes a visible-light imaging device 401, an infrared thermal-imaging device 402, an optical lens 403, a beam splitter 404, a reflector 405 and a housing 406. The optical lens 403 is located at the front end of the housing and seals the housing's opening. The beam splitter 404 is located behind the optical lens 403 and receives the incident light from it, reflecting one band and transmitting the other: it is coated with a reflective film for visible or infrared light, so that, for example, visible light is transmitted to the visible-light imaging device 401 while infrared light is reflected to the reflector 405 and then on to the infrared imaging device 402.
In this way, the same field of view of the visible light imaging device 401 and the infrared light imaging device 402 can be achieved.
Further, to make observation more convenient for the user, the specific flow of the surgical simulation training method may be divided into two modes, a puncture mode and an ablation mode. In the puncture mode, the image obtained by the visible-light imaging device 401 is presented to the user through the display device 500, so that the user learns the correspondence between hand movements and the advance of the puncture needle, can more easily control the amplitude of the movements, understands the instrument motion those movements produce, and gains a better feel for control.
When the puncture reaches a preset position, the ablation mode is started: the user triggers the control switch of the ablation needle, the control switch simultaneously sends a mode-switching signal to the ablation monitoring device, and the ablation monitoring device fuses the image obtained by the visible-light imaging device 401 with the image obtained by the infrared imaging device 402.
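As a rough illustration of this control flow (and not the patented implementation), the following Python sketch shows how a mode-switching signal from the ablation switch could change which image is presented; the class, the camera and display objects and their grab()/show() methods are hypothetical, and fuse_frames() refers to the fusion sketch given after the fusion description below.

    # Minimal sketch of puncture/ablation mode switching, with assumed camera/display objects.
    class AblationMonitor:
        def __init__(self, visible_cam, infrared_cam, display):
            self.visible_cam = visible_cam      # visible-light imaging device 401
            self.infrared_cam = infrared_cam    # infrared thermal-imaging device 402
            self.display = display              # display device 500
            self.ablating = False               # puncture mode until the switch fires

        def on_ablation_switch(self):
            """Mode-switching signal sent when the user triggers the ablation needle's control switch."""
            self.ablating = True

        def next_frame(self):
            vis = self.visible_cam.grab()
            if not self.ablating:
                self.display.show(vis)                    # puncture stage: visible-light image only
            else:
                ir = self.infrared_cam.grab()             # converted (false-colour) infrared image
                self.display.show(fuse_frames(vis, ir))   # ablation stage: fused image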
The specific fusion proceeds as follows. Shutter triggering is performed on synchronized trigger signals for the visible-light imaging device 401 and the infrared imaging device 402, and the visible-light image and the infrared image of the corresponding frame are acquired on the basis of those synchronization signals. The RGB values of the visible-light image are extracted and weighted unequally with weights w_R1, w_G1, w_B1, the weight given to the R value being smaller than those given to the G and B values; preferably the weight assigned to the R value is about 10-20% smaller than the weights of the G and B values. The weighting coefficients for R, G and B may each be set between 30% and 60%, preferably 50% or less. The RGB values of the infrared image obtained by the infrared imaging device 402 are then extracted; since infrared light is invisible to the human eye, the infrared imaging device 402 converts the captured infrared image into an image visible to the human eye, and it is this converted infrared image that is operated on here. The RGB values of the infrared image are weighted unequally with weights w_R2, w_G2, w_B2, the weighting coefficient for the R value being larger than those for G and B. This weighting, on the one hand, shifts the hue of the visible-light image toward shorter wavelengths and the hue of the image formed by the infrared imaging device toward longer wavelengths, so that once the ablation stage begins the heat-conduction behaviour of the ablated region is reflected more clearly; on the other hand, it reduces the possibility that the pixel values of the two weighted and fused images overflow. The two image-acquisition devices are controlled to output images of the same resolution so that better fusion can be achieved; of course, the two images may also be registered before fusion using a conventional image-registration method.
The weighted R1 value of the visible-light image is added to the weighted R2 value of the corresponding pixel of the infrared image, the weighted G1 value of the visible-light image to the weighted G2 value of the infrared image, and the weighted B1 value of the visible-light image to the weighted B2 value of the infrared image, giving the RGB values of the corresponding pixel of the fused image: R_fused, G_fused and B_fused.
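As a concrete illustration of this weighting and pixel-by-pixel addition, a minimal Python/NumPy sketch is given below. The particular weight values are assumptions chosen within the ranges stated above, and both frames are assumed to be registered and of equal resolution; overflow correction is handled separately, as described next.

    import numpy as np

    # Illustrative weights: for the visible image, R is weighted ~15% below G and B;
    # for the converted infrared image, R is weighted above G and B.
    W_VIS = np.array([0.40, 0.47, 0.47], dtype=np.float32)  # w_R1, w_G1, w_B1 (each <= 50%)
    W_IR = np.array([0.60, 0.50, 0.50], dtype=np.float32)   # w_R2, w_G2, w_B2 (R largest)

    def fuse_frames(visible_rgb, infrared_rgb):
        """Fuse one synchronized visible/infrared frame pair pixel by pixel."""
        vis = visible_rgb.astype(np.float32) * W_VIS   # weight each RGB channel of the visible image
        ir = infrared_rgb.astype(np.float32) * W_IR    # weight each RGB channel of the infrared image
        return vis + ir                                # R_fused, G_fused, B_fused (float, before overflow check)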
Preferably, the method further comprises comparing the R_fused, G_fused and B_fused values of each pixel in the fused image with corresponding overflow thresholds respectively to determine the proportion of overflowing pixels among all pixels. Taking possible noise spikes into account, when the proportion of overflowing pixels exceeds a predetermined threshold, the pixel values of the corresponding R, G or B item are multiplied by an anti-overflow coefficient, which may be set on the basis of the average pixel value of that item, the overflow threshold and the overflow ratio; for example, the coefficient is set to:
[Formula image in the original: the anti-overflow coefficient, a function of the overflow threshold Q, the average pixel value A and the overflow ratio Y.]
where Q is the overflow threshold, A is the average pixel value, and Y is the overflow ratio; this coefficient reduces the pixel value of the corresponding item and thus avoids overflow. The overflow proportion is then recalculated iteratively; if overflow still occurs, the coefficient is reduced by 0.05-0.1 (i.e. 5%-10%) each time. If the overflow proportion still exceeds the limit after three reductions, the frame is judged abnormal and discarded, and the pixel value is replaced by an arithmetic-progression value extrapolated from the corresponding pixels of the preceding three frames. For example, with a pixel value range of [0, 255], if a pixel value is abnormal and the G values of that pixel in the preceding three frames are 172, 176 and 184, the average of the successive differences (stated as 7 in this example) is added and the current pixel value is set to 191. In this way the trend of the image is preserved and excessive distortion is avoided.
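The anti-overflow iteration can be sketched as follows in Python/NumPy. Because the exact anti-overflow coefficient appears only as an equation image in the original, the expression in anti_overflow_coeff() is a stand-in assumption that merely reproduces the stated behaviour (a function of the overflow threshold Q, the average pixel value A and the overflow ratio Y, shrinking the channel more strongly as the overflow ratio grows); the threshold constants are likewise illustrative.

    import numpy as np

    OVERFLOW_Q = 255.0     # assumed per-channel overflow threshold Q
    RATIO_LIMIT = 0.02     # assumed limit on the share of overflowing pixels

    def anti_overflow_coeff(Q, A, Y):
        # Stand-in for the patented coefficient f(Q, A, Y); not the original formula.
        return max(0.0, 1.0 - Y * (A / Q))

    def correct_channel(channel, prev3):
        """Anti-overflow correction of one fused colour channel (float array).
        prev3: the same channel from the three preceding accepted frames, oldest first."""
        over_ratio = float(np.mean(channel > OVERFLOW_Q))
        if over_ratio <= RATIO_LIMIT:
            return channel
        k = anti_overflow_coeff(OVERFLOW_Q, float(channel.mean()), over_ratio)
        for _ in range(4):                      # first application plus up to three reductions
            scaled = channel * k
            if np.mean(scaled > OVERFLOW_Q) <= RATIO_LIMIT:
                return scaled
            k -= 0.05                           # reduce the coefficient by 0.05-0.1 each time
        # Frame judged abnormal: extrapolate each pixel from the previous three frames
        # by the average inter-frame difference (cf. the 172, 176, 184 example above).
        avg_diff = (prev3[2] - prev3[0]) / 2.0
        return prev3[2] + avg_diff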
Further, a mapping between the pixels of the ablation monitoring device and temperature is established. A temperature sensor is placed at a preset position (distance) in the simulated organ, the ablation electrode of the ablation needle is inserted into the simulated organ keeping a distance of 1-4 cm from the temperature sensor, and the power of the ablation electrode is increased gradually. Measurement trigger signals are sent to the temperature sensor and to the ablation monitoring device; the temperature sensor measures the actual temperature at its location, while the ablation monitoring device obtains a fused image of the ablation electrode and the temperature sensor. The pixel values of the simulated organ close to the temperature sensor are extracted with image-extraction software and a correspondence between pixel value and temperature value is established; repeated testing yields the temperature value corresponding to each pixel-value interval, forming a mapping table between pixel value and temperature for the current parameter settings. Based on this table, a mark or an expanded-pixel display is applied to the specific temperature values (specific pixel values) of interest to the doctor during the ablation process. For example, for the pixel value corresponding to a specific temperature such as 120 degrees Celsius, an isotherm is formed, and the pixel on the isotherm and its surrounding pixels are assigned a higher brightness value, or the 3 × 3 pixels around each pixel on the isotherm are assigned the same RGB values, forming a clearly marked region. For different tumors or experimental conditions, several parameter sets and corresponding mappings can be established.
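A minimal sketch of how such a pixel-to-temperature mapping and isotherm marking could be applied is shown below; the calibration pairs, the channel used, the tolerance and the brightness boost are all assumed values, since the real table is built per parameter set from the repeated sensor measurements described above.

    import numpy as np

    # Example calibration table (assumed values): fused pixel value -> temperature in deg C.
    CAL_PIXELS = np.array([60.0, 110.0, 160.0, 200.0, 235.0])
    CAL_TEMPS = np.array([37.0, 60.0, 90.0, 120.0, 150.0])

    def pixel_to_temperature(pixel_values):
        """Interpolate temperature from the pixel-value/temperature calibration table."""
        return np.interp(pixel_values, CAL_PIXELS, CAL_TEMPS)

    def mark_isotherm(fused_rgb, target_c=120.0, tol_c=2.0):
        """Brighten pixels whose mapped temperature lies on the target isotherm (e.g. 120 deg C)."""
        temps = pixel_to_temperature(fused_rgb[..., 0].astype(np.float32))  # R channel used here
        mask = np.abs(temps - target_c) <= tol_c
        out = fused_rgb.astype(np.int32)
        out[mask] += 60                                   # assign a higher brightness value
        return np.clip(out, 0, 255).astype(np.uint8)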
Further, the thermal conductivity, specific heat and density of the material of the simulated organ (tumor) are measured, the thermal conductivity, specific heat and density of the material of the animal liver (tumor) are measured, and the pixel values are weighted based on the material by the following weighting method:
[Formula image in the original: the corrected pixel value P' expressed in terms of the uncorrected pixel value P and the thermal properties λ1, λ2, c1, c2, ρ1, ρ2 defined below.]
where P' is the pixel value after correction, P is the pixel value before correction, λ1 and λ2 are the thermal conductivity of the measured animal tissue and of the simulation material, c1 and c2 are the specific heat of the measured animal tissue and of the simulation material, and ρ1 and ρ2 are the density of the measured animal tissue (such as liver) and of the simulation material, respectively.
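A hedged sketch of this material-based weighting is given below. Because the exact correction formula appears only as an equation image in the original, the correction factor used here, the ratio of the thermal diffusivities λ/(ρ·c) of the animal tissue and of the phantom material, is only a stand-in assumption built from the same quantities named in the text.

    import numpy as np

    def material_corrected(P, lam1, c1, rho1, lam2, c2, rho2):
        """Weight infrared pixel values P before fusion for the phantom's thermal behaviour.
        (lam1, c1, rho1): thermal conductivity, specific heat, density of the animal tissue (e.g. liver).
        (lam2, c2, rho2): the same properties of the simulated-organ material."""
        alpha_tissue = lam1 / (rho1 * c1)       # thermal diffusivity of the tissue
        alpha_phantom = lam2 / (rho2 * c2)      # thermal diffusivity of the phantom material
        factor = alpha_tissue / alpha_phantom   # stand-in correction factor (assumption, not the patented formula)
        return np.clip(P.astype(np.float32) * factor, 0.0, 255.0)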
More preferably, to reduce cost further, a replaceable organ model and/or an upper-body surface tissue model is provided, both made of two-component silica gel. Preferably, the upper-body model comprises tissue layers and a skeleton (the skeleton forms the skeletal contours and the tissue layers, especially the outer tissue layer, cover it; in the puncture region of interest there is no skeletal layer, since the puncture must pass through the gaps in the skeleton or through the frameless abdominal region). The tissue layers comprise an outer tissue layer 3 and an inner substrate layer 4; the outer tissue layer and the inner substrate layer lie against each other, their two ends are hermetically connected, they are fixed together at spaced points while remaining approximately in contact, and a fluid channel can be formed between them.
The inner substrate is attached to the inner side of the outer tissue layer, with a number of attachment fixing points arranged at intervals between them by bonding or another fixing method, so that the inner substrate and the simulated tissue layer are fixed together; a flexible glue is applied as dispensed dots to bond them. The periphery of the inner substrate is hermetically connected to the inner wall of the simulated tissue layer so that a filling layer is formed between them. A filling-liquid input port 8 and a filling-liquid suction port 9 are arranged at the two ends of the joined part, and both communicate with the filling layer between the inner substrate and the simulated tissue layer. Preferably, the inner substrate is made of a material with little tack, such as flexible silicone, to ensure a close fit between the inner substrate and the simulated tissue layer. Preferably, the backing layer is provided only on the side of the simulated tissue layer adjacent to the anterior chest and not on the other side, since most ablation procedures puncture from the anterior side.
More preferably, the system further includes a repair patch 501. The repair patch comprises a circular or square patch; one side has a smooth surface and at least the peripheral part of the other side is adhesive. A circular repair block 502 is movably bonded to the middle of the adhesive side; the repair block carries the first component of the two-component silica gel (in this embodiment the repair block comprises a cover film with the first silica-gel component inside, and the cover film can be broken by squeezing under external force), and this first component combines with the second silica-gel component introduced through the filling-liquid input port to form the two-component silica gel. Preferably, the first component in the circular repair block is a silica-gel curing agent, or the block is filled with a silica-gel curing agent, which matches the silica-gel liquid introduced through the filling-liquid input port, the two being the two components of the two-component silica gel. More preferably, the two-component silica gel is a colorless or single-colour, flexible gel.
Preferably, the two-component silica gel consists of a first component and a second component in a mass ratio of second component to first component of 10:1. The first component comprises 2-3 parts of a cross-linking agent, 1-4 parts of a coupling agent, 0.1-0.3 part of a catalyst, 5 parts of a plasticizer and 3 parts of a chain extender; the second component comprises 100-120 parts of hydroxyl-terminated polysiloxane, 80 parts of a filler and 30 parts of a plasticizer. The filler can be heavy calcium carbonate, silica micropowder or aluminum hydroxide, the plasticizer is methyl silicone oil, and the viscosity of the hydroxyl-terminated polysiloxane is 400-650 mPa·s. The cross-linking agent is at least one of methyl orthosilicate, ethyl silicate, methyltriethoxysilane and tetraisopropoxysilane; the coupling agent is vinyltrimethoxysilane or aminopropyltrimethoxysilane; the chain extender can be a conventional chain extender such as dimethyl diethylsilane or aminopropyl methyl dimethoxysilane; and the catalyst is dibutyltin dilaurate.
The repair process is as follows. A repair patch 501 is stuck over each puncture hole on both sides, i.e. on the inner side of the inner substrate layer and on the outer side of the simulated tissue layer; the middle of each repair patch carries a circular repair block or is coated with the first component of the repair liquid, and the patch is placed opposite the puncture hole. The filling-liquid input port is then opened, and the filling-liquid suction port is left open so that any residual air in the filling layer can escape. The second-component silica gel is slowly filled into the filling layer through the filling-liquid input port; if the filling is uneven it can be adjusted by pressing by hand. Because the inner substrate layer and the simulated tissue layer were originally attached and fixed to each other at intervals, a fluid channel forms between them once the second-component gel is filled in, and the second component is carried to every position between the two layers, including every puncture site; the air remaining in a puncture is pushed toward the other side of the filling layer, so the filling liquid enters the puncture and contacts the repair block, or the first component of the repair liquid, on the repair patch. Once every puncture hole is filled with the second component, the filling-liquid input port is closed, the filling-liquid suction port is opened, and the filling liquid is slowly drawn out until the inner substrate layer and the simulated tissue layer lie essentially against each other again; because the input port and the suction port are on opposite sides of the simulation device, the residual air in the original puncture holes can be drawn out during suction while the second-component gel remains in the pit of each puncture hole. At this point the repair block over each puncture hole can be gently squeezed and kneaded so that its first-component gel mixes thoroughly with the second component; the assembly is then left to stand until curing is complete, repairing the inner substrate layer and the simulated tissue layer, and any second component remaining in the repair block can be drawn off. After curing, the repair becomes a further fixing point between the two layers. With this liquid-curing repair method the repaired site has almost no visible boundary and blends well with the original material. Most preferably, the inner substrate layer and the simulated tissue layer are made of the same material as the repair material, i.e. they too are made of two-component silicone or another similar two-component curing material. In another preferred implementation, the repair patch is separable from the inner simulated tissue layer and is torn off when the repair is complete.
While the principles of the invention have been described in detail in connection with the preferred embodiments thereof, it will be understood by those skilled in the art that the foregoing embodiments are merely illustrative of exemplary implementations of the invention and are not limiting of the scope of the invention.

Claims (10)

1. An interventional operation simulation method based on video image feedback is characterized by comprising the following steps:
(1) constructing a human body simulation model, wherein the human body simulation model at least comprises a simulation tissue layer and a simulation organ, and at least the simulation organ is made of a transparent material;
(2) arranging a video monitoring device for an ablation process at one side of the simulated visceral organ, wherein the video monitoring device has a video image acquisition function of at least two wave bands;
(3) during a surgical simulation with the ablation device, the video monitoring device is triggered by an ablation switch of the ablation device to perform video acquisition in at least one of the two acquisition bands.
2. The method of claim 1, wherein a first of the two bands is a visible light or shorter wavelength light band, and the second band is an infrared band.
3. The method for simulating interventional operation based on video image feedback as claimed in claim 1, wherein the simulation process comprises a puncturing stage and an ablation stage, an ablation switch of the ablation device is connected with the video monitoring device in communication for mode switching, the puncturing stage adopts a first wave band for video acquisition, the ablation stage adopts a second wave band for video acquisition or acquires a fused image of the first and second wave bands, and the fused image is presented to the user.
4. The method for simulating an interventional operation based on video image feedback according to claim 3, wherein the fused image is formed as follows:
(3.1) extracting RGB values of the visible light image, and weighting the RGB values respectively, wherein the weighting coefficient of each value is less than or equal to 45% or 50%;
(3.2) acquiring an infrared image by using infrared camera equipment, and weighting the RGB value of the infrared image, wherein the weighting coefficient is more than or equal to 45% or 50%;
and (3.3) performing pixel-by-pixel fusion on each frame of the infrared image and the visible light image according to the synchronous signals, and adding the RGB values of the two images respectively.
5. The method for simulating an interventional operation based on video image feedback according to claim 4, wherein the R_fused, G_fused and B_fused values of each pixel in the fused image are compared with corresponding overflow thresholds respectively to determine the proportion of overflowing pixels among all pixels, and when the proportion of overflowing pixels exceeds a preset threshold, the pixel values of the corresponding R, G or B item are multiplied by an anti-overflow coefficient, wherein the anti-overflow coefficient is:
[Formula image in the original: the anti-overflow coefficient, a function of the overflow threshold Q, the average pixel value A and the overflow ratio Y.]
where Q is the overflow threshold, A is the average pixel value, and Y is the overflow ratio.
6. The method of claim 4, further comprising: establishing a mapping between the pixels of the ablation monitoring device and temperature, and highlighting or marking, based on that mapping, the specific temperature values of interest to the doctor during the ablation process.
7. The method of claim 4, further comprising: measuring the heat conductivity coefficient of the simulated organ material and the heat conductivity coefficient of the animal liver material, firstly weighting the infrared image based on the material, and then fusing, wherein the weighting mode is as follows:
[Formula image in the original: the corrected pixel value P' expressed in terms of the uncorrected pixel value P and the thermal properties λ1, λ2, c1, c2, ρ1, ρ2 defined below.]
where P' is the pixel value after correction, P is the pixel value before correction, λ1 and λ2 are the thermal conductivity of the measured animal tissue and of the simulation material, c1 and c2 are the specific heat of the measured animal tissue and of the simulation material, and ρ1 and ρ2 are the density of the measured animal tissue (such as liver) and of the simulation material, respectively.
8. The method for simulating an interventional operation based on video image feedback according to claim 1, further comprising repairing the punctured simulated tissue layer with a repair patch, and removing and replacing the simulated organ by separating the detachable upper-body model and lower-body model.
9. The method for simulating an interventional operation based on video image feedback according to claim 1, wherein the video monitoring device collects images of the different wavebands coaxially, using a beam-splitting arrangement.
10. A training system for performing the method of any one of claims 1-9.
CN202110577825.XA 2021-05-26 2021-05-26 Interventional operation simulation method based on video image feedback Active CN113303905B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110577825.XA CN113303905B (en) 2021-05-26 2021-05-26 Interventional operation simulation method based on video image feedback

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110577825.XA CN113303905B (en) 2021-05-26 2021-05-26 Interventional operation simulation method based on video image feedback

Publications (2)

Publication Number Publication Date
CN113303905A true CN113303905A (en) 2021-08-27
CN113303905B CN113303905B (en) 2022-07-01

Family

ID=77374874

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110577825.XA Active CN113303905B (en) 2021-05-26 2021-05-26 Interventional operation simulation method based on video image feedback

Country Status (1)

Country Link
CN (1) CN113303905B (en)

Patent Citations (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07177308A (en) * 1993-12-20 1995-07-14 Ricoh Co Ltd Method for controlling double-sided scanner
JPH07271544A (en) * 1994-03-29 1995-10-20 Toshiba Corp Image information processor
US20030122759A1 (en) * 2001-11-21 2003-07-03 Canon Kabushiki Kaisha Display apparatus, and image signal processing apparatus and drive control apparatus for the same
US20060045347A1 (en) * 2004-09-02 2006-03-02 Jing Xiao System and method for registration and modeling of deformable shapes by direct factorization
CN101902950A (en) * 2007-12-21 2010-12-01 爱尔康折射视界公司 Virtual microscope system for monitoring the progress of corneal ablative surgery and associated methods
CN101489022A (en) * 2008-01-16 2009-07-22 延世大学工业学术合作社 Color recovery method and system
CN102917183A (en) * 2011-08-04 2013-02-06 索尼公司 Imaging device, image processing method and program
CN103489360A (en) * 2012-06-12 2014-01-01 韦伯斯特生物官能(以色列)有限公司 Physical heart simulator
CN103236213A (en) * 2013-04-19 2013-08-07 上海交通大学 Atrial fibrillation catheter ablation simulation based on optical binocular positioning
CN105744883A (en) * 2013-11-20 2016-07-06 乔治华盛顿大学 Systems and methods for hyperspectral analysis of cardiac tissue
US20150187331A1 (en) * 2013-12-30 2015-07-02 Lg Display Co., Ltd. Method and apparatus for controlling luminance of organic light emitting diode display device
US20150228090A1 (en) * 2014-02-10 2015-08-13 Synaptics Display Devices Kk Image processing apparatus, image processing method, display panel driver and display apparatus
CN106572842A (en) * 2014-06-24 2017-04-19 阿帕玛医疗公司 Tissue ablation and monitoring thereof
CN105726117A (en) * 2014-12-29 2016-07-06 韦伯斯特生物官能(以色列)有限公司 Spectral Sensing Of Ablation
CN104505053A (en) * 2015-01-04 2015-04-08 京东方科技集团股份有限公司 Display signal conversion method and display signal conversion device
US20180122285A1 (en) * 2015-07-08 2018-05-03 Eizo Corporation Image processing apparatus, display apparatus, and computer-readable storage medium
CN106683056A (en) * 2016-12-16 2017-05-17 凯迈(洛阳)测控有限公司 Airborne photoelectric infrared digital image processing method and apparatus thereof
CN108961299A (en) * 2017-05-18 2018-12-07 北京金山云网络技术有限公司 A kind of foreground image preparation method and device
CN107205120A (en) * 2017-06-30 2017-09-26 维沃移动通信有限公司 The processing method and mobile terminal of a kind of image
CN107452000A (en) * 2017-08-31 2017-12-08 天津大学 Verify the experimental facilities of ultrasonic temperature imaging accuracy
CN108347560A (en) * 2018-01-17 2018-07-31 浙江大华技术股份有限公司 A kind of anti-sun of video camera is burnt method, video camera and readable storage medium storing program for executing
CN109998451A (en) * 2019-04-30 2019-07-12 东北大学 A kind of photo-thermal therapy device of based endoscopic imaging guidance
CN110365878A (en) * 2019-07-04 2019-10-22 华为技术有限公司 A kind of photographic device and method
CN110472658A (en) * 2019-07-05 2019-11-19 哈尔滨工程大学 A kind of the level fusion and extracting method of the detection of moving-target multi-source
CN112001873A (en) * 2020-08-27 2020-11-27 中广核贝谷科技有限公司 Data generation method based on container X-ray image
CN112686820A (en) * 2020-12-29 2021-04-20 北京旷视科技有限公司 Virtual makeup method and device and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
龚玮: "Selective photothermolysis of skin and methods and techniques for evaluating phototherapy and thermal injury", China Master's and Doctoral Dissertations Full-text Database, Medicine and Health Sciences series *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114569252A (en) * 2022-03-02 2022-06-03 中南大学 Master-slave mapping proportion control system and method for surgical robot
CN114569252B (en) * 2022-03-02 2024-01-30 中南大学 Master-slave mapping proportion control system and method for surgical robot

Also Published As

Publication number Publication date
CN113303905B (en) 2022-07-01

Similar Documents

Publication Publication Date Title
AU2021266318B2 (en) Simulated tissue models and methods
US11158212B2 (en) Simulated tissue structure for surgical training
US10573201B2 (en) Method of producing a phantom and phantom
JP2020024473A (en) Incisable simulation tissue
CN106214110B (en) photo-coupler for endoscope
CN113303905B (en) Interventional operation simulation method based on video image feedback
CN106683518A (en) Endoscope /Cavity mirror analog simulation training system and method thereof
WO2020215807A1 (en) Deep-learning-based method for improving colonoscope adenomatous polyp detection rate
CN113380093B (en) Operation training system for microwave ablation operation
Sun et al. Virtually transparent epidermal imagery for laparo-endoscopic single-site surgery
Terry et al. An integrated port camera and display system for laparoscopy
CN109196570A (en) Assessment executes the method and assessment tool of the people of medical procedure or the performance of robot
Kingston et al. Hysteroscopic training: the butternut pumpkin model
JPS6312365Y2 (en)
CN109965987A (en) Visor outside a kind of robot with common focus point migration function
Brown et al. Comparison of conventional and gaze-down imaging in laparoscopic task performance
CN218833265U (en) Visual stomach tube device
CN216388413U (en) Laparoscope ultrasonic puncture simulation training device
CN218004236U (en) Analogue means is used in gastroendoscope gastric mucosa pathological change operation training
JP2023003261A (en) Medical image processing device, operation method thereof, and endoscope system
JP7053873B2 (en) Image processing equipment and endoscopic system
CN113313988B (en) Repeatedly-usable and repairable operation simulation equipment
US6716170B2 (en) Body channel motion picture production system
CN113077662A (en) Laparoscopic surgery and training system based on 5G network technology application
CN114373347A (en) Intelligent high-simulation training system for whole-organ surgery

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant