CN117690331A - Prostate puncture operation training system and method - Google Patents
- Publication number
- CN117690331A (application number CN202410154896.2A)
- Authority
- CN
- China
- Prior art keywords
- training
- image
- prostate puncture
- model
- surgical
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
The invention relates to the technical field of prostate puncture surgery training and provides a prostate puncture surgery training system and method. The system comprises a surgical training operation acquisition module, a virtual model construction module, a surgical procedure generation module, and an operation evaluation module. Training image information of surgical training operations is collected in a prostate puncture surgery training solid model, so that a target trainer can perform the training operations with reference to either the prostate puncture surgery training virtual model or the training image information. A prediction model uses the training image at the current moment to predict the training image at the next moment; the prediction is then used to evaluate the trainer's prostate puncture surgical training operation and to generate an operation feedback picture. This addresses the shortcoming of existing prostate puncture surgery training schemes, which lack both behavior evaluation of the training operation and operation feedback, reducing the effectiveness of surgical training.
Description
Technical Field
The invention relates to the technical field of prostate puncture surgery training, in particular to a system and a method for training prostate puncture surgery.
Background
Prostatic hyperplasia is a common male urinary system disease: swelling of the prostatic tissue compresses the urethra and makes urination difficult. Severe cases usually require surgical treatment, which is divided into open surgery and puncture surgery. Because open surgery causes heavy bleeding at the wound and can seriously impair the patient's sexual function, it has gradually been replaced by puncture surgery. Puncture surgery uses surgical instruments guided by X-ray or B-mode ultrasound to operate through the urethra or blood vessels, but it demands a high level of skill and experience from the physician: improper operation can lead to bleeding, infection, pain, and other complications, bringing unnecessary suffering and risk to the patient. A suitable surgical training system or method is therefore important for improving surgical quality.
Existing prostate puncture surgery training methods still rely on physical models or human specimens. Such training cannot evaluate whether a training operation contains improper behavior, nor can it intuitively display the human body's feedback to each step of the operation, such as bleeding or gland contraction.
Disclosure of Invention
To solve the problems in the prior art, the invention provides a prostate puncture surgery training system and method, aiming to address the lack of behavior evaluation and operation feedback in existing prostate puncture surgery training schemes, which limits the effectiveness of surgical training.
In a first aspect of the present invention, there is provided a prostate puncture surgery training system comprising:
a surgical training operation acquisition module configured to acquire training image information of a target trainer performing a surgical training operation in a prostate puncture surgical training solid model;
a virtual model construction module configured to generate a prostate puncture surgery training virtual model from the training image information, so that the target trainer can perform the surgical training operation according to either the virtual model or the training image information;
the surgical procedure generation module is configured to input a training image of the current moment when a target training person executes surgical training operation into a trained prostate puncture surgical operation prediction model to obtain a predicted training image of the next moment;
and the operation evaluation module is configured to perform operation evaluation on the prostate puncture operation of the target training personnel according to the training image at the next moment and the predicted training image at the next moment.
Optionally, the training solid model of the prostate puncture surgery is configured as a solid physical model with a real environment of the prostate puncture surgery, and the solid physical model comprises a male human urogenital association structure in the real environment of the prostate puncture surgery; wherein the male human genitourinary associated structure comprises kidney, ureter, bladder, prostate, seminal vesicle, vas deferens, ejaculatory duct, urethra, penis, and organ appendages.
Optionally, the surgical training operation acquisition module specifically includes:
a surgical training operation acquisition unit;
the operation training operation acquisition unit is configured to acquire training image information of an operation training operation performed by a target training person acquired by the operation acquisition device for prostate puncture in the operation training solid model for prostate puncture;
wherein the operation acquisition device for the prostate puncture operation is configured as an X-ray machine and/or a B-ultrasonic machine for carrying out image acquisition on the operation of the position of the prostate puncture operation.
Optionally, the virtual model building module specifically includes:
a virtual model construction unit configured to construct a virtual model corresponding to the physical model of the entity according to the training image information, the virtual model being configured as a virtual three-dimensional model having a virtual environment of a prostate puncture operation;
and the virtual model visualization unit is configured to transmit the virtual three-dimensional model to virtual reality visualization equipment worn by a target training person so as to enable the target training person to execute operation training operation according to the virtual three-dimensional model.
Optionally, the prostate puncture surgery training system further includes: the prostate puncture operation prediction model construction module specifically comprises:
a surgical real operation acquisition unit configured to acquire real image information of a surgical operator performing standard surgical operations in a real prostate puncture surgery, and to divide the real image information by time stamps to obtain continuous real images I_1, I_2, ..., I_n of the real prostate puncture surgery;
a model training unit configured to train the prostate puncture surgical operation prediction model with the real image I_t at the current moment and the real image I_{t+1} at the next moment, taken from the continuous real images, to obtain the trained prostate puncture surgical operation prediction model.
Optionally, the model training unit specifically includes:
a real image prediction subunit configured to input the real image I_t at the current moment, taken from the continuous real images, into the generator to obtain the predicted real image Î_{t+1} at the next moment;
a cyclic training subunit configured to input the real image I_{t+1} at the next moment and the predicted real image Î_{t+1} into the discriminator, and to train the generator with the discrimination result until convergence, obtaining the prostate puncture surgical operation prediction model.
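The generator-discriminator training loop described above can be sketched with a toy example. The sketch below is a minimal, purely illustrative NumPy stand-in: frames are flattened vectors, the generator and discriminator are linear maps (the patent's generator is a convolutional encoder-bottleneck-decoder network), and all names and learning rates are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: frames are flattened vectors; the patent's real generator is
# a convolutional network, so these linear maps are purely illustrative.
DIM = 16
G = rng.normal(scale=0.1, size=(DIM, DIM))   # generator weights (hypothetical)
D = rng.normal(scale=0.1, size=(DIM,))       # discriminator weights (hypothetical)

def generator(frame):
    return G @ frame                          # predicted next frame

def discriminator(frame):
    return 1.0 / (1.0 + np.exp(-(D @ frame)))  # P(frame is real)

def train_step(frame_t, frame_t1, lr=0.05):
    """One adversarial update: D learns real-vs-fake, G learns to fool D."""
    global G, D
    fake = generator(frame_t)
    d_real, d_fake = discriminator(frame_t1), discriminator(fake)
    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake))
    D += lr * ((1 - d_real) * frame_t1 - d_fake * fake)
    # Generator: non-saturating objective, maximize log D(G(frame_t))
    fake = generator(frame_t)
    d_fake = discriminator(fake)
    G += lr * np.outer((1 - d_fake) * D, frame_t)
    return d_real, d_fake

frames = rng.normal(size=(10, DIM))           # a toy "continuous" frame sequence
for t in range(len(frames) - 1):              # cyclically process each I_t, I_{t+1} pair
    train_step(frames[t], frames[t + 1])

pred = generator(frames[-1])                  # predicted next frame
print(pred.shape)  # (16,)
```

In the patent's scheme, this loop runs until the generator converges; the converged generator then serves as the prostate puncture surgical operation prediction model.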
Optionally, the generator is a convolutional neural network structure configured to consist of an encoder, a bottleneck layer, a decoder, and three residual connections;
wherein inputting the real image I_t at the current moment into the generator to obtain the predicted real image Î_{t+1} at the next moment specifically comprises:
inputting the real image I_t into the encoder, splitting it into non-overlapping patches of a fixed size, and encoding the patches by downsampling and upsampling; the encoding result is input into the bottleneck layer for noise removal and finally into the decoder, whose structure mirrors the encoder, to obtain the predicted real image Î_{t+1} at the next moment output by the decoder;
The downsampling and upsampling are specifically expressed as:

D_l = SW_l(M_l(D_{l-1})),  U_l = SW_l(E_l(U_{l-1}))

wherein D_l denotes the downsampling of the l-th layer, U_l denotes the upsampling of the l-th layer, M_l and E_l respectively denote the patch merging and patch expansion operations of the l-th layer, and SW_l denotes the l-th layer of the Swin Transformer model.
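The downsampling and upsampling stages built from patch merging and patch expansion can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy (Swin-style 2x2 patch merging without the learned linear projections or attention blocks), intended only to show how spatial resolution and channel count trade off across layers:

```python
import numpy as np

def patch_merge(x):
    """2x2 patch merging (Swin-style): halve spatial size, 4x the channels."""
    h, w, c = x.shape
    return np.concatenate(
        [x[0::2, 0::2], x[1::2, 0::2], x[0::2, 1::2], x[1::2, 1::2]], axis=-1)

def patch_expand(x):
    """Inverse of patch_merge: double spatial size, quarter the channels."""
    h, w, c = x.shape
    out = np.empty((h * 2, w * 2, c // 4), dtype=x.dtype)
    out[0::2, 0::2], out[1::2, 0::2], out[0::2, 1::2], out[1::2, 1::2] = \
        np.split(x, 4, axis=-1)
    return out

feat = np.arange(8 * 8 * 3, dtype=float).reshape(8, 8, 3)
down = patch_merge(feat)       # (4, 4, 12): one downsampling step D_l
up = patch_expand(down)        # (8, 8, 3): one upsampling step U_l
assert np.allclose(up, feat)   # merging then expanding is lossless here
```

In the actual Swin architecture each merge/expand is followed by a linear projection and attention blocks, so real layers are not lossless inverses; the sketch only shows the resolution-channel bookkeeping.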
Optionally, the operation evaluation module specifically includes:
the image contour difference generation unit is configured to extract ORB feature points in the training image at the next moment and the predicted training image at the next moment, calculate a projection matrix between the two images from the ORB feature points, and project the training image onto the predicted training image using that matrix, obtaining a contour difference image between the training image and the predicted training image;
the operation evaluation unit is configured to determine an operated area according to the contour dense area in the contour difference image, judge whether the pixel value of the contour dense area in the operated area exceeds a preset threshold value, and if so, judge that the current prostate puncture operation has improper behavior.
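The projection-matrix step of the image contour difference generation unit can be sketched as follows. The patent matches ORB feature points (e.g. via OpenCV); here, synthetic point correspondences replace real ORB matches, and a least-squares DLT fit stands in for whatever estimator the implementation actually uses:

```python
import numpy as np

def fit_homography(src, dst):
    """Least-squares homography (DLT) from matched point pairs.
    In the patent these pairs would come from ORB feature matching;
    here they are synthetic correspondences."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(A))
    H = vt[-1].reshape(3, 3)       # null-space vector -> 3x3 projection matrix
    return H / H[2, 2]

def project(H, pts):
    """Apply the projection matrix to 2-D points (homogeneous coordinates)."""
    p = np.column_stack([pts, np.ones(len(pts))]) @ H.T
    return p[:, :2] / p[:, 2:3]

# Synthetic correspondences generated by a known similarity transform
src = np.array([[0, 0], [10, 0], [0, 10], [10, 10], [5, 3]], dtype=float)
dst = src * 1.5 + np.array([2.0, -1.0])

H = fit_homography(src, dst)
err = np.abs(project(H, src) - dst).max()
print(err < 1e-6)  # True: the projection matrix is recovered
```

After warping the training image with such a matrix, the contour difference image would be obtained by comparing the aligned image against the predicted training image, e.g. via edge maps.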
Optionally, the operation evaluation module further includes:
an evaluation result feedback unit;
the evaluation result feedback unit is configured to acquire the position information of the operated area when the current prostate puncture operation is judged to have improper behaviors, and generate evaluation result feedback on the position information;
when a target training person executes operation training operation according to the prostate puncture operation training virtual model, driving a virtual model building module to generate a first feedback picture in a region corresponding to the position information in the prostate puncture operation training virtual model;
when the target training personnel execute operation training operation according to the training image information, a second feedback picture is generated in the training image information acquired by the X-ray machine and/or the B-ultrasonic machine.
In a second aspect of the present invention, there is provided a method of training a prostate puncture procedure, the method comprising:
s1: acquiring training image information of a target training person for performing operation training operation in a prostate puncture operation training solid model;
s2: generating a prostate puncture surgery training virtual model according to the training image information, so that the target trainer performs the surgical training operation according to either the prostate puncture surgery training virtual model or the training image information;
s3: inputting a training image of the current moment of the operation training operation performed by the target training personnel into a trained prostate puncture operation prediction model to obtain a predicted training image of the next moment;
s4: and performing operation evaluation on the prostate puncture operation of the target training personnel according to the training image at the next moment and the predicted training image at the next moment.
The invention has the following beneficial effects. Training image information of a target trainer performing surgical training operations in a prostate puncture surgery training solid model is collected, and a surgical training virtual model is generated from that information, so that the trainer can perform the training operation according to either the virtual model or the training image information. Meanwhile, the training image at the current moment is input into a trained prostate puncture surgical operation prediction model to obtain a predicted training image at the next moment; the trainer's prostate puncture surgical operation can then be evaluated by comparing the actual and predicted training images at the next moment, and an operation feedback picture is generated from the evaluation result. This solves the problems that existing prostate puncture surgery training schemes lack operation behavior evaluation and operation feedback, which affects the effectiveness of surgical training.
Drawings
Fig. 1 is a schematic structural diagram of a training system for prostate puncture surgery provided by the present invention;
fig. 2 is a schematic flow chart of the training method for prostate puncture operation provided by the invention.
Reference numerals:
10 - surgical training operation acquisition module; 20 - virtual model construction module; 30 - surgical procedure generation module; 40 - operation evaluation module.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Example 1:
referring to fig. 1, fig. 1 is a schematic structural diagram of a training system for prostate puncture surgery according to an embodiment of the present invention.
As shown in fig. 1, a prostate puncture surgery training system comprises: a surgical training operation acquisition module 10 configured to acquire training image information of a target trainer performing a surgical training operation in a prostate puncture surgery training solid model; a virtual model construction module 20 configured to generate a prostate puncture surgery training virtual model from the training image information, so that the target trainer can perform the surgical training operation according to either the virtual model or the training image information; a surgical procedure generation module 30 configured to input the training image at the current moment of the trainer's surgical training operation into a trained prostate puncture surgical operation prediction model to obtain a predicted training image at the next moment; and an operation evaluation module 40 configured to perform operation evaluation on the trainer's prostate puncture surgical operation according to the training image at the next moment and the predicted training image at the next moment.
It should be noted that existing prostate puncture surgery training methods still rely on physical models or human specimens. Such training cannot evaluate whether a training operation contains improper behavior, nor can it intuitively display the human body's feedback to each step of the operation, such as bleeding or gland contraction, which limits training effectiveness: the system cannot judge whether the current operation is correct, and no feedback animation can be generated for the trainer's actions, so the trainer cannot respond to a realistic surgical scene. To solve these problems, this embodiment collects training image information of a target trainer performing surgical training operations in a prostate puncture surgery training solid model and generates a prostate puncture surgery training virtual model from that information, so that the trainer can perform the training operation according to either the virtual model or the training image information. Meanwhile, the training image at the current moment is input into the trained prostate puncture surgical operation prediction model to obtain a predicted training image at the next moment, so that the trainer's operation can be evaluated by comparing the actual and predicted images at the next moment, and an operation feedback picture can be generated from the evaluation result. This addresses the lack of behavior evaluation and operation feedback in existing prostate puncture surgery training schemes.
In a preferred embodiment, the prostate puncture surgery training solid model is configured as a solid physical model reproducing the real environment of a prostate puncture surgery, the solid physical model comprising the male human genitourinary structures present in that environment, namely the kidney, ureter, bladder, prostate, seminal vesicle, vas deferens, ejaculatory duct, urethra, penis, and organ appendages. In this embodiment, a physical model made of skin-like materials (such as silica gel and latex) simulates the male genitourinary structure and provides the target trainer with the object of the surgical training operation. By collecting training image information of the operations the trainer performs on the physical model, the training operation is represented visually in real time, allowing the system to analyze the accuracy of the operation and provide operation feedback, thereby improving the authenticity and effectiveness of prostate puncture surgery training.
On this basis, the surgical training operation acquisition module specifically comprises a surgical training operation acquisition unit configured to acquire the training image information, collected by the prostate puncture surgical operation acquisition device, of the target trainer performing the surgical training operation in the prostate puncture surgery training solid model; the acquisition device is configured as an X-ray machine and/or a B-ultrasound machine that performs image acquisition on the operation at the prostate puncture surgery position. In this embodiment, the training image information of the operations the trainer performs on the physical model is collected by an X-ray machine and/or a B-ultrasound machine, and the visualization of the surgical training operation is realized by acquiring X-ray or B-ultrasound images of the physical model during the prostate puncture operation.
After that, the present embodiment provides two operation training operation viewing modes that can be selected by the target training personnel, that is, viewing the virtual three-dimensional model through a virtual reality visualization device (such as VR glasses) or directly viewing the operation screen in the physical model through the training image acquired by the X-ray machine and/or the B-ultrasonic machine.
Illustratively, when the virtual three-dimensional model is viewed through the virtual reality visualization device, the system further comprises a virtual model construction module, which specifically comprises: a virtual model construction unit configured to construct, from the training image information, a digital twin virtual model corresponding to the physical model, the virtual model being configured as a virtual three-dimensional model with a virtual environment of a prostate puncture surgery; and a virtual model visualization unit configured to transmit the virtual three-dimensional model to a virtual reality visualization device worn by the target trainer, so that the trainer performs the surgical training operation according to the virtual three-dimensional model. When the trainer instead chooses to view the operation picture in the physical model through the training images collected by the X-ray machine and/or B-ultrasound machine, the surgical training operation is performed according to that collected picture. Visualizing the prostate puncture surgery environment in this way gives the target trainer a reference for performing the training operation and lays the foundation for subsequently generating operation feedback animation, providing a prostate puncture surgery training system with greater authenticity, evaluation accuracy, and operation feedback.
In a preferred embodiment, the prostate puncture surgery training system further comprises a prostate puncture surgical operation prediction model construction module, which specifically comprises: a surgical real operation acquisition unit configured to acquire real image information of a surgical operator performing standard surgical operations in a real prostate puncture surgery, and to divide the real image information by time stamps to obtain continuous real images I_1, I_2, ..., I_n of the real prostate puncture surgery; and a model training unit configured to train the prostate puncture surgical operation prediction model with the real image I_t at the current moment and the real image I_{t+1} at the next moment, taken from the continuous real images, to obtain the trained prediction model.
The model training unit specifically comprises: a real image prediction subunit configured to input the real image I_t at the current moment into the generator to obtain the predicted real image Î_{t+1} at the next moment; and a cyclic training subunit configured to input the real image I_{t+1} at the next moment and the predicted real image Î_{t+1} into the discriminator, and to train the generator until convergence to obtain the prostate puncture surgical operation prediction model.
The generator is a convolutional neural network structure composed of an encoder, a bottleneck layer, a decoder, and three residual connections. Inputting the real image I_t at the current moment into the generator to obtain the predicted real image Î_{t+1} at the next moment specifically comprises: inputting the real image I_t into the encoder, splitting it into non-overlapping patches of a fixed size, and encoding the patches by downsampling and upsampling; the encoding result is input into the bottleneck layer for noise removal and finally into the decoder, whose structure mirrors the encoder, to obtain the predicted real image Î_{t+1} output by the decoder.

The downsampling and upsampling are specifically expressed as:

D_l = SW_l(M_l(D_{l-1})),  U_l = SW_l(E_l(U_{l-1}))

wherein D_l denotes the downsampling of the l-th layer, U_l denotes the upsampling of the l-th layer, M_l and E_l respectively denote the patch merging and patch expansion operations of the l-th layer, and SW_l denotes the l-th layer of the Swin Transformer model.
In this embodiment, real image information of a surgical operator performing standard operations during an actual prostate puncture surgery is divided by time stamps, and each frame is converted into an RGB image that is easier to process, yielding continuous real images I_1, I_2, ..., I_n. The real image I_t at the current moment is input into a generator based on a convolutional neural network composed of an encoder, a bottleneck layer, a decoder, and three residual connections. The image is decomposed into non-overlapping patches and encoded by first downsampling and then upsampling; the upsampling result is input into the bottleneck layer for noise removal, and the decoder, whose structure mirrors the encoder, outputs the image Î_{t+1} predicted for the next moment. The generator output Î_{t+1} and the ground truth I_{t+1} serve as the input of the discriminator, and the generator is repeatedly driven to cyclically process each pair of continuous real images until it converges, obtaining the prostate puncture surgical operation prediction model. The prediction model is then used to predict the training image at the next moment from the training image at the current moment of the surgical training operation performed by the target trainer.
Thus, by constructing a GAN architecture comprising a generator and a discriminator and training the prostate puncture surgical operation prediction model on real continuous surgical images, the training image at the next moment can be predicted, and the correctness of the surgical training operation can be evaluated by comparing the predicted image with the actual training image at the next moment. This remedies the lack of behavior evaluation in existing prostate puncture surgery training schemes, with high prediction accuracy and reliability.
In a preferred embodiment, the operation evaluation module specifically includes: an image contour difference generation unit configured to extract ORB feature points in the training image at the next moment and the predicted training image at the next moment, calculate a projection matrix between the two images from the ORB feature points, and project the training image onto the predicted training image using that matrix, obtaining a contour difference image between the training image and the predicted training image; and an operation evaluation unit configured to determine the operated area from the contour-dense region in the contour difference image and judge whether the pixel values of the contour-dense region in the operated area exceed a preset threshold; if so, the current prostate puncture surgical operation is judged to contain improper behavior.
In a preferred embodiment, the operation evaluation module further comprises: an evaluation result feedback unit; the evaluation result feedback unit is configured to acquire the position information of the operated area when the current prostate puncture operation is judged to have improper behaviors, and generate evaluation result feedback on the position information; when a target training person executes operation training operation according to the prostate puncture operation training virtual model, driving a virtual model building module to generate a first feedback picture in a region corresponding to the position information in the prostate puncture operation training virtual model; when the target training personnel execute operation training operation according to the training image information, a second feedback picture is generated in the training image information acquired by the X-ray machine and/or the B-ultrasonic machine.
In this embodiment, a preset number (for example, 100) of ORB feature points are extracted from the training image at the next moment and the predicted training image at the next moment, and a projection matrix is established from these feature points to project the training image onto the predicted training image, yielding the contour difference between the two images. Since the imaging position of each organ in the images is fixed, a contour-dense region can be taken as the operated region (for example, when the ratio of the contour difference region to the organ region exceeds the preset ratio for that organ). It is then judged whether the pixel value of the contour-dense region in the operated region exceeds the preset threshold; if so, the difference between the training image and the predicted training image at the next moment is considered large, the operation is deemed incorrect, and the current prostate puncture surgical operation is judged to involve an improper behavior.
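The decision rule of the operation evaluation unit (locate the densest contour region, then apply a pixel-value threshold) can be sketched as follows, assuming the contour difference image has already been computed; the window size, threshold, and block-scanning scheme are illustrative assumptions:

```python
import numpy as np

def evaluate_operation(diff_img: np.ndarray, win: int = 8,
                       threshold: float = 0.5) -> tuple[tuple[int, int], bool]:
    """Scan the contour-difference image in win x win blocks, return the
    top-left corner of the densest block and whether its mean pixel value
    exceeds the preset threshold (True = improper behavior flagged)."""
    h, w = diff_img.shape
    best, best_pos = -1.0, (0, 0)
    for r in range(0, h - win + 1, win):
        for c in range(0, w - win + 1, win):
            density = float(diff_img[r:r + win, c:c + win].mean())
            if density > best:
                best, best_pos = density, (r, c)
    return best_pos, best > threshold

# Synthetic contour-difference image: mostly empty, one dense patch where
# the two images' contours disagree (the "operated" area).
img = np.zeros((32, 32))
img[8:16, 16:24] = 0.9
pos, improper = evaluate_operation(img)
```

On this synthetic input the densest block is found at `(8, 16)` and the improper-behavior flag is raised, mirroring the threshold test described above.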
Meanwhile, this embodiment also obtains the position information of the operated area and generates a feedback picture matched to the observation mode the target trainer is using. When the trainer performs the surgical training operation according to the prostate puncture surgery training virtual model, the virtual model construction module is driven to generate a first feedback picture in the region of the virtual model corresponding to the position information (for example, the organ corresponding to the operated area flashes red in the virtual model). When the trainer performs the surgical training operation according to the training image information, a second feedback picture is generated in the training image information acquired by the X-ray machine and/or B-ultrasound machine (for example, medically plausible effects such as bleeding or shrinkage are added to the operated area and fed back to the user as overlaid two-dimensional special effects). This yields a prostate puncture surgery training system with higher realism, more accurate operation evaluation, and immediate operation feedback, improving the effectiveness of the trainer's surgical training.
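A minimal sketch of the second feedback picture: a translucent red marker blended into the grayscale acquired frame over the operated area. The rectangular region, blending factor, and RGB convention are illustrative assumptions, not the patent's rendering pipeline.

```python
import numpy as np

def overlay_feedback(gray: np.ndarray, region: tuple[int, int, int, int],
                     alpha: float = 0.5) -> np.ndarray:
    """region = (r0, r1, c0, c1); return an RGB copy of the grayscale frame
    with a translucent red overlay blended into the operated area."""
    rgb = np.repeat(gray[..., None], 3, axis=2).astype(float)
    r0, r1, c0, c1 = region
    red = np.array([255.0, 0.0, 0.0])
    rgb[r0:r1, c0:c1] = (1 - alpha) * rgb[r0:r1, c0:c1] + alpha * red
    return rgb.astype(np.uint8)

# A flat gray "ultrasound" frame with a marked operated area.
frame = np.full((32, 32), 100, dtype=np.uint8)
out = overlay_feedback(frame, (8, 16, 8, 16))
```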
Referring to fig. 2, fig. 2 is a schematic flow chart of a training method for prostate puncture surgery according to an embodiment of the present invention.
As shown in fig. 2, a training method for prostate puncture surgery includes the steps of:
s1: acquiring training image information of a target training person for performing operation training operation in a prostate puncture operation training solid model;
s2: generating a prostate puncture operation training virtual model according to the training image information, so that the target training person executes the operation training operation according to either the prostate puncture operation training virtual model or the training image information;
s3: inputting a training image of the current moment of the operation training operation performed by the target training personnel into a trained prostate puncture operation prediction model to obtain a predicted training image of the next moment;
s4: and performing operation evaluation on the prostate puncture operation of the target training personnel according to the training image at the next moment and the predicted training image at the next moment.
In this embodiment, training image information of the target trainer performing a surgical training operation in the prostate puncture surgery training solid model is collected, and a prostate puncture surgery training virtual model is generated from it, so that the trainer can perform the surgical training operation according to either the virtual model or the training image information. Meanwhile, the training image at the current moment is input into the trained prostate puncture surgical operation prediction model to obtain the predicted training image at the next moment, so that the prostate puncture surgical operation can be evaluated by comparing the actual and predicted training images at the next moment, and an operation feedback picture can be generated from the evaluation result. This addresses the lack of behavior evaluation and operation feedback in existing prostate puncture surgery training schemes, which limits the effectiveness of surgical training.
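Steps S1 to S4 above can be strung together in a minimal sketch, with a stub standing in for the trained prediction model; the identity-map stub, the mean-absolute-difference score, and the threshold are illustrative assumptions:

```python
import numpy as np

def predict_next(frame: np.ndarray) -> np.ndarray:
    """Stub for S3: the trained prostate puncture surgical operation
    prediction model would go here; identity is a placeholder."""
    return frame.copy()

def assess(actual: np.ndarray, predicted: np.ndarray,
           threshold: float = 10.0) -> bool:
    """S4: True = operation acceptable (small deviation from prediction)."""
    diff = np.abs(actual.astype(float) - predicted.astype(float))
    return float(diff.mean()) <= threshold

frame_t = np.full((16, 16), 120, dtype=np.uint8)   # S1: acquired image
pred_t1 = predict_next(frame_t)                    # S3: predicted next frame
good = assess(frame_t, pred_t1)                    # near-identical frame -> ok
bad = assess(np.zeros_like(frame_t), pred_t1)      # large deviation -> flagged
```

The virtual-model step S2 is omitted here since it concerns visualization rather than the evaluation logic.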
The specific implementation of the prostate puncture surgery training method of the present application is substantially the same as that of the embodiments of the prostate puncture surgery training system described above, and will not be repeated here.
In describing embodiments of the present invention, it should be understood that the terms "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "center", "top", "bottom", "inner", "outer", "inside", "outside", etc. indicate orientations or positional relationships based on the drawings are merely for convenience in describing the present invention and simplifying the description, and do not indicate or imply that the devices or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and thus should not be construed as limiting the present invention. Wherein "inside" refers to an interior or enclosed area or space. "peripheral" refers to the area surrounding a particular component or region.
In the description of embodiments of the present invention, the terms "first," "second," "third," and "fourth" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined by "first", "second", "third" or "fourth" may explicitly or implicitly include one or more such features. In the description of the present invention, unless otherwise indicated, "a plurality" means two or more.
In describing embodiments of the present invention, it should be noted that the terms "mounted," "connected," and "assembled" are to be construed broadly, as they may be fixedly connected, detachably connected, or integrally connected, unless otherwise specifically indicated and defined; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the above terms in the present invention will be understood in specific cases by those of ordinary skill in the art.
In the description of embodiments of the invention, a particular feature, structure, material, or characteristic may be combined in any suitable manner in one or more embodiments or examples.
In describing embodiments of the present invention, it will be understood that the terms "~" and "-" refer to ranges between two values, and that the ranges include the endpoints. For example, "A~B" and "A-B" both denote a range greater than or equal to A and less than or equal to B.
In the description of embodiments of the present invention, the term "and/or" is merely an association relationship describing an association object, meaning that three relationships may exist, e.g., a and/or B, may represent: a exists alone, A and B exist together, and B exists alone. In addition, the character "/" herein generally indicates that the front and rear associated objects are an "or" relationship.
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
Claims (10)
1. A prostate puncture surgery training system, comprising:
a surgical training operation acquisition module configured to acquire training image information of a target trainer performing a surgical training operation in a prostate puncture surgical training solid model;
a virtual model construction module configured to generate a prostate puncture surgery training virtual model from the training image information, so that the target trainer performs the surgical training operation according to either the prostate puncture surgery training virtual model or the training image information;
the surgical procedure generation module is configured to input a training image of the current moment when a target training person executes surgical training operation into a trained prostate puncture surgical operation prediction model to obtain a predicted training image of the next moment;
and the operation evaluation module is configured to perform operation evaluation on the prostate puncture operation of the target training personnel according to the training image at the next moment and the predicted training image at the next moment.
2. The prostate puncture surgery training system according to claim 1, wherein the prostate puncture surgery training solid model is configured as a physical model having the real environment of a prostate puncture surgery, the physical model comprising the male human urogenital structures under the real environment of a prostate puncture surgery; wherein the male human urogenital structures comprise the kidney, ureter, bladder, prostate, seminal vesicle, vas deferens, ejaculatory duct, urethra, penis, and organ appendages.
3. The prostate puncture surgery training system of claim 2, wherein the surgical training operation acquisition module specifically comprises:
a surgical training operation acquisition unit;
the surgical training operation acquisition unit is configured to acquire, via a prostate puncture surgical operation acquisition device, training image information of the surgical training operation performed by the target trainer in the prostate puncture surgery training solid model;
wherein the prostate puncture surgical operation acquisition device is configured as an X-ray machine and/or a B-ultrasound machine that performs image acquisition at the prostate puncture surgical site.
4. The prostate puncture surgery training system of claim 3, wherein the virtual model construction module specifically comprises:
a virtual model construction unit configured to construct, from the training image information, a virtual model corresponding to the solid physical model, the virtual model being configured as a virtual three-dimensional model having a virtual environment of a prostate puncture surgery;
and the virtual model visualization unit is configured to transmit the virtual three-dimensional model to virtual reality visualization equipment worn by a target training person so as to enable the target training person to execute operation training operation according to the virtual three-dimensional model.
5. The prostate puncture surgery training system of claim 1, further comprising a prostate puncture surgical operation prediction model construction module, which specifically comprises:
a surgical real operation acquisition unit configured to acquire real image information of a surgeon performing standard surgical operations in a real prostate puncture surgery, and to divide the real image information by time stamps to obtain continuous real images of the real prostate puncture surgery;
A model training unit configured to train the prostate puncture surgical operation prediction model based on the continuous real images I_1, ..., I_t and the real image I_{t+1} at the next moment, to obtain the trained prostate puncture surgical operation prediction model.
6. The prostate puncture surgery training system according to claim 5, characterized in that the model training unit specifically comprises:
a true image prediction subunit configured to predict the successive true imagesTrue image +.>In the input generator, the predicted real image +.>;
A cyclic training subunit configured to input the real image I_{t+1} at the next moment and the predicted real image I'_{t+1} into the discriminator, and to train the generator according to the discrimination result until convergence, obtaining the prostate puncture surgical operation prediction model.
7. The prostate puncture surgery training system according to claim 6, wherein the generator is a convolutional neural network structure consisting of an encoder, a bottleneck layer, a decoder, and three residual connections;
wherein inputting the continuous real images I_1, ..., I_t into the generator to obtain the predicted real image I'_{t+1} specifically comprises:

inputting the continuous real images I_1, ..., I_t into the encoder, decomposing them into patches of a preset size, encoding by downsampling and upsampling, inputting the encoding result into the bottleneck layer for noise removal, and finally inputting it into a decoder whose structure is the inverse of the encoder, to obtain the predicted real image I'_{t+1} at the next moment output by the decoder;
The expressions for downsampling and upsampling are specifically:

D_l = S_l(M_l(D_{l-1}));

U_l = S_l(E_l(U_{l-1}));

wherein D_l denotes the downsampling of the l-th layer, U_l denotes the upsampling of the l-th layer, M_l and E_l respectively denote the patch merging and patch expansion of the l-th layer, and S_l denotes the l-th layer of the Swin Transformer model.
8. The prostate puncture surgery training system of claim 7, wherein the operation evaluation module specifically comprises:
an image contour difference generation unit configured to extract ORB feature points in the training image at the next moment and the predicted training image at the next moment, calculate a projection matrix between the two images according to the ORB feature points, and project the training image onto the predicted training image to obtain a contour difference image of the two;
an operation evaluation unit configured to determine an operated area according to the contour-dense area in the contour difference image, judge whether the pixel value of the contour-dense area in the operated area exceeds a preset threshold, and if so, judge that the current prostate puncture surgical operation involves an improper behavior.
9. The prostate puncture surgery training system of claim 8, wherein the operation evaluation module further comprises:
an evaluation result feedback unit;
the evaluation result feedback unit is configured to acquire the position information of the operated area when the current prostate puncture operation is judged to have improper behaviors, and generate evaluation result feedback on the position information;
when the target training person executes the operation training operation according to the prostate puncture operation training virtual model, driving the virtual model construction module to generate a first feedback picture in the region corresponding to the position information in the prostate puncture operation training virtual model;
when the target training personnel execute operation training operation according to the training image information, a second feedback picture is generated in the training image information acquired by the X-ray machine and/or the B-ultrasonic machine.
10. A method of training a prostate puncture procedure comprising:
s1: acquiring training image information of a target training person for performing operation training operation in a prostate puncture operation training solid model;
s2: generating a prostate puncture operation training virtual model according to the training image information, so that the target training person executes the operation training operation according to either the prostate puncture operation training virtual model or the training image information;
s3: inputting a training image of the current moment of the operation training operation performed by the target training personnel into a trained prostate puncture operation prediction model to obtain a predicted training image of the next moment;
s4: and performing operation evaluation on the prostate puncture operation of the target training personnel according to the training image at the next moment and the predicted training image at the next moment.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410154896.2A CN117690331B (en) | 2024-02-04 | 2024-02-04 | Prostate puncture operation training system and method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117690331A true CN117690331A (en) | 2024-03-12 |
CN117690331B CN117690331B (en) | 2024-05-14 |
Family
ID=90128638
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410154896.2A Active CN117690331B (en) | 2024-02-04 | 2024-02-04 | Prostate puncture operation training system and method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117690331B (en) |
Citations (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5836894A (en) * | 1992-12-21 | 1998-11-17 | Artann Laboratories | Apparatus for measuring mechanical parameters of the prostate and for imaging the prostate using such parameters |
FR2920961A1 (en) * | 2007-09-18 | 2009-03-20 | Koelis Soc Par Actions Simplif | SYSTEM AND METHOD FOR IMAGING AND LOCATING PONCTIONS UNDER PROSTATIC ECHOGRAPHY |
DE102010041781A1 (en) * | 2010-09-30 | 2012-04-05 | Siemens Aktiengesellschaft | Method and a CT device for computed tomography spiral scanning of a patient |
US20140199673A1 (en) * | 2013-01-11 | 2014-07-17 | Superd Co. Ltd. | 3d virtual training system and method |
CN107527542A (en) * | 2017-09-18 | 2017-12-29 | 南京梦宇三维技术有限公司 | Percussion training system based on motion capture |
CN109034748A (en) * | 2018-08-09 | 2018-12-18 | 哈尔滨工业大学 | The building method of mold attaching/detaching engineering training system based on AR technology |
CN111523352A (en) * | 2019-02-02 | 2020-08-11 | 宁波艾腾湃智能科技有限公司 | Method for intelligently and rapidly identifying illegal modified vehicle and monitoring system thereof |
CN111640345A (en) * | 2020-05-22 | 2020-09-08 | 北京数医脊微科技有限公司 | Spinal endoscope puncture catheterization training method and device and computer equipment |
US20200405398A1 (en) * | 2016-04-27 | 2020-12-31 | Arthrology Consulting, Llc | Methods for augmenting a surgical field with virtual guidance and tracking and adapting to deviation from a surgical plan |
CN113946217A (en) * | 2021-10-20 | 2022-01-18 | 北京科技大学 | Intelligent auxiliary evaluation system for enteroscope operation skills |
CN114708752A (en) * | 2022-01-12 | 2022-07-05 | 南京九阵维医疗科技有限公司 | Dental apex film virtual teaching system and virtual apex film acquisition method |
CN115169855A (en) * | 2022-06-29 | 2022-10-11 | 郑州轻工业大学 | Unsafe state detection method based on digital twin workshop mixed data set |
CN115424485A (en) * | 2022-07-15 | 2022-12-02 | 顾卫坤 | Puncture biopsy simulation system based on mixed reality and space micro-positioning technology |
CN115577953A (en) * | 2022-10-20 | 2023-01-06 | 公安部道路交通安全研究中心 | Site driving ability digital evaluation method and device based on all-element data |
CN115830703A (en) * | 2022-10-26 | 2023-03-21 | 国网四川省电力公司资阳供电公司 | Method and system for judging electric power violation operation in advance through intention identification |
CN115880111A (en) * | 2023-02-22 | 2023-03-31 | 山东工程职业技术大学 | Virtual simulation training classroom teaching management method and system based on images |
CN116156202A (en) * | 2023-01-18 | 2023-05-23 | 上海大学 | Method, system, terminal and medium for realizing video error concealment |
CN116309910A (en) * | 2023-03-12 | 2023-06-23 | 上海大学 | Method for removing Gibbs artifacts of magnetic resonance images |
CN116342596A (en) * | 2023-05-29 | 2023-06-27 | 云南电网有限责任公司 | YOLOv5 improved substation equipment nut defect identification detection method |
CN117315591A (en) * | 2023-11-13 | 2023-12-29 | 安徽光谷智能科技股份有限公司 | Intelligent campus safety monitoring prediction management system |
CN117315336A (en) * | 2023-09-14 | 2023-12-29 | 北京工业大学 | Pollen particle identification method, device, electronic equipment and storage medium |
CN117454116A (en) * | 2023-11-03 | 2024-01-26 | 国网河南省电力公司经济技术研究院 | Ground carbon emission monitoring method based on multi-source data interaction network |
Also Published As
Publication number | Publication date |
---|---|
CN117690331B (en) | 2024-05-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107452266B (en) | Method for producing a model and model | |
CN112075914B (en) | Capsule endoscopy system | |
US6939138B2 (en) | Endoscopic tutorial system for urology | |
CN107811710A (en) | Operation aided positioning system | |
CN110522516A (en) | A kind of multi-level interactive visual method for surgical navigational | |
CN110074813A (en) | A kind of ultrasonic image reconstruction method and system | |
CN103268630B (en) | A kind of blood vessel three-dimensional visualization method based on intravascular ultrasound image | |
CN111080778B (en) | Online three-dimensional reconstruction method of binocular endoscope soft tissue image | |
JP2007537770A (en) | A dynamic crop box determination method for display optimization of luminal structures in endoscopic images | |
CN109157284A (en) | A kind of brain tumor medical image three-dimensional reconstruction shows exchange method and system | |
CN113662573B (en) | Mammary gland focus positioning method, device, computer equipment and storage medium | |
CN112598649B (en) | 2D/3D spine CT non-rigid registration method based on generation of countermeasure network | |
CN110021445A (en) | A kind of medical system based on VR model | |
JPH11104072A (en) | Medical support system | |
CN108492693A (en) | A kind of laparoscopic surgery simulated training system shown based on computer aided medicine | |
CN109859827A (en) | Gastrointestinal Endoscopes operation horizontal points-scoring system and method in real time | |
Williams et al. | Volumetric curved planar reformation for virtual endoscopy | |
Ratul et al. | CCX-rayNet: a class conditioned convolutional neural network for biplanar X-rays to CT volume | |
CN107204045A (en) | Virtual endoscope system based on CT images | |
CN117690331B (en) | Prostate puncture operation training system and method | |
US20240138923A1 (en) | Use of Immersive Real-time Metaverse and Avatar and 3-D hologram for Medical and Veterinary Applications using Spatially Coordinated Multi-imager based 3-D Imaging. | |
US20230281968A1 (en) | Recording Medium, Method for Generating Learning Model, Surgical Support Device and Information Processing Method | |
CN114145761A (en) | Fluorine bone disease medical imaging detection system and use method thereof | |
CN106845138A (en) | Method is previewed before a kind of surgery | |
CN113995525A (en) | Medical scene synchronous operation system capable of switching visual angles and based on mixed reality and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |