CN116959307A - Hip arthroscope operation auxiliary teaching system based on virtual reality - Google Patents


Info

Publication number
CN116959307A
Authority
CN
China
Prior art keywords
branch
feature map
output
image
position information
Prior art date
Legal status
Pending
Application number
CN202311112508.6A
Other languages
Chinese (zh)
Inventor
王卫国
丁冉
张启栋
张逸凌
Current Assignee
China Japan Friendship Hospital
Original Assignee
China Japan Friendship Hospital
Priority date
Filing date
Publication date
Application filed by China Japan Friendship Hospital filed Critical China Japan Friendship Hospital
Priority to CN202311112508.6A
Publication of CN116959307A


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/005General purpose rendering architectures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00Electrically-operated educational appliances
    • G09B5/02Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B9/00Simulators for teaching or training purposes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/04Indexing scheme for image data processing or generation, in general involving 3D image data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30008Bone
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00Indexing scheme for image generation or computer graphics
    • G06T2210/41Medical

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Computer Graphics (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a virtual reality-based hip arthroscopic surgery teaching assistance system. The system comprises: a first image segmentation module for inputting a hip joint CT image into a preset image segmentation model and outputting a hip joint bone image; a second image segmentation module for inputting a hip joint MRI image into the preset image segmentation model and outputting a hip joint region image; a registration fusion reconstruction module for determining a registration position based on the first pelvis position information, the first femur position information, the second pelvis position information, the second femur position information, and the soft tissue position information, performing registration, and performing fusion reconstruction to obtain a multi-modal information image of the hip joint region; and a rendering display module for performing three-dimensional model rendering based on the multi-modal information image of the hip joint region, so as to provide three-dimensional virtual display for assisted teaching. According to the embodiments of the application, the registration position can be determined quickly and accurately, and the teaching effect is thereby improved.

Description

Hip arthroscope operation auxiliary teaching system based on virtual reality
Technical Field
The application belongs to the technical field of intelligent recognition by deep learning, and in particular relates to a virtual reality-based hip arthroscopic surgery teaching assistance system and method, an electronic device, and a computer-readable storage medium.
Background
Hip arthroscopy is regarded as a new minimally invasive surgical technique with potential clinical value. In a traditional virtual reality-based hip arthroscopic surgery teaching system, the teaching physician determines the registration position with medical instruments based on personal experience, and a three-dimensional reconstruction is then virtually displayed to assist teaching. A registration position determined in this way, however, is inaccurate and inefficient, which results in a poor teaching effect.
Therefore, how to determine the registration position quickly and accurately, and thereby improve the teaching effect, is a technical problem to be solved by those skilled in the art.
Disclosure of Invention
The embodiments of the application provide a virtual reality-based hip arthroscopic surgery teaching assistance system and method, an electronic device, and a computer-readable storage medium, which can determine the registration position quickly and accurately and thereby improve the teaching effect.
In a first aspect, an embodiment of the present application provides a virtual reality-based hip arthroscopic surgery assistance teaching system, including:
a first image segmentation module for inputting a hip joint CT image into a preset image segmentation model and outputting a hip joint bone image, wherein the hip joint bone image includes first pelvis position information and first femur position information;
a second image segmentation module for inputting a hip joint MRI image into the preset image segmentation model and outputting a hip joint region image, wherein the hip joint region image includes second pelvis position information, second femur position information, and soft tissue position information;
a registration fusion reconstruction module for determining a registration position based on the first pelvis position information, the first femur position information, the second pelvis position information, the second femur position information, and the soft tissue position information, performing registration, and performing fusion reconstruction to obtain a multi-modal information image of the hip joint region;
a rendering display module for performing three-dimensional model rendering based on the multi-modal information image of the hip joint region, so as to provide three-dimensional virtual display for assisted teaching.
The preset image segmentation model is obtained by model training based on a neural network. The neural network uses cascaded residual blocks as its backbone and extracts features with sub-networks of different depths and widths. Its structure comprises: three convolution layers and a pooling layer that perform downsampling, reducing the image size, reducing the amount of computation, and accelerating model inference; and three network branches connected thereafter: a proportional branch responsible for parsing and retaining the detailed information in the high-resolution feature map; an integral branch responsible for aggregating local and global context information to capture long-range dependencies; and a differential branch responsible for extracting high-frequency features to predict boundary regions.
Optionally, the feature map obtained after the proportional branch passes through a convolution layer and the feature map obtained after the differential branch passes through a convolution layer are input to a pixel attention guiding module, and the result is then output through a further convolution layer; in total, the proportional branch passes through three convolution layers;
the integral branch outputs a feature map after three convolution layers, each followed by a pooling layer; since the three pooling layers reduce its size, the feature map output by the integral branch is restored to the original size by a pyramid pooling module;
the feature map obtained after the differential branch passes through a convolution layer is combined with the feature map obtained after the proportional branch passes through a convolution layer and output to the next convolution layer; in total, the differential branch passes through three convolution layers and then outputs its feature map;
the output results of the three branches are input together into a boundary attention guiding module, and the output feature map passes through a convolution layer, a BN layer, and a ReLU activation function to output a prediction mask.
Optionally, the system further comprises:
a loss function calculation module configured to: perform boundary detection on the real mask to obtain a boundary mask; calculate a loss function Loss1 from the boundary mask and the output result of the differential branch; calculate a loss function Loss2 from the prediction mask and the real mask; calculate a loss function Loss3 jointly from the prediction mask, the real mask, and the output result of the differential branch; calculate a loss function Loss4 from the output result of the proportional branch and the real mask; and calculate the final loss function from the loss functions Loss1, Loss2, Loss3, and Loss4.
Optionally, the system further comprises:
a pixel attention guiding module for using an attention mechanism to enhance the interaction between the feature maps of the proportional branch and the differential branch;
the two inputs of the pixel attention guiding module are the output of the proportional branch and the output of the differential branch; the output of the proportional branch passes through a convolution layer (3×3), a BN layer, and a ReLU activation function to obtain a feature map T1 and a feature map T2;
the output of the differential branch passes through a convolution layer (3×3), a BN layer, and a ReLU activation function to obtain a feature map T4 and a feature map T5;
the feature map obtained after the feature map T1 passes through a convolution layer is combined with the feature map obtained after the feature map T4 passes through a convolution layer to obtain a feature map T3;
the feature map obtained after the feature map T2 passes through a convolution layer is fused with the feature map T3 to obtain a feature map T6;
the feature map obtained after the feature map T5 passes through a convolution layer is fused with the feature map T3 to obtain a feature map T7;
the feature map T7 and the feature map T6 are combined to obtain a feature map T8, which yields output 1 after a ReLU activation function;
the output of the differential branch passes through a convolution layer (3×3), a BN layer, and a ReLU activation function to obtain output 2.
Optionally, the system further comprises:
a pyramid pooling module for aggregating the context information of different regions so as to improve the ability of the network to acquire global information;
the pyramid pooling module is configured to: apply pooling of different scales to the original feature map to obtain several feature maps of different sizes, splice these feature maps along the channel dimension, then splice them with the original feature map, and finally output a composite feature map that mixes multiple scales, thereby taking both global semantic information and local detail information into account.
Optionally, the pyramid pooling module is further configured to: perform pooling operations of different scales on the original feature map to obtain several feature maps of different sizes (five branches are adopted); perform an upsampling operation on the resulting feature maps to restore them to the original feature map size (6×6); and finally splice them along the channel dimension to obtain the final composite feature map;
first branch: (6×6) pooling with an output size of (1×1), upsampled to (6×6) by bilinear interpolation;
second branch: (3×3) pooling with an output size of (2×2), upsampled to (6×6) by bilinear interpolation;
third branch: (2×2) pooling with an output size of (3×3), upsampled to (6×6) by bilinear interpolation;
fourth branch: (1×1) pooling with an output size of (6×6);
fifth branch: the input original feature map itself, acting as a residual connection;
the output results of the first four branches are spliced together, then spliced with the fifth branch, and the result is output.
Optionally, the system further comprises:
a boundary attention guiding module configured to: pass the differential branch input through a Sigmoid activation function to obtain two branch outputs, where one branch output feature is fused with the feature of the differential branch input and then combined with the feature of the proportional branch input, and the other branch output feature is fused with the feature of the proportional branch input and then combined with the feature of the differential branch; the two combined features are each output through a convolution layer (3×3 convolution kernel) and a BN layer, and are then combined to output the final feature.
In a second aspect, an embodiment of the present application provides a virtual reality-based hip arthroscopic surgery teaching assistance method, comprising:
inputting a hip joint CT image into a preset image segmentation model and outputting a hip joint bone image, wherein the hip joint bone image includes first pelvis position information and first femur position information;
inputting a hip joint MRI image into the preset image segmentation model and outputting a hip joint region image, wherein the hip joint region image includes second pelvis position information, second femur position information, and soft tissue position information;
determining a registration position based on the first pelvis position information, the first femur position information, the second pelvis position information, the second femur position information, and the soft tissue position information, performing registration, and performing fusion reconstruction to obtain a multi-modal information image of the hip joint region;
performing three-dimensional model rendering based on the multi-modal information image of the hip joint region, so as to provide three-dimensional virtual display for assisted teaching.
The preset image segmentation model is obtained by model training based on a neural network. The neural network uses cascaded residual blocks as its backbone and extracts features with sub-networks of different depths and widths. Its structure comprises: three convolution layers and a pooling layer that perform downsampling, reducing the image size, reducing the amount of computation, and accelerating model inference; and three network branches connected thereafter: a proportional branch responsible for parsing and retaining the detailed information in the high-resolution feature map; an integral branch responsible for aggregating local and global context information to capture long-range dependencies; and a differential branch responsible for extracting high-frequency features to predict boundary regions.
In a third aspect, an embodiment of the present application provides an electronic device, including: a processor and a memory storing computer program instructions;
The processor, when executing the computer program instructions, implements the virtual reality-based hip arthroscopic surgery assistance teaching method according to the second aspect.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the virtual reality-based hip arthroscopic surgery assistance teaching method of the second aspect.
The virtual reality-based hip arthroscopic surgery teaching assistance system and method, electronic device, and computer-readable storage medium of the application can determine the registration position quickly and accurately, and thereby improve the teaching effect.
The virtual reality-based hip arthroscopic surgery teaching assistance system comprises: a first image segmentation module for inputting a hip joint CT image into a preset image segmentation model and outputting a hip joint bone image, wherein the hip joint bone image includes first pelvis position information and first femur position information; a second image segmentation module for inputting a hip joint MRI image into the preset image segmentation model and outputting a hip joint region image, wherein the hip joint region image includes second pelvis position information, second femur position information, and soft tissue position information; a registration fusion reconstruction module for determining a registration position based on the first pelvis position information, the first femur position information, the second pelvis position information, the second femur position information, and the soft tissue position information, performing registration, and performing fusion reconstruction to obtain a multi-modal information image of the hip joint region; and a rendering display module for performing three-dimensional model rendering based on the multi-modal information image of the hip joint region, so as to provide three-dimensional virtual display for assisted teaching. The preset image segmentation model is obtained by model training based on a neural network. The neural network uses cascaded residual blocks as its backbone and extracts features with sub-networks of different depths and widths. Its structure comprises: three convolution layers and a pooling layer that perform downsampling, reducing the image size, reducing the amount of computation, and accelerating model inference; and three network branches connected thereafter: a proportional branch responsible for parsing and retaining the detailed information in the high-resolution feature map; an integral branch responsible for aggregating local and global context information to capture long-range dependencies; and a differential branch responsible for extracting high-frequency features to predict boundary regions.
The embodiments of the application thus provide a virtual reality-based hip arthroscopic surgery teaching assistance system and method, an electronic device, and a computer-readable storage medium that can determine the registration position quickly and accurately and thereby improve the teaching effect.
Drawings
To illustrate the technical solutions of the embodiments of the present application or of the prior art more clearly, the drawings needed in the description of the embodiments or of the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of a virtual reality-based hip arthroscopic surgical assistance teaching system according to one embodiment of the present application;
FIG. 2 is a schematic diagram of a virtual reality-based method of assisted teaching of hip arthroscopic surgery according to one embodiment of the present application;
FIG. 3 is a schematic diagram of an image segmentation model according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a pixel attention directing module according to one embodiment of the present application;
FIG. 5 is a schematic diagram of a pyramid pooling module according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a boundary attention directing module provided by one embodiment of the present application;
FIG. 7 is a flow chart of a virtual reality-based assisted teaching method for hip arthroscopic surgery according to one embodiment of the present application;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Features and exemplary embodiments of various aspects of the present application are described in detail below. To make the objects, technical solutions, and advantages of the present application clearer, the present application is described in further detail with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are intended only to illustrate the application, not to limit it. It will be apparent to one skilled in the art that the present application may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the application by showing examples of it.
It is noted that relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
Hip arthroscopy is regarded as a new minimally invasive surgical technique with potential clinical value. In a traditional virtual reality-based hip arthroscopic surgery teaching system, the teaching physician determines the registration position with medical instruments based on personal experience, and a three-dimensional reconstruction is then virtually displayed to assist teaching. A registration position determined in this way, however, is inaccurate and inefficient, which results in a poor teaching effect.
In order to solve the problems in the prior art, the embodiment of the application provides a hip arthroscope operation auxiliary teaching system and method based on virtual reality, electronic equipment and a computer readable storage medium. The hip arthroscope operation auxiliary teaching system based on virtual reality provided by the embodiment of the application is first described below.
Fig. 1 shows a schematic structural diagram of a hip arthroscopic surgery assistance teaching system based on virtual reality according to an embodiment of the present application. As shown in fig. 1, the hip arthroscopic surgery auxiliary teaching system based on virtual reality comprises:
the first image segmentation module 101, for inputting a hip joint CT image into a preset image segmentation model and outputting a hip joint bone image, wherein the hip joint bone image includes first pelvis position information and first femur position information;
the second image segmentation module 102, for inputting a hip joint MRI image into the preset image segmentation model and outputting a hip joint region image, wherein the hip joint region image includes second pelvis position information, second femur position information, and soft tissue position information;
the registration fusion reconstruction module 103, for determining a registration position based on the first pelvis position information, the first femur position information, the second pelvis position information, the second femur position information, and the soft tissue position information, performing registration, and performing fusion reconstruction to obtain a multi-modal information image of the hip joint region;
the rendering display module 104, for performing three-dimensional model rendering based on the multi-modal information image of the hip joint region, so as to provide three-dimensional virtual display for assisted teaching;
wherein the preset image segmentation model is obtained by model training based on a neural network. The neural network uses cascaded residual blocks as its backbone and extracts features with sub-networks of different depths and widths. Its structure comprises: three convolution layers and a pooling layer that perform downsampling, reducing the image size, reducing the amount of computation, and accelerating model inference; and three network branches connected thereafter: a proportional branch responsible for parsing and retaining the detailed information in the high-resolution feature map; an integral branch responsible for aggregating local and global context information to capture long-range dependencies; and a differential branch responsible for extracting high-frequency features to predict boundary regions.
Specifically, fig. 2 shows a schematic framework of the virtual reality-based hip arthroscopic surgery teaching assistance method corresponding to fig. 1. The patient's hip joint CT and MRI data are input, and the CT and MRI images are segmented by the neural network: segmenting the CT image yields a hip joint bone image (including the pelvis and femur), and segmenting the hip MRI image yields a hip joint region image (including the pelvis, femur, and soft tissue). The bone image segmented from the CT image and the region image segmented from the MRI image are registered, and fusion reconstruction yields the multi-modal information image of the hip joint region. With the help of a VR headset, the three-dimensional hip joint model is imported and rendered on the VR head-mounted display, providing an intuitive, stereoscopic, and realistic three-dimensional virtual display for simulated-surgery planning and reference.
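The application does not spell out a particular registration algorithm. As a hedged illustration only: once corresponding pelvis and femur landmark points have been extracted from the CT-derived and MRI-derived images, a rigid (rotation plus translation) alignment between the two modalities can be computed with the classical Kabsch method. The function below, including its name and signature, is an illustrative sketch rather than the method of the application.

```python
import numpy as np

def rigid_register(src, dst):
    """src, dst: (N, 3) arrays of corresponding landmark coordinates."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    h = (src - src_mean).T @ (dst - dst_mean)      # 3x3 cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))         # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = dst_mean - r @ src_mean
    return r, t                                    # dst_i is approximately r @ src_i + t

# Sanity check on synthetic data: recover a pure translation.
pts = np.random.rand(12, 3)
r, t = rigid_register(pts, pts + np.array([5.0, 0.0, 0.0]))
```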
In one embodiment, the feature map obtained after the proportional branch passes through a convolution layer and the feature map obtained after the differential branch passes through a convolution layer are input to the pixel attention guiding module, and the result is then output through a further convolution layer; in total, the proportional branch passes through three convolution layers;
the integral branch outputs a feature map after three convolution layers, each followed by a pooling layer; since the three pooling layers reduce its size, the feature map output by the integral branch is restored to the original size by a pyramid pooling module;
the feature map obtained after the differential branch passes through a convolution layer is combined with the feature map obtained after the proportional branch passes through a convolution layer and output to the next convolution layer; in total, the differential branch passes through three convolution layers and then outputs its feature map;
the output results of the three branches are input together into the boundary attention guiding module, and the output feature map passes through a convolution layer, a BN layer, and a ReLU activation function to output a prediction mask.
Specifically, the structure of the image segmentation model is shown in fig. 3. To improve the segmentation accuracy, a cascade network structure is adopted: first, three convolution layers and a pooling layer perform a downsampling operation, which reduces the image size, reduces the amount of computation, and accelerates model inference.
Three branches are then employed in the network: a proportional branch (P), responsible for parsing and retaining the detailed information in the high-resolution feature map; an integral branch (I), responsible for aggregating local and global context information to capture long-range dependencies; and a differential branch (D), responsible for extracting high-frequency features to predict boundary regions. The entire model uses cascaded residual blocks as the backbone network and uses sub-networks of different depths and widths to extract features.
The network takes three-dimensional CT data (or MRI data) as input, and the features obtained after the three convolution layers and the pooling layer are fed to the three branches. The feature obtained after the proportional branch passes through a convolution layer and the feature obtained after the differential branch passes through a convolution layer are input to the pixel attention guiding module and then output through a further convolution layer; in total, the proportional branch passes through three convolution layers. The integral branch passes through three convolution layers, each followed by a pooling layer, before outputting its feature; because the three pooling layers reduce its size, the output of the integral branch is restored to the original size by the pyramid pooling module. The feature obtained after the differential branch passes through a convolution layer is combined with the feature obtained after the proportional branch passes through a convolution layer and output to the next convolution layer; in total, the differential branch passes through three convolution layers before outputting its feature. The output results of the three branches are input together into the boundary attention guiding module, and the output feature passes through a convolution layer, a BN layer, and a ReLU activation function to output the prediction mask.
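As a concrete reading of this wiring, the following minimal PyTorch sketch reproduces the stem and the three branches. All channel counts, strides, and the plain elementwise-addition fusions are illustrative assumptions; the application fuses the branches with the pixel attention guiding, pyramid pooling, and boundary attention guiding modules described in the following sections.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_bn_relu(cin, cout, stride=1):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, stride=stride, padding=1, bias=False),
        nn.BatchNorm2d(cout),
        nn.ReLU(inplace=True))

class ThreeBranchNet(nn.Module):
    def __init__(self, in_ch=1, ch=32, n_classes=2):
        super().__init__()
        # Shared stem: three convolution layers plus a pooling layer.
        self.stem = nn.Sequential(
            conv_bn_relu(in_ch, ch, stride=2),
            conv_bn_relu(ch, ch),
            conv_bn_relu(ch, ch),
            nn.MaxPool2d(2))
        # Proportional (P), integral (I), and differential (D) branches,
        # each three convolution layers deep; I downsamples after each layer.
        self.p = nn.ModuleList(conv_bn_relu(ch, ch) for _ in range(3))
        self.i = nn.ModuleList(nn.Sequential(conv_bn_relu(ch, ch),
                                             nn.MaxPool2d(2))
                               for _ in range(3))
        self.d = nn.ModuleList(conv_bn_relu(ch, ch) for _ in range(3))
        # Prediction head: convolution + BN + ReLU over the fused feature.
        self.head = nn.Sequential(
            nn.Conv2d(ch, n_classes, 3, padding=1, bias=False),
            nn.BatchNorm2d(n_classes),
            nn.ReLU(inplace=True))

    def forward(self, x):
        p = i = d = self.stem(x)
        for k in range(3):
            p, i, d = self.p[k](p), self.i[k](i), self.d[k](d)
            # Stand-in for the P<->D exchanges (PAG and the D + P sum).
            p, d = p + d, d + p
        # Stand-in for the pyramid pooling module restoring I's size.
        i = F.interpolate(i, size=p.shape[2:], mode='bilinear',
                          align_corners=False)
        # Stand-in for the boundary attention guiding fusion.
        return self.head(p + i + d)

logits = ThreeBranchNet()(torch.randn(1, 1, 96, 96))  # (1, 2, 24, 24)
```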
In one embodiment, the system further comprises a loss function calculation module configured to: perform boundary detection on the real mask to obtain a boundary mask; calculate a loss function Loss1 from the boundary mask and the output result of the differential branch; calculate a loss function Loss2 from the prediction mask and the real mask; calculate a loss function Loss3 jointly from the prediction mask, the real mask, and the output result of the differential branch; calculate a loss function Loss4 from the output result of the proportional branch and the real mask; and calculate the final loss function from the loss functions Loss1, Loss2, Loss3, and Loss4.
Specifically, to improve the segmentation accuracy, a multi-loss fusion scheme is adopted. To strengthen the prediction of boundaries, boundary detection is performed on the real mask to obtain a boundary mask; Loss1 is calculated from the boundary mask and the output of the differential branch, Loss2 from the prediction mask and the real mask, Loss3 jointly from the prediction mask, the real mask, and the output of the differential branch, and Loss4 from the output of the proportional branch and the real mask.
Finally, the final loss function is calculated from Loss1, Loss2, Loss3, and Loss4.
The output of the differential branch is given by a convolution over the branch features (the formula itself is not reproduced in the source text), where $k_{mn}$ refers to the nth value of the convolution kernel in the mth layer. In the integral branch, the weights on $I[i-1]$, $I[i]$, and $I[i+1]$ are set to more than 70% of the total number of terms, so that the branch focuses more on local information. In the proportional and differential branches, they are set to less than 30% of the total number of terms, so that these two branches focus more on surrounding information.
In one embodiment, the system further comprises:
the pixel attention guiding module, for using an attention mechanism to enhance the interaction between the feature maps of the proportional branch and the differential branch;
the two inputs of the pixel attention guiding module are the output of the proportional branch and the output of the differential branch; the output of the proportional branch passes through a convolution layer (3×3), a BN layer, and a ReLU activation function to obtain a feature map T1 and a feature map T2;
the output of the differential branch passes through a convolution layer (3×3), a BN layer, and a ReLU activation function to obtain a feature map T4 and a feature map T5;
the feature map obtained after the feature map T1 passes through a convolution layer is combined with the feature map obtained after the feature map T4 passes through a convolution layer to obtain a feature map T3;
the feature map obtained after the feature map T2 passes through a convolution layer is fused with the feature map T3 to obtain a feature map T6;
the feature map obtained after the feature map T5 passes through a convolution layer is fused with the feature map T3 to obtain a feature map T7;
the feature map T7 and the feature map T6 are combined to obtain a feature map T8, which yields output 1 after a ReLU activation function;
the output of the differential branch passes through a convolution layer (3×3), a BN layer, and a ReLU activation function to obtain output 2.
Specifically, the structure of the pixel attention guiding module is shown in fig. 4. Let $v_p$ and $v_d$ denote the vectors of corresponding pixels in the proportional-branch and differential-branch feature maps. The output of the Sigmoid, i.e. feature map T6/T7, can then be written as

$$\sigma = \mathrm{Sigmoid}\big(f_p(v_p)^{\top} f_d(v_d)\big),$$

where $\sigma$ represents the likelihood that the two pixels belong to the same object. If $\sigma$ is higher, we trust the differential-branch feature more, because it is semantically richer and more accurate there, and vice versa. Thus, the output 1 of the pixel attention guiding module can be expressed as

$$\mathrm{Out}_1 = \sigma \cdot v_d + (1-\sigma)\cdot v_p.$$
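Under the formulas above, the pixel attention guiding module can be sketched in PyTorch as follows. The dot-product similarity, the channel count, and the use of one 3×3 convolution/BN/ReLU stack per path are illustrative assumptions.

```python
import torch
import torch.nn as nn

def conv_bn_relu(ch):
    return nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1, bias=False),
                         nn.BatchNorm2d(ch), nn.ReLU(inplace=True))

class PAG(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.p_embed = conv_bn_relu(ch)   # T1 path
        self.p_value = conv_bn_relu(ch)   # T2 path
        self.d_embed = conv_bn_relu(ch)   # T4 path
        self.d_value = conv_bn_relu(ch)   # T5 path

    def forward(self, p, d):
        # T3: per-pixel similarity of the embedded P and D features.
        t3 = (self.p_embed(p) * self.d_embed(d)).sum(dim=1, keepdim=True)
        sigma = torch.sigmoid(t3)                      # T6 / T7
        # Output 1 (T8): sigma-gated mix of the two value feature maps.
        out1 = torch.relu(sigma * self.d_value(d)
                          + (1 - sigma) * self.p_value(p))
        out2 = self.d_value(d)                         # output 2 of the D path
        return out1, out2

out1, out2 = PAG()(torch.randn(1, 32, 24, 24), torch.randn(1, 32, 24, 24))
```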
In one embodiment, the system further comprises:
the pyramid pooling module, for aggregating the context information of different regions so as to improve the ability of the network to acquire global information;
the pyramid pooling module is configured to: apply pooling of different scales to the original feature map to obtain several feature maps of different sizes, splice these feature maps along the channel dimension, then splice them with the original feature map, and finally output a composite feature map that mixes multiple scales, thereby taking both global semantic information and local detail information into account.
In one embodiment, the pyramid pooling module is further configured to: perform pooling operations of different scales on the original feature map to obtain several feature maps of different sizes (five branches are adopted); perform an upsampling operation on the resulting feature maps to restore them to the original feature map size (6×6); and finally splice them along the channel dimension to obtain the final composite feature map;
first branch: (6×6) pooling with an output size of (1×1), upsampled to (6×6) by bilinear interpolation;
second branch: (3×3) pooling with an output size of (2×2), upsampled to (6×6) by bilinear interpolation;
third branch: (2×2) pooling with an output size of (3×3), upsampled to (6×6) by bilinear interpolation;
fourth branch: (1×1) pooling with an output size of (6×6);
fifth branch: the input original feature map itself, acting as a residual connection;
the output results of the first four branches are spliced together, then spliced with the fifth branch, and the result is output.
Specifically, the pyramid pooling module aggregates the context information of different regions to improve the ability of the network to acquire global information; its structure is shown in fig. 5. The specific method is as follows: pooling of different scales is applied to the original feature map to obtain several feature maps of different sizes; these are spliced along the channel dimension and then spliced with the original feature map, and finally a composite feature map mixing multiple scales is output, taking both global semantic information and local detail information into account.
The original feature map undergoes pooling operations of different scales, yielding several feature maps of different sizes (five branches are adopted). The resulting feature maps are upsampled back to the original feature map size (6×6) and finally spliced along the channel dimension to obtain the final composite feature map;
first branch: (6×6) pooling with an output size of (1×1), upsampled to (6×6) by bilinear interpolation;
second branch: (3×3) pooling with an output size of (2×2), upsampled to (6×6) by bilinear interpolation;
third branch: (2×2) pooling with an output size of (3×3), upsampled to (6×6) by bilinear interpolation;
fourth branch: (1×1) pooling with an output size of (6×6);
fifth branch: the input feature map itself, acting here as a residual connection.
The output results of the first four branches are spliced together, then spliced with the fifth branch, and the result is output.
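A minimal PyTorch sketch of these five branches, assuming a 6×6 input feature map, average pooling, and the full channel count kept in every branch:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PPM(nn.Module):
    def __init__(self):
        super().__init__()
        # Kernel sizes 6, 3, 2, 1 give output sizes 1x1, 2x2, 3x3, 6x6.
        self.pools = nn.ModuleList(nn.AvgPool2d(k) for k in (6, 3, 2, 1))

    def forward(self, x):                      # x: (N, C, 6, 6)
        scaled = [F.interpolate(pool(x), size=x.shape[2:], mode='bilinear',
                                align_corners=False) for pool in self.pools]
        multi = torch.cat(scaled, dim=1)       # splice the four pooled branches
        return torch.cat([multi, x], dim=1)    # then splice the residual branch

y = PPM()(torch.randn(1, 32, 6, 6))            # y: (1, 160, 6, 6)
```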
In one embodiment, the system further comprises:
the boundary attention guiding module, configured to: pass the differential branch input through a Sigmoid activation function to obtain two branch outputs, where one branch output feature is fused with the feature of the differential branch input and then combined with the feature of the proportional branch input, and the other branch output feature is fused with the feature of the proportional branch input and then combined with the feature of the differential branch; the two combined features are each output through a convolution layer (3×3 convolution kernel) and a BN layer, and are then combined to output the final feature.
Specifically, the structure of the boundary attention guiding module is shown in fig. 6. The module has three inputs, namely the outputs of the three preceding branches of the network. The differential branch input passes through a Sigmoid activation function to obtain two branch outputs; one branch output feature is fused with the feature of the differential branch input and then combined with the feature of the proportional branch input, while the other branch output feature is fused with the feature of the proportional branch input and then combined with the feature of the differential branch. The two combined features are each output through a convolution layer (3×3 convolution kernel) and a BN layer, and are then combined to output the final feature.
This module uses the boundary features to guide the fusion of the context information so as to achieve a better semantic segmentation effect. Although the context information is semantically accurate, it loses too much geometric detail in boundary regions and on small objects; the model is therefore forced to trust the differential branch more in boundary regions, strengthening its attention to the boundary.
Let $v_p$, $v_i$, and $v_d$ denote the vectors of corresponding pixels in the feature maps of the proportional branch, the integral branch, and the differential branch, respectively, and let $\sigma = \mathrm{Sigmoid}(v_d)$ be the output of the Sigmoid. The output Out of the boundary attention guiding module is then

$$\mathrm{Out} = f_{\mathrm{out}}\big(\sigma \cdot v_p + (1-\sigma)\cdot v_i\big),$$

where $f_{\mathrm{out}}$ represents a combination of convolution, batch normalization, and ReLU. When $\sigma > 0.5$ the model is more confident in the detailed features, and otherwise it is biased towards the context information.
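Under the formula above, the boundary attention guiding module reduces to a compact PyTorch sketch; collapsing the two convolution/BN paths into a single f_out stack and the channel count are illustrative simplifications.

```python
import torch
import torch.nn as nn

class Bag(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.f_out = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1, bias=False),
                                   nn.BatchNorm2d(ch), nn.ReLU(inplace=True))

    def forward(self, p, i, d):
        sigma = torch.sigmoid(d)            # boundary attention, per pixel
        # Trust detail (P) where sigma is high, context (I) elsewhere.
        return self.f_out(sigma * p + (1 - sigma) * i)

out = Bag()(torch.randn(1, 32, 24, 24), torch.randn(1, 32, 24, 24),
            torch.randn(1, 32, 24, 24))
```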
The loss function is described in detail below:
The loss function of the designed network is a composite function consisting of four parts.
First, at the output position of the first pixel attention guiding module, an additional semantic loss Loss1 is generated through a two-layer convolution operation to better optimize the overall network (the formula for Loss1 is not reproduced in the source text).
Here y is the label value and y' is the predicted value.
Second, to address the imbalance problem in boundary detection, a weighted binary cross-entropy loss Loss2 is used instead of the Dice loss, since this makes the network more inclined to highlight boundary regions with coarse boundaries and to strengthen the features of small objects.
The loss of a single sample is calculated as Loss(i) (the formula is not reproduced in the source text). The label value is 1 if the true label corresponds to the kth category and 0 otherwise, so the terms whose label is 0 are masked out of the calculation; the predicted probability is produced by the softmax function.
To make some pixels more important, a weight w(x) is introduced. A weight map is pre-computed for each labeled image to compensate for the different frequencies of each class of pixels in the training set, making the network focus more on learning the small segmentation boundaries between touching objects. The weight map is calculated according to the following formula:

$$w(x) = w_c(x) + w_0 \cdot \exp\!\left(-\frac{\big(d_1(x)+d_2(x)\big)^2}{2\sigma^2}\right)$$

where $w_c$ is the class-balancing weight map, $d_1$ denotes the distance to the nearest boundary, and $d_2$ the distance to the second-nearest boundary. Based on experience, we set $w_0 = 10$ and $\sigma \approx 5$ pixels.
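The weight map can be computed from a labeled image with distance transforms, as in the following sketch; approximating the class-balancing term $w_c$ as 1 is an assumption made for brevity.

```python
import numpy as np
from scipy import ndimage

def weight_map(labels, w0=10.0, sigma=5.0):
    """labels: (H, W) integer mask; 0 = background, >0 = object instances."""
    ids = [i for i in np.unique(labels) if i != 0]
    w = np.ones(labels.shape, dtype=np.float64)    # w_c approximated as 1
    if len(ids) < 2:
        return w                                   # d1 + d2 needs two objects
    # Distance from each pixel to the border of every object.
    dists = np.stack([ndimage.distance_transform_edt(labels != i) for i in ids])
    dists.sort(axis=0)
    d1, d2 = dists[0], dists[1]                    # nearest / second-nearest
    return w + w0 * np.exp(-((d1 + d2) ** 2) / (2.0 * sigma ** 2))

w = weight_map(np.random.randint(0, 3, size=(64, 64)))
```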
Third, Loss3 and Loss4 are cross-entropy losses, where the output of the boundary head is used to coordinate the semantic segmentation and boundary detection tasks and to enhance the function of the boundary attention guiding module; the losses can be defined accordingly (the formula is not reproduced in the source text),
where t represents a predefined threshold, and $b_i$, $s_{i,c}$, and $\hat{s}_{i,c}$ are respectively the boundary output, the segmentation ground truth, and the prediction result for class c at the ith pixel.
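One plausible form of such a boundary-aware cross-entropy, counting the segmentation loss only at pixels whose predicted boundary confidence exceeds t, is sketched below; since the formula is not reproduced in the source, the exact masking scheme is an assumption.

```python
import torch
import torch.nn.functional as F

def boundary_aware_ce(seg_logits, target, boundary_logits, t=0.8):
    """seg_logits: (N,C,H,W); target: (N,H,W) long; boundary_logits: (N,1,H,W)."""
    per_pixel = F.cross_entropy(seg_logits, target, reduction='none')  # (N,H,W)
    mask = (torch.sigmoid(boundary_logits).squeeze(1) > t).float()
    return (per_pixel * mask).sum() / mask.sum().clamp(min=1.0)

loss3 = boundary_aware_ce(torch.randn(1, 2, 24, 24),
                          torch.randint(0, 2, (1, 24, 24)),
                          torch.randn(1, 1, 24, 24))
```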
Thus, the loss function of the final network is expressed as:

$$\mathrm{Loss} = \lambda_1 L_1 + \lambda_2 L_2 + \lambda_3 L_3 + \lambda_4 L_4$$

The training loss parameters are set to $\lambda_1 = 0.4$, $\lambda_2 = 0.2$, $\lambda_3 = 0.2$, $\lambda_4 = 0.2$, and $t = 0.8$; $\lambda_1$ is set largest in order to enhance the learning of boundaries.
VR-assisted display: a three-dimensional model is generated from the patient's CT and MRI segmentation results, input into the VR device, and rendered on the VR head-mounted display, providing an intuitive, stereoscopic, and realistic three-dimensional virtual display for simulated-surgery planning and reference.
Fig. 7 is a schematic flow chart of a virtual reality-based hip arthroscopic surgery teaching assistance method provided by an embodiment of the present application, comprising:
S701, inputting a hip joint CT image into a preset image segmentation model and outputting a hip joint bone image, wherein the hip joint bone image includes first pelvis position information and first femur position information;
S702, inputting a hip joint MRI image into the preset image segmentation model and outputting a hip joint region image, wherein the hip joint region image includes second pelvis position information, second femur position information, and soft tissue position information;
S703, determining a registration position based on the first pelvis position information, the first femur position information, the second pelvis position information, the second femur position information, and the soft tissue position information, performing registration, and performing fusion reconstruction to obtain a multi-modal information image of the hip joint region;
S704, performing three-dimensional model rendering based on the multi-modal information image of the hip joint region, so as to provide three-dimensional virtual display for assisted teaching;
wherein the preset image segmentation model is obtained by model training based on a neural network. The neural network uses cascaded residual blocks as its backbone and extracts features with sub-networks of different depths and widths. Its structure comprises: three convolution layers and a pooling layer that perform downsampling, reducing the image size, reducing the amount of computation, and accelerating model inference; and three network branches connected thereafter: a proportional branch responsible for parsing and retaining the detailed information in the high-resolution feature map; an integral branch responsible for aggregating local and global context information to capture long-range dependencies; and a differential branch responsible for extracting high-frequency features to predict boundary regions.
Fig. 8 shows a schematic structural diagram of an electronic device according to an embodiment of the present application.
The electronic device may include a processor 801 and a memory 802 storing computer program instructions.
In particular, the processor 801 may include a Central Processing Unit (CPU), or an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), or may be configured as one or more integrated circuits that implement embodiments of the present application.
Memory 802 may include mass storage for data or instructions. By way of example, and not limitation, memory 802 may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disk, a magneto-optical disk, magnetic tape, or a Universal Serial Bus (USB) drive, or a combination of two or more of the above. Memory 802 may include removable or non-removable (or fixed) media, where appropriate. The memory 802 may be internal or external to the electronic device, where appropriate. In a particular embodiment, the memory 802 may be a non-volatile solid-state memory.
In one embodiment, memory 802 may be Read Only Memory (ROM). In one embodiment, the ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically Erasable PROM (EEPROM), electrically rewritable ROM (EAROM), or flash memory, or a combination of two or more of these.
The processor 801 implements any of the above embodiments of the virtual reality-based hip arthroscopic surgical assistance teaching method by reading and executing computer program instructions stored in the memory 802.
In one example, the electronic device may also include a communication interface 803 and a bus 810. As shown in fig. 8, the processor 801, the memory 802, and the communication interface 803 are connected to each other via a bus 810 and perform communication with each other.
Communication interface 803 is primarily used to implement communication between modules, devices, units, and/or apparatuses in an embodiment of the present application.
Bus 810 includes hardware, software, or both, and couples the components of the electronic device to each other. By way of example, and not limitation, the bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a Front Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association local bus (VLB), another suitable bus, or a combination of two or more of the above. Bus 810 may include one or more buses, where appropriate. Although embodiments of the application describe and illustrate a particular bus, the application contemplates any suitable bus or interconnect.
In addition, in combination with the hip arthroscopic surgery assistance teaching method based on virtual reality in the above embodiment, the embodiment of the application can be implemented by providing a computer readable storage medium. The computer readable storage medium has stored thereon computer program instructions; the computer program instructions, when executed by the processor, implement any of the virtual reality-based hip arthroscopic surgery assistance teaching methods of the above embodiments.
It should be understood that the application is not limited to the particular arrangements and instrumentality described above and shown in the drawings. For the sake of brevity, a detailed description of known methods is omitted here. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present application are not limited to the specific steps described and shown, and those skilled in the art can make various changes, modifications and additions, or change the order between steps, after appreciating the spirit of the present application.
The functional blocks shown in the above-described structural block diagrams may be implemented in hardware, software, firmware, or a combination thereof. When implemented in hardware, it may be, for example, an electronic circuit, an Application Specific Integrated Circuit (ASIC), suitable firmware, a plug-in, a function card, or the like. When implemented in software, the elements of the application are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine readable medium or transmitted over transmission media or communication links by a data signal carried in a carrier wave. A "machine-readable medium" may include any medium that can store or transfer information. Examples of machine-readable media include electronic circuitry, semiconductor memory devices, ROM, flash memory, erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, radio Frequency (RF) links, and the like. The code segments may be downloaded via computer networks such as the internet, intranets, etc.
It should also be noted that the exemplary embodiments mentioned in this disclosure describe some methods or systems based on a series of steps or devices. However, the present application is not limited to the order of the above-described steps, that is, the steps may be performed in the order mentioned in the embodiments, or may be performed in a different order from the order in the embodiments, or several steps may be performed simultaneously.
Aspects of the present application are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, enable the implementation of the functions/acts specified in the flowchart and/or block diagram block or blocks. Such a processor may be, but is not limited to being, a general purpose processor, a special purpose processor, an application specific processor, or a field programmable logic circuit. It will also be understood that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware which performs the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In the foregoing, only the specific embodiments of the present application are described, and it will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, modules and units described above may refer to the corresponding processes in the foregoing method embodiments, which are not repeated herein. It should be understood that the scope of the present application is not limited thereto, and any equivalent modifications or substitutions can be easily made by those skilled in the art within the technical scope of the present application, and they should be included in the scope of the present application.

Claims (10)

1. A virtual reality-based hip arthroscopic surgery assistance teaching system, comprising:
a first image segmentation module configured to input a hip joint CT image into a preset image segmentation model and output a hip joint bone image, wherein the hip joint bone image includes first pelvis position information and first femur position information;
a second image segmentation module configured to input a hip joint MRI image into a preset image segmentation model and output a hip joint region image, wherein the hip joint region image includes second pelvis position information, second femur position information, and soft tissue position information;
a registration fusion reconstruction module configured to determine a registration position based on the first pelvis position information, the first femur position information, the second pelvis position information, the second femur position information, and the soft tissue position information, perform registration, and perform fusion reconstruction to obtain a hip joint multi-modal information image;
a rendering display module configured to render a three-dimensional model based on the hip joint multi-modal information image for assisted teaching by virtual display in three-dimensional space;
wherein the preset image segmentation model is obtained by training a neural network whose structure uses cascaded residual blocks as the backbone and extracts features through sub-networks of different depths and widths; the neural network structure comprises: three convolution layers and a pooling layer that perform downsampling to reduce image size, cut computation, and accelerate model inference; followed by three network branches: a proportional branch that parses and retains the detail information in the high-resolution feature map; an integral branch that aggregates local and global context information to capture long-range dependencies; and a differential branch that extracts high-frequency features to predict the boundary region.
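(For orientation only, not part of the claims: the following is a minimal PyTorch sketch of the three-branch layout recited in claim 1. All module names, channel widths, strides, and the placeholder fusion at the end are assumptions for exposition; the claim does not fix any of them.)

```python
# Illustrative sketch of claim 1's backbone: a shared downsampling stem,
# then proportional / integral / differential branches built from
# cascaded residual blocks. Channel counts and strides are assumed.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic residual block used to build the cascaded backbone."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(x + self.body(x))

class ThreeBranchSegNet(nn.Module):
    def __init__(self, in_ch=1, ch=64, n_classes=3):
        super().__init__()
        # Shared stem: three convolution layers plus a pooling layer for
        # downsampling, as recited in the claim.
        self.stem = nn.Sequential(
            nn.Conv2d(in_ch, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
        # Proportional branch: keeps resolution to retain fine detail.
        self.proportional = nn.Sequential(ResidualBlock(ch), ResidualBlock(ch))
        # Integral branch: further downsampled to aggregate wide context.
        self.integral = nn.Sequential(
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), ResidualBlock(ch),
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), ResidualBlock(ch),
        )
        # Differential branch: extracts high-frequency boundary features.
        self.differential = nn.Sequential(ResidualBlock(ch), ResidualBlock(ch))
        self.head = nn.Conv2d(ch, n_classes, 1)

    def forward(self, x):
        f = self.stem(x)
        p, i, d = self.proportional(f), self.integral(f), self.differential(f)
        # Claim 2 fuses the branches through attention-guided modules;
        # a plain upsample-and-sum stands in for that fusion here.
        i = nn.functional.interpolate(i, size=p.shape[-2:], mode="bilinear",
                                      align_corners=False)
        return self.head(p + i + d)
```

The simple sum in `forward` is only a placeholder; the pixel and boundary attention guiding modules of claims 2, 4, and 7 would take its place.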
2. The virtual reality-based hip arthroscopic surgery assistance teaching system of claim 1, wherein the feature map obtained after the proportional branch passes through a convolution layer and the feature map obtained after the differential branch passes through a convolution layer are input to a pixel attention guiding module, whose output then passes through a further convolution layer, so that the proportional branch passes through three convolution layers in total;
the integral branch outputs a feature map after passing through three convolution layers and pooling layers; because the three pooling layers reduce its size, the feature map output by the integral branch is restored to the original size through a pyramid pooling module;
the feature map obtained after the differential branch passes through a convolution layer is combined with the feature map obtained after the proportional branch passes through a convolution layer and fed to the next convolution layer, the differential branch likewise passing through three convolution layers in total;
the output results of the three branches are jointly input to a boundary attention guiding module, and the resulting feature map passes through a convolution layer, a BN layer, and a ReLU activation function to output a prediction mask.
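(Illustrative only: read literally, the tail of claim 2 is a conv–BN–ReLU prediction head applied to the fused feature map. The sketch below assumes a channel count, and the final 1×1 projection to class logits is an added assumption, since the claim stops at the ReLU.)

```python
# Hedged sketch of claim 2's prediction head: convolution, BN layer,
# and ReLU on the fused features, then an assumed 1x1 conv for logits.
import torch.nn as nn

def make_prediction_head(in_ch: int = 64, n_classes: int = 3) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(in_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(in_ch, n_classes, kernel_size=1),  # per-pixel mask logits
    )
```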
3. The virtual reality-based hip arthroscopic surgical assistance teaching system of claim 2, further comprising:
a loss function calculation module configured to: perform boundary detection on the real mask to obtain a boundary mask; calculate a loss function Loss1 from the boundary mask and the output result of the differential branch; calculate a loss function Loss2 from the prediction mask and the real mask; calculate a loss function Loss3 jointly from the prediction mask, the real mask, and the output result of the differential branch; calculate a loss function Loss4 from the output result of the proportional branch and the real mask; and calculate the final loss function comprehensively from Loss1, Loss2, Loss3, and Loss4.
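(Illustrative only: claim 3 fixes which tensors enter each loss but not the loss types, the boundary detector, or the weights. The sketch below assumes binary cross-entropy throughout, a morphological-gradient boundary detector, an edge-reweighting reading of Loss3, and equal weights.)

```python
# Hedged sketch of the composite loss in claim 3. Loss choices, the
# boundary detector, and the weights w are all assumptions.
import torch
import torch.nn.functional as F

def boundary_mask(true_mask: torch.Tensor) -> torch.Tensor:
    """Approximate boundary detection via a morphological gradient
    (3x3 dilation minus 3x3 erosion of the binary mask)."""
    m = true_mask.float()
    dilated = F.max_pool2d(m, 3, stride=1, padding=1)
    eroded = -F.max_pool2d(-m, 3, stride=1, padding=1)
    return (dilated - eroded).clamp(0, 1)

def total_loss(pred_mask, true_mask, diff_out, prop_out,
               w=(1.0, 1.0, 1.0, 1.0)):
    b = boundary_mask(true_mask)
    # Loss1: differential branch output against the boundary mask.
    loss1 = F.binary_cross_entropy_with_logits(diff_out, b)
    # Loss2: prediction mask against the real mask.
    loss2 = F.binary_cross_entropy_with_logits(pred_mask, true_mask)
    # Loss3 couples prediction, ground truth, and the boundary estimate;
    # here read as cross-entropy re-weighted toward boundary pixels.
    edge_w = 1.0 + torch.sigmoid(diff_out)
    loss3 = F.binary_cross_entropy_with_logits(pred_mask, true_mask,
                                               weight=edge_w)
    # Loss4: proportional branch output against the real mask.
    loss4 = F.binary_cross_entropy_with_logits(prop_out, true_mask)
    return w[0]*loss1 + w[1]*loss2 + w[2]*loss3 + w[3]*loss4
```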
4. The virtual reality-based hip arthroscopic surgery assistance teaching system of claim 3, further comprising:
the pixel attention guiding module, configured to use an attention mechanism to enhance the interaction between the feature maps of the proportional branch and the differential branch;
wherein the two inputs of the pixel attention guiding module are the output of the proportional branch and the output of the differential branch; the output of the proportional branch passes through convolution layers (3×3), BN layers, and ReLU activation functions to obtain a feature map T1 and a feature map T2;
the output of the differential branch passes through convolution layers (3×3), BN layers, and ReLU activation functions to obtain a feature map T4 and a feature map T5;
the feature map obtained after feature map T1 passes through a convolution layer is combined with the feature map obtained after feature map T4 passes through a convolution layer to obtain a feature map T3;
a feature map T6 is obtained by fusing feature map T3 with the feature map obtained after feature map T2 passes through a convolution layer;
a feature map T7 is obtained by fusing feature map T3 with the feature map obtained after feature map T5 passes through a convolution layer;
the feature map T7 and the feature map T6 are combined to obtain a feature map T8, which yields output 1 after a ReLU activation function;
the output of the differential branch passes through a convolution layer (3×3), a BN layer, and a ReLU activation function to obtain output 2.
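(Illustrative only: claim 4 names the intermediate feature maps T1–T8 but leaves the fusion operator and channel widths open. The sketch below assumes elementwise addition for both "combine" and "fuse", a single channel width, and that T1/T2 and T4/T5 come from two parallel conv–BN–ReLU blocks.)

```python
# Hedged sketch of the pixel attention guiding module of claim 4.
import torch
import torch.nn as nn

def cbr(ch):
    """conv(3x3) + BN + ReLU block used repeatedly in the claim."""
    return nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1),
                         nn.BatchNorm2d(ch), nn.ReLU(inplace=True))

class PixelAttentionGuide(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.p1, self.p2 = cbr(ch), cbr(ch)        # proportional -> T1, T2
        self.d4, self.d5 = cbr(ch), cbr(ch)        # differential -> T4, T5
        self.c1 = nn.Conv2d(ch, ch, 3, padding=1)  # T1 -> pre-T3
        self.c4 = nn.Conv2d(ch, ch, 3, padding=1)  # T4 -> pre-T3
        self.c2 = nn.Conv2d(ch, ch, 3, padding=1)  # T2 -> pre-T6
        self.c5 = nn.Conv2d(ch, ch, 3, padding=1)  # T5 -> pre-T7
        self.out2 = cbr(ch)                        # differential -> output 2
        self.act = nn.ReLU(inplace=True)

    def forward(self, prop, diff):
        t1, t2 = self.p1(prop), self.p2(prop)
        t4, t5 = self.d4(diff), self.d5(diff)
        t3 = self.c1(t1) + self.c4(t4)   # combine the two branch streams
        t6 = self.c2(t2) + t3            # fuse T2's conv output with T3
        t7 = self.c5(t5) + t3            # fuse T5's conv output with T3
        out1 = self.act(t6 + t7)         # T8, then ReLU -> output 1
        out2 = self.out2(diff)           # conv-BN-ReLU -> output 2
        return out1, out2
```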
5. The virtual reality-based hip arthroscopic surgery assistance teaching system of claim 4, further comprising:
the pyramid pooling module, configured to aggregate context information from different regions so as to improve the network's ability to capture global information;
wherein the pyramid pooling module is configured to: apply pooling at several scales to the original feature map to obtain multiple feature maps of different sizes; concatenate these along the channel dimension; concatenate the result with the original feature map; and output a composite feature map that mixes multiple scales, thereby attending to both global semantic information and local detail information.
6. The virtual reality-based hip arthroscopic surgery assistance teaching system of claim 5, wherein the pyramid pooling module is further configured to: perform pooling operations of different scales on the original feature map to obtain feature maps of different sizes across five branches; upsample the pooled feature maps back to the original feature map size (6×6); and concatenate along the channel dimension to obtain the final composite feature map;
a first branch: (6×6) pooling with an output size of (1×1), upsampled to (6×6) by bilinear interpolation;
a second branch: (3×3) pooling with an output size of (2×2), upsampled to (6×6) by bilinear interpolation;
a third branch: (2×2) pooling with an output size of (3×3), upsampled to (6×6) by bilinear interpolation;
a fourth branch: (1×1) pooling with an output size of (6×6);
a fifth branch: the input original feature map itself, acting as a residual connection;
the output results of the first four branches are feature-concatenated, then concatenated with the fifth branch, and the result is output.
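(Illustrative only: for a (6×6) input, the four pooled branches of claim 6 yield output sizes (1×1), (2×2), (3×3), and (6×6). Adaptive average pooling reproduces exactly those sizes, so the sketch below uses it in place of the fixed kernels; the channel handling is an assumption.)

```python
# Hedged sketch of the five-branch pyramid pooling module of claim 6.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidPooling(nn.Module):
    def __init__(self, bins=(1, 2, 3, 6)):
        super().__init__()
        self.bins = bins  # output sizes of the four pooled branches

    def forward(self, x):  # x: (N, C, 6, 6)
        size = x.shape[-2:]
        pooled = [
            F.interpolate(F.adaptive_avg_pool2d(x, b), size=size,
                          mode="bilinear", align_corners=False)
            for b in self.bins
        ]
        # Concatenate the four pooled branches, then splice the result
        # with the fifth (residual) branch, i.e. the original map.
        multi_scale = torch.cat(pooled, dim=1)
        return torch.cat([multi_scale, x], dim=1)  # (N, 5*C, 6, 6)

# Usage: a (1, 64, 6, 6) input yields a (1, 320, 6, 6) composite map.
# y = PyramidPooling()(torch.randn(1, 64, 6, 6))
```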
7. The virtual reality-based hip arthroscopic surgery assistance teaching system of claim 6, further comprising:
the boundary attention guiding module, configured to: pass the differential branch input through a Sigmoid activation function to obtain two branch outputs; fuse one branch output with the features of the differential branch input and then combine the result with the features of the proportional branch input; fuse the other branch output with the features of the proportional branch input and then combine the result with the features of the differential branch; pass each of the two combined features through a convolution layer (3×3 convolution kernel) and a BN layer; and combine the two to output the final features.
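(Illustrative only: claim 7 says the Sigmoid output is "fused" with one input and "combined" with the other without fixing either operator. The sketch below reads fusion as elementwise multiplication and combination as addition, the common attention-gating pattern, but that mapping is an assumption.)

```python
# Hedged sketch of the boundary attention guiding module of claim 7.
import torch
import torch.nn as nn

class BoundaryAttentionGuide(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.conv_bn_a = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1),
                                       nn.BatchNorm2d(ch))
        self.conv_bn_b = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1),
                                       nn.BatchNorm2d(ch))

    def forward(self, prop, diff):
        a = torch.sigmoid(diff)       # boundary attention map, two uses
        branch1 = a * diff + prop     # fuse with diff, combine with prop
        branch2 = a * prop + diff     # fuse with prop, combine with diff
        # conv(3x3) + BN on each combined feature, then combine both.
        return self.conv_bn_a(branch1) + self.conv_bn_b(branch2)
```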
8. A virtual reality-based hip arthroscopic surgery assistance teaching method, comprising:
inputting a hip joint CT image into a preset image segmentation model and outputting a hip joint bone image, wherein the hip joint bone image includes first pelvis position information and first femur position information;
inputting a hip joint MRI image into a preset image segmentation model and outputting a hip joint region image, wherein the hip joint region image includes second pelvis position information, second femur position information, and soft tissue position information;
determining a registration position based on the first pelvis position information, the first femur position information, the second pelvis position information, the second femur position information, and the soft tissue position information, performing registration, and performing fusion reconstruction to obtain a hip joint multi-modal information image;
rendering a three-dimensional model based on the hip joint multi-modal information image for assisted teaching by virtual display in three-dimensional space;
wherein the preset image segmentation model is obtained by training a neural network whose structure uses cascaded residual blocks as the backbone and extracts features through sub-networks of different depths and widths; the neural network structure comprises: three convolution layers and a pooling layer that perform downsampling to reduce image size, cut computation, and accelerate model inference; followed by three network branches: a proportional branch that parses and retains the detail information in the high-resolution feature map; an integral branch that aggregates local and global context information to capture long-range dependencies; and a differential branch that extracts high-frequency features to predict the boundary region.
9. An electronic device, characterized in that the electronic device comprises: a processor and a memory storing computer program instructions;
the processor, when executing the computer program instructions, implements the virtual reality-based hip arthroscopic surgery assistance teaching method of claim 8.
10. A computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the virtual reality-based hip arthroscopic surgery assistance teaching method of claim 8.
CN202311112508.6A 2023-08-31 2023-08-31 Hip arthroscope operation auxiliary teaching system based on virtual reality Pending CN116959307A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311112508.6A CN116959307A (en) 2023-08-31 2023-08-31 Hip arthroscope operation auxiliary teaching system based on virtual reality

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311112508.6A CN116959307A (en) 2023-08-31 2023-08-31 Hip arthroscope operation auxiliary teaching system based on virtual reality

Publications (1)

Publication Number Publication Date
CN116959307A true CN116959307A (en) 2023-10-27

Family

ID=88446447

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311112508.6A Pending CN116959307A (en) 2023-08-31 2023-08-31 Hip arthroscope operation auxiliary teaching system based on virtual reality

Country Status (1)

Country Link
CN (1) CN116959307A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117635951A (en) * 2024-01-24 2024-03-01 苏州大学附属第二医院 Determination method and system for automatically identifying hip osteoporosis based on X-ray image
CN117635951B (en) * 2024-01-24 2024-05-03 苏州大学附属第二医院 Determination method and system for automatically identifying hip osteoporosis based on X-ray image

Similar Documents

Publication Publication Date Title
WO2020215984A1 (en) Medical image detection method based on deep learning, and related device
CN109242844B (en) Pancreatic cancer tumor automatic identification system based on deep learning, computer equipment and storage medium
CN112966697B (en) Target detection method, device and equipment based on scene semantics and storage medium
CN111369567B (en) Method and device for segmenting target object in three-dimensional image and electronic equipment
CN116959307A (en) Hip arthroscope operation auxiliary teaching system based on virtual reality
CN111291825A (en) Focus classification model training method and device, computer equipment and storage medium
Shu et al. LVC-Net: Medical image segmentation with noisy label based on local visual cues
Letscher et al. Image segmentation using topological persistence
CN116543221A (en) Intelligent detection method, device and equipment for joint pathology and readable storage medium
CN112634231A (en) Image classification method and device, terminal equipment and storage medium
Li et al. S3egANet: 3D spinal structures segmentation via adversarial nets
CN114332572B (en) Method for extracting breast lesion ultrasonic image multi-scale fusion characteristic parameters based on saliency map-guided hierarchical dense characteristic fusion network
Wang et al. Msfnet: multistage fusion network for infrared and visible image fusion
Ann et al. Multi-scale conditional generative adversarial network for small-sized lung nodules using class activation region influence maximization
CN112884702A (en) Polyp identification system and method based on endoscope image
CN116704549A (en) Position detection method, device, equipment and storage medium for three-dimensional space key points
CN114565953A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN116309636A (en) Knee joint segmentation method, device and equipment based on multi-task neural network model
CN116152197A (en) Knee joint segmentation method, knee joint segmentation device, electronic equipment and computer readable storage medium
CN116521915A (en) Retrieval method, system, equipment and medium for similar medical images
CN116543222A (en) Knee joint lesion detection method, device, equipment and computer readable storage medium
CN112837318B (en) Ultrasonic image generation model generation method, ultrasonic image synthesis method, medium and terminal
CN115861207A (en) Lightweight medical image segmentation method and system
CN112801964B (en) Multi-label intelligent detection method, device, equipment and medium for lung CT image
CN116310618A (en) Registration network training device and method for multimode images and registration method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination