CN117853665A - Image generation method, device and medium for acetabulum and guide - Google Patents

Image generation method, device and medium for acetabulum and guide

Info

Publication number
CN117853665A
CN117853665A, CN202410239816.3A, CN202410239816A
Authority
CN
China
Prior art keywords
model
acetabulum
guide
dimensional code
real
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410239816.3A
Other languages
Chinese (zh)
Inventor
冯卫
付莉
赵耀
孙鹏
韩宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
First Hospital of Jilin University
Original Assignee
First Hospital of Jilin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by First Hospital of Jilin University
Priority to CN202410239816.3A
Publication of CN117853665A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 17/00 Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations
    • G06K 17/0022 Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations, arrangements or provisions for transferring data to distant stations, e.g. from a sensing device
    • G06K 17/0025 Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations, the arrangement consisting of a wireless interrogation device in combination with a device for optically marking the record carrier
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/003 Reconstruction from projections, e.g. tomography
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 10/00 ICT specially adapted for the handling or processing of patient-related medical or healthcare data
    • G16H 10/20 ICT specially adapted for the handling or processing of patient-related medical or healthcare data for electronic clinical trials or questionnaires
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00 ICT specially adapted for the handling or processing of medical images
    • G16H 30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Graphics (AREA)
  • Primary Health Care (AREA)
  • Epidemiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Public Health (AREA)
  • Software Systems (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Hardware Design (AREA)
  • Geometry (AREA)
  • Prostheses (AREA)

Abstract

The disclosure relates to an image generation method, device and medium for an acetabulum and a guide, in the technical field of computer graphic image generation. The method comprises the following steps: fixing a real pelvis model in a standard lateral position and fixing a two-dimensional code on the upper edge of the acetabulum; performing a CT plain scan of the real pelvis model and exporting the model data; reconstructing the real pelvis model data; uploading the two-dimensional code, extracting marker points and performing spatial positioning; creating an acetabular prosthesis model and a guide model and calibrating their angle information; importing the two-dimensional code generated by Vuforia into Unity; scanning the two-dimensional code in the real environment with head-mounted glasses to obtain spatial data; and rendering a virtual acetabular prosthesis at the acetabular target location of the hip joint. With the generated acetabular image and guide image of the invention, medical students in surgical clinical teaching can be assisted to implant an acetabular prosthesis into the corresponding position of the model more efficiently and accurately.

Description

Image generation method, device and medium for acetabulum and guide
Technical Field
The disclosure relates to the technical field of computer graphic image generation, in particular to an image generation method, device and medium of an acetabulum and a guide.
Background
Surgery is an important branch of medical science whose scope has taken shape over the historical development of medicine and continues to evolve. In ancient times, surgery was limited to a few diseases and injuries of the body surface; with the development of medical science, the etiology and pathology of diseases of every system and organ of the human body have become better understood, and with continuous improvements in diagnostic methods and operative techniques, modern surgery now encompasses many internal diseases as well. Surgery mainly studies how to remove the cause of a patient's disease by operative means so as to treat the patient. As in all of clinical medicine, it requires knowledge of the definition, etiology, presentation, diagnosis, staging, treatment and prognosis of disease, and it focuses on problems related to the operation itself, such as surgical indications, preoperative evaluation and care, operative skills and methods, postoperative care, and surgical complications and prognosis.
Clinical teaching is the key link in cultivating clinical competence, and its quality directly affects the medical level and overall quality of medical students. The traditional surgical teaching mode suffers from a mismatch between theory and practice: the teaching methods are simple and monotonous, students passively receive knowledge, and their clinical comprehensive analysis ability and hands-on ability remain poor. Adopting a teaching mode grounded in clinical practice, reforming teaching in combination with clinical work, reforming surgical teaching methods around concrete problems, and combining teacher-student interaction with clinical practice can improve students' enthusiasm for learning and deliver more outstanding talent to society.
Surgical clinical teaching involves teaching students how to accurately implant an acetabular prosthesis into the corresponding position of a model. In practice, however, it has been found that medical students are largely unable to do so. How to provide an effective means of assisting students in accurately implanting the acetabular prosthesis into the corresponding position of the model is therefore an important subject faced by researchers.
Disclosure of Invention
The disclosure provides an image generation method, device and medium for an acetabulum and a guide, which address the lack of auxiliary means for medical students in surgical clinical teaching to accurately implant an acetabular prosthesis into the corresponding position of a model.
According to a first aspect of the present disclosure, there is provided an image generation method for an acetabulum and a guide, comprising: fixing a real pelvis model in a standard lateral position, and fixing a two-dimensional code serving as an identification marker on the upper edge of the acetabulum; performing a CT plain scan of the real pelvis model, and exporting real pelvis model data using 3D modeling software; reconstructing the real pelvis model data using Unity 3D software; uploading the two-dimensional code to the Vuforia database, extracting marker points and performing spatial positioning; creating an acetabular prosthesis model and a guide model in the Unity 3D software, and calibrating the angle information of the guide and the acetabulum; importing the two-dimensional code generated by Vuforia into Unity, and deploying the resulting APP to HoloLens 2 head-mounted glasses; scanning the two-dimensional code in the real environment with the HoloLens 2 head-mounted glasses to obtain spatial data; and rendering a virtual acetabular prosthesis at the acetabular target location of the hip joint.
In some embodiments, uploading the two-dimensional code to the Vuforia database and performing marker point extraction and spatial positioning includes: registering with Vuforia and uploading the two-dimensional code; downloading and exporting the Vuforia recognition package data, and positioning the recognition image correctly relative to the displayed model in Unity; building the APP onto the HoloLens; the HoloLens automatically calling Vuforia based on the real two-dimensional code to calculate the position of the recognition image in HoloLens space, displaying the virtual two-dimensional code and the corresponding model, and completing the spatial positioning.
In some embodiments, creating the guide model includes: in the Unity 3D software, creating a virtual image of the guide model, with the tail end of the guide model placed at the center of the virtual acetabulum.
In some embodiments, calibrating the angle information of the guide and the acetabulum includes: marking the abduction angle and the anteversion angle of the guide and the acetabulum, the abduction angle ranging from 38 degrees to 43 degrees and the anteversion angle ranging from 13 degrees to 19 degrees, so that medical students can adjust the abduction angle and the anteversion angle of the acetabular prosthesis according to operative requirements.
In some embodiments, rendering the virtual acetabular prosthesis at the acetabular target location of the hip joint comprises: scanning the real model to obtain scan data; performing 3D modeling on the scan data; displaying the resulting FBX three-dimensional model on the Unity platform; adding the Vuforia two-dimensional code data to the Unity platform, and marking the specific positions of the hip joint model and the two-dimensional code in space and the specific positions of the acetabulum and the hip joint in space.
In some embodiments, in the HoloLens 2 head-mounted glasses, the anteversion angle and abduction angle of acetabular prosthesis placement are controlled with hand gestures or voice; rotation of the virtual pelvis model in different orientations is controlled; and a system reset is controlled.
In some embodiments, the control range of the anteversion angle is 0 to 19 degrees and the control range of the abduction angle is 0 to 43 degrees.
According to a second aspect of the present disclosure, there is provided an image generation device for an acetabulum and a guide, comprising: a fixing module for fixing the real pelvis model in a standard lateral position and fixing a two-dimensional code serving as an identification marker on the upper edge of the acetabulum; a plain scan module for performing a CT plain scan of the real pelvis model and exporting real pelvis model data using 3D modeling software; a reconstruction module for reconstructing the real pelvis model data using Unity 3D software; an uploading module for uploading the two-dimensional code to the Vuforia database, extracting marker points and performing spatial positioning; a creation module for creating an acetabular prosthesis model and a guide model in the Unity 3D software and calibrating the angle information of the guide and the acetabulum; an importing module for importing the two-dimensional code generated by Vuforia into Unity and deploying the resulting APP to HoloLens 2 head-mounted glasses; a scanning module for scanning the two-dimensional code in the real environment with the HoloLens 2 head-mounted glasses to obtain spatial data; and a rendering module for rendering a virtual acetabular prosthesis at the acetabular target location of the hip joint.
According to a third aspect of the present disclosure, there is provided an image generating apparatus of an acetabulum and a guide, comprising: a memory and a processor coupled to the memory, the processor configured to perform the image generation method of the acetabulum and guide as described above based on instructions stored in the memory.
According to a fourth aspect of the present disclosure, a computer-readable storage medium is presented, having stored thereon computer program instructions which, when executed by a processor, implement an image generation method of an acetabulum and guide as described above.
The advantages of the present disclosure are as follows. With the generated acetabular image and guide image, medical students can be assisted to implant the acetabular prosthesis into the corresponding position of the model more efficiently and accurately in surgical clinical teaching. The abduction angle and anteversion angle of the guide and the acetabular prosthesis (image) can be flexibly marked, the abduction angle ranging from 38 to 43 degrees and the anteversion angle from 13 to 19 degrees, so that medical students can adjust the abduction and anteversion angles of the acetabular prosthesis (image) as needed; supported by the method, medical students can implant the acetabular prosthesis into the corresponding position of the model flexibly and accurately, which enlarges the application scenarios of the method. The disclosure further involves the use of HoloLens 2 head-mounted glasses: medical students can use hand gestures or voice to control the anteversion and abduction angles of acetabular prosthesis placement (that is, to control the generated acetabular image through gestures or voice), which helps medical students implant the acetabular prosthesis more accurately during surgical clinical teaching. The disclosure achieves qualified teaching training by combining the generated virtual acetabulum and guide image (or hip three-dimensional image) with the real world.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description, serve to explain the principles of the disclosure.
The present disclosure will be more clearly understood from the following detailed description with reference to the accompanying drawings.
Fig. 1 is a flow chart illustrating an image generation method of an acetabulum and guide in accordance with some embodiments of the present disclosure.
Fig. 2 is a diagram illustrating a project study technology roadmap in accordance with some embodiments of the disclosure.
Fig. 3 is a diagram illustrating an application technology architecture according to some embodiments of the present disclosure.
Fig. 4 is a schematic diagram illustrating a pelvic model with two-dimensional code markers, according to some embodiments of the present disclosure.
Fig. 5 is a schematic diagram illustrating rendering a virtual acetabular prosthesis according to some embodiments of the disclosure.
Fig. 6 is a schematic diagram illustrating a virtual interface according to some embodiments of the present disclosure.
Fig. 7 is a block diagram illustrating an image generation device of an acetabulum and guide in accordance with some embodiments of the present disclosure.
Fig. 8 is a block diagram illustrating an image generation device of an acetabulum and guide in accordance with further embodiments of the present disclosure.
FIG. 9 is a block diagram illustrating a computer system for implementing some embodiments of the present disclosure.
Detailed Description
Various exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless it is specifically stated otherwise.
Meanwhile, it should be understood that the sizes of the respective parts shown in the drawings are not drawn in actual scale for convenience of description.
The following description of at least one example embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to one of ordinary skill in the relevant art may not be discussed in detail, but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any specific values should be construed as merely illustrative, and not a limitation. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further discussion thereof is necessary in subsequent figures.
At present, surgical clinical teaching involves teaching students how to accurately implant an acetabular prosthesis into the corresponding position of a model; in practice, however, it has been found that medical students are largely unable to do so. How to provide an effective means of assisting students in accurately implanting the acetabular prosthesis into the corresponding position of the model is therefore an important subject faced by researchers.
In view of this, the present disclosure proposes an image generation method, device and medium for an acetabulum and a guide. With the generated acetabular image and guide image, medical students can be assisted to implant the acetabular prosthesis into the corresponding position of the model more efficiently and accurately in surgical clinical teaching. The abduction angle and anteversion angle of the guide and the acetabular prosthesis (image) can be flexibly marked, the abduction angle ranging from 38 to 43 degrees and the anteversion angle from 13 to 19 degrees, so that medical students can adjust the abduction and anteversion angles of the acetabular prosthesis (image) as needed; supported by the method, medical students can implant the acetabular prosthesis into the corresponding position of the model flexibly and accurately, which enlarges the application scenarios of the method. The disclosure further involves the use of HoloLens 2 head-mounted glasses: medical students can use hand gestures or voice to control the anteversion and abduction angles of acetabular prosthesis placement (that is, to control the generated acetabular image through gestures or voice), which helps medical students implant the acetabular prosthesis more accurately during surgical clinical teaching. The disclosure achieves qualified teaching training by combining the generated virtual acetabulum and guide image (or hip three-dimensional image) with the real world.
Fig. 1 is a flow chart illustrating an image generation method of an acetabulum and guide in accordance with some embodiments of the present disclosure. As shown in fig. 1, the image generation method of the acetabulum and the guide includes steps 110 to 180.
In some embodiments, three-dimensional imaging and augmented reality technology can overcome the viewing and positioning constraints a medical student faces during clinical teaching and provide the optimal position and angle for completing a prosthesis implantation. Augmented reality (AR) uses computer and other technology to simulate physical information (visual, tactile, etc.) that is otherwise difficult to experience within a given region of time and space in the real world and then superimposes it: virtual information is applied to the real world and perceived by the human senses, achieving a sensory experience beyond reality.
The augmented reality system implementation process includes: first, a virtual model is acquired, where the virtual model may be structural data or may be obtained by three-dimensional reconstruction of medical image data; second, the virtual model is registered into the real scene to achieve virtual-real fusion; finally, the combined virtual-real scene is displayed in a head-mounted display device.
The present disclosure relates to an augmented reality (AR) glasses system for assisting medical students with positioning. It runs on the Microsoft HoloLens 2 head-mounted AR glasses and achieves teaching training by combining a virtual hip three-dimensional image with the real world.
In step 110, the real pelvis model is fixed in a standard lateral position, and a two-dimensional code serving as an identification marker is fixed on the upper edge of the acetabulum.
In some embodiments, standard lateral position fixation of the real pelvis model can be performed automatically by a dedicated driver, and a two-dimensional code serving as an identification marker is fixed on the upper edge of the acetabulum.
In step 120, a CT plain scan of the real pelvis model is performed, and real pelvis model data is exported using 3D modeling software.
In step 130, the real pelvis model data is reconstructed using Unity 3D software.
In some embodiments, three-dimensional reconstruction of the medical image proceeds as follows: first, the CT equipment acquires CT data; second, the image visualization platform processes the image data, marks it and renders the model; finally, an FBX-format three-dimensional model is created and exported, which can be used by the Unity platform, as shown in fig. 2 and fig. 3.
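For illustration only, the reconstruction step can be sketched as extracting a bone surface mesh from the CT plain-scan volume and exporting it for use in Unity. The library choices (pydicom, scikit-image, trimesh), file paths and the bone threshold below are assumptions, not the patented pipeline.

```python
# Minimal sketch (not the patented pipeline): reconstructing a surface mesh
# from a CT plain-scan volume and exporting it for Unity. Library choices,
# file paths and the HU threshold are illustrative assumptions only.
import glob
import numpy as np
import pydicom
from skimage import measure
import trimesh

# Load the CT slices into a 3D volume (slice order taken from DICOM metadata).
slices = [pydicom.dcmread(f) for f in glob.glob("ct_pelvis/*.dcm")]
slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
volume = np.stack([s.pixel_array * s.RescaleSlope + s.RescaleIntercept
                   for s in slices])

# Extract the bone surface with marching cubes (about 300 HU as a bone threshold).
verts, faces, normals, _ = measure.marching_cubes(volume, level=300.0)

# Export the mesh; Unity imports OBJ directly (FBX export would need a
# separate converter such as Blender).
trimesh.Trimesh(vertices=verts, faces=faces, vertex_normals=normals) \
       .export("pelvis_model.obj")
```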
In step 140, the two-dimensional code is uploaded to the Vuforia database, and marker point extraction and spatial positioning are performed.
In some embodiments, simultaneous localization and mapping is used: the positioning function is realized through the IMU, and the spatial scan and map construction are implemented using SLAM techniques, as shown in fig. 2 and fig. 3.
In some embodiments, Vuforia can use a two-dimensional code, a picture, an object or the like as the recognition target. A two-dimensional code is easy to produce, and its real marker information can be obtained directly.
In some embodiments, spatial positioning is accomplished mainly by computer vision techniques that recognize and capture planar images or three-dimensional objects in real time, place virtual objects through the camera viewfinder, and adjust the position of those objects against the physical background in front of the lens.
In some embodiments, the specific process includes: registering with Vuforia and uploading a planar marker, for example a two-dimensional code. For a HoloLens application it must be emphasized that the size of the recognition image corresponds to its real size, otherwise the virtual and real content will not align. The Vuforia recognition package data is then downloaded and exported for Unity development of the HoloLens app. In Unity, the recognition image must be positioned correctly relative to the displayed model. The app is then built onto the HoloLens. The HoloLens acquires its spatial position in real time; when the real identification two-dimensional code is found, Vuforia is called, the position of the recognition image in HoloLens space is calculated, and the virtual two-dimensional code and the corresponding model are displayed. At this point spatial positioning is achieved.
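The core of this positioning step is a chain of rigid transforms: the detected marker pose in headset space, composed with the model's offset relative to the marker as authored in Unity, gives the model's pose in headset space. The sketch below illustrates that composition with homogeneous 4x4 matrices; it illustrates the math only and is not the Vuforia or HoloLens API, and all values are assumed.

```python
# Sketch of marker-based placement: the world pose of the virtual model is
# T_world_marker (detected) composed with T_marker_model (authored in Unity).
# All names and numeric values are illustrative assumptions.
import numpy as np

def make_pose(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    pose = np.eye(4)
    pose[:3, :3] = rotation
    pose[:3, 3] = translation
    return pose

# Pose of the printed two-dimensional code as reported by the tracker (meters).
T_world_marker = make_pose(np.eye(3), np.array([0.10, 0.05, 0.60]))

# Fixed offset of the pelvis model relative to the marker, set up in the editor.
T_marker_model = make_pose(np.eye(3), np.array([0.00, -0.12, 0.02]))

# Where the virtual pelvis model must be rendered in headset space.
T_world_model = T_world_marker @ T_marker_model
print(T_world_model[:3, 3])   # model origin in world coordinates
```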
In step 150, in the Unity 3D software, an acetabular prosthesis model and a guide model are created and angle information of the guide and the acetabulum is calibrated.
In some embodiments, in the Unity 3D software an acetabular prosthesis model and a guide model are created, and a virtual image of the guide is designed in advance, with its tail end located at the center of the virtual acetabulum. The angles of the guide and the acetabulum are then marked, including an abduction angle of 38 to 43 degrees and an anteversion angle of 13 to 19 degrees; the abduction and anteversion angles of the acetabular prosthesis can be adjusted according to the operative needs of the medical student, and the guide model changes accordingly. In this embodiment the guide adapts flexibly to the specific application scenario, enlarging the scope of application of the disclosure.
In some embodiments, because the model is derived and created from real CT scan data, manual repair of details ensures that the model is consistent with the real object in size. In Unity the system unit is the meter, and spatial coordinates are displayed in three dimensions, including vectors and angles. The hip joint three-dimensional object is calibrated with the Vuforia recognition image; it is again emphasized that the size of the recognition image must correspond to its real size. Verification with a cuboid cigarette case showed the error to be within 2 millimeters.
In some embodiments, calibrating the angle of the guide to the acetabulum is accomplished as follows: when the positions of the real model and the identification two-dimensional code are consistent with the positions of the virtual model and the marked two-dimensional code in Unity, and HoloLens+Vuforia recognizes the two-dimensional code and computes the position information of the real and virtual two-dimensional codes so that they agree, the virtual and real objects are displayed in superposition. The angles required of the real acetabulum and hip joint are then obtained by controlling the virtual acetabulum at the corresponding angles of the virtual model; at this point the superposition of the real and virtual acetabulum is guaranteed, and so is the accuracy of the angles.
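For intuition, the abduction and anteversion angles can be related to the orientation of the cup or guide axis. The sketch below converts a target abduction/anteversion pair into a unit axis and recovers the angles from an arbitrary axis, using a spherical-coordinate convention as an assumption; the actual angle convention and coordinate frame used in the disclosure are not specified, so this is illustrative only.

```python
# Sketch: converting between (abduction, anteversion) and a cup/guide axis
# direction. Frame and angle convention are assumptions:
# x = lateral, y = anterior, z = superior.
import numpy as np

def axis_from_angles(abduction_deg: float, anteversion_deg: float) -> np.ndarray:
    ab, av = np.radians([abduction_deg, anteversion_deg])
    axis = np.array([np.sin(ab) * np.cos(av),   # lateral component
                     np.sin(ab) * np.sin(av),   # anterior component
                     np.cos(ab)])               # superior component
    return axis / np.linalg.norm(axis)

def angles_from_axis(axis: np.ndarray) -> tuple[float, float]:
    axis = axis / np.linalg.norm(axis)
    abduction = np.degrees(np.arccos(axis[2]))
    anteversion = np.degrees(np.arctan2(axis[1], axis[0]))
    return abduction, anteversion

target = axis_from_angles(40.0, 15.0)   # mid-range target from the disclosure
print(angles_from_axis(target))         # approximately (40.0, 15.0)
```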
In some embodiments, the techniques employed mainly include SLAM (simultaneous localization and mapping) and Vuforia image recognition.
In some embodiments, the algorithms include the SLAM algorithm, which realizes positioning, mapping and path planning, and the Vuforia algorithms, which comprise feature extraction with a CNN (convolutional neural network), object detection with the Faster R-CNN algorithm, and position recognition from the spatial coordinates constructed by the HoloLens, realized through spatial geometric transformations.
In step 160, the two-dimensional code generated by Vuforia is imported into Unity, and the resulting APP is deployed to the HoloLens 2 head-mounted glasses.
In step 170, the HoloLens 2 head-mounted glasses are used to scan the two-dimensional code in the real environment and obtain spatial data.
In step 180, a virtual acetabular prosthesis is rendered at the acetabular target location of the hip joint.
The inventor needs to understand how HoloLens displays real and virtual images. MR is the abbreviation of Mixed Reality, a technology that merges the real and virtual worlds; the mixed reality concept was proposed by Microsoft and emphasizes the coexistence and real-time interaction of physical entities and digital objects, such as virtual-real occlusion and environmental reflection. By comparison, AR emphasizes enhancement of the real world, while MR emphasizes the fusion of virtual and real and focuses more on interactions between the virtual digital world and the real world, such as environmental occlusion, humanoid occlusion, scene depth and physical simulation, and on manipulating virtual objects in a natural and instinctive manner. Thanks to the excellent motion-tracking capability of the HoloLens device, a hologram can be placed anywhere in the real space where the user is located; it is fixed in the environment like a real object and remains in place even if the user moves. It is also possible to keep the hologram always in the field of view or have it follow the user. In MR applications, keeping the hologram in the field of view is called display lock, in which the hologram always occupies part of the display area, like a UI element in an ordinary application; this form is generally used to display fixed information such as battery level and time, but it does not merge with the 3D mixed reality scene created by the MR application and causes discomfort, so it is not recommended except in special cases. Keeping the hologram following the user is called body lock: the hologram follows the user while remaining in the 3D space of the MR application. A typical example is the diagnostics panel commonly used in MR development, and a well designed delay and elastic motion effect makes this mode well suited to displaying frequently used menus and tools.
In some embodiments, rendering the acetabular prosthesis mainly comprises: first, performing a CT scan of the real model and carrying out 3D modeling, and displaying the resulting FBX three-dimensional model on the Unity platform; second, adding the Vuforia two-dimensional code data to the Unity platform and marking the specific positions of the acetabulum model and the two-dimensional code in space, as well as the specific positions of the acetabulum and the hip joint in space. The HoloLens then reconstructs the real environment with real-time modeling, and when the two-dimensional code in real space is viewed, the virtual two-dimensional code and hip joint model track, overlap and display in real time along with the real two-dimensional code. The image of the virtual scene superimposed on the real scene is thus rendered.
In the clinical teaching process, a medical student introduces a real artificial acetabular prosthesis together with an implanter into a model, and enables the implanter to completely coincide with a virtual guide, so that the artificial acetabular prosthesis is accurately implanted into a corresponding position of the model.
In some embodiments, a virtual insertion guide is presented in the AR environment; it is fixed at the center of the virtual acetabular prosthesis and keeps an absolutely fixed position relative to the hip prosthesis model. Importantly, once the AR system has performed recognition, the position of the virtual model does not change with changes in the position of the real model, so re-recognition and re-calibration are not required.
In some embodiments, the medical student introduces the real acetabular prosthesis into the model together with the implanter and moves the inserted fixator until it fully coincides with the virtual guide in the AR environment, that is, the real acetabular prosthesis implanter fully coincides with its virtual guide wire. The visualization interface allows multiple planes in the AR environment to be viewed to eliminate perceptual ambiguity. When the position of the placed acetabular prosthesis is satisfactory, the acetabular prosthesis is implanted.
Comparison of the AR-guided group with the traditional freehand-guided group: medical students used an opaque Sawbones pelvis model to place an acetabular cup prosthesis in a simulated operating room and were randomly divided into two groups, one guided by AR technology and the other by the traditional freehand technique. The measurement indicators for both groups include accuracy and precision analysis of the acetabular prosthesis position angles (acetabular anteversion angle and abduction angle) relative to a target value.
The inventors found that the abduction angle and anteversion angle of the acetabular prosthesis in the AR-guided group were more accurate than in the traditional freehand group, and the execution time of the AR-guided group was shorter. The AR-guided simulation of acetabular prosthesis placement is therefore more accurate and precise and saves operating time compared with the traditional freehand technique, and it can be used in surgical clinical teaching.
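Accuracy and precision relative to a target angle are typically summarized as the mean absolute deviation from the target and the spread of the placements; the short sketch below shows one such summary. The numeric samples are invented placeholders, not data from the study.

```python
# Sketch: summarizing accuracy (mean absolute error from the target angle) and
# precision (standard deviation) of placed prosthesis angles.
# The sample values below are invented placeholders, not study data.
import statistics

def summarize(angles_deg, target_deg):
    accuracy = statistics.mean(abs(a - target_deg) for a in angles_deg)
    precision = statistics.stdev(angles_deg)
    return accuracy, precision

ar_group_abduction = [39.5, 41.0, 40.2, 38.8, 40.9]    # placeholder data
freehand_abduction = [34.0, 46.5, 42.8, 36.1, 44.0]    # placeholder data

print("AR group:      ", summarize(ar_group_abduction, target_deg=40.0))
print("Freehand group:", summarize(freehand_abduction, target_deg=40.0))
```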
The advantages of the present disclosure are: platform development based on the headset holonens 2 technology is easy. The AR equipment used for related research at present mainly comprises AR holonens 2 head-wearing glasses, an AR environment workstation, an AR-HIP system, an AR visualization tool, an AR navigation system and the like; while AR-HIP systems, AR environment workstations, AR visualization tools can provide accurate acetabular cup placement angles, the utility of being able to navigate is still unclear; the AR navigation system has the defects of complicated equipment, inconvenient carrying and the like; the AR head-mounted system is convenient to carry and suitable for operation of traditional Chinese medicine students in the clinical teaching process; real world and virtual world are combined through the real-time operation of gestures, sounds and the like of medical students; the AR can provide three-dimensional stereoscopic images for medical students according to a plan; the data error is within 3 mm, so that the accuracy and the precision are realized; microsoft Hololens 2 eyepiece has man-machine interaction's simplicity.
The method and device overcome the drawbacks of traditional display equipment, such as complexity and inconvenient portability; the HoloLens 2 system is easier to carry and better suited to the clinical teaching environment. Medical students perform real-time operation through gestures, voice and the like, and acceptance is higher.
As shown in fig. 6, compared with a robot, the medical student can operate in real time through gestures or voice and adjust the abduction and anteversion angles of the acetabular prosthesis at any time as needed, which has the advantage of simple operation and improves the accuracy of acetabular prosthesis implantation. In the virtual operation interface displayed in the HoloLens 2 glasses, the operation keys can be adjusted through hand gestures or voice; the adjustable contents include the anteversion angle of acetabular prosthesis placement, the abduction angle, rotation of the virtual pelvis model in different directions (up, down, left and right), a system reset key and so on. The adjustable range of the anteversion angle for prosthesis placement is 0 to 19 degrees and that of the abduction angle is 0 to 43 degrees, and the interface is simple to operate.
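The interface behavior described above amounts to applying each gesture or voice command as an increment to the current angles and clamping the result to the allowed ranges (anteversion 0 to 19 degrees, abduction 0 to 43 degrees), plus a reset command. The sketch below illustrates that state logic only; it is not HoloLens or MRTK API code, and the command names and initial values are assumptions.

```python
# Sketch of the angle-adjustment state driven by gesture/voice commands,
# with the adjustable ranges from the disclosure. Command names are assumed.
from dataclasses import dataclass

ANTEVERSION_RANGE = (0.0, 19.0)   # degrees
ABDUCTION_RANGE = (0.0, 43.0)     # degrees

def clamp(value, lo, hi):
    return max(lo, min(hi, value))

@dataclass
class CupPlacementState:
    anteversion: float = 15.0     # initial target values, illustrative
    abduction: float = 40.0

    def apply(self, command: str, step: float = 1.0) -> None:
        if command == "anteversion_up":
            self.anteversion = clamp(self.anteversion + step, *ANTEVERSION_RANGE)
        elif command == "anteversion_down":
            self.anteversion = clamp(self.anteversion - step, *ANTEVERSION_RANGE)
        elif command == "abduction_up":
            self.abduction = clamp(self.abduction + step, *ABDUCTION_RANGE)
        elif command == "abduction_down":
            self.abduction = clamp(self.abduction - step, *ABDUCTION_RANGE)
        elif command == "reset":
            self.anteversion, self.abduction = 15.0, 40.0

state = CupPlacementState()
state.apply("abduction_up", step=5.0)   # clamped to the 43-degree limit
print(state.abduction)                  # 43.0
```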
A depth camera is adopted to measure the angle of the guide and the acetabular prosthesis in real time, and the data are displayed in HoloLens 2 in real time, which has the advantages of more accurate data and convenient operation.
As shown in fig. 4, the two-dimensional code picture position recognition technology used in the present disclosure is implemented, through analysis and processing of the picture, on the following principles:
feature extraction: vuforia uses Convolutional Neural Network (CNN) to extract feature points from the picture; object detection: after feature extraction, detecting objects in the picture by using R-CNN, fast R-CNN and Fast R-CNN by Vuforia; and (3) position identification: after constructing a depth map, holoLens 2 acquires a real space coordinate system and can identify the position of an object; map matching: after the position identification, an object in the picture is matched with the map by using a SIFT, SURF, ORB algorithm.
As shown in fig. 5, in the AR environment of the HoloLens 2 glasses, a virtual acetabular prosthesis is rendered at the acetabular target location of the hip joint: a virtual insertion guide 3 is presented in the AR environment, fixed at the center of the virtual acetabular prosthesis and held in an absolutely fixed position relative to the model, and the virtual model bears the two-dimensional code mark 21. The medical student introduces the real prosthesis into the model and moves the inserted holder until it fully coincides with the virtual guide in the AR environment, that is, until the two are coaxial.
Fig. 7 is a block diagram illustrating an image generation device of an acetabulum and guide in accordance with some embodiments of the present disclosure. As shown in fig. 7, the image generation device 700 of the acetabulum and guide includes a fixing module 710, a plain scan module 720, a reconstruction module 730, an uploading module 740, a creation module 750, an importing module 760, a scanning module 770, and a rendering module 780.
A fixing module 710 configured to fix the real pelvis model in a standard lateral position and fix a two-dimensional code serving as an identification marker on the upper edge of the acetabulum;
a plain scan module 720 configured to perform a CT plain scan of the real pelvis model and export real pelvis model data using 3D modeling software;
a reconstruction module 730 configured to reconstruct the real pelvis model data using Unity 3D software;
an uploading module 740 configured to upload the two-dimensional code to the Vuforia database, extract marker points and perform spatial positioning;
a creation module 750 configured to create an acetabular prosthesis model and a guide model in the Unity 3D software and calibrate the angle information of the guide and the acetabulum;
an importing module 760 configured to import the two-dimensional code generated by Vuforia into Unity and deploy the resulting APP to HoloLens 2 head-mounted glasses;
a scanning module 770 configured to scan the two-dimensional code in the real environment with the HoloLens 2 head-mounted glasses to obtain spatial data;
a rendering module 780 configured to render a virtual acetabular prosthesis at the acetabular target location of the hip joint.
In the device of the embodiments of the present disclosure, with the generated acetabular image and guide image, medical students can be assisted to implant the acetabular prosthesis into the corresponding position of the model more efficiently and accurately in surgical clinical teaching. The abduction angle and anteversion angle of the guide and the acetabular prosthesis (image) can be flexibly marked, the abduction angle ranging from 38 to 43 degrees and the anteversion angle from 13 to 19 degrees, so that medical students can adjust the abduction and anteversion angles of the acetabular prosthesis (image) as needed; supported by the device, medical students can implant the acetabular prosthesis into the corresponding position of the model flexibly and accurately, which enlarges the application scenarios of the device. The disclosure further involves the use of HoloLens 2 head-mounted glasses: medical students can use hand gestures or voice to control the anteversion and abduction angles of acetabular prosthesis placement (that is, to control the generated acetabular image through gestures or voice), which helps medical students implant the acetabular prosthesis more accurately during surgical clinical teaching. The disclosure achieves qualified teaching training by combining the generated virtual acetabulum and guide image (or hip three-dimensional image) with the real world.
Fig. 8 is a block diagram illustrating an image generation device of an acetabulum and guide in accordance with further embodiments of the present disclosure. As shown in fig. 8, the image generation device 800 of the acetabulum and guide includes a memory 810; and a processor 820 coupled to the memory 810. The memory 810 is used to store instructions for performing corresponding embodiments of the image generation method of the acetabulum and guide. Processor 820 is configured to perform the image generation method of the acetabulum and guide in any of the embodiments of the present disclosure based on instructions stored in memory 810.
FIG. 9 is a block diagram illustrating a computer system for implementing some embodiments of the present disclosure. As shown in FIG. 9, computer system 900 may be embodied in the form of a general purpose computing device. Computer system 900 includes a memory 910, a processor 920, and a bus 930 that couples various system components.
Memory 910 may include, for example, system memory, nonvolatile storage media, and the like. The system memory stores, for example, an operating system, application programs, a boot loader, and other programs. The system memory may include volatile storage media, such as random access memory (RAM) and/or cache memory. The non-volatile storage medium stores, for example, instructions for performing a corresponding embodiment of at least one of the image generation methods of the acetabulum and the guide. Non-volatile storage media include, but are not limited to, disk storage, optical storage, flash memory, and the like.
The processor 920 may be implemented as discrete hardware components such as a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gates or transistors, or the like. Accordingly, each module, such as the fixing module, the plain scan module, the reconstruction module, the uploading module, the creation module, the importing module, the scanning module, and the rendering module, may be implemented by a central processing unit (CPU) executing instructions in a memory that perform the corresponding steps, or may be implemented by a dedicated circuit performing the corresponding steps.
Bus 930 may employ any of a variety of bus architectures. For example, bus structures include, but are not limited to, an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, and a Peripheral Component Interconnect (PCI) bus.
Computer system 900 may also include input/output interfaces 940, network interfaces 950, storage interfaces 960, and the like. These interfaces 940, 950, 960 may be connected between the memory 910 and the processor 920 via a bus 930. The input output interface 940 may provide a connection interface for input output devices such as a display, mouse, keyboard, etc. Network interface 950 provides a connection interface for various networking devices. Storage interface 960 provides a connection interface for external storage devices such as floppy disk, USB flash disk, SD card, etc.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable apparatus to produce a machine, such that the instructions, which execute via the processor, create means for implementing the functions specified in the flowchart and/or block diagram block or blocks.
These computer readable program instructions may also be stored in a computer readable memory that can direct a computer to function in a particular manner, such that the instructions stored in the computer readable memory produce an article of manufacture including instructions which implement the function specified in the flowchart and/or block diagram block or blocks.
The present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects.
Thus far, the image generation method, apparatus, and medium of the acetabulum and guide according to the present disclosure have been described in detail. In order to avoid obscuring the concepts of the present disclosure, some details known in the art are not described. How to implement the solutions disclosed herein will be fully apparent to those skilled in the art from the above description.
Although specific embodiments of the disclosure have been described in detail by way of example, it will be appreciated by those skilled in the art that the above examples are for illustration only and are not intended to limit the scope of the disclosure. It will be appreciated by those skilled in the art that modifications may be made to the above embodiments without departing from the scope and spirit of the disclosure. The scope of the present disclosure is defined by the appended claims.

Claims (10)

1. A method of generating an image of an acetabulum and a guide, the method comprising:
fixing a real pelvis model in a standard lateral position, and fixing a two-dimensional code serving as an identification marker on the upper edge of the acetabulum;
performing a CT plain scan of the real pelvis model, and exporting real pelvis model data using 3D modeling software;
reconstructing the real pelvis model data using Unity 3D software;
uploading the two-dimensional code to a Vuforia database, extracting marker points and performing spatial positioning;
creating an acetabular prosthesis model and a guide model in the Unity 3D software, and calibrating angle information of the guide and the acetabulum;
importing the two-dimensional code generated by the Vuforia into Unity, and deploying the resulting APP to HoloLens 2 head-mounted glasses;
scanning the two-dimensional code in a real environment with the HoloLens 2 head-mounted glasses to obtain spatial data; and
rendering a virtual acetabular prosthesis at an acetabular target location of the hip joint.
2. The image generation method of an acetabulum and guide according to claim 1, wherein uploading the two-dimensional code to the Vuforia database and performing marker point extraction and spatial positioning comprises:
registering with Vuforia and uploading the two-dimensional code;
downloading and exporting the Vuforia recognition package data, and positioning the recognition image correctly relative to the displayed model in Unity;
building the APP onto the HoloLens;
the HoloLens automatically calling Vuforia based on the real two-dimensional code to calculate the position of the recognition image in HoloLens space, displaying the virtual two-dimensional code and the corresponding model, and completing the spatial positioning.
3. The image generation method of an acetabulum and guide according to claim 1, wherein said creating a guide model comprises: in the Unity 3D software, creating a virtual image of the guide model, with the tail end of the guide model placed at the center of the virtual acetabulum.
4. The image generation method of an acetabulum and guide according to claim 3, wherein calibrating the angle information of the guide and the acetabulum comprises: marking the abduction angle and the anteversion angle of the guide and the acetabulum, the abduction angle ranging from 38 degrees to 43 degrees and the anteversion angle ranging from 13 degrees to 19 degrees, so that a medical student can adjust the abduction angle and the anteversion angle of the acetabular prosthesis according to operative requirements.
5. The image generation method of an acetabulum and guide according to claim 1, wherein said rendering a virtual acetabular prosthesis at the acetabular target location of the hip joint comprises:
scanning the real model to obtain scan data;
performing 3D modeling on the scan data;
displaying the resulting FBX three-dimensional model on the Unity platform;
adding the Vuforia two-dimensional code data to the Unity platform, and marking the specific positions of the hip joint model and the two-dimensional code in space and the specific positions of the acetabulum and the hip joint in space.
6. The image generation method of an acetabulum and guide according to claim 1, wherein in said HoloLens 2 head-mounted glasses, the anteversion angle and abduction angle of artificial acetabular prosthesis placement are controlled by hand gestures or voice; rotation of the virtual pelvis model in different orientations is controlled; and a system reset is controlled.
7. The method of generating an image of an acetabulum and guide according to claim 6, wherein the control range of the anteversion angle is 0 degrees to 19 degrees and the control range of the abduction angle is 0 degrees to 43 degrees.
8. An image generation device for an acetabulum and a guide, comprising:
a fixing module, configured to fix the real pelvis model in a standard lateral position and fix a two-dimensional code serving as an identification marker on the upper edge of the acetabulum;
a plain scan module, configured to perform a CT plain scan of the real pelvis model and export real pelvis model data using 3D modeling software;
a reconstruction module, configured to reconstruct the real pelvis model data using Unity 3D software;
an uploading module, configured to upload the two-dimensional code to the Vuforia database, extract marker points and perform spatial positioning;
a creation module, configured to create an acetabular prosthesis model and a guide model in the Unity 3D software and calibrate angle information of the guide and the acetabulum;
an importing module, configured to import the two-dimensional code generated by the Vuforia into Unity and deploy the resulting APP to HoloLens 2 head-mounted glasses;
a scanning module, configured to scan the two-dimensional code in the real environment with the HoloLens 2 head-mounted glasses to obtain spatial data; and
a rendering module, configured to render a virtual acetabular prosthesis at the acetabular target location of the hip joint.
9. An image generation device of an acetabulum and guide, comprising a memory and a processor coupled to the memory, the processor configured to perform the image generation method of an acetabulum and guide of any one of claims 1 to 7 based on instructions stored in the memory.
10. A computer-readable storage medium, having stored thereon computer program instructions which, when executed by a processor, implement the image generation method of an acetabulum and guide of any one of claims 1 to 7.
CN202410239816.3A 2024-03-04 2024-03-04 Image generation method, device and medium for acetabulum and guide Pending CN117853665A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410239816.3A CN117853665A (en) 2024-03-04 2024-03-04 Image generation method, device and medium for acetabulum and guide

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410239816.3A CN117853665A (en) 2024-03-04 2024-03-04 Image generation method, device and medium for acetabulum and guide

Publications (1)

Publication Number Publication Date
CN117853665A 2024-04-09

Family

ID=90540168

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410239816.3A Pending CN117853665A (en) 2024-03-04 2024-03-04 Image generation method, device and medium for acetabulum and guide

Country Status (1)

Country Link
CN (1) CN117853665A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113689577A (en) * 2021-09-03 2021-11-23 上海涞秋医疗科技有限责任公司 Method, system, device and medium for matching virtual three-dimensional model and entity model
CN114224508A (en) * 2021-11-12 2022-03-25 苏州微创畅行机器人有限公司 Medical image processing method, system, computer device and storage medium
CN114259330A (en) * 2022-03-01 2022-04-01 北京壹点灵动科技有限公司 Measuring method, device and measuring system for angle of acetabular cup prosthesis
CN114943802A (en) * 2022-05-13 2022-08-26 南开大学深圳研究院 Knowledge-guided surgical operation interaction method based on deep learning and augmented reality
CN115844531A (en) * 2023-02-22 2023-03-28 北京壹点灵动科技有限公司 Hip replacement surgery navigation system
CN117075769A (en) * 2023-08-14 2023-11-17 张仲元 Two-dimensional code display method based on augmented reality, computer device and computer readable storage medium
CN117618168A (en) * 2024-01-25 2024-03-01 北京壹点灵动科技有限公司 Method and device for determining implantation angle of acetabular cup prosthesis and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Sun Peng (孙鹏): "Experimental Study of Augmented Reality Technology in Simulated Total Hip Arthroplasty", China Master's Theses Full-text Database, Medicine and Health Sciences, no. 02, 15 February 2024 (2024-02-15), pages 2-32 *

Similar Documents

Publication Publication Date Title
Wang et al. Video see‐through augmented reality for oral and maxillofacial surgery
Wang et al. A practical marker-less image registration method for augmented reality oral and maxillofacial surgery
US20230072188A1 (en) Calibration for Augmented Reality
CN111529063B (en) Operation navigation system and method based on three-dimensional reconstruction multi-mode fusion
JP2966089B2 (en) Interactive device for local surgery inside heterogeneous tissue
CN106691600A (en) Spine pedicle screw implanting and locating device
Jiang et al. Registration technology of augmented reality in oral medicine: A review
CN109700550A (en) A kind of augmented reality method and device for dental operation
WO2021048158A1 (en) Method for controlling a display, computer program and mixed reality display device
CN110751681B (en) Augmented reality registration method, device, equipment and storage medium
CN108366778B (en) Multi-view, multi-source registration of mobile anatomy and device
US20230114385A1 (en) Mri-based augmented reality assisted real-time surgery simulation and navigation
CN115153835A (en) Acetabular prosthesis placement guide system and method based on feature point registration and augmented reality
US20230074630A1 (en) Surgical systems and methods for positioning objects using augmented reality navigation
CN107752979A (en) Automatically generated to what is manually projected
CN113842227B (en) Medical auxiliary three-dimensional model positioning and matching method, system, equipment and medium
Scherfgen et al. Estimating the pose of a medical manikin for haptic augmentation of a virtual patient in mixed reality training
Li et al. A vision-based navigation system with markerless image registration and position-sensing localization for oral and maxillofacial surgery
Wang et al. Real-time marker-free patient registration and image-based navigation using stereovision for dental surgery
CN110478042B (en) Interventional operation navigation device based on artificial intelligence technology
KR20210150633A (en) System and method for measuring angle and depth of implant surgical instrument
CN117853665A (en) Image generation method, device and medium for acetabulum and guide
US12023208B2 (en) Method for operating a visualization system in a surgical application, and visualization system for a surgical application
Shi et al. Augmented reality for oral and maxillofacial surgery: The feasibility of a marker‐free registration method
EP3637374A1 (en) Method and system for visualising a spatial surface curvature of a 3d-object, computer program product, and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination