CN114224508A - Medical image processing method, system, computer device and storage medium - Google Patents


Info

Publication number
CN114224508A
Authority
CN
China
Prior art keywords
medical image
image
real
operation information
target virtual
Prior art date
Legal status
Pending
Application number
CN202111338635.9A
Other languages
Chinese (zh)
Inventor
周安稳
刘赫
刘鹏飞
Current Assignee
Suzhou Xiaowei Changxing Robot Co ltd
Original Assignee
Suzhou Xiaowei Changxing Robot Co ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Xiaowei Changxing Robot Co ltd filed Critical Suzhou Xiaowei Changxing Robot Co ltd
Priority to CN202111338635.9A
Publication of CN114224508A
Legal status: Pending

Classifications

    • A61B 90/36: Image-producing devices or illumination devices not otherwise provided for
    • A61B 34/20: Surgical navigation systems; devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B 34/25: User interfaces for surgical systems
    • A61B 34/30: Surgical robots
    • A61B 34/70: Manipulators specially adapted for use in surgery
    • G06T 7/0014: Biomedical image inspection using an image reference approach
    • A61B 2034/2065: Tracking techniques using image or pattern recognition
    • A61B 2034/2068: Tracking using pointers, e.g. pointers having reference marks for determining coordinates of body points
    • A61B 2090/365: Correlation of a live optical image with another image (augmented reality)
    • G06T 2207/10081: Computed x-ray tomography [CT]

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Surgery (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Biomedical Technology (AREA)
  • Veterinary Medicine (AREA)
  • Robotics (AREA)
  • Pathology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application relates to a medical image processing method, system, computer device and storage medium. The method comprises the following steps: acquiring a real medical image captured by a real image acquisition device, the real medical image including a first marker; acquiring a target virtual medical image that corresponds to the real medical image and contains operation information, together with the position relationship between the target virtual medical image and the operation information, the target virtual medical image likewise including the first marker; calculating the matching relationship between the target virtual medical image and the real medical image from the first marker, and fusing the two images to obtain a current medical image; and fusing the operation information into the current medical image according to the matching relationship between the target virtual medical image and the real medical image and the position relationship between the target virtual medical image and the operation information. With this method, the operation information can be fused onto the real space to be operated on, giving the doctor data prompts for a global grasp of the operation.

Description

Medical image processing method, system, computer device and storage medium
Technical Field
The present application relates to the field of artificial intelligence, and in particular to a medical image processing method, system, computer device and storage medium.
Background
With the development of computer technology, surgical navigation systems have emerged, in keeping with the trend toward precision surgery. By analyzing medical images of the patient and applying various sensors during the operation, a surgical navigation system provides richer reference information and more accurate guidance for the operation, and has become a powerful tool for assisting doctors in completing surgery.
In the conventional approach, data such as the patient's CT scan are acquired before the operation, a preoperative surgical plan is completed in the computer of the surgical navigation system, and the doctor then performs the operation according to that preoperative plan.
However, with current surgical navigation systems doctors cannot view the surgical data against the real scene and can only consult the preoperative surgical plan, so such systems are not sufficiently intelligent.
Disclosure of Invention
In view of the above, it is necessary to provide a medical image processing method, system, computer device and storage medium that can fuse the operation information onto the real space to be operated on and provide the doctor with data prompts for a global grasp of the operation.
A method of medical image processing, the method comprising:
acquiring a real medical image acquired by a real image acquisition device, the real medical image including a first marker;
acquiring a target virtual medical image that corresponds to the real medical image and contains operation information, together with a position relationship between the target virtual medical image and the operation information, wherein the target virtual medical image comprises the first marker;
calculating the matching relation between the target virtual medical image and the real medical image according to the first marker, and fusing the target virtual medical image and the real medical image to obtain a current medical image;
and fusing the operation information in the current medical image according to the matching relationship between the target virtual medical image and the real medical image and the position relationship between the target virtual medical image and the operation information.
In one embodiment, the calculating a matching relationship between the target virtual medical image and the real medical image according to the first marker, and fusing the target virtual medical image and the real medical image to obtain a current medical image includes:
calculating to obtain corresponding matching features in the target virtual medical image and the real medical image according to the first marker;
and calculating the matching relation between the target virtual medical image and the real medical image according to the matching characteristics, and fusing the target virtual medical image and the real medical image to obtain the current medical image.
In one embodiment, before the acquiring the target virtual medical image corresponding to the real medical image, the method further includes:
acquiring an initial medical image obtained by scanning of medical imaging equipment;
carrying out three-dimensional reconstruction on the initial medical image to obtain a target virtual medical image;
receiving an operation information configuration instruction aiming at the target virtual medical image, configuring operation information on the target virtual medical image according to the operation information configuration instruction, and acquiring the position relation between the target virtual medical image and the operation information.
In one embodiment, the method further comprises:
acquiring a matching relation between a visual space coordinate system of augmented reality equipment and the real space coordinate system; acquiring a reference image which is acquired by the augmented reality equipment and is to be displayed to a visual space;
and matching and fusing the current medical image carrying the operation information and the reference image to obtain a mixed reality image according to the matching relationship between the visual space coordinate system of the augmented reality device and the real space coordinate system and the matching relationship between the target virtual medical image and the real medical image, and displaying the mixed reality image into the visual space of the augmented reality device.
In one embodiment, the obtaining of the matching relationship between the visual space coordinate system of the augmented reality device and the real space coordinate system includes:
acquiring a first conversion matrix of a real space coordinate system and an image space coordinate system of augmented reality equipment;
acquiring a second transformation matrix of an image space coordinate system of the augmented reality equipment and a visual space coordinate system of the augmented reality equipment;
and obtaining the matching relation between the visual space coordinate system and the real space coordinate system of the augmented reality equipment according to the first conversion matrix and the second conversion matrix.
In one embodiment, the obtaining a first transformation matrix of the real space coordinate system and the image space coordinate system of the augmented reality device includes:
acquiring a first reference object image of a first calibration reference object in an image space through augmented reality equipment, establishing an image coordinate system by taking a target point of the first reference object image as an origin, and determining a preset number of reference points from the first reference object image based on the image coordinate system;
acquiring a second reference object image of a first calibration reference object acquired by real image acquisition equipment under an image coordinate system of the real image acquisition equipment, and determining a mapping point corresponding to the reference point from the second reference object image;
converting the mapping points into a real space coordinate system;
and calculating the coordinates of the mapping points under the real space coordinate system based on the reference points to obtain a first conversion matrix of the real space coordinate system and the image space coordinate system of the augmented reality equipment.
In one embodiment, the displaying the mixed reality image to the visual space of the augmented reality device further comprises:
acquiring a display angle of the mixed reality image;
calculating to obtain a reference angle based on the display angle, and acquiring a target virtual medical image under the reference angle;
and displaying the target virtual medical image and the mixed reality image under the reference angle in the visual space.
In one embodiment, after displaying the mixed reality image to the visual space of the augmented reality device, the method further includes:
receiving, through the augmented reality device, an editing instruction for the operation information; and editing, through the augmented reality device, the operation information based on the editing instruction.
In one embodiment, the receiving, by the augmented reality device, of an editing instruction for the operation information includes:
recognizing the gesture of an operator through the augmented reality device, and displaying an operation information editing instruction receiving panel when the gesture of the operator meets a preset requirement;
receiving an editing instruction for the operation information through the editing instruction receiving panel.
In one embodiment, the editing the operation information based on the editing instruction includes:
and inquiring the operation information under the corresponding angle according to the editing instruction, and displaying the inquired operation information in the visual space.
In one embodiment, the editing the operation information based on the editing instruction includes:
performing at least one of translation, rotation and type and size conversion on the operation information according to the editing instruction;
and displaying the edited operation information in the visual space.
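As an illustration of how such an editing instruction might be applied, the following minimal sketch (in Python, with illustrative names only; it is not part of the patent) expresses a translation or rotation edit as an incremental 4x4 homogeneous transform multiplied onto the stored pose of the operation information:

    import numpy as np

    def rotation_z(deg: float) -> np.ndarray:
        # Incremental rotation about the z-axis as a 4x4 homogeneous transform.
        a = np.deg2rad(deg)
        T = np.eye(4)
        T[:2, :2] = [[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]]
        return T

    def translation(dx: float, dy: float, dz: float) -> np.ndarray:
        # Incremental translation as a 4x4 homogeneous transform.
        T = np.eye(4)
        T[:3, 3] = [dx, dy, dz]
        return T

    def apply_edit(pose: np.ndarray, edit: np.ndarray) -> np.ndarray:
        # Left-multiplying applies the edit in the frame the pose is expressed in.
        return edit @ pose

    pose = np.eye(4)                                  # current pose of the operation information
    pose = apply_edit(pose, translation(2.0, 0, 0))   # translate by 2 mm along x
    pose = apply_edit(pose, rotation_z(5.0))          # rotate by 5 degrees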
In one embodiment, the editing the operation information based on the editing instruction includes:
capturing a screenshot of the mixed reality image according to the editing instruction to obtain a captured image;
and performing an editing operation on the operation information in the captured image, wherein the editing operation comprises translation and/or rotation.
In one embodiment, after the editing operation is performed on the operation information in the captured image, the method further includes:
and acquiring the edited operation information, and displaying the edited operation information in the visual space.
In one embodiment, the method further comprises:
and generating a mechanical arm control command according to the operation information, and sending the mechanical arm control command to a mechanical arm, wherein the mechanical arm control command is used for indicating the mechanical arm to operate according to the operation information.
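A hedged sketch of what such a control command could look like is given below; the message format and the send() call are assumptions for illustration, not something specified by the patent:

    import json
    import numpy as np

    def build_arm_command(plane_pose_real: np.ndarray) -> str:
        # Package the osteotomy-plane pose (already expressed in the real space
        # coordinate system) into a serializable command for the arm controller.
        return json.dumps({
            "command": "move_to_osteotomy_plane",
            "pose": plane_pose_real.tolist(),  # 4x4 homogeneous matrix, row-major
        })

    cmd = build_arm_command(np.eye(4))
    # controller.send(cmd)  # the transport layer is not specified in the source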
A medical image processing system, the system comprising a real image acquisition device and an image processing device;
the real image acquisition equipment is used for acquiring a real medical image and sending the acquired real medical image to the image processing equipment, wherein the real medical image comprises a first marker;
the image processing device is used for implementing the medical image processing method described above.
In one embodiment, the system further includes at least one display, the display is connected to the image processing device, and the display is configured to display a current medical image processed by the image processing device, where the current medical image carries operation information.
In one embodiment, the system further comprises at least one medical imaging device, the medical imaging device is connected with the image processing device, and the medical imaging device is used for scanning to obtain an initial medical image and sending the scanned initial medical image to the image processing device.
In one embodiment, the system further includes an augmented reality device, the augmented reality device being in communication with the image processing device, the augmented reality device being configured to display a mixed reality image processed by the image processing device.
In one embodiment, the augmented reality device is further configured to receive an editing instruction for the operation information, and edit the operation information based on the editing instruction.
In one embodiment, the augmented reality device comprises a gesture recognition module, an interaction control module and a display module, wherein the gesture recognition module is used for recognizing the gesture of an operator and controlling the display module to display an operation information editing instruction receiving panel when the gesture of the operator meets a preset requirement; the interactive control module is used for carrying out conversion calculation on the editing instruction received by the operation information editing instruction receiving panel and displaying the operation information after conversion calculation on the display module.
In one embodiment, the system further comprises a robot arm, wherein the controller of the robot arm is in communication with the image processing device, and the robot arm is used for receiving a robot arm control command generated by the image processing device according to the operation information and operating according to the operation information based on the robot arm control command.
A surgical system comprising an operating trolley, a navigation trolley, and an operating table;
a real image acquisition device is installed on the navigation trolley; an image processing device is arranged on the operating trolley and/or the navigation trolley; the real image acquisition device is used for acquiring a real medical image of a patient on the operating table and sending the acquired real medical image to the image processing device, wherein the real medical image comprises a first marker;
the image processing device is used for realizing the medical image processing method in any one of the above embodiments.
A computer device comprising a memory storing a computer program and a processor implementing the steps of the method described above when executing the computer program.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method.
According to the medical image processing method, system, computer device and storage medium above, the real medical image and the corresponding target virtual medical image are first acquired, and the matching relationship between them is calculated from the first marker present in both images. The two images can then be fused according to the matching relationship to obtain the current medical image, and the operation information is likewise fused and displayed in the current medical image. The operation information can therefore be fused onto the real space to be operated on, providing the doctor with data prompts for a global grasp of the operation.
Drawings
FIG. 1 is a schematic diagram of a medical image processing system in one embodiment;
FIG. 2 is a schematic diagram of a medical image processing system in yet another embodiment;
FIG. 3 is a diagram illustrating the relationship of various components of an image processing system, in one embodiment;
FIG. 4 is a flow diagram of a method of medical image processing in one embodiment;
FIG. 5 is a flow chart illustrating a medical image processing method according to another embodiment;
FIG. 6 is a diagram illustrating the spatial coordinate transformation relationships of the various parts in one embodiment;
FIG. 7 is a schematic diagram of a mixed reality image in one embodiment;
FIG. 8 is a diagram illustrating a relationship between a real space coordinate system and an image space coordinate system of an augmented reality device, in one embodiment;
fig. 9 is a schematic view of a display of a visual space of an augmented reality device in another embodiment;
FIG. 10 is a diagram illustrating an operational information adjustment flow, in one embodiment;
fig. 11 is a schematic view of a display of a visual space of an augmented reality device in a further embodiment;
FIG. 12 is a functional diagram of an interaction control module of the augmented reality device in one embodiment;
fig. 13 is a functional diagram of an interaction control module of an augmented reality device in yet another embodiment;
FIG. 14 is a functional diagram of an interaction control module of an augmented reality device in a further embodiment;
FIG. 15 is a diagram showing an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In one embodiment of the present application, as illustrated in connection with fig. 1, a medical image processing system is provided. The medical image processing system comprises at least an image processing device and a real image acquisition device, the two being in communication. Referring to fig. 1, fig. 1 is a schematic diagram of a medical image processing system in an embodiment in which the real image acquisition device is an optical tracking module 6. The image processing device here may be understood as a processor, and the processor is not limited to the processor in the navigation trolley but may also include the processor of an augmented reality device. In a preferred embodiment, the image processing system may include at least two processors that communicate with each other to complete different functions. For convenience, any processor performing the processing functions is referred to herein collectively as the image processing device.
The medical image processing system thus comprises an image processing device and the optical tracking module 6, where the optical tracking module 6 is mounted on a navigation trolley 9 and the image processing device may be as defined above, which is not specifically limited here. During surgery the patient 17 lies on an operating table 16, and the site to be operated on is marked by first markers; fig. 1 takes the lower limb bones, including the tibia 14 and the femur 12, as an example, where the first markers include the tibia marker 13 and the femur marker 11. A real medical image including the first markers can be acquired by the optical tracking module 6, so that the site to be operated on can be located. The image processing device acquires a target virtual medical image corresponding to the real medical image and the position relationship between the target virtual medical image and the operation information; the target virtual medical image also comprises the first markers, so the real medical image and the target virtual medical image can be matched and fused according to the first markers to obtain the current medical image. Then, according to the matching relationship between the target virtual medical image and the real medical image and the position relationship between the target virtual medical image and the operation information, the operation information can be displayed in the current medical image so as to fuse the operation information onto the real site to be operated on in real time.
In one embodiment, the medical image processing system further comprises a display, such as the main display 8 and the auxiliary display 7 mounted on the navigation trolley 9 in fig. 1, so that the image processing apparatus can send the current medical image to the display for displaying, wherein it is to be noted that the current medical image displayed on the display carries the operation information. The navigation trolley 9 can also be provided with a keyboard 10 and the like, and the operation information in the current medical image in the display can be adjusted through the operation of the keyboard 10 and the like.
In one embodiment, the medical image processing system further comprises a mechanical arm 2. The mechanical arm 2 is fixed on the operating trolley 1, the operating trolley 1 carries a base target 15, and the front end of the mechanical arm 2 is provided with an oscillating saw 5, an osteotomy guide tool 4 and the like, where the osteotomy guide tool 4 is further marked by a tool marker 3 so that the optical tracking module 6 can acquire the position of the tool in real time, and the position and operation of the tool can then be adjusted according to the operation information.
It should be noted that the target virtual medical image is obtained by preoperative planning, for example, a medical imaging device scans a target region preoperatively to obtain an initial medical image, performs three-dimensional reconstruction according to the initial medical image to obtain a target virtual medical image, and receives an operation information configuration instruction for the target virtual medical image, so as to configure and obtain operation information on the target virtual medical image.
In one embodiment, as shown in fig. 2, an augmented reality device is introduced so that the chief surgeon 21 can view the current medical image in real time without touching the computer. Furthermore, the interactive operation of the augmented reality device allows the chief surgeon 21 to adjust the surgical plan optimally, and more quickly and accurately, based on the real surgical scene while still not contacting the computer.
The augmented reality equipment is communicated with the image processing equipment, so that the current medical image and the reference image acquired by the augmented reality equipment can be matched and fused to obtain a mixed reality image, and the mixed reality image is displayed in the augmented reality equipment. Preferably, the augmented reality device may further receive an editing instruction for the operation information, and edit the operation information based on the editing instruction.
The augmented reality device comprises a gesture recognition module, an interaction control module and a display module, wherein the gesture recognition module is used for recognizing the gesture of an operator and controlling the display module to display an operation information editing instruction receiving panel when the gesture of the operator meets a preset requirement; the interactive control module is used for carrying out conversion calculation on the editing instruction received by the operation information editing instruction receiving panel and displaying the operation information after the conversion calculation on the display module.
Specifically, referring to fig. 3, fig. 3 is a relationship diagram of each part of the image processing system in an embodiment, where first, the preoperative planning module acquires an initial medical image of a patient through a medical imaging device, and obtains a target virtual medical image after reconstruction, and then performs configuration of operation information, i.e., an operation plan, on the target virtual medical image, and a spatial coordinate corresponding to the target virtual medical image is referred to as a virtual operation spatial coordinate. The optical tracking module 6 can then obtain the position information of the first marker in the real surgical scene, so that the real surgical space coordinates can be provided. The AR glasses 20 may then capture and display images of the actual surgical scene in its visual space so that it can provide visual space coordinates. The space matching module matches the virtual operation space coordinate with the real operation space coordinate according to the first marker, matches the real operation space coordinate with the visual space coordinate, and then completes matching of the virtual operation space coordinate with the visual space coordinate, so that operation information configured on the target virtual medical image can be displayed in the visual space.
Further, with reference to fig. 3, the augmented reality device, e.g. the AR glasses 20, may include a gesture recognition module, an interaction control module and a display module. The gesture recognition module is configured to recognize the gesture of an operator, so that an interaction control command panel button or an interactively controlled virtual cuboid frame can be triggered according to the operator's gesture to issue an operation information adjustment instruction. The interaction control module handles the coordinate system conversion calculations required for interactive control and provides the adjusted conversion relationship to the display module for display. The display module fuses and displays the operation information onto the intra-articular area of the patient 17, provides suggestions for adjusting the operation information, and displays the interaction control command panel and/or the interactively controlled virtual cuboid frame and the like for adjusting the operation information.
Each functional module in fig. 3 may be completed by processors of different entities, or may be completed in one processor, which is not limited herein.
In one embodiment, as shown in fig. 4, a medical image processing method is provided, which is exemplified by the application of the method to the image processing apparatus in fig. 1, and includes the following steps:
s402: a real medical image acquired by a real image acquisition device is acquired, the real medical image including a first marker.
Specifically, the real image capturing device, i.e., the optical tracking module 6 in fig. 1, can acquire the positions of the respective first markers in the real space. The first markers are used for marking the part to be operated on and/or the surgical device in real space, and for example, in fig. 1, the first markers include a tibia marker 13 and a femur marker 11, which can record the position of the bone of the patient 17 in real space, so that the real image acquiring device can acquire a real space coordinate system by acquiring a real medical image including the first markers. Specifically, prior to surgery, with reference to fig. 1, the surgical trolley 1 and navigation trolley 9 are placed in position alongside the patient bed, and femoral markers 11, tibial markers 13, base target 15, sterile bags, osteotomy guide tools 4, tool markers 3, etc. are installed to provide a surgical environment.
The real medical image includes real information of the real prosthesis, such as real-time force line, prosthesis positioning effect, force line angle, flexion angle display, intraoperative osteotomy gap and the like.
S404: acquiring a target virtual medical image which corresponds to the real medical image and contains operation information, and a position relation between the target virtual medical image and the operation information, wherein the target virtual medical image comprises a first marker.
Specifically, the target virtual medical image is acquired before the operation: for example, an initial medical image of the target site of the patient is acquired by a medical imaging device and reconstructed in three dimensions to obtain the target virtual medical image, so that the doctor can configure corresponding operation information on the target virtual medical image to guide the operation. The three-dimensional reconstruction may be performed by first segmenting the initial medical image to extract the target organ, tissue or bone and then reconstructing the segmented result in three dimensions. For the subsequent coordinate space matching, the corresponding first markers are installed at the corresponding feature points of the target site before the initial medical image is acquired, so that the reconstructed target virtual medical image comprises the first markers.
Here the operation information is the preoperatively planned surgical plan, including anatomical landmark marks and prosthesis information. The prosthesis information includes the model and installation position of the prosthesis, and the anatomical landmark marks are used to guide the operation track of the surgical equipment, such as the motion track of the mechanical arm 2 and the coordinates of the osteotomy plane. After the target virtual medical image is generated, an operation information configuration instruction from the doctor is received, so that the operation information is configured on the target virtual medical image, and the position relationship between the target virtual medical image and the operation information is established according to the position, size and the like of the configured operation information. In practical application, the doctor calls up the prosthesis icon of the corresponding model through the terminal, moves the prosthesis icon to the corresponding position on the target virtual medical image, adjusts the size of the prosthesis icon as required, and thereby generates the operation information from the size and position of the prosthesis icon. Furthermore, the doctor can configure the anatomical landmark points, i.e. select the corresponding anatomical points on the target virtual medical image; in other embodiments the anatomical points in the target virtual medical image can be identified automatically by means of a neural network to improve efficiency.
After the operation information planning before the operation is finished, the target virtual medical image and the operation information are imported into an image processing device in the operation for use in the operation.
S406: and calculating the matching relation between the target virtual medical image and the real medical image according to the first marker, and fusing the target virtual medical image and the real medical image to obtain the current medical image.
In particular, the first marker is present in both the target virtual medical image and the real medical image and is fixed in position relative to the target object, e.g. a bone, so that movement of the target object does not affect the surgical result. Taking fig. 1 as an example, the actual orientations of the femur 12 and the tibia 14 are linked to the corresponding first markers mounted on them, so that the femur marker 11 and the tibia marker 13 can track the actual position of the bone in real time.
Therefore, from the coordinate positions of the corresponding first markers in the target virtual medical image and the real medical image, a spatial conversion matrix can be established to calculate the matching relationship between the target virtual medical image and the real medical image; the target virtual medical image can then be mapped into the real medical image, i.e. into real space, according to the matching relationship to obtain the current medical image. The current medical image thus includes both the information in real space and the information from the preoperative planning.
When the matching relationship is calculated, the optical tracking module 6 acquires the first marker to acquire the position of the feature point of a target object, such as a bone, and then sends the position of the feature point to the image processing device, and finally, the corresponding relationship between the target object in the real space and the target object in the target virtual medical image, namely the matching relationship between the target virtual medical image and the real medical image, is calculated through the feature matching algorithm of the image processing device.
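The patent does not disclose the feature matching algorithm itself; a minimal sketch of one standard choice is shown below, assuming the same marker feature points are known in both spaces: the SVD-based Kabsch method estimates the rigid transform (rotation R, translation t) from the virtual-image coordinates to real space. All names and point values are illustrative:

    import numpy as np

    def estimate_rigid_transform(virtual_pts: np.ndarray, real_pts: np.ndarray):
        # Return (R, t) such that real_pts[i] ≈ R @ virtual_pts[i] + t; inputs are Nx3.
        cv, cr = virtual_pts.mean(axis=0), real_pts.mean(axis=0)
        H = (virtual_pts - cv).T @ (real_pts - cr)  # 3x3 cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = cr - R @ cv
        return R, t

    # Marker feature points in the virtual (reconstructed) space, and the same points
    # as tracked in real space after a known rotation and offset (made-up numbers).
    virtual = np.array([[0, 0, 0], [50, 0, 0], [0, 80, 0], [0, 0, 120]], dtype=float)
    a = np.deg2rad(30)
    Rz = np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])
    real = virtual @ Rz.T + np.array([10.0, -5.0, 3.0])
    R, t = estimate_rigid_transform(virtual, real)  # recovers Rz and the offset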
S408: and fusing the operation information in the current medical image according to the matching relationship between the target virtual medical image and the real medical image and the position relationship between the target virtual medical image and the operation information.
Specifically, the image processing device converts the operation information into the coordinate system of the target virtual medical image according to the position relationship between the operation information and the target virtual medical image, then converts the operation information from the coordinate system of the target virtual medical image into the coordinate system of the real medical image according to the matching relationship between the target virtual medical image and the real medical image, and finally displays the operation information in the current medical image obtained by fusing the target virtual medical image and the real medical image. The operation information is thereby fused into the real space to be operated on, providing the doctor with data prompts for a global grasp of the operation.
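Under the representations sketched earlier, this fusion step reduces to a product of two homogeneous transforms; the following one-liner is a minimal illustration under those assumptions (C_T_R is the matching relationship from the virtual image frame to real space, and pose_in_virtual is the recorded position relationship of the operation information):

    import numpy as np

    def fuse_operation_info(pose_in_virtual: np.ndarray, C_T_R: np.ndarray) -> np.ndarray:
        # 4x4 pose of the operation information expressed in the real space frame.
        return C_T_R @ pose_in_virtual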
According to the medical image processing method above, the real medical image and the corresponding target virtual medical image are first acquired, and the matching relationship between them is calculated from the first marker present in both images. The two images can then be fused according to the matching relationship to obtain the current medical image, in which the operation information is also fused and displayed. The operation information is thus fused onto the real space to be operated on, providing the doctor with data prompts for a global grasp of the operation.
In one embodiment, calculating a matching relationship between the target virtual medical image and the real medical image according to the first marker, and fusing the target virtual medical image and the real medical image to obtain the current medical image includes: calculating to obtain corresponding matching features in the target virtual medical image and the real medical image according to the first marker; and calculating the matching relation between the target virtual medical image and the real medical image according to the matching characteristics, and fusing the target virtual medical image and the real medical image to obtain the current medical image.
Specifically, the matching feature refers to a feature point of a target object of the target virtual medical image corresponding to the real medical image, the feature point being marked by the first marker, for example, by installing the first marker at the position of the target object before the real medical image is acquired, so that the optical tracking module 6 can track the position of the first marker. Before the target virtual medical image is obtained, a first marker is also installed at a corresponding position of the target object, so that the first marker exists in the acquired initial medical image, the first marker also exists in the reconstructed target virtual medical image, the matching features in the target virtual medical image and the real medical image are determined according to the corresponding first marker, and a spatial transformation matrix can be constructed according to the matching features to obtain the matching relation between the target virtual medical image and the real medical image. Therefore, the target virtual medical image and the real medical image can be fused according to the obtained matching relation to obtain the current medical image.
In the above embodiment, the matching between the real space and the virtual space is realized through the first marker, so that a foundation is laid for displaying a surgical plan, that is, operation information, in the real space in the following process.
In one embodiment, before acquiring the target virtual medical image corresponding to the real medical image, the method further includes: acquiring an initial medical image obtained by scanning of medical imaging equipment; carrying out three-dimensional reconstruction on the initial medical image to obtain a target virtual medical image; receiving an operation information configuration instruction aiming at the target virtual medical image, configuring operation information on the target virtual medical image according to the operation information configuration instruction, and acquiring the position relation between the target virtual medical image and the operation information.
Specifically, the initial medical image is obtained by scanning the target object with a medical imaging device, for example a CT scanner or other medical imaging equipment. Taking the lower limb bones in fig. 1 as an example, the initial medical image is obtained by scanning the lower limb bones before the operation, the target object is segmented from the initial medical image, and the target virtual medical image is then obtained by three-dimensional reconstruction of the segmented target object.
The physician can perform the configuration of the anatomical landmark points and the prosthesis information on the target virtual medical image, so that the operation information can be generated. Taking the limb bone as an example, the doctor can configure all anatomical feature points (such as the rotation center of the femur 12, the knee joint center, the talus center, etc.), all corresponding connecting lines (such as a force line, a through condyle line, etc.), relevant included angles (such as a force line angle, a medial-lateral rotation angle, etc.), osteotomy mark points (such as a distal resection point of the medial-lateral condyle of the femur 12, a medial-lateral resection point of the posterior condyle, etc.), osteotomy thicknesses (such as a distal resection thickness of the medial-lateral condyle of the femur 12, a proximal resection thickness of the tibia 14, etc.), etc. on the target virtual medical image, and can also configure the prosthesis positioning, etc. on the target virtual medical image, wherein the operation information configured by the doctor is not limited, and only the operation needs are taken as targets.
After the configuration of the operation information is completed, the position relationship between the operation information and the target virtual medical image may be recorded, for example, in a virtual coordinate system of the target virtual medical image, the coordinates of the corresponding operation information are acquired.
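One plausible way to record this position relationship, sketched below under assumed (not patent-specified) data structures, is to store each configured item with a 4x4 homogeneous pose expressed in the coordinate system of the target virtual medical image:

    from dataclasses import dataclass
    import numpy as np

    def make_pose(R: np.ndarray, t: np.ndarray) -> np.ndarray:
        # Build a 4x4 homogeneous pose from a rotation matrix and a translation.
        T = np.eye(4)
        T[:3, :3], T[:3, 3] = R, t
        return T

    @dataclass
    class OperationInfo:
        name: str                    # e.g. "femoral prosthesis" or "distal osteotomy plane"
        model: str                   # prosthesis model identifier, if applicable
        pose_in_virtual: np.ndarray  # 4x4 pose in the target virtual image frame
        size_mm: tuple = ()          # optional size parameters

    plan = [
        OperationInfo("femoral prosthesis", "type-3",
                      make_pose(np.eye(3), np.array([12.0, 4.5, -30.0])), (60, 40)),
    ]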
In the above embodiment, the medical image of the target object is acquired before the operation, and the corresponding operation information is configured, so that a foundation is laid for subsequently displaying the operation information in a real space.
Specifically, referring to fig. 5, fig. 5 is a flowchart illustrating a medical image processing method in another embodiment. Before the surgery, CT image data of the target object of the patient are obtained, bone segmentation is performed on the CT image data, three-dimensional bone reconstruction is performed based on the segmentation result, anatomical landmark points are marked on the reconstructed CT image, and the prosthesis is placed according to the anatomical landmark points, where the anatomical landmark points and the prosthesis placement constitute the operation information described above. The reconstructed CT image corresponds to the virtual operation space.
Before the operation, the trolley of the mechanical arm 2 and the navigation trolley 9 are placed in suitable positions beside the patient bed, and the first markers, i.e. the optical markers, are installed. The positions of the first markers on the bones of the patient 17 are collected through the optical tracking module 6 to obtain the position data of the feature points on the bones, and the real operation space and the virtual operation space are matched through a feature matching algorithm so that the operation information can be displayed in the real operation space. The operation information can also be adjusted intraoperatively against the real operation space, and the doctor adjusts and confirms the preoperative surgical plan based on the actual intraoperative situation. The osteotomy plane coordinates in the operation information are sent to the mechanical arm 2, which automatically positions itself at the corresponding place to assist the doctor in cutting bone: the doctor uses the oscillating saw 5 or an electric drill to cut bone and drill holes through the osteotomy guide groove and guide holes of the osteotomy guide tool 4. If the osteotomy is complete, the prosthesis is installed; otherwise the adjustment of the operation information is repeated until a complete osteotomy is achieved, after which the doctor installs the prosthesis and performs the remaining surgical operations.
In one embodiment, in order to enable the chief surgeon 21 to view the current medical image in real time without contacting a computer, an augmented reality device is introduced, and the image processing method further includes: acquiring the matching relationship between the visual space coordinate system of the augmented reality device and the real space coordinate system; acquiring a reference image, captured by the augmented reality device, that is to be displayed in the visual space; and, according to the matching relationship between the visual space coordinate system of the augmented reality device and the real space coordinate system and the matching relationship between the target virtual medical image and the real medical image, matching and fusing the current medical image carrying the operation information with the reference image to obtain a mixed reality image, and displaying the mixed reality image in the visual space of the augmented reality device.
Specifically, the augmented reality device may refer to AR glasses or the like, and the augmented reality device may acquire a medical image in a real space and convert the medical image in the real space into a visual space of the augmented reality device for display. That is to say, the doctor sees the image in the visual space, and in order to enable the doctor to see the real medical image and the operation information, the target virtual medical image carrying the operation information needs to be matched and fused with the reference image in the visual space.
Referring to fig. 6, fig. 6 is a schematic diagram of the transformation relationships between the spatial coordinate systems of the various parts in an embodiment. Writing X_T_Y for the homogeneous transformation from coordinate system Y to coordinate system X, with R the virtual operation space coordinate system, C the real operation space coordinate system, E the image coordinate system of the augmented reality device and D the visual space coordinate system of the augmented reality device, the transformation from the virtual operation space to the visual space can be calculated by the following formula:

D_T_R = D_T_E * E_T_C * C_T_R

where the transformation C_T_R between the virtual operation space coordinate system R and the real operation space coordinate system C is obtained through the matching relationship between the surface point cloud data of the three-dimensionally reconstructed bone model and the feature point cloud data recorded by the optical tracking module 6 on the real bone surface of the patient 17; the transformation E_T_C from the optical marker to the image coordinate system of the augmented reality device is obtained by calibration; and the transformation D_T_E from the image coordinate system of the augmented reality device to its visual space coordinate system is obtained by an image recognition algorithm. The transformation from the virtual operation space coordinate system to the visual space coordinate system is thus obtained, so the current medical image carrying the operation information can be matched and fused with the reference image to obtain a mixed reality image, and the mixed reality image is displayed in the visual space of the augmented reality device.
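Numerically, the chain above is just a product of 4x4 homogeneous matrices; the sketch below uses identity placeholders for the three factors (in practice they would come from the registration, calibration and image recognition steps just described):

    import numpy as np

    C_T_R = np.eye(4)  # virtual operation space -> real space (from point cloud matching)
    E_T_C = np.eye(4)  # real space -> AR image space (from optical-marker calibration)
    D_T_E = np.eye(4)  # AR image space -> AR visual space (from image recognition)

    D_T_R = D_T_E @ E_T_C @ C_T_R  # virtual operation space -> visual space

    # Mapping a planned point (homogeneous coordinates, virtual operation space)
    # into the visual space of the augmented reality device:
    p_virtual = np.array([10.0, 20.0, 30.0, 1.0])
    p_visual = D_T_R @ p_virtual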
Specifically, in conjunction with fig. 7, fig. 7 is a schematic diagram of a mixed reality image in an embodiment, in which a mixed reality image of the force line, the force line angle, the prosthesis setup and the intraoperative osteotomy gap of the patient 17 in the extension position is displayed in real time.
With reference to the transformation relationships of the spatial coordinate systems above, the content displayed in the mixed reality image in the embodiment of the present application includes, but is not limited to, the following aspects: all anatomical feature points marked in the virtual operation space (such as the rotation center B of the femur 12, the knee joint center C, the talus center D, etc.) and all corresponding connecting lines (such as the force line and the transepicondylar line) can be fused into the real operation scene in real time, with the relevant included angles displayed (such as the force line included angle a and the internal and external rotation angles); the prosthesis positioning planned before the operation can be fused into the real operation scene, so that the doctor can check the prosthesis positioning effect on the real bone; and the osteotomy mark points (such as the distal resection points of the medial and lateral condyles of the femur 12 and the medial and lateral resection points of the posterior condyle) and the osteotomy thicknesses (such as the distal resection thickness of the medial and lateral condyles of the femur 12 and the proximal resection thickness of the tibia 14) may likewise be fused into the real surgical scene.
Fig. 7 also shows the lateral distal plane E of the femoral prosthesis, the lateral distal plane F of the tibial prosthesis, the lateral knee joint osteotomy gap G in the extension position, the medial distal plane H of the femoral prosthesis, the medial osteotomy gap I in the extension position, the medial distal plane J of the tibial prosthesis, the real femur K of the patient 17, the virtual femoral prosthesis L, the virtual tibial prosthesis M, the real tibia N of the patient 17, and the real fibula Z of the patient 17; these can be seen directly in fig. 7 and are not described again here.
In one embodiment, the obtaining of the matching relationship between the visual space coordinate system of the augmented reality device and the real space coordinate system includes: acquiring a first conversion matrix between the real space coordinate system and the image space coordinate system of the augmented reality device; acquiring a second conversion matrix between the image space coordinate system of the augmented reality device and the visual space coordinate system of the augmented reality device; and obtaining the matching relationship between the visual space coordinate system of the augmented reality device and the real space coordinate system according to the first conversion matrix and the second conversion matrix.
The first transformation matrix is a first transformation matrix of a real space coordinate system (i.e., the real operation space coordinate system) and an image space coordinate system of the augmented reality device, and the first transformation matrix may be a transformation matrix between the optical marker and the image 19 to be recognized of the AR glasses, which is obtained by a calibration method, so as to establish a matching relationship between the real space coordinate system and the image space coordinate system of the augmented reality device, which is specifically referred to below.
The second transformation matrix is a second transformation matrix of the image space coordinate system of the augmented reality device and the visual space coordinate system of the augmented reality device obtained through the image recognition algorithm.
And thus, the matching relation between the visual space coordinate system and the real space coordinate system of the augmented reality equipment is obtained according to the first conversion matrix and the second conversion matrix.
Specifically, acquiring a first transformation matrix of a real space coordinate system and an image space coordinate system of the augmented reality device includes: acquiring a first reference object image of a first calibration reference object in an image space through augmented reality equipment, establishing an image coordinate system by taking a target point of the first reference object image as an origin, and determining a preset number of reference points from the first reference object image based on the image coordinate system; acquiring a second reference object image of the first calibration reference object acquired by the real image acquisition equipment under the image coordinate system of the real image acquisition equipment, and determining a mapping point corresponding to the reference point from the second reference object image; converting the mapping points to a real space coordinate system; and calculating the coordinates of the reference point and the mapping point in the real space coordinate system to obtain a first conversion matrix of the real space coordinate system and the image space coordinate system of the augmented reality equipment.
Specifically, referring to fig. 8, fig. 8 includes an optical tracking module 6, a base target 15, a tip target 18, and an image 19 to be recognized of the augmented reality device. The first calibration reference object is the physical object corresponding to the image 19 to be recognized, and the image 19 to be recognized may be of any image type recognizable by the augmented reality device, such as a two-dimensional code or a wireframe pattern. The image 19 to be recognized is fixed on the base target 15; in other embodiments it may be fixed on any other target, as long as its position remains fixed, so that the picture acquired by the augmented reality device is of the base target 15. The coordinate system of the base target 15 serves as the real space coordinate system.
The augmented reality device acquires the image of the base target 15 in the image space, and an image coordinate system is established with the image center point as the origin, from which at least four reference points A, B, C and D are obtained. The optical tracking module 6 acquires the second reference object image of the corresponding tip target 18 in the image coordinate system of the real image acquisition device, the mapping points corresponding to the reference points are determined from the second reference object image, and the mapping points are then converted into the coordinate system of the base target 15, i.e., the real space coordinate system. From the coordinates of these corresponding point pairs, the transformation between the reference points and the mapping points in the real space coordinate system can be obtained through a feature matching algorithm.
The above embodiment thus provides a calibration method for the first transformation matrix between the real space coordinate system and the image space coordinate system of the augmented reality device; a sketch of the point-correspondence step follows.
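As an illustration only, the following is a minimal sketch of solving for a rigid transformation from the paired points described above, assuming the reference points (e.g., A, B, C, D) and their mapping points have already been expressed in a common frame as Nx3 arrays. The Kabsch/SVD solution and all names are assumptions of this sketch; the embodiment itself only requires "a feature matching algorithm".

```python
import numpy as np

def estimate_rigid_transform(ref_pts: np.ndarray, map_pts: np.ndarray) -> np.ndarray:
    """Return a 4x4 matrix T such that map_pts ~= R @ ref_pts + t for paired Nx3 points."""
    ref_c, map_c = ref_pts.mean(axis=0), map_pts.mean(axis=0)   # centroids
    H = (ref_pts - ref_c).T @ (map_pts - map_c)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = map_c - R @ ref_c
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T
```

In practice at least four non-coplanar point pairs, as in the ABCD example above, give a well-conditioned solution.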
In one embodiment, after displaying the mixed reality image in the visual space of the augmented reality device, the method further includes: acquiring the display angle of the mixed reality image; calculating a reference angle based on the display angle, and acquiring the target virtual medical image at the reference angle; and displaying the target virtual medical image at the reference angle together with the mixed reality image in the visual space.
Specifically, referring to fig. 9, fig. 9 is a schematic diagram of the visual space display of an augmented reality device in another embodiment. The visual space displays in real time a mixed reality image of the force line, flexion angle, prosthesis placement and intraoperative osteotomy gap of the patient 17 in the flexion position, including the flexion included angle a2, the rotation center B of the femur 12, the knee joint center C and the talus center D. The image processing device further obtains the display angle at which the mixed reality image is displayed in the visual space and calculates a reference angle from it, such as an angle at 90 degrees to the display angle; examples in fig. 9 include the view from the outside of the patient 17 looking medially and the view from the inside of the patient 17 looking laterally. The image processing device then extracts the view at the corresponding angle from the target virtual medical image according to the reference angle and displays it together with the mixed reality image in the visual space. Fig. 9 further labels the lateral posterior condylar plane E2 of the femoral prosthesis, the lateral distal plane F of the tibial prosthesis, the flexion lateral resection gap G2, the medial posterior condylar plane H2 of the femoral prosthesis, the flexion medial resection gap I2, the medial distal plane J of the tibial prosthesis, the real femur K of the patient 17, the virtual femoral prosthesis L, the virtual tibial prosthesis M, the real tibia N of the patient 17, and the real fibula Z of the patient 17. A minimal sketch of the reference-angle computation follows.
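As an illustration only, a minimal sketch of deriving a view rotated 90 degrees from the current display angle is given below, assuming the display angle is measured about a single fixed axis and the target virtual medical image is available as an Nx3 point set; the axis choice and all names are assumptions of this sketch.

```python
import numpy as np

def rotation_about_z(angle_deg: float) -> np.ndarray:
    """4x4 homogeneous rotation about the z axis."""
    a = np.deg2rad(angle_deg)
    R = np.eye(4)
    R[:2, :2] = [[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]]
    return R

def reference_view(model_pts: np.ndarray, display_angle_deg: float) -> np.ndarray:
    """Re-pose the target virtual medical image at display angle + 90 degrees."""
    T = rotation_about_z(display_angle_deg + 90.0)
    homog = np.c_[model_pts, np.ones(len(model_pts))]   # Nx4 homogeneous points
    return (T @ homog.T).T[:, :3]
```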
In one embodiment, after displaying the mixed reality image to the visual space of the augmented reality device, the method further includes: receiving an editing instruction aiming at the operation information through the augmented reality equipment; and editing the operation information based on the editing instruction through the augmented reality equipment.
Referring to fig. 10, fig. 10 is a schematic diagram of an operation information adjustment flow in an embodiment; the editing instruction includes, but is not limited to, position adjustment, display adjustment, and the like.
With reference to fig. 10 and fig. 2, a matching relationship between the real operation space and the image 19 to be recognized (the image coordinate system of the augmented reality device) is first established through the transformation matrix between the optical marker and the image 19 to be recognized of the AR glasses. Next, the coordinates of the image 19 to be recognized in the glasses visual space are obtained, establishing the matching relationship between the image coordinate system and the glasses visual space, so that the matching relationship between the real operation space and the glasses visual space can be established. Because the real operation space and the virtual operation space already have a matching relationship, the matching relationship between the virtual operation space and the glasses visual space follows, and the operation information in the virtual operation space can be fused into the real operation scene, that is, into the glasses visual space scene. The doctor can then adjust the intraoperative plan, i.e., the operation information, using the gesture recognition and interactive controls of the augmented reality device. A sketch of this fusion chain follows.
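As an illustration only, the following minimal sketch chains the matching relationships just described to fuse operation information into the glasses visual space, assuming T_real_virtual (virtual operation space to real operation space) and T_visual_real (real operation space to glasses visual space) are available as 4x4 homogeneous matrices; the names are assumptions of this sketch.

```python
import numpy as np

def fuse_operation_info(points_virtual: np.ndarray,
                        T_real_virtual: np.ndarray,
                        T_visual_real: np.ndarray) -> np.ndarray:
    """Map operation-information points from the virtual operation space into the visual space."""
    T_visual_virtual = T_visual_real @ T_real_virtual            # chain the two matching relations
    homog = np.c_[points_virtual, np.ones(len(points_virtual))]  # Nx4 homogeneous points
    return (T_visual_virtual @ homog.T).T[:, :3]
```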
In one embodiment, receiving an editing instruction for the operation information through the augmented reality device includes: recognizing the gesture of an operator through the augmented reality device and, when the gesture of the operator meets a preset requirement, displaying an operation information editing instruction receiving panel; and receiving the editing instruction for the operation information through the editing instruction receiving panel.
Specifically, the augmented reality device may include a plurality of modules. With reference to fig. 2 and 3, the gesture 22 of the chief surgeon is recognized by the gesture recognition module of the augmented reality device, which can trigger the editing instruction receiving panel; the panel may be an interactive control command panel with buttons, or an interactively controlled virtual cuboid box, through which plan adjustment instructions are sent. The interaction control module handles the coordinate system conversion calculations required for interactive control and provides the adjusted transformation to the display module for display. The display module fuses the operation plan information onto the intraoperative region of the patient 17, provides suggestions for adjusting the operation plan, and displays the interactive control command panel and the interactively controlled virtual cuboid box for intraoperative plan adjustment. A sketch of the gesture-triggered panel logic follows.
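As an illustration only, the gesture-triggered display of the editing instruction receiving panel might be dispatched as in the minimal sketch below, assuming the device SDK reports a recognized gesture label with a confidence per frame; the gesture name, threshold and panel callback are assumptions of this sketch.

```python
from typing import Callable

def maybe_show_edit_panel(gesture_label: str,
                          confidence: float,
                          show_panel: Callable[[], None],
                          trigger_gesture: str = "pinch",
                          threshold: float = 0.9) -> bool:
    """Display the operation-information editing panel when the preset gesture is met."""
    if gesture_label == trigger_gesture and confidence >= threshold:
        show_panel()   # e.g., render the interactive control command panel
        return True
    return False
```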
In one embodiment, editing the operation information based on the editing instruction includes: querying the operation information at the corresponding angle according to the editing instruction, and displaying the queried operation information in the visual space.
Specifically, referring to fig. 11 and 12, fig. 11 is a schematic view of a display of a visual space of an augmented reality device in a further embodiment.
The editing instruction may be a query instruction that retrieves the operation information at a corresponding angle and displays it in the visual space. The corresponding angle may correspond to the operation information of the patient 17 in the extension position or in the flexion position; in other embodiments, other angles may be preset for the user to select, so that the operation information at the selected angle is displayed.
As shown in fig. 11, the doctor can also choose to display in real time in the AR glasses the overall operation plan information (prosthesis model and size, angles and osteotomy amounts) of the patient 17 in the extension position, giving the doctor a global grasp of the operation-related data.
In other embodiments, the doctor can likewise choose to display in real time in the AR glasses the overall operation plan information (prosthesis model and size, angles and osteotomy amounts) of the patient 17 in the flexion position.
In one embodiment, editing the operation information based on the editing instruction includes: performing at least one of translation, rotation and transformation of type and size on the operation information according to the editing instruction; and displaying the edited operation information in a visual space.
Specifically, referring to fig. 12, fig. 12 is a functional schematic diagram of an interaction control module of an augmented reality device in an embodiment.
As shown in fig. 12, this is the interactive control panel in the AR glasses with which the doctor adjusts the femur 12 intraoperative plan in real time. The doctor can use a finger to trigger the corresponding translation and rotation command buttons 22 in the front, right and top views, thereby changing the corresponding angle and osteotomy amount plan; through the command buttons 22 in the right-hand column of the panel, the prosthesis type and size and the adjustment step of distance and angle can be changed, as sketched below.
In other embodiments, an interactive control panel for the doctor to adjust the tibia 14 intraoperative plan in real time may likewise be provided in the AR glasses and operated in the same way.
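As an illustration only, applying a button-triggered translation or rotation to the planned prosthesis pose might look like the minimal sketch below, assuming the pose is kept as a 4x4 homogeneous matrix; the command names and step sizes are assumptions of this sketch.

```python
import numpy as np

def apply_edit(pose: np.ndarray, command: str,
               step_mm: float = 1.0, step_deg: float = 1.0) -> np.ndarray:
    """Return the prosthesis pose (4x4) after one button-triggered edit."""
    delta = np.eye(4)
    axis = {"x": 0, "y": 1, "z": 2}
    kind, _, spec = command.partition("_")      # e.g. "translate_+x", "rotate_-z"
    sign = 1.0 if spec[0] == "+" else -1.0
    i = axis[spec[1]]
    if kind == "translate":
        delta[i, 3] = sign * step_mm
    elif kind == "rotate":                      # rotation about one world axis
        a = np.deg2rad(sign * step_deg)
        j, k = (i + 1) % 3, (i + 2) % 3
        delta[j, j] = delta[k, k] = np.cos(a)
        delta[k, j], delta[j, k] = np.sin(a), -np.sin(a)
    return delta @ pose                         # apply the edit in the world frame
```

Each button press maps to one `command` string, so repeated presses accumulate small, reversible adjustments.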
In one embodiment, editing the operation information based on the editing instruction includes: capturing the mixed reality image according to the editing instruction to obtain a captured image; and performing an editing operation on the operation information in the captured image, the editing operation including translation and/or rotation.
In one embodiment, after the editing operation is performed on the operation information in the captured image, the method further includes: acquiring the edited operation information and displaying the edited operation information in the visual space.
Specifically, referring to fig. 13 and 14, fig. 13 is a functional schematic diagram of an interaction control module of an augmented reality device in another embodiment, and fig. 14 is a functional schematic diagram of an interaction control module of an augmented reality device in yet another embodiment.
Fig. 13 shows the AR glasses as the doctor adjusts the femur 12 intraoperative plan in real time by gesture grabbing; it displays the real femur K of the patient 17 and the virtual femoral prosthesis L. The doctor can grab the virtual cuboid box 23 used to adjust the prosthesis placement by hand and translate and rotate it up, down, left and right; during intraoperative plan adjustment, the adjusted plan details are displayed in real time on the plan detail display panel 24.
Fig. 14 shows the AR glasses as the doctor adjusts the tibia 14 intraoperative plan in real time by gesture grabbing; it displays the virtual tibial prosthesis M, the real tibia N of the patient 17 and the real fibula Z of the patient 17. The doctor can grab the virtual cuboid box 23 used to adjust the prosthesis placement by hand and translate and rotate it up, down, left and right; during intraoperative plan adjustment, the adjusted plan details are displayed in real time on the plan detail display panel 24.
In one embodiment, the method further includes: generating a control command for the mechanical arm 2 according to the operation information and sending the control command to the mechanical arm 2, where the control command instructs the mechanical arm 2 to operate according to the operation information.
Specifically, after the operation information is determined, the control command for the mechanical arm 2 is generated based on the operation information and sent to the mechanical arm 2, so that the mechanical arm 2 can operate in accordance with the operation information and assist the doctor with the osteotomy and similar steps. A minimal sketch of this step follows.
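As an illustration only, generating and sending such a control command might look like the minimal sketch below, assuming the operation information supplies an osteotomy plane and the robotic arm controller accepts a JSON message through a transport callable; the message schema and field names are assumptions of this sketch, not a real robot API.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ArmCommand:
    plane_origin_mm: list   # osteotomy-plane origin in the robot base frame
    plane_normal: list      # unit normal of the osteotomy plane
    tool: str = "saw_guide"

def send_arm_command(operation_info: dict, send) -> None:
    """Build a control command from the confirmed operation information and send it."""
    cmd = ArmCommand(plane_origin_mm=list(operation_info["origin"]),
                     plane_normal=list(operation_info["normal"]))
    send(json.dumps(asdict(cmd)))   # deliver the serialized command to the arm controller
```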
In the above embodiments, the transformation relationships among the virtual operation space, the real operation space and the visual space of the AR glasses are established, so that information can be rapidly converted and fused among the three. Using augmented reality, detailed operation plan information (including the real-time force line, prosthesis positioning effect, force line angle, flexion angle display, intraoperative osteotomy gap and the like) is fused onto the real operation site, giving the doctor global data prompts and plan adjustment prompts and enabling a more accurate, personalized operation plan. Furthermore, the convenient interactive operation based on augmented reality allows the chief surgeon 21 to optimally adjust the surgical plan in real time against the real surgical scene without touching a computer terminal, reducing the risk of contaminating the sterile area.
It should be understood that although the steps in the flowcharts of fig. 4 and 5 are shown in sequence as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise, the order of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in fig. 4 and 5 may include multiple sub-steps or stages, which need not be completed at the same time but may be performed at different times, and which need not be performed sequentially but may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, a computer device is provided, which may be a terminal, and whose internal structure may be as shown in fig. 15. The computer device includes a processor, a memory, a communication interface, a display screen and an input device connected by a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The communication interface of the computer device is used for wired or wireless communication with an external terminal; the wireless communication may be realized through WiFi, an operator network, NFC (near field communication) or other technologies. The computer program, when executed by the processor, implements a medical image processing method. The display screen of the computer device may be a liquid crystal display or an electronic ink display, and the input device of the computer device may be a touch layer covering the display screen, a key, a trackball or a touch pad provided on the housing of the computer device, or an external keyboard 10, touch pad or mouse.
Those skilled in the art will appreciate that the architecture shown in fig. 15 is merely a block diagram of part of the structure relevant to the present disclosure and does not limit the computer devices to which the present disclosure applies; a particular computer device may include more or fewer components than shown, combine certain components, or arrange the components differently.
In one embodiment, there is also provided a computer device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the following steps when executing the computer program: acquiring a real medical image acquired by a real image acquisition device, the real medical image including a first marker; acquiring a target virtual medical image which corresponds to the real medical image and contains operation information and a position relation between the target virtual medical image and the operation information, wherein the target virtual medical image comprises a first marker; calculating the matching relation between the target virtual medical image and the real medical image according to the first marker, and fusing the target virtual medical image and the real medical image to obtain a current medical image; and fusing the operation information in the current medical image according to the matching relationship between the target virtual medical image and the real medical image and the position relationship between the target virtual medical image and the operation information.
In one embodiment, the calculating, by the processor, a matching relationship between the target virtual medical image and the real medical image according to the first marker and fusing the target virtual medical image and the real medical image to obtain the current medical image, which is implemented when the processor executes the computer program, includes: calculating to obtain corresponding matching features in the target virtual medical image and the real medical image according to the first marker; and calculating the matching relation between the target virtual medical image and the real medical image according to the matching characteristics, and fusing the target virtual medical image and the real medical image to obtain the current medical image.
In one embodiment, prior to acquiring the target virtual medical image corresponding to the real medical image as implemented when the processor executes the computer program, the method further comprises: acquiring an initial medical image obtained by scanning of medical imaging equipment; carrying out three-dimensional reconstruction on the initial medical image to obtain a target virtual medical image; receiving an operation information configuration instruction aiming at the target virtual medical image, configuring operation information on the target virtual medical image according to the operation information configuration instruction, and acquiring the position relation between the target virtual medical image and the operation information.
In one embodiment, the processor, when executing the computer program, further performs the steps of: acquiring the matching relationship between the visual space coordinate system of the augmented reality device and the real space coordinate system; acquiring a reference image, collected by the augmented reality device, to be displayed in the visual space; and matching and fusing the current medical image carrying the operation information with the reference image to obtain a mixed reality image according to the matching relationship between the visual space coordinate system of the augmented reality device and the real space coordinate system and the matching relationship between the target virtual medical image and the real medical image, and displaying the mixed reality image in the visual space of the augmented reality device.
In one embodiment, acquiring the matching relationship between the visual space coordinate system of the augmented reality device and the real space coordinate system, as implemented when the processor executes the computer program, includes: acquiring a first transformation matrix between the real space coordinate system and the image space coordinate system of the augmented reality device; acquiring a second transformation matrix between the image space coordinate system of the augmented reality device and the visual space coordinate system of the augmented reality device; and obtaining the matching relationship between the visual space coordinate system of the augmented reality device and the real space coordinate system according to the first transformation matrix and the second transformation matrix.
In one embodiment, acquiring the first transformation matrix between the real space coordinate system and the image space coordinate system of the augmented reality device, as implemented when the processor executes the computer program, includes: acquiring, through the augmented reality device, a first reference object image of a first calibration reference object in the image space, establishing an image coordinate system with a target point of the first reference object image as the origin, and determining a preset number of reference points from the first reference object image based on the image coordinate system; acquiring a second reference object image of the first calibration reference object, captured by the real image acquisition device in the image coordinate system of the real image acquisition device, and determining the mapping points corresponding to the reference points from the second reference object image; converting the mapping points into the real space coordinate system; and calculating the first transformation matrix between the real space coordinate system and the image space coordinate system of the augmented reality device from the coordinates of the reference points and the mapping points in the real space coordinate system.
In one embodiment, after the mixed reality image is displayed in the visual space of the augmented reality device, the processor, when executing the computer program, further implements: acquiring the display angle of the mixed reality image; calculating a reference angle based on the display angle, and acquiring the target virtual medical image at the reference angle; and displaying the target virtual medical image at the reference angle together with the mixed reality image in the visual space.
In one embodiment, the processor, when executing the computer program, further comprises, after displaying the mixed reality image to the visual space of the augmented reality device: receiving an editing instruction aiming at the operation information through the augmented reality equipment; and editing the operation information based on the editing instruction through the augmented reality equipment.
In one embodiment, the receiving, by the augmented reality device, the editing instruction for the operation information, implemented when the processor executes the computer program, includes: recognizing the gesture of an operator through augmented reality equipment, and displaying an operation information editing instruction receiving panel when the gesture of the operator meets a preset requirement; an editing instruction for the operation information is received through the editing instruction receiving panel.
In one embodiment, editing the operation information based on the editing instruction, as implemented when the processor executes the computer program, includes: querying the operation information at the corresponding angle according to the editing instruction, and displaying the queried operation information in the visual space.
In one embodiment, the editing of the operation information based on the editing instruction implemented when the processor executes the computer program includes: performing at least one of translation, rotation and transformation of type and size on the operation information according to the editing instruction; and displaying the edited operation information in a visual space.
In one embodiment, editing the operation information based on the editing instruction, as implemented when the processor executes the computer program, includes: capturing the mixed reality image according to the editing instruction to obtain a captured image; and performing an editing operation on the operation information in the captured image, the editing operation including translation and/or rotation.
In one embodiment, after the editing operation is performed on the operation information in the captured image when the processor executes the computer program, the method further includes: acquiring the edited operation information, and displaying the edited operation information in the visual space.
In one embodiment, the processor, when executing the computer program, further performs the steps of: generating a control command for the mechanical arm 2 according to the operation information and sending the control command to the mechanical arm 2, where the control command instructs the mechanical arm 2 to operate according to the operation information.
In one embodiment, a computer-readable storage medium is provided, having stored thereon a computer program which, when executed by a processor, performs the steps of: acquiring a real medical image acquired by a real image acquisition device, the real medical image including a first marker; acquiring a target virtual medical image which corresponds to the real medical image and contains operation information, together with the position relationship between the target virtual medical image and the operation information, where the target virtual medical image includes the first marker; calculating the matching relationship between the target virtual medical image and the real medical image according to the first marker, and fusing the target virtual medical image and the real medical image to obtain a current medical image; and fusing the operation information in the current medical image according to the matching relationship between the target virtual medical image and the real medical image and the position relationship between the target virtual medical image and the operation information.
In one embodiment, the computer program, when executed by the processor, for calculating a matching relationship between the target virtual medical image and the real medical image according to the first marker, and fusing the target virtual medical image and the real medical image to obtain the current medical image, includes: calculating to obtain corresponding matching features in the target virtual medical image and the real medical image according to the first marker; and calculating the matching relation between the target virtual medical image and the real medical image according to the matching characteristics, and fusing the target virtual medical image and the real medical image to obtain the current medical image.
In one embodiment, the computer program, when executed by the processor, further comprises, prior to acquiring the target virtual medical image corresponding to the real medical image: acquiring an initial medical image obtained by scanning of medical imaging equipment; carrying out three-dimensional reconstruction on the initial medical image to obtain a target virtual medical image; receiving an operation information configuration instruction aiming at the target virtual medical image, configuring operation information on the target virtual medical image according to the operation information configuration instruction, and acquiring the position relation between the target virtual medical image and the operation information.
In one embodiment, the computer program, when executed by the processor, further performs the steps of: acquiring the matching relationship between the visual space coordinate system of the augmented reality device and the real space coordinate system; acquiring a reference image, collected by the augmented reality device, to be displayed in the visual space; and matching and fusing the current medical image carrying the operation information with the reference image to obtain a mixed reality image according to the matching relationship between the visual space coordinate system of the augmented reality device and the real space coordinate system and the matching relationship between the target virtual medical image and the real medical image, and displaying the mixed reality image in the visual space of the augmented reality device.
In one embodiment, acquiring the matching relationship between the visual space coordinate system of the augmented reality device and the real space coordinate system, as implemented when the computer program is executed by a processor, includes: acquiring a first transformation matrix between the real space coordinate system and the image space coordinate system of the augmented reality device; acquiring a second transformation matrix between the image space coordinate system of the augmented reality device and the visual space coordinate system of the augmented reality device; and obtaining the matching relationship between the visual space coordinate system of the augmented reality device and the real space coordinate system according to the first transformation matrix and the second transformation matrix.
In one embodiment, acquiring the first transformation matrix between the real space coordinate system and the image space coordinate system of the augmented reality device, as implemented when the computer program is executed by a processor, includes: acquiring, through the augmented reality device, a first reference object image of a first calibration reference object in the image space, establishing an image coordinate system with a target point of the first reference object image as the origin, and determining a preset number of reference points from the first reference object image based on the image coordinate system; acquiring a second reference object image of the first calibration reference object, captured by the real image acquisition device in the image coordinate system of the real image acquisition device, and determining the mapping points corresponding to the reference points from the second reference object image; converting the mapping points into the real space coordinate system; and calculating the first transformation matrix between the real space coordinate system and the image space coordinate system of the augmented reality device from the coordinates of the reference points and the mapping points in the real space coordinate system.
In one embodiment, after the mixed reality image is displayed in the visual space of the augmented reality device, the computer program, when executed by the processor, further implements: acquiring the display angle of the mixed reality image; calculating a reference angle based on the display angle, and acquiring the target virtual medical image at the reference angle; and displaying the target virtual medical image at the reference angle together with the mixed reality image in the visual space.
In one embodiment, the computer program, when executed by the processor, further comprises, after displaying the mixed reality image to the visual space of the augmented reality device: receiving an editing instruction aiming at the operation information through the augmented reality equipment; and editing the operation information based on the editing instruction through the augmented reality equipment.
In one embodiment, receiving, by an augmented reality device, editing instructions for operational information, implemented when a computer program is executed by a processor, includes: recognizing the gesture of an operator through augmented reality equipment, and displaying an operation information editing instruction receiving panel when the gesture of the operator meets a preset requirement; an editing instruction for the operation information is received through the editing instruction receiving panel.
In one embodiment, editing the operation information based on the editing instruction, as implemented when the computer program is executed by a processor, includes: querying the operation information at the corresponding angle according to the editing instruction, and displaying the queried operation information in the visual space.
In one embodiment, the editing of the operation information based on editing instructions, implemented when the computer program is executed by a processor, includes: performing at least one of translation, rotation and transformation of type and size on the operation information according to the editing instruction; and displaying the edited operation information in a visual space.
In one embodiment, editing the operation information based on the editing instruction, as implemented when the computer program is executed by a processor, includes: capturing the mixed reality image according to the editing instruction to obtain a captured image; and performing an editing operation on the operation information in the captured image, the editing operation including translation and/or rotation.
In one embodiment, after the editing operation is performed on the operation information in the captured image when the computer program is executed by the processor, the method further includes: acquiring the edited operation information, and displaying the edited operation information in the visual space.
In one embodiment, the computer program, when executed by the processor, further performs the steps of: generating a control command for the mechanical arm 2 according to the operation information and sending the control command to the mechanical arm 2, where the control command instructs the mechanical arm 2 to operate according to the operation information.
It will be understood by those skilled in the art that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing the relevant hardware; the computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, and the like. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM may take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, any combination of these technical features that contains no contradiction should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and while their description is relatively specific and detailed, it should not be construed as limiting the scope of the invention. It should be noted that those skilled in the art can make several variations and modifications without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (24)

1. A method of medical image processing, the method comprising:
acquiring a real medical image acquired by a real image acquisition device, the real medical image including a first marker;
acquiring a target virtual medical image which corresponds to the real medical image and contains operation information and a position relation between the target virtual medical image and the operation information, wherein the target virtual medical image comprises the first marker;
calculating the matching relation between the target virtual medical image and the real medical image according to the first marker, and fusing the target virtual medical image and the real medical image to obtain a current medical image;
and fusing the operation information in the current medical image according to the matching relation between the target virtual medical image and the real medical image and the position relation between the target virtual medical image and the operation information.
2. The method according to claim 1, wherein the calculating a matching relationship between the target virtual medical image and the real medical image according to the first marker, and the fusing the target virtual medical image and the real medical image to obtain a current medical image comprises:
calculating to obtain corresponding matching features in the target virtual medical image and the real medical image according to the first marker;
and calculating the matching relation between the target virtual medical image and the real medical image according to the matching characteristics, and fusing the target virtual medical image and the real medical image to obtain the current medical image.
3. The method of claim 2, wherein, prior to the acquiring of the target virtual medical image corresponding to the real medical image, the method further comprises:
acquiring an initial medical image obtained by scanning of medical imaging equipment;
carrying out three-dimensional reconstruction on the initial medical image to obtain a target virtual medical image;
receiving an operation information configuration instruction aiming at the target virtual medical image, configuring operation information on the target virtual medical image according to the operation information configuration instruction, and acquiring the position relation between the target virtual medical image and the operation information.
4. The method of claim 1, further comprising:
acquiring a matching relation between a visual space coordinate system and a real space coordinate system of augmented reality equipment; acquiring a reference image which is acquired by the augmented reality equipment and is to be displayed to a visual space;
and matching and fusing the current medical image carrying the operation information and the reference image to obtain a mixed reality image according to the matching relationship between the visual space coordinate system of the augmented reality device and the real space coordinate system and the matching relationship between the target virtual medical image and the real medical image, and displaying the mixed reality image into the visual space of the augmented reality device.
5. The method according to claim 4, wherein obtaining the matching relationship between the visual space coordinate system of the augmented reality device and the real space coordinate system comprises:
acquiring a first conversion matrix of a real space coordinate system and an image space coordinate system of augmented reality equipment;
acquiring a second transformation matrix of an image space coordinate system of the augmented reality equipment and a visual space coordinate system of the augmented reality equipment;
and obtaining the matching relation between the visual space coordinate system and the real space coordinate system of the augmented reality equipment according to the first conversion matrix and the second conversion matrix.
6. The method of claim 5, wherein obtaining the first transformation matrix of the real space coordinate system and the image space coordinate system of the augmented reality device comprises:
acquiring a first reference object image of a first calibration reference object in an image space through augmented reality equipment, establishing an image coordinate system by taking a target point of the first reference object image as an origin, and determining a preset number of reference points from the first reference object image based on the image coordinate system;
acquiring a second reference object image of a first calibration reference object acquired by real image acquisition equipment under an image coordinate system of the real image acquisition equipment, and determining a mapping point corresponding to the reference point from the second reference object image;
converting the mapping points into the real space coordinate system;
and calculating to obtain a first conversion matrix of the real space coordinate system and the image space coordinate system of the augmented reality device based on the reference point and the coordinates of the mapping point in the real space coordinate system.
7. The method of claim 4, wherein, after the displaying of the mixed reality image in the visual space of the augmented reality device, the method further comprises:
acquiring a display angle of the mixed reality image;
calculating to obtain a reference angle based on the display angle, and acquiring a target virtual medical image under the reference angle;
and displaying the target virtual medical image and the mixed reality image under the reference angle in the visual space.
8. The method of claim 4, wherein after displaying the mixed reality image to the visual space of the augmented reality device, further comprising:
receiving an editing instruction aiming at the operation information through the augmented reality equipment; editing the operation information based on the editing instruction through the augmented reality equipment.
9. The method of claim 8, wherein receiving, through the augmented reality equipment, an editing instruction for the operation information comprises:
recognizing the gesture of an operator through the augmented reality equipment, and displaying an operation information editing instruction receiving panel when the gesture of the operator meets a preset requirement;
receiving an editing instruction for the operation information through the editing instruction receiving panel.
10. The method according to claim 8, wherein the editing the operation information based on the editing instruction comprises:
querying the operation information at the corresponding angle according to the editing instruction, and displaying the queried operation information in the visual space.
11. The method according to claim 8, wherein the editing the operation information based on the editing instruction comprises:
performing at least one of translation, rotation and type and size conversion on the operation information according to the editing instruction;
and displaying the edited operation information in the visual space.
12. The method according to claim 8, wherein the editing the operation information based on the editing instruction comprises:
capturing the mixed reality image according to the editing instruction to obtain a captured image;
and performing an editing operation on the operation information in the captured image, wherein the editing operation comprises translation and/or rotation.
13. The method according to claim 12, wherein, after the editing operation is performed on the operation information in the captured image, the method further comprises:
and acquiring the edited operation information, and displaying the edited operation information in the visual space.
14. The method according to any one of claims 1 to 13, further comprising:
and generating a mechanical arm control command according to the operation information, and sending the mechanical arm control command to a mechanical arm, wherein the mechanical arm control command is used for indicating the mechanical arm to operate according to the operation information.
15. A medical image processing system is characterized in that the system comprises a real image acquisition device and an image processing device;
the real image acquisition equipment is used for acquiring a real medical image and sending the acquired real medical image to the image processing equipment, wherein the real medical image comprises a first marker;
the image processing device is used for realizing the medical image processing method of any one of claims 1 to 14.
16. The system according to claim 15, further comprising at least one display, wherein the display is connected to the image processing device, and the display is configured to display a current medical image processed by the image processing device, and the current medical image carries operation information.
17. The system according to claim 15, further comprising at least one medical imaging device, the medical imaging device being connected to the image processing device, the medical imaging device being configured to scan an initial medical image and to transmit the scanned initial medical image to the image processing device.
18. The system of claim 15, further comprising an augmented reality device in communication with the image processing device, the augmented reality device configured to display a mixed reality image processed by the image processing device.
19. The system of claim 18, wherein the augmented reality device is further configured to receive an editing instruction for the operation information, and edit the operation information based on the editing instruction.
20. The system according to claim 19, wherein the augmented reality device comprises a gesture recognition module, an interaction control module and a display module, wherein the gesture recognition module is configured to recognize a gesture of an operator and control the display module to display an operation information editing instruction receiving panel when the gesture of the operator meets a preset requirement; the interactive control module is used for carrying out conversion calculation on the editing instruction received by the operation information editing instruction receiving panel and displaying the operation information after conversion calculation on the display module.
21. The system of claim 15, further comprising a robot having a controller in communication with the image processing device, the robot configured to receive a robot control command generated by the image processing device based on the operation information and to operate in accordance with the operation information based on the robot control command.
22. A surgical system, comprising an operating trolley, a navigation trolley, and an operating table;
a real image acquisition device is installed on the navigation trolley; an image processing device is arranged on the operating trolley and/or the navigation trolley; the real image acquisition device is used for acquiring a real medical image of a patient on the operating table and sending the acquired real medical image to the image processing device, wherein the real medical image comprises a first marker;
the image processing device is used for realizing the medical image processing method of any one of claims 1 to 14.
23. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 14.
24. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 14.
CN202111338635.9A 2021-11-12 2021-11-12 Medical image processing method, system, computer device and storage medium Pending CN114224508A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111338635.9A CN114224508A (en) 2021-11-12 2021-11-12 Medical image processing method, system, computer device and storage medium

Publications (1)

Publication Number Publication Date
CN114224508A true CN114224508A (en) 2022-03-25

Family

ID=80749214

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111338635.9A Pending CN114224508A (en) 2021-11-12 2021-11-12 Medical image processing method, system, computer device and storage medium

Country Status (1)

Country Link
CN (1) CN114224508A (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180303558A1 (en) * 2016-08-17 2018-10-25 Monroe Milas Thomas Methods and systems for registration of virtual space with real space in an augmented reality system
US20190365498A1 (en) * 2017-02-21 2019-12-05 Novarad Corporation Augmented Reality Viewing and Tagging For Medical Procedures
CN109223121A (en) * 2018-07-31 2019-01-18 广州狄卡视觉科技有限公司 Based on medical image Model Reconstruction, the cerebral hemorrhage puncturing operation navigation system of positioning
CN110353806A (en) * 2019-06-18 2019-10-22 北京航空航天大学 Augmented reality navigation methods and systems for the operation of minimally invasive total knee replacement
CN112057165A (en) * 2020-09-22 2020-12-11 上海联影医疗科技股份有限公司 Path planning method, device, equipment and medium
CN112155732A (en) * 2020-09-29 2021-01-01 苏州微创畅行机器人有限公司 Readable storage medium, bone modeling and registering system and bone surgery system

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117853665A (en) * 2024-03-04 2024-04-09 吉林大学第一医院 Image generation method, device and medium for acetabulum and guide

Similar Documents

Publication Publication Date Title
US11672613B2 (en) Robotized system for femoroacetabular impingement resurfacing
US11944392B2 (en) Systems and methods for guiding a revision procedure
WO2022126827A1 (en) Navigation and positioning system and method for joint replacement surgery robot
WO2022126828A1 (en) Navigation system and method for joint replacement surgery
CN109069208B (en) Ultra-wideband positioning for wireless ultrasound tracking and communication
USRE49930E1 (en) Methods and systems for computer-aided surgery using intra-operative video acquired by a free moving camera
CN112155732B (en) Readable storage medium, bone modeling and registering system and bone surgery system
CN113940755B (en) Surgical planning and navigation method integrating surgical operation and image
JP2022133440A (en) Systems and methods for augmented reality display in navigated surgeries
US20070016008A1 (en) Selective gesturing input to a surgical navigation system
US20110306873A1 (en) System for performing highly accurate surgery
JP2022524752A (en) Systems and methods for surgical alignment
CN113017834B (en) Joint replacement operation navigation device and method
JP2004254899A (en) Surgery supporting system and surgery supporting method
WO2023116823A1 (en) Positioning method, system and apparatus, computer device, and storage medium
CN114224508A (en) Medical image processing method, system, computer device and storage medium
CN114246635A (en) Osteotomy plane positioning method, osteotomy plane positioning system and osteotomy plane positioning device
CN107898499A (en) Orthopaedics 3D region alignment system and method
WO2023116232A1 (en) Control method for arthroplasty surgical robot
CN115089302A (en) Surgical robot system and method
CN114224428A (en) Osteotomy plane positioning method, osteotomy plane positioning system and osteotomy plane positioning device
CN114418960A (en) Image processing method, system, computer device and storage medium
CN117814913A (en) Full-functional orthopedic operation control system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination