CN110786877B - Marking method, device and system for medical image

Info

Publication number: CN110786877B
Application number: CN201911096645.9A
Authority: CN (China)
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN110786877A
Inventors: 余航, 陈宽, 王少康
Current Assignee: Infervision Medical Technology Co Ltd
Original Assignee: Beijing Tuoxiang Technology Co ltd
Application filed by Beijing Tuoxiang Technology Co ltd
Priority to CN201911096645.9A
Publication of CN110786877A; application granted and published as CN110786877B


Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00: Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
    • A61B6/02: Devices for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
    • A61B6/03: Computerised tomographs
    • A61B6/032: Transmission computed tomography [CT]
    • A61B6/46: Apparatus for radiation diagnosis with special arrangements for interfacing with the operator or the patient
    • A61B6/461: Displaying means of special interest
    • A61B6/466: Displaying means of special interest adapted to display 3D data
    • A61B6/52: Devices using data or image processing specially adapted for radiation diagnosis
    • A61B6/5211: Devices using data or image processing involving processing of medical diagnostic data


Abstract

The invention provides a marking method, device and system for a medical image. The marking method comprises: performing three-dimensional reconstruction on a multilayer two-dimensional image to obtain a first three-dimensional image; sending the first three-dimensional image to a virtual reality device, so that the virtual reality device generates a second three-dimensional image in a virtual reality space from the first three-dimensional image; receiving, from the virtual reality device, a third three-dimensional image with mark points that is generated after a user marks a region of interest of the second three-dimensional image; and performing coordinate-system conversion on the third three-dimensional image to generate a multilayer two-dimensional image with the mark points. In this way, the missed-diagnosis rate of medical staff can be reduced and the efficiency with which medical staff examine medical images can be improved.

Description

Marking method, device and system for medical image
Technical Field
The invention relates to the technical field of medicine, in particular to a marking method, a marking device and a marking system for medical images.
Background
Computed tomography (CT) produces three-dimensional radiographic medical images reconstructed by digital geometry processing. In this technique, an X-ray source rotates around the human body in a single axial plane; because different tissues absorb (or refract) X-rays to different degrees, a computer can use three-dimensional techniques to reconstruct a cross-sectional image of each plane, obtain tomographic images of the corresponding tissues through window-width and window-level processing, and stack the tomographic images layer by layer to form a three-dimensional image.
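For illustration only (this sketch is not part of the original disclosure), the window-width and window-level processing mentioned above can be expressed as a simple grey-scale mapping of a slice that is assumed to already be in Hounsfield units:

    import numpy as np

    def apply_window(slice_hu: np.ndarray, level: float, width: float) -> np.ndarray:
        """Map a CT slice in Hounsfield units to an 8-bit image using a window level/width."""
        lo, hi = level - width / 2.0, level + width / 2.0
        clipped = np.clip(slice_hu, lo, hi)
        return ((clipped - lo) / (hi - lo) * 255.0).astype(np.uint8)

    # Example: a typical lung window (level -600 HU, width 1500 HU) applied to a synthetic slice.
    test_slice = np.random.uniform(-1000.0, 400.0, size=(512, 512))
    lung_view = apply_window(test_slice, level=-600.0, width=1500.0)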
A CT tomographic image is formed by stacking a plurality of two-dimensional images, has three-dimensional characteristics, and is an important means and basis for determining a lesion region. In most existing workflows, professional medical staff still use a mouse on a computer to scroll through and view the two-dimensional images layer by layer and thereby locate the lesion region (the region of interest). The working efficiency of this approach is obviously low, and it places very high demands on the medical expertise and spatial imagination of the staff. Because professional medical staff are in serious shortage, their tasks are numerous and their working pressure is high, and missed detections can occur under such high-intensity working conditions, so the whole detection process is time-consuming and labour-intensive and the missed-diagnosis rate is high.
Disclosure of Invention
In view of this, embodiments of the present invention are directed to providing a marking method, apparatus and system for a medical image, which can reduce the missed-diagnosis rate of medical staff and improve the efficiency with which medical staff examine medical images.
According to a first aspect of the embodiments of the present invention, there is provided a marking method for a medical image, the medical image comprising a multilayer two-dimensional image, the marking method comprising: performing three-dimensional reconstruction on the multilayer two-dimensional image to obtain a first three-dimensional image; sending the first three-dimensional image to a virtual reality device, so that the virtual reality device generates a second three-dimensional image in a virtual reality space from the first three-dimensional image; receiving, from the virtual reality device, a third three-dimensional image with mark points that is generated after a user marks a region of interest of the second three-dimensional image; and performing coordinate-system conversion on the third three-dimensional image to generate a multilayer two-dimensional image with the mark points.
In one embodiment, performing coordinate-system conversion on the third three-dimensional image to generate a multilayer two-dimensional image with the mark points comprises: performing coordinate-system conversion on the third three-dimensional image to generate a fourth three-dimensional image with the mark points; and reconstructing the fourth three-dimensional image into the multilayer two-dimensional image with the mark points.
In one embodiment, before three-dimensionally reconstructing the medical image to obtain the first three-dimensional image, the method further comprises: performing at least one of image preprocessing, image enhancement and image contour processing on the multilayer two-dimensional image to obtain a multilayer optimized two-dimensional image.
In one embodiment, the three-dimensional reconstruction of the medical image to obtain a first three-dimensional image comprises: performing three-dimensional reconstruction on the multi-layer optimized two-dimensional image to acquire the first three-dimensional image.
In one embodiment, before sending the first three-dimensional image to a virtual reality device to generate a second three-dimensional image in a virtual reality space, the method further comprises: converting the file format of the first three-dimensional image into the file format of the virtual reality device.
In one embodiment, converting the file format of the first three-dimensional image into the file format of the virtual reality device comprises: encoding the geometric information and the appearance information of the first three-dimensional image to obtain the file format of the virtual reality device.
In one embodiment, the file format of the virtual reality device comprises an STL file format; and/or the file format of the first three-dimensional image comprises a BMP file format.
According to a second aspect of the embodiments of the present invention, there is provided a marking method for a medical image, comprising: acquiring a first three-dimensional image to generate a second three-dimensional image in a virtual reality space from the first three-dimensional image; acquiring position information of a mark point in a real space generated after a user marks a region of interest of the second three-dimensional image; performing coordinate-system conversion between the coordinate system of the real space and the coordinate system of the virtual reality space to generate a third three-dimensional image with the mark point in the virtual reality space according to the position information of the mark point and the second three-dimensional image; and transmitting the third three-dimensional image.
In one embodiment, acquiring a first three-dimensional image to generate a second three-dimensional image in a virtual reality space from the first three-dimensional image comprises: receiving a first three-dimensional image generated by a server through three-dimensional reconstruction of the medical image, so as to generate a second three-dimensional image in the virtual reality space from the first three-dimensional image.
In one embodiment, transmitting the third three-dimensional image comprises: sending the third three-dimensional image to the server.
In one embodiment, acquiring the position information of the mark point in the real space generated after the user marks the region of interest of the second three-dimensional image comprises: receiving position information, monitored by at least one sensor located in the real space, of a mark point generated in the real space after the user marks the region of interest of the second three-dimensional image through a wearable device.
According to a third aspect of the embodiments of the present invention, there is provided a marking apparatus for a medical image, the medical image comprising a multilayer two-dimensional image, the marking apparatus comprising: a three-dimensional reconstruction module configured to perform three-dimensional reconstruction on the multilayer two-dimensional image to obtain a first three-dimensional image; a first image sending module configured to send the first three-dimensional image to a virtual reality device, so that the virtual reality device generates a second three-dimensional image in a virtual reality space from the first three-dimensional image; an image receiving module configured to receive, from the virtual reality device, a third three-dimensional image with mark points that is generated after a user marks a region of interest of the second three-dimensional image; and a first conversion module configured to perform coordinate-system conversion on the third three-dimensional image to generate a multilayer two-dimensional image with the mark points.
According to a fourth aspect of the embodiments of the present invention, there is provided another marking apparatus for a medical image, comprising: an image acquisition module configured to acquire a first three-dimensional image so as to generate a second three-dimensional image in a virtual reality space from the first three-dimensional image; a position information acquisition module configured to acquire position information of a mark point in a real space generated after a user marks a region of interest of the second three-dimensional image; a second conversion module configured to perform coordinate-system conversion between the coordinate system of the real space and the coordinate system of the virtual reality space so as to generate, in the virtual reality space, a third three-dimensional image with the mark point according to the position information of the mark point and the second three-dimensional image; and a second image sending module configured to send the third three-dimensional image.
In one embodiment, the image acquisition module is configured to: receive a first three-dimensional image generated by the server through three-dimensional reconstruction of the medical image, so as to generate a second three-dimensional image in the virtual reality space from the first three-dimensional image.
In one embodiment, the second image sending module is configured to: send the third three-dimensional image to the server.
According to a fifth aspect of an embodiment of the present invention, there is provided a marking system of a medical image, including: a server for executing the marking method of the medical image mentioned in the above embodiment; and a virtual reality device connected with the server.
In one embodiment, the virtual reality device comprises: the virtual reality all-in-one machine is used for executing the marking method of the medical image mentioned in the embodiment; the virtual reality glasses are connected with the virtual reality all-in-one machine and used for observing a second three-dimensional image in a virtual reality space; the sensor is connected with the virtual reality all-in-one machine and used for monitoring position information of a mark point in a real space generated after a user marks the region of interest of the second three-dimensional image; and the wearable device is in communication connection with the virtual reality all-in-one machine and the at least one sensor and is used for marking the region of interest of the second three-dimensional image, wherein the wearable device is a glove or a handle.
According to a sixth aspect of an embodiment of the present invention, there is provided a computer-readable storage medium storing a computer program for executing the marking method of a medical image mentioned in the above-mentioned embodiment, or for executing another marking method of a medical image mentioned in the above-mentioned embodiment.
According to a seventh aspect of the embodiments of the present invention, there is provided an electronic apparatus including: a processor for performing the marking method of the medical image mentioned in the above embodiment, or for performing the marking method of another medical image mentioned in the above embodiment; and a memory for storing the processor-executable instructions.
According to the marking method for a medical image provided by the embodiment of the present invention, the region of interest of the second three-dimensional image is marked in the virtual reality device, so that the region of interest of the medical image is marked; this reduces the missed-diagnosis rate of medical staff and improves the efficiency with which medical staff examine medical images.
Drawings
Fig. 1 is a flowchart illustrating a method for marking a medical image according to an embodiment of the present invention.
Fig. 2 is a flowchart illustrating a marking method for a medical image according to another embodiment of the present invention.
Fig. 3 is a flowchart illustrating a marking method for medical images according to another embodiment of the present invention.
Fig. 4 is a block diagram illustrating a medical image marking apparatus according to an embodiment of the present invention.
Fig. 5 is a block diagram of a medical image marking apparatus according to another embodiment of the present invention.
Fig. 6 is a block diagram illustrating a medical image marking system according to an embodiment of the present invention.
Fig. 7 is a block diagram illustrating an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Summary of the application
As described above, in the conventional approach to medical image examination professional medical staff still use a mouse on a computer to scroll through and view the two-dimensional images layer by layer in order to locate the lesion region (the region of interest). The working efficiency of detecting medical images in this way is obviously not high, and it places very high demands on the medical expertise and spatial imagination of the staff; it also leads to numerous tasks and high working pressure, and missed detections may occur under a high-intensity working state, making the whole detection process time-consuming and labour-intensive with a high missed-diagnosis rate.
In view of the above technical problem, the basic idea of the present application is to provide a marking method for a medical image, where the medical image may comprise a multilayer two-dimensional image. A first three-dimensional image may be obtained by three-dimensionally reconstructing the multilayer two-dimensional image, and the first three-dimensional image is then sent to a virtual reality device so that the virtual reality device generates a second three-dimensional image in a virtual reality space from the first three-dimensional image. Medical staff can mark the second three-dimensional image through the virtual reality device to generate a third three-dimensional image with mark points. By processing the third three-dimensional image, for example through coordinate-system conversion and/or reconstruction, the region of interest of the multilayer two-dimensional image is marked, so that the multilayer two-dimensional image with the mark points can be presented on a computer for medical staff to use.
Having described the general principles of the present application, various non-limiting embodiments of the present application will now be described with reference to the accompanying drawings.
Exemplary medical image marking method
Fig. 1 is a flowchart illustrating a method for marking a medical image according to an embodiment of the present invention. The method described in fig. 1 is performed by a computing device (e.g., a server), but the embodiments of the present application are not limited thereto. The server may be one server, or may be composed of a plurality of servers, or may be a virtualization platform, or a cloud computing service center, which is not limited in this embodiment of the present application. The medical image comprises a multi-layer two-dimensional image, as shown in fig. 1, and the marking method comprises the following steps:
s101: and performing three-dimensional reconstruction on the multilayer two-dimensional image to acquire a first three-dimensional image.
It should be understood that the medical image may be a CT tomographic image; a CT tomographic image is formed by stacking a plurality of two-dimensional images, has three-dimensional characteristics, and is an important means and basis for determining a lesion region. The embodiment of the present invention does not limit the specific type of the medical image. The first three-dimensional image is the representation obtained after the server performs three-dimensional reconstruction on the medical image; the word "first" is used only to distinguish three-dimensional images in different spatial coordinate systems. The first three-dimensional image is a three-dimensional image in a first spatial coordinate system at the server side, the multilayer two-dimensional image is a two-dimensional image in a fourth spatial coordinate system at the server side, and the first and fourth spatial coordinate systems are both spatial coordinate systems at the server side used for displaying different images.
It should also be understood that three-dimensional reconstruction refers to creating a mathematical model of a three-dimensional object that is suitable for computer representation and processing; it is the basis for processing, manipulating and analysing the properties of three-dimensional objects in a computer environment, and is also a key technology for creating virtual reality that expresses the objective world in a computer. The three-dimensional reconstruction process can in fact be understood as performing coordinate-system conversion on the multilayer two-dimensional image. Coordinate-system conversion is the process of converting the position of a spatial entity from one coordinate system to another; specifically, a one-to-one correspondence between two coordinate systems can be established through translation, enlargement, reduction, rotation or other transformations, that is, a one-to-one correspondence is established between the fourth spatial coordinate system of the multilayer two-dimensional image and the first spatial coordinate system of the first three-dimensional image, so that the multilayer two-dimensional image is reconstructed into the first three-dimensional image.
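For illustration only (not part of the original disclosure), one way to make the one-to-one correspondence between coordinate systems concrete is a 4 x 4 affine transform that maps a voxel index (slice, row, column) of the stacked two-dimensional images to a point in the coordinate system of the first three-dimensional image; the pixel spacing, slice thickness and origin used below are assumed values:

    import numpy as np

    def build_affine(pixel_spacing_mm: float, slice_thickness_mm: float, origin_mm) -> np.ndarray:
        """4 x 4 matrix mapping voxel indices (slice k, row i, column j) to 3D coordinates (x, y, z)."""
        ox, oy, oz = origin_mm
        return np.array([
            [pixel_spacing_mm, 0.0, 0.0, ox],    # x grows with the column index
            [0.0, pixel_spacing_mm, 0.0, oy],    # y grows with the row index
            [0.0, 0.0, slice_thickness_mm, oz],  # z grows with the slice index
            [0.0, 0.0, 0.0, 1.0],
        ])

    def voxel_to_world(affine: np.ndarray, k: int, i: int, j: int) -> np.ndarray:
        """Forward coordinate-system conversion: one voxel of the 2D stack -> a point of the 3D image."""
        return (affine @ np.array([j, i, k, 1.0]))[:3]

    affine = build_affine(pixel_spacing_mm=0.7, slice_thickness_mm=1.25, origin_mm=(0.0, 0.0, 0.0))
    point_3d = voxel_to_world(affine, k=100, i=256, j=256)   # centre pixel of the 100th slice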
It should be noted that the server may acquire the medical image, i.e. the multi-layer two-dimensional image, before performing the three-dimensional reconstruction on the medical image.
S102: and sending the first three-dimensional image to a virtual reality device so as to generate a second three-dimensional image in a virtual reality space according to the first three-dimensional image in the virtual reality device.
It should be understood that the second three-dimensional image is the three-dimensional image formed by the virtual reality device in the virtual reality space from the first three-dimensional image after the server sends the first three-dimensional image to the virtual reality device; the word "second" is used only to distinguish three-dimensional images in different spatial coordinate systems. The second three-dimensional image is a three-dimensional image in a second spatial coordinate system in the virtual reality space at the virtual reality device side.
Specifically, after the server sends the first three-dimensional image to the virtual reality device, the virtual reality device can project a second three-dimensional image in the virtual reality space according to the first three-dimensional image, and the medical staff can view the second three-dimensional image through the virtual reality device.
S103: and receiving a third three-dimensional image with a mark point, which is sent by the virtual reality equipment and is generated after the user marks the region of interest of the second three-dimensional image.
It should be understood that the third three-dimensional image is the three-dimensional image with mark points that is generated after the second three-dimensional image is marked in the virtual reality device; the word "third" is used only to distinguish three-dimensional images in different spatial coordinate systems. The third three-dimensional image is a three-dimensional image in a third spatial coordinate system in the virtual reality space at the virtual reality device side. The third three-dimensional image differs from the second three-dimensional image only in that it carries mark points; the second and third spatial coordinate systems are both spatial coordinate systems in the virtual reality space at the virtual reality device side used for displaying different images.
Specifically, the user can mark the second three-dimensional image in the virtual reality space through the wearable device in the virtual reality device, generate a third three-dimensional image with a mark point, and the virtual reality device sends the third three-dimensional image to the server.
It should be understood that the virtual reality device sending the third three-dimensional image to the server may also be understood as sending geometric information and appearance information of the third three-dimensional image to the server, and the geometric information may include position information of unmarked points and position information of marked points among all the points constituting the third three-dimensional image, and the like.
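As an illustrative representation only (the patent does not specify a payload format), the geometric information of the third three-dimensional image could be packaged as the positions of all points together with a flag for each point indicating whether it carries a mark:

    import json
    import numpy as np

    def pack_marked_geometry(points: np.ndarray, marked_indices) -> str:
        """Serialise all point positions of the third 3D image plus which of them are mark points."""
        marked = np.zeros(len(points), dtype=bool)
        marked[list(marked_indices)] = True
        payload = {
            "points": points.tolist(),   # position information of every point
            "marked": marked.tolist(),   # True where the user placed a mark point
        }
        return json.dumps(payload)

    points = np.random.rand(1000, 3)                       # placeholder point cloud
    message = pack_marked_geometry(points, marked_indices=[17, 512])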
S104: and converting the third three-dimensional image into a coordinate system to generate a multilayer two-dimensional image with the mark points.
Specifically, after the server receives the third three-dimensional image sent by the virtual reality device, the server may perform coordinate system conversion on the third three-dimensional image, so as to generate a multilayer two-dimensional image with the marker point.
It should be understood that the third three-dimensional image may be translated, enlarged, reduced or rotated to establish a correspondence between the third spatial coordinate system of the third three-dimensional image and the fourth spatial coordinate system of the multilayer two-dimensional image, so as to generate the multilayer two-dimensional image with the mark points, in which the region of interest has been marked. Apart from the mark points, this multilayer two-dimensional image is practically no different from the multilayer two-dimensional image on which the three-dimensional reconstruction was initially performed. However, the embodiment of the present invention is not limited to a specific transformation form of the coordinate-system conversion; transformation forms other than translation, enlargement, reduction or rotation may also be used.
It should be noted that, the process of performing coordinate system conversion on the third three-dimensional image to generate the multilayer two-dimensional image with the mark point may be to directly perform coordinate system conversion on the third three-dimensional image to generate the multilayer two-dimensional image with the mark point, or may first perform coordinate system conversion on the third three-dimensional image to generate a transitional three-dimensional image, and then perform coordinate system conversion on the transitional three-dimensional image to generate the multilayer two-dimensional image with the mark point, which is not limited in the embodiment of the present invention. Meanwhile, the number of the transitional three-dimensional images is not limited in the embodiment of the invention, and one transitional three-dimensional image or more transitional three-dimensional images can be generated through the transformation of the coordinate system.
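Continuing the illustrative affine sketch from step S101 (again an assumption-based example, not the patent's implementation), mapping a marked point of the third three-dimensional image back into the multilayer two-dimensional image amounts to applying the inverse transform and rounding to the nearest voxel index:

    import numpy as np

    def world_to_voxel(affine: np.ndarray, point_3d) -> tuple:
        """Inverse coordinate-system conversion: 3D mark-point position -> (slice, row, column) index."""
        j, i, k = (np.linalg.inv(affine) @ np.array([*point_3d, 1.0]))[:3]
        return int(round(k)), int(round(i)), int(round(j))

    def burn_marker(slices: np.ndarray, slice_idx: int, row: int, col: int, value: int = 255) -> None:
        """Write a visible mark into the corresponding two-dimensional slice."""
        slices[slice_idx, row, col] = value

    affine = np.diag([0.7, 0.7, 1.25, 1.0])              # same assumed spacing as the earlier sketch
    slices = np.zeros((200, 512, 512), dtype=np.uint8)   # the multilayer two-dimensional image
    k, i, j = world_to_voxel(affine, point_3d=(179.2, 179.2, 125.0))
    burn_marker(slices, k, i, j)                         # the mark now appears on slice k (here the 100th)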
Therefore, by marking the second three-dimensional image in the virtual reality device, medical staff can mark the region of interest of the medical image, which reduces the missed-diagnosis rate of medical staff and improves the efficiency with which medical staff examine medical images.
Fig. 2 is a flowchart illustrating a marking method for a medical image according to another embodiment of the present invention. As shown in fig. 2, performing coordinate-system conversion on the third three-dimensional image to generate a multilayer two-dimensional image with the mark points includes:
s201: and converting the coordinate system of the third three-dimensional image to generate a fourth three-dimensional image with the mark points.
It should be understood that the fourth three-dimensional image is the three-dimensional image with mark points that is generated after the server performs coordinate-system conversion on the third three-dimensional image; the word "fourth" is used only to distinguish three-dimensional images in different spatial coordinate systems. The fourth three-dimensional image is a three-dimensional image in a fifth spatial coordinate system at the server side. The fourth three-dimensional image differs from the first three-dimensional image only in that it carries mark points; the first, fourth and fifth spatial coordinate systems are all spatial coordinate systems at the server side used for displaying different images, namely the first three-dimensional image, the multilayer two-dimensional image and the fourth three-dimensional image respectively.
The coordinate system conversion of the third three-dimensional image includes translation, enlargement, reduction, rotation, or other transformations of the third three-dimensional image, so as to generate a fourth three-dimensional image with a marker point. However, the embodiment of the present invention is not limited to a specific transformation form of the coordinate system transformation, and may be other transformation forms besides the above-mentioned transformation forms such as translation, enlargement, reduction, or rotation.
S202: and reconstructing the fourth three-dimensional image into the multilayer two-dimensional image with the marking points.
It should be understood that the process of reconstructing the fourth three-dimensional image by the server actually refers to an inverse operation of three-dimensionally reconstructing the multilayer two-dimensional image, and the embodiment of the present invention does not limit a specific implementation form of the inverse operation, and may be a coordinate system transformation, for example, a coordinate system transformation such as cutting, enlarging, reducing, or rotating, is performed on the fourth three-dimensional image to generate the multilayer two-dimensional image with the mark point.
The multi-layer two-dimensional image with the mark points has no difference with the multi-layer two-dimensional image which is originally subjected to three-dimensional reconstruction except the mark points. For example, if a lesion point on the 100 th two-dimensional image of the fourth three-dimensional image is marked, the lesion point on the 100 th two-dimensional image of the multi-layered two-dimensional image with the marked point can be obtained after reconstruction.
It should be noted that the relative positions of the marking points on the third three-dimensional image, the fourth three-dimensional image and the multi-layer two-dimensional image are all the same, that is, a lesion point on the third three-dimensional image is marked, the corresponding lesion point on the fourth three-dimensional image is also marked, and the corresponding lesion point on the multi-layer two-dimensional image is also marked.
It should also be understood that, besides the above embodiment in which the multilayer two-dimensional image with the mark points is regenerated through coordinate-system conversion, at least one reference point may be selected automatically on the first three-dimensional image before the first three-dimensional image is sent to the virtual reality device. The at least one reference point then also exists on the third three-dimensional image in the virtual reality space of the virtual reality device. After the virtual reality device sends the third three-dimensional image to the server, the server may determine, through coordinate-system conversion, the correspondence between the first spatial coordinate system of the first three-dimensional image and the third spatial coordinate system of the third three-dimensional image; the server then calculates which point on the first three-dimensional image is the mark point according to the at least one reference point on the first three-dimensional image, the mark point and the at least one reference point on the third three-dimensional image, and the correspondence. Finally, the server reconstructs the first three-dimensional image with the determined mark point into a multilayer two-dimensional image, on which the mark point correspondingly exists.
In another embodiment, before the three-dimensional reconstruction of the medical image to acquire the first three-dimensional image, the method further comprises: performing at least one of image preprocessing, image enhancement and image contour processing on the multilayer two-dimensional image to obtain a multilayer optimized two-dimensional image.
It should be understood that preprocessing the multilayer two-dimensional image means adjusting the multilayer two-dimensional image into two-dimensional images of the same specification, for example by flipping, enlarging or reducing the multilayer two-dimensional image; image enhancement of the multilayer two-dimensional image means optimizing the pixels of the multilayer two-dimensional image to improve pixel quality and image resolution; and image contour processing of the multilayer two-dimensional image means adjusting the contour of the multilayer two-dimensional image to remove burrs on its edges, so that the contour of the multilayer two-dimensional image becomes clearer.
It should be noted that, in the embodiment of the present invention, it is not limited to what kind of image processing is required by the server to obtain the optimized two-dimensional image before performing the three-dimensional reconstruction on the multi-layer two-dimensional image, and at least one of image preprocessing, image enhancement, and image contour processing may be performed before performing the three-dimensional reconstruction on the multi-layer two-dimensional image, and other image processing besides the image preprocessing, the image enhancement, and the image contour processing may also be performed.
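As a minimal illustrative sketch of the three optional processing steps (the patent does not prescribe particular algorithms; the OpenCV calls below are one possible choice, and the target size is an assumption):

    import cv2
    import numpy as np

    def optimise_slice(slice_img: np.ndarray) -> np.ndarray:
        # Image preprocessing: bring every slice to a common specification (here 512 x 512 pixels).
        img = cv2.resize(slice_img, (512, 512), interpolation=cv2.INTER_LINEAR)
        # Image enhancement: stretch the contrast so that structures are easier to distinguish.
        img = cv2.equalizeHist(img.astype(np.uint8))
        # Image contour processing: a light median filter removes "burrs" along edges.
        return cv2.medianBlur(img, 3)

    raw_slices = np.random.randint(0, 255, size=(20, 600, 600), dtype=np.uint8)   # placeholder slices
    optimised_slices = [optimise_slice(s) for s in raw_slices]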
In another embodiment, the three-dimensional reconstruction of the medical image to obtain a first three-dimensional image comprises: performing three-dimensional reconstruction on the multi-layer optimized two-dimensional image to acquire the first three-dimensional image.
It should be understood that, after obtaining the optimized multilayer two-dimensional image, the server may perform three-dimensional reconstruction on the optimized multilayer two-dimensional image by using an open-source algorithm in 3D Slicer to obtain the first three-dimensional image. It should be noted that the embodiment of the present invention does not limit the specific algorithm selected for the three-dimensional reconstruction, and another algorithm may be selected for the three-dimensional reconstruction.
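As a rough, library-swapped illustration of the same idea (the patent itself refers to 3D Slicer; the scikit-image marching-cubes call below is only one possible substitute, and the iso-surface level is an assumed value), a surface mesh can be reconstructed from the stacked slices like this:

    import numpy as np
    from skimage import measure

    # Stack of optimised two-dimensional slices forming a volume (placeholder data here).
    volume = np.random.rand(64, 256, 256)

    # Extract an iso-surface; in practice `level` would be chosen from the tissue of interest.
    verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)

    # `verts` (N x 3 vertex coordinates) and `faces` (M x 3 vertex indices) describe
    # the reconstructed first three-dimensional image as a triangle mesh.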
In another embodiment, before transmitting the first three-dimensional image to a virtual reality device to generate a second three-dimensional image in a virtual reality space, the method further comprises: and converting the file format of the first three-dimensional image into the file format of the virtual reality equipment.
It should be noted that, since the file format of the three-dimensional image used by the virtual reality device is different from the file format of the three-dimensional image used by the server, in order to project the second three-dimensional image in the virtual reality device, the file format of the first three-dimensional image may be converted into a file format suitable for the virtual reality device before the first three-dimensional image is sent to the virtual reality device.
In another embodiment, the converting the file format of the first three-dimensional image into the file format of the virtual reality device includes: and encoding the geometric information and the appearance information of the first three-dimensional image to acquire the file format of the virtual reality device.
Specifically, the server may encode the geometric information and the appearance information of the first three-dimensional image by using an STL conversion algorithm to obtain a file format of the virtual reality device, and it should be noted that the embodiment of the present invention does not limit a specific algorithm used for encoding the geometric information and the appearance information, and may select another algorithm for encoding.
It should be understood that the process of encoding the geometric information of the first three-dimensional image may actually be understood as a process of converting a coordinate system of the first three-dimensional image, and specifically, a corresponding relationship between a first spatial coordinate system of the first three-dimensional image at the server end and a three-dimensional spatial coordinate system of a virtual reality space at the virtual reality device end may be established through translation, magnification, reduction, rotation, or the like, so as to form a file format suitable for the virtual reality device, and further, after the first three-dimensional image is sent to the virtual reality device, a second three-dimensional image may be projected in the virtual reality device.
In another embodiment, the file format of the virtual reality device comprises an STL file format; and/or the file format of the first three-dimensional image comprises a BMP file format.
It should be noted that, the file format of the virtual reality device is not limited in the embodiment of the present invention, and may be an STL file format, or may be another file format as long as it is applicable to the virtual reality device; meanwhile, the file format of the first three-dimensional image is not limited, and may be a BMP file format, or may be another file format as long as it is applicable to the client.
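For illustration only: STL stores only triangle geometry (it has no colour or texture fields), so any appearance information would have to be carried separately. With that caveat, a minimal sketch of encoding a triangle mesh, such as the one from the previous sketch, into an ASCII STL file is:

    import numpy as np

    def write_ascii_stl(path: str, verts: np.ndarray, faces: np.ndarray) -> None:
        """Encode a triangle mesh as an ASCII STL file suitable for many VR pipelines."""
        with open(path, "w") as f:
            f.write("solid first_three_dimensional_image\n")
            for tri in faces:
                a, b, c = verts[tri[0]], verts[tri[1]], verts[tri[2]]
                n = np.cross(b - a, c - a)
                n = n / (np.linalg.norm(n) + 1e-12)          # unit facet normal
                f.write(f"  facet normal {n[0]} {n[1]} {n[2]}\n    outer loop\n")
                for v in (a, b, c):
                    f.write(f"      vertex {v[0]} {v[1]} {v[2]}\n")
                f.write("    endloop\n  endfacet\n")
            f.write("endsolid first_three_dimensional_image\n")

    # Example usage with the mesh produced by the marching-cubes sketch above:
    # write_ascii_stl("first_image.stl", verts, faces)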
Fig. 3 is a flowchart illustrating a marking method for medical images according to still another embodiment of the present invention. The method of fig. 3 is performed by a computing device (e.g., a virtual reality all-in-one machine in a virtual reality device). As shown in fig. 3, the marking method includes:
s301: a first three-dimensional image is acquired to generate a second three-dimensional image in virtual reality space from the first three-dimensional image.
It should be appreciated that the virtual reality device includes a virtual reality all-in-one machine with a built-in processor, which has independent computing capability. The server and the virtual reality all-in-one machine may exchange data through a data line or through a communication network; the embodiment of the present invention does not limit this.
Specifically, after the virtual reality all-in-one machine acquires the first three-dimensional image, the virtual reality all-in-one machine performs calculation processing on the first three-dimensional image, and then can generate a second three-dimensional image in a virtual reality space in the virtual reality device.
It should be noted that the embodiment of the present invention does not limit the specific way in which the first three-dimensional image is acquired: it may be copied directly from an external device such as a USB flash drive, or obtained from a computing device (e.g., a server) through a communication network or a data line. Likewise, the embodiment of the present invention does not limit what kind of calculation processing the virtual reality all-in-one machine performs on the first three-dimensional image, as long as the processing projects the second three-dimensional image in the virtual reality space of the virtual reality device. For example, when the virtual reality device acquires the first three-dimensional image it also receives the coordinates of all points constituting the first three-dimensional image; the virtual reality all-in-one machine can then calculate the coordinates of all points constituting the second three-dimensional image from them, compute the projection of the second three-dimensional image onto the virtual reality glasses for different angles and directions of the glasses, and update that projection in real time, so that the user can view the second three-dimensional image by changing the angle and direction of the virtual reality glasses.
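A bare-bones illustration of that projection idea (a generic pinhole-camera sketch with assumed focal length and image centre, not the device's actual rendering code) is:

    import numpy as np

    def project_points(points: np.ndarray, rotation: np.ndarray, translation: np.ndarray,
                       focal: float = 800.0, centre=(640.0, 360.0)) -> np.ndarray:
        """Project 3D points of the second image onto a 2D display for one pose of the glasses."""
        cam = points @ rotation.T + translation          # world -> glasses (camera) coordinates
        x = focal * cam[:, 0] / cam[:, 2] + centre[0]    # perspective projection
        y = focal * cam[:, 1] / cam[:, 2] + centre[1]
        return np.stack([x, y], axis=1)

    # As the user turns their head, `rotation` and `translation` change and the projection is recomputed.
    mesh_points = np.random.rand(500, 3) + np.array([0.0, 0.0, 2.0])   # points in front of the viewer
    screen_coords = project_points(mesh_points, rotation=np.eye(3), translation=np.zeros(3))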
S302: and acquiring position information of a mark point in a real space generated after the user marks the region of interest of the second three-dimensional image.
It should be understood that real space refers to the space in which the user is located, such as the doctor's office. When the user marks the region of interest of the second three-dimensional image, position information of the mark point is correspondingly generated in the real space, and the position information may refer to coordinate information of the mark point in a coordinate system of the real space.
S303: and converting the coordinate system of the real space and the coordinate system of the virtual reality space to generate a third three-dimensional image with the mark points in the virtual reality space according to the position information of the mark points and the second three-dimensional image.
Specifically, the virtual reality all-in-one machine converts a coordinate system of a real space where a user is located and a coordinate system of a virtual reality space, so that a corresponding relation between the coordinate system of the real space and the coordinate system of the virtual reality space is formed, and a third three-dimensional image with a mark point is generated in the coordinate system of the virtual reality space according to the position information of the mark point in the real space, the second three-dimensional image and the corresponding relation.
It should be understood that a third three-dimensional image with the marker point may be regenerated through coordinate system conversion, or the second three-dimensional image may be marked by coordinate system conversion and the position information of the marker point in the coordinate system of real space, so that the second three-dimensional image has the marker point, which is not limited by the embodiment of the present invention.
It should be noted that the third three-dimensional image refers to the third three-dimensional image mentioned in the above embodiments, and details are not repeated here.
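As an illustrative sketch of the conversion in step S303 (the rotation, translation and scale below are stand-in values; in practice they would come from calibration of the device), the mark point's real-space position can be carried into the virtual-reality coordinate system and attached to the second three-dimensional image:

    import numpy as np

    def real_to_virtual(p_real: np.ndarray, rotation: np.ndarray,
                        translation: np.ndarray, scale: float = 1.0) -> np.ndarray:
        """Map a point from the real-space coordinate system into the virtual-reality space."""
        return scale * (rotation @ p_real) + translation

    def attach_mark(mesh_verts: np.ndarray, p_virtual: np.ndarray) -> int:
        """Mark the vertex of the second image that is closest to the converted mark-point position."""
        distances = np.linalg.norm(mesh_verts - p_virtual, axis=1)
        return int(np.argmin(distances))                 # index of the newly marked point

    p_handle = np.array([0.31, 1.22, 0.87])              # handle position reported by the sensors
    p_vr = real_to_virtual(p_handle, rotation=np.eye(3), translation=np.array([0.0, 1.0, 0.0]))
    marked_vertex = attach_mark(np.random.rand(2000, 3), p_vr)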
S304: and transmitting the third three-dimensional image.
Specifically, after the virtual reality all-in-one machine generates the third three-dimensional image, the virtual reality all-in-one machine may send the third three-dimensional image to another computing device (e.g., a server), so that the computing device (e.g., the server) performs the medical image marking method mentioned in the above embodiment.
In another embodiment, the acquiring a first three-dimensional image to generate a second three-dimensional image in virtual reality space from the first three-dimensional image includes: and receiving a first three-dimensional image generated after the medical image sent by the server is subjected to three-dimensional reconstruction, so as to generate a second three-dimensional image in the virtual reality space according to the first three-dimensional image.
Specifically, the server may perform three-dimensional reconstruction on the medical image according to the three-dimensional reconstruction method mentioned in the above embodiment to generate a first three-dimensional image, and may then send the first three-dimensional image to the virtual reality all-in-one machine through a communication network or a data line. After receiving the first three-dimensional image, the virtual reality all-in-one machine performs calculation processing on the first three-dimensional image and may generate a second three-dimensional image in the virtual reality space in the virtual reality device.
In another embodiment, said transmitting said third three-dimensional image comprises: and sending the third three-dimensional image to the server.
Specifically, after the virtual reality all-in-one machine receives the first three-dimensional image sent by the server, the virtual reality all-in-one machine may further send a third three-dimensional image obtained by processing the first three-dimensional image to the server, so that the server implements the medical image marking method mentioned in the above embodiment.
In another embodiment, the acquiring the position information of the mark point in the real space generated after the user marks the region of interest of the second three-dimensional image includes: receiving position information of a marking point in the real space, which is generated after the user marks the region of interest of the second three-dimensional image through wearable equipment and is monitored by at least one sensor located in the real space.
It should be understood that the virtual reality device further includes at least one sensor and a wearable device (e.g., virtual reality glasses and a handle). The at least one sensor is arranged in the real space (e.g., the doctor's office), and the handle is held by the medical staff. After the medical staff observes a focal point in the region of interest through the glasses, the medical staff places the handle at that focal point within the field of view of the glasses and presses a confirmation button on the handle; the at least one sensor monitors the position of the handle in the real space when the button is pressed, so that the at least one sensor can acquire the position information of the mark point in the real space.
Marking device for exemplary medical images
Fig. 4 is a block diagram illustrating a medical image marking apparatus according to an embodiment of the present invention. The medical image comprises a multi-layer two-dimensional image, and as shown in fig. 4, the marking device of the medical image comprises:
a three-dimensional reconstruction module 410 configured to perform a three-dimensional reconstruction of the multi-layered two-dimensional image to obtain a first three-dimensional image.
It should be understood that the medical image may be a CT tomographic image; a CT tomographic image is formed by stacking a plurality of two-dimensional images, has three-dimensional characteristics, and is an important means and basis for determining a lesion region. The embodiment of the present invention does not limit the specific type of the medical image. The first three-dimensional image is the representation obtained after the server performs three-dimensional reconstruction on the medical image; the word "first" is used only to distinguish three-dimensional images in different spatial coordinate systems. The first three-dimensional image is a three-dimensional image in a first spatial coordinate system at the server side, the multilayer two-dimensional image is a two-dimensional image in a fourth spatial coordinate system at the server side, and the first and fourth spatial coordinate systems are both spatial coordinate systems at the server side used for displaying different images.
It should also be understood that three-dimensional reconstruction refers to creating a mathematical model of a three-dimensional object that is suitable for computer representation and processing; it is the basis for processing, manipulating and analysing the properties of three-dimensional objects in a computer environment, and is also a key technology for creating virtual reality that expresses the objective world in a computer. The three-dimensional reconstruction process can in fact be understood as performing coordinate-system conversion on the multilayer two-dimensional image. Coordinate-system conversion is the process of converting the position of a spatial entity from one coordinate system to another; specifically, a one-to-one correspondence between two coordinate systems can be established through translation, enlargement, reduction, rotation or other transformations, that is, a one-to-one correspondence is established between the fourth spatial coordinate system of the multilayer two-dimensional image and the first spatial coordinate system of the first three-dimensional image, so that the multilayer two-dimensional image is reconstructed into the first three-dimensional image.
It should be noted that before the medical image is three-dimensionally reconstructed, the medical image, i.e., the multi-layer two-dimensional image, may be acquired by the image acquisition module.
A first image sending module 420 configured to send the first three-dimensional image to a virtual reality device to generate a second three-dimensional image in a virtual reality space in the virtual reality device from the first three-dimensional image.
It should be understood that the second three-dimensional image is the three-dimensional image formed by the virtual reality device in the virtual reality space from the first three-dimensional image after the first image sending module 420 sends the first three-dimensional image to the virtual reality device; the word "second" is used only to distinguish three-dimensional images in different spatial coordinate systems. The second three-dimensional image is a three-dimensional image in a second spatial coordinate system in the virtual reality space at the virtual reality device side.
Specifically, after the first image sending module 420 sends the first three-dimensional image to the virtual reality device, the virtual reality device may project a second three-dimensional image in the virtual reality space according to the first three-dimensional image, and the medical staff may view the second three-dimensional image through the virtual reality device.
An image receiving module 430 configured to receive a third three-dimensional image with a marked point, which is sent by the virtual reality device and generated after the user marks the region of interest of the second three-dimensional image.
It should be understood that the third three-dimensional image is the three-dimensional image with mark points that is generated after the second three-dimensional image is marked in the virtual reality device; the word "third" is used only to distinguish three-dimensional images in different spatial coordinate systems. The third three-dimensional image is a three-dimensional image in a third spatial coordinate system in the virtual reality space at the virtual reality device side. The third three-dimensional image differs from the second three-dimensional image only in that it carries mark points; the second and third spatial coordinate systems are both spatial coordinate systems at the virtual reality device side used for displaying different images.
Specifically, the user may mark the second three-dimensional image in the virtual reality space through the wearable device in the virtual reality device, generate a third three-dimensional image with a mark point, and the virtual reality device sends the third three-dimensional image to the image receiving module 430.
It should be understood that the virtual reality device sending the third three-dimensional image to the image receiving module 430 may also be understood as sending geometric information and appearance information of the third three-dimensional image to the image receiving module 430, and the geometric information may include position information of unmarked points and position information of marked points among all the points constituting the third three-dimensional image, and the like.
A first conversion module 440 configured to perform coordinate system conversion on the third three-dimensional image to generate a multi-layer two-dimensional image with the marked points.
Specifically, after the image receiving module 430 receives the third three-dimensional image sent by the virtual reality device, the first converting module 440 may perform coordinate system conversion on the third three-dimensional image, so as to generate a multi-layer two-dimensional image with the mark points.
It should be understood that the third three-dimensional image may be translated, enlarged, reduced or rotated to establish a correspondence between the third spatial coordinate system of the third three-dimensional image and the fourth spatial coordinate system of the multilayer two-dimensional image, so as to generate the multilayer two-dimensional image with the mark points, in which the region of interest has been marked. Apart from the mark points, this multilayer two-dimensional image is practically no different from the multilayer two-dimensional image on which the three-dimensional reconstruction was initially performed. However, the embodiment of the present invention is not limited to a specific transformation form of the coordinate-system conversion; transformation forms other than translation, enlargement, reduction or rotation may also be used.
It should be noted that, the process of performing coordinate system conversion on the third three-dimensional image to generate the multilayer two-dimensional image with the mark point may be to directly perform coordinate system conversion on the third three-dimensional image to generate the multilayer two-dimensional image with the mark point, or may first perform coordinate system conversion on the third three-dimensional image to generate a transitional three-dimensional image, and then perform coordinate system conversion on the transitional three-dimensional image to generate the multilayer two-dimensional image with the mark point, which is not limited in the embodiment of the present invention. Meanwhile, the number of the transitional three-dimensional images is not limited in the embodiment of the invention, and one transitional three-dimensional image or more transitional three-dimensional images can be generated through the transformation of the coordinate system.
In this way, medical staff can mark the region of interest of the medical image by marking the second three-dimensional image in the virtual reality device, which reduces the missed-diagnosis rate and improves the efficiency with which medical staff examine medical images.
Fig. 5 is a block diagram of a medical image marking apparatus according to another embodiment of the present invention. As shown in fig. 5, the marking device for medical images includes:
an image acquisition module 510 configured to acquire a first three-dimensional image to generate a second three-dimensional image in a virtual reality space from the first three-dimensional image;
a position information obtaining module 520, configured to obtain position information of a mark point in a real space generated after the user marks the region of interest of the second three-dimensional image;
a second conversion module 530 configured to perform coordinate system conversion between the coordinate system of the real space and the coordinate system of the virtual reality space, so as to generate, from the position information of the marked point and the second three-dimensional image, a third three-dimensional image with the marked point in the virtual reality space;
a second image sending module 540 configured to send the third three-dimensional image.
It should be understood that the operations and functions of the image obtaining module 510, the position information obtaining module 520, the second converting module 530 and the second image sending module 540 in the medical image marking apparatus provided in fig. 5 may refer to the above-mentioned medical image marking method provided in fig. 3, and are not described herein again to avoid repetition.
In another embodiment, the image acquisition module 510 is configured to: receive the first three-dimensional image that the server generates by performing three-dimensional reconstruction on the medical image and then sends, so as to generate a second three-dimensional image in the virtual reality space according to the first three-dimensional image.
Specifically, the server may perform three-dimensional reconstruction on the medical image according to the three-dimensional reconstruction method mentioned in the above embodiment to generate a first three-dimensional image, and may then send the first three-dimensional image to the image acquisition module 510 through a communication network or a data line. After receiving the first three-dimensional image, the image acquisition module 510 performs calculation processing on it and may generate a second three-dimensional image in the virtual reality space of the virtual reality device.
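For orientation only, the sketch below shows the simplest form such a server-side three-dimensional reconstruction could take: stacking the layers of the two-dimensional image into a volume and recording the physical voxel size. Real pipelines would read spacing from the medical image metadata and may apply surface or volume rendering; the function name and parameters here are assumptions, not the patent's implementation.

```python
import numpy as np

def reconstruct_volume(slices, pixel_spacing=(1.0, 1.0), slice_thickness=1.0):
    """Stack equally sized 2D layers into a 3D volume and report its physical voxel size.

    slices: list of 2D arrays (all the same shape), ordered along the scanning axis.
    Returns (volume, voxel_size), where volume has shape (num_slices, H, W).
    """
    volume = np.stack(slices, axis=0).astype(np.float32)
    voxel_size = (slice_thickness, pixel_spacing[0], pixel_spacing[1])
    return volume, voxel_size

# Usage: 20 synthetic 64x64 layers become one 20x64x64 volume
layers = [np.random.rand(64, 64) for _ in range(20)]
volume, voxel_size = reconstruct_volume(layers, pixel_spacing=(0.7, 0.7), slice_thickness=1.25)
print(volume.shape, voxel_size)
```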
In another embodiment, the second image sending module 540 is configured to send the third three-dimensional image to the server.
Exemplary entity system
Fig. 6 is a block diagram illustrating a medical image marking system according to an embodiment of the present invention. As shown in fig. 6, the marking system of the medical image includes:
a server 61 for executing the marking method of the medical image mentioned in the above embodiment; and a virtual reality device 62 connected to the server 61.
It should be understood that the server 61 is connected to the display 63 of a computer. After the server 61 and the virtual reality device 62 have completed the detection of the medical image by the medical image marking method mentioned in the above embodiments, the medical image 631 with the marked points is displayed on the display 63 for the medical staff to use.
In another embodiment, as shown in fig. 6, the virtual reality device 62 includes: a virtual reality all-in-one machine 621 configured to perform the marking method of the medical image mentioned in the above embodiment; virtual reality glasses 622 connected to the virtual reality all-in-one machine 621 and configured to observe a second three-dimensional image 625 in the virtual reality space; at least one sensor 623 connected to the virtual reality all-in-one machine 621 and configured to monitor position information of a marked point in a real space generated after the user marks the region of interest of the second three-dimensional image 625; and a wearable device 624 in communication connection with the virtual reality all-in-one machine 621 and the at least one sensor 623 and configured to mark the region of interest of the second three-dimensional image 625, where the wearable device 624 is a handle or a glove.
It should be noted that the embodiment of the present invention does not limit the number of the at least one sensor 623, as long as the number of sensors 623 is sufficient to acquire the fourth position information accurately and in real time; preferably, the number of the at least one sensor 623 is three. Likewise, the embodiment of the present invention does not limit the specific type of the sensor 623, which may be an infrared sensor or another type of sensor.
It should be appreciated that after the wearable device 624 marks the region of interest of the second three-dimensional image 625, the sensor 623 (for example, an infrared sensor) may monitor the position of the wearable device 624, thereby obtaining the position information of the marked point.
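As a hedged sketch of how a monitored real-space position could be turned into a mark point in the virtual reality space (the calibration matrix, function name, and numbers are illustrative assumptions, not the patent's method), the code below applies a pre-calibrated rigid transform, i.e. a rotation, a translation, and an optional scale, to the wearable-device position reported by the sensors. This is the kind of conversion described for the second conversion module 530 above.

```python
import numpy as np

def real_to_virtual(point_real, rotation, translation, scale=1.0):
    """Map a marked point from the real-space coordinate system into the VR-space coordinate system.

    point_real:  (3,) position of the wearable device reported by the sensors.
    rotation:    (3, 3) rotation part of the calibrated real-to-virtual rigid transform.
    translation: (3,) translation part of the same transform.
    scale:       optional uniform scale between the two spaces.
    """
    point_real = np.asarray(point_real, dtype=float)
    return scale * (rotation @ point_real) + np.asarray(translation, dtype=float)

# Usage: sensors report the handle at (0.42, 1.10, 0.35) m; calibration is a 90-degree yaw plus an offset
yaw_90 = np.array([[0.0, -1.0, 0.0],
                   [1.0,  0.0, 0.0],
                   [0.0,  0.0, 1.0]])
marker_vr = real_to_virtual((0.42, 1.10, 0.35), yaw_90, translation=(0.0, 0.0, 1.5))
print(marker_vr)  # position at which the mark point is placed on the second three-dimensional image
```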
Specifically, referring to the data flow shown by the solid arrows in fig. 6, the marking system for medical images in this embodiment detects a medical image as follows: the server 61 generates the first three-dimensional image 632 from the medical image 631 and sends the first three-dimensional image 632 to the virtual reality all-in-one machine 621; after receiving the first three-dimensional image 632, the virtual reality all-in-one machine 621 performs calculation processing on it to generate a second three-dimensional image 625 in the virtual reality space; the medical staff observes the second three-dimensional image 625 through the virtual reality glasses 622 and marks a region of interest of the second three-dimensional image 625 with the wearable device 624 (for example, by pressing a confirmation button on the wearable device 624); the at least one sensor 623 detects the position of the wearable device 624, records the position information of the marked point, and sends the position information to the virtual reality all-in-one machine 621; after receiving the position information, the virtual reality all-in-one machine 621 generates a third three-dimensional image (not shown in fig. 6) with the marked point through coordinate system conversion; the virtual reality all-in-one machine 621 sends the third three-dimensional image to the server 61; and after the server 61 performs coordinate system conversion on the third three-dimensional image, a medical image 631 with marked points is obtained and is finally displayed on the display 63 for the medical staff to use.
It should be understood that the medical image 631 and the second three-dimensional image 625 are the images that the medical staff views directly, whereas the third three-dimensional image is an invisible transition image generated by the virtual reality all-in-one machine 621 and the server 61 during the processing.
Exemplary electronic device
Fig. 7 is a block diagram illustrating an electronic device according to an embodiment of the present application. As shown in fig. 7, the electronic device 70 includes: one or more processors 710 and a memory 720; and computer program instructions stored in the memory 720 which, when executed by the processor 710, cause the processor 710 to perform the method of marking medical images of any of the embodiments described above.
The processor 710 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device to perform desired functions.
The memory 720 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 710 to implement the steps of the medical image marking method of the various embodiments of the present application described above and/or other desired functions. Information such as light intensity, compensation light intensity, and the position of the filter may also be stored in the computer-readable storage medium.
In one example, the electronic device 70 may further include: an input device 730 and an output device 740, which are interconnected by a bus system and/or other form of connection mechanism (not shown in fig. 7).
For example, when the electronic device is a robot in an industrial production line, the input device 730 can be a camera for capturing the position of the part to be processed. When the electronic device is a stand-alone device, the input means 730 may be a communication network connector for receiving the collected input signal from an external mobile device. The input device 730 may also include, for example, a keyboard, a mouse, a microphone, and so forth.
The output device 740 may output various information to the outside, and may include, for example, a display, a speaker, a printer, and a communication network and a remote output apparatus connected thereto.
Of course, for the sake of simplicity, only some of the components of the electronic apparatus 70 relevant to the present application are shown in fig. 7, and components such as a bus, an input device/output interface, and the like are omitted. In addition, the electronic device 70 may include any other suitable components, depending on the particular application.
In addition to the above-described methods and apparatuses, embodiments of the present application may also be a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform the steps of the method of marking a medical image as described in any of the above-described embodiments.
Program code for carrying out operations of embodiments of the present application may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present application may also be a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform the steps in the method for labeling medical images according to various embodiments of the present application described in the section "method for labeling medical images" mentioned above in this specification.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present application in conjunction with specific embodiments. It should be noted, however, that the advantages, effects, and the like mentioned in the present application are merely examples, are not limiting, and should not be considered essential to the various embodiments of the present application. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description only; it is not intended to be exhaustive or to limit the disclosure to the precise details disclosed.
The block diagrams of devices, apparatuses, and systems referred to in this application are given only as illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. These devices, apparatuses, and systems may be connected, arranged, or configured in any manner, as will be appreciated by those skilled in the art. Words such as "including," "comprising," and "having" are open-ended words that mean "including, but not limited to," and are used interchangeably therewith. The word "or" as used herein means, and is used interchangeably with, the word "and/or," unless the context clearly dictates otherwise. The word "such as" is used herein to mean, and is used interchangeably with, the phrase "such as but not limited to".
It should also be noted that in the devices, apparatuses, and methods of the present application, the components or steps may be decomposed and/or recombined. These decompositions and/or recombinations are to be considered as equivalents of the present application.
It should be understood that the terms such as first, second, etc. used in the embodiments of the present invention are only used for clearly describing the technical solutions of the embodiments of the present invention, and are not used to limit the protection scope of the present invention.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit embodiments of the application to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modifications, equivalents and the like that are within the spirit and principle of the present application should be included in the scope of the present application.

Claims (13)

1. A method of marking a medical image, the medical image comprising a multi-layer two-dimensional image, the method comprising:
performing three-dimensional reconstruction on the multi-layer two-dimensional image to acquire a first three-dimensional image;
sending the first three-dimensional image to a virtual reality device to generate a second three-dimensional image in a virtual reality space according to the first three-dimensional image in the virtual reality device;
receiving a third three-dimensional image with a mark point, sent by the virtual reality device and generated after the user marks the region of interest of the second three-dimensional image; and
converting a coordinate system of the third three-dimensional image to generate a multi-layer two-dimensional image with the mark points, wherein the multi-layer two-dimensional image with the mark points is the multi-layer two-dimensional image with the marked region of interest.
2. The marking method according to claim 1, wherein the converting a coordinate system of the third three-dimensional image to generate a multi-layer two-dimensional image with the mark points comprises:
converting the coordinate system of the third three-dimensional image to generate a fourth three-dimensional image with the mark points; and
reconstructing the fourth three-dimensional image into the multi-layer two-dimensional image with the mark points.
3. The marking method of claim 1, further comprising, prior to performing three-dimensional reconstruction on the medical image to obtain the first three-dimensional image:
performing at least one of image preprocessing, image enhancement, and image contour processing on the multi-layer two-dimensional image to obtain a multi-layer optimized two-dimensional image,
wherein the three-dimensional reconstruction of the medical image to obtain the first three-dimensional image comprises:
performing three-dimensional reconstruction on the multi-layer optimized two-dimensional image to acquire the first three-dimensional image.
4. The marking method of claim 1, further comprising, prior to sending the first three-dimensional image to the virtual reality device to generate the second three-dimensional image in the virtual reality space:
converting the file format of the first three-dimensional image into the file format of the virtual reality device.
5. The marking method of claim 4, wherein the converting the file format of the first three-dimensional image into the file format of the virtual reality device comprises:
encoding the geometric information and appearance information of the first three-dimensional image to obtain a file format of the virtual reality device,
wherein the file format of the virtual reality device comprises an STL file format; and/or the file format of the first three-dimensional image comprises a BMP file format.
6. A method of marking a medical image, comprising:
acquiring a first three-dimensional image to generate a second three-dimensional image in a virtual reality space according to the first three-dimensional image;
acquiring position information of a mark point in a real space generated after a user marks the region of interest of the second three-dimensional image;
converting between a coordinate system of the real space and a coordinate system of the virtual reality space to generate a third three-dimensional image with the mark points in the virtual reality space according to the position information of the mark points and the second three-dimensional image; and
sending the third three-dimensional image,
wherein the acquiring a first three-dimensional image to generate a second three-dimensional image in a virtual reality space from the first three-dimensional image comprises:
receiving a first three-dimensional image generated after three-dimensional reconstruction of a plurality of layers of two-dimensional images in the medical images sent by a server so as to generate a second three-dimensional image in a virtual reality space according to the first three-dimensional image,
wherein the sending the third three-dimensional image comprises:
sending the third three-dimensional image to the server so that the server converts the coordinate system of the third three-dimensional image to generate a multi-layer two-dimensional image with the mark points.
7. The marking method according to claim 6, wherein the acquiring the position information of the mark point in the real space generated after the user marks the region of interest of the second three-dimensional image comprises:
receiving position information of the mark point in the real space, monitored by at least one sensor located in the real space, the mark point being generated after the user marks the region of interest of the second three-dimensional image through a wearable device.
8. A marking apparatus for a medical image, the medical image including a multi-layer two-dimensional image, the apparatus comprising:
a three-dimensional reconstruction module configured to perform three-dimensional reconstruction on the multi-layer two-dimensional image to obtain a first three-dimensional image;
a first image sending module configured to send the first three-dimensional image to a virtual reality device to generate a second three-dimensional image in a virtual reality space in the virtual reality device from the first three-dimensional image;
an image receiving module configured to receive a third three-dimensional image with a mark point, sent by the virtual reality device and generated after the user marks the region of interest of the second three-dimensional image; and
a first conversion module configured to perform coordinate system conversion on the third three-dimensional image to generate a multi-layer two-dimensional image with the mark point, wherein the multi-layer two-dimensional image with the mark point is the multi-layer two-dimensional image with the region of interest already marked.
9. A medical image marking apparatus, comprising:
an image acquisition module configured to acquire a first three-dimensional image so as to generate a second three-dimensional image in a virtual reality space according to the first three-dimensional image;
a position information acquisition module configured to acquire position information of a mark point in a real space generated after a user marks the region of interest of the second three-dimensional image;
a second conversion module configured to perform coordinate system conversion between the coordinate system of the real space and the coordinate system of the virtual reality space to generate a third three-dimensional image with the mark point in the virtual reality space according to the position information of the mark point and the second three-dimensional image; and
a second image sending module configured to send the third three-dimensional image,
wherein the image acquisition module is configured to:
receive a first three-dimensional image, sent by a server and generated by three-dimensional reconstruction of a plurality of layers of two-dimensional images in a medical image, so as to generate the second three-dimensional image in the virtual reality space according to the first three-dimensional image; and
the second image sending module is configured to: send the third three-dimensional image to the server so that the server converts the coordinate system of the third three-dimensional image to generate a multi-layer two-dimensional image with the mark points.
10. A medical image marking system, comprising:
a server for performing the marking method of the medical image according to any one of claims 1 to 5; and
a virtual reality device connected to the server.
11. The marking system of claim 10, wherein the virtual reality device comprises:
a virtual reality all-in-one machine for performing the medical image marking method according to claim 6 or 7;
virtual reality glasses connected to the virtual reality all-in-one machine and used for observing a second three-dimensional image in a virtual reality space;
at least one sensor connected to the virtual reality all-in-one machine and used for monitoring position information of a mark point in a real space generated after a user marks the region of interest of the second three-dimensional image; and
a wearable device in communication connection with the virtual reality all-in-one machine and the at least one sensor and used for marking the region of interest of the second three-dimensional image, wherein the wearable device is a glove or a handle.
12. A computer-readable storage medium storing a computer program for executing the method of labeling a medical image according to any one of claims 1 to 5 or the method of labeling a medical image according to claim 6 or 7.
13. An electronic device, comprising:
a processor for performing a method of marking a medical image according to any one of claims 1 to 5 or for performing a method of marking a medical image according to claim 6 or 7; and
a memory for storing the processor-executable instructions.
CN201911096645.9A 2019-11-11 2019-11-11 Marking method, device and system for medical image Active CN110786877B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911096645.9A CN110786877B (en) 2019-11-11 2019-11-11 Marking method, device and system for medical image

Publications (2)

Publication Number Publication Date
CN110786877A CN110786877A (en) 2020-02-14
CN110786877B true CN110786877B (en) 2020-08-25

Family

ID=69443926

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911096645.9A Active CN110786877B (en) 2019-11-11 2019-11-11 Marking method, device and system for medical image

Country Status (1)

Country Link
CN (1) CN110786877B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114581635B (en) * 2022-03-03 2023-03-24 上海涞秋医疗科技有限责任公司 Positioning method and system based on HoloLens glasses

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102908142A (en) * 2011-08-04 2013-02-06 上海联影医疗科技有限公司 Three-dimensional graphical lamina positioning method in magnetic resonance imaging and magnetic resonance imaging system
CN106097325A (en) * 2016-06-06 2016-11-09 厦门铭微科技有限公司 The instruction of a kind of location based on three-dimensional reconstruction image generates method and device
CN107452074A (en) * 2017-07-31 2017-12-08 上海联影医疗科技有限公司 A kind of image processing method and system
CN107481326A (en) * 2017-08-25 2017-12-15 上海嘉奥信息科技发展有限公司 A kind of anatomical structure VR display methods rendered based on CT images body

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8638989B2 (en) * 2012-01-17 2014-01-28 Leap Motion, Inc. Systems and methods for capturing motion in three-dimensional space

Also Published As

Publication number Publication date
CN110786877A (en) 2020-02-14

Similar Documents

Publication Publication Date Title
US20230289664A1 (en) System for monitoring object inserted into patient's body via image processing
US9659409B2 (en) Providing a spatial anatomical model of a body part of a patient
Gallo et al. 3D interaction with volumetric medical data: experiencing the Wiimote
El‐Hariri et al. Augmented reality visualisation for orthopaedic surgical guidance with pre‐and intra‐operative multimodal image data fusion
EP2372660A2 (en) Projection image generation apparatus and method, and computer readable recording medium on which is recorded program for the same
US20110262015A1 (en) Image processing apparatus, image processing method, and storage medium
CN109616197A (en) Tooth data processing method, device, electronic equipment and computer-readable medium
JP6127032B2 (en) Radiation image capturing system, image processing apparatus, image processing method, and image processing program
JP6360495B2 (en) Method for reducing data transmission volume in tomosynthesis
JP5274894B2 (en) Image display device
CN110931121A (en) Remote operation guiding device based on Hololens and operation method
JP6480922B2 (en) Visualization of volumetric image data
Macedo et al. A semi-automatic markerless augmented reality approach for on-patient volumetric medical data visualization
US20220346888A1 (en) Device and system for multidimensional data visualization and interaction in an augmented reality virtual reality or mixed reality environment
Abou El-Seoud et al. An interactive mixed reality ray tracing rendering mobile application of medical data in minimally invasive surgeries
CN110786877B (en) Marking method, device and system for medical image
JP6738631B2 (en) Medical image processing apparatus, medical image processing method, and medical image processing program
Doughty et al. HMD-EgoPose: Head-mounted display-based egocentric marker-less tool and hand pose estimation for augmented surgical guidance
JP7154098B2 (en) Medical image viewing device, medical image processing device, and medical image diagnostic device
EP3195272B1 (en) Device for visualizing a 3d object
US20230054394A1 (en) Device and system for multidimensional data visualization and interaction in an augmented reality virtual reality or mixed reality image guided surgery
US11443497B2 (en) Medical image processing apparatus, medical image processing system, medical image processing method, and recording medium
JP2014188095A (en) Remote diagnosis system
EP4069129A1 (en) Augmented reality display of surgical imaging
Marsh et al. VR in medicine: virtual colonoscopy

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: Room B401, floor 4, building 1, No. 12, Shangdi Information Road, Haidian District, Beijing 100085
Patentee after: Tuxiang Medical Technology Co., Ltd
Address before: Room B401, floor 4, building 1, No. 12, Shangdi Information Road, Haidian District, Beijing 100085
Patentee before: Beijing Tuoxiang Technology Co.,Ltd.