CN111275825B - Positioning result visualization method and device based on virtual intelligent medical platform - Google Patents


Publication number
CN111275825B
CN111275825B (application CN202010038150.7A)
Authority
CN
China
Prior art keywords
target object
dimensional model
positioning
real
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010038150.7A
Other languages
Chinese (zh)
Other versions
CN111275825A (en)
Inventor
于金明
卢洁
王琳琳
钱俊超
张凯
李彦飞
Current Assignee
Shanghai United Imaging Healthcare Co Ltd
Original Assignee
Shanghai United Imaging Healthcare Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai United Imaging Healthcare Co Ltd filed Critical Shanghai United Imaging Healthcare Co Ltd
Priority: CN202010038150.7A
Publication of CN111275825A
Application granted
Publication of CN111275825B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/006: Mixed reality
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H15/00: ICT specially adapted for medical reports, e.g. generation or transmission thereof
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H80/00: ICT specially adapted for facilitating communication between medical practitioners or patients, e.g. for collaborative diagnosis, therapy or health monitoring

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Epidemiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Computer Hardware Design (AREA)
  • Computer Graphics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Pathology (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Radiation-Therapy Devices (AREA)

Abstract

The disclosure relates to a positioning result visualization method and device based on a virtual intelligent medical platform. The method comprises: obtaining a three-dimensional visualized virtual image according to target object data; performing virtual-real registration between the three-dimensional model related to the target object in the virtual image and the real positioning scene to obtain a registration result; combining the three-dimensional model of the accelerator beam in the virtual image with the registration result and rendering to obtain a positioning result; and displaying the positioning result during the radiotherapy positioning process. In the embodiments of the disclosure, mixed reality technology is used to visualize information such as the tumor and the radiation beams, realizing three-dimensional holographic display of medical images, three-dimensional display of the radiotherapy plan, and visual display of the positioning result. The target object can observe the positioning result more intuitively and efficiently, confirm the degree of positioning completion, and reduce positioning errors; meanwhile, the positioning result can be corrected based on the displayed information, improving positioning accuracy.

Description

Positioning result visualization method and device based on virtual intelligent medical platform
Technical Field
The disclosure relates to the technical field of computer vision, in particular to a positioning result visualization method and device based on a virtual intelligent medical platform.
Background
With the development of information technology and electronic technology, traditional ways of receiving and processing information can no longer meet the need of a target object to acquire information efficiently. For example, in the medical field, during positioning for radiation therapy (RT), the patient learns the positioning result only through a technician's verbal description. Because tumors, normal tissues and radiation beams inside the human body are invisible to the naked eye, and most patients have no medical background, patients cannot acquire the positioning result intuitively and efficiently, patients and technicians cannot interact effectively, and positioning efficiency is reduced.
Disclosure of Invention
In view of this, the disclosure provides a positioning result visualization method and device based on a virtual intelligent medical platform.
According to an aspect of the present disclosure, there is provided a positioning result visualization method based on a virtual intelligent medical platform, including:
according to the target object data, obtaining a three-dimensional visual virtual image;
performing virtual-real registration on the three-dimensional model related to the target object in the virtual image and the real positioning scene to obtain a registration result;
combining the three-dimensional model of the accelerator beam in the virtual image with the registration result, and rendering to obtain a positioning result;
and displaying the positioning result in the radiotherapy positioning process.
In one possible implementation manner, the obtaining a three-dimensional visualized virtual image according to the target object data includes:
obtaining target object DICOM RT (Radiotherapy in DICOM) data through a DICOM network;
extracting the target object data according to the DICOM RT data;
establishing an accelerator beam three-dimensional model and a target object related three-dimensional model according to the target object data;
and obtaining the three-dimensional visualized virtual image according to the accelerator beam three-dimensional model and the target object related three-dimensional model.
In one possible implementation manner, the building the accelerator beam three-dimensional model and the target object related three-dimensional model according to the target object data includes:
analyzing the target object data to obtain radiotherapy related data;
establishing corresponding three-dimensional model data according to the radiotherapy related data;
and converting the three-dimensional model data into a specified format to obtain the accelerator beam three-dimensional model and the target object related three-dimensional model.
In one possible implementation manner, the performing virtual-real registration on the three-dimensional model related to the target object in the virtual image and the real positioning scene to obtain a registration result includes:
acquiring a real-time picture of the real positioning scene;
obtaining feature points of the real positioning scene according to the real-time picture;
and matching the three-dimensional model related to the target object in the virtual image to the corresponding position in the real positioning scene according to the feature points, to obtain the registration result.
In one possible implementation, the feature points correspond to position markers that are added to the target object's skin during a computed tomography (Computed Tomography, CT) scan.
In one possible implementation manner, the displaying the positioning result during the radiotherapy positioning process includes:
determining at least one target position according to the position and the view angle of the target object in the real radiotherapy scene;
and displaying the positioning result at the target position through a display device.
In one possible implementation, the target object data includes: basic information of a target object, CT image data, planning information, structure set information and dose information;
the target object-related three-dimensional model includes: a target region three-dimensional model, an ROI three-dimensional model and a dose distribution three-dimensional model.
According to another aspect of the present disclosure, there is provided a positioning result visualization device based on a virtual intelligent medical platform, including:
the virtual image construction module is used for obtaining a three-dimensional visualized virtual image according to the target object data;
the virtual-real registration module is used for carrying out virtual-real registration on the three-dimensional model related to the target object in the virtual image and the real positioning scene to obtain a registration result;
the rendering module is used for combining the three-dimensional model of the accelerator beam in the virtual image and the registration result, and rendering to obtain a positioning result;
and the display module is used for displaying the positioning result in the radiation treatment positioning process.
According to another aspect of the present disclosure, there is provided a positioning result visualization device based on a virtual intelligent medical platform, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to perform the above method.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer program instructions, wherein the computer program instructions, when executed by a processor, implement the above-described method.
In the embodiment of the disclosure, by combining mixed reality technology, information such as the tumor and the radiation beams is visualized, realizing three-dimensional holographic display of medical images, three-dimensional display of the radiotherapy plan, and visual display of the positioning result. The target object can observe the positioning result more intuitively and efficiently, clearly understand the positioning situation, confirm the degree of positioning completion, and reduce positioning errors. Meanwhile, the displayed information can assist communication, improving positioning efficiency. In addition, doctors can correct the positioning result based on the displayed information, thereby improving positioning accuracy.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features and aspects of the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 illustrates a flow chart of a method for visualizing a positioning result based on a virtual intelligent medical platform in accordance with an embodiment of the present disclosure;
FIG. 2 illustrates a device connection schematic for visualization of positioning results according to an embodiment of the present disclosure;
FIG. 3 illustrates a schematic view of a radiotherapy positioning result visualization scenario according to an embodiment of the present disclosure;
FIG. 4 illustrates a block diagram of a positioning result visualization device based on a virtual intelligent medical platform according to an embodiment of the present disclosure;
fig. 5 illustrates a block diagram of an apparatus for virtual intelligent medical platform based positioning result visualization, according to an embodiment of the present disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the disclosure will be described in detail below with reference to the drawings. In the drawings, like reference numbers indicate identical or functionally similar elements. Although various aspects of the embodiments are illustrated in the accompanying drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration". Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
In addition, numerous specific details are set forth in the following detailed description in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements, and circuits well known to those skilled in the art have not been described in detail in order not to obscure the present disclosure.
With changes in the disease spectrum, malignant tumors have become a leading threat to human health; about two-thirds of patients will, at some point during the course of their disease, receive radiation therapy for curative, palliative or other purposes.
In current clinical practice, the positioning result is conveyed to the patient through the technician's verbal description. However, because tumors, normal tissues and radiation beams inside the human body are invisible to the naked eye, and most patients have no medical background, a verbal description cannot make the specific situation clear, nor can it reduce the patient's fear of the tumor and of radiotherapy. As a result, various physical and psychological symptoms and adverse psychological reactions arise during the implementation stage of radiotherapy, seriously affecting the patient's quality of life, treatment compliance, and even the treatment effect.
Therefore, a technical scheme for visualizing the positioning result is provided. By combining mixed reality technology, information such as the tumor and the radiation beams is visualized, realizing three-dimensional holographic display of medical images, three-dimensional display of the radiotherapy plan, and visual display of the positioning result. The patient can observe the positioning result more intuitively and efficiently, clearly understand the positioning situation, confirm the degree of positioning completion, and reduce positioning errors. Meanwhile, the displayed information can assist communication, improving positioning efficiency. In addition, doctors can correct the positioning result based on the displayed information, thereby improving positioning accuracy.
Fig. 1 illustrates a flowchart of a method for visualizing a positioning result based on a virtual intelligent medical platform according to an embodiment of the present disclosure. As shown in fig. 1, the method may include:
step 10, obtaining a three-dimensional visualized virtual image according to target object data;
step 20, performing virtual-real registration on the three-dimensional model related to the target object in the virtual image and the real positioning scene to obtain a registration result;
step 30, combining the three-dimensional model of the accelerator beam in the virtual image and the registration result, and rendering to obtain a positioning result;
and step 40, displaying the positioning result in the radiotherapy positioning process.
The virtual intelligent (Virtual Intelligent, VI) medical platform is a medical platform constructed by combining methods such as artificial intelligence and big data analysis with holographic technologies such as virtual reality, augmented reality and mixed reality. It is used to assist and guide invasive, minimally invasive and non-invasive clinical diagnosis and treatment, and can be applied in fields including, but not limited to, surgery, internal medicine, radiotherapy and interventional departments. The positioning result refers to the result obtained in the radiation therapy positioning process: a doctor first delineates the tumor on images in the planning system, thereby determining the patient's tumor center coordinates; a medical physicist and an operator then align the patient's tumor center with the treatment center (e.g., the isocenter) of the radiotherapy device according to those coordinates.
Therefore, based on the virtual intelligent medical platform, the existing data of the target object in the hospital is analyzed and converted into a three-dimensional visualized virtual image; the virtual image is matched with the real scene through virtual intelligent technology and displayed on a display terminal, realizing three-dimensional holographic display of the medical image, three-dimensional display of the radiotherapy plan and visual display of the positioning result, so that the target object can observe the positioning result more intuitively and efficiently, positioning errors are reduced, and positioning efficiency is improved.
The above-mentioned positioning result visualization scheme based on the virtual intelligent medical platform is illustrated in the following with reference to fig. 2 and 3.
Fig. 2 illustrates a schematic diagram of device connection for positioning result visualization according to an embodiment of the present disclosure, as illustrated in fig. 2, the device for positioning result visualization may include: the system comprises an image acquisition device (namely a camera 01, a camera 02 and a camera 03 in the figure), a display device (namely the display device 01 and the display device 02 in the figure), a processing device PC, a server and an in-hospital information system; fig. 3 shows a schematic view of a radiotherapy positioning result visualization scene according to an embodiment of the present disclosure, as shown in fig. 3, including: image acquisition equipment (namely camera 01, camera 02 and camera 03 in the figure), display equipment (namely a display in the figure), a PC, a server, an in-hospital information system and an accelerator.
In fig. 2 and fig. 3, the image acquisition devices acquire pictures of the real positioning scene in real time and transmit them to the PC in a wired or wireless manner. The PC and the server acquire the target object data through the in-hospital information system, perform positioning result visualization processing on the data (including data extraction, three-dimensional reconstruction and virtual-real registration), and transmit the processing result to the display devices for terminal display. In fig. 2 and fig. 3, the number, installation positions and connection modes of the image acquisition devices, display devices and so on may be set according to actual needs, which is not limited in the present disclosure.
In one possible implementation manner, in step 10, the obtaining a three-dimensional visualized virtual image according to the target object data may include: obtaining DICOM RT data through a DICOM network; extracting the target object data according to the DICOM RT data; establishing an accelerator beam three-dimensional model and a target object related three-dimensional model according to the target object data; and obtaining the three-dimensional visualized virtual image according to the accelerator beam three-dimensional model and the target object related three-dimensional model.
The DICOM RT data is related data acquired from the hospital DICOM network; DICOM is an international standard (ISO 12052) for medical images and related information. DICOM RT data may be acquired through the in-hospital information system and may include CT image data, RT Plan information, RT Structure Set information and RT Dose information; the CT image data is obtained by CT scanning, and the planning information, structure set information and dose information are then generated based on the CT image data. Next, according to information such as the identity of the target object undergoing radiotherapy, the corresponding target object data is extracted from the acquired DICOM RT data; the target object data may include basic information of the target object, CT image data, planning information, structure set information, dose information and other related information. The target object data may then be segmented and modeled to establish a plurality of three-dimensional models, which may include target object-related three-dimensional models, such as a target region three-dimensional model, a region of interest (region of interest, ROI) three-dimensional model and a dose distribution three-dimensional model, as well as an accelerator beam three-dimensional model. Finally, according to the established relative spatial positions among the three-dimensional models, the models may be combined to obtain the three-dimensional visualized virtual image.
For example, the server in fig. 2 or 3 may interface with the hospital's DICOM network and provide a C-STORE network service (backed by a relational database for quick query), so as to receive DICOM RT data such as CT image data, RT Plan information, RT Structure Set information and RT Dose information sent over the DICOM protocol. The DICOM RT data can then be parsed to extract target object data such as the target object's basic information, CT image data, plan information, structure set information and dose information. Further, a plurality of three-dimensional models are established according to the target object data, and finally the three-dimensional visualized virtual image is obtained.
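The extraction step above can be sketched as follows. This is an illustrative outline only: the dict-based datasets, field names such as `PatientID` and `Modality`, and the function `extract_target_object_data` are hypothetical stand-ins for parsed DICOM attributes, not the patent's actual data layout.

```python
# Hypothetical sketch: group received DICOM RT datasets by patient ID
# into one "target object data" record.

def extract_target_object_data(datasets, patient_id):
    """Collect the CT images, RT Plan, RT Structure Set and RT Dose
    records belonging to one target object (patient)."""
    target = {"patient_id": patient_id, "ct_images": [],
              "plan": None, "structure_set": None, "dose": None}
    for ds in datasets:
        if ds.get("PatientID") != patient_id:
            continue  # dataset belongs to another patient
        modality = ds.get("Modality")
        if modality == "CT":
            target["ct_images"].append(ds)
        elif modality == "RTPLAN":
            target["plan"] = ds
        elif modality == "RTSTRUCT":
            target["structure_set"] = ds
        elif modality == "RTDOSE":
            target["dose"] = ds
    return target

# Example: datasets received over a (simulated) C-STORE service.
received = [
    {"PatientID": "P001", "Modality": "CT", "InstanceNumber": 1},
    {"PatientID": "P001", "Modality": "RTSTRUCT"},
    {"PatientID": "P002", "Modality": "CT", "InstanceNumber": 1},
    {"PatientID": "P001", "Modality": "RTDOSE"},
]
data = extract_target_object_data(received, "P001")
print(len(data["ct_images"]), data["dose"] is not None)  # → 1 True
```

In a real deployment the `received` list would be populated by the DICOM network service rather than constructed by hand.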
In one possible implementation manner, the building the accelerator beam three-dimensional model and the target object related three-dimensional model according to the target object data includes: analyzing the target object data to obtain radiotherapy related data; establishing corresponding three-dimensional model data according to the radiotherapy related data; and converting the three-dimensional model data into a specified format to obtain the accelerator beam three-dimensional model and the target object related three-dimensional model.
For example, the extracted target object data may be analyzed to obtain radiotherapy-related data, and a Json file (i.e., a file stored in the Json data format) may be generated as the description data information. The radiotherapy-related data (i.e., the Json file) may be imported into the medical image processing software 3D Slicer, and the target region, the region of interest, the dose distribution, the accelerator beam and the like may be segmented and modeled from the CT image data using the software's Segment Editor and Model Maker modules, driven by the Python language, to obtain a plurality of corresponding three-dimensional models, such as a target region three-dimensional model, a region of interest three-dimensional model, a dose distribution three-dimensional model and an accelerator beam three-dimensional model. Finally, these three-dimensional models are stored as model data files in the OBJ format, and a Json file describing the model data files is generated for subsequent processing. Thus, based on the 3D Slicer software, the modeling of target object data into three-dimensional models can be performed automatically and in batches; meanwhile, through the three-dimensional visualized virtual image obtained by three-dimensional reconstruction of the CT image data, the target object can grasp the positioning situation intuitively and efficiently, make subjective judgments, participate in confirming positioning completion, and reduce errors.
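The OBJ export and Json description mentioned above can be sketched as below. The function `mesh_to_obj` and the Json schema are hypothetical illustrations of the file layouts, assuming a simple triangle mesh; they do not reproduce the patent's actual pipeline.

```python
import json

def mesh_to_obj(vertices, faces, name="target_volume"):
    """Serialize a triangle mesh to Wavefront OBJ text.

    OBJ face indices are 1-based, so the 0-based input indices
    are shifted by one on output."""
    lines = ["o " + name]
    for x, y, z in vertices:
        lines.append("v {:.6f} {:.6f} {:.6f}".format(x, y, z))
    for a, b, c in faces:
        lines.append("f {} {} {}".format(a + 1, b + 1, c + 1))
    return "\n".join(lines) + "\n"

# A single triangle standing in for a reconstructed target-region surface.
obj_text = mesh_to_obj([(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 1, 2)])

# Json file describing the generated model data files (hypothetical schema).
description = json.dumps({"models": [{"name": "target_volume",
                                      "file": "target_volume.obj"}]})
```

In practice the vertex and face arrays would come from the Model Maker output rather than being hard-coded.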
In one possible implementation manner, in step 20, the performing virtual-real registration on the three-dimensional model related to the target object in the virtual image and the real positioning scene to obtain a registration result includes: acquiring a real-time picture of the real positioning scene; obtaining feature points of the real positioning scene according to the real-time picture; and matching the three-dimensional model related to the target object in the virtual image to the corresponding position in the real positioning scene according to the feature points, to obtain the registration result.
In the embodiment of the disclosure, pictures of the real positioning scene can be acquired in real time through one or more image acquisition devices arranged around the real positioning scene. Illustratively, when the number of image acquisition devices is greater than one, the real-time pictures obtained by the devices can be fused, and the registration result is then obtained by performing virtual-real registration between the fused picture and the three-dimensional model related to the target object in the three-dimensional visualized virtual image.
The feature points correspond to position markers added to the target object's skin during the computed tomography (CT) scan. Illustratively, during the CT scan, markers may be added at specific skin locations, such as the middle and both sides of the chest and the middle and both sides of the abdomen, and the markers may be generated in a two-dimensional code form. The markers correspond to the spatial positions of the feature points of the real scene obtained from the real-time picture; meanwhile, the relative position between the virtual image reconstructed from the CT scan data and the markers remains unchanged. Thus, according to the relative relation between the feature points, obtained in real time from the images transmitted by the cameras, and the established three-dimensional visualized virtual image, the target object-related three-dimensional models in the virtual image can be matched into the real-time picture, realizing virtual-real registration.
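One common way to realize such marker-based virtual-real registration is a least-squares rigid transform between matched point sets (the Kabsch algorithm). The sketch below is an assumption about how the matching could be computed, not the patent's disclosed algorithm; `rigid_register` and the example marker coordinates are purely illustrative.

```python
import numpy as np

def rigid_register(model_pts, scene_pts):
    """Kabsch algorithm: find rotation R and translation t minimizing
    ||(R @ p + t) - q|| over matched point pairs (p, q)."""
    P = np.asarray(model_pts, dtype=float)
    Q = np.asarray(scene_pts, dtype=float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # guard against reflection
    t = cq - R @ cp
    return R, t

# Synthetic check: markers in the CT (model) frame vs. the camera frame.
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([10.0, -5.0, 2.0])
model = np.array([[0, 0, 0], [50, 0, 0], [0, 80, 0],
                  [0, 0, 30], [20, 20, 20]], dtype=float)
scene = model @ R_true.T + t_true             # q = R_true @ p + t_true
R_est, t_est = rigid_register(model, scene)
```

With noisy camera detections the same formula gives the least-squares best fit, so a few well-spread, non-coplanar markers are enough in principle.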
For example, as shown in fig. 3, devices such as three cameras and a PC may be used: multi-angle real-time pictures of the real positioning scene are acquired through three cameras installed at different positions and transmitted to the PC, and the PC computes the feature points of the real positioning scene in the pictures. The PC then acquires the patient data information through the background and matches it against the information in the patient's Json file obtained during data extraction; the corresponding three-dimensionally reconstructed model data files are retrieved from the server, and the target object-related three-dimensional models of interest to patients, technicians and doctors, such as the target region and the ROI, are matched to the corresponding positions of the real positioning scene according to the relative relation between the feature points obtained in real time from the camera images and the established three-dimensional visualized virtual image.
In one possible implementation manner, in step 30, the combining the three-dimensional model of the accelerator beam in the virtual image with the registration result and rendering to obtain a positioning result may include: determining the position of the accelerator beam three-dimensional model (portal model) according to the registration result; for example, the position of the accelerator beam three-dimensional model can be determined based on the coincidence of the isocenters of the target object-related three-dimensional model and the accelerator beam three-dimensional model; and then rendering the accelerator beam three-dimensional model together with the registration result to obtain the positioning result. It should be noted that in the embodiment of the present disclosure, there may be one or more accelerator beam three-dimensional models; that is, the positioning result may include accelerator beam three-dimensional models with different angles and different shapes.
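The isocenter-coincidence step can be illustrated with a simple translation. This is a minimal sketch under the assumption that positioning the beam model reduces to shifting its vertices so the isocenters coincide; the function name and all coordinates are hypothetical.

```python
import numpy as np

def align_beam_to_isocenter(beam_vertices, beam_isocenter, target_isocenter):
    """Translate the accelerator-beam model so that its isocenter
    coincides with the isocenter of the registered target model."""
    offset = (np.asarray(target_isocenter, dtype=float)
              - np.asarray(beam_isocenter, dtype=float))
    return np.asarray(beam_vertices, dtype=float) + offset

# Toy beam model along +z, with its isocenter at the end of the beam axis.
beam = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 100.0]])
aligned = align_beam_to_isocenter(beam,
                                  beam_isocenter=[0, 0, 100],
                                  target_isocenter=[5, 5, 5])
# aligned → [[5, 5, -95], [5, 5, 5]]: the beam tip now sits on the target isocenter.
```

A full implementation would also apply gantry/couch rotations per beam; only the translation is shown here.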
In one possible implementation, in step 40, the displaying the positioning result during the radiotherapy positioning process includes: determining at least one target position according to the position and the view angle of the target object in the real radiotherapy scene; and displaying the positioning result at the target position through a display device.
In the embodiment of the disclosure, the number of target positions can be set according to factors such as the position and viewing angle of the target object and the actual environment, so that one or more target positions are obtained and the registration result can be displayed intuitively. By assigning different colors, or different color depths, to different areas, the display device can distinguish each component element in the registration result, so that the target object can grasp the registration result more intuitively and efficiently; this facilitates observation and confirmation of the positioning result by the target object and improves positioning efficiency.
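One way to realize the color-based distinction described above is a fixed element-to-color table consulted at render time. The scheme below, including the element names and RGBA values, is purely a hypothetical illustration rather than the patent's actual color assignment.

```python
# Hypothetical RGBA colors (0-255 channels, last value = opacity) for the
# component elements of the registration result.
REGION_COLORS = {
    "target_volume": (255, 0, 0, 160),    # tumor target region
    "roi_organ":     (0, 255, 0, 120),    # organ-at-risk ROI
    "dose_high":     (255, 165, 0, 100),  # high-dose isodose volume
    "dose_low":      (0, 0, 255, 80),     # low-dose isodose volume
    "beam":          (255, 255, 0, 90),   # accelerator beam
}

def color_for(element, default=(200, 200, 200, 255)):
    """Look up the display color for a registration-result element,
    falling back to neutral gray for unlisted elements."""
    return REGION_COLORS.get(element, default)
```

Varying only the alpha channel per dose level would give the "different color depths" variant mentioned in the text.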
For example, as shown in fig. 3, the virtual-real registration result may be displayed by display devices (projectors, displays, etc.), and multiple display devices may be added at different positions and angles according to the patient's position and viewing angle in the radiotherapy scene. For example, for a patient lying down, the result may be projected, or a display device placed, directly above the patient to facilitate the patient's observation and confirmation of the positioning result. Thus, the target object can observe the registration result intuitively and conveniently, make a subjective judgment of the positioning situation in light of their own condition, and have it corrected; meanwhile, doctors can correct the positioning result by observing the display device, improving positioning accuracy.
It should be noted that, although the above embodiment describes a positioning result visualization method based on a virtual intelligent medical platform as above by way of example, those skilled in the art will understand that the disclosure should not be limited thereto. In fact, the user can flexibly set each implementation mode according to personal preference and/or practical application scene, so long as the technical scheme of the disclosure is met.
Thus, by combining mixed reality technology, information such as the tumor and the beams is visualized, realizing three-dimensional holographic display of medical images, three-dimensional display of the radiotherapy plan, and visual display of the positioning result. The patient can observe the positioning result more intuitively and efficiently, clearly understand the positioning situation, and make subjective judgments so as to participate in positioning and in confirming its completion, thereby reducing positioning errors. At the same time, this assists doctor-patient communication, improves positioning efficiency, relieves the patient's psychological pressure, dispels fear, helps the patient maintain a healthy psychological state and good immune function, and encourages more active cooperation with treatment, reducing treatment errors on the patient's side and positively influencing the treatment of tumor radiotherapy patients. In addition, the doctor can correct the positioning result through the three-dimensional images, improving positioning accuracy.
Fig. 4 illustrates a block diagram of a positioning result visualization device based on a virtual intelligent medical platform according to an embodiment of the present disclosure. As shown in fig. 4, the apparatus may include: the virtual image construction module 41 is configured to obtain a three-dimensional visualized virtual image according to the target object data; the virtual-real registration module 42 is configured to obtain a registration result by performing virtual-real registration on the virtual image and the real positioning scene; a rendering module 43, configured to combine the three-dimensional model of the accelerator beam in the virtual image and the registration result, and render to obtain a positioning result; a display module 44 for displaying the positioning result during the radiotherapy positioning process.
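The four modules of Fig. 4 form a simple sequential pipeline: construct the virtual image, register it against the real scene, render the positioning result, and display it. A minimal sketch (all class and callable names are illustrative; the disclosure does not define a software API) might wire them together as:

```python
# Hypothetical wiring of the four modules of Fig. 4. Each constructor
# argument is a callable standing in for one module.
class PositioningVisualizer:
    def __init__(self, build_virtual_image, register, render, display):
        self.build_virtual_image = build_virtual_image  # module 41
        self.register = register                        # module 42
        self.render = render                            # module 43
        self.display = display                          # module 44

    def run(self, target_object_data, real_scene):
        # 41: three-dimensional visualized virtual image from target data
        virtual_image = self.build_virtual_image(target_object_data)
        # 42: virtual-real registration against the real positioning scene
        registration = self.register(virtual_image, real_scene)
        # 43: combine the beam model with the registration result and render
        positioning = self.render(virtual_image["beam_model"], registration)
        # 44: display the positioning result during radiotherapy positioning
        return self.display(positioning)
```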
In one possible implementation manner, the virtual image construction module 41 may include: the DICOM RT data acquisition unit is used for acquiring target object DICOM RT data through a DICOM network; a target object data extraction unit, configured to extract the target object data according to the DICOM RT data; the three-dimensional model building unit is used for building an accelerator beam three-dimensional model and a target object related three-dimensional model according to the target object data; and the virtual image acquisition unit is used for obtaining the three-dimensional visualized virtual image according to the accelerator beam three-dimensional model and the target object related three-dimensional model.
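The extraction step of the target object data extraction unit can be pictured as grouping the received DICOM RT datasets by modality. The sketch below is a hedged illustration: real code would read files with a DICOM library such as pydicom, whereas here plain dictionaries with a `Modality` key stand in so the grouping logic stays self-contained.

```python
# Standard DICOM modality codes for the data the disclosure lists:
# CT images, radiotherapy plan, structure set, and dose.
RT_MODALITIES = ("CT", "RTPLAN", "RTSTRUCT", "RTDOSE")

def extract_target_object_data(datasets):
    """Group DICOM RT datasets by modality into the categories used by
    the three-dimensional model building unit. Each dataset is assumed to
    expose a "Modality" key (as a pydicom Dataset attribute would)."""
    grouped = {m: [] for m in RT_MODALITIES}
    for ds in datasets:
        modality = ds.get("Modality")
        if modality in grouped:
            grouped[modality].append(ds)
    return {
        "ct_images": grouped["CT"],
        "plan": grouped["RTPLAN"],
        "structures": grouped["RTSTRUCT"],
        "dose": grouped["RTDOSE"],
    }
```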
In one possible implementation manner, the three-dimensional model building unit may include: the data analysis subunit is used for obtaining radiation therapy related data by analyzing and processing the target object data; the model data construction subunit is used for establishing corresponding three-dimensional model data according to the radiotherapy related data; and the format conversion subunit is used for converting the three-dimensional model data into a specified format to obtain the accelerator beam three-dimensional model and the target object related three-dimensional model.
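As a hedged example of the format conversion subunit, the sketch below serializes raw three-dimensional model data (vertices and triangular faces) into one possible "specified format", Wavefront OBJ; the disclosure does not name the actual target format, so OBJ is purely an illustrative assumption.

```python
def to_obj(vertices, faces):
    """Serialize a triangle mesh to Wavefront OBJ text.

    vertices: iterable of (x, y, z) coordinates.
    faces:    iterable of (i, j, k) vertex indices (0-based here;
              OBJ itself uses 1-based indices, hence the +1 below).
    """
    lines = [f"v {x:.6f} {y:.6f} {z:.6f}" for x, y, z in vertices]
    lines += [f"f {a + 1} {b + 1} {c + 1}" for a, b, c in faces]
    return "\n".join(lines) + "\n"
```

The same mesh data could equally be converted to STL, glTF, or an engine-specific format depending on the rendering environment.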
In one possible implementation, the virtual-real registration module 42 may include: a real-time picture acquisition unit, which is used for acquiring a real-time picture of the real positioning scene; a feature point obtaining unit, which is used for obtaining feature points of the real positioning scene according to the real-time picture; and a virtual-real registration unit, which is used for matching the three-dimensional model related to the target object in the virtual image to the corresponding position in the real positioning scene according to the feature points, to obtain the registration result.
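The disclosure does not specify how the matching is computed. One standard choice for matched marker points — offered here purely as an illustrative assumption, not as the patented method — is the Kabsch algorithm, which estimates the rigid transform aligning the model-side marker coordinates to the marker positions detected in the real-time picture:

```python
import numpy as np

def rigid_register(model_pts, scene_pts):
    """Return (R, t) such that R @ p + t maps each model point onto its
    matched scene point in the least-squares sense (Kabsch algorithm).

    model_pts, scene_pts: matched Nx3 point sets (N >= 3, non-collinear),
    e.g. skin-marker coordinates on the virtual model and the markers
    detected in the real-time picture of the positioning scene.
    """
    P = np.asarray(model_pts, dtype=float)
    Q = np.asarray(scene_pts, dtype=float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

Applying the returned transform to the target-object-related three-dimensional model places it at the corresponding position in the real positioning scene.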
In one possible implementation, the feature points correspond to position markers that are added to the skin of the target object during a computed tomography CT scan.
In one possible implementation, the display module 44 may include: the target position selection unit is used for determining at least one target position according to the position and the view angle of the target object in the real radiotherapy scene; and the display unit is used for displaying the positioning result at the target position through display equipment.
In one possible implementation, the target object data includes: basic information of a target object, CT image data, planning information, structure set information and dose information; the target object-related three-dimensional model includes: a target region three-dimensional model, a ROI region three-dimensional model, a dose distribution three-dimensional model, and an accelerator beam three-dimensional model.
It should be noted that although the above embodiment describes a positioning result visualization device based on a virtual intelligent medical platform by way of example, those skilled in the art will understand that the present disclosure is not limited thereto. In practice, each implementation can be configured flexibly according to personal preference and/or the actual application scenario, as long as it conforms to the technical solution of the present disclosure.
Thus, by combining mixed reality technology, information such as the tumor and the beams is visualized, realizing three-dimensional holographic display of medical images, three-dimensional display of the radiotherapy plan, and visual display of the positioning result. The patient can observe the positioning result more intuitively and efficiently, clearly understand the positioning situation, and make subjective judgments so as to participate in positioning and in confirming its completion, thereby reducing positioning errors. At the same time, this assists doctor-patient communication, improves positioning efficiency, relieves the patient's psychological pressure, dispels fear, helps the patient maintain a healthy psychological state and good immune function, and encourages more active cooperation with treatment, reducing treatment errors on the patient's side and positively influencing the treatment of tumor radiotherapy patients. In addition, the doctor can correct the positioning result through the three-dimensional images, improving positioning accuracy.
Fig. 5 illustrates a block diagram of an apparatus 1900 for virtual intelligent medical platform based positioning result visualization, according to an embodiment of the disclosure. For example, the apparatus 1900 may be provided as a server. Referring to fig. 5, the apparatus 1900 includes a processing component 1922 that further includes one or more processors and memory resources represented by memory 1932 for storing instructions, such as application programs, that are executable by the processing component 1922. The application programs stored in memory 1932 may include one or more modules each corresponding to a set of instructions. Further, processing component 1922 is configured to execute instructions to perform the methods described above.
The apparatus 1900 may further include a power component 1926 configured to perform power management of the apparatus 1900, a wired or wireless network interface 1950 configured to connect the apparatus 1900 to a network, and an input/output (I/O) interface 1958. The apparatus 1900 may operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as memory 1932, including computer program instructions executable by processing component 1922 of apparatus 1900 to perform the above-described methods.
The present disclosure may be a system, method, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for causing a processor to implement aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium include the following: a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or a raised structure in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as a transitory signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (e.g., a light pulse passing through a fiber-optic cable), or an electrical signal transmitted through a wire.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a respective computing/processing device or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. The network interface card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in the respective computing/processing device.
Computer program instructions for performing the operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. Where a remote computer is involved, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA), can execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, thereby implementing aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing description of the embodiments of the present disclosure has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application, or the technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. A positioning result visualization method based on a virtual intelligent medical platform, characterized by comprising:
according to the target object data, obtaining a three-dimensional visual virtual image;
performing virtual-real registration on the three-dimensional model related to the target object in the virtual image and the real positioning scene to obtain a registration result; the target object related three-dimensional model comprises a target region three-dimensional model and an ROI region three-dimensional model;
combining the three-dimensional model of the accelerator beam in the virtual image with the registration result, and rendering to obtain a positioning result;
and displaying the positioning result in the radiotherapy positioning process.
2. The method of claim 1, wherein obtaining a three-dimensional visual virtual image from the target object data comprises:
obtaining target object DICOM RT data through a DICOM network;
extracting the target object data according to the DICOM RT data;
establishing an accelerator beam three-dimensional model and a target object related three-dimensional model according to the target object data;
and obtaining the three-dimensional visualized virtual image according to the accelerator beam three-dimensional model and the target object related three-dimensional model.
3. The method of claim 2, wherein said creating an accelerator beam three-dimensional model and a target object-related three-dimensional model from said target object data comprises:
analyzing the target object data to obtain radiotherapy related data;
establishing corresponding three-dimensional model data according to the radiotherapy related data;
and converting the three-dimensional model data into a specified format to obtain the accelerator beam three-dimensional model and the target object related three-dimensional model.
4. The method according to claim 1, wherein the obtaining the registration result by performing virtual-real registration on the three-dimensional model related to the target object in the virtual image and the real positioning scene includes:
acquiring a real-time picture of the real positioning scene;
obtaining feature points of the real positioning scene according to the real-time picture;
and matching the three-dimensional model related to the target object in the virtual image to the corresponding position in the real positioning scene according to the feature points, to obtain the registration result.
5. The method of claim 4, wherein the feature points correspond to position markers added to the target subject's skin during a computed tomography CT scan.
6. The method of claim 5, wherein displaying the positioning result during radiation therapy positioning comprises:
determining at least one target position according to the position and the view angle of the target object in the real radiotherapy scene;
and displaying the positioning result at the target position through a display device.
7. The method according to any one of claims 2-6, wherein the target object data comprises: basic information of a target object, CT image data, planning information, structure set information and dose information;
the target object related three-dimensional model further comprises a dose distribution three-dimensional model.
8. A positioning result visualization device based on a virtual intelligent medical platform, characterized by comprising:
the virtual image construction module is used for obtaining a three-dimensional visualized virtual image according to the target object data;
the virtual-real registration module is used for carrying out virtual-real registration on the three-dimensional model related to the target object in the virtual image and the real positioning scene to obtain a registration result; the target object related three-dimensional model comprises a target region three-dimensional model and an ROI region three-dimensional model;
the rendering module is used for combining the three-dimensional model of the accelerator beam in the virtual image and the registration result, and rendering to obtain a positioning result;
and the display module is used for displaying the positioning result in the radiation treatment positioning process.
9. A positioning result visualization device based on a virtual intelligent medical platform, characterized by comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to implement the method of any one of claims 1 to 7 when executing the executable instructions stored in the memory.
10. A non-transitory computer readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the method of any of claims 1 to 7.
CN202010038150.7A 2020-01-14 2020-01-14 Positioning result visualization method and device based on virtual intelligent medical platform Active CN111275825B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010038150.7A CN111275825B (en) 2020-01-14 2020-01-14 Positioning result visualization method and device based on virtual intelligent medical platform

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010038150.7A CN111275825B (en) 2020-01-14 2020-01-14 Positioning result visualization method and device based on virtual intelligent medical platform

Publications (2)

Publication Number Publication Date
CN111275825A CN111275825A (en) 2020-06-12
CN111275825B true CN111275825B (en) 2024-02-27

Family

ID=71002998

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010038150.7A Active CN111275825B (en) 2020-01-14 2020-01-14 Positioning result visualization method and device based on virtual intelligent medical platform

Country Status (1)

Country Link
CN (1) CN111275825B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111870825B (en) * 2020-07-31 2023-08-18 于金明 Radiation therapy accurate field-by-field positioning method based on virtual intelligent medical platform
CN112274166A (en) * 2020-10-18 2021-01-29 上海联影医疗科技股份有限公司 Control method, system and device of medical diagnosis and treatment equipment
CN112070903A (en) * 2020-09-04 2020-12-11 脸萌有限公司 Virtual object display method and device, electronic equipment and computer storage medium
CN112076400A (en) * 2020-10-15 2020-12-15 上海市肺科医院 Repeated positioning method and system
CN112401919B (en) * 2020-11-17 2023-04-21 上海联影医疗科技股份有限公司 Auxiliary positioning method and system based on positioning model
CN114306956A (en) * 2021-03-29 2022-04-12 于金明 Spiral tomography radiotherapy system based on virtual intelligent medical platform

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104587609A (en) * 2015-02-03 2015-05-06 瑞地玛医学科技有限公司 Positioning and locating device for radiotherapy and positioning method of dynamic target region
CN105893772A (en) * 2016-04-20 2016-08-24 上海联影医疗科技有限公司 Data acquiring method and data acquiring device for radiotherapy plan
CN108231199A (en) * 2017-12-29 2018-06-29 上海联影医疗科技有限公司 Radiotherapy planning emulation mode and device
CN108335365A (en) * 2018-02-01 2018-07-27 张涛 A kind of image-guided virtual reality fusion processing method and processing device
CN108460843A (en) * 2018-04-13 2018-08-28 广州医科大学附属肿瘤医院 It is a kind of based on virtual reality radiotherapy patient treatment instruct platform
CN109364387A (en) * 2018-12-05 2019-02-22 上海市肺科医院 A kind of radiotherapy AR localization and positioning system
CN110141360A (en) * 2018-02-11 2019-08-20 四川英捷达医疗科技有限公司 Digital technology air navigation aid
CN110237441A (en) * 2019-05-30 2019-09-17 新乡市中心医院(新乡中原医院管理中心) Coordinate method positions in radiotherapy and puts the application of position


Also Published As

Publication number Publication date
CN111275825A (en) 2020-06-12

Similar Documents

Publication Publication Date Title
CN111275825B (en) Positioning result visualization method and device based on virtual intelligent medical platform
EP3726467B1 (en) Systems and methods for reconstruction of 3d anatomical images from 2d anatomical images
US11576645B2 (en) Systems and methods for scanning a patient in an imaging system
TWI663961B (en) Object positioning apparatus, object positioning method, object positioning program, and radiation therapy system
CN110522516B (en) Multi-level interactive visualization method for surgical navigation
US9554772B2 (en) Non-invasive imager for medical applications
RU2711140C2 (en) Editing medical images
JP6768862B2 (en) Medical image processing method, medical image processing device, medical image processing system and medical image processing program
CN112584760A (en) System and method for object positioning and image guided surgery
CN111261265B (en) Medical imaging system based on virtual intelligent medical platform
JP2019519257A (en) System and method for image processing to generate three-dimensional (3D) views of anatomical parts
CN111353524B (en) System and method for locating patient features
CN113662573B (en) Mammary gland focus positioning method, device, computer equipment and storage medium
US10078906B2 (en) Device and method for image registration, and non-transitory recording medium
CN111214764B (en) Radiotherapy positioning verification method and device based on virtual intelligent medical platform
Advincula et al. Development and future trends in the application of visualization toolkit (VTK): the case for medical image 3D reconstruction
Sarmadi et al. 3D Reconstruction and alignment by consumer RGB-D sensors and fiducial planar markers for patient positioning in radiation therapy
US10896501B2 (en) Rib developed image generation apparatus using a core line, method, and program
US11850005B1 (en) Use of immersive real-time metaverse and avatar and 3-D hologram for medical and veterinary applications using spatially coordinated multi-imager based 3-D imaging
KR102084251B1 (en) Medical Image Processing Apparatus and Medical Image Processing Method for Surgical Navigator
JP2011182946A (en) Medical image display and medical image display method
EP4298994A1 (en) Methods, systems and computer readable mediums for evaluating and displaying a breathing motion
KR102208577B1 (en) Medical Image Processing Apparatus and Medical Image Processing Method for Surgical Navigator
US20090202118A1 (en) Method and apparatus for wireless image guidance
KR20230066526A (en) Image Processing Method, Apparatus, Computing Device and Storage Medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230801

Address after: 250117 Shandong city of Ji'nan province Yan Ji Road, No. 440 Shandong Provincial Tumor Hospital

Applicant after: Yu Jinming

Applicant after: Affiliated Tumor Hospital of Shandong First Medical University (Shandong cancer prevention and treatment institute Shandong Cancer Hospital)

Address before: 250117 Shandong city of Ji'nan province Yan Ji Road, No. 440 Shandong Provincial Tumor Hospital

Applicant before: Yu Jinming

TA01 Transfer of patent application right

Effective date of registration: 20231007

Address after: 201807 2258 Chengbei Road, Jiading District, Shanghai

Applicant after: Shanghai Lianying Medical Technology Co.,Ltd.

Address before: 250117 Shandong city of Ji'nan province Yan Ji Road, No. 440 Shandong Provincial Tumor Hospital

Applicant before: Yu Jinming

Applicant before: Affiliated Tumor Hospital of Shandong First Medical University (Shandong cancer prevention and treatment institute Shandong Cancer Hospital)

TA01 Transfer of patent application right
GR01 Patent grant