CN115205459A - Medical image processing and displaying method and system - Google Patents

Medical image processing and displaying method and system

Info

Publication number
CN115205459A
Authority
CN
China
Prior art keywords: dimensional, image, target, medical, target object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210833395.8A
Other languages
Chinese (zh)
Inventor
戴亚康
耿辰
戴斌
周志勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Institute of Biomedical Engineering and Technology of CAS
Original Assignee
Suzhou Institute of Biomedical Engineering and Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.): 2022-07-14
Filing date: 2022-07-14
Publication date: 2022-10-18
Application filed by Suzhou Institute of Biomedical Engineering and Technology of CAS
Priority to CN202210833395.8A
Publication of CN115205459A
Legal status: Pending

Classifications

    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • A61B 34/10 Computer-aided planning, simulation or modelling of surgical operations
    • G06F 3/013 Eye tracking input arrangements
    • G06T 7/0012 Biomedical image inspection
    • G06T 7/136 Segmentation; Edge detection involving thresholding
    • G06T 7/66 Analysis of geometric attributes of image moments or centre of gravity
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06V 40/18 Eye characteristics, e.g. of the iris
    • A61B 2034/101 Computer-aided simulation of surgical operations
    • A61B 2034/105 Modelling of the patient, e.g. for ligaments or bones
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G06T 2207/30056 Liver; Hepatic
    • G06T 2207/30061 Lung
    • G06T 2207/30101 Blood vessel; Artery; Vein; Vascular

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Geometry (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Robotics (AREA)
  • Ophthalmology & Optometry (AREA)
  • Quality & Reliability (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Radiology & Medical Imaging (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention provides a medical image processing and displaying method and system. The method comprises the following steps: acquiring a medical modality image set corresponding to an acquisition object, establishing a first three-dimensional reconstruction model from the image set, generating an initial image, and displaying the initial image through a light field display as a three-dimensional initial image for human eye observation; determining the observer's target observation point on the three-dimensional initial image; determining the target object corresponding to the position of the target observation point, the target object being a local tissue within the acquisition object; and retrieving a multi-view two-dimensional image set of the target object, generating a target image from that set, and displaying the target image through the light field display as a three-dimensional target image for human eye observation. Throughout the observation process the observer can view the target object from multiple angles, stereoscopically and intuitively, without any manual operation.

Description

Medical image processing and displaying method and system
Technical Field
The invention relates to the technical field of medical imaging, and in particular to a medical image processing and displaying method and system.
Background
Three-dimensional images are widely used in the medical field: a doctor can judge a patient's possible problems from a three-dimensional image and, by observing it, carry out surgical planning or surgical treatment. In the medical field, information about the organ or bone under examination from any viewing angle is particularly important. Most current medical three-dimensional images are sectional images or three-dimensional reconstructions, and image information is observed by selecting a target slice image or adjusting the angle of the three-dimensional reconstruction. However, these images are still presented on a 2D flat display, which is limiting for medical staff whose spatial perception is not strong. In addition, during actual surgery the doctor must continually select the required image manually, or manually adjust the angle of the three-dimensional reconstruction and magnify the required image, which reduces surgical efficiency and makes operating errors more likely.
Disclosure of Invention
Therefore, the present invention provides a medical image processing and displaying method and system to solve the prior-art problem that the required medical image must be selected, adjusted or enlarged manually.
According to a first aspect, an embodiment of the present invention provides a medical image processing and displaying method, including the following steps:
acquiring a medical modal image set corresponding to an acquisition object, establishing a first three-dimensional reconstruction model by using the medical modal image set, generating an initial image, and displaying the initial image as a three-dimensional initial image for human eye observation through a light field display;
determining a target observation point of an observer on the three-dimensional initial image;
determining a target object corresponding to the position of the target observation point, wherein the target object is a local tissue in the acquisition object;
and calling a multi-view two-dimensional image set of the target object, generating a target image from the multi-view two-dimensional image set, and displaying the target image as a three-dimensional target image for human eye observation through the light field display.
Optionally, the determining a target viewpoint of the observer on the three-dimensional initial image includes:
identifying a left eye pupil and a right eye pupil of an observer, and determining a midpoint between the left eye pupil and the right eye pupil;
and determining a central point of the three-dimensional initial image, and taking a first intersection point of a connecting line of the central point and the midpoint and the three-dimensional initial image as the target observation point.
Optionally, the determining a target object corresponding to the position of the target observation point includes:
and matching the target object containing the position coordinates according to the position coordinates of the target observation point.
Optionally, the establishing a first three-dimensional reconstruction model by using the medical modality image set and generating an initial image comprises:
performing three-dimensional volume rendering by using the medical modal image set and adopting a ray projection method, and establishing a first three-dimensional reconstruction model;
acquiring two-dimensional images of the first three-dimensional reconstruction model at different angles according to a preset acquisition direction and a preset acquisition angle;
rendering the collected two-dimensional images at different angles respectively, and automatically generating initial images for displaying by the light field display according to the rendered two-dimensional images.
Optionally, after determining the target object corresponding to the position of the target observation point, the method further includes:
preprocessing the target object;
carrying out three-dimensional reconstruction on the preprocessed target object again to generate a second three-dimensional reconstruction model;
and acquiring two-dimensional images of the second three-dimensional reconstruction model from multiple angles to generate the multi-view two-dimensional image set.
Optionally, the preprocessing the target object includes:
rendering processing;
and refining, namely analyzing voxel information of the target object in the first three-dimensional reconstruction model, and adjusting the window width and window level of the target object.
Optionally, after the building of the first three-dimensional reconstruction model, the method further includes:
respectively segmenting tissues in the first three-dimensional reconstruction model;
respectively generating a corresponding second three-dimensional reconstruction model for each divided tissue;
and carrying out multi-angle acquisition on each second three-dimensional reconstruction model, and respectively generating a corresponding multi-view two-dimensional image set.
According to a second aspect, an embodiment of the present invention provides a medical imaging system, including:
the somatosensory interaction device is used for identifying the left eye pupil and the right eye pupil of an observer;
the image intelligent analysis workstation is electrically connected with the somatosensory interaction equipment and is used for acquiring a medical modal image set corresponding to an acquired object, and establishing a first three-dimensional reconstruction model by using the medical modal image set to generate an initial image; determining a target observation point of an observer on the three-dimensional initial image; determining a target object corresponding to the position of the target observation point; calling a multi-view two-dimensional image set of the target object, and generating a target image from the multi-view two-dimensional image set;
the light field display is electrically connected with the image intelligent analysis workstation and used for displaying the initial image or the target image as a three-dimensional image for human eye observation;
and the plane display is electrically connected with the intelligent image analysis workstation and is used for displaying the medical modal image set and the multi-view two-dimensional image set.
According to a third aspect, an embodiment of the present invention provides a computer device, including a memory and a processor that are communicatively connected to each other, wherein the memory stores computer instructions and the processor executes the computer instructions to perform the medical image processing and displaying method described above.
According to a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium storing computer instructions for causing a computer to execute the medical image processing and displaying method described above.
The technical scheme of the invention has the following advantages:
In the embodiment of the invention, a first three-dimensional reconstruction model is established from the acquired medical modality image set of the acquisition object, an initial image is generated, and the initial image is displayed through a light field display as a three-dimensional initial image for human eye observation; the target object is determined from the determined target observation point; and the multi-view two-dimensional image set corresponding to the target object is retrieved, a target image is generated from it, and the target image is displayed through the light field display as a three-dimensional target image for human eye observation. During actual observation the observer never needs to adjust the image manually: the target object the observer is focusing on is determined from the target observation point, and the initially displayed three-dimensional initial image is switched to the three-dimensional target image. The observer can therefore view the target object from multiple angles, stereoscopically and intuitively. Because the three-dimensional target image is displayed specifically for the target object, the regenerated target image acts as a local magnification of the initial image and the details of the tissue structure are more prominent.
Drawings
To illustrate the embodiments of the present invention or the technical solutions of the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a specific example of a medical image processing and displaying method according to embodiment 1 of the present application;
fig. 2 is a schematic view of a specific example of a three-dimensional image displayed by the light field display viewed at different viewing angles in embodiment 1 of the present application;
FIG. 3 is a diagram showing a specific example of determining a target observation point in embodiment 1 of the present application;
fig. 4 is a schematic diagram of a specific example of the multi-view image acquisition directions in embodiment 1 of the present application;
fig. 5 is a schematic structural diagram of a specific example of a medical imaging system according to embodiment 2 of the present application;
fig. 6 is a schematic structural diagram of a specific example of a computer device in embodiment 3 of the present application.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; the two elements may be directly connected or indirectly connected through an intermediate medium, or may be communicated with each other inside the two elements, or may be wirelessly connected or wired connected. The specific meanings of the above terms in the present invention can be understood in a specific case to those of ordinary skill in the art.
In addition, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Example 1
This embodiment provides a medical image processing and displaying method in which a somatosensory interaction device detects the specific position that the observer's pupils are looking at, an intelligent image analysis workstation performs three-dimensional reconstruction on the acquired medical modality image set and determines the target object, and a light field display presents the processed initial image or target image as a three-dimensional image for human eye observation, thereby realizing the processing and display of medical images. As shown in fig. 1, the method includes the following steps:
step S101, a medical mode image set corresponding to an acquisition object is obtained, a first three-dimensional reconstruction model is established by using the medical mode image set, an initial image is generated, and the initial image is displayed as a three-dimensional initial image for human eye observation through a light field display.
The medical modality images may be multi-modality medical images, for example CT images (Computed Tomography, CT), MR images (Magnetic Resonance, MR), PET images (Positron Emission Tomography, PET) and the like. The medical images of all modalities acquired for the acquisition object may be integrated into a medical modality image set, and the set may contain images of a single modality or of several modalities.
Taking the case where the medical modality image set contains images of only one modality as an example, in this embodiment a volume rendering method based on ray projection (ray casting) may be used to visually render the images in the set to form a volume rendering model, which serves as the first three-dimensional reconstruction model of this embodiment. Further, the first three-dimensional reconstruction model is rendered from a plurality of viewing angles, for example 100 viewing angles spaced 3.6° apart. A virtual camera is placed at each viewing angle with its center aligned to the center of the first three-dimensional reconstruction model; each viewing angle forms one image, each image is rendered to generate a two-dimensional rendering, and all renderings are output together. The images rendered at the viewing angles are arranged in order and input to the light field display, which can automatically generate the initial image and display it as a three-dimensional initial image for human eye observation. In this embodiment the initial image may also be generated by the intelligent image analysis workstation and then input to the light field display, which displays it as a three-dimensional initial image for human eye observation.
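A minimal illustrative sketch of the multi-view capture described above (not part of the patent text): 100 virtual cameras placed at 3.6° intervals on a circle around the center of the reconstructed volume, each producing one rendered view. The renderer callback render_view is a hypothetical placeholder for the actual volume renderer.

```python
# Illustrative sketch only: place 100 virtual cameras 3.6 degrees apart around the
# volume center and collect one rendered view per camera. `render_view` is a
# hypothetical callback standing in for the actual volume renderer.
import numpy as np

def multi_view_render(volume_center, radius, render_view, n_views=100):
    """Render the volume from n_views equally spaced viewpoints (360/100 = 3.6 deg)."""
    center = np.asarray(volume_center, dtype=float)
    views = []
    for k in range(n_views):
        theta = np.deg2rad(k * 360.0 / n_views)   # 3.6-degree spacing
        cam_pos = center + radius * np.array([np.cos(theta), np.sin(theta), 0.0])
        # Each virtual camera is aimed at the center of the first 3D reconstruction model.
        views.append(render_view(camera_position=cam_pos, focal_point=center))
    return views  # ordered view list to be fed to the light field display
```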
And step S102, determining a target observation point of the observer on the three-dimensional initial image.
In this embodiment, the somatosensory interaction device may be used to detect the observer's left-eye pupil and right-eye pupil, analyze the central viewing direction (i.e. the observation angle) of the two pupils, automatically detect the position the observer focuses on when viewing the three-dimensional initial image, and thereby determine the observer's target observation point on the three-dimensional initial image; the specific determination method is described below.
It should be noted that the light field display used in this embodiment is a display device whose optical system lets the viewer see a differently rendered image from each viewing angle, so that images from multiple viewing angles are displayed simultaneously. As shown in fig. 2, the three-dimensional image presented by the light field display can be viewed from different viewing angles. The display uses optical principles to present the processed initial image or target image as a three-dimensional initial image or three-dimensional target image, and any point on the three-dimensional initial image can be mapped to the same position on the initial image, so a target observation point on the three-dimensional initial image can be mapped to a target observation point on the initial image.
Step S103, determining a target object corresponding to the position of the target observation point, where the target object is a local tissue in the acquisition object.
After the target observation point is determined, the intelligent image analysis workstation automatically retrieves the three-dimensional coordinate corresponding to the target observation point in the coordinate system. Each tissue in the three-dimensional initial image displayed by the light field display for human eye observation corresponds to a set of three-dimensional coordinates, and the target object containing the three-dimensional coordinate of the determined target observation point is matched automatically. For example, suppose the three-dimensional initial image contains two tissues, a liver and a blood vessel 1 connected to the liver, where the coordinate set of the liver is {(a1, b1, c1), (a2, b2, c2), (a3, b3, c3)} and the coordinate set of blood vessel 1 is {(a11, b11, c11), (a22, b22, c22), (a33, b33, c33)}. If the three-dimensional coordinate corresponding to the determined target observation point is (a2, b2, c2), the target object corresponding to the position of the target observation point is determined to be the liver.
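A small sketch of this coordinate matching (names and values are illustrative, not from the patent): each segmented tissue maps to the set of three-dimensional coordinates it occupies, and the target object is the tissue whose set contains the coordinate of the target observation point.

```python
# Illustrative sketch: look up which tissue's coordinate set contains the
# coordinate of the target observation point (the liver in the example above).
def match_target_object(observation_point, tissue_coordinate_sets):
    """tissue_coordinate_sets: dict like {'liver': {(x, y, z), ...}, 'vessel_1': {...}}."""
    for tissue_name, coords in tissue_coordinate_sets.items():
        if tuple(observation_point) in coords:
            return tissue_name
    return None  # no tissue contains the observation point

tissues = {
    "liver":    {(1, 2, 3), (2, 2, 2), (3, 3, 3)},          # placeholder coordinates
    "vessel_1": {(11, 11, 11), (22, 22, 22), (33, 33, 33)},
}
print(match_target_object((2, 2, 2), tissues))  # -> "liver"
```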
And step S104, calling a multi-view two-dimensional image set of the target object, generating a target image from the multi-view two-dimensional image set, and displaying the target image as a three-dimensional target image for human eye observation through the light field display.
After the target object is determined, three-dimensional reconstruction is performed on it again to generate a volume rendering model, which is taken as the second three-dimensional reconstruction model of this embodiment. The second three-dimensional reconstruction model is likewise rendered from a plurality of viewing angles: the image corresponding to each viewing angle is collected, each image is rendered to generate a two-dimensional rendering, and all renderings are output together. The rendered images are arranged in order to generate the multi-view two-dimensional image set of the target object, which is input to the light field display; the light field display automatically generates the target image and displays it as a three-dimensional target image for human eye observation.
In this embodiment, the multi-view two-dimensional image set corresponding to the target object may also be retrieved directly. After the first three-dimensional reconstruction model is established, the tissues in it are segmented, a corresponding second three-dimensional reconstruction model is generated for each segmented tissue, and each second three-dimensional reconstruction model is collected from multiple angles to generate its own multi-view two-dimensional image set. Once the target object is determined, the multi-view two-dimensional image set corresponding to it is retrieved directly.
In this embodiment, a first three-dimensional reconstruction model is established from the acquired medical modality image set of the acquisition object, an initial image is generated, and the initial image is displayed through a light field display as a three-dimensional initial image for human eye observation; the target object is determined from the determined target observation point; and the multi-view two-dimensional image set corresponding to the target object is retrieved, a target image is generated from it, and the target image is displayed through the light field display as a three-dimensional target image for human eye observation. During actual observation the observer never needs to adjust the image manually: the target object the observer is focusing on is determined from the target observation point, and the initially displayed three-dimensional initial image is switched to the three-dimensional target image. The observer can view the target object from multiple angles, stereoscopically and intuitively, with the naked eye and without wearing any equipment. Because the second three-dimensional reconstruction model is reconstructed specifically for the target object, the regenerated target image acts as a local magnification of the initial image, the details of the tissue structure are more prominent, and the display effect is improved.
As an alternative implementation manner, in an embodiment of the present invention, the determining a target observation point of the observer on the three-dimensional initial image includes:
identifying a left eye pupil and a right eye pupil of an observer, and determining a midpoint between the left eye pupil and the right eye pupil;
and determining a central point of the three-dimensional initial image, and taking a first intersection point of a connecting line of the central point and the midpoint with the three-dimensional initial image as the target observation point.
As shown in fig. 3, the somatosensory interaction device is first used to identify the observer's left-eye pupil L and right-eye pupil R, and the midpoint Q of the line connecting the left-eye pupil L and the right-eye pupil R is computed. With the central point O of the three-dimensional initial image determined, the first intersection point P of the line connecting the central point O and the midpoint Q with the three-dimensional initial image is calculated and taken as the target observation point.
A specific calculation may proceed as follows: take the central point O of the three-dimensional initial image as the origin of a three-dimensional coordinate system; detect the positions of the observer's left-eye pupil L and right-eye pupil R in this coordinate system with a depth camera and calculate the three-dimensional coordinate of the point Q; establish the equation of the line OQ; calculate, from this equation, all coordinates in the three-dimensional initial image that satisfy it; and finally select, among the qualifying coordinates, the one closest to the observer as the first intersection point with the three-dimensional initial image, which is taken as the target observation point.
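A hedged sketch of this intersection search (not the patent's exact algorithm): step along the line from the eye midpoint Q toward the display center O and return the first sample that falls inside the displayed volume, which is by construction the intersection closest to the observer. The membership test inside_volume and the step sizes are hypothetical placeholders.

```python
# Sketch under assumptions: march from the eye midpoint Q toward the center O and
# return the first point inside the displayed volume, i.e. the hit closest to the
# observer. `inside_volume` is a hypothetical membership test for the 3D image.
import numpy as np

def first_intersection(O, Q, inside_volume, step=0.5, max_dist=2000.0):
    O, Q = np.asarray(O, dtype=float), np.asarray(Q, dtype=float)
    direction = (O - Q) / np.linalg.norm(O - Q)   # from the observer toward the center O
    t = 0.0
    while t <= max_dist:
        p = Q + t * direction
        if inside_volume(p):                      # first hit = closest to the observer
            return p                              # target observation point P
        t += step
    return None  # line OQ does not intersect the displayed volume
```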
In this embodiment, the target observation point is determined from the positional relationship between the midpoint of the line connecting the observer's left-eye and right-eye pupils and the central point of the three-dimensional initial image. This allows the position the observer is focusing on to be determined quickly, and the target object being observed can then be determined quickly from the target observation point, so that the light field display can go on to present the three-dimensional target image for human eye observation and highlight the observer's focal observation position.
As an optional implementation manner, in an embodiment of the present invention, the determining a target object corresponding to the position of the target observation point includes:
and matching the target object containing the position coordinates according to the position coordinates of the target observation point.
Specifically, the first three-dimensional reconstruction model established from the medical modality image set of the acquisition object may contain several different tissue structures, i.e. organs, blood vessels and the like, each segmented independently, and each tissue corresponds to its own set of three-dimensional coordinates. In this embodiment it is only necessary to match the position coordinate of the target observation point, and the tissue containing that coordinate, i.e. the target object, can be matched quickly.
In this embodiment, coordinates are established when the first three-dimensional reconstruction model is built and the three-dimensional coordinate set corresponding to each tissue is determined, so the target object can be matched quickly during actual observation. This reduces the computational load on the intelligent image analysis workstation, matching is fast, and the observer can quickly identify the required tissue structure.
In this embodiment, the determined target object may also be highlighted, making it easy for the observer to locate the tissue quickly.
In this embodiment, after the target object is determined by position-coordinate matching, the target image may be segmented with an image segmentation method. Specifically, the lung parenchyma may be segmented by threshold-based Otsu segmentation, and the segmentation result post-processed by hole filling and a closing operation; the sternum, ribs and similar structures may be segmented with a method based on histogram distribution and region growing; a template-based segmentation method may be used for cardiac segmentation; and a circle detection method based on the Hough transform together with seed-point region growing may be used to segment blood vessels and the trachea, after which three-dimensional data of a chest tissue and trachea model can be obtained. The corresponding segmentation model or segmentation method is invoked automatically according to the tissue type of the target object or the position information contained in the target image, yielding the target object.
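A short sketch of the Otsu-plus-morphology step mentioned above, assuming scikit-image and SciPy are available; it illustrates the general technique only, not the patent's exact pipeline, and the parameters are assumptions.

```python
# Illustrative sketch: Otsu thresholding of a CT slice followed by the closing and
# hole-filling post-processing mentioned above. Parameters are assumptions.
import numpy as np
from skimage.filters import threshold_otsu
from scipy.ndimage import binary_closing, binary_fill_holes

def segment_lung_parenchyma(ct_slice: np.ndarray) -> np.ndarray:
    thresh = threshold_otsu(ct_slice)           # global Otsu threshold
    mask = ct_slice < thresh                    # lung/air is darker than soft tissue on CT
    mask = binary_closing(mask, iterations=2)   # closing operation to seal small gaps
    mask = binary_fill_holes(mask)              # fill internal cavities
    return mask
```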
As an alternative implementation, in an embodiment of the present invention, the creating a first three-dimensional reconstruction model by using the medical modality image set and generating an initial image includes:
performing three-dimensional volume rendering by using the medical modal image set and adopting a ray projection method, and establishing a first three-dimensional reconstruction model;
acquiring two-dimensional images of the first three-dimensional reconstruction model at different angles according to a preset acquisition direction and a preset acquisition angle;
rendering the acquired two-dimensional images at different angles respectively, inputting the rendered two-dimensional images to the light field display according to an acquisition sequence, and automatically generating an initial image for display.
Taking the example that the medical modality image set only includes one medical modality image, in this embodiment, a volume rendering method based on ray projection may be adopted to perform visual rendering on the image in the medical modality image set to form a volume rendering model, and the volume rendering model is used as the first three-dimensional reconstruction model in this embodiment.
Further, the first three-dimensional reconstruction model is rendered from a plurality of viewing angles, and two-dimensional images of the model at different angles are collected according to a preset acquisition direction and a preset acquisition angle. For example, as shown in fig. 4, three trajectory directions G1, G2 and G3 may be used as the preset acquisition directions, with the preset acquisition angle in each direction set to 3.6°, i.e. adjacent viewing angles are 3.6° apart, so the first three-dimensional reconstruction model may be rendered from 300 viewing angles. A virtual camera is placed at each viewing angle with its center aligned to the center of the first three-dimensional reconstruction model; each viewing angle forms one image, each image is rendered to generate a two-dimensional rendering, and all renderings are output together (the acquisition angle can of course be set as required). The rendered images are arranged in their collection order, front to back, and input to the light field display, which automatically generates the initial image and displays it as a three-dimensional initial image for human eye observation. In this embodiment the initial image may also be generated by the intelligent image analysis workstation and then input to the light field display, which displays it as a three-dimensional initial image for human eye observation.
In this embodiment, each image is rendered after the images at the viewing angles are collected, which brings the display closer to the shape and color of the organ or tissue itself, so that the observer's observation is more intuitive. Further, for viewing angles at which no virtual camera is placed, i.e. angles that are not collected, a linear interpolation method may be used to generate an interpolated rendering.
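A minimal illustration of that interpolation idea: a view angle that was not rendered is approximated by blending its two rendered neighbors, weighted by angular distance. This image-space blend is only a sketch of the general approach; the patent does not specify the interpolation details.

```python
# Sketch only: linearly blend the two nearest rendered views for an uncaptured angle.
import numpy as np

def interpolate_view(angle, angle_a, view_a, angle_b, view_b):
    """view_a / view_b: H x W x 3 arrays rendered at angle_a and angle_b (degrees)."""
    w = (angle - angle_a) / (angle_b - angle_a)         # 0 at angle_a, 1 at angle_b
    blended = (1.0 - w) * view_a.astype(float) + w * view_b.astype(float)
    return blended.astype(view_a.dtype)
```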
As an optional implementation manner, in an embodiment of the present invention, after determining the target object corresponding to the position of the target observation point, the method further includes:
preprocessing the target object;
carrying out three-dimensional reconstruction on the preprocessed target object again to generate a second three-dimensional reconstruction model;
and acquiring two-dimensional images of the second three-dimensional reconstruction model from multiple angles to generate the multi-view two-dimensional image set.
In this embodiment, for the determined target object, three-dimensional reconstruction is performed again according to the medical modality image set corresponding to the target object to generate a second three-dimensional reconstruction model, and a multi-angle acquisition mode is also adopted to generate a multi-view two-dimensional image set.
As an optional implementation manner, in an embodiment of the present invention, the preprocessing the target object includes:
and rendering processing, namely rendering the target object to be the target object which is closer to the actual tissue color or shape, and performing multi-view two-dimensional rendering on the reverse angle of the target object and outputting the rendering image of the forward view at the same time.
And refining, namely analyzing voxel information of the target object in the first three-dimensional reconstruction model, and adjusting the window width and window level of the target object.
In this embodiment, the region within about ±5° around the line connecting the midpoint Q and the central point O may be taken as the preprocessing range, or only the target object may be preprocessed. For the range or object to be preprocessed, the mean and standard deviation of the voxel intensity values at that position are analyzed, the displayed intensity range and its median are calculated from one or more distribution functions such as a Gaussian, uniform or gamma distribution, and these are converted into a window width and window level, so that the target object or range is displayed more accurately and in finer detail.
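A hedged sketch of this refinement step, assuming a simple Gaussian model of the voxel intensities so that the display window covers roughly ±k standard deviations around the mean; the factor k and the [0, 1] output range are assumptions, not values given in the patent.

```python
# Sketch under assumptions: derive a display window (width/level) from the voxel
# statistics of the target object and apply it for display. k is an assumed factor.
import numpy as np

def window_from_voxels(voxels: np.ndarray, k: float = 2.0):
    mean, std = float(np.mean(voxels)), float(np.std(voxels))
    window_level = mean            # center of the displayed intensity range
    window_width = 2.0 * k * std   # total width of the displayed range
    return window_width, window_level

def apply_window(volume: np.ndarray, width: float, level: float) -> np.ndarray:
    lo, hi = level - width / 2.0, level + width / 2.0
    return np.clip((volume - lo) / max(hi - lo, 1e-6), 0.0, 1.0)  # map to [0, 1] for display
```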
As an optional implementation manner, in an embodiment of the present invention, after the creating the first three-dimensional reconstruction model, the method further includes:
respectively segmenting tissues in the first three-dimensional reconstruction model;
respectively generating a corresponding second three-dimensional reconstruction model for each divided tissue;
and carrying out multi-angle acquisition on each second three-dimensional reconstruction model, and respectively generating a corresponding multi-view two-dimensional image set.
In this embodiment, besides the method of reconstructing the target object after it has been determined, the tissues in the first three-dimensional reconstruction model may be segmented once that model is established, and a corresponding second three-dimensional reconstruction model generated for each segmented tissue. For example, if the first three-dimensional reconstruction model contains three tissues, namely a liver and the blood vessels 1 and 2 connected to it, the three tissues are segmented, and the segmented liver, blood vessel 1 and blood vessel 2 serve as the second three-dimensional reconstruction models of the liver, blood vessel 1 and blood vessel 2 respectively. Multi-angle acquisition is then performed on the second three-dimensional reconstruction models of the liver, blood vessel 1 and blood vessel 2 to generate the multi-view two-dimensional image set of the liver, of blood vessel 1 and of blood vessel 2.
During actual observation, once the target object is determined, the corresponding multi-view two-dimensional image set can be retrieved directly. This reduces the running load of the intelligent image analysis workstation, retrieval is fast, the set can be used directly to generate the target image, and switching to the target image is quick.
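A sketch of this precomputation strategy (function names are hypothetical): one multi-view image set is produced per segmented tissue right after the first reconstruction, so that switching to the target image becomes a dictionary lookup instead of a fresh reconstruction.

```python
# Illustrative sketch: pre-compute a multi-view image set for every segmented
# tissue; `reconstruct` and `multi_view_render` are hypothetical callbacks.
def precompute_view_sets(segmented_tissues, reconstruct, multi_view_render):
    """segmented_tissues: dict name -> voxel data; returns dict name -> list of rendered views."""
    view_sets = {}
    for name, voxels in segmented_tissues.items():
        second_model = reconstruct(voxels)                  # second 3D reconstruction model
        view_sets[name] = multi_view_render(second_model)   # multi-view 2D image set
    return view_sets

# At observation time the target image set is a direct lookup, e.g.:
# target_views = view_sets[matched_target_object_name]
```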
Example 2
This embodiment provides a medical imaging system that can be used to execute the medical image processing and displaying method of embodiment 1; as shown in fig. 5, the system includes:
the somatosensory interaction device 11 is used for identifying a left eye pupil and a right eye pupil of an observer;
the image intelligent analysis workstation 12 is electrically connected with the somatosensory interaction equipment and is used for acquiring a medical modal image set corresponding to an acquired object, establishing a first three-dimensional reconstruction model by using the medical modal image set and generating an initial image; determining a target observation point of an observer on the three-dimensional initial image; determining a target object corresponding to the position of the target observation point; and calling a multi-view two-dimensional image set of the target object, and generating a target image from the multi-view two-dimensional image set.
The light field display 13 is electrically connected with the image intelligent analysis workstation and is used for displaying the initial image or the target image as a three-dimensional image for human eye observation;
and the plane display 14 is electrically connected with the image intelligent analysis workstation and is used for displaying the medical mode image set and the multi-view two-dimensional image set.
In this embodiment, the somatosensory interaction device may use a Kinect or a Leap Motion to detect and track the pupils and may carry a depth camera; the intelligent image analysis workstation may be a deep-learning workstation based on the X86 architecture; the light field display may be a Looking Glass; and the planar display may be a medical image display.
The medical imaging system may further include a planar interaction device 15 connected to the planar display 14 for touch and other interaction, for example to adjust the angle of the image on the planar display. Displaying on the planar display and the light field display simultaneously gives the observer more choice of views and allows tissue details to be confirmed again, improving observation accuracy.
In this embodiment, the intelligent image analysis workstation acquires the medical modality image set of the acquisition object to establish a first three-dimensional reconstruction model and generate an initial image, which is displayed through the light field display as a three-dimensional initial image for human eye observation; the target object is determined from the determined target observation point; and the multi-view two-dimensional image set corresponding to the target object is retrieved, a target image is generated from it, and the target image is displayed through the light field display as a three-dimensional target image for human eye observation. During actual observation the observer never needs to adjust the image manually: the target object the observer is focusing on is determined from the target observation point, and the initially displayed three-dimensional initial image is switched to the three-dimensional target image. The observer can view the target object from multiple angles, stereoscopically and intuitively; because the second three-dimensional reconstruction model is reconstructed specifically for the target object, the regenerated target image acts as a local magnification of the initial image and the details of the tissue structure are more prominent.
Example 3
This embodiment provides a computer device. As shown in fig. 6, the computer device includes a processor 301 and a memory 302, which may be connected by a bus or in another manner; fig. 6 takes connection by a bus as an example.
Processor 301 may be a Central Processing Unit (CPU). Processor 301 may also be another general-purpose processor, a Digital Signal Processor (DSP), a Graphics Processing Unit (GPU), an embedded Neural Network Processor (NPU) or other dedicated deep-learning coprocessor, an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or any combination thereof.
The memory 302, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs and modules, such as the program instructions/modules corresponding to the medical image processing and displaying method in the embodiments of the present invention. The processor 301 runs the non-transitory software programs, instructions and modules stored in the memory 302, thereby executing the various functional applications and data processing of the processor, i.e. implementing the medical image processing and displaying method in the above method embodiment.
The memory 302 may further include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created by the processor 301, and the like. Further, the memory 302 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 302 may optionally include memory located remotely from the processor 301, which may be connected to the processor 301 over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The memory 302 stores one or more modules, which when executed by the processor 301, perform the medical image processing display method in the embodiment shown in fig. 1.
The details of the computer device may be understood with reference to the corresponding related description and effects in the embodiment shown in fig. 1, and are not described herein again.
The embodiment of the invention also provides a computer-readable storage medium, wherein the computer-readable storage medium stores computer-executable instructions, and the computer-executable instructions can execute the medical image processing and displaying method in any embodiment. The storage medium may be a magnetic Disk, an optical Disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a Flash Memory (Flash Memory), a Hard Disk (Hard Disk Drive, abbreviated as HDD), a Solid State Drive (SSD), or the like; the storage medium may also comprise a combination of memories of the kind described above.
It should be understood that the above examples are given only for clarity of illustration and are not intended to limit the embodiments. Other variations and modifications in different forms will be apparent to those skilled in the art in light of the above description; it is neither necessary nor possible to enumerate all embodiments here, and obvious variations or modifications derived therefrom remain within the protection scope of the invention.

Claims (10)

1. A medical image processing and displaying method is characterized by comprising the following steps:
acquiring a medical modal image set corresponding to an acquisition object, establishing a first three-dimensional reconstruction model by using the medical modal image set, generating an initial image, and displaying the initial image as a three-dimensional initial image for human eye observation through a light field display;
determining a target observation point of an observer on the three-dimensional initial image;
determining a target object corresponding to the position of the target observation point, wherein the target object is a local tissue in the acquisition object;
and calling a multi-view two-dimensional image set of the target object, generating a target image from the multi-view two-dimensional image set, and displaying the target image as a three-dimensional target image for human eye observation through the light field display.
2. The method as claimed in claim 1, wherein said determining a target viewpoint of the viewer on the three-dimensional initial image comprises:
identifying a left eye pupil and a right eye pupil of an observer, and determining a midpoint between the left eye pupil and the right eye pupil;
and determining a central point of the three-dimensional initial image, and taking a first intersection point of a connecting line of the central point and the midpoint with the three-dimensional initial image as the target observation point.
3. The method as claimed in claim 2, wherein said determining the target object corresponding to the position of the target viewpoint comprises:
and matching the target object containing the position coordinates according to the position coordinates of the target observation point.
4. The method according to claim 1, wherein said building a first three-dimensional reconstruction model using said set of medical modality images and generating an initial image comprises:
performing three-dimensional volume rendering by using the medical modal image set and adopting a ray projection method, and establishing a first three-dimensional reconstruction model;
acquiring two-dimensional images of the first three-dimensional reconstruction model at different angles according to a preset acquisition direction and a preset acquisition angle;
rendering the acquired two-dimensional images at different angles respectively, and automatically generating initial images for the light field display to display the rendered two-dimensional images.
5. The method as claimed in claim 1, further comprising, after determining the target object corresponding to the position of the target observation point:
preprocessing the target object;
carrying out three-dimensional reconstruction on the preprocessed target object again to generate a second three-dimensional reconstruction model;
and acquiring two-dimensional images of the second three-dimensional reconstruction model from multiple angles to generate the multi-view two-dimensional image set.
6. The medical image processing and displaying method according to claim 5, wherein the preprocessing the target object includes:
rendering processing;
and refining, namely analyzing voxel information of the target object in the first three-dimensional reconstruction model, and adjusting the window width and window level of the target object.
7. The medical image processing and displaying method according to claim 4, further comprising, after the building of the first three-dimensional reconstruction model:
respectively segmenting tissues in the first three-dimensional reconstruction model;
respectively generating a corresponding second three-dimensional reconstruction model for each divided tissue;
and carrying out multi-angle acquisition on each second three-dimensional reconstruction model, and respectively generating a corresponding multi-view two-dimensional image set.
8. A medical imaging system, comprising:
the somatosensory interaction device is used for identifying the left eye pupil and the right eye pupil of an observer;
the image intelligent analysis workstation is electrically connected with the somatosensory interaction equipment and is used for acquiring a medical modal image set corresponding to an acquired object, and establishing a first three-dimensional reconstruction model by using the medical modal image set to generate an initial image; determining a target observation point of an observer on the three-dimensional initial image; determining a target object corresponding to the position of the target observation point; calling a multi-view two-dimensional image set of the target object, and generating a target image from the multi-view two-dimensional image set;
the light field display is electrically connected with the image intelligent analysis workstation and used for displaying the initial image or the target image as a three-dimensional image for human eye observation;
and the plane display is electrically connected with the image intelligent analysis workstation and is used for displaying the medical modal image set and the multi-view two-dimensional image set.
9. A computer device, comprising:
a memory and a processor, wherein the memory and the processor are communicatively connected, the memory stores computer instructions, and the processor executes the computer instructions to execute the medical image processing and displaying method according to any one of claims 1 to 7.
10. A computer-readable storage medium storing computer instructions for causing a computer to execute the medical image processing display method according to any one of claims 1 to 7.
CN202210833395.8A (filed 2022-07-14, priority date 2022-07-14): Medical image processing and displaying method and system. Status: Pending. Publication: CN115205459A (en).

Priority Applications (1)

Application Number: CN202210833395.8A; Priority date: 2022-07-14; Filing date: 2022-07-14; Title: Medical image processing and displaying method and system

Applications Claiming Priority (1)

Application Number: CN202210833395.8A; Priority date: 2022-07-14; Filing date: 2022-07-14; Title: Medical image processing and displaying method and system

Publications (1)

Publication Number: CN115205459A; Publication Date: 2022-10-18

Family

ID=83582006

Family Applications (1)

Application Number: CN202210833395.8A; Title: Medical image processing and displaying method and system; Priority date: 2022-07-14; Filing date: 2022-07-14; Status: Pending

Country Status (1)

Country: CN; Link: CN115205459A (en)


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination