CN113017833A - Organ positioning method, organ positioning device, computer equipment and storage medium - Google Patents
Organ positioning method, organ positioning device, computer equipment and storage medium
- Publication number: CN113017833A
- Application number: CN202110210882.4A
- Authority: CN (China)
- Prior art keywords: image, image data, body surface, data, anatomical
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- A — Human Necessities
- A61 — Medical or Veterinary Science; Hygiene
- A61B — Diagnosis; Surgery; Identification
- A61B34/00 — Computer-aided surgery; manipulators or robots specially adapted for use in surgery
- A61B34/10 — Computer-aided planning, simulation or modelling of surgical operations
- A61B34/20 — Surgical navigation systems; devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B2034/101 — Computer-aided simulation of surgical operations
- A61B2034/105 — Modelling of the patient, e.g. for ligaments or bones
- A61B2034/107 — Visualisation of planned trajectories or target regions
- A61B2034/108 — Computer aided selection or customisation of medical implants or cutting guides
- A61B2034/2046 — Tracking techniques
- A61B2034/2065 — Tracking using image or pattern recognition
Abstract
The invention relates to the field of medical image recognition and discloses an organ positioning method and device, a computer device, and a storage medium. The method comprises the following steps: acquiring CT/MRI data of a specified object; constructing a three-dimensional anatomical image of the specified object according to the CT/MRI data; acquiring image data of the specified object, wherein the image data comprises body surface anatomical landmarks; and performing coincidence reconstruction on the three-dimensional anatomical image and the image data to obtain a mixed reality coincidence image meeting a preset coincidence requirement. The invention achieves accurate projection of internal organs onto the body surface, so that a doctor can grasp the precise position of a diseased or target organ globally and thereby design the surgical path, or select the surgical incision and puncture site, more accurately.
Description
Technical Field
The invention relates to the field of medical image recognition, and in particular to an organ positioning method and device, a computer device, and a storage medium.
Background
More and more digital techniques, such as 3D printing, are being applied in the medical field. Yet many clinical problems remain that digital techniques could improve. For example, when performing surgery, a surgeon needs to determine the precise location of an organ in order to accurately select a surgical incision or puncture site, or to design a surgical path.
In the prior art, the main means of determining the precise position of an organ are: 1. preoperative dye marking under an endoscope; 2. intraoperative ultrasonic positioning; 3. X-ray localization (for bone). None of these means can achieve an accurate holographic projection of the organ onto the body surface. Moreover, endoscopic dye positioning is quite invasive, and the dye disperses easily, so the positioning is not accurate enough; ultrasonic positioning requires trained technicians and dedicated instruments and can only be performed for certain organs; and X-ray localization exposes the patient to radiation, while accurate body surface projection of organs other than bone is difficult to achieve.
Disclosure of Invention
In view of the above, it is necessary to provide an organ positioning method, an organ positioning device, a computer device, and a storage medium that address the difficulty of organ positioning.
A method of organ localization comprising:
acquiring CT/MRI data of a specified object;
constructing a three-dimensional anatomical image of the specified object from the CT/MRI data;
acquiring image data of the specified object, wherein the image data comprises body surface anatomical landmarks;
and performing coincidence reconstruction on the three-dimensional anatomical image and the image data to obtain a mixed reality coincidence image meeting a preset coincidence requirement.
An organ positioning device, comprising:
a scan data acquisition module, configured to acquire CT/MRI data of a specified object;
a three-dimensional anatomical image construction module, configured to construct a three-dimensional anatomical image of the specified object according to the CT/MRI data;
an image data acquisition module, configured to acquire image data of the specified object containing the body surface anatomical landmarks;
and a mixed image construction module, configured to perform coincidence reconstruction on the three-dimensional anatomical image and the image data to obtain a mixed reality coincidence image meeting the preset coincidence requirement.
A computer device comprising a memory, a processor, and computer readable instructions stored in the memory and executable on the processor, wherein the processor implements the organ positioning method when executing the computer readable instructions.
One or more readable storage media storing computer readable instructions that, when executed by one or more processors, cause the one or more processors to perform the organ positioning method described above.
The organ positioning method, organ positioning device, computer device, and storage medium acquire CT/MRI data of a specified object and construct a three-dimensional anatomical image of the specified object from the CT/MRI data; the three-dimensional anatomical image is an internal image of the specified object used to determine the actual positions of organs. Image data of the specified object containing body surface anatomical landmarks is acquired; the image data is an external, easily perceived visual image of the specified object. Coincidence reconstruction of the three-dimensional anatomical image and the image data yields a mixed reality coincidence image meeting the preset coincidence requirement, which establishes the association between the internal image and the external visual image of the specified object and accurately locates the positions of organs. The organ positioning method of the invention achieves accurate projection of internal organs onto the body surface, so that a doctor can grasp the precise position of a diseased or target organ globally and thereby design the surgical path, or select the surgical incision and puncture site, more accurately.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of the application environment of the organ positioning method according to an embodiment of the present invention;
Fig. 2 is a schematic flow chart of the organ positioning method according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of an indicating instrument for indicating organ information according to an embodiment of the present invention;
Fig. 4 is a preoperative scan image according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of the organ positioning device according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of a computer device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The organ positioning method provided in this embodiment can be applied in the application environment shown in Fig. 1, in which a client communicates with a server. Clients include, but are not limited to, personal computers, notebook computers, smartphones, tablet computers, and portable wearable devices. The server can be implemented as an independent server or as a server cluster composed of multiple servers.
In an embodiment, as shown in Fig. 2, an organ positioning method is provided; it is described here using the example of the method applied to the server in Fig. 1, and includes the following steps:
s10, CT/MRI data of the specified object is acquired.
Understandably, the specified object may be a patient who needs a surgical procedure. CT/MRI data refers to CT (Computed Tomography) data or MRI (Magnetic Resonance Imaging) data.
S20, constructing a three-dimensional anatomical image of the specified object according to the CT/MRI data.
Understandably, the three-dimensional anatomical image is a three-dimensional volumetric model constructed from the CT/MRI data. The larger the volume of CT/MRI data, the better the fidelity of the three-dimensional anatomical image. In one example, the specified object is scanned with a 64-slice CT at a slice thickness of 1.25 mm, yielding rich CT data; the three-dimensional anatomical image constructed from these data has good fidelity, a clear image, and high recognizability. Moreover, the three-dimensional anatomical image has good data universality, compatibility, and editability, which reduces programming and development difficulty, and it supports observation from multiple angles and along arbitrary sections.
Optionally, step S20, constructing a three-dimensional anatomical image of the specified object according to the CT/MRI data, includes:
s201, extracting a two-dimensional image sequence from the CT/MRI data;
s202, processing the two-dimensional image sequence through a preset image processing algorithm to generate the three-dimensional anatomical image.
Understandably, a two-dimensional image sequence can be extracted from the CT/MRI data. For example, scanning the specified object with a 64-slice CT at a slice thickness of 1.25 mm yields a set of CT slices with a slice spacing of 1.25 mm; arranged in order, these slices form the two-dimensional image sequence.
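By way of illustration only, the following minimal Python sketch shows how such a two-dimensional image sequence could be read from disk and stacked into a volume; the use of the SimpleITK library and the directory path are assumptions of this example, not part of the disclosed method.

```python
import SimpleITK as sitk

# Read a DICOM series (e.g., 1.25 mm CT slices) from a directory and
# stack it into a single 3D volume; "ct_series_dir" is a hypothetical path.
reader = sitk.ImageSeriesReader()
file_names = reader.GetGDCMSeriesFileNames("ct_series_dir")
reader.SetFileNames(file_names)
volume = reader.Execute()  # slices ordered by position -> 3D image

# Spacing reflects the in-plane resolution and the 1.25 mm slice spacing.
print("volume size:", volume.GetSize())
print("voxel spacing (mm):", volume.GetSpacing())
```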
Preset image processing algorithms include, but are not limited to, multi-planar reconstruction (MPR), maximum intensity projection (MIP), and curved planar reconstruction (CPR). Multi-planar reconstruction can image structures in any plane, so normal tissues, organs, or lesions can be observed from any angle; it can also display the cross-section of a luminal structure to assess the degree of stenosis, evaluate vascular invasion, and truly reflect the positional relationships between organs. Maximum intensity projection projects the voxel with the highest CT value within a given thickness (i.e., the CT slab thickness) onto a background plane, displaying all or part of the high-density blood vessels and/or organs. Curved planar reconstruction selects a specific curved path and displays all voxels on that path in a single plane, so the full length of a strongly curved tubular structure, such as the splenic artery, the pancreatic duct, or a coronary artery, can be evaluated at once. Curved planar reconstruction shows wall lesions of luminal structures (such as plaques and stenoses) and the positional relationship between a tubular structure and its surroundings; however, it does not preserve normal anatomical structures and relationships (the tubular structure is straightened), and several curved reconstructions at different angles are needed to evaluate a lesion completely.
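Of the three algorithms above, maximum intensity projection is the simplest to state precisely: it is a per-ray maximum over the volume. A minimal numpy sketch follows, assuming the volume is already an array in (slice, row, column) order; the axis convention and synthetic test data are assumptions of this example.

```python
import numpy as np

def max_intensity_projection(volume: np.ndarray, axis: int = 0) -> np.ndarray:
    """Project the highest CT value along one axis onto a plane,
    highlighting high-density structures such as vessels or bone."""
    return volume.max(axis=axis)

# Example: axial MIP of a synthetic (slices, rows, cols) CT array.
ct = np.random.randint(-1000, 1500, size=(64, 256, 256)).astype(np.int16)
mip_axial = max_intensity_projection(ct, axis=0)  # shape (256, 256)
```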
S30, acquiring the image data of the specified object, wherein the image data comprises the body surface anatomical landmarks.
Understandably, the image data may include three-dimensional image data of the specified object's body surface, as well as high-resolution RGB plus infrared spectral data. The body surface anatomical landmarks are markers added to the three-dimensional image data by medical staff. Body surface anatomical landmarks include, but are not limited to, the bilateral mandibular angles, bilateral clavicles, bilateral acromions, bilateral anterior superior iliac spines, bilateral costal arches, and the pubic symphysis. The high-resolution RGB plus infrared spectral data can be used to assess the state of the body surface during surgery.
Optionally, step S30, acquiring the image data of the specified object containing the body surface anatomical landmarks, includes:
s301, acquiring an array scanning image of the specified object through an imaging sensor array;
s302, constructing three-dimensional image data according to the array scanning image;
s303, receiving a calibration instruction, setting a plurality of body surface anatomical landmarks in the three-dimensional image data according to the calibration instruction, and generating the image data containing the body surface anatomical landmarks.
Understandably, the imaging sensor array is a device for acquiring image information of the specified object (typically in a lying position). Imaging sensor arrays include, but are not limited to, RGB image sensors, near-infrared image sensors, thermal (far-infrared) image sensors, and depth (stereo) sensors. The array scan image is the image information collected by these sensors. For example, an RGB image sensor can capture an RGB image of the specified object, and a near-infrared image sensor can acquire a near-infrared image of the specified object.
Three-dimensional image data of the specified object can be constructed from the array scan image. Here, the three-dimensional image data may be a three-dimensional model of the specified object's body surface.
The calibration instruction may be an instruction entered by medical staff to set a plurality of body surface anatomical landmarks on the three-dimensional image data. The selection criteria for body surface anatomical landmarks are: 1. superficial position; 2. predominantly bony structure; 3. little anatomical variation; 4. located on the skull or torso; 5. easy to identify after scanning by the imaging sensor array. Landmarks meeting these criteria display clearly after scanning and can be selected and calibrated by medical staff in the software system that processes the three-dimensional image data. After calibration, the image data of the specified object containing the body surface anatomical landmarks is obtained.
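As a sketch only, the calibrated landmarks could be recorded as named points on the body surface model; the structure, field names, and coordinate values below are assumptions for illustration, not the patent's data format.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class BodySurfaceLandmark:
    name: str                                # e.g. "left acromion"
    position_mm: Tuple[float, float, float]  # coordinates on the body surface model

# A calibration instruction could simply carry the landmarks that the
# medical staff selected on the three-dimensional image data.
calibration = [
    BodySurfaceLandmark("left mandibular angle",  (-61.0, 12.5, 1540.0)),
    BodySurfaceLandmark("right mandibular angle", ( 60.5, 13.0, 1541.0)),
    BodySurfaceLandmark("pubic symphysis",        (  0.8, 95.2,  905.0)),
]
```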
S40, performing coincidence reconstruction on the three-dimensional anatomical image and the image data to obtain a mixed reality coincidence image meeting the preset coincidence requirement.
Understandably, performing coincidence reconstruction on the three-dimensional anatomical image and the image data means continuously adjusting the pose and scale of the three-dimensional anatomical image based on the body surface anatomical landmarks in the image data until all landmarks overlap completely (or the overlap rate exceeds 99%), at which point a mixed reality coincidence image meeting the preset coincidence requirement is obtained. In the mixed reality coincidence image, the position of any organ in the three-dimensional anatomical image is the same as its position in the image data, so the image can be used to calibrate organ positions.
In steps S10-S40, CT/MRI data of the specified object is acquired, and a three-dimensional anatomical image of the specified object is constructed from the CT/MRI data; this three-dimensional anatomical image is an internal image of the specified object used to determine the actual positions of organs. Image data of the specified object containing body surface anatomical landmarks is then acquired; the image data is an external, easily perceived visual image of the specified object. Coincidence reconstruction of the three-dimensional anatomical image and the image data yields a mixed reality coincidence image meeting the preset coincidence requirement, establishing the association between the internal image and the external visual image of the specified object and accurately locating the positions of organs.
Optionally, in step S40, that is, performing coincidence reconstruction on the three-dimensional anatomical image and the image data to obtain a mixed reality coincidence image meeting a preset coincidence requirement, the method includes:
s401, selecting at least three body surface anatomical landmarks;
s402, adjusting the three-dimensional anatomical image according to the selected body surface anatomical landmarks, so that the positions of the body surface anatomical landmarks in the adjusted three-dimensional anatomical image coincide with the corresponding body surface anatomical landmarks in the image data;
and S403, when the positions of all the body surface anatomical landmarks in the adjusted three-dimensional anatomical image coincide with the corresponding body surface anatomical landmarks in the image data, superposing the three-dimensional anatomical image and the image data to generate a first state mixed reality coincidence image.
Understandably, the body surface anatomical landmarks may be aligned one after another. For example, the three-dimensional anatomical image and the image data are first aligned at a first body surface anatomical landmark, so that its position in the three-dimensional anatomical image coincides exactly with the same landmark in the image data; the images are then aligned at a second landmark, and then at a third. Generally, at least three body surface anatomical landmarks are selected. While the three-dimensional anatomical image is being adjusted, its size can also be adjusted as needed.
When the positions of all the body surface anatomical landmarks in the adjusted three-dimensional anatomical image coincide with the corresponding body surface anatomical landmarks in the image data, the three-dimensional anatomical image and the image data are superposed to generate the first state mixed reality coincidence image. If the three-dimensional anatomical image was generated from CT data, the specified object is generally in a deep-inspiration state during the CT scan, so the first state mixed reality coincidence image is a mixed reality coincidence image with the specified object in a deep-inspiration state.
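The patent does not prescribe a particular alignment algorithm, but one standard realization of this step is a least-squares similarity transform (scale, rotation, translation) over the paired landmarks, as in the Umeyama/Kabsch method; the sketch below, including the 2 mm residual tolerance standing in for the preset coincidence requirement, is an assumption of this example.

```python
import numpy as np

def similarity_transform(src: np.ndarray, dst: np.ndarray):
    """Least-squares scale s, rotation R, translation t mapping the
    src landmarks onto the dst landmarks (both of shape (N, 3), N >= 3)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(dst_c.T @ src_c)
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:   # guard against a reflection solution
        D[2, 2] = -1.0
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / (src_c ** 2).sum()
    t = mu_d - scale * R @ mu_s
    return scale, R, t

def coincides(src: np.ndarray, dst: np.ndarray, tol_mm: float = 2.0):
    """Check a coincidence requirement: every transformed landmark must
    land within tol_mm of its counterpart (tolerance is illustrative)."""
    s, R, t = similarity_transform(src, dst)
    residual = np.linalg.norm((s * (R @ src.T)).T + t - dst, axis=1)
    return bool((residual < tol_mm).all()), residual
```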
Optionally, after step S403, that is, after superposing the three-dimensional anatomical image and the image data to generate the first state mixed reality coincidence image once all body surface anatomical landmarks coincide with their counterparts in the image data, the method further includes:
s404, when the CT/MRI data are CT data, acquiring a first respiratory state of the specified object when the CT data are generated;
s405, acquiring second image data of the designated object in a second respiratory state, wherein the second respiratory state is different from the first respiratory state;
s406, setting a plurality of second body table anatomy marks on the second image data, and finely adjusting the first state mixed reality superposition image based on the positions of the second body table anatomy marks in the second image data, so that the positions of all the second body table anatomy marks in the first state mixed reality superposition image coincide with the corresponding second body table anatomy marks in the second image data;
and S407, when the positions of all the second body surface anatomical landmarks in the first state mixed reality coincidence image coincide with the corresponding second body surface anatomical landmarks in the second image data, determining the finely adjusted first state mixed reality coincidence image as the second state mixed reality coincidence image of the specified object.
Understandably, the first respiratory state generally refers to a deep-inspiration state. Thus, the first state mixed reality coincidence image is in fact a mixed reality coincidence image with the specified object in a deep-inspiration state.
The second respiratory state may refer to a deep-expiration state. The specified object can be instructed by a voice prompt to exhale deeply and hold the breath, after which a CT scan is performed to obtain the second image data in the deep-expiration state. Here, the first respiratory state is deep inspiration and the second respiratory state is its opposite.
After the second image data is obtained, several second body surface anatomical landmarks may be set on the second image data. The first state mixed reality coincidence image is then finely adjusted based on each body surface anatomical landmark in the second image data, so that the positions of all the second body surface anatomical landmarks in the first state mixed reality coincidence image coincide with the corresponding second body surface anatomical landmarks in the second image data.
When the positions of all the second body surface anatomical landmarks in the first state mixed reality coincidence image coincide with the corresponding second body surface anatomical landmarks in the second image data, the finely adjusted first state mixed reality coincidence image is determined to be the second state mixed reality coincidence image of the specified object, that is, the mixed image with the specified object in a deep-expiration state. The mixed reality coincidence image can thus include both the first state and the second state mixed reality coincidence images; the second state image further improves organ recognition accuracy.
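To illustrate how the two state images might be consumed together, purely as an assumption of this example (the patent defines only the two images themselves):

```python
def select_state_image(respiratory_phase: float, first_state, second_state):
    """Pick the coincidence image closer to the current breathing phase
    (0.0 = deep inspiration, 1.0 = deep expiration); hypothetical helper."""
    return first_state if respiratory_phase < 0.5 else second_state
```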
Optionally, after step S40, that is, after performing coincidence reconstruction on the three-dimensional anatomical image and the image data to obtain a mixed reality coincidence image meeting a preset coincidence requirement, the method further includes:
s51, acquiring a detection image containing an indicating instrument of the specified object through an imaging sensor array;
s52, identifying the pointing region of the pointing instrument in the detection image through the mixed reality superposition image;
and S53, outputting organ information corresponding to the pointing region.
Understandably, the indicating instrument may be adapted from a conventional laparoscopic surgical instrument. As shown in Fig. 3, the indicating instrument (marked A in the figure) may be rod-shaped; a scanning mark (corresponding to the pointing region of the instrument) is provided at its tip (the dot at the tip in the figure). While the imaging sensor array scans the image data of the specified object, a detection image containing the indicating instrument can be acquired. The scanning mark in the detection image can be identified by a preset recognition algorithm, which yields the position information of the scanning mark.
By comparing the mixed reality coincidence image with the detection image, the position of the scanning mark, i.e., the position of the pointing region, can be determined. In the mixed reality coincidence image, the position of every organ is specified, so the corresponding organ information can be obtained from the position of the pointing region. For example, if the indicating instrument points to region B, whose corresponding organ is the liver, the liver is reported; if it points to region C, whose corresponding organ is the gallbladder, the gallbladder is reported.
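A plausible way to implement this lookup, assuming the mixed reality coincidence image carries a labeled organ volume (which the patent does not specify), is a direct voxel query; all names and the label convention here are illustrative.

```python
import numpy as np

# Hypothetical label convention: each voxel stores an organ ID after
# coincidence reconstruction (0 = background).
ORGAN_NAMES = {0: "background", 1: "liver", 2: "gallbladder"}

def organ_at(label_volume: np.ndarray, origin_mm, spacing_mm, point_mm):
    """Map the scanning mark's 3D position (mm, patient space) to a voxel
    index and return the organ name stored at that voxel."""
    idx = np.round((np.asarray(point_mm) - np.asarray(origin_mm))
                   / np.asarray(spacing_mm)).astype(int)
    if (idx < 0).any() or (idx >= np.asarray(label_volume.shape)).any():
        return "outside field of view"
    return ORGAN_NAMES.get(int(label_volume[tuple(idx)]), "unknown")
```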
Optionally, after step S40, that is, after performing coincidence reconstruction on the three-dimensional anatomical image and the image data to obtain a mixed reality coincidence image meeting a preset coincidence requirement, the method further includes:
s54, receiving a searching instruction for searching a target organ;
and S55, responding to the searching instruction, and outputting body surface identification information corresponding to the target organ.
Understandably, medical staff can enter a search instruction in the software system that processes the three-dimensional image data (the same software system can be used to search for the target organ). The software system responds to the search instruction, determines the position of the target organ through the mixed reality coincidence image, and then outputs the body surface identification information corresponding to the target organ. This body surface identification information can assist in determining the location of the surgical incision. As shown in Fig. 4, a preoperative scan image, the target organ is the appendix, and the two white dots are the body surface identification information corresponding to the appendix.
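As a sketch of how the body surface identification information could be derived, the centroid projection below is an assumption of this example rather than the patent's prescribed method; a real system would intersect the skin surface instead of the crude top-of-volume lift used here.

```python
import numpy as np

def body_surface_marker(label_volume, spacing_mm, origin_mm, organ_id):
    """Project the target organ's centroid along an assumed anterior axis
    (axis 0) onto the body surface and return the marker position in mm."""
    voxels = np.argwhere(label_volume == organ_id)
    if voxels.size == 0:
        raise ValueError("target organ not found in labeled volume")
    centroid = voxels.mean(axis=0)
    surface_idx = centroid.copy()
    surface_idx[0] = 0  # crude lift to the top of the volume
    return np.asarray(origin_mm) + surface_idx * np.asarray(spacing_mm)
```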
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
In an embodiment, an organ positioning device is provided, corresponding one-to-one to the organ positioning method in the above embodiment. As shown in Fig. 5, the organ positioning device includes a scan data acquisition module 10, a three-dimensional anatomical image construction module 20, an image data acquisition module 30, and a mixed image construction module 40. The functional modules are explained in detail as follows:
a scan data acquisition module 10 for acquiring CT/MRI data of a specified object;
a three-dimensional anatomical image construction module 20, configured to construct a three-dimensional anatomical image of the specified object according to the CT/MRI data;
an image data acquisition module 30, configured to acquire image data of the specified object containing the body surface anatomical landmarks;
and a mixed image construction module 40, configured to perform coincidence reconstruction on the three-dimensional anatomical image and the image data to obtain a mixed reality coincidence image meeting the preset coincidence requirement.
Optionally, the organ positioning apparatus further comprises:
a detection image acquisition module, configured to acquire, through an imaging sensor array, a detection image of the specified object containing the indicating instrument;
a pointing region identification module, configured to identify the pointing region of the indicating instrument in the detection image from the mixed reality coincidence image;
and an organ information output module, configured to output organ information corresponding to the pointing region.
Optionally, the organ positioning apparatus further comprises:
a search module, configured to receive a search instruction for searching for a target organ;
and a body surface identification information output module, configured to output, in response to the search instruction, body surface identification information corresponding to the target organ.
Optionally, the three-dimensional anatomical image construction module 20 includes:
an image sequence extraction unit, configured to extract a two-dimensional image sequence from the CT/MRI data;
and a three-dimensional anatomical image generation unit, configured to process the two-dimensional image sequence through a preset image processing algorithm to generate the three-dimensional anatomical image.
Optionally, the image data acquisition module 30 includes:
an imaging unit, configured to acquire an array scan image of the specified object through an imaging sensor array;
a three-dimensional graph construction unit, configured to construct three-dimensional image data according to the array scan image;
and a body surface landmark calibration unit, configured to receive a calibration instruction, set a plurality of body surface anatomical landmarks in the three-dimensional image data according to the calibration instruction, and generate the image data containing the body surface anatomical landmarks.
Optionally, the mixed image construction module 40 includes:
a body surface landmark selection unit, configured to select at least three body surface anatomical landmarks;
an adjusting unit, configured to adjust the three-dimensional anatomical image according to the selected body surface anatomical landmarks, so that the positions of the body surface anatomical landmarks in the adjusted three-dimensional anatomical image coincide with the corresponding body surface anatomical landmarks in the image data;
and a first coincidence image unit, configured to superpose the three-dimensional anatomical image and the image data to generate a first state mixed reality coincidence image when the positions of all the body surface anatomical landmarks in the adjusted three-dimensional anatomical image coincide with the corresponding body surface anatomical landmarks in the image data.
Optionally, the mixed image construction module 40 further includes:
a respiratory state determination unit, configured to acquire, when the CT/MRI data is CT data, the first respiratory state of the specified object at the time the CT data was generated;
a second image acquisition unit, configured to acquire second image data of the specified object in a second respiratory state, the second respiratory state being different from the first respiratory state;
a fine adjustment unit, configured to set a plurality of second body surface anatomical landmarks on the second image data and finely adjust the first state mixed reality coincidence image based on the positions of the second body surface anatomical landmarks in the second image data, so that the positions of all the second body surface anatomical landmarks in the first state mixed reality coincidence image coincide with the corresponding second body surface anatomical landmarks in the second image data;
and a second coincidence image unit, configured to determine the finely adjusted first state mixed reality coincidence image as the second state mixed reality coincidence image of the specified object when the positions of all the second body surface anatomical landmarks in the first state mixed reality coincidence image coincide with the corresponding second body surface anatomical landmarks in the second image data.
For specific limitations of the organ positioning device, reference may be made to the above limitations of the organ positioning method, which are not described herein again. All or part of the modules in the organ positioning device can be realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 6. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a readable storage medium and an internal memory. The readable storage medium stores an operating system, computer readable instructions, and a database. The internal memory provides an environment for the operating system and execution of computer-readable instructions in the readable storage medium. The database of the computer device is used for storing data related to the organ positioning method. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer readable instructions, when executed by a processor, implement a method of organ localization. The readable storage media provided by the present embodiment include nonvolatile readable storage media and volatile readable storage media.
In one embodiment, a computer device is provided, comprising a memory, a processor, and computer readable instructions stored on the memory and executable on the processor, the processor when executing the computer readable instructions implementing the steps of:
acquiring CT/MRI data of a specified object;
constructing a three-dimensional anatomical image of the specified object from the CT/MRI data;
acquiring image data of the specified object, wherein the image data comprises body surface anatomical landmarks;
and performing coincidence reconstruction on the three-dimensional anatomical image and the image data to obtain a mixed reality coincidence image meeting a preset coincidence requirement.
In one embodiment, one or more computer-readable storage media storing computer-readable instructions are provided, the readable storage media provided by the embodiments including non-volatile readable storage media and volatile readable storage media. The readable storage medium has stored thereon computer readable instructions which, when executed by one or more processors, perform the steps of:
acquiring CT/MRI data of a specified object;
constructing a three-dimensional anatomical image of the specified object from the CT/MRI data;
acquiring image data of the specified object, wherein the image data comprises body surface anatomical landmarks;
and performing coincidence reconstruction on the three-dimensional anatomical image and the image data to obtain a mixed reality coincidence image meeting a preset coincidence requirement.
It will be understood by those of ordinary skill in the art that all or part of the processes of the methods of the above embodiments may be implemented by hardware under the control of computer readable instructions, which may be stored in a non-volatile or volatile readable storage medium; when executed, the computer readable instructions may include the processes of the above method embodiments. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division of functional units and modules is illustrated; in practical applications, the above functions may be allocated to different functional units and modules as needed, that is, the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the functions described above.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.
Claims (10)
1. An organ positioning method, comprising:
acquiring CT/MRI data of a specified object;
constructing a three-dimensional anatomical image of the specified object from the CT/MRI data;
acquiring image data of the specified object, wherein the image data comprises body surface anatomical landmarks;
and performing coincidence reconstruction on the three-dimensional anatomical image and the image data to obtain a mixed reality coincidence image meeting a preset coincidence requirement.
2. The organ positioning method according to claim 1, wherein after the performing of coincidence reconstruction on the three-dimensional anatomical image and the image data to obtain a mixed reality coincidence image meeting a preset coincidence requirement, the method further comprises:
acquiring, through an imaging sensor array, a detection image of the specified object containing an indicating instrument;
identifying a pointing region of the indicating instrument in the detection image from the mixed reality coincidence image;
and outputting organ information corresponding to the pointing region.
3. The organ positioning method according to claim 1, wherein after the performing of coincidence reconstruction on the three-dimensional anatomical image and the image data to obtain a mixed reality coincidence image meeting a preset coincidence requirement, the method further comprises:
receiving a search instruction for searching for a target organ;
and outputting, in response to the search instruction, body surface identification information corresponding to the target organ.
4. The organ positioning method according to claim 1, wherein the constructing of a three-dimensional anatomical image of the specified object from the CT/MRI data comprises:
extracting a two-dimensional image sequence from the CT/MRI data;
and processing the two-dimensional image sequence through a preset image processing algorithm to generate the three-dimensional anatomical image.
5. The organ positioning method according to claim 1, wherein the acquiring image data of the designated object including a body surface anatomical landmark includes:
acquiring an array scan image of the specified object by an imaging sensor array;
constructing three-dimensional image data according to the array scanning image;
and receiving a calibration instruction, setting a plurality of body surface anatomical landmarks in the three-dimensional image data according to the calibration instruction, and generating the image data containing the body surface anatomical landmarks.
6. The organ positioning method according to claim 1, wherein the performing the coincidence reconstruction of the three-dimensional anatomical image and the image data to obtain a mixed reality coincidence image satisfying a preset coincidence requirement includes:
selecting at least three body surface anatomical landmarks;
adjusting the three-dimensional anatomical image according to the selected body surface anatomical landmarks, so that the positions of the body surface anatomical landmarks in the adjusted three-dimensional anatomical image coincide with the corresponding body surface anatomical landmarks in the image data;
when the positions of all the body surface anatomical landmarks in the adjusted three-dimensional anatomical image coincide with the corresponding body surface anatomical landmarks in the image data, superposing the three-dimensional anatomical image and the image data to generate a first state mixed reality coincidence image.
7. The organ positioning method according to claim 6, wherein after the superposing of the three-dimensional anatomical image and the image data to generate the first state mixed reality coincidence image, the method comprises:
when the CT/MRI data is CT data, acquiring a first respiratory state of the specified object when the CT data is generated;
acquiring second image data of the specified object in a second respiratory state, wherein the second respiratory state is different from the first respiratory state;
setting a plurality of second body surface anatomical landmarks on the second image data, and finely adjusting the first state mixed reality coincidence image based on the positions of the second body surface anatomical landmarks in the second image data, so that the positions of all the second body surface anatomical landmarks in the first state mixed reality coincidence image coincide with the corresponding second body surface anatomical landmarks in the second image data;
and when the positions of all the second body surface anatomical landmarks in the first state mixed reality coincidence image coincide with the corresponding second body surface anatomical landmarks in the second image data, determining the finely adjusted first state mixed reality coincidence image as the second state mixed reality coincidence image of the specified object.
8. An organ positioning device, comprising:
a scan data acquisition module, configured to acquire CT/MRI data of a specified object;
a three-dimensional anatomical image construction module, configured to construct a three-dimensional anatomical image of the specified object according to the CT/MRI data;
an image data acquisition module, configured to acquire image data of the specified object containing the body surface anatomical landmarks;
and a mixed image construction module, configured to perform coincidence reconstruction on the three-dimensional anatomical image and the image data to obtain a mixed reality coincidence image meeting the preset coincidence requirement.
9. A computer device comprising a memory, a processor, and computer readable instructions stored in the memory and executable on the processor, wherein the processor, when executing the computer readable instructions, implements the organ positioning method according to any one of claims 1 to 7.
10. One or more readable storage media storing computer readable instructions which, when executed by one or more processors, cause the one or more processors to perform the organ positioning method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202110210882.4A | 2021-02-25 | 2021-02-25 | Organ positioning method, organ positioning device, computer equipment and storage medium
Publications (1)
Publication Number | Publication Date
---|---
CN113017833A | 2021-06-25
Family
ID=76462304
Family Applications (1)
Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN202110210882.4A | Organ positioning method, organ positioning device, computer equipment and storage medium | 2021-02-25 | 2021-02-25
Country Status (1)
Country | Link
---|---
CN | CN113017833A
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN107016685A * | 2017-03-29 | 2017-08-04 | Zhejiang University | Real-time matched augmented reality projection method for surgical scenes
CN107049489A * | 2017-03-29 | 2017-08-18 | Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences | Surgical navigation method and system
CN107198568A * | 2017-07-30 | 2017-09-26 | Zhao Songling | Precision surgical guidance system and method for abdominal surgery
CN107341791A * | 2017-06-19 | 2017-11-10 | Beijing Quanyu Medical Technology Co., Ltd. | Mixed-reality-based target delineation method, apparatus and system
CN108140242A * | 2015-09-21 | 2018-06-08 | Siemens AG | Registration of a camera with a medical image
CN109259806A * | 2017-07-17 | 2019-01-25 | Yunnan Normal University | Image-guided precise tumor aspiration biopsy method
CN111386555A * | 2018-10-30 | 2020-07-07 | Xi'an Dayi Group Co., Ltd. | Image guidance method and device, medical equipment and computer readable storage medium
CN111419152A * | 2019-01-10 | 2020-07-17 | Covidien LP | Endoscopic imaging with enhanced parallax
CN112349382A * | 2020-11-09 | 2021-02-09 | Second Affiliated Hospital of Army Medical University, PLA | Visualized simplified acupuncture system and method
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 2021-06-25