CN212276454U - Multi-modal image fusion system - Google Patents


Info

Publication number
CN212276454U
CN212276454U (application CN202022143020.8U)
Authority
CN
China
Prior art keywords: image, positioning, small ball, medical, fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202022143020.8U
Other languages
Chinese (zh)
Inventor
余建皓
赵贵生
祝军玲
Current Assignee
Beijing Smart Image Technology Co., Ltd.
Original Assignee
Beijing Smart Image Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Beijing Smart Image Technology Co., Ltd.
Priority to CN202022143020.8U
Application granted
Publication of CN212276454U

Abstract

The utility model discloses a multi-modal image fusion system comprising: a video controller, a medical imaging device, a binocular camera, a human body surface reference plane positioning module, a handheld device positioning module, a display, a touch screen, and a keyboard and mouse, wherein the video controller fuses the single-modality medical images to be fused on the basis of the ball position images. By performing multi-modal image fusion, the utility model associates the individual single-modality images, so that medical staff can diagnose diseases from the multi-modal fusion image, improving the efficiency of delimiting the surgical field and shortening the time the diagnostic process takes.

Description

Multi-modal image fusion system
Technical Field
The utility model relates to the technical field of medical equipment, and more specifically to a multi-modal image fusion system.
Background
Medical imaging not only extends the range of examinations of the human body and improves the level of diagnosis, but can also be used to treat certain diseases. In the conventional scheme, each medical imaging device in the operating room displays image information on its own display, so during an operation a doctor must glance left and right, or even walk over to a particular imaging device to view its image information: for example, viewing the patient's electrocardiographic information on an electrocardiograph, viewing in-vivo endoscope information on an endoscope, viewing image information on a C-arm, and viewing detailed information on a microscope. This greatly wastes surgical time and increases the risk of patient infection.
To solve this problem, some companies have developed a multi-modality imaging chain technique, in which all single-modality image sources in the operating room are connected together to form a control link, and finally displayed on a medical display. By changing the single-mode display to the multi-mode display, the time taken by the doctor to view the image information can be greatly reduced.
Because existing multi-modal displays merely present several single-modality images on the same screen, the single-modality images remain independent of one another with no correlation. Medical staff must therefore synthesize the single-modality images from experience to diagnose diseases, so the efficiency of delimiting the surgical field is low and the diagnostic process takes a long time.
SUMMARY OF THE UTILITY MODEL
In view of this, the utility model discloses a multi-modal image fusion system that associates the individual single-modality images, so that medical staff can diagnose diseases from a multi-modal fusion image, greatly improving the efficiency of delimiting the surgical field and shortening the time the diagnostic process takes.
A multi-modal image fusion system, comprising: a video controller, a medical imaging device, a binocular camera, a human body surface reference plane positioning module, a handheld device positioning module, a display, a touch screen, and a keyboard and mouse;
the human body surface reference surface positioning module comprises a plurality of positioning small balls, each positioning small ball is marked as a first positioning small ball, each first positioning small ball is used for being placed at a preset position, each first positioning small ball is provided with a unique corresponding small ball code, and the preset positions are as follows: a position on the body surface of the operative portion near the surgically-set portal;
the handheld device positioning module comprises at least one positioning small ball, denoted a second positioning small ball; each second positioning small ball carries a unique corresponding ball number, and a second positioning small ball is to be placed on the handle used to acquire an ultrasound image;
the binocular camera is used for collecting a small ball position image containing position information of the positioning small ball;
the touch screen is used for inputting a mode selection instruction of an operation mode;
the keyboard and the mouse are used for inputting an image selection instruction of the medical image to be fused;
the display is used for displaying the multi-modal fusion image;
the video controller is connected to the medical imaging device, the binocular camera, the display, the touch screen, and the keyboard and mouse respectively, and the video controller is used for performing multi-modal image fusion.
Optionally, the medical imaging device includes: a C-arm, ultrasound equipment, endoscopic equipment, microscopy equipment, and a picture archiving and communication system (PACS).
According to the above technical scheme, the utility model discloses a multi-modal image fusion system comprising: a video controller, a medical imaging device, a binocular camera, a human body surface reference plane positioning module, a handheld device positioning module, a display, a touch screen, and a keyboard and mouse. By performing multi-modal image fusion, the utility model associates the individual single-modality images, so that medical staff can diagnose diseases from the multi-modal fusion image, greatly improving the efficiency of delimiting the surgical field and shortening the time the diagnostic process takes.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required to be used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the disclosed drawings without creative efforts.
Fig. 1 is a schematic structural diagram of a multi-modal image fusion system according to an embodiment of the present invention;
fig. 2 is a flowchart of a multi-modal image fusion method disclosed in the embodiment of the present invention;
fig. 3 is a flowchart of an acquisition method for an image fusion datum plane according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only some embodiments of the present invention, not all embodiments. Based on the embodiments in the present invention, all other embodiments obtained by a person skilled in the art without creative work belong to the protection scope of the present invention.
The utility model discloses a multi-modal image fusion system comprising: a video controller, a medical imaging device, a binocular camera, a human body surface reference plane positioning module, a handheld device positioning module, a display, a touch screen, and a keyboard and mouse. By performing multi-modal image fusion, the utility model associates the individual single-modality images, so that medical staff can diagnose diseases from the multi-modal fusion image, greatly improving the efficiency of delimiting the surgical field and shortening the time the diagnostic process takes.
Referring to fig. 1, an embodiment of the present invention discloses a schematic structural diagram of a multi-modal image fusion system, which includes: a video controller 11, a medical imaging device 12, a binocular camera 13, a human body surface reference plane positioning module 14, a handheld device positioning module 15, a display 16, a touch screen 17, and a keyboard and mouse 18.
Wherein:
the medical imaging device 12 is connected to the video controller 11 through an HDMI (High Definition Multimedia Interface), and the medical imaging device 12 includes, but is not limited to, a C-arm (DSA device), an ultrasound device, an endoscope device, a microscope device, and a PACS (Picture Archiving and Communication Systems), wherein DSA is generally called in english: digital subset architecture, chinese interpretation: digital subtraction angiography.
The human body surface reference plane positioning module 14 includes a plurality of positioning small balls. For convenience of description, each positioning small ball included in the human body surface reference plane positioning module 14 is referred to as a first positioning small ball. Each first positioning small ball is to be placed at a preset position, the preset position being: a position on the body surface of the operative site near the surgically planned entry port.
For example, the human body surface reference plane positioning module 14 includes three positioning small balls: positioning ball 1, positioning ball 2 and positioning ball 3. In practical application, positioning balls 1, 2 and 3 are placed on the body surface of the operative site near the surgically planned entry port, distributed as a scalene (non-isosceles) triangle.
It should be noted that each positioning ball in the human body surface reference plane positioning module 14 has a unique corresponding ball number.
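The patent does not spell out how a reference plane is derived from the three beads; a minimal geometric sketch (all function and variable names here are hypothetical, not from the original) would take the plane's unit normal from the cross product of two edge vectors of the bead triangle:

```python
import numpy as np

def reference_plane(p1, p2, p3):
    """Derive a reference plane (unit normal n and offset d, with n . x = d)
    from three non-collinear positioning-bead centers."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    n = np.cross(p2 - p1, p3 - p1)      # normal of the bead triangle
    norm = np.linalg.norm(n)
    if norm == 0:
        raise ValueError("beads are collinear; no unique plane")
    n = n / norm
    return n, float(n @ p1)

# Three beads laid out as a scalene triangle on the body surface (z = 0 here)
n, d = reference_plane([0, 0, 0], [4, 0, 0], [1, 3, 0])
```

Because the triangle is deliberately scalene, each bead can also be told apart later by its distances to the other two.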
The handheld device positioning module 15 includes at least one positioning bead, and for convenience of description, each positioning bead included in the handheld device positioning module 15 is referred to as a second positioning bead, and the second positioning bead is used to be placed on a handle for acquiring an ultrasound image.
The number of second positioning balls contained in the handheld device positioning module 15 depends on actual needs; in this embodiment it includes positioning ball 4 and positioning ball 5 (see fig. 1).
It should be noted that each second positioning ball in the handheld device positioning module 15 has a unique corresponding ball number.
The binocular camera 13 is connected to the video controller 11 through a network port. The binocular camera 13 is installed at a bedside position, this bedside position being one that: need not be changed during the entire operation, can capture the position of each positioning small ball in the human body surface reference plane positioning module 14 and the handheld device positioning module 15, and is not blocked by medical staff while multi-modal image fusion is performed.
The binocular camera 13 is used to collect a ball position image containing position information of the positioning ball.
In practical applications, the binocular camera 13 may collect ball position images including position information of all the first positioning balls, or collect ball position images including position information of all the first positioning balls and all the second positioning balls.
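How the binocular camera turns a pair of views into bead positions is not described in the patent; assuming rectified stereo images, depth would conventionally follow from horizontal disparity (names and parameter values below are illustrative):

```python
def stereo_depth(focal_px, baseline_mm, x_left_px, x_right_px):
    """Depth of a bead center from its horizontal disparity between the two
    rectified views: Z = f * B / (x_left - x_right)."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("non-positive disparity: point is not in front of both views")
    return focal_px * baseline_mm / disparity

# f = 800 px, baseline = 120 mm, disparity = 40 px
z_mm = stereo_depth(800, 120, 500, 460)
```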
The display 16 is connected to the video controller 11 via HDMI, and the display 16 is used to display the multi-modal fusion image. To make it easy for medical staff to view the fused multi-modal images, the display 16 may be a larger-screen display, such as a 55-inch LCD display.
The display 16 in this embodiment can be displayed as multiple split screens, such as 4 split screens or 6 split screens; at the same time, a tiled display (i.e., non-overlay mode) or an image fusion mode can also be selected. The display mode of the display 16 can be selected by clicking a corresponding button on the touch screen 17.
The touch screen 17 is connected with the video controller 11 through a serial port, and the touch screen 17 is used for controlling the video controller 11 and displaying a human-computer interface. Specifically, the touch screen 17 is used for inputting a mode selection command for a surgical mode, wherein the medical imaging devices 12 corresponding to different surgical modes are the same or different.
The keyboard and mouse 18 are connected to the video controller 11 via a USB (universal serial bus). The keyboard and mouse 18 is used for inputting an image selection instruction of the medical image to be fused.
The video controller 11 is configured to: obtain an image selection instruction for the medical images to be fused; select each medical image to be fused corresponding to the image selection instruction from the stored medical images; obtain the image fusion reference plane obtained through preprocessing; and fuse each medical image to be fused on the image fusion reference plane according to the relative positions of the first positioning small balls on that plane, to obtain the multi-modal fusion image.
Wherein, the medical images stored in the video controller are: the medical images acquired by each medical imaging device corresponding to the pre-selected surgical mode.
The image fusion datum plane is obtained by processing a small ball position image which is acquired by the binocular camera and contains all first positioning small balls at the preset position.
In summary, the utility model discloses a multi-modal image fusion system including: a video controller 11, a medical imaging device 12, a binocular camera 13, a human body surface reference plane positioning module 14, a handheld device positioning module 15, a display 16, a touch screen 17, and a keyboard and mouse 18. Before the operation, each first positioning small ball in the human body surface reference plane positioning module 14 is placed on the body surface of the operative site, at a position near the surgically planned entry port. The binocular camera 13 collects a ball position image containing the position information of all first positioning small balls, and the video controller 11 obtains an image fusion reference plane by processing this ball position image. The video controller 11 then obtains each corresponding medical image to be fused according to the received image selection instruction and, according to the relative positions of the first positioning small balls on the image fusion reference plane, fuses the medical images to be fused on that reference plane to obtain the multi-modal fusion image. By fusing the single-modality medical images to be fused, the utility model associates the individual single-modality images, so that medical staff can diagnose diseases from the multi-modal fusion image, greatly improving the efficiency of delimiting the surgical field and shortening the time the diagnostic process takes.
Referring to fig. 2, a flowchart of a multi-modal image fusion method disclosed in an embodiment of the present invention; the method is applied to the video controller in the embodiment shown in fig. 1, and the method includes:
s101, acquiring an image selection instruction of a medical image to be fused;
it should be noted that, after receiving the image selection instruction, the video controller will automatically enter the image fusion mode.
In this embodiment, before an operation, a doctor selects a medical image to be fused, which needs to undergo image fusion, such as a C-arm image and a B-mode image, through a mouse and a keyboard.
Of course, during the operation the doctor can also select the medical images to be fused with the mouse and keyboard. The time point for selecting the medical images to be fused is determined according to actual needs, and the utility model places no limitation here.
S102, selecting each medical image to be fused corresponding to the image selection instruction from the stored medical images;
wherein, the medical images stored in the video controller are: the medical images acquired by each medical imaging device corresponding to the pre-selected surgical mode.
In practical applications, the medical images to be fused may include: C-arm images (frontal and lateral images of the human body), G-arm images (frontal and lateral images of the human body), intraoperative B-mode ultrasound images and intraoperative endoscope images.
S103, acquiring an image fusion reference plane obtained by preprocessing;
specifically, referring to fig. 3, an embodiment of the present invention discloses a method for acquiring an image fusion datum plane, where the method is applied to a video controller, and the method includes:
step S201, acquiring actual position information of each first positioning small ball in a human body surface reference surface positioning module placed at a preset position;
wherein, the preset positions are as follows: on the body surface of the operative part, at a position near the surgically-set entrance.
The actual position information includes: the ball number and volume of each first positioning small ball, and the shortest distances between the first positioning small balls.
In this embodiment, the first positioning beads have the same volume.
The reason for obtaining the volume of each first positioning small ball is as follows: because the distances between the individual first positioning small balls and the binocular camera differ, each first positioning small ball appears at a different size in the image collected by the binocular camera. The utility model compares the size of each ball's gray-scale pixel region computed from the image with the actual characteristic parameters of the first positioning small balls to derive distance information between the binocular camera and each first positioning small ball, and uses this as a correction when the binocular camera calculates distances, thereby improving distance calculation accuracy.
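As a hedged illustration of the size-based correction just described: under a simple pinhole model, a bead of known physical diameter yields an independent range estimate from its apparent diameter in pixels, which could serve as the correction term (the function name and values are hypothetical):

```python
def distance_from_apparent_size(focal_px, real_diameter_mm, apparent_diameter_px):
    """Range estimate from the apparent size of a bead of known diameter
    under a pinhole model: Z = f * D / d_pixels. Comparing this with the
    stereo range would give the correction term described in the text."""
    return focal_px * real_diameter_mm / apparent_diameter_px

# A 10 mm bead imaged 8 px wide with an 800 px focal length
z_mm = distance_from_apparent_size(800, 10, 8)
```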
In this embodiment, after each first positioning pellet is placed at the preset position, the medical staff can manually measure the shortest distance between the first positioning pellets, and the measured shortest distance is input to the video controller through the keyboard and the mouse.
Wherein each first positioning small ball has a unique corresponding small ball number.
It should be noted that the human body surface reference plane positioning module includes a plurality of first positioning small balls, for example three: positioning ball 1, positioning ball 2 and positioning ball 3. In practical application, positioning balls 1, 2 and 3 are placed on the body surface of the operative site near the surgically planned entry port, distributed as a scalene (non-isosceles) triangle.
Step S202, acquiring a small ball position image which is acquired by a binocular camera and contains position information of all first positioning small balls;
step S203, adjusting the position of each first positioning small ball in the small ball position image according to the actual position information of each first positioning small ball, so that the position of each first positioning small ball in the small ball position image is consistent with the actual position;
and step S204, determining the adjusted small ball position image as an image fusion reference surface.
And S104, fusing the medical images to be fused on the image fusion reference surface according to the relative positions of the first positioning small balls on the image fusion reference surface to obtain a multi-modal fusion image.
It should be noted that, in practical application, only the C-arm image contains images of the positioning balls; the other medical images to be fused contain no positioning-ball image, only the image information of their respective devices. Although the medical images to be fused other than the C-arm image contain no image of the positioning balls, each medical image to be fused has a reference plane derived on the basis of positioning balls 1, 2 and 3.
Fusion process: after the image fusion reference plane is determined, fusion calculation is performed on each medical image to be fused based on the image fusion reference plane.
The process of fusing each medical image to be fused on the image fusion reference plane is illustrated as follows:
suppose the medical image to be fused is: c-arm image, during the operation, the operator places the positioning small ball 1, the positioning small ball 2 and the positioning small ball 3 on the body surface of the patient. The C-shaped arm obtains a C-shaped arm image through the radiation imaging and image reconstruction technology in the equipment self-technology, and the C-shaped arm image contains the image information of the positioning small ball 1, the positioning small ball 2 and the positioning small ball 3.
The C-arm image with the information of the image of the positioning ball is input to the video controller.
The video controller analyzes and compares the images of the positioning balls, finds the positions of positioning balls 1, 2 and 3, and measures their sizes by gray-value measurement. The known characteristic information of the positioning balls, input in advance, is then matched against the inter-ball distances in the image, and the image position is adjusted in real time until the positions of the positioning balls agree with the known characteristic information of the positioning balls input in advance.
When the two agree, the reference information of the C-arm image is obtained, and the C-arm image source is considered to have been successfully matched to the reference plane of the binocular camera's image source.
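The matching step above relies on the scalene layout of balls 1–3: because all three inter-ball distances differ, only one labeling of the detected beads is consistent with the known distances. A brute-force sketch of that matching (hypothetical names; the patent does not give the procedure):

```python
import itertools
import numpy as np

def label_beads(detected_pts, known_dists):
    """Assign bead numbers to detected points by testing every permutation
    against the known pairwise distances; the scalene layout makes the
    best-fit assignment unique."""
    pts = [np.asarray(p, dtype=float) for p in detected_pts]
    best, best_err = None, float("inf")
    for perm in itertools.permutations(range(len(pts))):
        err = sum(abs(float(np.linalg.norm(pts[perm[i]] - pts[perm[j]])) - d)
                  for (i, j), d in known_dists.items())
        if err < best_err:
            best, best_err = perm, err
    return best  # best[k] = index of the detected point that is bead k+1

# A 3-4-5 triangle detected in shuffled order
perm = label_beads([(0, 4), (0, 0), (3, 0)],
                   {(0, 1): 3.0, (0, 2): 4.0, (1, 2): 5.0})
```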
Suppose the medical image to be fused is a B-mode ultrasound image. A positioning ball (ball number 4) is bound to the B-mode ultrasound handle, the distance between positioning ball 4 and the point of the probe that contacts the body surface is measured manually, and this distance value is input into the video controller.
The binocular camera collects distance position images of the positioning small ball 4 and the positioning small balls 1, 2 and 3 in real time and sends the distance position images to the video controller, and the video controller calibrates the relative relation between the B-mode ultrasonic image and other images according to the distance position images.
Suppose the medical image to be fused is an endoscope image; the handling is similar to the B-mode ultrasound image. A positioning ball (ball number 5) is bound to the handle of the endoscope, and the distance between positioning ball 5 and the distal end of the endoscope (the end that enters the human body) is measured manually. During the operation, the video controller calibrates the insertion depth of the endoscope in the human body according to the positions of positioning ball 5 and positioning balls 1, 2 and 3.
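For both the ultrasound probe and the endoscope, the manually measured ball-to-tip distance lets the controller extrapolate the tip location from the tracked handle ball; a sketch, assuming the instrument's axis direction is known (all names hypothetical):

```python
import numpy as np

def tip_position(ball_pos, axis_direction, offset_mm):
    """Extrapolate the instrument tip (probe contact point or endoscope
    distal end) from the tracked handle ball, applying the manually
    measured ball-to-tip distance along the instrument axis."""
    d = np.asarray(axis_direction, dtype=float)
    d = d / np.linalg.norm(d)
    return np.asarray(ball_pos, dtype=float) + offset_mm * d

# Ball 100 mm above the reference plane, tip 30 mm further down the axis
tip = tip_position([0, 0, 100], [0, 0, -1], 30)
```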
It should be noted that when the medical image to be fused changes during the operation, the multimodality fused image obtained by fusion changes accordingly.
It should be noted that each medical image to be fused can be seen in the multi-modal fusion image. For example, if the multi-modal fusion image is obtained by fusing B-mode ultrasound and C-arm images, both kinds of image can be seen at the corresponding position of the multi-modal fusion image, making the image information more concrete and providing a basis for carrying out the operation. In practical application, each pre-fusion medical image to be fused can also be displayed in other split screens of the display, so that the operator can compare the images before and after fusion more intuitively.
In conclusion, the utility model discloses a multi-modal image fusion method. Before the operation, each first positioning small ball in the human body surface reference plane positioning module is placed on the body surface of the operative site, at a position near the surgically planned entry port. The binocular camera collects a ball position image containing the position information of all first positioning small balls, and the video controller obtains an image fusion reference plane by processing this ball position image. The video controller obtains each corresponding medical image to be fused according to the received image selection instruction and, according to the relative positions of the first positioning small balls on the image fusion reference plane, fuses the medical images to be fused on that reference plane to obtain the multi-modal fusion image. By fusing the single-modality medical images to be fused, the utility model associates the individual single-modality images, so that medical staff can diagnose diseases from the multi-modal fusion image, greatly improving the efficiency of delimiting the surgical field and shortening the time the diagnostic process takes.
To further optimize the above embodiment, step S104 may further include:
and outputting the multi-modal fusion image to a display for displaying.
Specifically, the multi-modal fusion image can be output to a split screen of a display for display.
In practical application, in order to facilitate a doctor to locate a surgical site, a 3D human anatomy map can be displayed on a display, the 3D human anatomy map is pre-stored in a background database, and the 3D human anatomy map and the multi-mode fusion image are displayed on different split screens of the display.
It should be noted that each medical image to be fused before fusion can also be displayed on the display.
For example, the multi-modal display is often set to show six split screens on the display:
the top left shows the multi-modal fusion image;
the top middle shows fusion source 1 (e.g., the C-arm image);
the bottom left shows fusion source 2 (e.g., the B-mode ultrasound image);
the top right shows the real-time 3D anatomical image;
the bottom middle shows the PACS medical record data;
the bottom right shows the ECG monitor data.
To further optimize the above embodiment, before performing step S101, the multi-modal image fusion method may further include:
acquiring a mode selection instruction of an operation mode, and determining each medical imaging device corresponding to the operation mode according to the mode selection instruction;
and acquiring and storing the medical image acquired by each medical imaging device.
In this embodiment, the surgeon may select a surgical mode on the touch screen prior to surgery, including but not limited to general surgery, trauma surgery, minimally invasive intervention, neurosurgery, and the like. Since the medical images required by different surgical modes may differ, different surgical modes may correspond to different medical imaging devices.
Alternatively, medical images corresponding to different operation modes can be displayed at different positions of the display, for example, medical images corresponding to general surgery are displayed at the lower right corner of the display.
After determining each medical imaging device corresponding to the operation mode selected by the doctor, the video controller acquires and stores the medical images acquired by each medical imaging device in real time.
In order to make it convenient for medical staff to check the position of the multi-modal fusion image within the 3D human anatomy map while operating on a patient, the 3D human anatomy map can either be displayed on a split screen of the display, or be fused with each medical image to be fused.
Therefore, to further optimize the above embodiment, step S104 may specifically include:
acquiring a 3D human anatomy map;
and fusing the 3D human anatomy map and each medical image to be fused on the image fusion reference plane according to the relative position of each first positioning small ball on the image fusion reference plane to obtain the multi-modal fusion image, wherein the 3D human anatomy map serves as the bottom layer of the multi-modal fusion image.
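Using the 3D anatomy map as the bottom layer suggests a conventional alpha-blend compositing step; a minimal sketch (the blending weight and all names are assumptions, not specified in the patent):

```python
import numpy as np

def overlay_on_anatomy(anatomy, modal_image, alpha=0.5):
    """Composite a registered single-modality image over the 3D-anatomy
    bottom layer by alpha blending (arrays of equal shape, values in [0, 1])."""
    anatomy = np.asarray(anatomy, dtype=float)
    modal_image = np.asarray(modal_image, dtype=float)
    return (1.0 - alpha) * anatomy + alpha * modal_image

blended = overlay_on_anatomy([[0.0]], [[1.0]], alpha=0.25)
```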
To further optimize the above embodiment, the multi-modal image fusion method may further include:
and outputting the 3D human anatomy map, the multi-mode fusion images and each medical image to be fused to one split screen of a display for displaying.
In conclusion, the utility model discloses a multi-modal image fusion method. Before the operation, each first positioning small ball in the human body surface reference plane positioning module is placed on the body surface of the operative site, at a position near the surgically planned entry port. The binocular camera collects a ball position image containing the position information of all first positioning small balls, and the video controller obtains an image fusion reference plane by processing this ball position image. The video controller obtains each corresponding medical image to be fused according to the received image selection instruction, acquires the 3D human anatomy map and, according to the relative positions of the first positioning small balls on the image fusion reference plane, fuses the 3D human anatomy map and the medical images to be fused on that reference plane to obtain the multi-modal fusion image; the 3D human anatomy map, the multi-modal fusion image and each medical image to be fused are each displayed on a split screen of the display. By fusing the single-modality medical images to be fused with the 3D human anatomy map, the utility model associates the individual single-modality images, so that medical staff can diagnose diseases from the multi-modal fusion image, greatly improving the efficiency of delimiting the surgical field and shortening the time the diagnostic process takes.
In addition, selecting each medical image to be fused against the backdrop of the 3D human anatomy map allows medical personnel to view the patient's internal information in real time, greatly reducing the doctor's burden. For puncture operations that require image guidance, a doctor can operate with full clarity under the technical support of the utility model, regardless of how familiar he or she is with the anatomy. The utility model also effectively reduces the prolonged operation times, and the resulting patient blood loss, caused by repeated positioning by the doctor.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (2)

1. A multi-modal image fusion system, comprising: a video controller, a medical imaging device, a binocular camera, a human body surface reference plane positioning module, a handheld device positioning module, a display, a touch screen, and a keyboard and mouse;
the human body surface reference plane positioning module comprises a plurality of positioning small balls, each marked as a first positioning small ball; each first positioning small ball is to be placed at a preset position and carries a uniquely corresponding ball code, the preset position being a position on the body surface of the surgical site near the entry point set for the operation;
the handheld device positioning module comprises at least one positioning small ball, marked as a second positioning small ball; each second positioning small ball is to be placed on the handle used for acquiring ultrasonic images and carries a uniquely corresponding ball code;
the binocular camera is used for collecting a ball position image containing the position information of the positioning small balls;
the touch screen is used for inputting a mode selection instruction of an operation mode;
the keyboard and the mouse are used for inputting an image selection instruction of the medical image to be fused;
the display is used for displaying the multi-modal fusion image;
the video controller is connected to the medical imaging device, the binocular camera, the display, the touch screen, and the keyboard and mouse, respectively, and is used for performing multi-modal image fusion.
2. The multi-modal image fusion system of claim 1, wherein the medical imaging device comprises: a C-arm, ultrasound equipment, endoscopic equipment, microscopy equipment, and a picture archiving and communication system (PACS).
CN202022143020.8U 2020-09-25 2020-09-25 Multi-modal image fusion system Active CN212276454U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202022143020.8U CN212276454U (en) 2020-09-25 2020-09-25 Multi-modal image fusion system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202022143020.8U CN212276454U (en) 2020-09-25 2020-09-25 Multi-modal image fusion system

Publications (1)

Publication Number Publication Date
CN212276454U 2021-01-01

Family

ID=73870529

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202022143020.8U Active CN212276454U (en) 2020-09-25 2020-09-25 Multi-modal image fusion system

Country Status (1)

Country Link
CN (1) CN212276454U (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115645013A (en) * 2022-12-29 2023-01-31 山东百多安医疗器械股份有限公司 Multi-mode tracheostomy device combined with electrocardio ultrasonic endoscope


Similar Documents

Publication Publication Date Title
CN110769740B (en) Universal apparatus and method for integrating diagnostic tests into real-time therapy
KR101929127B1 (en) Apparatus and method for diagnosing a medical condition on the baisis of medical image
JP4786246B2 (en) Image processing apparatus and image processing system
JP5309187B2 (en) MEDICAL INFORMATION DISPLAY DEVICE, ITS OPERATION METHOD, AND MEDICAL INFORMATION DISPLAY PROGRAM
KR20200000820A (en) Video clip selector for medical imaging and diagnosis
AU2021258038B2 (en) Systems and methods for planning medical procedures
KR101636876B1 (en) Apparatus and method for processing medical image
JPH10137231A (en) Medical image processor
US20090080742A1 (en) Image display device and image display program storage medium
CN107854177A (en) A kind of ultrasound and CT/MR image co-registrations operation guiding system and its method based on optical alignment registration
EP2878266A1 (en) Medical imaging system and program
CN105433969A (en) Medical image system and presumed clinical position information display method
JP7100884B2 (en) Video clip Image quality for medical use based on image quality Determining reliability in measuring video clips
US10765321B2 (en) Image-assisted diagnostic evaluation
JP2008259661A (en) Examination information processing system and examination information processor
JP7073661B2 (en) Dynamic analysis device and dynamic analysis system
US20070027408A1 (en) Anatomical Feature Tracking and Monitoring System
CN212276454U (en) Multi-modal image fusion system
US20160199015A1 (en) Image display control device, operating method for same, and image display control program
JP2016209267A (en) Medical image processor and program
CN111951208A (en) Multi-modal image fusion system and image fusion method
CN110946615B (en) Ultrasonic diagnostic apparatus and operation method using the same
JP2020006150A (en) Validity of reference system
JP6540442B2 (en) Display method and display control device
CN113643222A (en) Multi-modal image analysis method, computer device and storage medium

Legal Events

Date Code Title Description
GR01 Patent grant