CN117391977A - Fusion imaging method and device, electronic equipment and storage medium - Google Patents

Fusion imaging method and device, electronic equipment and storage medium

Info

Publication number
CN117391977A
Authority
CN
China
Prior art keywords
image
registration
edge contour
adjusted
adjusting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210759545.5A
Other languages
Chinese (zh)
Inventor
刘畅
王皓浩
梁峭嵘
刘德清
张力男
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sonoscape Medical Corp
Original Assignee
Sonoscape Medical Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sonoscape Medical Corp filed Critical Sonoscape Medical Corp
Priority to CN202210759545.5A priority Critical patent/CN117391977A/en
Publication of CN117391977A publication Critical patent/CN117391977A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10088Magnetic resonance imaging [MRI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10104Positron emission tomography [PET]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

The application discloses a fusion imaging method and apparatus, an electronic device, and a computer-readable storage medium. The fusion imaging method comprises the following steps: acquiring a first image and a second image of a target examination site, wherein the first image and the second image are images in different modalities; extracting an edge contour from the first image, and superimposing the edge contour on a window of the second image for display; adjusting the second image according to the edge contour superimposed on the second image, so as to register the second image with the first image; and fusing the registered second image with the first image to obtain a fused image. The registration of images in different modalities is thereby improved, which in turn improves the fusion imaging of images in different modalities.

Description

Fusion imaging method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of image processing technology, and more particularly to a fusion imaging method and apparatus, an electronic device, and a computer-readable storage medium.
Background
Currently, medical imaging devices of various modalities are widely used in clinical diagnosis and medical research. Since images of different modalities cannot be acquired synchronously, an image in one modality must be acquired in advance, registered with an image in another modality, and then fused with it.
In the related art, images of different modalities are mostly registered manually: one image is fixed, the most similar view of the other image is searched for, and registration and real-time synchronization are completed by manual displacement or parameter adjustment. Typically, the two modality images are either displayed side by side on the screen for comparison, or one of them is adjusted in orientation within the fused image. Both approaches make it difficult to assess the registration of the two images accurately: in the former, the positional comparison is not intuitive, and a good result requires repeated operations; in the latter, unregistered images interfere with each other once superimposed, and their information is hard to tell apart during registration. The related art therefore suffers in efficiency and/or accuracy of image registration, and hence of image fusion, across modalities.
How to improve the registration of images of different modalities, and thereby obtain more accurate fused images, is therefore a key technical problem to be solved by those skilled in the art.
Disclosure of Invention
The invention aims to provide a fusion imaging method and apparatus, an electronic device, and a computer-readable storage medium that improve the registration of images of different modalities and, in turn, the fusion imaging of images of different modalities.
To achieve the above object, the present application provides a fusion imaging method, including:
acquiring a first image and a second image of a target examination site; wherein the first image and the second image are images in different modalities;
extracting an edge contour from the first image, and superimposing the edge contour on a window of the second image for display;
adjusting the second image according to the edge contour superimposed on the second image, so as to register the second image with the first image; and
fusing the registered second image with the first image to obtain a fused image.
If the first image is an image in an ultrasound modality and the second image is an image in a non-ultrasound modality, acquiring the second image of the target examination site includes:
acquiring three-dimensional data of the target examination site;
determining an initial slice plane, and acquiring the second image from the three-dimensional data based on the initial slice plane;
correspondingly, adjusting the second image according to the edge contour superimposed on the second image includes:
taking the edge contour superimposed on the second image as image reference information; and
adjusting the slice plane according to the image reference information, and acquiring an adjusted second image from the three-dimensional data based on the adjusted slice plane, so that the adjusted second image matches the edge contour displayed in the window.
Wherein adjusting the slice plane according to the image reference information includes:
changing orientation parameters of the slice plane according to the image reference information, and adjusting the slice plane based on the orientation parameters.
Alternatively, adjusting the slice plane according to the image reference information includes:
re-acquiring pose data of the ultrasonic probe according to the image reference information, calculating a transformation matrix from the pose data, and adjusting the slice plane based on the transformation matrix.
Wherein, after the adjusted second image is acquired from the three-dimensional data based on the adjusted slice plane, the method further comprises:
determining corresponding registration points in the first image and the second image respectively; and
continuing to adjust the position of the second image according to the registration points in the first image and the second image, so that the adjusted second image matches the edge contour displayed in the window.
Alternatively, after the adjusted second image is acquired from the three-dimensional data based on the adjusted slice plane, the method further comprises:
receiving an input position adjustment parameter; and
continuing to adjust the position of the second image according to the position adjustment parameter, so that the adjusted second image matches the edge contour displayed in the window.
Wherein fusing the registered second image with the first image to obtain a fused image includes:
superimposing the registered second image on the first image according to a preset color imaging mode to obtain the fused image.
If the first image is an image in a non-ultrasound modality and the second image is an image in an ultrasound modality, adjusting the second image according to the edge contour superimposed on the second image includes:
taking the edge contour superimposed on the second image as image reference information; and
triggering adjustment of the scanning angle and position of the ultrasonic probe according to the image reference information, so as to adjust the second image in the window until the adjusted second image matches the edge contour displayed in the window.
Wherein, after triggering adjustment of the scanning angle and position of the ultrasonic probe according to the image reference information to adjust the second image in the window, the method further comprises:
determining corresponding registration points in the first image and the second image respectively; and
continuing to adjust the position of the second image according to the registration points in the first image and the second image, so that the adjusted second image matches the edge contour displayed in the window.
Wherein fusing the registered second image with the first image to obtain a fused image includes:
superimposing the first image on the registered second image according to a preset color imaging mode to obtain the fused image.
Wherein one of the first image and the second image is an image in an ultrasound modality, and the other is an image in a non-ultrasound modality;
after the registered second image is fused with the first image to obtain the fused image, the method further comprises:
acquiring pose data of the ultrasonic probe, and calculating a target slice-plane angle for the image in the non-ultrasound modality according to the pose data; and
adjusting the image in the non-ultrasound modality based on the target slice-plane angle, and re-fusing the adjusted image in the non-ultrasound modality with the registered image in the ultrasound modality to obtain an adjusted fused image.
Wherein acquiring the first image and the second image of the target examination site includes:
acquiring a first image and a second image of the target examination site in a first orientation;
correspondingly, after the second image is adjusted according to the edge contour superimposed on the second image so as to register the second image with the first image, the method further comprises:
acquiring a third image and a fourth image of the target examination site, in different modalities, in a second orientation; and
registering the third image with the fourth image;
correspondingly, fusing the registered second image with the first image to obtain a fused image includes:
fusing the images of the different modalities after multiple rounds of registration to obtain the fused image.
Wherein extracting the edge contour from the first image includes:
extracting tissue image features from the first image; and
deriving a vessel edge contour and an envelope edge contour based on the extracted image features.
To achieve the above object, the present application provides a fusion imaging apparatus, including:
a first acquisition module for acquiring a first image and a second image of a target examination site, wherein the first image and the second image are images in different modalities;
an extraction module for extracting an edge contour from the first image and superimposing the edge contour on a window of the second image for display;
a first registration module for adjusting the second image according to the edge contour superimposed on the second image, so as to register the second image with the first image; and
a first fusion module for fusing the registered second image with the first image to obtain a fused image.
To achieve the above object, the present application provides an electronic device, including:
a memory for storing a computer program; and
a processor for implementing the steps of the fusion imaging method described above when executing the computer program.
To achieve the above object, the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the fusion imaging method described above.
According to the above scheme, the fusion imaging method provided by the present application comprises: acquiring a first image and a second image of a target examination site, wherein the first image and the second image are images in different modalities; extracting an edge contour from the first image, and superimposing the edge contour on a window of the second image for display; adjusting the second image according to the edge contour superimposed on the second image, so as to register the second image with the first image; and fusing the registered second image with the first image to obtain a fused image.
In this fusion imaging method, an edge contour is extracted from the image of one modality and superimposed on the image of the other modality, providing effective reference guidance for registration. The registration operation can thus be completed more quickly and accurately, improving both its efficiency and its accuracy, and display errors caused by fusing insufficiently registered images are avoided. The fusion imaging method therefore improves the registration of images of different modalities and, in turn, the fusion imaging of images of different modalities. The application also discloses a fusion imaging apparatus, an electronic device, and a computer-readable storage medium that achieve the same technical effects.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art. The accompanying drawings are included to provide a further understanding of the disclosure, and are incorporated in and constitute a part of this specification, illustrate the disclosure and together with the description serve to explain, but do not limit the disclosure. In the drawings:
FIG. 1 is a block diagram of a fused imaging system shown in accordance with an exemplary embodiment;
FIG. 2 is a flow chart illustrating a fusion imaging method according to an exemplary embodiment;
FIG. 3 is a schematic diagram of a fused image shown according to an exemplary embodiment;
FIG. 4 is a flowchart illustrating another fusion imaging method according to an exemplary embodiment;
FIG. 5 is a schematic diagram illustrating an image registration according to an exemplary embodiment;
FIG. 6 is a flowchart illustrating yet another fusion imaging method according to an exemplary embodiment;
FIG. 7 is a flowchart illustrating yet another fusion imaging method according to an exemplary embodiment;
FIG. 8 is a schematic diagram illustrating another image registration according to an exemplary embodiment;
FIG. 9 is a flowchart illustrating yet another fusion imaging method according to an exemplary embodiment;
FIG. 10 is a block diagram of a fusion imaging apparatus according to an exemplary embodiment;
FIG. 11 is a block diagram of an electronic device according to an exemplary embodiment.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It will be apparent that the described embodiments are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application. In addition, in the embodiments of the present application, "first," "second," and the like are used to distinguish similar objects, and are not necessarily used to describe a particular order or sequence.
The fusion imaging method provided by the embodiments of the present application can be applied to the application scenario shown in fig. 1, which comprises an ultrasonic probe, a modality-image imaging apparatus, an electronic device, and a display screen. The ultrasonic probe transmits ultrasonic waves to the target examination site and receives them back: excited by transmit pulses, the probe emits ultrasonic waves toward the target examination site and, after a certain delay, receives ultrasonic echo signals carrying tissue information, which it converts back into electrical signals. The electronic device obtains an ultrasound image from these signals, and the user can update the ultrasound image in real time by adjusting the scanning angle of the probe. The modality-image imaging apparatus may specifically be a PET (positron emission tomography) imaging apparatus, a CT (computed tomography) imaging apparatus, an MRI (magnetic resonance imaging) imaging apparatus, or the like, used to acquire modality images, which may accordingly include PET images, CT images, MRI images, and so on. The modality-image imaging apparatus may comprise a trackball, and the user can adjust the angle of the slice plane of the modality image, and thereby the modality image itself, by operating the trackball. The electronic device registers and fuses the ultrasound image with images of other modalities, and the display screen displays the fused image. The electronic device may be an ultrasound host or the like; alternatively, the modality-image imaging apparatus may be a functional module within the electronic device.
The embodiments of the present application disclose a fusion imaging method that improves the registration of images in different modalities and, in turn, the fusion imaging of images in different modalities.
Referring to fig. 2, a flowchart of a fusion imaging method is shown according to an exemplary embodiment; the method may be applied to an electronic device such as an ultrasound device. As shown in fig. 2, the method includes:
S101: acquiring a first image and a second image of a target examination site; wherein the first image and the second image are images in different modalities;
The target examination site is the site to be imaged in the different modalities, and may be a site on a human or animal body. Further, it may be a site requiring puncture imaging: in the embodiments of the present application, image fusion across modalities provides puncture guidance, after which the target examination site is punctured.
In this step, images in different modalities are acquired as the first image and the second image respectively, and may include an image in an ultrasound modality and an image in a non-ultrasound modality. The image in the ultrasound modality is a real-time image acquired by real-time scanning with the ultrasonic probe; the image in the non-ultrasound modality may include a PET image, a CT image, an MRI image, and the like. If the second image is an image in a non-ultrasound modality, then as one possible implementation, acquiring the image of the target examination site in the non-ultrasound modality includes: acquiring three-dimensional data of the target examination site; determining an initial slice plane, and acquiring the second image from the three-dimensional data based on the initial slice plane. In a specific implementation, the modality-image imaging apparatus acquires three-dimensional data of the target examination site; the user determines the initial slice plane by adjusting its angle and initial position, and the image in the non-ultrasound modality is then acquired from the three-dimensional data based on that plane.
In a specific implementation, the display screen displays the two images of different modalities simultaneously; that is, it contains two windows of equal size, one of which can display the image in the ultrasound modality and the other the image in the non-ultrasound modality. The user can update the image in the ultrasound modality by adjusting the scanning angle of the ultrasonic probe, and can likewise adjust the angle and position of the slice plane in the three-dimensional data, until the two displayed images are registered to a certain degree. A synchronous display mode is then entered: the user adjusts the ultrasound image in real time via the scanning angle and position of the probe, while a magnetic navigation sensor mounted on the probe transmits the probe's pose data to the electronic device in real time, and the electronic device adjusts the slice plane according to the pose data so that the displayed image in the non-ultrasound modality is updated in real time as well.
Further, the user can freeze the first image with a key; the synchronous display mode is then exited, an image registration mode is entered, and the user registers the first image with the second image by adjusting the second image.
S102: extracting an edge contour from the first image, and superimposing the edge contour on a window of the second image for display;
In this step, the edge contour is extracted from the first image; this embodiment does not limit the extraction algorithm. As one possible implementation, extracting the edge contour from the first image includes: extracting tissue image features from the first image; and deriving a vessel edge contour and an envelope edge contour based on the extracted image features. In a specific implementation, the edge contour is extracted based on the image features in the first image and may be classified by morphology into vessel edge contours, envelope edge contours, and the like. A gradient-based edge extraction method can be used for vessel edge contours, and a gradient-derivative (second-derivative) based method for envelope edge contours; the two principles are similar, but each has its own applicable conditions.
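As an illustration of the two extraction families named above, the sketch below pairs a gradient-based detector (first-derivative magnitude, suited to vessel edges) with a gradient-derivative detector (zero crossings of a Laplacian of Gaussian, suited to envelope edges). The smoothing scales and the percentile threshold are illustrative assumptions, not values fixed by the disclosure:

```python
import numpy as np
from scipy import ndimage

def vessel_edges(img, sigma=2.0, pct=90):
    """Gradient-based edges: keep pixels with high first-derivative magnitude."""
    smoothed = ndimage.gaussian_filter(img.astype(np.float32), sigma)
    gy, gx = np.gradient(smoothed)
    mag = np.hypot(gx, gy)
    return mag > np.percentile(mag, pct)  # binary edge mask

def envelope_edges(img, sigma=3.0):
    """Gradient-derivative edges: zero crossings of a Laplacian of Gaussian."""
    log = ndimage.gaussian_laplace(img.astype(np.float32), sigma)
    zc = np.zeros(log.shape, dtype=bool)
    # A pixel is an edge where the LoG response changes sign against a neighbour.
    zc[:-1, :] |= (log[:-1, :] * log[1:, :]) < 0
    zc[:, :-1] |= (log[:, :-1] * log[:, 1:]) < 0
    return zc
```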
Further, the extracted edge contour may be displayed in the window of the image it came from in a preset display manner, and superimposed in the window of the other image in a preset display manner, for example drawn in both windows in a highlighted color and line type.
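Superimposing the contour can be as simple as painting the binary edge mask into an RGB copy of the target window in a highlight colour; the function name and the colour below are assumptions:

```python
import numpy as np

def overlay_contour(gray_img, edge_mask, color=(0, 255, 0)):
    """Draw a binary edge mask over a grayscale image in a highlight colour."""
    rgb = np.repeat(gray_img[..., None], 3, axis=2).astype(np.uint8)
    rgb[edge_mask] = color  # paint contour pixels
    return rgb
```

The same mask can be drawn in both windows, so the contour remains visible over the image it came from as well as over the other modality.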
S103: adjusting the second image according to the edge contour superimposed on the second image, so as to register the second image with the first image;
The purpose of this step is to register the first image and the second image. In a specific implementation, the second image is adjusted under the reference guidance of the edge contour superimposed on it, thereby achieving registration of the first image and the second image. After adjusting the second image, the user can unfreeze the first image by pressing a key; the edge contours superimposed in the two windows are cleared at the same time, and the synchronous display mode is entered again.
S104: fusing the registered second image with the first image to obtain a fused image.
In this step, the registered second image is fused with the first image to obtain a fused image. For example, the registered non-ultrasound modality image is superimposed on the ultrasound image in a preset color imaging mode, which may be RGB, HSV, or another color scheme; the non-ultrasound modality image is thereby displayed in real time in the window of the ultrasound image. When the user adjusts the scanning angle and position of the ultrasonic probe in real time, the display screen adjusts the first and second images synchronously, and the adjusted images of the different modalities are fused again to obtain an adjusted fused image, as shown in fig. 3.
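The preset color imaging mode can be pictured as pseudo-colouring the registered modality image and alpha-blending it over the grayscale ultrasound image. The sketch below uses a single orange tint as a stand-in for an RGB/HSV colour map; the blend weight and tint are illustrative assumptions:

```python
import numpy as np

def fuse_images(us_gray, modality_gray, alpha=0.4, tint=(255, 96, 0)):
    """Superimpose a registered modality image on an ultrasound image.

    The modality image is pseudo-coloured with a single tint and
    alpha-blended over the grayscale ultrasound image.
    """
    base = np.repeat(us_gray[..., None], 3, axis=2).astype(np.float32)
    weight = (modality_gray.astype(np.float32) / 255.0)[..., None]
    overlay = weight * np.asarray(tint, dtype=np.float32)
    fused = (1.0 - alpha) * base + alpha * overlay
    return fused.clip(0, 255).astype(np.uint8)
```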
As one possible implementation, one of the first image and the second image is an image in an ultrasound modality and the other is an image in a non-ultrasound modality; after this step, the method further comprises: acquiring pose data of the ultrasonic probe, and calculating a target slice-plane angle for the image in the non-ultrasound modality according to the pose data; adjusting the image in the non-ultrasound modality based on the target slice-plane angle, and re-fusing it with the registered image in the ultrasound modality to obtain an adjusted fused image. In a specific implementation, after the registered second image and the first image are fused, the user, in the synchronous display mode, adjusts the scanning angle and position of the ultrasonic probe to adjust the ultrasound image in real time; meanwhile, the magnetic navigation sensor mounted on the probe transmits the probe's pose data to the electronic device in real time, the electronic device adjusts the slice plane according to the pose data and thereby updates the displayed non-ultrasound image in real time, and the adjusted non-ultrasound image is fused again with the ultrasound image to obtain an adjusted fused image.
It should be noted that the first image and the second image may be registered multiple times before fusion; that is, steps S101 to S103 are repeated several times before step S104 is performed. In a preferred embodiment, acquiring the first image and the second image of the target examination site includes: acquiring a first image and a second image of the target examination site in a first orientation. Correspondingly, after the second image is adjusted according to the edge contour superimposed on it so as to register it with the first image, the method further comprises: acquiring a third image and a fourth image of the target examination site, in different modalities, in a second orientation; and registering the third image with the fourth image. Correspondingly, fusing the registered second image with the first image to obtain a fused image includes: fusing the images of the different modalities after the multiple rounds of registration to obtain the fused image. In a specific implementation, images of different modalities at a different orientation of the target examination site are selected for each round of registration. For example, the first image and the second image at a first orientation are selected first; the edge contour of the first image is extracted and superimposed on the second image, and the second image is adjusted according to that contour so as to register it with the first image. Next, a third image and a fourth image at a second orientation are selected; the edge contour of the third image is extracted and superimposed on the fourth image, and the fourth image is adjusted accordingly so as to register it with the third image, and so on, for at least two rounds, after which the images of the different modalities are fused. Note that where the first and second images are obtained from three-dimensional data, a single registration only aligns one slice plane and does not reliably register the three-dimensional data of the two modalities, which is why multiple registrations are necessary.
In the fusion imaging method provided by the embodiments of the present application, an edge contour is extracted from the image of one modality and superimposed on the image of the other modality, providing effective reference guidance for registration. The registration operation can thus be completed more quickly and accurately, improving both its efficiency and its accuracy, and display errors caused by fusing insufficiently registered images are avoided. The method therefore improves the registration of images of different modalities and, in turn, their fusion imaging.
The embodiment of the application discloses a fusion imaging method; compared with the previous embodiment, this embodiment further describes and optimizes the technical scheme. Specifically:
Another flowchart of a fusion imaging method according to an exemplary embodiment is shown in fig. 4, comprising:
S201: acquiring a first image and a second image of a target examination site; the first image is an image in an ultrasound modality, and the second image is an image in a non-ultrasound modality;
S202: extracting an edge contour from the first image, and superimposing the edge contour on a window of the second image for display;
In this embodiment, an edge contour is extracted from the image in the ultrasound modality, and the extracted edge contour is superimposed on the window of the image in the non-ultrasound modality for display. As shown in fig. 5, the edge contour extracted from the ultrasound image on the left is superimposed in the window of the non-ultrasound image on the right.
S203: taking the edge contour superimposed on the second image as image reference information, adjusting the slice plane according to the image reference information, and acquiring an adjusted second image from the three-dimensional data based on the adjusted slice plane, so that the adjusted second image matches the edge contour displayed in the window;
In this step, the edge contour superimposed on the second image serves as image reference information for adjusting the angle and position of the slice plane in the three-dimensional data; the slice plane is adjusted accordingly, and the image in the non-ultrasound modality is then re-acquired from the three-dimensional data based on the adjusted slice plane, thereby registering the non-ultrasound image with the ultrasound image.
As one possible implementation, adjusting the slice plane according to the image reference information includes: changing orientation parameters of the slice plane according to the image reference information, and adjusting the slice plane based on those parameters. In a specific implementation, the user can change the orientation parameters of the slice plane, namely its angle and position, by operating a trackball or knob; the slice plane is then adjusted according to these parameters.
As another possible implementation, adjusting the slice plane according to the image reference information includes: re-acquiring pose data of the ultrasonic probe according to the image reference information, calculating a transformation matrix from the pose data, and adjusting the slice plane based on the transformation matrix. In a specific implementation, the user can adjust the scanning angle and position of the ultrasonic probe; the ultrasound image is frozen at this point and does not change, but the magnetic navigation sensor mounted on the probe transmits the probe's pose data to the electronic device in real time. The electronic device calculates a transformation matrix from the pose data, which may be a slice-plane position transformation matrix, and adjusts the slice plane based on it. The slice plane of the other modality is thus adjusted in a targeted manner based on the pose adjustment of the ultrasonic probe, and the adjusted second image is then acquired from the three-dimensional data at the adjusted slice plane.
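Both adjustment paths of S203 can be sketched briefly: the trackball path perturbs the slice-plane axes by Euler-angle deltas, while the probe-pose path builds a homogeneous transformation matrix from the magnetic sensor's quaternion and translation. All names and angle conventions here are assumptions; the disclosure does not fix a parameterization:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def plane_from_trackball(u_axis, v_axis, d_yaw, d_pitch, d_roll):
    """Rotate the slice-plane axes by Euler-angle deltas from a trackball/knob."""
    rot = Rotation.from_euler("zyx", [d_yaw, d_pitch, d_roll], degrees=True)
    return rot.apply(u_axis), rot.apply(v_axis)

def transform_from_probe_pose(quat_xyzw, translation):
    """Build a 4x4 homogeneous transform from magnetic-sensor pose data."""
    t = np.eye(4)
    t[:3, :3] = Rotation.from_quat(quat_xyzw).as_matrix()
    t[:3, 3] = translation
    return t
```

The resulting transform can be applied to the plane origin and axes, after which the adjusted second image is re-extracted from the volume as in the earlier slice sketch.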
It should be noted that, after the above edge-contour-based registration, further registration procedures may be performed. As one possible implementation, after the adjusted second image is acquired from the three-dimensional data based on the adjusted slice plane, the method further comprises: determining corresponding registration points in the first image and the second image respectively; and continuing to adjust the position of the second image according to those registration points, so that the adjusted second image matches the edge contour displayed in the window. In a specific implementation, corresponding registration points are determined in the ultrasound image and the non-ultrasound image respectively; the two images are rigidly registered based on the positions of these points, a transformation matrix from the coordinate system of the non-ultrasound image to that of the ultrasound image is computed, and the position of the non-ultrasound image is then adjusted according to the transformation matrix so that it matches the edge contour displayed in the window.
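The rigid registration from corresponding point pairs has a closed-form least-squares solution (the Kabsch/Umeyama method, without scaling); the sketch below, with made-up points, returns the rotation and translation mapping one image's registration points onto the other's:

```python
import numpy as np

def rigid_transform_2d(src_pts, dst_pts):
    """Least-squares rigid transform (rotation + translation), src -> dst."""
    src = np.asarray(src_pts, dtype=np.float64)
    dst = np.asarray(dst_pts, dtype=np.float64)
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    h = (src - src_c).T @ (dst - dst_c)  # cross-covariance matrix
    u, _, vt = np.linalg.svd(h)
    r = vt.T @ u.T
    if np.linalg.det(r) < 0:  # guard against a reflection solution
        vt[-1] *= -1
        r = vt.T @ u.T
    t = dst_c - r @ src_c
    return r, t  # dst ≈ r @ src + t

# Example with three corresponding registration points from the two images:
r, t = rigid_transform_2d([(40, 52), (120, 60), (88, 140)],
                          [(45, 58), (124, 70), (86, 148)])
```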
As another possible implementation, after the adjusted second image is acquired from the three-dimensional data based on the adjusted slice plane, the method further comprises: receiving an input position adjustment parameter; and continuing to adjust the position of the second image according to the position adjustment parameter, so that the adjusted second image matches the edge contour displayed in the window. In a specific implementation, the user may input the position adjustment parameters, i.e. the angle and position of the slice plane, by operating a trackball or knob; the electronic device then continues to adjust the position of the non-ultrasound image accordingly until it matches the edge contour displayed in the window.
S204: superimposing the registered second image on the first image according to a preset color imaging mode to obtain a fused image.
In this step, the registered image in the non-ultrasound modality is superimposed on the image in the ultrasound modality according to a preset color imaging mode, and the resulting fused image is displayed in the window of the ultrasound image.
A flowchart of yet another fusion imaging method, shown in fig. 6 according to an exemplary embodiment, includes the following steps:
1. The electronic device reads the ultrasound image and the other-modality image at a given slice plane.
2. The ultrasound image is preprocessed (e.g. filtered), and vessel edge extraction and envelope edge extraction are performed on the preprocessed ultrasound image.
3. The extracted vessel edges and envelope edges are superimposed on the other-modality image.
4. While manual alignment is not complete, the orientation of the slice plane is adjusted, the other-modality image is re-determined, and alignment is attempted again.
5. Once manual alignment is complete, the ultrasound image and the other-modality image are fused and displayed.
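Tying the steps together, a schematic driver for the loop of fig. 6 might look as follows. It reuses the sketches introduced above (extract_slice, vessel_edges, envelope_edges, overlay_contour, plane_from_trackball, fuse_images); the acquisition and trackball hooks are stubs, and a fixed three-pass loop stands in for "until manual alignment is complete":

```python
import numpy as np

def acquire_ultrasound_image():  # stub for the real probe pipeline
    return (np.random.rand(256, 256) * 255).astype(np.uint8)

def read_trackball_deltas():  # stub: yaw/pitch/roll deltas from the UI
    return 5.0, 0.0, 0.0

vol = np.random.rand(64, 256, 256).astype(np.float32)  # stand-in CT/MRI volume
origin = np.asarray([32.0, 128.0, 128.0])
u_ax, v_ax = np.asarray([0.0, 0.0, 1.0]), np.asarray([0.0, 1.0, 0.0])

us_img = acquire_ultrasound_image()                          # step 1
edges = vessel_edges(us_img) | envelope_edges(us_img)        # step 2
for _ in range(3):                                           # step 4: manual loop
    u_ax, v_ax = plane_from_trackball(u_ax, v_ax, *read_trackball_deltas())
    slice_img = extract_slice(vol, origin, u_ax, v_ax)
    slice_u8 = (slice_img * 255).astype(np.uint8)
    preview = overlay_contour(slice_u8, edges)               # step 3: show preview
fused = fuse_images(us_img, slice_u8)                        # step 5
```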
The embodiment of the application discloses a fusion imaging method; compared with the previous embodiments, this embodiment further describes and optimizes the technical scheme. Specifically:
A flowchart of yet another fusion imaging method according to an exemplary embodiment, shown in fig. 7, includes:
S301: acquiring a first image and a second image of a target examination site; the first image is an image in a non-ultrasound modality, and the second image is an image in an ultrasound modality;
S302: extracting an edge contour from the first image, and superimposing the edge contour on a window of the second image for display;
In this embodiment, an edge contour is extracted from the non-ultrasound modality image, and the extracted contour is superimposed on the window of the ultrasound image for display, as shown in fig. 8.
S303: taking the edge contour superimposed on the second image as image reference information, and triggering adjustment of the scanning angle and position of the ultrasonic probe according to the image reference information so as to adjust the second image in the window, so that the adjusted second image matches the edge contour displayed in the window;
In this step, the edge contour superimposed on the second image serves as image reference information for adjusting the scanning angle and position of the ultrasonic probe, thereby adjusting the image in the ultrasound modality and registering it with the image in the non-ultrasound modality.
Likewise, after the above edge-contour-based registration, further registration procedures may be performed. As one possible implementation, after triggering adjustment of the scanning angle and position of the ultrasonic probe according to the image reference information to adjust the second image in the window, the method further comprises: determining corresponding registration points in the first image and the second image respectively; and continuing to adjust the position of the second image according to those registration points, so that the adjusted second image matches the edge contour displayed in the window. In a specific implementation, corresponding registration points are determined in the ultrasound image and the non-ultrasound image respectively; the two images are rigidly registered based on the positions of these points, a transformation matrix from the coordinate system of the ultrasound image to that of the non-ultrasound image is computed, and the position of the ultrasound image is then adjusted according to the transformation matrix so that it matches the edge contour displayed in the window.
S304: superimposing the first image on the registered second image according to a preset color imaging mode to obtain a fused image.
In this step, the image in the non-ultrasound modality is superimposed on the adjusted image in the ultrasound modality according to the preset color imaging mode, and the resulting fused image is displayed in the window of the ultrasound image.
A flowchart of yet another fusion imaging method according to an exemplary embodiment, shown in fig. 9, includes the following steps:
1. The electronic device reads the ultrasound image and the other-modality image.
2. The other-modality image is preprocessed (e.g. filtered), and vessel edge extraction and envelope edge extraction are performed on the preprocessed image.
3. The extracted vessel edges and envelope edges are superimposed on the ultrasound image.
4. While manual registration is not complete, the probe orientation is adjusted, the ultrasound image is thereby re-determined, and registration is attempted again.
5. Once manual alignment is complete, the ultrasound image and the other-modality image are fused and displayed.
A fusion imaging apparatus provided by an embodiment of the present application is described below; the fusion imaging apparatus described below and the fusion imaging method described above may be cross-referenced.
Referring to fig. 10, a structure diagram of a fusion imaging apparatus according to an exemplary embodiment is shown. As shown in fig. 10, the apparatus includes:
a first acquisition module 100 for acquiring a first image and a second image of a target examination site, wherein the first image and the second image are images in different modalities;
an extraction module 200 for extracting an edge contour from the first image and superimposing the edge contour on a window of the second image for display;
a first registration module 300 for adjusting the second image according to the edge contour superimposed on the second image, so as to register the second image with the first image; and
a first fusion module 400 for fusing the registered second image with the first image to obtain a fused image.
In the fusion imaging apparatus provided by the embodiments of the present application, an edge contour is extracted from the image of one modality and superimposed on the image of the other modality, providing effective reference guidance for registration. The registration operation can thus be completed more quickly and accurately, improving both its efficiency and its accuracy, and display errors caused by fusing insufficiently registered images are avoided. The apparatus therefore improves the registration of images of different modalities and, in turn, their fusion imaging.
On the basis of the foregoing embodiment, as a preferred implementation, if the first image is an image in an ultrasound modality and the second image is an image in a non-ultrasound modality, the first acquisition module 100 is specifically configured to: acquire three-dimensional data of the target examination site; and determine an initial slice plane, and acquire the second image from the three-dimensional data based on the initial slice plane;
accordingly, the first registration module 300 is specifically configured to: take the edge contour superimposed on the second image as image reference information; and adjust the slice plane according to the image reference information, and acquire an adjusted second image from the three-dimensional data based on the adjusted slice plane, so that the adjusted second image matches the edge contour displayed in the window.
On the basis of the above embodiment, as a preferred implementation, the first registration module 300 is specifically configured to: take the edge contour superimposed on the second image as image reference information; and change orientation parameters of the slice plane according to the image reference information, adjust the slice plane based on those parameters, and acquire an adjusted second image from the three-dimensional data based on the adjusted slice plane, so that the adjusted second image matches the edge contour displayed in the window.
On the basis of the above embodiment, as a preferred implementation, the first registration module 300 is specifically configured to: take the edge contour superimposed on the second image as image reference information; and re-acquire pose data of the ultrasonic probe according to the image reference information, calculate a transformation matrix from the pose data, adjust the slice plane based on the transformation matrix, and acquire an adjusted second image from the three-dimensional data based on the adjusted slice plane, so that the adjusted second image matches the edge contour displayed in the window.
On the basis of the above embodiment, as a preferred implementation, the first registration module 300 is further configured to: determine corresponding registration points in the first image and the second image respectively; and continue to adjust the position of the second image according to the registration points, so that the adjusted second image matches the edge contour displayed in the window.
On the basis of the above embodiment, as a preferred implementation, the first registration module 300 is further configured to: receive an input position adjustment parameter; and continue to adjust the position of the second image according to the position adjustment parameter, so that the adjusted second image matches the edge contour displayed in the window.
On the basis of the above embodiment, as a preferred implementation, the first fusion module 400 is specifically configured to: superimpose the registered second image on the first image according to a preset color imaging mode to obtain a fused image.
On the basis of the foregoing embodiment, as a preferred implementation, if the first image is an image in a non-ultrasound modality and the second image is an image in an ultrasound modality, the first registration module 300 is specifically configured to: take the edge contour superimposed on the second image as image reference information; and trigger adjustment of the scanning angle and position of the ultrasonic probe according to the image reference information so as to adjust the second image in the window, so that the adjusted second image matches the edge contour displayed in the window.
On the basis of the above embodiment, as a preferred implementation, the first registration module 300 is further configured to: determine corresponding registration points in the first image and the second image respectively; and continue to adjust the position of the second image according to the registration points, so that the adjusted second image matches the edge contour displayed in the window.
On the basis of the above embodiment, as a preferred implementation, the first fusion module 400 is specifically configured to: superimpose the first image on the registered second image according to a preset color imaging mode to obtain a fused image.
On the basis of the above embodiment, as a preferred implementation, one of the first image and the second image is an image in an ultrasound modality and the other is an image in a non-ultrasound modality; the apparatus further comprises:
a computing module for acquiring pose data of the ultrasonic probe and calculating a target slice-plane angle for the image in the non-ultrasound modality according to the pose data; and
a second fusion module for adjusting the image in the non-ultrasound modality based on the target slice-plane angle, and re-fusing the adjusted image in the non-ultrasound modality with the registered image in the ultrasound modality to obtain an adjusted fused image.
On the basis of the above embodiment, as a preferred implementation, the first acquisition module 100 is specifically configured to: acquire a first image and a second image of the target examination site in a first orientation;
correspondingly, the apparatus further comprises:
a second acquisition module for acquiring a third image and a fourth image of the target examination site, in different modalities, in a second orientation; and
a second registration module for registering the third image with the fourth image;
accordingly, the first fusion module 400 is specifically configured to: fuse the images of the different modalities after multiple rounds of registration to obtain a fused image.
On the basis of the above embodiment, as a preferred implementation, the extraction module 200 is specifically configured to: extract tissue image features from the first image; derive a vessel edge contour and an envelope edge contour based on the extracted image features; and superimpose the edge contour on the window of the second image for display.
The specific manner in which the various modules perform their operations in the apparatus of the above embodiments has been described in detail in the method embodiments and will not be repeated here.
Based on the hardware implementation of the program modules, and in order to implement the method of the embodiments of the present application, the embodiments of the present application further provide an electronic device, fig. 11 is a block diagram of an electronic device according to an exemplary embodiment, and as shown in fig. 11, the electronic device includes:
a communication interface 1 capable of information interaction with other devices, such as network devices; and
a processor 2, connected to the communication interface 1 to realize information interaction with other devices, for executing the fusion imaging method provided by one or more of the above technical schemes when running a computer program, the computer program being stored in a memory 3.
Of course, in practice, the various components in the electronic device are coupled together by a bus system 4. It will be appreciated that the bus system 4 enables connection and communication between these components. In addition to a data bus, the bus system 4 comprises a power bus, a control bus, and a status signal bus. For clarity of illustration, however, the various buses are all labeled as bus system 4 in fig. 11.
The memory 3 in the embodiments of the present application is used to store various types of data to support the operation of the electronic device. Examples of such data include any computer program for operating on the electronic device.
It will be appreciated that the memory 3 may be volatile memory, non-volatile memory, or both. The non-volatile memory may be read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), ferroelectric random access memory (FRAM), flash memory, magnetic surface memory, an optical disc, or compact disc read-only memory (CD-ROM); the magnetic surface memory may be disk or tape memory. The volatile memory may be random access memory (RAM), which acts as an external cache. By way of example and not limitation, many forms of RAM are available, such as static random access memory (SRAM), synchronous static random access memory (SSRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), enhanced synchronous dynamic random access memory (ESDRAM), SyncLink dynamic random access memory (SLDRAM), and direct Rambus random access memory (DRRAM). The memory 3 described in the embodiments of the present application is intended to comprise, without being limited to, these and any other suitable types of memory.
The method disclosed in the embodiments of the present application may be applied to the processor 2 or implemented by the processor 2. The processor 2 may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor 2 or by instructions in the form of software. The processor 2 described above may be a general purpose processor, DSP, or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like. The processor 2 may implement or perform the methods, steps and logic blocks disclosed in the embodiments of the present application. The general purpose processor may be a microprocessor or any conventional processor or the like. The steps of the method disclosed in the embodiments of the present application may be directly embodied in a hardware decoding processor or implemented by a combination of hardware and software modules in the decoding processor. The software modules may be located in a storage medium in the memory 3 and the processor 2 reads the program in the memory 3 to perform the steps of the method described above in connection with its hardware.
When executing the program, the processor 2 implements the corresponding flows of the methods of the embodiments of the present application; for brevity, these are not described in detail here.
In an exemplary embodiment, the present application also provides a storage medium, i.e. a computer storage medium, in particular a computer-readable storage medium, for example comprising the memory 3 storing a computer program executable by the processor 2 to perform the steps of the foregoing method. The computer-readable storage medium may be FRAM, ROM, PROM, EPROM, EEPROM, Flash Memory, magnetic surface memory, an optical disk, or CD-ROM.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments may be implemented by program instructions running on associated hardware; the foregoing program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments. The aforementioned storage medium includes: a removable storage device, ROM, RAM, a magnetic disk, an optical disk, or any other medium capable of storing program code.
Alternatively, if the integrated units described above are implemented in the form of software functional modules and sold or used as a stand-alone product, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the embodiments of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions causing an electronic device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a removable storage device, ROM, RAM, a magnetic disk, an optical disk, or any other medium capable of storing program code.
The foregoing is merely a specific embodiment of the present application, but the scope of protection of the present application is not limited thereto. Any change or substitution that a person skilled in the art can readily conceive within the technical scope disclosed in the present application shall be covered by the scope of protection of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (15)

1. A fusion imaging method, comprising:
acquiring a first image and a second image of a target examination part; wherein the first image and the second image are images in different modalities;
extracting an edge contour from the first image, and superimposing the edge contour onto a window of the second image for display;
adjusting the second image according to the edge contour superimposed into the second image to image register the second image with the first image;
and fusing the second image with the first image after image registration to obtain a fused image.
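
By way of illustration only, the flow of claim 1 can be sketched in Python with OpenCV and NumPy. The Canny thresholds, the overlay colour, the equal blending weights and the function name are assumptions made for this sketch, not limitations of the claim; both inputs are assumed to be registered, same-sized 8-bit grayscale images.

    import cv2
    import numpy as np

    def fuse_images(first_image, second_image):
        # Extract an edge contour from the first image (Canny is an assumed
        # choice; thresholds 50/150 are illustrative).
        edges = cv2.Canny(first_image, 50, 150)

        # Superimpose the contour into the second image's display window.
        display = cv2.cvtColor(second_image, cv2.COLOR_GRAY2BGR)
        display[edges > 0] = (0, 255, 0)  # contour drawn in green

        # After the second image has been adjusted into registration with the
        # first, fuse the two by weighted blending.
        fused = cv2.addWeighted(first_image, 0.5, second_image, 0.5, 0)
        return display, fused
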
2. The fusion imaging method of claim 1, wherein, if the first image is an image in an ultrasound modality and the second image is an image in a non-ultrasound modality, acquiring the second image of the target examination part comprises:
acquiring three-dimensional data of the target examination part;
determining an initial slice, and acquiring the second image from the three-dimensional data based on the initial slice;
correspondingly, the adjusting the second image according to the edge contour superimposed into the second image comprises:
taking the edge contour superimposed into the second image as image reference information;
and adjusting the slice according to the image reference information, and acquiring an adjusted second image from the three-dimensional data based on the adjusted slice, so that the adjusted second image matches the edge contour displayed in the window.
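
The slice-based acquisition of claim 2 can be read as resampling a 2D plane out of the 3D data. The sketch below is a minimal nearest-neighbour version; the parameterisation of the slice by an origin and two in-plane unit axes (NumPy 3-vectors in voxel coordinates) is an assumption, and a real implementation would typically interpolate.

    import numpy as np

    def extract_slice(volume, origin, u_axis, v_axis, height, width):
        # Voxel coordinate of every pixel of the requested slice.
        rows, cols = np.mgrid[0:height, 0:width]
        pts = (origin[None, None, :]
               + rows[..., None] * v_axis[None, None, :]
               + cols[..., None] * u_axis[None, None, :])
        # Nearest-neighbour sampling, clamped to the volume bounds.
        idx = np.rint(pts).astype(int)
        for d in range(3):
            idx[..., d] = np.clip(idx[..., d], 0, volume.shape[d] - 1)
        return volume[idx[..., 0], idx[..., 1], idx[..., 2]]
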
3. The fusion imaging method of claim 2, wherein adjusting the slice according to the image reference information comprises:
changing a slice orientation parameter according to the image reference information, and adjusting the slice based on the slice orientation parameter;
and/or,
re-acquiring attitude data of the ultrasonic probe according to the image reference information, calculating a transformation matrix from the attitude data, and adjusting the slice based on the transformation matrix.
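
The attitude-driven branch of claim 3 amounts to rebuilding a rotation from the probe's attitude data and rotating the slice with it. The sketch below assumes the attitude arrives as roll/pitch/yaw Euler angles in a Z-Y-X convention, which is only one of several possible sensor conventions.

    import numpy as np

    def rotation_from_attitude(roll, pitch, yaw):
        # Build a 3x3 rotation matrix (angles in radians), Z-Y-X order.
        cr, sr = np.cos(roll), np.sin(roll)
        cp, sp = np.cos(pitch), np.sin(pitch)
        cy, sy = np.cos(yaw), np.sin(yaw)
        rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
        ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
        rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
        return rz @ ry @ rx

    # Adjusting the slice then means rotating its in-plane axes and re-sampling:
    # u_axis, v_axis = R @ u_axis, R @ v_axis, followed by extract_slice(...).
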
4. The fusion imaging method according to claim 3, further comprising, after acquiring the adjusted second image from the three-dimensional data based on the adjusted slice:
determining corresponding registration points in the first image and the second image respectively;
and continuously adjusting the position of the second image according to the registration points in the first image and the second image, so that the adjusted second image matches the edge contour displayed in the window.
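
Claim 4 does not name an estimator for the registration-point refinement; a common (assumed) choice is a least-squares rigid fit between the two point sets, the Kabsch/Umeyama method sketched below for 2D or 3D points. Applying the resulting transform to the second image moves it toward the displayed edge contour.

    import numpy as np

    def rigid_fit(src, dst):
        # Least-squares rigid transform (R, t) mapping src points onto dst;
        # src and dst are (N, 2) or (N, 3) arrays of corresponding points.
        src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
        h = (src - src_c).T @ (dst - dst_c)      # cross-covariance matrix
        u, _, vt = np.linalg.svd(h)
        s = np.eye(len(src_c))
        s[-1, -1] = np.sign(np.linalg.det(vt.T @ u.T))  # avoid reflections
        r = vt.T @ s @ u.T
        t = dst_c - r @ src_c
        return r, t
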
5. The fusion imaging method according to claim 3, further comprising, after acquiring the adjusted second image from the three-dimensional data based on the adjusted slice:
receiving an input position adjustment parameter;
and continuously adjusting the position of the second image according to the position adjustment parameter, so that the adjusted second image matches the edge contour displayed in the window.
6. The fusion imaging method according to claim 2, wherein fusing the image-registered second image with the first image to obtain a fused image comprises:
superimposing the image-registered second image on the first image according to a preset color imaging mode to obtain the fused image.
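
One plausible reading of the "preset color imaging mode" in claim 6 is pseudo-colouring the registered image and alpha-blending it onto the base image; the JET colormap and the 0.4 opacity below are illustrative assumptions only.

    import cv2

    def color_fuse(base_gray, overlay_gray, alpha=0.4):
        # Pseudo-colour the registered image and superimpose it on the base image.
        base_bgr = cv2.cvtColor(base_gray, cv2.COLOR_GRAY2BGR)
        overlay_bgr = cv2.applyColorMap(overlay_gray, cv2.COLORMAP_JET)
        return cv2.addWeighted(base_bgr, 1.0 - alpha, overlay_bgr, alpha, 0)
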
7. The fusion imaging method of claim 1, wherein, if the first image is an image in a non-ultrasound modality and the second image is an image in an ultrasound modality, adjusting the second image according to the edge contour superimposed into the second image comprises:
taking the edge contour superimposed into the second image as image reference information;
triggering adjustment of the scanning angle and position of the ultrasonic probe according to the image reference information to adjust the second image in the window, so that the adjusted second image matches the edge contour displayed in the window.
8. The fusion imaging method of claim 7, further comprising, after triggering adjustment of the scanning angle and position of the ultrasonic probe to adjust the second image in the window according to the image reference information:
determining corresponding registration points in the first image and the second image respectively;
and continuously adjusting the position of the second image according to the registration points in the first image and the second image, so that the adjusted second image matches the edge contour displayed in the window.
9. The fusion imaging method of claim 7, wherein fusing the second image with the first image after image registration to obtain a fused image comprises:
and superimposing the first image on the image-registered second image according to a preset color imaging mode to obtain the fused image.
10. The fusion imaging method of claim 1, wherein one of the first image and the second image is an image in an ultrasound modality and the other is an image in a non-ultrasound modality;
after fusing the image-registered second image with the first image to obtain a fused image, the method further comprises:
acquiring attitude data of the ultrasonic probe, and calculating a target slice angle for the image in the non-ultrasound modality according to the attitude data;
and adjusting the image in the non-ultrasound modality based on the target slice angle, and fusing the adjusted image in the non-ultrasound modality with the registered image in the ultrasound modality again to obtain an adjusted fused image.
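
Reusing the assumed helpers from the sketches above (rotation_from_attitude, extract_slice and color_fuse), the re-fusion step of claim 10 could be wired up as follows; the axis convention and data types are again assumptions.

    import numpy as np

    def refuse_on_probe_motion(volume, us_image, attitude, origin, u_axis, v_axis):
        # Derive the target slice orientation from the probe attitude data.
        r = rotation_from_attitude(*attitude)     # (roll, pitch, yaw) in radians
        new_u, new_v = r @ u_axis, r @ v_axis     # rotate the in-plane axes
        # Re-sample the non-ultrasound volume along the adjusted slice and
        # fuse it again with the registered ultrasound image.
        h, w = us_image.shape
        new_slice = extract_slice(volume, origin, new_u, new_v, h, w)
        return color_fuse(us_image, new_slice.astype(np.uint8))
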
11. The fusion imaging method of any of claims 1-10, wherein acquiring the first image and the second image of the target examination part comprises:
acquiring a first image and a second image of a target examination part in a first direction;
correspondingly, after adjusting the second image according to the edge contour superimposed into the second image to image register the second image with the first image, the method further comprises:
acquiring a third image and a fourth image of the target examination part in a second direction in different modalities;
performing image registration on the third image and the fourth image;
correspondingly, fusing the image-registered second image with the first image to obtain a fused image comprises:
and fusing the images in the different modalities after the multiple image registrations to obtain the fused image.
12. The fusion imaging method of any of claims 1-10, wherein extracting an edge contour from the first image comprises:
extracting tissue image features of the first image;
and obtaining a vessel edge contour and a capsule edge contour based on the extracted tissue image features.
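
Claim 12 leaves the feature-extraction step open; one plausible (assumed) realisation is to denoise, binarise and trace contours, taking the largest contour as the capsule boundary and the smaller ones as vessel cross-sections, as sketched below.

    import cv2

    def vessel_and_capsule_contours(image):
        # Denoise, then binarise with Otsu's threshold (assumed segmentation).
        blurred = cv2.GaussianBlur(image, (5, 5), 0)
        _, mask = cv2.threshold(blurred, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        # Trace outer contours and split them by area.
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        contours = sorted(contours, key=cv2.contourArea, reverse=True)
        capsule = contours[0] if contours else None  # largest: capsule boundary
        vessels = contours[1:]                       # remainder: vessel sections
        return capsule, vessels
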
13. A fusion imaging apparatus, comprising:
the first acquisition module is used for acquiring a first image and a second image of the target examination part; wherein the first image and the second image are images in different modalities;
the extraction module is used for extracting an edge contour from the first image and superimposing the edge contour onto a window of the second image for display;
a first registration module for adjusting the second image according to the edge contour superimposed into the second image to image register the second image with the first image;
and the first fusion module is used for fusing the second image after the image registration with the first image to obtain a fused image.
14. An electronic device, comprising:
a memory for storing a computer program;
a processor for implementing the steps of the fusion imaging method according to any one of claims 1 to 12 when executing said computer program.
15. A computer readable storage medium, characterized in that it has stored thereon a computer program which, when executed by a processor, implements the steps of the fusion imaging method according to any of claims 1 to 12.
CN202210759545.5A 2022-06-30 2022-06-30 Fusion imaging method and device, electronic equipment and storage medium Pending CN117391977A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210759545.5A CN117391977A (en) 2022-06-30 2022-06-30 Fusion imaging method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210759545.5A CN117391977A (en) 2022-06-30 2022-06-30 Fusion imaging method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117391977A true CN117391977A (en) 2024-01-12

Family

ID=89463616

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210759545.5A Pending CN117391977A (en) 2022-06-30 2022-06-30 Fusion imaging method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117391977A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination