CN112634191A - Medical image analysis method, ultrasonic imaging apparatus, and computer storage medium - Google Patents


Info

Publication number
CN112634191A
CN112634191A
Authority
CN
China
Prior art keywords
image
registration
ultrasonic
images
ultrasound
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910906391.6A
Other languages
Chinese (zh)
Inventor
韩晓涛
丛龙飞
王超
周文兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Mindray Bio Medical Electronics Co Ltd
Original Assignee
Shenzhen Mindray Bio Medical Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Mindray Bio Medical Electronics Co Ltd filed Critical Shenzhen Mindray Bio Medical Electronics Co Ltd
Priority to CN201910906391.6A priority Critical patent/CN112634191A/en
Publication of CN112634191A publication Critical patent/CN112634191A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/32 Determination of transform parameters for the alignment of images, i.e. image registration using correlation-based methods
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10132 Ultrasound image
    • G06T 2207/10136 3D ultrasound image
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30096 Tumor; Lesion

Abstract

The invention provides a medical image analysis method, an ultrasound imaging apparatus, and a computer storage medium. The method comprises the following steps: acquiring at least one volume of pre-operative ultrasound images and at least one volume of post-operative ultrasound images of a target region; selecting one volume of pre-operative three-dimensional ultrasound images and one volume of post-operative three-dimensional ultrasound images to form a set of image pairs to be registered and evaluated, and registering the pre-operative and post-operative ultrasound images in at least one set of image pairs; automatically evaluating the registration result to obtain an evaluation result; and outputting information representing the evaluation result. The invention can automatically evaluate the registration quality and output information representing the evaluation result, so that the user can grasp the registration quality more intuitively.

Description

Medical image analysis method, ultrasonic imaging apparatus, and computer storage medium
Technical Field
The present invention generally relates to the field of medical image analysis technology, and more particularly, to a medical image analysis method, an ultrasound imaging apparatus, and a computer storage medium.
Background
Ultrasound-guided precision ablation removes the contraindications of surgical resection common in liver cancer patients, such as poor liver function, impaired coagulation, and poor cardiac and renal function, and offers minimal invasiveness and simple operation, so it has attracted increasing attention from physicians. Non-surgical interventional treatments such as Radio Frequency Ablation (RFA), cryoablation, microwave ablation, and Transcatheter Arterial Chemoembolization (TACE) have become important treatment options for hepatocellular carcinoma. Local Tumor Progression (LTP) is a key index for measuring ablation efficacy, and an insufficient ablation safety margin is a high-risk factor for LTP; accurate and timely assessment of the safety margin is therefore essential for improving the outcome of interventional ablation.
Currently, contrast-enhanced CT (CECT), Magnetic Resonance Imaging (MRI), and contrast-enhanced ultrasound (CEUS) are the common methods for evaluating the efficacy of interventional ablation. The existing CEUS-based approach generally measures the maximum diameter of the tumor in a two-dimensional contrast image before surgery, measures the maximum diameter of the ablation lesion in a contrast image after surgery, and judges whether ablation is complete by comparing the two diameters. Because this evaluation is performed on two-dimensional images, it cannot show whether the tumor is completely ablated in three-dimensional space; moreover, the two acquired images may not lie on the same section, and when the ablation center does not coincide with the tumor center the measured safety-margin thickness depends on the probe position, so good accuracy cannot be achieved. The CT/MRI-based evaluation scheme requires a three-dimensional CT/MRI acquisition before the operation and another one month after it. Although CT/MRI image quality is better than that of ultrasound, the procedure is cumbersome, time-consuming, and costly, and because the patient must not receive multiple radiation doses within a short time it is unsuitable for repeated, real-time intra-operative evaluation.
Disclosure of Invention
The invention provides a medical image analysis method, an ultrasound imaging apparatus, and a computer storage medium. The proposed solution for medical image analysis is briefly described below; further details are given in the detailed description with reference to the drawings.
In one aspect, the invention provides a medical image analysis method, comprising the following steps:
acquiring at least one volume of pre-operative ultrasound images and at least one volume of post-operative ultrasound images of a target region;
selecting at least one set of image pairs to be registered and evaluated from the at least one volume of pre-operative ultrasound images and the at least one volume of post-operative ultrasound images, wherein each set of image pairs comprises one volume of pre-operative ultrasound images and one volume of post-operative ultrasound images;
registering the pre-operative ultrasound image and the post-operative ultrasound image of the at least one set of image pairs;
automatically evaluating the registration result to obtain an evaluation result; and
outputting information representing the evaluation result.
Illustratively, selecting at least one set of image pairs from the at least one volume of pre-operative ultrasound images and the at least one volume of post-operative ultrasound images includes selecting multiple sets of image pairs from them, and the method includes:
traversing the multiple sets of image pairs and registering the pre-operative and post-operative ultrasound images in each set to obtain multiple evaluation results; and
selecting at least one set of image pairs with the best evaluation result for display.
Illustratively, the pre-operative ultrasound image and the post-operative ultrasound image comprise ultrasound contrast images.
Illustratively, the ultrasound contrast image comprises a tissue image and a contrast image with pixel points in one-to-one correspondence, and the registration is performed based on the tissue image and/or the contrast image.
Illustratively, the pre-operative ultrasound image and/or the post-operative ultrasound image comprises a plurality of volumes of three-dimensional images scanned in an uninterrupted succession or a plurality of volumes of three-dimensional images or four-dimensional images scanned at different times.
Illustratively, the registering of the at least one set of image pairs comprises:
selecting ultrasound images of better quality from the pre-operative ultrasound images and the post-operative ultrasound images, respectively, for registration; or
randomly selecting one ultrasound image from each of the pre-operative ultrasound images and the post-operative ultrasound images for registration.
Illustratively, the method further comprises:
and if the evaluation result does not meet the preset standard, selecting other ultrasonic images from the preoperative ultrasonic image and/or the postoperative ultrasonic image for registering again.
Illustratively, the registration means includes automatic registration, interactive registration and/or manual registration.
Illustratively, the method further comprises:
and if the evaluation result does not meet the preset standard, prompting the user to change the registration mode.
Illustratively, the method further comprises:
performing automatic registration first, and if the evaluation result does not meet the preset standard, prompting the user to perform interactive or manual registration.
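The automatic-first workflow above can be sketched as follows; the threshold value and the callables are illustrative assumptions only:

```python
# Hypothetical sketch: attempt automatic registration first and fall back to a
# user-driven (interactive or manual) registration if the score is too low.
def register_with_fallback(pre, post, auto_reg, evaluate, threshold, prompt_user):
    """Return (transform, score), preferring the automatic result when it passes."""
    transform = auto_reg(pre, post)
    score = evaluate(pre, post, transform)
    if score < threshold:                   # evaluation below the preset standard
        transform = prompt_user(pre, post)  # interactive or manual registration
        score = evaluate(pre, post, transform)
    return transform, score
```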
Illustratively, the registration includes rigid body registration and/or non-rigid body registration.
Illustratively, the automatic evaluation is based on the value of a similarity metric function between the registered pre-operative ultrasound image and post-operative ultrasound image.
Illustratively, the similarity metric function includes an image intensity-based metric, and/or an image feature-based metric.
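For instance, a common intensity-based similarity metric is normalized cross-correlation (NCC); using it here is an assumption for illustration, not a metric prescribed by the invention:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two same-shape image volumes, in [-1, 1]."""
    a = np.asarray(a).astype(float).ravel()  # astype copies, so inputs are untouched
    b = np.asarray(b).astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0
```

A higher value after registration suggests better alignment; a feature-based metric (e.g. counting matched feature-point pairs) could be plugged into the same evaluation slot.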
Illustratively, the automatic evaluation is based on the number of pairs of feature points that match between the pre-operative ultrasound image and the post-operative ultrasound image being registered.
Illustratively, the automatic evaluation is based on a degree of coincidence of the same anatomical structure between the pre-operative ultrasound image and the post-operative ultrasound image being registered.
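One way to quantify the coincidence of the same anatomical structure is the Dice overlap of its segmented masks in the two registered volumes; this particular measure is an illustrative assumption:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice overlap of two boolean masks of the same anatomical structure.

    1.0 means perfect coincidence after registration, 0.0 means no overlap.
    """
    inter = np.logical_and(mask_a, mask_b).sum()
    total = mask_a.sum() + mask_b.sum()
    return 2.0 * float(inter) / float(total) if total else 1.0
```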
Illustratively, the information representing the evaluation result includes one or more of a score value, an indicator light, a sound, a rating, or a text.
Illustratively, the method further comprises: segmenting the tumor region in the pre-operative ultrasound image and the ablation lesion region in the post-operative ultrasound image; and
generating a safety boundary for the ablation procedure based on the segmented tumor region.
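A simple way to realize the safety boundary described above is to dilate the segmented tumor mask by the required margin and then test whether the ablation mask covers it. The 6-connected dilation below is a minimal sketch (it assumes the tumor does not touch the volume border, since `np.roll` wraps around):

```python
import numpy as np

def dilate(mask, r):
    """Grow a boolean 3-D mask by r voxels using 6-connected dilation."""
    out = mask.copy()
    for _ in range(r):
        grown = out.copy()
        for axis in range(out.ndim):
            grown |= np.roll(out, 1, axis) | np.roll(out, -1, axis)
        out = grown
    return out

def ablation_complete(tumor_mask, ablation_mask, margin_vox):
    """True if the ablation zone covers the tumor plus its safety margin."""
    safety_boundary = dilate(tumor_mask, margin_vox)  # tumor + safety margin
    return bool(np.all(ablation_mask[safety_boundary]))
```

In practice the margin would be converted from millimetres to voxels using the image spacing.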
Illustratively, the segmentation is performed before the registration, or performed after the registration on an ultrasound image whose evaluation result satisfies a predetermined criterion.
Illustratively, the method further comprises: mapping the three-dimensional tumor volume and the safety boundary segmented from the pre-operative ultrasound image into the post-operative three-dimensional image based on the registration result, so as to display the tumor, the safety boundary, and the ablation lesion simultaneously.
Illustratively, the method further comprises: displaying the image pair corresponding to the evaluation result in a linked manner.
Illustratively, displaying the image pair corresponding to the evaluation result in a linked manner includes:
obtaining a first sub-image from one volume of ultrasound images in the image pair corresponding to the evaluation result;
obtaining a second sub-image corresponding to the first sub-image from the other volume of ultrasound images in that image pair, based on the registration mapping between its pre-operative and post-operative ultrasound images; and
displaying the first sub-image and the second sub-image simultaneously.
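Linked display relies on mapping coordinates through the registration transform. Assuming (for illustration) a 4x4 homogeneous matrix from the pre-operative to the post-operative volume, the corresponding location of any pre-operative point can be computed as:

```python
import numpy as np

def corresponding_point(point_pre, affine_pre_to_post):
    """Map a pre-operative voxel coordinate into the post-operative volume."""
    p = np.append(np.asarray(point_pre, float), 1.0)  # homogeneous coordinate
    return (affine_pre_to_post @ p)[:3]
```

The second sub-image is then the slice (or sub-volume) of the post-operative data around the mapped point.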
Illustratively, the automatic evaluation is based on image quality factors.
Illustratively, the image quality factors include one or more of image acquisition time, image content, image vessel sharpness, and image signal-to-noise ratio.
In another aspect, the present invention provides a medical image analysis method, comprising:
acquiring multiple volumes of pre-operative ultrasound images and multiple volumes of post-operative ultrasound images of a target region;
selecting at least one set of image pairs to be registered and evaluated from the multiple volumes of pre-operative ultrasound images and the multiple volumes of post-operative ultrasound images, wherein each set of image pairs comprises one volume of pre-operative ultrasound images and one volume of post-operative ultrasound images;
registering the pre-operative ultrasound images and the post-operative ultrasound images in the sets of image pairs;
automatically evaluating the registration results to obtain evaluation results; and
displaying the pre-operative ultrasound image and the post-operative ultrasound image of at least one set of image pairs with the best evaluation result.
In another aspect, the present invention provides a medical image analysis method, comprising:
acquiring at least one volume of pre-operative ultrasound images and at least one volume of post-operative ultrasound images of a target region;
registering at least one set of the pre-operative and post-operative ultrasound images;
automatically evaluating the registration result to obtain an evaluation result; and
outputting information representing the evaluation result.
Yet another aspect of the present invention provides an ultrasonic imaging apparatus comprising:
an ultrasonic probe;
a transmitting circuit for exciting the probe to transmit ultrasound waves to a target region;
a receiving circuit for controlling the probe to receive ultrasound echoes returned from the target region to obtain ultrasound echo signals;
a processor to:
processing the ultrasonic echo signal to obtain at least one volume of preoperative ultrasonic images and at least one volume of postoperative ultrasonic images;
selecting at least one set of image pairs to be registered for evaluation from the at least one pre-operative ultrasound image and the at least one post-operative ultrasound image, wherein each set of image pairs comprises a volume of pre-operative ultrasound images and a volume of post-operative ultrasound images;
registering the pre-operative ultrasound image and the post-operative ultrasound image of the at least one set of image pairs;
automatically evaluating the result of the registration to obtain an evaluation result;
outputting information representing the evaluation result; and
a display for displaying the evaluation result and/or the at least one set of image pairs.
A further aspect of the invention provides a computer storage medium having stored thereon a computer program which, when executed by a computer or processor, performs the steps of the above method.
With the medical image analysis method, ultrasound imaging apparatus, and computer storage medium provided by the embodiments of the invention, the registration quality can be evaluated automatically and information representing the evaluation result can be output, so that the user can grasp the registration quality more intuitively.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention, and those skilled in the art can obtain other drawings from them without inventive effort.
In the drawings:
fig. 1 shows a schematic block diagram of an example electronic device for implementing a medical image analysis method according to an embodiment of the invention;
fig. 2 shows a schematic flow diagram of a medical image analysis method according to an embodiment of the invention;
fig. 3 shows a schematic diagram of a display interface for outputting registration results and evaluation results according to an embodiment of the present invention;
fig. 4 shows a schematic flow chart of a medical image analysis method according to another embodiment of the invention;
fig. 5 shows a schematic block diagram of an ultrasound imaging apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present invention more apparent, exemplary embodiments according to the present invention will be described in detail below with reference to the accompanying drawings. It is to be understood that the described embodiments are merely a subset of the embodiments of the invention, not all of them, and that the invention is not limited to the example embodiments described herein. All other embodiments obtained by a person skilled in the art from the embodiments described herein without inventive step shall fall within the scope of protection of the invention.
In the following description, numerous specific details are set forth in order to provide a more thorough understanding of the present invention. It will be apparent, however, to one skilled in the art, that the present invention may be practiced without one or more of these specific details. In other instances, well-known features have not been described in order to avoid obscuring the invention.
It is to be understood that the present invention may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term "and/or" includes any and all combinations of the associated listed items.
The method is a new technique for quantitative comparison of tumor ablation efficacy. First, the tumor region is imaged with ultrasound equipment before and after the interventional operation. The tumor and the ablation lesion are then segmented automatically, interactively, or manually, and the tumor edge is expanded to generate a safety boundary. A spatial mapping between the two acquisitions is established by image registration, the data are displayed in fusion, and whether the ablation lesion completely encloses the safety boundary is compared, displayed, or indicated visually.
With this imaging method and comparison-evaluation system, the ablation effect can be evaluated on site immediately after the ablation treatment, and whether the ablation lesion has reached a sufficient safety margin can be shown visually. If the ablation lesion does not cover the whole planned region, supplementary needle ablation can be performed on site, preventing LTP and avoiding both the long delay required by CT/MRI-based clinical evaluation and a second interventional operation on the patient.
The image registration technique is entirely image-based: no navigation information is used when establishing the spatial mapping between the two three-dimensional contrast ultrasound volumes, so its registration accuracy is affected by the quality of the three-dimensional contrast images and by the scanning position. Because the contrast agent reaches the tumor, the surrounding tissue, and the ablation lesion at different times, it is difficult to keep the gray levels of the pre- and post-ablation contrast images of the same patient consistent: the tumor generally appears hyperechoic, the ablation lesion usually appears hypoechoic, the image gray level changes rapidly over time, and the quality of a contrast image therefore depends on the acquisition time. On the other hand, mainstream intensity-based registration algorithms cannot handle large displacements and rotations well and require the initial registration parameters to be close to the true values; for example, in ultrasound-to-CT/MRI multi-modal registration, existing vendors usually require the user to set a reference plane manually, or to scan certain specific positions first and initialize the registration matrix with magnetic navigation data. These factors mean that fully image-based automatic registration, while more flexible, cannot be guaranteed to succeed every time.
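One common remedy for the initialization sensitivity just described (an assumption here, not the patent's method) is a coarse multi-start search that scores a set of candidate initial translations and hands the best one to the local optimizer:

```python
import numpy as np

def coarse_init(fixed, moving, candidate_shifts):
    """Pick the integer shift along axis 0 that best aligns `moving` to `fixed`.

    The score is normalized cross-correlation; a real implementation would
    search translations (and rotations) along all axes at a coarse scale.
    """
    def score(shift):
        rolled = np.roll(moving, shift, axis=0)
        a = fixed - fixed.mean()
        b = rolled - rolled.mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float((a * b).sum() / denom) if denom else 0.0
    return max(candidate_shifts, key=score)
```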
In the clinic, a physician may acquire multiple volumes of three-dimensional contrast ultrasound at different times. For example, the contrast between the tumor and the liver parenchyma is high in the arterial phase before ablation, and the contrast between the ablation lesion and the parenchyma is high in the delayed phase after ablation; however, the contrast between the hepatic vessels and the parenchyma is low at that time, so if the registration algorithm aligns vessel trees, acquiring late-arterial-phase images after ablation is more favorable for registration. Current practice registers one volume of three-dimensional contrast data from the pre-operative data set against one volume from the post-operative data set, with the specific volumes chosen by the physician. After registration the physician must verify whether it succeeded; when registration fails because of poor image quality, the physician has to repeat the select-register-verify cycle or switch registration modes. The operation is cumbersome and the registration quality cannot be guaranteed.
In view of the above, the present application provides a medical image analysis method that automatically indicates the registration quality and selects the images used for registration. In order to provide a thorough understanding of the present invention, a detailed description is set forth below. Alternative embodiments of the invention are described in detail; however, the invention may be practiced in other embodiments that depart from these specific details.
Specifically, the medical image analysis method of the present application is described in detail below with reference to the drawings. The features of the following examples and embodiments may be combined with each other without conflict.
Fig. 1 is a schematic block diagram of an electronic device according to an embodiment of the present invention, which can implement the method according to the embodiment of the present invention. The electronic device 100 shown in FIG. 1 includes one or more processors 102, one or more memory devices 104, an input device 106, an output device 108, and an image sensor 110, which are interconnected via a bus system 112 and/or other form of connection mechanism (not shown). It should be noted that the components and structure of the electronic device 100 shown in fig. 1 are exemplary only, and not limiting, and the electronic device may have other components and structures as desired.
The processor 102 may include a Central Processing Unit (CPU) 1021 and a Graphics Processing Unit (GPU) 1022, or other forms of processing units having data processing capability and/or instruction execution capability, such as a Field-Programmable Gate Array (FPGA) or an Advanced RISC Machine (ARM) processor, and the processor 102 may control other components in the electronic device 100 to perform desired functions.
The storage 104 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory 1041 and/or non-volatile memory 1042. The volatile Memory 1041 may include, for example, a Random Access Memory (RAM), a cache Memory (cache), and/or the like. The non-volatile Memory 1042 may include, for example, a Read-Only Memory (ROM), a hard disk, a flash Memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 102 to implement the functions of medical image analysis (implemented by the processor) and/or various other desired functions in the embodiments of the present invention described below. Various applications and various data, such as various data used and/or generated by the applications, may also be stored in the computer-readable storage medium.
The input device 106 may be a device used by a user to input instructions and may include one or more of a keyboard, a mouse, a microphone, a touch screen, and the like. The input device 106 may be any interface for receiving information. For example, in embodiments of the present invention, the input device 106 may receive input from a user.
The output device 108 may output various information (e.g., images or sounds) to an outside (e.g., a user), and may include one or more of a display, speakers, printer, and the like. The output device 108 may be any other device having an output function. For example, in an embodiment of the present invention, the output device 108 may display the pre-operative ultrasound image and the post-operative ultrasound image to a user operating the electronic device.
The image sensor 110 may image a target site and store the acquired image in the storage device 104 for use by other components. For example, in an embodiment of the present invention, the image sensor 110 may be an image acquisition device that acquires an ultrasound image of a tumor or an ablation lesion. For example, it may comprise an ultrasound probe or the like.
It should be noted that the components and the structure of the electronic device 100 shown in fig. 1 are only exemplary, and although the electronic device 100 shown in fig. 1 includes a plurality of different apparatuses, those skilled in the art may modify, change, etc. according to the needs, for example, some of the apparatuses may not be necessary, some of the apparatuses may be more numerous, and the invention is not limited thereto.
Illustratively, the electronic device 100 may be implemented as an ultrasound system for performing ultrasound imaging, without limitation.
Next, a medical image analysis method according to an embodiment of the present invention will be described with reference to fig. 2. Fig. 2 is a schematic flow chart of a medical image analysis method 200 according to an embodiment of the present invention. The method 200 is suitable for evaluating the post-operative efficacy of ablation surgery on tumors of various organs, such as the liver, stomach, lung, pancreas, thyroid, breast, and intestine; it is not used to perform the ablation operation itself or to diagnose and treat patients after the ablation operation.
As shown in fig. 2, the method comprises the steps of:
in step S210, at least one volume of pre-operation ultrasound images and at least one volume of post-operation ultrasound images are respectively obtained for the target region.
It will be appreciated that the pre-operative ultrasound image of the target region contains a tumor region and no ablation region, while the post-operative ultrasound image contains an ablation region in which the tumor may already have been ablated. Ideally, the ablation region fully covers the original tumor region.
In one embodiment, the pre-operative and post-operative ultrasound images are ultrasound contrast images acquired using three-dimensional contrast-enhanced ultrasound (3D CEUS). Three-dimensional contrast-enhanced ultrasound is a radiation-free hemodynamic assessment method that uses an ultrasound contrast agent to enhance the contrast between a region of interest and other regions, so that the region of interest is visualized better. It can observe the lesion dynamically in real time and, based on multiplanar reconstruction (MPR), display the relationship between the tumor/ablation lesion and the surrounding tissue in multiple sections and from multiple angles, offering good real-time performance, low cost, and freedom from radiation.
Illustratively, echo data obtained by transmitting and receiving acoustic waves may be processed by beamforming, signal processing, image reconstruction, and so on, to generate a three-dimensional ultrasound contrast image containing the ablation or tumor region. The ultrasound contrast image comprises a tissue image and a contrast image whose pixels correspond one to one, so the two already share a mapping and need no extra registration between them. The registration operations performed subsequently may be based on the tissue image, on the contrast image, or on both.
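Because the tissue and contrast images share one voxel grid, a single candidate transform can be scored on both channels and the scores blended; the metric and weighting below are purely illustrative assumptions:

```python
import numpy as np

def channel_similarity(fixed, moving):
    """Normalized cross-correlation of one channel pair."""
    a = np.asarray(fixed).astype(float).ravel()   # astype copies the data
    b = np.asarray(moving).astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def dual_channel_similarity(fixed_tissue, fixed_contrast,
                            moving_tissue, moving_contrast, w_tissue=0.5):
    """Blend tissue-image and contrast-image similarity for one transform."""
    return (w_tissue * channel_similarity(fixed_tissue, moving_tissue)
            + (1.0 - w_tissue) * channel_similarity(fixed_contrast, moving_contrast))
```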
Illustratively, the pre-operative and post-operative ultrasound images include multiple volumes of three-dimensional images scanned in uninterrupted succession, or multiple volumes of three-dimensional images scanned intermittently during different periods; a continuously scanned sequence of three-dimensional volumes constitutes a four-dimensional image.
For example, one or more volumes of three-dimensional images may be acquired before and after the ablation procedure; two or more four-dimensional images may be acquired before and after it; a three-dimensional image may be acquired before and a four-dimensional image after; or four-dimensional images may be acquired both before and after. A three-dimensional image can be acquired with a volume probe, reconstructed with a convex-array or linear-array probe plus magnetic navigation equipment using freehand three-dimensional ultrasound reconstruction, or scanned with an area-array probe. The multiple volumes of three-dimensional images may be acquired intermittently at different times; for example, one volume may be acquired in each of the arterial phase, the late arterial phase, and the delayed phase as pre-operative ultrasound images.
Preferably, the largest practical acquisition angle and acquisition range are used during acquisition, so that as much data as possible is available for the subsequent registration operation. In addition, keeping the patient's body position, and the probe position and orientation, as consistent as possible between the pre-operative and post-operative acquisitions improves the registration success rate.
At step S220, at least one set of the pre-operative ultrasound image and the post-operative ultrasound image is registered.
The purpose of registration is to spatially align the tumor region in the pre-operative ultrasound image with the ablation region in the post-operative ultrasound image, so that the position and size relationship between the tumor region and the ablation region can be analyzed on that basis and a post-operative evaluation of the ablation procedure can be realized.
In one embodiment, ultrasound images of better quality can be selected from the plurality of pre-operative and post-operative ultrasound images for registration. Selecting images of good quality for registration saves registration time and improves registration efficiency.
For example, at least one set of image pairs to be evaluated in registration may be selected from the aforementioned at least one volume of pre-operative ultrasound images and at least one volume of post-operative ultrasound images, wherein each set of image pairs includes at least one volume of pre-operative ultrasound images and at least one volume of post-operative ultrasound images. The preoperative ultrasound image and the postoperative ultrasound image in the at least one set of image pairs are then registered and the quality of the registration is automatically evaluated.
Here and hereinafter, the preoperative ultrasound image and the postoperative ultrasound image used for registration and evaluation of registration may be referred to as an "image pair".
Various suitable criteria may be employed to evaluate the quality of the ultrasound image. As an example, from the perspective of ultrasound imaging, the acquisition time strongly affects contrast imaging; if the vessel tree is used as the registration basis, acquisition is preferably performed in the late arterial phase. However, vessel elasticity and circulatory state differ between individuals, so the timing should be determined from the patient's contrast-agent arrival time; the acquisition time is typically 18-25 s.
As another example, ultrasound hardly penetrates targets such as bone or gas, and acoustic attenuation leaves a low-echo region behind such a target that carries little useful information; registration is therefore preferably performed on ultrasound images containing little bone or gas. Specifically, the presence of bone or gas can be determined by accumulating each ultrasound receive line along the depth direction before digital scan conversion (DSC).
As another example, from the aspect of image characteristics, an image of good quality generally has rich gray levels, high contrast, and fine structure, which are reflected in the gray histogram, the gray entropy, and the boundary map; the ultrasound image with the smallest gray-histogram variance, the largest gray entropy, or the richest boundary map can therefore be selected for registration.
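As an illustrative sketch (not from the patent; function names such as gray_entropy and histogram_variance are hypothetical, and an image is modeled as a flat list of gray values), the histogram-based quality criteria above might be computed as follows. Note that a uniform gray distribution yields low histogram variance and high entropy, consistent with selecting the image with the smallest histogram variance or largest entropy:

```python
from collections import Counter
from math import log2

def gray_entropy(pixels):
    """Shannon entropy of the gray-level distribution (higher = richer grays)."""
    counts = Counter(pixels)
    n = len(pixels)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def histogram_variance(pixels, levels=256):
    """Variance of the normalized gray-histogram bin heights (lower = richer)."""
    counts = Counter(pixels)
    probs = [counts.get(g, 0) / len(pixels) for g in range(levels)]
    mean = sum(probs) / levels
    return sum((p - mean) ** 2 for p in probs) / levels

flat = [0] * 100            # constant image: no gray-level diversity
rich = list(range(100))     # 100 distinct gray levels

assert gray_entropy(flat) == 0.0
assert gray_entropy(rich) > gray_entropy(flat)
assert histogram_variance(flat) > histogram_variance(rich)
```

A real implementation would flatten the 3D volume's voxels into such a list (or use a histogram computed by the imaging pipeline) before scoring candidate images.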
In addition, a target region may be segmented in the image and image quality evaluated from the segmentation result; for example, Principal Component Analysis (PCA) can be performed on the automatically segmented target, and image quality judged by whether the weights of the target's principal components fall within a reasonable range.
Several evaluation criteria for ultrasound image quality are listed above, but it should be understood that the invention is not limited by the evaluation criteria specifically adopted, and any suitable evaluation criteria for ultrasound image quality can be applied to the method according to the embodiment of the invention.
In another embodiment, ultrasound images may be randomly selected for registration from the plurality of pre-operative and post-operative ultrasound images (i.e., the image pair is selected randomly), for example by random sampling without replacement. Of course, if only one volume of pre-operative or post-operative ultrasound images was scanned, that volume is used for registration.
The registration mode adopted by the embodiment of the invention comprises automatic registration, interactive registration, manual registration or any combination of the three modes.
In one embodiment, automatic registration may be employed first, and if the effect of the automatic registration does not meet a predetermined requirement, the user is prompted to employ interactive registration or manual registration.
Taking automatic registration as an example, a registration algorithm mainly comprises four parts: a similarity measure between images, interpolation, a mapping (coordinate transformation), and an optimization strategy. Specifically, a similarity measure is first computed between the pre-operative and post-operative ultrasound images to obtain a fitness value; the optimizer then derives coordinate-transformation parameters from the fitness value and applies the transformation (which may be expressed as transform-type parameters or as a displacement field) to the pre-operative or post-operative ultrasound image; the similarity of the transformed images is evaluated again, and this loop repeats until the optimizer finds the optimal solution in the sense of the extremum of the similarity function, at which point the images have been transformed into alignment and registration is complete. During iteration, values at transformed non-integer pixel positions must be interpolated; interpolation methods include, as examples, nearest-neighbor interpolation, linear interpolation, and spline interpolation.
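The measure-optimize-transform-interpolate loop described above can be sketched in miniature. The following is a hypothetical example (not the patent's algorithm): a one-dimensional "image," an exhaustive optimizer over candidate shifts, the sum of absolute differences (SAD) as the similarity measure, and linear interpolation at non-integer positions:

```python
def interp(sig, x):
    """Linear interpolation at non-integer position x (clamped at the edges)."""
    if x <= 0:
        return sig[0]
    if x >= len(sig) - 1:
        return sig[-1]
    i = int(x)
    frac = x - i
    return sig[i] * (1 - frac) + sig[i + 1] * frac

def sad(fixed, moving, shift):
    """Similarity measure: sum of absolute differences after shifting `moving`."""
    return sum(abs(f - interp(moving, i + shift)) for i, f in enumerate(fixed))

def register_shift(fixed, moving, lo=-5.0, hi=5.0, step=0.25):
    """Exhaustive 'optimizer': pick the candidate shift with the lowest SAD."""
    candidates = [lo + step * k for k in range(int((hi - lo) / step) + 1)]
    return min(candidates, key=lambda t: sad(fixed, moving, t))

moving = [0, 0, 1, 3, 7, 3, 1, 0, 0, 0]
fixed  = [0, 0, 0, 0, 1, 3, 7, 3, 1, 0]   # same peak, shifted right by 2
assert register_shift(fixed, moving) == -2.0
```

A practical 3D registration replaces the exhaustive search with a gradient-based optimizer and the scalar shift with a full rigid or non-rigid transform, but the structure of the loop is the same.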
Depending on the mapping method, image registration may be rigid or non-rigid. Rigid registration takes the rotation angle and translation as transformation parameters and estimates both by optimization, achieving image alignment in the global sense. Non-rigid registration has more degrees of freedom and can realize nonlinear free deformation, obtaining nonlinear transformation parameters or a displacement field by optimization. If the two sets of contrast data were acquired at the same depth, i.e., the acquired pixels have the same scale, the registration may be a rigid-body transformation comprising rotation and translation. If the depth settings of the two acquisitions differ, the two data sets are first scaled to the same scale by an interpolation algorithm and then registered as rigid bodies.
Similarity measures for images are generally classified as intensity-based methods, feature-based methods, and/or point-cloud-based methods. Taking the intensity-based method as an example, assume that a pixel point X_i in one set of images has image intensity f(X_i) and a pixel point Y_i in the other set has image intensity g(Y_i). The mapping between the two sets of images can then be expressed as:

Y_i = T(X_i), i = 1, ..., N

where T is the coordinate transformation being estimated and N is the number of pixel points. Meanwhile, when the minimum Sum of Absolute Differences (SAD) is used for the similarity measurement, the similarity metric function between the two sets of data may be defined as

SAD = (1/N) * sum_{i=1}^{N} |f(X_i) - g(T(X_i))|
In addition, similar similarity metric functions can be defined using the Sum of Squared Differences (SSD) of gray levels, and so on. The intensity here may be, besides gray-level brightness, a gradient, a gray-level distribution, or a higher-order combination of gray level and gradient such as MCC (maximum correntropy criterion), NGF (normalized gradient fields), LC2 (linear correlation of linear combination), GOA (gradient orientation alignment), or MI (mutual information).
Taking a feature-based registration method as an example: first, feature points and their descriptors are extracted from the pre-operative and post-operative ultrasound images. Feature points generally have certain properties such as translation invariance, rotation invariance, scale invariance, and insensitivity to illumination and modality; these properties are determined by the feature-point extraction method. Descriptors of the feature points are then computed, e.g., from neighborhood gradient histograms, neighborhood autocorrelation, or gray levels; algorithms such as MIND (modality independent neighbourhood descriptor), SIFT (scale-invariant feature transform), SURF (speeded-up robust features), SSC (self-similarity context descriptor), MILBP (local binary patterns), and HOG (histogram of oriented gradients) may specifically be adopted. Next, the feature points of the pre-operative ultrasound image are matched with those of the post-operative ultrasound image, where a match means that a feature point of one image has the shortest feature distance to some point among all feature points of the other image. Methods of measuring the feature distance include SAD (minimum sum of absolute differences), SSD (minimum sum of squared differences), cosine distance, Hamming distance, and the like.
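A minimal sketch of the nearest-descriptor matching step described above (hypothetical names, not the patent's implementation; descriptors are plain lists, and SSD serves as the feature distance):

```python
def feature_distance(a, b):
    """SSD between two descriptor vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def match_features(desc_pre, desc_post):
    """For each pre-op descriptor, find the index of the nearest post-op one."""
    matches = []
    for i, d in enumerate(desc_pre):
        j = min(range(len(desc_post)),
                key=lambda k: feature_distance(d, desc_post[k]))
        matches.append((i, j))
    return matches

pre  = [[1.0, 0.0], [0.0, 1.0]]
post = [[0.1, 0.9], [0.9, 0.1]]   # same two features, slightly perturbed, swapped
assert match_features(pre, post) == [(0, 1), (1, 0)]
```

Real pipelines usually add a mutual-consistency or ratio test to discard ambiguous matches before estimating the transform from the matched pairs.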
Further, the optimizer algorithm includes, without limitation, gradient descent, stochastic gradient descent, and the like; the mapping may be represented by a matrix, Euler angles, quaternions, etc.
In addition to the above-described automatic registration algorithm, a semi-automatic registration (i.e., interactive registration) embodiment may be used to perform the image registration operation.
Taking rigid-body registration as an example, the 3x3 block in the upper-left corner of the mapping matrix A is an orthogonal rotation matrix. The most direct scheme is to interactively select 4 or more pairs of corresponding points on the two sets of three-dimensional ultrasound data and solve for the optimal mapping matrix A by least-squares fitting. Alternatively, one section may be selected on each of the two sets of three-dimensional ultrasound data, a one-to-one correspondence between the two sections established through fused display, and a corresponding point pair outside the two known sections then selected in the two data sets. These two interactive registration methods are given only as easy-to-implement embodiments and are not intended to be limiting.
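The point-pair least-squares idea can be illustrated in two dimensions, where the optimal rotation has a closed form (the patent's three-dimensional case with 4+ point pairs would typically use an SVD-based solver instead). The following is an assumed sketch, not the patent's implementation:

```python
from math import atan2, cos, sin, isclose

def fit_rigid_2d(src, dst):
    """Least-squares rigid fit: rotation angle and translation mapping src -> dst."""
    n = len(src)
    cx_s = sum(p[0] for p in src) / n; cy_s = sum(p[1] for p in src) / n
    cx_d = sum(p[0] for p in dst) / n; cy_d = sum(p[1] for p in dst) / n
    num = den = 0.0
    for (xs, ys), (xd, yd) in zip(src, dst):
        ax, ay = xs - cx_s, ys - cy_s      # centered source point
        bx, by = xd - cx_d, yd - cy_d      # centered target point
        num += ax * by - ay * bx
        den += ax * bx + ay * by
    theta = atan2(num, den)                # optimal rotation angle
    tx = cx_d - (cos(theta) * cx_s - sin(theta) * cy_s)
    ty = cy_d - (sin(theta) * cx_s + cos(theta) * cy_s)
    return theta, (tx, ty)

# Synthetic check: rotate known points by 30 degrees, translate, then recover.
ang, t = 0.5235987755982988, (2.0, -1.0)
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
dst = [(cos(ang) * x - sin(ang) * y + t[0], sin(ang) * x + cos(ang) * y + t[1])
       for x, y in src]
theta, (tx, ty) = fit_rigid_2d(src, dst)
assert isclose(theta, ang) and isclose(tx, 2.0) and isclose(ty, -1.0)
```

With noisy point pairs the same formula yields the least-squares optimum; in 3D the rotation is instead recovered from the SVD of the cross-covariance of the centered point sets (the Kabsch/Umeyama method).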
In step S230, the result of the registration is automatically evaluated to obtain an evaluation result.
The automatic evaluation may be performed based on a plurality of evaluation criteria, and several exemplary evaluation criteria will be shown below, but the embodiment of the present invention is not limited to any one of the evaluation criteria, as long as the generated evaluation result can represent the quality of the registration effect.
For example, for an intensity-based registration method, the automatic evaluation may be based on the value of the similarity metric function between the registered pre-operative and post-operative ultrasound images. This value is closely related to the registration effect: when registration is poor, the similarity metric is often large. Illustratively, the similarity metric may be based on image intensity (e.g., gray level, gradient, a linear combination of gray level and gradient, a vessel probability map, etc.) or on image features (e.g., SIFT (scale-invariant feature transform), SURF (speeded-up robust features), MIND, SSC, etc.).
Further, as an example, for a registration method based on feature-point matching, the automatic evaluation may be based on the number of matched feature-point pairs between the registered pre-operative and post-operative ultrasound images; for a segmentation-based registration method, the automatic evaluation may be based on the degree of overlap of the same anatomical structure between the registered pre-operative and post-operative ultrasound images. If, after mapping by the registration matrix, a target in the pre-operative ultrasound image overlaps well with the corresponding target in the post-operative ultrasound image, the registration succeeded, and vice versa; the registration quality can therefore be judged by measuring the overlap of the targets before and after registration.
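The target-overlap criterion described above is commonly quantified with the Dice coefficient; a minimal hypothetical sketch, with binary masks represented as sets of voxel coordinates:

```python
def dice(mask_a, mask_b):
    """Dice overlap of two binary masks given as sets of voxel coordinates."""
    inter = len(mask_a & mask_b)
    return 2.0 * inter / (len(mask_a) + len(mask_b))

target_pre  = {(x, y) for x in range(4) for y in range(4)}      # 16 voxels
target_post = {(x, y) for x in range(1, 5) for y in range(4)}   # shifted by 1
assert dice(target_pre, target_pre) == 1.0      # perfect alignment
assert dice(target_pre, target_post) == 0.75    # partial overlap
```

A registration could then be accepted when the Dice value of the mapped anatomical structure exceeds a preset minimum threshold.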
In one embodiment, the system automatically evaluates whether the registration quality meets a preset minimum criterion. If the evaluation result does not satisfy the predetermined criteria, the method may return to step S210, and select another ultrasound image from the pre-operation ultrasound image and/or the post-operation ultrasound image for registration again.
In addition, in one embodiment, the system can automatically evaluate the results of the registration based on image quality factors of the pre-operative and post-operative ultrasound images. For example, in one embodiment, this may be done based on one or more of image quality factors such as image acquisition time, image content, image vessel sharpness, and image signal-to-noise ratio.
Further, if other unregistered pre-operative and/or post-operative ultrasound images are detected, the user may first be asked whether to load a new image; if the user agrees, the method returns to step S210, where a new pre-operative ultrasound image and post-operative ultrasound image are obtained and registered. If the user does not wish to change images, or no other unregistered pre-operative or post-operative ultrasound images are available, the user may be prompted to change the registration method, e.g., from automatic registration to interactive or manual registration.
In another embodiment, steps S210 to S230 may be performed repeatedly until all permutations of the pre-operative and post-operative ultrasound images have been traversed and the evaluation result of each image pair recorded; then, in step S240, only the evaluation result of the best-registered image pair may be output.
In step S240, information indicating the evaluation result is output.
In the embodiment of the present invention, the information indicating the evaluation result includes, but is not limited to, one or more of a score value, an indicator light, a sound, a rating, or a character.
Illustratively, red, yellow, and green (or any three colors) may indicate, respectively, that the registration result is unacceptable, that it requires further user confirmation, and that it is acceptable. The registration quality may be represented by a score, e.g., 5 for high quality and 1 for poor quality; or by a grade such as S, A, B, C, D, with S the best and D unacceptable. Text may prompt the user with the evaluation result, e.g., indicating the registration effect or suggesting a change of registration mode; a sound may likewise be played to prompt the user. The information indicating the evaluation result may be output using a single presentation method or a combination of several.
In one embodiment, the method further comprises: and displaying the registration result.
The registered pre-operative and post-operative ultrasound images can be displayed fused, or their MPR (multiplanar reconstruction) sections can be displayed separately.
As an example, when the registration is performed based on a three-dimensional ultrasound contrast image, the displayed registration result may be three-dimensional ultrasound tissue image data, three-dimensional ultrasound contrast image data, or a combination of both.
In one embodiment, the image pairs corresponding to the displayed evaluation results may also be displayed in association, for example the image pairs for all evaluation results, or only those whose evaluation results satisfy specific requirements.
For example, a first sub-image may be taken from one volume of ultrasound images in the image pair corresponding to the evaluation result (e.g., the pre-operative or post-operative image), and a second sub-image corresponding to it obtained from the other volume of the pair based on the registration mapping relationship between the pre-operative and post-operative ultrasound images of that pair; the first and second sub-images are then displayed simultaneously. Each sub-image may be part of, or all of, the corresponding volume of ultrasound images (in the latter case the sub-image is that volume itself), and may be three-dimensional or two-dimensional. For example, in one embodiment, the first and second sub-images may each be one slice of the corresponding volume of ultrasound images.
In one embodiment, a tumor region in the pre-operative ultrasound image and an ablation region in the post-operative ultrasound image may be segmented, and a safety margin for the ablation procedure generated from the segmented tumor region. Thereafter, based on the registration result, the segmented tumor three-dimensional volume and the safety margin may be mapped from the pre-operative ultrasound image into the post-operative three-dimensional image, so that the tumor, the safety margin, and the ablation lesion are displayed simultaneously.
In embodiments of the present invention, any suitable method may be used to segment the tumor region in the pre-operative ultrasound image and the ablation region in the post-operative ultrasound image. In one embodiment, an automatic segmentation method may be used, e.g., one or more of a random-walk model, region growing, graph cuts, pattern recognition, Markov random fields, adaptive thresholding, and the like. In other embodiments, a manual segmentation method may be adopted: the tumor/lesion edges are delineated on several two-dimensional slices of the three-dimensional data and interpolated between slices, or delineated on every two-dimensional slice, and the three-dimensional volume and shape of the tumor or lesion are then generated from the two-dimensional edges.
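Of the automatic methods listed, region growing is the simplest to sketch. The following hypothetical example (names and data are illustrative only) grows a lesion region from a seed pixel over a 2D image given as nested lists; the patent's data would be three-dimensional, but the algorithm is identical with 6-connected neighbors:

```python
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from `seed`, adding 4-connected pixels whose intensity
    differs from the seed intensity by at most `tol`."""
    h, w = len(image), len(image[0])
    base = image[seed[0]][seed[1]]
    region, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and (nr, nc) not in region
                    and abs(image[nr][nc] - base) <= tol):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region

img = [[0, 0, 0, 0],
       [0, 9, 9, 0],
       [0, 9, 9, 0],
       [0, 0, 0, 0]]
lesion = region_grow(img, (1, 1), tol=1)
assert lesion == {(1, 1), (1, 2), (2, 1), (2, 2)}
```

In practice the seed could come from a user click or from the centroid of a detected high-enhancement area in the contrast image.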
After the tumor region is segmented from the pre-operative ultrasound image, a safety margin for the tumor ablation procedure is generated from it. The safety margin is defined as the boundary obtained by expanding the tumor region outward by a certain distance (e.g., about 5 mm). The generation method is not limited to any one approach. For example, a binary image may be generated from the segmented tumor three-dimensional volume and a dilation algorithm applied to it to expand the tumor outward by the set distance, the edge of the expanded volume being the safety margin. Alternatively, normal vectors of the edge points may be computed from the three-dimensional tumor shape and the boundary extended outward along the normals. A distance-transform-based method may also be employed, in which a distance transform is applied to the tumor segmentation and the surface where the distance equals the set safety-margin distance is taken as the safety margin. These methods are exemplary only; any other suitable method may be used to generate the safety margin.
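The dilation-based margin generation can be sketched as follows (a hypothetical 2D version with voxel-coordinate sets; a real implementation would operate on a 3D binary volume, typically via a morphological dilation or Euclidean distance transform):

```python
def safety_margin(tumor, margin):
    """All voxels within Euclidean distance `margin` of the tumor (tumor
    included); the outer edge of this set is the safety boundary."""
    expanded = set()
    for (tx, ty) in tumor:
        r = int(margin)
        for dx in range(-r, r + 1):
            for dy in range(-r, r + 1):
                if dx * dx + dy * dy <= margin * margin:
                    expanded.add((tx + dx, ty + dy))
    return expanded

tumor = {(0, 0)}                       # single-voxel "tumor"
grown = safety_margin(tumor, margin=1)
assert grown == {(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)}
```

With a 5 mm margin and known voxel spacing, `margin` would be set to 5 mm divided by the voxel size along each axis.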
In one embodiment, segmentation may be performed before registration; for example, segmentation and safety-margin generation may be done once the pre-operative and post-operative ultrasound images have been acquired but not yet registered, and the segmented target region may be used to judge image quality, so that images of better quality are selected for registration. Alternatively, the pre-operative ultrasound image may be segmented as soon as it is acquired, and the post-operative ultrasound image segmented after it is acquired.
In another embodiment, the segmentation is performed after the registration, e.g. only the ultrasound image whose evaluation result meets a predetermined criterion may be segmented and a safety margin generated.
After the segmentation results for the tumor and the ablation lesion and the safety margin are obtained, the correspondence between the two sets of data can be established from the registration matrix; that is, for each pixel of the pre-operative ultrasound image the corresponding point in the post-operative ultrasound image is obtained (or vice versa). Based on the registration result, the tumor three-dimensional volume and safety margin segmented in one data set can thus be mapped into the other. Because the post-operative image after ablation shows only the ablation lesion and no longer contains a tumor region, it cannot display the useful information of the original tumor region; mapping the tumor volume segmented from the pre-operative ultrasound image into the post-operative ultrasound image through this mapping relationship allows the ablation lesion, the tumor, and the safety margin to be displayed simultaneously, so that the user can judge intuitively whether the ablation procedure covered the tumor region and its safety margin.
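Mapping segmented voxels through the registration matrix amounts to a homogeneous-coordinate transform; a minimal sketch follows (the matrix A below is a hypothetical pure translation standing in for a real registration result):

```python
def map_point(matrix, point):
    """Apply a 4x4 homogeneous registration matrix to a 3D point."""
    x, y, z = point
    v = (x, y, z, 1.0)
    out = [sum(matrix[r][c] * v[c] for c in range(4)) for r in range(4)]
    return (out[0], out[1], out[2])

# Hypothetical rigid registration: pure translation by (5, 0, -2) voxels.
A = [[1, 0, 0, 5],
     [0, 1, 0, 0],
     [0, 0, 1, -2],
     [0, 0, 0, 1]]
tumor_voxels_pre = [(10, 20, 30), (11, 20, 30)]
tumor_voxels_post = [map_point(A, p) for p in tumor_voxels_pre]
assert tumor_voxels_post == [(15.0, 20.0, 28.0), (16.0, 20.0, 28.0)]
```

Applying this to every voxel of the segmented tumor volume and safety margin places both in the coordinate system of the post-operative image for joint display with the ablation lesion.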
An exemplary interface for displaying registration effects and evaluation results is shown in fig. 3. On the main interface 301, the three upper windows 302 display the pre-operative ultrasound image as three mutually orthogonal planes 1A, 1B, and 1C; the three lower windows 303 correspondingly display the three mutually orthogonal planes 2A, 2B, and 2C of the post-operative ultrasound image. In these windows the user may perform segmentation, interactive segmentation, registration, interactive registration, adjustment of slice-view targets, plane translation, and so on. After the user registers the pre-operative and post-operative ultrasound images, the evaluation result is output via indicator lights in the interface 304 at the upper right: if the registration effect is acceptable, the left signal light turns on; if the registration quality is unacceptable, the right signal light turns on and the text "interactive registration suggested" is output in the text display area 305 below. The interface 306 indicates which of A/B/C is the currently active plane.
According to the medical image analysis method provided by this embodiment of the invention, the registration effect can be evaluated automatically and information representing the evaluation result output, so that the user understands the registration effect more intuitively and the evaluation of the ablation outcome becomes more intuitive and convenient.
A schematic flow chart diagram of a medical image analysis method 400 according to another embodiment of the present application is described below with reference to fig. 4. As shown in fig. 4, the medical image analysis method 400 may include the following steps:
in step S410, a plurality of pre-operation ultrasonic images and a plurality of post-operation ultrasonic images are respectively obtained for a target area;
in step S420, registering a plurality of different sets of the preoperative ultrasound images and the postoperative ultrasound images;
in step S430, automatically evaluating the result of the registration to obtain an evaluation result; and
in step S440, at least one set of pre-operative ultrasound images and post-operative ultrasound images with the best evaluation result is displayed.
Steps S410 to S440 are respectively similar to steps S210 to S240 of the method 200 described with reference to fig. 2. Unlike the method 200, the medical image analysis method of this embodiment acquires multiple groups of pre-operative and post-operative ultrasound images and registers multiple different groups to obtain multiple registration results; in step S430, these registration results are automatically evaluated, and the evaluation result of each image group is saved in association with the IDs of the images. Then, in step S440, rather than outputting information indicating the evaluation results, at least one group of pre-operative and post-operative ultrasound images with the best registration effect is displayed directly to the user.
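The traverse-and-pick-best strategy of method 400 can be sketched generically (all names and the score table below are hypothetical stand-ins for actual registration and evaluation routines):

```python
from itertools import product

def best_pair(pre_images, post_images, register, evaluate):
    """Try every (pre, post) combination, keep the pair whose registration
    scores best under `evaluate` (higher = better)."""
    results = []
    for pre_img, post_img in product(pre_images, post_images):
        mapping = register(pre_img, post_img)
        results.append((evaluate(pre_img, post_img, mapping), pre_img, post_img))
    return max(results, key=lambda r: r[0])

# Toy stand-ins: "images" are labels, scores come from a fixed table.
scores = {("pre1", "postA"): 0.4, ("pre1", "postB"): 0.9,
          ("pre2", "postA"): 0.7, ("pre2", "postB"): 0.2}
register = lambda pre_img, post_img: None                 # registration stub
evaluate = lambda pre_img, post_img, m: scores[(pre_img, post_img)]
score, pre, post = best_pair(["pre1", "pre2"], ["postA", "postB"],
                             register, evaluate)
assert (score, pre, post) == (0.9, "pre1", "postB")
```

In the real method the score would be a registration-quality measure (e.g., a similarity metric or target overlap), and the winning pair's fused display would be shown to the user in step S440.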
Further, in one embodiment, after each registration is completed, it is checked whether there are still preoperative ultrasound images or postoperative ultrasound images that have not been registered until all permutations of preoperative ultrasound images and postoperative ultrasound images are traversed.
According to the medical image analysis method provided by this embodiment of the invention, the pre-operative and post-operative ultrasound images for registration can be selected automatically, the registration results of multiple image groups evaluated, and the group of pre-operative and post-operative ultrasound images with the best registration result output to the user, so that the user need neither select registration images manually nor evaluate the registration results, making the evaluation of the ablation outcome more convenient.
Referring to fig. 5, fig. 5 shows a schematic structural diagram of an ultrasound imaging apparatus 500 provided in an embodiment of the present invention, the ultrasound imaging apparatus includes an ultrasound probe 501, a transmitting circuit 502, a receiving circuit 503, a processor 504, and a display 505, and the transmitting circuit 502 and the receiving circuit 503 may be connected to the ultrasound probe 501 through a transmitting/receiving selection switch 506. The ultrasound imaging apparatus 500 may be used to implement the medical image analysis method 200 or the medical image analysis method 400 described above.
In the ultrasound imaging process, the transmitting circuit 502 sends a delay-focused transmit pulse of a certain amplitude and polarity to the ultrasound probe 501 through the transmit/receive selection switch 506, exciting the ultrasound probe 501 to transmit an ultrasonic beam toward a target tissue (e.g., an organ, tissue, or blood vessel in a human or animal body), in embodiments of the present invention toward a tumor or ablation lesion. After a certain delay, the receiving circuit 503 receives the echo of the ultrasonic beam through the transmit/receive selection switch 506 to obtain an ultrasonic echo signal, which is then sent to the processor 504 for related processing to obtain the desired ultrasound image. When a tumor or ablation lesion is imaged in embodiments of the invention, the ultrasonic beam is transmitted to it continuously, producing a sequence of multiple frames of ultrasound images including the tumor or ablation lesion. The phrase "transmitting an ultrasonic beam to a tumor or lesion" as used herein is not limited to transmitting only to the tumor or lesion itself; illustratively, transmitting an ultrasonic beam to a site that includes a tumor or lesion is considered consistent with transmitting an ultrasonic beam to the tumor or lesion as described herein.
The ultrasound probe 501 typically comprises an array of multiple elements. Each time an ultrasonic wave is transmitted, all of the elements of the ultrasound probe 501, or a subset of them, participate in the transmission. Each participating element is excited by the transmit pulse and emits an ultrasonic wave; the waves emitted by the elements superpose during propagation to form the synthesized ultrasonic beam transmitted toward the scan target.
The display 505 is coupled to the processor 504; for example, the processor 504 may be connected to the display 505 via an external input/output port, and the display can show the ultrasound images obtained by the processor 504. In addition, while displaying the ultrasound image, the display can provide a graphical interface for human-computer interaction, such as the interface shown in fig. 3. One or more controlled objects are arranged on the graphical interface, and a human-computer interaction device is provided for the user to input operation instructions controlling these objects, thereby executing the corresponding control operations. For example, icons displayed on the graphical interface can be operated via the human-computer interaction device to perform particular functions. In practice, the display may be a touch-screen display, and the apparatus may include one display or several.
The human-computer interaction device detects the user's input information, which may be a control instruction for transmitting and receiving ultrasound, an operation instruction for editing or annotating the ultrasound image, or another instruction type. Generally, operation instructions entered by the user for editing, annotating, or measuring on the ultrasound image are used to measure the target tissue. The human-computer interaction device may include one or more of a keyboard, a mouse, a scroll wheel, a trackball, a mobile input device (such as a mobile phone or another mobile device with a touch display screen), a multifunctional knob, and the like; the corresponding external input/output port may accordingly be a wireless communication module, a wired communication module, or a combination of the two. The external input/output port may also be implemented on the basis of USB, a bus protocol such as CAN, and/or a wired network protocol.
In an embodiment of the present invention, the processor 504 is configured to process the ultrasound echo signal to obtain at least one volume of pre-operative ultrasound images and at least one volume of post-operative ultrasound images, and to register the pre-operative and post-operative ultrasound images; the processor 504 is further configured to automatically evaluate the result of the registration to obtain an evaluation result, and to output information representing the evaluation result.
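The patent leaves the registration and evaluation algorithms open (claims 8, 11, and 12 allow automatic or manual, rigid or non-rigid registration, and similarity-based evaluation). A minimal sketch of one possible realization, assuming translation-only rigid registration with normalized cross-correlation reused as the automatic evaluation score (the volume sizes and search range are made up for illustration):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation in [-1, 1]; higher means more similar."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def register_translation(fixed, moving, max_shift=2):
    """Exhaustive integer-shift search; returns the best shift and its NCC
    score, which doubles as the registration evaluation result."""
    best_shift, best_score = None, -2.0
    for dz in range(-max_shift, max_shift + 1):
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                shifted = np.roll(moving, (dz, dy, dx), axis=(0, 1, 2))
                score = ncc(fixed, shifted)
                if score > best_score:
                    best_shift, best_score = (dz, dy, dx), score
    return best_shift, best_score

# Synthetic pre-/post-operative volume pair differing by a known probe shift
rng = np.random.default_rng(0)
pre = rng.random((16, 16, 16))
post = np.roll(pre, (1, 0, -1), axis=(0, 1, 2))

shift, score = register_translation(pre, post)
assert shift == (-1, 0, 1)   # the recovered shift undoes the simulated one
assert score > 0.99          # high score -> registration evaluated as good
```

In practice the processor would use a full rigid or non-rigid transform and compare the score against a preset standard before outputting the evaluation result, as described in the claims.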
Illustratively, the transmission/reception selection switch 506 may also be referred to as a transmission/reception controller or the like, to which the present invention is not limited.
In addition, an embodiment of the invention provides a computer storage medium on which a computer program is stored. One or more computer program instructions may be stored on the computer-readable storage medium, and the processor may execute those instructions to implement the functions of the embodiments described herein and/or other desired functions, for example the corresponding steps of the medical image analysis method according to the embodiments of the present invention. Various application programs and various data, for example data used and/or generated by those application programs, may also be stored in the computer-readable storage medium.
For example, the computer storage medium may include a memory card, a storage component of a tablet computer, a hard disk of a personal computer, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a portable compact disc read-only memory (CD-ROM), a USB memory, or any combination of the above storage media.
In summary, the medical image analysis method, ultrasound imaging apparatus, and computer storage medium according to the embodiments of the present invention can automatically evaluate the registration effect and output information representing the evaluation result, so that the user can intuitively understand the registration effect.
Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the foregoing illustrative embodiments are merely exemplary and are not intended to limit the scope of the invention thereto. Various changes and modifications may be effected therein by one of ordinary skill in the pertinent art without departing from the scope or spirit of the present invention. All such changes and modifications are intended to be included within the scope of the present invention as set forth in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the units is only one logical functional division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another device, or some features may be omitted, or not executed.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the invention and aiding in the understanding of one or more of the various inventive aspects. However, the method of the present invention should not be construed to reflect the intent: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
It will be understood by those skilled in the art that all of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where such features are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functionality of some of the modules according to embodiments of the present invention. The present invention may also be embodied as apparatus programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, and so on does not indicate any ordering; these words may be interpreted as names.

Claims (27)

1. A method of medical image analysis, the method comprising:
acquiring, for a target area, at least one volume of pre-operative ultrasound images and at least one volume of post-operative ultrasound images, respectively;
selecting at least one set of image pairs to be registered for evaluation from the at least one volume of pre-operative ultrasound images and the at least one volume of post-operative ultrasound images, wherein each set of image pairs includes at least one volume of pre-operative ultrasound images and at least one volume of post-operative ultrasound images;
registering the pre-operative ultrasound image and the post-operative ultrasound image of the at least one set of image pairs;
automatically evaluating the result of the registration to obtain an evaluation result; and
outputting information representing the evaluation result.
2. The medical image analysis method according to claim 1, wherein:
selecting at least one image pair from the at least one volume of pre-operative ultrasound images and the at least one volume of post-operative ultrasound images comprises: selecting a plurality of sets of image pairs from the at least one volume of pre-operative ultrasound images and the at least one volume of post-operative ultrasound images;
and the method further comprises:
traversing the plurality of sets of image pairs, performing registration multiple times on the pre-operative ultrasound images and the post-operative ultrasound images in the plurality of sets of image pairs, and obtaining a plurality of evaluation results; and
selecting at least one set of image pairs with the best evaluation result for display.
3. The medical image analysis method according to claim 1, wherein the pre-operative ultrasound image and the post-operative ultrasound image include ultrasound contrast images.
4. The method according to claim 3, wherein the ultrasound contrast image comprises a tissue image and a contrast image with pixel points in a one-to-one correspondence, and the registration is performed based on the tissue image and/or the contrast image.
5. The medical image analysis method according to claim 1, wherein the pre-operative ultrasound image and/or the post-operative ultrasound image comprises a plurality of continuously scanned three-dimensional images, or a plurality of three-dimensional or four-dimensional images scanned intermittently during different periods.
6. The medical image analysis method of claim 1, wherein the registering at least one set of image pairs comprises:
selecting ultrasound images of better quality from the pre-operative ultrasound images and the post-operative ultrasound images, respectively, for registration; or
randomly selecting an ultrasound image from each of the pre-operative ultrasound images and the post-operative ultrasound images for registration.
7. The medical image analysis method according to claim 1, further comprising:
and if the evaluation result does not meet the preset standard, selecting other ultrasonic images from the preoperative ultrasonic image and/or the postoperative ultrasonic image for registering again.
8. The medical image analysis method according to claim 1, wherein the registration means comprises automatic registration, interactive registration and/or manual registration.
9. The medical image analysis method according to claim 8, further comprising:
and if the evaluation result does not meet the preset standard, prompting the user to change the registration mode.
10. The medical image analysis method according to claim 9, further comprising:
performing automatic registration first, and, if the evaluation result does not meet the preset standard, prompting the user to perform interactive registration or manual registration.
11. The medical image analysis method according to claim 1, wherein the registration comprises rigid body registration and/or non-rigid body registration.
12. The medical image analysis method according to claim 1, wherein the automatic evaluation is performed based on the value of a similarity metric function between the pre-operative ultrasound image and the post-operative ultrasound image being registered.
13. The method of claim 12, wherein the similarity metric function comprises an image intensity-based metric and/or an image feature-based metric.
14. The medical image analysis method according to claim 1, wherein the automatic evaluation is performed based on the number of matched feature-point pairs between the pre-operative ultrasound image and the post-operative ultrasound image being registered.
15. The medical image analysis method according to claim 1, wherein the automatic evaluation is performed based on a degree of coincidence of the same anatomical structure between the pre-operative ultrasound image and the post-operative ultrasound image subjected to the registration.
16. The medical image analysis method according to claim 1, wherein the information indicating the evaluation result includes one or more of a score value, an indicator light, a sound, a rating, or a text.
17. The medical image analysis method according to claim 1, further comprising: segmenting a tumor region in the preoperative ultrasonic image and an ablation focus region in the postoperative ultrasonic image; and
generating a safe boundary for an ablation procedure based on the tumor region obtained by segmentation.
18. The medical image analysis method according to claim 17, wherein the segmentation is performed before the registration or performed on an ultrasound image whose evaluation result satisfies a predetermined criterion after the registration.
19. The medical image analysis method according to claim 17, further comprising: mapping the three-dimensional tumor volume and the safety boundary obtained by segmentation in the pre-operative ultrasound image into the post-operative three-dimensional image based on the registration result, so as to simultaneously display the tumor, the safety boundary, and the ablation focus.
20. The medical image analysis method according to claim 1, further comprising: displaying the image pair corresponding to the evaluation result in an associated manner.
21. The method for medical image analysis according to claim 20, wherein the associating and displaying the image pair corresponding to the evaluation result comprises:
obtaining a first sub-image from one volume of the ultrasound images in the image pair corresponding to the evaluation result;
obtaining a second sub-image corresponding to the first sub-image from the other volume of the ultrasound images in that image pair, based on the registration mapping relationship between the pre-operative ultrasound image and the post-operative ultrasound image in that image pair; and
displaying the first sub-image and the second sub-image simultaneously.
22. The medical image analysis method according to any one of claims 1 to 20, wherein the automatic evaluation is performed based on image quality factors.
23. The method of claim 22, wherein the image quality factors include one or more of image acquisition time, image content, image vessel sharpness, and image signal-to-noise ratio.
24. A method of medical image analysis, the method comprising:
acquiring, for a target area, a plurality of pre-operative ultrasound images and a plurality of post-operative ultrasound images, respectively;
selecting a plurality of groups of image pairs to be registered and evaluated from the plurality of preoperative ultrasonic images and the plurality of postoperative ultrasonic images, wherein each group of image pairs comprises at least one volume of preoperative ultrasonic images and at least one volume of postoperative ultrasonic images;
registering the preoperative ultrasound image and the postoperative ultrasound image in the plurality of sets of image pairs;
automatically evaluating the result of the registration to obtain an evaluation result; and
displaying the pre-operative ultrasound image and the post-operative ultrasound image in at least one set of image pairs with the best evaluation result.
25. A method of medical image analysis, the method comprising:
acquiring, for a target area, at least one volume of pre-operative ultrasound images and at least one volume of post-operative ultrasound images, respectively;
registering at least one set of the pre-operative and post-operative ultrasound images;
automatically evaluating the result of the registration to obtain an evaluation result; and
outputting information representing the evaluation result.
26. An ultrasound imaging apparatus, comprising:
an ultrasonic probe;
a transmitting circuit configured to excite the probe to transmit ultrasound waves to a target area;
a receiving circuit configured to control the probe to receive the ultrasound echo returned from the target area to obtain an ultrasound echo signal;
a processor to:
processing the ultrasonic echo signal to obtain at least one volume of preoperative ultrasonic images and at least one volume of postoperative ultrasonic images;
selecting at least one set of image pairs to be registered for evaluation from the at least one volume of pre-operative ultrasound images and the at least one volume of post-operative ultrasound images, wherein each set of image pairs includes at least one volume of pre-operative ultrasound images and at least one volume of post-operative ultrasound images;
registering the pre-operative ultrasound image and the post-operative ultrasound image of the at least one set of image pairs;
automatically evaluating the result of the registration to obtain an evaluation result;
outputting information representing the evaluation result; and
a display for displaying the evaluation result and/or the at least one set of image pairs.
27. A computer storage medium on which a computer program is stored, wherein the computer program, when executed by a computer or a processor, implements the steps of the method according to any one of claims 1 to 25.
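Claims 12, 14, and 15 name three bases for the automatic evaluation: a similarity metric, the number of matched feature-point pairs, and the degree of coincidence of the same anatomical structure. As an illustrative sketch of the coincidence criterion only (the masks and their sizes below are made-up example data, not from the patent), the Dice coefficient of the structure's segmentation masks after registration is one common way to quantify it:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice overlap in [0, 1]; 1 means the two structures coincide exactly."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum() + 1e-12)

# Example: the same anatomical structure segmented in the registered
# pre-operative and post-operative images, misaligned by one pixel
pre_mask = np.zeros((8, 8), bool)
pre_mask[2:6, 2:6] = True        # 16 pixels
post_mask = np.zeros((8, 8), bool)
post_mask[3:7, 2:6] = True       # same shape, shifted down by 1 pixel

score = dice(pre_mask, post_mask)   # intersection 12 px -> 2*12 / 32 = 0.75
assert abs(score - 0.75) < 1e-9
```

An evaluation module could compare such a score against the preset standard of claims 7 and 9, falling back to another image pair or registration mode when it is too low.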
CN201910906391.6A 2019-09-24 2019-09-24 Medical image analysis method, ultrasonic imaging apparatus, and computer storage medium Pending CN112634191A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910906391.6A CN112634191A (en) 2019-09-24 2019-09-24 Medical image analysis method, ultrasonic imaging apparatus, and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910906391.6A CN112634191A (en) 2019-09-24 2019-09-24 Medical image analysis method, ultrasonic imaging apparatus, and computer storage medium

Publications (1)

Publication Number Publication Date
CN112634191A true CN112634191A (en) 2021-04-09

Family

ID=75282915

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910906391.6A Pending CN112634191A (en) 2019-09-24 2019-09-24 Medical image analysis method, ultrasonic imaging apparatus, and computer storage medium

Country Status (1)

Country Link
CN (1) CN112634191A (en)

Similar Documents

Publication Publication Date Title
CN110087550B (en) Ultrasonic image display method, equipment and storage medium
WO2019100212A1 (en) Ultrasonic system and method for planning ablation
RU2663649C2 (en) Segmentation of large objects from multiple three-dimensional views
US20130257910A1 (en) Apparatus and method for lesion diagnosis
US9504450B2 (en) Apparatus and method for combining three dimensional ultrasound images
WO2022027251A1 (en) Three-dimensional display method and ultrasonic imaging system
JP2010119850A (en) System, apparatus, and process for automated medical image segmentation using statistical model
JP2016539744A (en) Method and apparatus for providing blood vessel analysis information using medical images
KR102439769B1 (en) Medical imaging apparatus and operating method for the same
KR20150027637A (en) Method and Apparatus for registering medical images
CN107106128B (en) Ultrasound imaging apparatus and method for segmenting an anatomical target
CN115486877A (en) Ultrasonic equipment and method for displaying three-dimensional ultrasonic image
CN111281430A (en) Ultrasonic imaging method, device and readable storage medium
CN111836584B (en) Ultrasound contrast imaging method, ultrasound imaging apparatus, and storage medium
CN115317128A (en) Ablation simulation method and device
CN110087551A (en) A kind of fetal rhythm supersonic detection method and ultrasonic image-forming system
CN112568933A (en) Ultrasonic imaging method, apparatus and storage medium
KR20130010732A (en) Method and apparatus for generating 3d volume panorama based on a plurality of 3d volume images
KR20120028106A (en) 3d ultrasound system for extending view of image and method for operating 3d ultrasound system
CN109410170B (en) Image data processing method, device and equipment
JP4528247B2 (en) Ultrasonic diagnostic apparatus and ultrasonic image processing method
US20200305837A1 (en) System and method for guided ultrasound imaging
CN114375179A (en) Ultrasonic image analysis method, ultrasonic imaging system, and computer storage medium
CN112634191A (en) Medical image analysis method, ultrasonic imaging apparatus, and computer storage medium
CN115998334A (en) Ablation effect display method and ultrasonic imaging system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination