CN116137026A - Medical image processing device, medical image processing method, and storage medium - Google Patents


Info

Publication number
CN116137026A
Authority
CN
China
Prior art keywords
processing
medical image
valve
control unit
image data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211432902.3A
Other languages
Chinese (zh)
Inventor
赵龙飞
赵舜
青山岳人
薛晓
钟昀辛
肖其林
王艳华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Medical Systems Corp
Original Assignee
Canon Medical Systems Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2022166028A (published as JP2023073968A)
Application filed by Canon Medical Systems Corp filed Critical Canon Medical Systems Corp
Publication of CN116137026A
Legal status: Pending

Classifications

    • G06T 7/0012: Image analysis; inspection of images, e.g. flaw detection; biomedical image inspection
    • G06V 10/20: Arrangements for image or video recognition or understanding; image preprocessing
    • G06V 10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/26: Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
    • G06V 10/462: Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V 10/82: Recognition or understanding using pattern recognition or machine learning, using neural networks
    • G06T 2207/10088: Image acquisition modality; tomographic images; magnetic resonance imaging [MRI]
    • G06T 2207/10132: Image acquisition modality; ultrasound image
    • G06T 2207/20081: Special algorithmic details; training; learning
    • G06T 2207/20084: Special algorithmic details; artificial neural networks [ANN]
    • G06T 2207/30048: Subject of image; biomedical image processing; heart; cardiac
    • G06V 2201/03: Recognition of patterns in medical or anatomical images

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

Embodiments disclosed in the present specification and drawings relate to a medical image processing apparatus, a medical image processing method, and a storage medium. One problem the embodiments address is how to obtain more accurate morphological measurement results. The medical image processing apparatus according to an embodiment includes a control unit. The control unit executes a first process, comprising at least two different kinds of processing, on medical image data. The control unit then executes the first process again over a spatially reduced range determined from the result of the first execution.

Description

Medical image processing device, medical image processing method, and storage medium
The present application is based on and claims priority from Chinese patent application No. 202111351852.1, filed on November 16, 2021, and Japanese patent application No. 2022-166028, filed on October 17, 2022, both of which are incorporated herein by reference in their entirety.
Technical Field
Embodiments disclosed in the specification and the drawings relate to a medical image processing apparatus, a medical image processing method, and a storage medium.
Background
Heart valves are fundamental structures of the heart. They sit between the atria and the ventricles, or between the ventricles and the arteries, and their main function is to prevent backflow, ensuring that blood flows from the atria to the ventricles and from the ventricles to the aorta or pulmonary artery. Serious valve disease can be fatal.
Accurate valve morphology information is critical to treatments for valve disease, such as transcatheter aortic valve implantation (TAVI). One of the key clinical decisions in TAVI planning is the selection of the correct implant device and its size, and the accuracy of this selection depends largely on the accurate extraction of critical anatomical points and structures such as the valve commissures, the leaflet nadirs, the aortic root, and the aortic valve leaflets. However, accurately extracting valve information is very difficult: the valve lies inside the heart, is small, normally opens and closes continuously, and varies widely in shape and form. Extraction is harder still for unhealthy valves, such as calcified or incomplete valves, whose shapes become irregular. A more accurate solution for extracting valve morphology information is therefore needed.
Because valve shape and morphology can vary greatly between diseases and between individuals, accurately extracting valve morphology information is a challenging task. Most current algorithms extract the target information from the original volume region in a single pass rather than refining it repeatedly from coarse to fine, and they use no auxiliary information for refinement. Moreover, most automated valve information extraction addresses a single morphological task; for example, some automated algorithms only detect anatomical keypoints (landmarks), or only segment the aortic root or the aortic valve leaflets. As a result, they do not obtain accurate morphological information, especially for calcified or defective valves. Some prior techniques do propose coarse-to-fine segmentation or detection, but they use only a single segmentation or detection algorithm; no existing solution combines multiple tasks into one workflow.
One cause of these problems is thought to be that keypoint detection and object segmentation are performed separately, without considering the relationship between the keypoint detection results and the segmentation process. If the keypoint detection process and the segmentation process could be combined to produce more useful auxiliary information, or to provide more accurate morphological information, this would greatly help accurate valve measurement in therapy planning.
The above technical problems arise not only for heart valves but also for other complex organs, or complex parts of organs, of the human body.
Disclosure of Invention
One of the technical problems to be solved by the embodiments disclosed in the present specification and drawings is to obtain more accurate morphological measurement results. However, the technical problems to be solved by these embodiments are not limited to this one. The technical problems corresponding to the effects of the configurations described in the embodiments below may also be regarded as other technical problems.
The medical image processing apparatus of the embodiment includes a control unit. The control unit executes a first process, comprising at least two different kinds of processing, on the medical image data. The control unit then executes the first process again over a spatially reduced range based on the result of the first process.
Effects of the invention
According to the medical image processing apparatus, the medical image processing method, and the storage medium of the embodiments, more accurate morphological measurement results can be obtained.
Drawings
Fig. 1 is a block diagram showing an example of the configuration of the medical image processing apparatus according to embodiment 1.
Fig. 2 is a flowchart showing the processing executed by the medical image processing apparatus according to embodiment 1.
Fig. 3 is a diagram showing the flow and results of an example of the heart valve detection processing performed by the medical image processing apparatus according to embodiment 1.
Fig. 4A is a diagram for explaining an example of the first process executed in step S100 by the first processing function in the medical image processing apparatus according to embodiment 1.
Fig. 4B is a diagram for explaining an example of the second process executed in step S200 by the second processing function in the medical image processing apparatus according to embodiment 1.
Fig. 4C is a diagram for explaining an example of the adjusted first process executed in step S300 by the first processing function in the medical image processing apparatus according to embodiment 1.
Fig. 5A is a diagram for explaining a modification of the reinforcement processing performed by the first processing function in the medical image processing apparatus according to embodiment 1.
Fig. 5B is a diagram for explaining a modification of the reinforcement processing performed by the first processing function in the medical image processing apparatus according to embodiment 1.
Fig. 6A is a diagram for explaining a modification of the information enhancement processing performed by the second processing function in the medical image processing apparatus according to embodiment 1.
Fig. 6B is a diagram for explaining a modification of the information enhancement processing performed by the second processing function in the medical image processing apparatus according to embodiment 1.
Fig. 7A is a flowchart for explaining an example of the correction processing performed by the medical image processing apparatus according to embodiment 1.
Fig. 7B is a schematic diagram for explaining the procedure of the correction processing performed by the medical image processing apparatus according to embodiment 1.
Fig. 7C is a schematic diagram for explaining the procedure of the correction processing performed by the medical image processing apparatus according to embodiment 1.
Fig. 8 is a flowchart showing an example of the heart valve detection processing performed by the medical image processing apparatus according to embodiment 2.
Fig. 9 is a flowchart showing an example of the heart valve detection processing performed by the medical image processing apparatus according to embodiment 3.
Fig. 10 is a flowchart showing an example of the lung detection processing performed by the medical image processing apparatus according to embodiment 4.
Fig. 11 is a diagram showing the processing flow and results of an example of the processing performed by the medical image processing apparatus according to embodiment 5.
Detailed Description
Embodiments of the medical image processing apparatus, the medical image processing method, and the storage medium according to the present application are described in detail below with reference to the accompanying drawings. The medical image processing apparatus, medical image processing method, and storage medium according to the present application are not limited to the embodiments described below. In the following description, the same constituent elements are given common reference numerals, and duplicate description is omitted.
(embodiment 1)
First, the medical image processing apparatus according to embodiment 1 is described. The medical image processing apparatus of the present application may take the form of a medical image diagnostic apparatus, such as an ultrasonic diagnostic apparatus or an MRI (magnetic resonance imaging) apparatus, or the form of a workstation or the like.
Fig. 1 is a block diagram showing an example of the configuration of a medical image processing apparatus 1 according to embodiment 1.
As shown in Fig. 1, the medical image processing apparatus 1 according to the present embodiment mainly includes a control unit 10 and a memory 20. When the medical image processing apparatus 1 is incorporated in, for example, an ultrasonic diagnostic apparatus, it further includes an ultrasonic probe, a display, an input/output interface, an apparatus main body, and the like (not shown), and the control unit 10 and the memory 20 are communicably connected to them. Since the structures and functions of the ultrasonic probe, display, input/output interface, and apparatus main body are well known to those skilled in the art, their detailed description is omitted.
The memory 20 stores the various data necessary for the medical image processing apparatus 1 to execute its processing, such as the raw volume data to be processed, various data used and generated by the ultrasonic diagnostic apparatus, and image data for display.
The control unit 10 controls the entire medical image processing apparatus 1, and executes various processes to be executed by the medical image processing apparatus 1.
The control unit 10 includes a first processing function 100 and a parameter setting function 300. The first processing function 100 executes a first process, comprising at least two different kinds of processing, on the medical image data to be processed. The parameter setting function 300 sets the parameters of the first process executed by the first processing function 100. The control unit 10 executes the first process on the medical image data through the first processing function 100, then executes an adjusted first process, whose processing range is spatially narrowed, on the result of the first process, and thereby obtains an image representing the target region in the medical image data. The spatial narrowing of the processing range is realized, for example, through parameter setting by the parameter setting function 300.
The control unit 10 may further include a second processing function 200. In that case, before executing the adjusted first process, the control unit 10 executes a second process on the result of the first process through the second processing function 200, then executes the adjusted first process on the result of the second process, and obtains an image representing the target region in the medical image data based on the result of the adjusted first process.
The processing performed by the medical image processing apparatus 1 according to the present embodiment is described in detail with reference to Fig. 2.
Fig. 2 is a flowchart showing the processing executed by the medical image processing apparatus 1 according to embodiment 1.
As shown in Fig. 2, first, in step S1, the medical image data to be processed is input to the medical image processing apparatus 1.
Next, in step S100, the first processing function 100 of the control unit 10 executes the first process, which includes at least two different kinds of processing. Although not shown in the figure, before the first process is executed in step S100, the control unit 10 may set the parameters of the first process through the parameter setting function 300. As an example of the "first process including at least two different kinds of processing", Fig. 2 shows a plurality of processes, process A to process N, applied to the medical image data. The order of processes A to N is fixed, but their contents and number are only examples; some of them may be identical, or may be the same type of process with different parameters. The first process may be any process that includes at least two different kinds of processing of the medical image, and other embodiments are possible.
Next, in step S200, the second processing function 200 of the control unit 10 executes the second process on the result of the first process. As the second process, Fig. 2 shows an information enhancement process that enhances and fuses the information in the result of the first process. This information enhancement process is just one example; other embodiments of the second process are also possible.
Step S200 is a preferred embodiment; it is not required and may be omitted, in which case step S300 follows directly after step S100. In step S300, the control unit 10, through the first processing function 100, executes the adjusted first process, whose processing range is spatially narrowed, on the result of the first process. The adjustment of the first process is performed, for example, through parameter adjustment by the parameter setting function 300. Specifically, the adjusted first process spatially narrows the processing range, ensuring that it yields a more accurate result than the first process before adjustment. In Fig. 2, the adjusted first process is represented as processes A' to N', whose processing range is spatially narrowed compared with processes A to N. The adjustment may be realized by adjusting parameters, by fine-tuning the algorithm, or by updating the input data. The order of processes A' to N' is the same as that of processes A to N.
When the preferred embodiment of step S200 is executed, in step S300 the control unit 10 executes, through the first processing function 100, the adjusted first process with its spatially narrowed processing range on the result of the second process. That is, in step S200 the control unit 10 first executes the information enhancement process on the result of the first process through the second processing function 200, and then, in step S300, executes the adjusted first process on the information-enhanced result through the first processing function.
After step S300, the control unit 10 obtains an image representing the target region in the medical image data based on the result of the adjusted first process. Processes A to N (or A' to N') constitute a combined process composed of a plurality of processes, and, as described above, this combined process has been performed twice, from coarse to fine, through steps S100 and S300, so a more accurate morphological measurement result can be obtained.
Next, in step S400, the control unit 10 of the medical image processing apparatus 1 may operate the second processing function 200 again to execute the information enhancement process on the result of step S300, and operate the first processing function 100 again to execute a further-adjusted first process on the re-enhanced result. This further-adjusted first process may be expressed, for example, as processes A'' to N''; its parameters may again be adjusted based on the result of the re-executed second process, so that the processing range is spatially narrowed compared with the two previous executions of the first process. The repetition of the second and first processes indicated by step S400 may be performed as many times as necessary, forming an iterative procedure that yields increasingly accurate morphological measurement results; the processing corresponding to S400 is therefore shown with an ellipsis in Fig. 2.
The parameters of the first process may be set not only based on experiments or feedback performed in advance according to the required coarse-to-fine progression of results, but also based on other factors such as the number of repetitions of the second and first processes. The adjustment of the first process may be realized by adjusting parameters, but parameter adjustment is only one example; the adjustment may also be realized by, for example, fine-tuning the algorithm or updating the input data, or by any combination of these factors.
As described above, since the combined process composed of a plurality of kinds of processing has already been performed twice, from coarse to fine, in steps S100 to S300, a more accurate morphological measurement result can be obtained; the processing in step S400 is therefore not essential and may be omitted.
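For illustration only, the overall control flow of steps S100 to S400 can be sketched in Python as below. This is a minimal, non-limiting sketch: the callables stage_fns (processes A to N) and enhance_fn (the information enhancement of the second process) are hypothetical placeholders, and the fixed iteration count stands in for the "repeat as needed" of step S400.

    def coarse_to_fine(volume, stage_fns, enhance_fn, n_iterations=2):
        # Run the combined first process (processes A to N, in fixed order),
        # then narrow the volume via the second process and run it again.
        context = {}
        for iteration in range(n_iterations):
            for stage in stage_fns:               # process A ... process N
                context = stage(volume, context)  # e.g. keypoint detection, segmentation
            if iteration < n_iterations - 1:
                # Information enhancement (steps S200/S400): fuse the results
                # and return a spatially reduced volume for the next pass.
                volume = enhance_fn(volume, context)
        return context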
Next, in step S500, the control unit 10 of the medical image processing apparatus 1 may perform correction processing on the obtained image representing the target region, using the information obtained in the previous steps. This correction, which makes the obtained image more suitable for subsequent processing, is not essential to the invention.
Next, in step S600, medical measurement is completed using the corrected data.
According to the present invention, a more accurate morphological measurement result is obtained by performing a combined process composed of a plurality of processes on the medical image data from coarse to fine.
The processing of the medical image processing apparatus according to the present embodiment is described in detail below, taking a heart valve as an example, with reference to Figs. 3 to 7C.
Fig. 3 is a diagram showing the flow and results of an example of the heart valve detection process performed by the medical image processing apparatus according to embodiment 1.
As shown in Fig. 3, first, in step S1, the original volume region of the medical image data of the heart to be processed is input to the medical image processing apparatus 1.
Next, in step S100, as an example of the first process, the first processing function 100 of the control unit 10 executes a combined process composed of two different processes, S101 and S102. That is, as the first process, a keypoint (landmark) detection process is performed on the medical image data (step S101), and a segmentation process of the target region is performed based on the result of the keypoint detection process (step S102).
Specifically, in step S101, the control unit 10 performs coarse keypoint detection on the original volume region of the cardiac medical image data. For example, the first processing function 100 of the control unit 10 may roughly locate user-defined anatomical keypoints such as the commissure points, the cusp nadirs, the left coronary ostium lower edge point, and the right coronary ostium lower edge point.
In step S102, the control unit 10 performs segmentation of the processing target region using the keypoints detected in step S101. For example, in S102 the first processing function 100 of the control unit 10 segments the aortic root using the roughly located anatomical keypoints obtained in step S101 and obtains a mask image of the aortic root. The mask image includes, for example, the sinus of Valsalva (SOV), the left ventricular outflow tract (LVOT), the ascending aorta (AAO), and the associated junction points.
The specific anatomical keypoints in step S101 and the aortic root mask image in step S102 are merely examples; they are not fixed and can be customized as desired.
Next, in step S200, the second processing function 200 of the control unit 10 executes the information enhancement process. As an example of the second process, the second processing function 200 uses the rough anatomical keypoints acquired in step S101 and the aortic root segmentation result acquired in step S102 to perform region restriction and select a more accurate VOI (volume of interest). Region restriction is one example of information enhancement; it may consist of, for example, cutting the segmentation result image acquired in step S100. The information enhancement process is of course not limited to cutting; its details are described later with reference to Figs. 6A and 6B.
Further operations, such as checking whether the acquired keypoints and mask images are consistent, may also be performed in step S200. As described above, step S200 is not essential and may be omitted.
Next, in step S300, the control unit 10 causes the first processing function 100 to execute the adjusted first process, with its spatially narrowed processing range, on the result of the second process, and obtains an image representing the heart valve, the target region in the medical image data, based on the result of the adjusted first process. That is, as the re-executed, adjusted first process, the control unit 10, through the first processing function 100, performs accurate keypoint detection on the information-enhanced medical image data resulting from the second process (step S301), and performs a more accurate segmentation, for example of the aortic valve, based on the result of the accurate keypoint detection (step S302).
Specifically, in step S301 the control unit 10, through the first processing function 100, locates more accurate anatomical keypoints using the new morphological information produced by the information enhancement in step S200. In step S302, it obtains a mask image of the aortic valve, including for example its three leaflets, by performing aortic valve segmentation using the new morphological information from step S200 together with the accurate anatomical keypoints obtained in step S301.
As described above, the processing performed in step S300 has the same content as the first process, but its parameters may be adjusted according to the result of the second process so as to spatially narrow the processing range, ensuring that step S300 yields a more accurate result than step S100. Details of the processing procedures of steps S100 and S300 are described later with reference to Figs. 4A to 4C.
Up to this point, the medical image processing apparatus according to the present embodiment has performed the combined process of keypoint detection and segmentation twice, from coarse to fine, through steps S100 to S300, thereby obtaining a more accurate segmented image of the aortic valve and a more accurate morphological measurement result.
Step S400, a preferred embodiment following step S300, is also indicated in Fig. 3 by an ellipsis. That is, after step S300, the control unit 10 may re-execute the information enhancement process (the second process) through the second processing function 200 in step S400, and re-execute the combination of keypoint detection and segmentation (the first process) on the result of the re-executed information enhancement. Such re-execution may be repeated several times; by continuing with ever finer processing, a more accurate image representing the target region is obtained.
Next, in step S500, the control unit 10 of the medical image processing apparatus 1 performs correction processing on the obtained heart valve image using the information obtained in the previous steps. This correction, which makes the heart valve image more suitable for subsequent processing such as measurement, is not essential to the invention. Its details are described later with reference to Figs. 7A to 7C.
Next, in step S600, the control unit 10 measures, using the corrected heart valve image, key items required for TAVI surgical planning, for example the length of the leaflet free edge, the length of the junction between leaflet and vessel wall, and the geometric height, thereby completing the medical measurement for, for example, TAVI.
Details of the processing procedures of step S100, step S200, and step S300 are further described below with reference to Figs. 4A to 4C.
Fig. 4A is a diagram for explaining an example of the processing procedure executed in step S100 by the first processing function in the medical image processing apparatus according to embodiment 1.
First, in step S101, the first processing function 100 executes the coarse keypoint detection process. This process may be implemented with a conventional deep learning neural network, such as a 3D spatial configuration network (SCN). The input of the network is, for example, an original volume region containing an image of the whole heart ((a) of Fig. 4A), and the output is, for example, clinically significant keypoints of the cardiac anatomy ((b) of Fig. 4A). When the target region is the aortic valve, these may be at least some of the following eight clinical keypoints: the left coronary cusp nadir (LCC nadir point), the right coronary cusp nadir (RCC nadir point), the non-coronary cusp nadir (NCC nadir point), the non-coronary/left coronary commissure (N-L commissure point), the right coronary/left coronary commissure (R-L commissure point), the non-coronary/right coronary commissure (N-R commissure point), the right coronary ostium lower edge point, and the left coronary ostium lower edge point. The coarse keypoint detection in this step roughly locates the region containing the heart valve structure, so the parameters of the deep learning network used here do not require high detection accuracy but rather fast runtime, allowing the volume of interest (VOI) to be narrowed quickly in preparation for the next operation. Of course, this keypoint detection process is only an example; any method may be used as long as the predetermined keypoints can be detected from the image data to be processed. For example, the user may manually specify the keypoint positions using a GUI.
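As a minimal, non-limiting sketch of how coordinates are typically read out of a heatmap-regression network such as an SCN (the network itself is omitted, and the array shapes are illustrative assumptions):

    import numpy as np

    def keypoints_from_heatmaps(heatmaps):
        # heatmaps: (K, D, H, W) array, one channel per landmark, as produced
        # by a heatmap-regression network; each landmark is read at the argmax.
        coords = []
        for k in range(heatmaps.shape[0]):
            idx = np.unravel_index(np.argmax(heatmaps[k]), heatmaps[k].shape)
            coords.append(idx)                  # (z, y, x) of landmark k
        return np.asarray(coords)               # (K, 3)

    # e.g. the eight aortic-valve landmarks on a 64^3 volume (shapes illustrative)
    demo = np.random.rand(8, 64, 64, 64)
    print(keypoints_from_heatmaps(demo).shape)  # (8, 3)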
Next, in step S102, the first processing function 100 performs the aortic root segmentation process under the guidance of the keypoints. This process may be implemented with a conventional deep learning neural network (e.g., a 3D U-Net). With such a network, the input data can be divided into the processing target region and everything else; in the present embodiment, each pixel of the input image data is classified as either an aortic root pixel or another pixel. The coarse keypoint detection in step S101 locates a more accurate and smaller VOI region, and segmenting on that basis improves segmentation performance. The input of the network in step S102 is the medical image data obtained based on the coarse keypoint detection, and the output is the aortic root mask image inside the VOI region ((d) of Fig. 4A). Of course, this segmentation is only an example; any method may be used as long as the target region can be segmented from the image data to be processed. For example, the user may manually designate all pixels corresponding to the aortic root using a GUI, or a known image segmentation method such as binarization or graph cut may be used.
Here, the reinforcement process shown in (c) of Fig. 4A may additionally be inserted between the coarse keypoint detection of step S101, shown in (b) of Fig. 4A, and the aortic root segmentation of step S102, shown in (d) of Fig. 4A. That is, in embodiment 1, the first processing function 100 may also apply a reinforcement process to the medical image data based on the result of the keypoint detection, and perform the segmentation after that reinforcement.
(c) of Fig. 4A shows, as an example of the reinforcement process, the medical image data being cut based on the keypoints detected by the keypoint detection process. In the example shown in Fig. 4A, the input of the network in step S102 is the medical image data cut with a bounding box defined from the rough keypoints, and the output is the aortic root mask image inside the cut VOI region.
Fig. 4A illustrates the reinforcement process between keypoint detection and segmentation within the first process using cutting as an example, but the reinforcement process is not limited to cutting; its details are described later with reference to Figs. 5A and 5B.
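The bounding-box cutting described above can be sketched as follows; the margin of 10 voxels is an illustrative assumption, not a value taken from the embodiment:

    import numpy as np

    def crop_by_keypoints(volume, keypoints, margin=10):
        # keypoints: (K, 3) voxel coordinates from the coarse detection.
        # Cut the VOI as the keypoint bounding box expanded by `margin` voxels.
        lo = np.maximum(keypoints.min(axis=0) - margin, 0)
        hi = np.minimum(keypoints.max(axis=0) + margin + 1, volume.shape)
        voi = volume[tuple(slice(int(a), int(b)) for a, b in zip(lo, hi))]
        return voi, lo   # keep the offset so results can be mapped back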
Fig. 4B is a diagram for explaining an example of the procedure of the information enhancement processing executed in step S200 by the second processing function 200 in the medical image processing apparatus according to embodiment 1.
In step S200, the second processing function 200 performs the information enhancement process using the coarse keypoint detection result or the aortic root segmentation result from step S100. For example, a region restriction based on an ROI (region of interest), hereinafter simply "ROI restriction", can be applied to focus the result of step S100 onto a more accurate VOI region. (a) of Fig. 4B shows an example in which ROI restriction is performed using the coarse keypoint detection result: a bounding box is computed from the rough anatomical keypoints, and the result of step S100 is cut according to that bounding box. (b) of Fig. 4B shows an example in which ROI restriction is performed using the aortic root segmentation result: a bounding box is computed from the root mask, and the image is cut according to it.
Since the boundary of the aortic root mask image lies close to some anatomical keypoints, a bounding box computed from the root mask should preferably be extended outward to a larger area to avoid losing keypoint information. On the other hand, because the aortic root mask covers a smaller range, cutting with the aortic root mask region is better than cutting with the bounding box computed from the coarse anatomical keypoints.
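A sketch of this mask-based variant, with the outward extension mentioned above (the 15-voxel extension is an illustrative assumption):

    import numpy as np

    def bbox_from_mask(mask, extend=15):
        # Bounding box of the aortic-root mask, extended outward so that
        # commissure points lying near the mask border are not cut away.
        nonzero = np.argwhere(mask > 0)
        lo = np.maximum(nonzero.min(axis=0) - extend, 0)
        hi = np.minimum(nonzero.max(axis=0) + extend + 1, mask.shape)
        return lo, hi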
Fig. 4B shows an example in which the information enhancement process performed by the second processing function is ROI restriction, but information enhancement is not limited to ROI restriction; its details are described later with reference to Figs. 6A and 6B.
Fig. 4C is a diagram for explaining an example of the adjusted first process executed in step S300 by the first processing function 100 in the medical image processing apparatus 1 according to embodiment 1.
First, the accurate keypoint detection process is performed in step S301. This process may use the same neural network as the coarse keypoint detection in step S101, but the input VOI is updated, and the model parameters may be updated through the parameter setting function 300 so as to spatially narrow the processing range and meet the requirements of accurate keypoint detection. The network input is the sub-region image cut using the accurate VOI information from step S200; as can be seen from Fig. 4B, the input shown here is the region-restricted result based on the coarse keypoint detection, though the region-restricted result based on the aortic root segmentation could of course be input instead. The network output is the same as in step S101: the clinically significant keypoints of the cardiac anatomy ((b) of Fig. 4C). With the more precise volume region information and the updated network model parameters as input, the accurate keypoint detection in step S301 can precisely locate the region containing the heart valve structure.
Next, in step S302, the first processing function 100 performs the heart valve segmentation process under the guidance of the accurate keypoints. Again, this process may use the same neural network as the aortic root segmentation in step S102, but the input VOI is updated, and the model parameters may be updated through the parameter setting function 300 to spatially narrow the processing range for a more accurate segmentation of the heart valve. The output of this process is a mask image of the heart valve inside the VOI region ((d) of Fig. 4C). Whereas step S102 classified each pixel of the input image data into two classes, aortic root and other, step S302 classifies each pixel into four classes: right coronary cusp, left coronary cusp, non-coronary cusp, and other.
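A sketch of splitting such a four-class label volume into per-leaflet masks; the label values 1, 2, 3 are an assumed convention (0 = background), not values taken from the embodiment:

    import numpy as np

    def leaflet_masks(label_volume):
        # One binary mask per cusp from the step S302 label volume.
        return {name: (label_volume == value)
                for name, value in (("RCC", 1), ("LCC", 2), ("NCC", 3))}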
Just as a reinforcement process may be inserted between the coarse keypoint detection and the aortic root segmentation in step S100, a reinforcement process may also be inserted between the accurate keypoint detection and the heart valve segmentation in step S300 ((c) of Fig. 4C). The reinforcement process of step S100 may be reused here, so its details are not repeated.
Modifications of the reinforcement process are described below with reference to Figs. 5A and 5B.
Figs. 5A and 5B are diagrams for explaining modifications of the reinforcement process executed by the first processing function 100 in the medical image processing apparatus according to embodiment 1.
Specifically, as the reinforcement process, the keypoints detected in step S101 or S301 may be used not only to provide a bounding box for cutting the image fed to the neural network used for segmentation (hereinafter, the segmentation neural network), but also to enhance the performance of the segmentation network in other ways. For example, as shown in Fig. 5A, a keypoint heat map may first be generated from the detected keypoints and then fed into the segmentation network as an additional input channel alongside the image to be segmented. The modification shown in Fig. 5A is mainly used during deep-learning inference of the segmentation network. Because an input channel is added, anatomical keypoint information is injected into the segmentation network, so the segmentation result incorporates more anatomical keypoint information and segmentation performance improves.
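A sketch of this extra-channel construction, assuming isotropic Gaussian heat maps (the sigma value and the two-channel layout are illustrative assumptions):

    import numpy as np
    import torch

    def gaussian_heatmap(shape, center, sigma=3.0):
        # Isotropic Gaussian blob around one keypoint.
        grids = np.indices(shape)
        d2 = sum((g - c) ** 2 for g, c in zip(grids, center))
        return np.exp(-d2 / (2.0 * sigma ** 2))

    def with_keypoint_channel(image, keypoints):
        # Fuse all keypoint heat maps and stack them with the image so the
        # segmentation network receives a 2-channel input.
        fused = np.zeros(image.shape, dtype=np.float32)
        for kp in keypoints:
            fused = np.maximum(fused, gaussian_heatmap(image.shape, kp))
        x = np.stack([image.astype(np.float32), fused])  # (2, D, H, W)
        return torch.from_numpy(x).unsqueeze(0)          # (1, 2, D, H, W)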
Further, as shown in Fig. 5B, after the heat map of the detected keypoints is generated, it may be used in the internal loss function of the segmentation network; for example, the neighborhood of each keypoint may be given a larger weight than other regions to reduce under-segmentation, thereby improving segmentation performance. The modification shown in Fig. 5B is mainly used during deep-learning training of the segmentation network.
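A sketch of such a keypoint-weighted training loss; the weighting scheme (a 4x extra weight near keypoints) is an illustrative assumption:

    import torch
    import torch.nn.functional as F

    def keypoint_weighted_ce(logits, target, heatmap, base=1.0, extra=4.0):
        # logits: (B, C, D, H, W); target: (B, D, H, W) class indices;
        # heatmap: (B, D, H, W) in [0, 1], large near detected keypoints.
        loss = F.cross_entropy(logits, target, reduction="none")  # per voxel
        weights = base + extra * heatmap   # heavier penalty near keypoints
        return (weights * loss).mean()     # discourages under-segmentation there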
The reinforcement process described above is not essential. The reinforcement process according to the present invention may be any combination of the cutting process shown in Fig. 4A and the modifications shown in Figs. 5A and 5B. That is, in the present embodiment, the reinforcement process includes at least one of the following: cutting the medical image data based on the keypoints detected by the keypoint detection process; inputting the heat map of the detected keypoints into the segmentation neural network as an additional channel; and using the heat map of the detected keypoints in the loss function of the segmentation network.
Modifications of the information enhancement process are described below with reference to Figs. 6A and 6B.
Figs. 6A and 6B are diagrams for explaining modifications of the information enhancement process executed by the second processing function 200 in the medical image processing apparatus according to embodiment 1.
In the information enhancement process of the present embodiment, a geometric transformation or a geometric constraint can be applied to make the processing target easier to recognize. Fig. 6A shows an example of information enhancement by geometric transformation of the aortic valve segmentation result. Considering the morphology peculiar to the heart valve, the aortic valve shows its three leaflets when viewed in a cross-section of the aorta. Therefore, when the target is hard to recognize in conventional cross-sectional images such as axial, sagittal, or coronal slices, the obtained keypoints can be used to geometrically transform those slices into an aortic cross-sectional image in which the target is easier to recognize, thereby achieving an information enhancement effect.
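A sketch of such a keypoint-driven reslicing, assuming the three commissure points define the valve plane and trilinear sampling at a 1-voxel step (the plane size and sampling step are illustrative assumptions):

    import numpy as np
    from scipy.ndimage import map_coordinates

    def valve_plane_slice(volume, p1, p2, p3, size=96):
        # Resample the oblique plane through three commissure points, i.e. the
        # aortic cross-section in which the three leaflets are easiest to see.
        p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
        u = p2 - p1
        normal = np.cross(u, p3 - p1)          # roughly the aortic axis
        u /= np.linalg.norm(u)
        v = np.cross(normal, u)
        v /= np.linalg.norm(v)
        center = (p1 + p2 + p3) / 3.0
        r = np.arange(size) - size / 2.0
        gu, gv = np.meshgrid(r, r, indexing="ij")
        pts = (center[:, None, None]
               + gu * u[:, None, None] + gv * v[:, None, None])
        return map_coordinates(volume, pts, order=1)   # (size, size) slice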
Fig. 6B illustrates another example: information enhancement of the aortic valve segmentation result by geometric constraint. Considering the distinct morphologies of, for example, a normal valve, a calcified valve, and a valve observed in only a single systolic/diastolic phase, template images reflecting the respective shape constraints of the normal valve, the calcified valve, and the single-phase valve (shown on the left, middle, and right of Fig. 6B) are acquired in advance from valve labels or the like. The shape constraint template corresponding to whichever of these the processing target is can then be input into the network model to enhance training performance and avoid unnecessary over-segmentation. Applying such a shape restriction to the result of, for example, step S100 yields an information enhancement effect, in preparation for the subsequent processing of, for example, step S300.
In addition, the information enhancement process of step S200 may employ conventional gray-level information constraints, morphological constraints, and the like, as long as the features already acquired in step S100 are used for information extraction, enhancement, and fusion, so that the morphological information produced in step S200 guides the subsequent processing of step S300 and improves its accuracy. In summary, the information enhancement process of step S200 may be realized by ROI restriction, geometric transformation, geometric constraint, gray-level information constraint, morphological constraint, or the like.
In the present embodiment, the ROI restriction of Fig. 4B and the information enhancement modifications of Figs. 6A and 6B may be selected and combined freely. In the present invention, the second processing function 200 performs the information enhancement process by executing at least one of ROI restriction, geometric transformation, geometric constraint, gray-level information constraint, and morphological constraint.
Details of the correction process are described below with reference to Figs. 7A to 7C.
Fig. 7A is a flowchart for explaining an example of correction processing performed by the medical image processing apparatus 1 according to embodiment 1. Fig. 7B and 7C are schematic diagrams for explaining a procedure of correction processing performed by the medical image processing apparatus 1 according to embodiment 1.
In this embodiment, after the keypoint detection and heart valve segmentation of step S300 are completed, accurate keypoints, a mask image of the aortic root, and a mask image of the aortic valve with, for example, three leaflets have been obtained. The present invention can then use this information to perform the correction process of step S500, correcting, for example, the morphology of the heart valve.
As shown in Fig. 7A, the correction process of the present invention may include, for example, three steps. First, in step S501, the control unit 10 extracts the largest connected component from the segmentation result; the subsequent processing operates on this largest connected component. Next, because the segmentation result image may contain holes, particularly near the heart valve, the holes on the heart valve within the largest connected component are filled in step S502. In addition, since steps S100 and S300 perform two separate segmentations, the segmentation results for the aortic root and the heart valve may not agree. For example, the edge (insertion area) of the heart valve may be under-segmented, so that inside the sinus of Valsalva the valve edge is not connected to the vessel wall; such detachment between valve edge and vessel wall is clearly non-physiological. Therefore, in step S503, the valve edge is extended to the vessel wall to unify the segmented images of the aortic root and the valve.
Figs. 7B and 7C are schematic views explaining the procedure of the correction process performed by the medical image processing apparatus according to embodiment 1, showing the hole filling of step S502 and the edge extension of step S503 in axial, sagittal, and coronal cross-sections, respectively.
The extraction of the largest connected component in the correction process can be realized with conventional connected-component search methods, and the hole filling and extension of the segmentation result image can likewise be realized with conventional image processing methods; their details are omitted here.
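A sketch of steps S501 to S503 using standard scipy.ndimage operations; the 2-voxel growth toward the vessel wall is an illustrative assumption:

    import numpy as np
    from scipy import ndimage

    def correct_valve_mask(valve_mask, root_mask, n_grow=2):
        labels, n = ndimage.label(valve_mask)          # connected components
        if n == 0:
            return valve_mask.astype(bool)
        sizes = ndimage.sum(valve_mask, labels, range(1, n + 1))
        largest = labels == (np.argmax(sizes) + 1)     # S501: largest component
        filled = ndimage.binary_fill_holes(largest)    # S502: fill valve holes
        grown = ndimage.binary_dilation(filled, iterations=n_grow)
        # S503: extend the valve edge, but only inside the aortic root,
        # so the leaflets re-attach to the vessel wall.
        return filled | (grown & root_mask.astype(bool))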
As described above, according to the medical image processing apparatus 1 of embodiment 1, by repeatedly performing a combined process composed of a plurality of processes on the medical image data, each earlier feature extraction guides and assists the next one in at least some respects within each processing unit, yielding better performance. By applying feature extraction repeatedly, accurate morphological information is generated progressively from coarse to fine; this performs particularly well in complex situations such as heart valves, including calcified or incomplete valve structures.
In addition, performing the information enhancement process between two processing units to refine the morphological information ensures that accuracy improves progressively from coarse to fine.
Although embodiment 1 has been described above, the technology disclosed in the present invention may be implemented in various forms other than embodiment 1.
(embodiment 2)
In embodiment 1 described above, the first process executed by the first processing function 100 was described taking as an example two kinds of processing: keypoint detection and segmentation. However, the embodiments are not limited to this; the first process may, for example, include three or more kinds of processing. Embodiment 2, in which the first process includes three kinds of processing, is described below with reference to Fig. 8.
Fig. 8 is a flowchart showing an example of the heart valve detection process performed by the medical image processing apparatus 1 according to embodiment 2. The description of embodiment 2 focuses mainly on the differences from embodiment 1; components identical to those of embodiment 1 are given the same reference numerals, and their description is omitted.
In the medical image processing apparatus of embodiment 2, the first processing function 100 executes a calcified valve classification process (S103, S303) in addition to the keypoint detection process (S101, S301) and the segmentation process (S102, S302). That is, in this embodiment, the first process executed by the first processing function 100 is a combined process composed of three kinds of processing: keypoint detection, calcified valve classification, and segmentation.
Compared with embodiment 1, the workflow of embodiment 2 adds a calcified valve classification process: before the segmentation, the control unit 10, through the first processing function 100, classifies the treatment target as a calcified valve or a normal valve based on the result of the keypoint detection. This added process can support a more accurate segmentation. For example, a user may train two separate segmentation models, one for calcified valves and one for normal valves, and feed the data into the appropriate model according to the classification result, as sketched below. Thus, in addition to the effects of embodiment 1, embodiment 2 prevents the data imbalance problem that can occur during training and obtains a better segmentation result.
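A sketch of this routing; all five arguments are hypothetical placeholders for trained networks and are not names from the embodiment:

    def segment_valve(voi, keypoints, classifier, calcified_model, normal_model):
        # Steps S103/S303: decide calcified vs. normal, then feed the data
        # into the segmentation model trained for that class.
        is_calcified = classifier(voi, keypoints)
        model = calcified_model if is_calcified else normal_model
        return model(voi, keypoints)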
The repeated steps corresponding to step S400 of embodiment 1 are not shown in Fig. 8, but the workflow of embodiment 2 may clearly be repeated one or more times after step S300, as in embodiment 1, to obtain better results.
(embodiment 3)
In embodiment 1, a combination of two kinds of processing, keypoint detection and segmentation, is executed by the first processing function 100. However, the embodiments are not limited to this; the first processing function 100 may, for example, execute a combination of keypoint detection with processing other than segmentation, or a combination of other processes entirely. Embodiment 3, in which the first processing function 100 executes a combination other than keypoint detection and segmentation, is described below with reference to Fig. 9.
Fig. 9 is a flowchart showing an example of the heart valve detection process performed by the medical image processing apparatus 1 according to embodiment 3. The description of embodiment 3 focuses mainly on the differences from embodiments 1 and 2; components identical to those of embodiments 1 and 2 are given the same reference numerals, and their description is omitted.
In the medical image processing apparatus 1 of embodiment 3, the first processing function 100 executes a first process (S100, S300) that includes a segmentation process (S104, S304) and a blood flow estimation process (S105, S305). For example, when performing image processing on the mitral valve, the atrioventricular valve between the left ventricle and the left atrium, the medical image processing apparatus 1 of embodiment 3 first receives dynamic image data covering one cardiac cycle in step S10. The control unit 10 then executes the first processing function 100 as the processing of step S100: in step S104 it segments the left ventricle from the input dynamic image data, producing a left-ventricle mask image for one cardiac cycle, and in step S105 it performs the blood flow estimation process. Because the mitral valve lies between the left ventricle and the left atrium, the blood flow velocity near the mitral valve is characterized by large variation, so the region containing the mitral valve can be located through blood flow velocity estimation. The blood flow estimation of step S105 may be a rough estimate, to keep the processing fast; for example, the blood flow velocity of the whole left ventricle may be estimated only during ventricular diastole. During diastole the mitral valve is open and blood flows from the left atrium to the left ventricle, so the blood velocity around the mitral valve varies greatly at that time, and the region containing the mitral valve can be located quickly through blood flow velocity estimation.
Next, according to the medical image processing apparatus 1 of embodiment 3, the control unit 10 executes positioning processing as the 2nd processing by the 2nd processing function 200 in step S202. Specifically, based on the left ventricle segmentation result and the result of the blood flow velocity estimation in step S105, the control unit 10 locates, by the 2nd processing function 200, the region of the left ventricle in which the blood flow velocity changes most, which can be regarded as containing the mitral valve structure.
Then, in the readjusted 1st process (step S300), the combined segmentation and blood flow estimation is performed again, but more accurately, with the processing range spatially narrowed compared with step S100. Specifically, in step S304 the control unit 10 performs, by the 1st processing function 100, the mitral valve segmentation process on the image data whose range was narrowed by the positioning in step S202; in this case, mitral valve segmentation may, for example, be performed for all cardiac phases. Subsequently, unlike the rough estimation over the entire left ventricle in step S105, in step S305 the control unit 10 precisely estimates, by the 1st processing function 100, only the blood flow velocity near the mitral valve based on the mitral valve segmentation result, and locates the region containing the mitral valve more precisely from that estimate.
The method of blood flow velocity estimation is not limited and may be implemented by any known method. For example, when the input medical image data is ultrasound data, fluid information of the blood flow can be acquired from an ultrasound image by the Doppler method. When the input medical image data is an MRI image, the fluid information may be obtained by performing known four-dimensional flow analysis (4D flow analysis) on the MRI image. When the input medical image data is a multi-phase contrast CT (Computed Tomography) image, fluid information can be obtained by analyzing the changes of the CT values at the respective positions across the phases. Fluid information may also be obtained from a circuit model that simulates the circulatory dynamics of a living body (for example, a human body), constructed from a known Windkessel model or a pulse wave propagation model; in that case, in step S105 the fluid information is acquired from the morphology information of the left ventricle or left atrium, and in step S305 it is acquired based additionally on the morphology of the mitral valve.
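As an illustration of the circuit-model option, the sketch below integrates a minimal two-element Windkessel model, C * dP/dt = Q(t) - P/R. The parameter values and the inflow waveform are illustrative assumptions only; the embodiment merely states that such circuit models may supply fluid information.

```python
# Illustrative two-element Windkessel model: C * dP/dt = Q(t) - P/R.
# Parameter values and the inflow waveform are assumptions for illustration.
import numpy as np

def windkessel_2e(q_in: np.ndarray, dt: float = 1e-3,
                  R: float = 1.0, C: float = 1.3, p0: float = 80.0) -> np.ndarray:
    """Forward-Euler integration of arterial pressure for inflow q_in."""
    p = np.empty_like(q_in, dtype=float)
    p[0] = p0
    for k in range(len(q_in) - 1):
        p[k + 1] = p[k] + dt * (q_in[k] - p[k] / R) / C
    return p

t = np.arange(0.0, 0.8, 1e-3)                                # one 0.8 s cycle
q = np.where(t < 0.3, 300.0 * np.sin(np.pi * t / 0.3), 0.0)  # systolic inflow
pressure = windkessel_2e(q)
```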
As in embodiment 1, embodiment 3 can obtain a more accurate morphological measurement result by applying a coarse-to-fine combination of the segmentation process and the blood flow estimation process to the medical image data.
The repeatedly executed steps corresponding to step S400 of embodiment 1, as well as the correction step corresponding to S500 and the measurement step corresponding to S600, are not shown in fig. 9, but the workflow of embodiment 3 may include these steps as in embodiment 1 to obtain better performance and measurement results.
(embodiment 4)
In the above embodiments 1 to 3, the heart, the aortic valve, and the mitral valve were described as examples. However, the embodiment is not limited to this, and may be applied to other heart valves such as the tricuspid valve, or to organs other than the heart, for example the lungs. Embodiment 4, in which the processing target is the lung, is described below with reference to fig. 10.
Fig. 10 is a flowchart showing an example of lung detection processing performed by the medical image processing apparatus 1 according to embodiment 4, and in the description of embodiment 4, mainly the differences from embodiments 1 to 3 described above will be described. In the description of embodiment 4, the same components as those of embodiments 1 to 3 are denoted by the same reference numerals, and description thereof is omitted.
According to the medical image processing apparatus 1 of embodiment 4, the control unit 10 again executes, by the 1st processing function 100, the 1st process (S100, S300) including the keypoint detection process (S106, S306) and the segmentation process (S107, S307). In this embodiment, after the two executions of the 1st process (S100, S300), the control unit 10 further repeats the 2nd process and the 1st process in sequence (S400) according to the anatomical morphological features of the lung, so that the 1st process runs in three successively refined iterations and a more accurate morphological measurement result is obtained.
First, in step S30, an original volume region of medical image data of a lung to be processed is input to the medical image processing apparatus 1.
Next, in step S100, the control unit 10 performs, as the 1st process by the 1st processing function 100, the keypoint detection process for the main bronchi on the medical image data (step S106); thereafter, based on the main-bronchus keypoints detected in step S106, the control unit 10 performs a rough lung segmentation process, for example segmentation into the left lung and the right lung, by the 1st processing function 100 (step S107). The main bronchi branch from the first-level airway and lead into the left and right lungs respectively, so the left and right lungs can be rapidly located and detected as the 1st process by the keypoint detection and segmentation of the main bronchi.
Next, in step S203, the control unit 10 performs, as the 2nd processing by the 2nd processing function 200, positioning and information enhancement of the lung parenchyma region, and selects a more precise VOI for the subsequent accurate processing (a cropping sketch is given below). Step S203 may employ the same processing as step S200, and its details are not repeated here.
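A minimal sketch of such VOI selection is given below: the volume is cropped to the bounding box of a coarse mask plus a safety margin. The margin value and the array layout are assumptions; the embodiment does not prescribe a particular cropping rule.

```python
# Sketch of VOI selection (steps S200/S203): crop the volume to the bounding
# box of a coarse mask plus a margin. The margin of 8 voxels is an assumption.
import numpy as np

def crop_voi(volume: np.ndarray, mask: np.ndarray, margin: int = 8):
    """Return the sub-volume around the non-zero voxels of `mask` and the
    slice objects needed to map results back into the full volume."""
    idx = np.argwhere(mask > 0)
    lo = np.maximum(idx.min(axis=0) - margin, 0)
    hi = np.minimum(idx.max(axis=0) + margin + 1, volume.shape)
    slices = tuple(slice(int(l), int(h)) for l, h in zip(lo, hi))
    return volume[slices], slices
```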
Next, in step S300, as the readjusted 1st process, the control unit 10 performs, by the 1st processing function 100, accurate keypoint detection and segmentation with a spatially reduced processing range based on the processing results of steps S100 and S203, for example by combining the main-bronchus and lung-region results, and detects bronchiole keypoints from the second-level and higher airways (step S306). Thereafter, the control unit 10 performs accurate segmentation based on the bronchiole keypoints detected in step S306 by the 1st processing function 100, and obtains mask images of the five lung lobes (step S307).
Subsequently, the control unit 10 sequentially repeats the 2 nd process and the 1 st process (S400 in fig. 10).
Specifically, as the 2nd processing to be executed again, in step S204 the control unit 10 performs, by the 2nd processing function 200, positioning and information enhancement processing on the lung lobe regions according to the anatomical morphology of the lung. The lobe regions may be located and corrected based on clinical features; for example, they may be corrected based on the feature that a bronchiole belonging to one lobe cannot be segmented into another lobe (see the sketch below).
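The sketch below shows one assumed concrete realization of this constraint: each connected bronchiole component is assigned to the single lobe it overlaps most, and conflicting lobe labels along that bronchiole are overwritten. The majority-overlap rule is an illustrative choice, not a requirement of the embodiment.

```python
# Assumed realization of the lobe-consistency constraint (step S204): each
# connected bronchiole component is assigned to the single lobe it overlaps
# most; lobe labels along that bronchiole are overwritten accordingly.
import numpy as np
from scipy import ndimage

def enforce_lobe_consistency(lobes: np.ndarray, airways: np.ndarray) -> np.ndarray:
    """lobes: label map (0 = background, 1..5 = lobes); airways: boolean
    bronchiole mask. Returns a corrected lobe label map."""
    corrected = lobes.copy()
    components, n = ndimage.label(airways)
    for comp in range(1, n + 1):
        voxels = components == comp
        labels, counts = np.unique(lobes[voxels], return_counts=True)
        keep = labels > 0                      # ignore background overlap
        if not keep.any():
            continue
        home = labels[keep][np.argmax(counts[keep])]
        corrected[voxels] = home               # one bronchiole, one lobe
    return corrected
```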
Subsequently, as the further readjusted 1st process, in step S110 the control unit 10 performs, by the 1st processing function 100, still more accurate keypoint detection and segmentation with a further spatially reduced processing range, based on the processing results of steps S100, S203, S300, and S204. For example, since each lobe is an independent system and the pulmonary vessels of one lobe do not extend into the other lobes, the control unit 10 first detects, in step S111, the pulmonary vessel keypoints within each lobe obtained in step S204 by the 1st processing function 100, and then, in step S112, performs segmentation of the pulmonary segments based on the detected pulmonary vessel keypoints.
Next, in step S500, the control unit 10 performs correction processing on the obtained lung-segment image using the information obtained in the preceding steps. As in embodiment 1, the correction process serves, for example, to make the obtained image more suitable for the subsequent measurement process (not shown in fig. 10), and is not essential to the present invention.
With the lung as the subject, embodiment 4 can likewise obtain a more accurate morphological measurement result by applying a coarse-to-fine combination of multiple processes to the medical image data.
(embodiment 5)
For example, in the above embodiments 1 to 4, the control unit 10 executes the 1st process, which includes at least two different kinds of processing, on the medical image data, and then re-executes the 1st process with a spatially narrowed processing range based on its result; however, the embodiments of the technology disclosed in the present application are not limited to this.
For example, the control unit 10 may execute the 1st type of processing a plurality of times while adjusting it to spatially narrow the processing range, then execute a 2nd type of processing different from the 1st type based on the processing result, and then execute the 1st type of processing, adjusted to spatially narrow the processing range further, based on the processing result of the 2nd type.
Hereinafter, an example of such a case will be described as embodiment 5. In the description of embodiment 5, differences from the above-described embodiments will be mainly described, and detailed description thereof will be omitted.
In the present embodiment, the control unit 10 has the 1 st processing function 100, the 2 nd processing function 200, and the parameter setting function 300, as in the above-described embodiments.
Specifically, the control unit 10 executes the 1st type of processing on the medical image data by the 1st processing function 100, and executes the 1st type of processing again on a spatially reduced range based on that processing result. Further, the control unit 10 executes, by the 1st processing function 100, the 2nd type of processing, different from the 1st type, on the processing result of the spatially reduced 1st type of processing, and extracts the target region by executing the 1st type of processing on a further spatially reduced range based on the processing results of the 1st and 2nd types of processing.
Fig. 11 is a diagram showing a processing flow and a processing result of an example of the processing performed by the medical image processing apparatus 1 according to embodiment 5.
Here, an example will be described in which the 1st type of processing is the segmentation process, the 2nd type of processing is the keypoint detection process, the medical image data is that of a heart, and the target region is the aortic valve.
As shown in fig. 11, first, in step S1, medical image data of a heart to be processed is input to the medical image processing apparatus 1. For example, a CT image of the heart is input to the medical image processing apparatus 1 as medical image data.
Next, in step S121, the control unit 10 executes, by the 1st processing function 100, a segmentation process of the aortic root on the medical image data of the heart input in step S1. Although not shown, in step S121 the control unit 10 may set the parameters of the segmentation process by the parameter setting function 300 before the segmentation process is executed.
At this time, the control unit 10 performs the segmentation process on the medical image data of the heart by the 1 st processing function 100, thereby roughly extracting the region of the aortic root.
For example, the control unit 10 executes the segmentation process of the aortic root by the 1st processing function 100 using a deep learning neural network. The input of the network is, for example, the medical image data of the whole heart, and the output is, for example, the coordinate group of pixels corresponding to the approximate region of the aortic root. As the network, for example, a UNet-based model under the nnUNet framework may be used.
Next, in step S122, the control unit 10 executes, by the 1st processing function 100, the adjusted segmentation process in which the processing range is spatially narrowed based on the processing result of the segmentation process executed in step S121. The adjustment of the segmentation process is performed, for example, by the parameter setting function 300 adjusting its parameters.
At this time, by performing the adjusted, spatially narrowed segmentation process with the 1st processing function 100, the control unit 10 can extract the region of the aortic root more accurately than with the segmentation process of step S121.
Specifically, the control unit 10, by the 1st processing function 100, crops the medical image data to the vicinity of the rough aortic-root region extracted in step S121, and extracts the region of the aortic root more precisely by performing the segmentation process on the cropped region.
For example, the control unit 10 executes the segmentation process of the aortic root by the 1st processing function 100 using a deep learning neural network. The input of the network is, for example, the image obtained by cropping the medical image data of the heart to the vicinity of the aortic-root region, and the output is, for example, the coordinate group of pixels corresponding to the region of the aortic root. As the network, for example, a UNet-based model under the nnUNet framework may be used.
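Steps S121 and S122 together amount to a coarse-to-fine pipeline, sketched below under the assumption of two trained networks (a whole-volume coarse model and a crop-level fine model) whose outputs are per-voxel class scores; the margin and the argmax decoding are likewise assumptions for illustration.

```python
# Coarse-to-fine sketch of steps S121-S122, assuming two trained networks
# returning per-voxel class scores of shape (1, K, D, H, W); the margin and
# argmax decoding are illustrative assumptions.
import torch

def coarse_to_fine_root(volume: torch.Tensor,
                        coarse_net: torch.nn.Module,
                        fine_net: torch.nn.Module,
                        margin: int = 8) -> torch.Tensor:
    """volume: (1, C, D, H, W). Returns a refined aortic-root label map."""
    with torch.no_grad():
        coarse = coarse_net(volume).argmax(dim=1)[0]       # (D, H, W) mask
        nz = coarse.nonzero()                              # (N, 3) voxel coords
        lo = (nz.min(dim=0).values - margin).clamp(min=0).tolist()
        hi = (nz.max(dim=0).values + margin + 1).tolist()
        crop = volume[..., lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
        return fine_net(crop).argmax(dim=1)                # refined mask
```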
Next, in step S221, the control unit 10 executes, by the 2nd processing function 200, a correction process on the processing result of the segmentation process executed in step S122.
Specifically, the control unit 10 executes, by the 2nd processing function 200, a correction process that corrects the region of the aortic root extracted in step S122 by image processing. The correction process serves to make the result of the segmentation process more suitable for the subsequent processing and is not essential in the present embodiment.
Next, in step S123, the control unit 10 executes, by the 1st processing function 100, the keypoint detection process on the processing result of the segmentation process executed in step S122.
Specifically, the control unit 10, by the 1st processing function 100, crops the medical image data to the vicinity of the aortic-root region corrected in step S221, and detects clinically important keypoints of the anatomical morphology of the aorta by performing the keypoint detection process on the cropped region.
For example, by the 1st processing function 100, the control unit 10 detects 8 feature points as keypoints: the left coronary cusp nadir (LCC nadir point), the right coronary cusp nadir (RCC nadir point), the non-coronary cusp nadir (NCC nadir point), the non-coronary-left coronary commissure (N-L commissure point), the right coronary-left coronary commissure (R-L commissure point), the non-coronary-right coronary commissure (N-R commissure point), the right coronary ostium lower edge point (right coronary ostium), and the left coronary ostium lower edge point (left coronary ostium).
Here, a nadir point is the point on a cusp nearest to the left ventricular outflow tract (Left Ventricular Outflow Tract: LVOT). A commissure point is a point at which two cusps meet. A coronary ostium lower edge point is the point located on the most left-ventricular (proximal) side of the entrance of the right coronary artery (Right Coronary Artery: RCA) or of the left coronary artery (Left Coronary Artery: LCA).
For example, the control unit 10 executes the keypoint detection process by the 1st processing function 100 using a deep learning neural network. The input of the network is, for example, the image of the aortic-root region, and the output is, for example, the coordinates of the left coronary cusp nadir, the right coronary cusp nadir, the non-coronary cusp nadir, the non-coronary-left coronary commissure, the right coronary-left coronary commissure, the non-coronary-right coronary commissure, the right coronary ostium lower edge point, and the left coronary ostium lower edge point. As the network, for example, a Spatial Configuration-Net (SCN: spatial configuration network) may be used.
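Heatmap-based landmark networks such as SCN typically output one heatmap per keypoint; the sketch below shows the assumed decoding step of taking each channel's argmax as the landmark coordinate. The channel-per-landmark convention is an assumption for illustration.

```python
# Assumed decoding of landmark heatmaps (step S123): an SCN-style network is
# taken to output one heatmap per keypoint; each channel's argmax is read out
# as that landmark's voxel coordinate.
import torch

def heatmaps_to_keypoints(heatmaps: torch.Tensor) -> torch.Tensor:
    """heatmaps: (L, D, H, W). Returns (L, 3) integer voxel coordinates."""
    L, D, H, W = heatmaps.shape
    flat = heatmaps.reshape(L, -1).argmax(dim=1)
    z = flat // (H * W)
    y = (flat % (H * W)) // W
    x = flat % W
    return torch.stack([z, y, x], dim=1)
```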
Next, in step S124, the control unit 10 performs, by the 1st processing function 100, the adjusted segmentation process that further spatially narrows the processing range based on the processing result of the keypoint detection process executed in step S123, in order to extract the regions of the aortic valve cusps. The adjustment of the segmentation process is performed, for example, by the parameter setting function 300 adjusting its parameters.
At this time, by performing the further spatially narrowed segmentation process based on the keypoints extracted in step S123 with the 1st processing function 100, the control unit 10 extracts the regions of the aortic valve cusps from the region of the aortic root extracted in step S122.
Specifically, the control unit 10, by the 1st processing function 100, crops the medical image data to the vicinity of the aortic-root region corrected in step S221 and extracts the regions of the aortic valve cusps by performing the segmentation process on the cropped region.
For example, the control unit 10 performs the segmentation process by the 1st processing function 100, thereby extracting the respective regions of the right coronary cusp (Right Coronary Cusp: RCC), the left coronary cusp (Left Coronary Cusp: LCC), and the non-coronary cusp (Non Coronary Cusp: NCC).
For example, the control unit 10 executes the segmentation process by the 1st processing function 100 using a deep learning neural network. The input of the network is, for example, the image obtained by cropping the medical image data of the heart to the vicinity of the aortic-root region, and the output is, for example, the coordinate groups of pixels corresponding to the respective regions of the right coronary cusp, the left coronary cusp, and the non-coronary cusp. As the network, for example, a UNet-based model under the nnUNet framework may be used.
Next, in step S222, the control unit 10 executes, by the 2nd processing function 200, correction processing on the processing results of the segmentation processes executed in steps S122 and S124 and on the processing result of the keypoint detection process executed in step S123.
Specifically, the control unit 10 performs, by the 2nd processing function 200, correction processing that corrects, by image processing, the region of the aortic root extracted in step S122, the regions of the aortic valve cusps extracted in step S124, and the positions of the keypoints detected in step S123. The correction processing serves to make the results of the segmentation and keypoint detection processes more suitable for subsequent processing such as the measurement process, and is not essential in the present embodiment.
Next, in step S600, the control unit 10 performs morphological measurement based on the processing result of the segmentation process and the processing result of the key point detection process.
Specifically, the control unit 10 performs the morphological measurement based on the region of the aortic root, the regions of the aortic valve cusps, and the positions of the keypoints corrected in step S222, as in the above-described embodiments.
In the above example, the segmentation process is performed on the medical image data in step S121 and the adjusted, spatially narrowed segmentation process is then performed once in step S122, but the present embodiment is not limited to this. For example, in step S122 the control unit 10 may execute the segmentation process several times in sequence, each time adjusting it so as to spatially narrow the processing range based on the immediately preceding segmentation result. A morphological measurement result with higher accuracy can thereby be obtained.
In the above example, the keypoint detection process and the segmentation process are each performed once, in steps S123 and S124 respectively, but the present embodiment is not limited to this. For example, the control unit 10 may extract the regions of the aortic valve cusps by repeating, in sequence by the 1st processing function 100, the keypoint detection process and the segmentation process on a progressively further narrowed range (see the sketch below). A morphological measurement result with higher accuracy can thereby be obtained.
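One possible form of this alternation is sketched below: keypoint detection and segmentation are run in turn, and after each pass the volume of interest is narrowed around the latest segmentation, for example with a bounding-box crop such as the crop_voi sketch given in embodiment 4. The fixed iteration count and the reuse of the same networks across passes are assumptions.

```python
# Sketch of the optional alternation of keypoint detection and segmentation
# (steps S123-S124) with a progressively narrowed VOI. The iteration count,
# the reuse of the same networks on every pass, and the crop function are
# assumptions for illustration.
import torch

def refine_cusps(volume: torch.Tensor,
                 keypoint_net: torch.nn.Module,
                 seg_net: torch.nn.Module,
                 crop_fn, iterations: int = 2):
    """Alternate 2nd-type (keypoints) and 1st-type (segmentation) processing,
    spatially narrowing the volume of interest after each pass."""
    voi, mask, heatmaps = volume, None, None
    for _ in range(iterations):
        with torch.no_grad():
            heatmaps = keypoint_net(voi)        # 2nd-type processing
            mask = seg_net(voi).argmax(dim=1)   # 1st-type, refined range
        voi, _ = crop_fn(voi, mask)             # narrow range around result
    return mask, heatmaps
```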
In the above example, the keypoint detection process of step S123 detects as keypoints the 8 feature points, namely the left coronary cusp nadir, the right coronary cusp nadir, the non-coronary cusp nadir, the non-coronary-left coronary commissure, the right coronary-left coronary commissure, the non-coronary-right coronary commissure, the right coronary ostium lower edge point, and the left coronary ostium lower edge point, but the present embodiment is not limited to this. For example, it is not necessarily required in step S123 to detect all 8 feature points as keypoints; it suffices to detect at least the feature points required for the subsequent segmentation process and measurement. That is, the control unit 10 may, by performing the keypoint detection process with the 1st processing function 100, detect at least one of the above 8 feature points as a keypoint.
In the above example, the segmentation processes of steps S121, S122, and S124 extract the regions of the aortic root, the right coronary cusp, the left coronary cusp, and the non-coronary cusp, but the present embodiment is not limited to this. For example, it is not necessarily required to extract all four regions in steps S121, S122, and S124; it suffices to extract at least the regions required for the subsequent segmentation process and measurement. That is, the control unit 10 may extract at least one region among the aortic root, the right coronary cusp, the left coronary cusp, and the non-coronary cusp by performing the segmentation process with the 1st processing function 100.
In the above example, the case where the target region is an aortic valve was described, but the present embodiment is not limited to this and can be applied in the same way when another heart valve, such as the mitral valve, the tricuspid valve, or the pulmonary valve, is the target region. Furthermore, the present embodiment can also be applied when a region other than a heart valve, such as the lung, is the target region.
In the above example, the 2nd type of processing was the keypoint detection process and the 1st type was the segmentation process, but the present embodiment is not limited to this. For example, when the target region is the mitral valve, the 2nd type of processing may, as in embodiment 3, be a blood flow estimation process, and the 1st type may be a segmentation process of the left ventricle or the mitral valve.
As described above, according to the medical image processing apparatus 1 of embodiment 5, the 1st type of processing is executed a plurality of times while being adjusted to spatially narrow the processing range, a 2nd type of processing different from the 1st type is then executed based on the processing result, and the 1st type of processing, adjusted to spatially narrow the processing range further, is executed based on the processing result of the 2nd type. Accurate morphological information can thus be generated in stages from coarse to fine, and better performance can be obtained in particular in complex situations, such as complex objects like heart valves, calcified valve structures, and incomplete morphological structures. According to embodiment 5, more accurate morphological measurement results can therefore be obtained.
The image processing, keypoint detection, segmentation, and the training and inference of the deep learning neural networks described in the above embodiments may be implemented in conventional ways known in the art; detailed descriptions are omitted here.
The above embodiments 1 to 4 have been described for the purpose of determining the spatial position of an object, but the technique disclosed in the present application is not limited to this and may also be used to determine temporal information or density information. For example, when a specific cardiac phase is to be determined, a process of determining the target phase from the motion of the heart valve and a process of determining the target phase from the blood flow state may be performed as the 1st process.
Likewise, the above embodiment 5 has been described for the purpose of determining the spatial position of an object, but the technique disclosed in the present application is not limited to this and may also be used to determine temporal information or density information. For example, when a specific cardiac phase is to be determined, a process of determining the target phase from the motion of the heart valve may be performed as the 1st type of processing, and a process of determining the target phase from the blood flow state may be performed as the 2nd type of processing.
The control unit 10 described in the above embodiments is implemented by, for example, a processor. In this case, the processing functions of the control unit 10 are stored in the memory 20 in the form of computer-executable programs. The control unit 10 realizes the processing function corresponding to each program by reading that program from the memory 20 and executing it; in other words, in the state of having read the programs, the control unit 10 has the processing functions shown in fig. 1.
The control unit 10 described in the above embodiments need not be realized by a single processor; it may be configured as a combination of a plurality of independent processors, each of which realizes a processing function by executing a program. The processing functions of the control unit 10 may be appropriately distributed over, or consolidated in, one or more processing circuits, and may be realized by hardware alone such as circuits, by software alone, or by a mixture of hardware and software. Moreover, although an example has been described in which the programs corresponding to the processing functions are stored in the single memory 20, the embodiment is not limited to this; the programs may be distributed over a plurality of memories, and the control unit 10 may read each program from the corresponding memory and execute it.
The technology disclosed in the present application can be implemented not only as the medical image processing apparatus described above, but also as a medical image processing method and a medium storing a medical image processing program.
The medical image processing apparatus according to the present application may be incorporated in a medical image diagnostic apparatus, or may execute the processing as a stand-alone apparatus. In the latter case, the medical image processing apparatus includes a processing circuit that executes the same processing as the 1st processing function, the 2nd processing function, and the parameter setting function, and a memory that stores the programs and various information corresponding to the respective functions. The processing circuit acquires three-dimensional medical image data via a network from a medical image diagnostic apparatus, such as an ultrasonic diagnostic apparatus, or from an image storage apparatus, and executes the above-described processing using the acquired data. The processing circuit is, for example, a processor that realizes the function corresponding to each program by reading the program from the memory and executing it.
The term "processor" used in the above description means, for example, a circuit such as a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), an application specific integrated circuit (Application Specific Integrated Circuit: ASIC), or a programmable logic device (for example, a simple programmable logic device (Simple Programmable Logic Device: SPLD), a complex programmable logic device (Complex Programmable Logic Device: CPLD), or a field programmable gate array (Field Programmable Gate Array: FPGA)). The processor realizes its functions by reading and executing the programs stored in the memory. Instead of storing a program in the memory, the program may be incorporated directly into the circuit of the processor; in that case, the processor realizes the function by reading and executing the program incorporated in its circuit. The processors of the present embodiment are not limited to single-circuit configurations; a plurality of independent circuits may be combined into one processor that realizes the functions.
In the description of the above embodiments, the constituent elements of the devices shown in the drawings are functionally conceptual and need not be physically configured as shown. That is, the specific form of distribution and integration of the devices is not limited to the illustrated form, and all or part of them may be functionally or physically distributed or integrated in arbitrary units according to various loads, usage conditions, and the like. Furthermore, each processing function performed by a device may be realized entirely or partly by a CPU and a program analyzed and executed by the CPU, or may be realized as wired-logic hardware.
The processing method described in the above embodiments can be implemented by executing a prepared processing program on a personal computer, a workstation, or the like. The processing program may be distributed via a network such as the internet. The processing program may also be recorded on a computer-readable non-transitory recording medium, such as a hard disk, a flexible disk (FD), a CD-ROM (Compact Disc Read Only Memory), an MO (Magneto-Optical disk), a DVD (Digital Versatile Disc), a USB (Universal Serial Bus) memory, or an SD (Secure Digital) card, and read out from the recording medium by a computer.
In the respective processes described in the above embodiments, all or part of the processes described as being performed automatically may be performed manually, or all or part of the processes described as being performed manually may be performed automatically by a known method. In addition, the information including the processing order, the control order, the specific names, the various data, and the parameters shown in the above-described text or the drawings may be arbitrarily changed unless otherwise specifically described.
In addition, various data handled in the present specification are typically digital data.
According to at least one embodiment described above, more accurate morphological measurements can be obtained.
Several embodiments of the present invention have been described, but these embodiments are presented as examples and do not limit the scope of the invention. These embodiments can be implemented in various other forms, and various omissions, substitutions, and changes can be made without departing from the spirit of the invention. These embodiments and modifications are included in the scope and gist of the invention, and are also included in the invention described in the claims and their equivalents.
The following supplementary notes are disclosed as one aspect and optional features of the present invention with respect to the above embodiments.
(additionally, 1)
A medical image processing apparatus includes a control unit that executes a 1st type of processing on medical image data; re-executes the 1st type of processing on a spatially reduced range based on the processing result of the 1st type of processing; executes a 2nd type of processing, different from the 1st type, on the processing result of the spatially reduced 1st type of processing; and extracts a target region by executing the 1st type of processing on a further spatially reduced range based on the processing result of the 1st type of processing and the processing result of the 2nd type of processing.
(additionally remembered 2)
The 1st type of processing may be a segmentation process, and the 2nd type of processing may be a keypoint detection process.
(additionally, the recording 3)
The control unit may execute the key point detection process and the segmentation process by using a deep learning neural network.
(additionally remembered 4)
The control unit may execute the correction process on the processing result of the type 1 process in the spatially reduced range before executing the type 2 process.
(additionally noted 5)
The medical image data may be medical image data of a heart, and the target region may be a heart valve.
(additionally described 6)
The target region may be an aortic valve; the control unit may extract at least one region among the aortic root, the right coronary cusp, the left coronary cusp, and the non-coronary cusp by performing the segmentation process, and may detect, by performing the keypoint detection process, at least one of a left coronary cusp nadir, a right coronary cusp nadir, a non-coronary cusp nadir, a non-coronary-left coronary commissure, a right coronary-left coronary commissure, a non-coronary-right coronary commissure, a right coronary ostium lower edge point, and a left coronary ostium lower edge point as a keypoint.
(additionally noted 7)
The control unit may extract the target region by sequentially and repeatedly executing the type 2 processing and the type 1 processing for the spatially further reduced range.
(additionally noted 8)
A medical image processing method comprising: a step in which the control unit executes a type 1 process on the medical image data; a step of re-executing the category 1 process with respect to a spatially reduced range based on a processing result of the category 1 process; a step of executing a 2 nd kind of processing different from the 1 st kind with respect to a processing result of the 1 st kind of processing of the spatially reduced range; and a step of extracting a target region by executing the type 1 process again for a further spatially narrowed region based on the processing result of the type 1 process and the processing result of the type 2 process in the spatially narrowed region.
(additionally, the mark 9)
A computer-readable storage medium storing a program that causes a computer to execute: a step of executing a type 1 process on the medical image data; a step of re-executing the type 1 processing in a spatially reduced range based on the processing result of the type 1 processing; a step of executing a 2 nd type of processing different from the 1 st type with respect to a processing result of the 1 st type of processing in the spatially reduced range; and a step of extracting a target region by executing the type 1 process in a further spatially reduced range based on the processing result of the type 1 process and the processing result of the type 2 process in the spatially reduced range.
(additionally noted 10)
A medical image processing apparatus includes a control unit that executes a 1st type of processing on medical image data; re-executes the 1st type of processing on a range narrowed based on the processing result of the 1st type of processing; executes a 2nd type of processing, different from the 1st type, on the processing result of the narrowed 1st type of processing; and extracts a target region by executing the 1st type of processing on a further narrowed range based on the processing result of the 1st type of processing and the processing result of the 2nd type of processing.

Claims (14)

1. A medical image processing apparatus comprising a control unit,
wherein the control unit:
executes a 1st process including at least two different kinds of processes on medical image data; and
re-executes the 1st process on a spatially reduced range based on a processing result of the 1st process.
2. The medical image processing apparatus according to claim 1,
wherein the control unit:
after executing the 1st process on the medical image data, executes a 2nd process on a processing result of the 1st process; and
executes the 1st process for the spatially reduced range on a processing result of the 2nd process, and obtains an image representing a target region in the medical image data based on the processing result of that 1st process.
3. The medical image processing apparatus according to claim 1 or 2,
wherein the processing order of the different kinds of processes in the 1st process for the spatially reduced range is the same as the processing order of the different kinds of processes in the 1st process for the medical image data.
4. The medical image processing apparatus according to claim 2,
wherein the control unit:
as the 1st process, performs a keypoint detection process on the medical image data and performs a segmentation process of the target region based on a processing result of the keypoint detection process; and
as the 2nd process, performs an information enhancement process on the processing result of the 1st process.
5. The medical image processing apparatus according to claim 2,
wherein the control unit, after executing the 1st process for the spatially reduced range, further sequentially and repeatedly executes the 2nd process and the 1st process for the spatially reduced range, thereby obtaining the image representing the target region.
6. The medical image processing apparatus according to claim 2,
wherein the control unit further executes a correction process on the image representing the target region.
7. The medical image processing apparatus according to claim 4,
wherein the control unit performs the information enhancement process by performing at least one of: restriction of a region of interest (ROI), geometric transformation, geometric constraint, constraint based on gradation information, and morphological constraint.
8. The medical image processing apparatus according to claim 4,
wherein the control unit performs the keypoint detection process and the segmentation process using a deep learning neural network.
9. The medical image processing apparatus according to claim 8,
wherein the control unit executes the segmentation process after executing a reinforcement process on the medical image data based on the processing result of the keypoint detection process,
the reinforcement process including at least one of:
cropping the medical image data based on the keypoints detected by the keypoint detection process;
inputting a keypoint heatmap obtained by the keypoint detection process into the deep learning neural network as a channel; and
using the keypoint heatmap obtained by the keypoint detection process in a loss function of the deep learning neural network.
10. The medical image processing apparatus according to claim 4,
wherein the medical image data is image data of a heart and the target region is a heart valve.
11. The medical image processing apparatus according to claim 10,
wherein the target region is an aortic valve, and
the keypoint detection process detects, as a keypoint, at least one of a left coronary cusp nadir, a right coronary cusp nadir, a non-coronary cusp nadir, a non-coronary-left coronary commissure, a right coronary-left coronary commissure, a non-coronary-right coronary commissure, a right coronary ostium lower edge point, and a left coronary ostium lower edge point.
12. A medical image processing method comprising:
a step in which the control unit executes a 1 st process including at least two different types of processes on the medical image data; and a step in which the control unit re-executes the 1 st processing for a spatially reduced range based on the processing result of the 1 st processing.
13. A computer-readable storage medium storing a program that causes a computer to execute:
a step of executing a 1 st process including at least two different kinds of processes on medical image data; and a step of executing the 1 st processing again for a spatially reduced range based on the processing result of the 1 st processing.
14. A medical image processing apparatus comprising a control unit,
wherein the control unit:
executes a 1st process including at least two different kinds of processes on medical image data; and
re-executes the 1st process on a range narrowed based on a processing result of the 1st process.
CN202211432902.3A 2021-11-16 2022-11-16 Medical image processing device, medical image processing method, and storage medium Pending CN116137026A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN2021113518521 2021-11-16
CN202111351852 2021-11-16
JP2022166028A JP2023073968A (en) 2021-11-16 2022-10-17 Medical image processing apparatus, medical image processing method and medical image processing program
JP2022-166028 2022-10-17

Publications (1)

Publication Number Publication Date
CN116137026A true CN116137026A (en) 2023-05-19

Family

ID=86332765

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211432902.3A Pending CN116137026A (en) 2021-11-16 2022-11-16 Medical image processing device, medical image processing method, and storage medium

Country Status (1)

Country Link
CN (1) CN116137026A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117409302A (en) * 2023-11-03 2024-01-16 首都医科大学附属北京朝阳医院 Method and device for processing multitasking image

Similar Documents

Publication Publication Date Title
EP3081161B1 (en) Method and system for advanced transcatheter aortic valve implantation planning
US9406142B2 (en) Fully automatic image segmentation of heart valves using multi-atlas label fusion and deformable medial modeling
Tobon-Gomez et al. Benchmark for algorithms segmenting the left atrium from 3D CT and MRI datasets
US8009887B2 (en) Method and system for automatic quantification of aortic valve function from 4D computed tomography data using a physiological model
US8218845B2 (en) Dynamic pulmonary trunk modeling in computed tomography and magnetic resonance imaging based on the detection of bounding boxes, anatomical landmarks, and ribs of a pulmonary artery
US5889524A (en) Reconstruction of three-dimensional objects using labeled piecewise smooth subdivision surfaces
US9292917B2 (en) Method and system for model-based fusion of computed tomography and non-contrasted C-arm computed tomography
US20210209766A1 (en) Method and system for automatically segmenting blood vessel in medical image by using machine learning and image processing algorithm
US8934693B2 (en) Method and system for intervention planning for transcatheter aortic valve implantation from 3D computed tomography data
CN112040908B (en) Patient-specific virtual percutaneous structural cardiac intervention methods and systems
EP3192050B1 (en) Analyzing aortic valve calcification
US9730609B2 (en) Method and system for aortic valve calcification evaluation
US20110052026A1 (en) Method and Apparatus for Determining Angulation of C-Arm Image Acquisition System for Aortic Valve Implantation
US20110096969A1 (en) Method and System for Shape-Constrained Aortic Valve Landmark Detection
Tautz et al. Extraction of open-state mitral valve geometry from CT volumes
CN116137026A (en) Medical image processing device, medical image processing method, and storage medium
Tahoces et al. Deep learning method for aortic root detection
EP2956065B1 (en) Apparatus for image fusion based planning of c-arm angulation for structural heart disease
Weese et al. The generation of patient-specific heart models for diagnosis and interventions
CN111566699A (en) Registration of static pre-procedural planning data to dynamic intra-procedural segmentation data
JP2023073968A (en) Medical image processing apparatus, medical image processing method and medical image processing program
JP7278970B2 (en) Assessment of blood flow obstruction by anatomy
US20240202919A1 (en) Medical image processing apparatus, method, and storage medium
Lopes et al. Automated mitral valve assessment for transcatheter mitral valve replacement planning
EP4299012A1 (en) Three-dimensional shape data generation program, three-dimensional shape data generation method, and information processing apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination