CN115908225A - Tubular organ labeling method, tubular organ labeling result correction method and tubular organ labeling result correction system - Google Patents

Tubular organ labeling method, tubular organ labeling result correction method and tubular organ labeling result correction system

Info

Publication number
CN115908225A
Authority
CN
China
Prior art keywords
tubular organ
image
projection
dimensional
labeling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110942799.6A
Other languages
Chinese (zh)
Inventor
王艳华
李冰
肖其林
王扶月
李庚婉
闫立新
赵龙飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Medical Systems Corp
Original Assignee
Canon Medical Systems Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Medical Systems Corp filed Critical Canon Medical Systems Corp
Priority to CN202110942799.6A priority Critical patent/CN115908225A/en
Priority to JP2022098168A priority patent/JP2023027751A/en
Priority to US17/814,904 priority patent/US20230058183A1/en
Publication of CN115908225A publication Critical patent/CN115908225A/en
Pending legal-status Critical Current

Abstract

The invention provides a tubular organ labeling method, a tubular organ labeling result correction method, and a tubular organ labeling result correction system. The tubular organ labeling method comprises the following steps: a projection step of performing local intensity projection on two-dimensional images constituting three-dimensional volume data containing a tubular organ to obtain a projection image; a mapping matrix acquisition step of acquiring a mapping matrix of the local intensity projection performed in the projection step; a labeling step of labeling the tubular organ in the projection image; and a back-mapping step of inversely mapping the tubular organ labeled in the projection image into the three-dimensional volume data using the mapping matrix to generate a three-dimensional tubular organ image.

Description

Tubular organ labeling method, tubular organ labeling result correction method and tubular organ labeling result correction system
Technical Field
The invention relates to a tubular organ labeling method, a tubular organ labeling result correction method, and a tubular organ labeling result correction system; in particular, to a tubular organ labeling method, a tubular organ labeling result correction method, and a tubular organ labeling result correction system guided by local intensity projection.
Background
Human organs and tissues include tubular structures such as blood vessels and the trachea. These tubular organs or tissue structures are collectively referred to herein as tubular organs. Accurate labeling (annotation) and segmentation of tubular organs in medical images, particularly in three-dimensional volume data, is of great value for the diagnosis and surgical treatment of diseases associated with tubular organs such as blood vessels.
Patent document 1 (US 2017/0178405 A) describes a method of labeling the centerline of a tubular structure. In patent document 1, a contrast medium is injected into the blood flow to enhance the image display of the vessel centerline, and a path from a start position to an end position on the centerline of the tubular structure is found by an iterative algorithm. With a method such as that of patent document 1, which tracks a single vessel centerline between a start point and an end point using a conventional iterative algorithm, it is difficult to track the centerline of minute blood vessels, strongly curved blood vessels, or blood vessels that cross other vessels.
Patent document 2 (CN 101732061) discloses an image processing apparatus and an image processing method for image-based diagnosis of vascular diseases. Patent document 2 proposes a technique for generating display image data from volume data based on a determined viewing direction and viewing position, in which the volume data is subjected to three-dimensional image processing such as MPR (Multi-Planar Reconstruction), CPR (curved surface reconstruction), SPR (stretched CPR), volume rendering, surface rendering, and MIP (Maximum Intensity Projection) to generate the display image data. Patent document 2 also describes attaching labels to blood vessels, but the so-called vessel labels in patent document 2 are annotations of classification names indicating the anatomy of the vessel branches shown in a display image.
Other methods for labeling and segmenting tubular organs such as blood vessels exist in the prior art; in particular, data-driven methods based on artificial intelligence (AI) have been proposed in recent years. However, labeling blood vessels in three-dimensional volume data remains a very challenging task, because tubular organs such as blood vessels exhibit complex geometric and topological variations and contain minute structures.
Disclosure of Invention
Problems in the prior art
If vessel labeling is performed manually in the images forming the three-dimensional volume data, many images must be browsed and the vessel must be checked image by image from its root point to its end points, which is very time-consuming. Conventional techniques such as patent document 1 segment a blood vessel using a conventional algorithm and then manually edit the result into a ground truth (GT). However, the GT quality of some published datasets is not very high. In addition, because blood vessels have complicated shapes and minute structures, and because noise and disease-related clutter are present, manually inspecting and modifying a vessel GT is also very time-consuming and inefficient.
Fig. 1 is a diagram for explaining a method of manually labeling a three-dimensional blood vessel in the related art. Fig. 1 shows a plurality of continuously scanned images of, for example, the pulmonary vessels of a human body. When performing manual vessel labeling, the user needs to browse the images in the order indicated by the arrows in the figure and track, one by one, the vessel points formed in each image from the root to the branches of a single vessel. Because other vessels and clutter are present in the images, the tracking is difficult and the vessel is often lost; in addition, some small branches are easily missed. Even if the user manages to obtain a vessel centerline by such tracking, it is a time-consuming process.
Means for solving the problems
The present invention has been made to solve the above-mentioned problems of the prior art. The invention provides a tubular organ labeling method guided by local intensity projection, a tubular organ labeling result correction method, and a tubular organ labeling result correction system. First, local maximum/minimum intensity projection (LMIP) images of tubular organs such as blood vessels are generated, and an index mapping matrix between the LMIP images and the original three-dimensional volume data is extracted. The user then labels key points (landmarks) on the centerline of the tubular organ in the LMIP images. The labeled key points on the centerline are back-mapped into the original three-dimensional volume data using the index mapping matrix to obtain seed points of the vessel centerline, the vessel centerline is obtained by fitting the obtained seed points, and the three-dimensional labeling of the vessel is completed under the guidance of the vessel centerline.
Specifically, according to one aspect of the present invention, there is provided a tubular organ labeling method comprising the following steps: a projection step of performing local intensity projection on two-dimensional images constituting three-dimensional volume data containing a tubular organ to obtain a projection image; a mapping matrix acquisition step of acquiring a mapping matrix of the local intensity projection performed in the projection step; a labeling step of labeling the tubular organ in the projection image; and a back-mapping step of inversely mapping the tubular organ labeled in the projection image into the three-dimensional volume data using the mapping matrix to generate a three-dimensional tubular organ image.
In this way, the invention exploits the property that local intensity projection enhances the intensity and geometric connectivity of tubular organs (such as blood vessels and the trachea) in the image: key points on the centerline of the tubular organ are labeled in the LMIP image, where the centerline can easily be distinguished from other clutter (other vessels/disease regions/noise), and even minute vessel points can be labeled because the LMIP enhances geometric connectivity. The center points labeled in the LMIP image are then inversely mapped into the original images through the constructed index mapping matrix, so as to generate the centerline of the tubular organ in three-dimensional space.
According to another aspect of the present invention, there is provided a tubular organ labeling result correction method comprising the following steps: an input step of inputting three-dimensional volume data containing a tubular organ and a pre-labeling result obtained by pre-labeling the tubular organ in the three-dimensional volume data; a projection step of performing local intensity projection on two-dimensional images constituting the three-dimensional volume data to obtain a projection image; a mapping matrix acquisition step of acquiring a mapping matrix of the local intensity projection performed in the projection step; a comparison step of comparing the projection image with the pre-labeling result to obtain a missing part of the tubular organ in the pre-labeling result; a labeling step of labeling the missing part of the tubular organ in the projection image; and a back-mapping step of inversely mapping the missing part of the tubular organ labeled in the projection image into the three-dimensional volume data using the mapping matrix and synthesizing it with the pre-labeling result to generate a three-dimensional tubular organ image.
The present invention can be realized as a tubular organ labeling system, a tubular organ labeling result correction system, or a tubular organ labeling result correction device having functional blocks for realizing the steps of the above-described methods, as a computer program for causing a computer to execute the steps included in the above-described methods, or as a recording medium on which the computer program is recorded.
According to the invention, key points on the centerline of the tubular organ are labeled in the LMIP image, so that the centerline of the tubular organ can easily be distinguished from other clutter (other vessels/disease regions/noise), and even minute vessel points can be labeled owing to the enhanced geometric connectivity of the LMIP. The center points labeled in the LMIP image are then inversely mapped into the original images through the constructed index mapping matrix, so as to generate the centerline of the tubular organ in three-dimensional space.
Drawings
FIG. 1 is a schematic diagram for explaining a method of manually labeling a three-dimensional blood vessel in the prior art;
FIG. 2 is a schematic diagram for explaining the concept of maximum intensity projection;
FIG. 3 is a flow chart illustrating a method for labeling a tubular organ according to a first embodiment of the present invention;
FIG. 4 is a schematic diagram showing the local maximum intensity projection effect of step S200;
FIG. 5 is a diagram showing operations performed in steps S400 to S500;
FIG. 6 is a diagram showing the operation performed in step S600;
FIG. 7 is a schematic diagram for explaining intensity projection processing of multiple groups of two-dimensional images according to a modification of the first embodiment;
FIG. 8 is a functional block diagram illustrating a tubular organ labeling system according to the first embodiment of the present invention;
FIG. 9 is a flow chart showing a tubular organ labeling result correction method according to a second embodiment of the present invention;
FIG. 10 is a schematic diagram showing the operations performed in steps S300' and S400'.
Detailed Description
The present invention relates to a tubular organ labeling method and system and a tubular organ labeling result correction method and system, which can be implemented by executing a software program on a device having a central processing unit (CPU), such as a stand-alone computer, and which can also be implemented as hardware, i.e., as circuitry capable of executing the steps of the methods. The system of the present invention may also be installed in advance in a medical image acquisition apparatus, such as a magnetic resonance imaging (MRI) apparatus, as part of that apparatus.
In the different embodiments, the same reference numerals are used for the same components, and the duplicate description is appropriately omitted.
Before describing the embodiments of the present invention, local maximum/minimum intensity projection (LMIP), which is the basis of the technical solution of the present invention, is described. In this specification, local maximum intensity projection and local minimum intensity projection are sometimes collectively referred to as local intensity projection.
Maximum/minimum intensity projection (MIP) is a direct volume rendering (DVR) method in which the maximum or minimum intensity value along each projection ray is projected onto a projection plane to visualize the volume data; it is widely used in medical image processing and related fields.
The local maximum/minimum intensity projection employed in the present invention is an improvement of MIP. It proceeds like MIP except that it is applied not to the entire volume data but to a part of it: when imaginary rays pass through that part of the volume data, the maximum or minimum intensity value encountered along each ray is projected onto the projection plane to obtain the projection image.
Local intensity projection as used in the present invention is described in detail below with reference to fig. 2, taking local maximum intensity projection as an example.
As shown in fig. 2 (a), taking for example 5 slice images constituting part of the volume data, a virtual ray is cast from back to front in the direction perpendicular to the slice images (hereinafter referred to as the slice direction), and the maximum intensity value among the slices through which the ray passes is projected onto a projection plane to form the LMIP image, i.e., the projection image. For example, fig. 2 (a) shows a pixel position at which the 5 slice images have pixel values 135, 166, 238, 141, and 169; the maximum of these 5 values, "238", is taken as the pixel value of the LMIP image at that position. Performing this processing for all pixels of the slice images finally yields an LMIP image in the slice direction, as shown in fig. 2 (b). As is evident from the figure, the projection image obtained by LMIP processing significantly enhances the intensity and geometric connectivity of the vessels, and the vessel centerline is clearly distinguishable from other clutter (other vessels/disease regions/noise).
When the local maximum intensity projection image is generated from N slice images, the pixel value at each pixel position of the projection image originates from the one of the N slice images that has the maximum pixel value at that position. Therefore, the image number of the slice image that supplied the pixel value at each pixel position of the LMIP image can be associated with that pixel position to form a table as shown in fig. 2 (c); this table is the index mapping matrix (hereinafter sometimes simply referred to as the "mapping matrix") of the LMIP in the present invention.
The image numbering rule of the present invention is, as shown in fig. 2 (a), that the middle image of the plurality of consecutive slice images is numbered 0, and the remaining images are numbered from near to far on either side of the 0th image as 1, 2, … toward the front and -1, -2, … toward the back. When more than one image has the maximum pixel value at a certain pixel position, the image that comes first in the numbering order may, for example, be taken as the index value for that position; for instance, if the 1st image and the -1st image have the same, maximum pixel value at a certain pixel position, the index value of that pixel position is defined as "1" when forming the mapping matrix.
The foregoing image numbering rule and the definition of the front and rear directions are only examples, and the present invention is of course also applicable to other conventions. It suffices that the mapping matrix of the present invention is a mapping relationship that indicates, for each pixel in the projection image, from which of the plurality of two-dimensional images used to generate the projection image the pixel value of that pixel originates, i.e., to which of those two-dimensional images the pixel corresponds.
As shown in fig. 2 (c), the mapping matrix of the present invention indicates, for each pixel in the projection image, from which image among the plurality of two-dimensional images used to generate the projection image the pixel value of that pixel originates. Taking 8 × 8 pixels per slice image as an example, the value of the cell at the bottom-left corner (table position (0, 0)) is 0, which means that in the LMIP image the pixel value at pixel position (0, 0), i.e., the bottom-left corner, comes from the 0th image; in other words, of all 5 images the 0th image has the largest pixel value at pixel position (0, 0). Likewise, the cell immediately to the right of the bottom-left cell has the value -2, indicating that in the LMIP image the pixel value at pixel position (0, 1), i.e., the position immediately to the right of the bottom-left corner, comes from the -2nd image; that is, of all 5 images, the -2nd image has the largest pixel value at pixel position (0, 1).
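To make the projection step and the index mapping matrix concrete, the following is a minimal sketch in Python/NumPy. It assumes the slab is stored as an array of shape (N, H, W) and uses the signed numbering described above (middle slice = 0); the function name, the array layout, and the tie-breaking behavior of argmax are illustrative assumptions rather than details taken from the patent.

```python
import numpy as np

def local_intensity_projection(slices, use_max=True):
    """Project a slab of 2D slices (shape N x H x W) into one LMIP image.

    Returns the projection image and an index mapping matrix holding, for each
    pixel, the signed number (..., -1, 0, 1, ...) of the slice that supplied it.
    """
    n = slices.shape[0]
    signed = np.arange(n) - n // 2        # e.g. [-2, -1, 0, 1, 2] for n = 5
    if use_max:
        proj = slices.max(axis=0)         # local maximum intensity projection
        src = slices.argmax(axis=0)       # storage index of the winning slice
    else:
        proj = slices.min(axis=0)         # local minimum intensity projection
        src = slices.argmin(axis=0)
    # Note: argmax/argmin break ties by taking the slice stored first, which may
    # differ from the numbering-order tie rule described in the text.
    return proj, signed[src]              # projection image, mapping matrix

# Toy slab echoing Fig. 2(a): at one pixel the five slices hold
# 135, 166, 238, 141, 169, so the LMIP value is 238 and the index is 0.
slab = np.zeros((5, 8, 8), dtype=np.int16)
slab[:, 4, 3] = [135, 166, 238, 141, 169]
lmip, mapping = local_intensity_projection(slab)
assert lmip[4, 3] == 238 and mapping[4, 3] == 0
```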
As shown in fig. 2 (d), in addition to the slice direction (direction (1) in fig. 2), the same LMIP projection imaging may also be applied to the three-dimensional volume data in the row direction and the column direction perpendicular to the slice direction (directions (2) and (3) in fig. 2), thereby obtaining LMIP images in the row direction and the column direction.
As described above, LMIP not only reconstructs the vessel points of the individual slice images into a continuous blood vessel, but also displays small intensity changes with better contrast than a full MIP image, so that it can clearly show, for example, stenoses, dilations, and defects of a blood vessel, and is therefore particularly suitable for image reconstruction of tubular organs such as blood vessels.
Local maximum intensity projection has been used above as the example. Local maximum intensity projection is used when the target portion (the blood vessel) in the image is bright, as in fig. 2, i.e., when the target has a higher intensity than the background; local minimum intensity projection is used when the target portion is dark. Local minimum intensity projection differs from local maximum intensity projection only in that, for each pixel position, the minimum rather than the maximum of the pixel values in the slice images through which the ray passes is taken as the pixel value at that position in the projection image; everything else is the same as for local maximum intensity projection, and the details are not repeated.
Both local maximum intensity projection and local minimum intensity projection are applicable to the solution of the invention. Although local maximum intensity projection is used as the LMIP in the following embodiments, it will be obvious to those skilled in the art that the local maximum intensity projection in each embodiment may be replaced with local minimum intensity projection to constitute an embodiment of the present invention and achieve its technical effects.
The following describes specific embodiments of the present invention.
< first embodiment >
The present invention according to the first embodiment is a tubular organ labeling method comprising: a projection step of performing local intensity projection on two-dimensional images constituting three-dimensional volume data containing a tubular organ to obtain a projection image; a mapping matrix acquisition step of acquiring a mapping matrix of the local intensity projection performed in the projection step; a labeling step of labeling the tubular organ in the projection image; and a back-mapping step of inversely mapping the tubular organ labeled in the projection image into the three-dimensional volume data using the mapping matrix to generate a three-dimensional tubular organ image.
The first embodiment of the present invention is described in detail below with reference to the drawings.
Fig. 3 is a flowchart illustrating a tubular organ labeling method according to a first embodiment of the present invention. The first embodiment of the present invention will be described by taking a label of a blood vessel as an example.
As shown in fig. 3, the tubular organ labeling method of the present invention comprises the following steps: step S100, inputting original three-dimensional volume data containing blood vessels; step S200, performing, for example, local maximum intensity projection on a plurality of consecutive slice images in a certain direction to obtain an LMIP image; step S300, obtaining the projection mapping matrix of the local maximum intensity projection performed in step S200; step S400, labeling the centerline of a blood vessel in the LMIP image; step S500, using the mapping matrix obtained in step S300, back-mapping each point on the centerline labeled on the LMIP image in step S400 into the original slice images to obtain seed points of the vessel centerline in the individual slice images; and step S600, completing the 3D vessel labeling under the guidance of the obtained seed points.
The above steps are described in detail below with reference to the accompanying drawings.
The details of performing the local maximum intensity projection and of obtaining its projection mapping matrix in steps S200 and S300 have already been described above with reference to fig. 2 and are not repeated here. Fig. 4 is a schematic diagram showing the local maximum intensity projection effect of step S200.
The three images at the top of fig. 4 are scan slice images of the axial, sagittal, and coronal orientations of human pulmonary vessels, respectively, and the three images at the bottom are the local maximum intensity projection images obtained by performing local maximum intensity projection on a plurality of scan slice images of the axial, sagittal, and coronal orientations, respectively. As can be seen from the figure, by performing local maximum intensity projection at certain intervals, the obtained LMIP images enhance the intensity and geometric connectivity of the vessels, and the vessel centerline can easily be labeled in the LMIP image. In the present invention, when performing the local maximum intensity projection, any of the three directions (axial, sagittal, coronal) or another direction specified by the user may be selected; for example, the direction in which the LMIP is performed may be chosen according to the orientation of the tubular organ (e.g., the blood vessel). If the angle between the course of the blood vessel and the plane of the scan slice images is large (e.g., close to perpendicular), LMIP processing must be performed on more slice images to obtain the complete vessel centerline; conversely, if the vessel runs at a small angle to (e.g., almost parallel to) the plane of the scan slice images, fewer slice images are required to obtain the complete centerline. It is therefore preferable to perform the LMIP processing on scan slice images whose plane makes a small angle with the course of the vessel.
Fig. 5 is a schematic diagram showing operations performed in steps S400 to S500.
Shown in the upper left of fig. 5 is an LMIP image obtained in step S200, for example an axial LMIP image.
As shown in fig. 5, according to the tubular organ labeling method of the present invention, in step S400 the vessel centerline is labeled in the LMIP image obtained in step S200; for example, key points (landmarks) on the vessel centerline can be labeled in the LMIP image manually or by an automatic method known in the art. These key points may be characteristic positions determined according to the shape of the vessel, for example a start point, an end point, an obvious inflection point, or a branch point. In addition, points where vessels overlap in the LMIP image can be avoided when selecting the key points.
After the key points on the vessel centerline have been labeled on the LMIP image in step S400, in step S500 the key points are back-mapped into the original three-dimensional volume data using the mapping matrix obtained in step S300, based on the pixel positions of the labeled key points in the LMIP image.
The operation in step S500 is specifically described below, again using the 8 × 8 matrix illustrated in fig. 2 and the LMIP of the 5 images numbered -2, -1, 0, 1, 2.
Since the LMIP image is known to consist of 8 × 8 pixels, the pixel position of each point in the LMIP image is known. The bright line in the upper-left LMIP image in fig. 5 indicates the key points on the vessel centerline labeled in step S400. For the key point c1 located at the root of the vessel, assume that c1 is the 4th pixel horizontally and the 3rd pixel vertically, i.e., the pixel position of c1 is (4, 3). The index value corresponding to pixel position (4, 3) in the mapping matrix is "2", which means that the pixel value at position (4, 3) of the projection image was taken from the "2nd" of the five images (-2, -1, 0, 1, 2), because the "2nd" image has the largest pixel value at (4, 3). The back-mapping result for key point c1 is therefore that the point with pixel coordinates (4, 3) in the "2nd" image of the consecutive slice images is determined as the seed point on the vessel centerline in three-dimensional space corresponding to c1 in the LMIP image.
By repeating this procedure, the above inverse mapping is performed on all key points labeled in the LMIP image in step S400, and the three-dimensional spatial coordinates of each seed point on the 3D centerline, composed of the pixel coordinates (for example, (4, 3) above) and the image number (for example, "2"), are obtained; the seed points of the vessel centerline in three-dimensional space are thereby obtained.
The above is a description of the basic operation in step S500.
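As one way to picture this back-mapping, the following is a small sketch assuming the signed-index mapping matrix from the earlier projection sketch and a parameter slab_mid giving the absolute position, in the full volume, of the slice numbered 0; the function and parameter names are illustrative, not defined in the patent.

```python
def backmap_keypoints(keypoints_2d, mapping_matrix, slab_mid):
    """Map key points labeled on the LMIP image back to 3D seed points.

    keypoints_2d   : iterable of (row, col) pixel positions in the LMIP image.
    mapping_matrix : signed slice numbers produced by the projection step.
    slab_mid       : absolute index, in the full volume, of the slice numbered 0.
    """
    seeds = []
    for row, col in keypoints_2d:
        signed_no = int(mapping_matrix[row, col])   # e.g. "2" for key point c1 at (4, 3)
        z = slab_mid + signed_no                    # absolute slice index in the volume
        seeds.append((z, row, col))                 # 3D coordinate of the seed point
    return seeds
```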
Further, in step S500 the seed points on the vessel centerline in three-dimensional space obtained by the back-mapping may also be used to fit the 3D centerline of the vessel in three-dimensional space by a fitting method known in the art. In addition, using known image processing methods, a CPR (curved surface reconstruction), an SPR (stretched CPR), and cross-sectional views (cross cuts) of the vessel centerline can be generated from the fitted 3D centerline for various medical applications. The fitting and the generation of the CPR, SPR, and cross-sectional views are not essential to step S500 and may be omitted.
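The patent leaves the choice of fitting method open ("a fitting method known in the art"). As one illustration only, the sketch below fits a cubic B-spline through the seed points with SciPy; it assumes the seed points are already ordered along the vessel and that at least four of them are available.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def fit_centerline(seed_points, n_samples=200):
    """Fit a smooth 3D centerline through ordered seed points (z, y, x)."""
    pts = np.asarray(seed_points, dtype=float)                 # shape (K, 3), K >= 4
    tck, _ = splprep([pts[:, 0], pts[:, 1], pts[:, 2]], s=0)   # interpolating cubic spline
    u = np.linspace(0.0, 1.0, n_samples)
    z, y, x = splev(u, tck)
    return np.stack([z, y, x], axis=1)                         # densely sampled 3D centerline
```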
Fig. 6 is a schematic diagram showing the operation performed in step S600.
In step S600, the three-dimensional vessel labeling is completed under the guidance of the obtained seed points. Specifically, as shown in fig. 6 (a), the centerline fitted from the seed points obtained in step S500 may be used directly as a sparsely labeled vessel GT; alternatively, as shown in fig. 6 (b), the user may further generate the complete vessel region from the fitted vessel centerline, for example by manual labeling or by a conventional adaptive threshold method.
Step S600 may be implemented by any method known in the art; for example, the complete vessel region may be generated from the fitted vessel centerline by a conventional vessel segmentation method based on image intensity or geometric features, or by a segmentation method based on deep learning, and the details are not repeated here.
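As a purely illustrative example of the adaptive-threshold option, the sketch below accepts, around each centerline point, the voxels that are at least a given fraction as bright as the local centerline intensity; the neighborhood radius, the relative threshold, the assumption of bright vessels, and the assumption that the centerline points lie inside the volume are all choices made for this sketch, not requirements of the patent.

```python
import numpy as np

def grow_vessel_mask(volume, centerline_pts, radius=3, rel_threshold=0.6):
    """Grow a rough vessel mask around a fitted centerline (bright vessels assumed)."""
    mask = np.zeros(volume.shape, dtype=bool)
    for z, y, x in np.round(np.asarray(centerline_pts)).astype(int):
        z0, z1 = max(z - radius, 0), min(z + radius + 1, volume.shape[0])
        y0, y1 = max(y - radius, 0), min(y + radius + 1, volume.shape[1])
        x0, x1 = max(x - radius, 0), min(x + radius + 1, volume.shape[2])
        local = volume[z0:z1, y0:y1, x0:x1]
        # Accept voxels whose intensity reaches a fraction of the centerline intensity.
        mask[z0:z1, y0:y1, x0:x1] |= local >= rel_threshold * volume[z, y, x]
    return mask
```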
The above is a description of the tubular organ labeling method guided by local intensity projection according to the first embodiment of the present invention. As described above, the invention exploits the property that local intensity projection enhances the intensity and geometric connectivity of tubular organs (such as blood vessels and the trachea) in the image to label key points on the centerline of the tubular organ in the LMIP image, so that the centerline can easily be distinguished from other clutter (other vessels/disease regions/noise), and even minute vessel points can be labeled owing to the enhanced geometric connectivity of the LMIP. The center points labeled in the LMIP image are then inversely mapped into the original images through the constructed index mapping matrix, so as to generate the centerline of the tubular organ in three-dimensional space.
In addition, the present invention may include various modifications. For example, in the first embodiment the LMIP and the subsequent labeling and back-mapping processing are performed only once, on a single group of slice images (for example, 5 slices). Performing the LMIP, labeling, and back-mapping processing multiple times, on multiple groups of slice images, can however yield a better labeling result for the 3D vessel centerline. Performing the LMIP processing only once on one group of slice images is effective when only one blood vessel is to be labeled, but the LMIP, labeling, and back-mapping processing may also be performed on each of multiple groups of slice images when, for example, multiple vessels of the entire pulmonary vascular system are to be labeled.
Fig. 7 is a schematic diagram for explaining intensity projection processing of multiple groups of two-dimensional images according to a first modification of the first embodiment. As shown in fig. 7, in addition to performing, as in the first embodiment, the local intensity projection (1) on the two-dimensional image group (1) numbered -2, -1, 0, 1, 2 among the plurality of two-dimensional images constituting the three-dimensional volume data to obtain the projection image (1) and acquiring the corresponding mapping matrix (1), projection images (2) to (M) may be obtained by performing local intensity projections (2) to (M) on further two-dimensional image groups (2) to (M), and the corresponding mapping matrices (2) to (M) may be acquired (the local intensity projections (2) to (M) and the mapping matrices (2) to (M) are omitted in the drawing). In the projection images (2) to (M), different blood vessels are labeled, for example; the labeled vessels are then back-mapped into the original three-dimensional volume data using the corresponding mapping matrices (2) to (M) and finally synthesized into multiple vessels of, for example, the pulmonary vascular system.
The numbers of images in the groups of two-dimensional images may be the same; for example, another group may be formed by shifting each image of the two-dimensional image group (1) forward (or backward) by one (or several) images, and repeating this several times yields multiple two-dimensional image groups. The numbers of images in the groups may also differ and may be chosen appropriately according to the length, the course, and other properties of the target vessel. The synthesis mentioned above can be achieved by any method known in the art and is not described here.
That is, according to the first modification of the first embodiment, projection images can be obtained by performing local intensity projection on each of multiple groups of two-dimensional images. In the mapping matrix acquisition step, a mapping matrix is acquired for each of the multiple groups of two-dimensional images. In the labeling step, the tubular organ is labeled in each of the multiple projection images. In the back-mapping step, the tubular organ is back-mapped into the three-dimensional volume data using the mapping matrix of each group, and a three-dimensional tubular organ image is synthesized.
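One possible reading of this "multiple groups" variant is a sliding window over the slices, sketched below; the window size and step are arbitrary illustration values, and the slab offset is kept so that each group can be back-mapped with its own mapping matrix.

```python
def sliding_window_lmip(volume, window=5, step=1):
    """Generate an LMIP image and index matrix for successive groups of slices.

    volume : NumPy array of shape (Z, H, W) holding the original slice stack.
    Returns a list of (slab_start, projection_image, index_matrix) tuples.
    """
    results = []
    for start in range(0, volume.shape[0] - window + 1, step):
        slab = volume[start:start + window]
        proj = slab.max(axis=0)             # local maximum intensity projection
        src = slab.argmax(axis=0)           # 0 .. window-1 within the slab
        results.append((start, proj, src))  # keep the slab offset for back-mapping
    return results
```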
For example, in the first embodiment the LMIP and the subsequent labeling and back-mapping processing are performed only on slice images in one direction, for example the coronal direction. When an accurate vessel centerline cannot be obtained from LMIP labeling and back-mapping in a single direction, e.g., because of the vessel orientation or because part of the vessel spatially crosses another vessel, a better labeling result for the 3D vessel centerline can be obtained by using LMIP labeling and back-mapping in multiple directions.
That is, according to the second modification of the first embodiment, projection images in multiple directions can be obtained by performing local intensity projection on two-dimensional images, constituting the three-dimensional volume data, in multiple directions. In the mapping matrix acquisition step, a mapping matrix is acquired for each of the multiple directions. In the labeling step, the tubular organ is labeled in each of the projection images of the multiple directions. In the back-mapping step, the key points of the tubular organ centerline in the LMIP images of the multiple directions are back-mapped into the three-dimensional volume data using the mapping matrix of each direction, and finally a three-dimensional tubular organ image is synthesized.
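Projecting in the three orthogonal directions amounts to choosing which axis of the volume the slab is taken along; a short sketch under that assumption (array layout (Z, Y, X), arbitrary slab start indices) is given below.

```python
import numpy as np

# Hypothetical volume of shape (Z, Y, X); the content and slab positions are
# arbitrary illustration values.
volume = np.random.randint(0, 300, size=(64, 64, 64)).astype(np.int16)
z0 = y0 = x0 = 20

def lmip_with_index(slab):
    return slab.max(axis=0), slab.argmax(axis=0)

# Slice direction (direction (1) in fig. 2): five consecutive axial slices.
proj_1, idx_1 = lmip_with_index(volume[z0:z0 + 5])
# Row and column directions ((2) and (3)): move the desired axis to the front.
proj_2, idx_2 = lmip_with_index(np.moveaxis(volume, 1, 0)[y0:y0 + 5])
proj_3, idx_3 = lmip_with_index(np.moveaxis(volume, 2, 0)[x0:x0 + 5])
# Each projection has its own index matrix, so key points labeled in any of the
# three images are back-mapped with the matrix of that direction.
```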
The number of two-dimensional images used to generate the LMIP image in step S200 is variable and may be chosen according to the actual situation. As the number of two-dimensional images used to generate the LMIP image increases, the intensity and geometric connectivity of the tubular organ in the resulting LMIP projection image improve, but the amount of data processing increases accordingly. By selecting the number of images used for the LMIP according to the actual situation, the invention can strike a balance between the processing load and the LMIP effect. For example, the more densely the tubular organs are distributed, the fewer images should be used to generate the LMIP image.
The tubular organ labeling system of the first embodiment is explained below.
Fig. 8 is a functional block diagram showing a tubular organ labeling system according to the first embodiment of the present invention. As shown in fig. 8, the tubular organ labeling system 1 according to the first embodiment comprises: a projection device 10 that performs local intensity projection on two-dimensional images constituting three-dimensional volume data containing a tubular organ to obtain a projection image; a mapping matrix acquisition device 20 that acquires the mapping matrix of the local intensity projection performed by the projection device 10; a labeling device 30 that labels the tubular organ in the projection image; and a back-mapping device 40 that generates a three-dimensional tubular organ image by back-mapping the tubular organ labeled in the projection image into the three-dimensional volume data using the mapping matrix. The processing of each functional module of the tubular organ labeling system of the present invention corresponds to the respective step of the tubular organ labeling method described above and is not described in detail here.
The above is a description of the tubular organ labeling method and system of the first embodiment. According to the invention, key points on the centerline of the tubular organ are labeled in the LMIP image, so that the centerline can easily be distinguished from other clutter (other vessels/disease regions/noise), and even minute vessel points can be labeled owing to the enhanced geometric connectivity of the LMIP. The center points labeled in the LMIP image are then inversely mapped into the original images through the constructed index mapping matrix, so as to generate the centerline of the tubular organ in three-dimensional space.
< second embodiment >
The present invention according to the second embodiment applies the tubular organ labeling method of the first embodiment to the correction of the tubular organ labeling result.
When the initial vessel GT region has been labeled with a conventional algorithm such as thresholding, a clinician is often required to review the labeling result and edit erroneous or missing vessel branches. If the vessel tree contains many branches, it is difficult to check which vessel has been missed or interrupted in the labeling result.
For this reason, the present invention according to the second embodiment applies the tubular organ labeling method of the first embodiment to the correction of tubular organ labeling results. By comparing the LMIP image with the corresponding pre-labeled vessel region, the user can easily and immediately recognize which vessel branch is missing or interrupted, and can then correct it in the LMIP image using the method of the first embodiment.
The second embodiment of the present invention will be described in detail below with reference to the accompanying drawings.
FIG. 9 is a flowchart illustrating a tubular organ labeling result correction method according to a second embodiment of the present invention.
As shown in fig. 9, the tubular organ labeling result correction method of the present invention comprises the following steps.
In step S100', three-dimensional volume data containing a tubular organ and a pre-labeling result obtained by labeling the tubular organ in the three-dimensional volume data are input.
In step S200', local maximum intensity projection, for example, is performed on the two-dimensional images constituting the three-dimensional volume data to obtain an LMIP projection image, and the mapping matrix of the local maximum intensity projection is acquired.
In step S300', the LMIP projection image obtained in step S200' is compared with the pre-labeling result input in step S100' to obtain the missing part of the tubular organ in the pre-labeling result, and the missing part of the tubular organ is labeled in the LMIP projection image.
In step S400', the vessel center points of the missing part of the tubular organ labeled in the LMIP projection image are back-mapped into the three-dimensional volume data using the mapping matrix.
In step S500', the back-mapping result and the pre-labeling result are synthesized to generate a three-dimensional tubular organ image.
The above steps are described in detail below with reference to the accompanying drawings.
The details of performing the local maximum intensity projection and acquiring its mapping matrix in step S200' are the same as in the corresponding steps of the first embodiment and in the description referring to fig. 2, and are not repeated here.
Fig. 10 is a schematic diagram showing the operations performed in steps S300', S400'.
As shown in fig. 10 (a), by comparing the maximum intensity projection image (the gray (or dark) portion of the vessel pattern in the upper-left drawing of fig. 10) with the pre-labeling result (the red (or bright) portion of the vessel pattern), for example by simply displaying the two superimposed, it can be clearly seen that the maximum intensity projection image contains more vessel information. For example, the middle portion on the right side of fig. 10 (a) contains a gray vessel pattern m1 that is not covered by the red vessel pattern; this gray vessel pattern is the missing part of the tubular organ (blood vessel) in the pre-labeling result. Then, as shown in fig. 10 (b), in step S300' the missing part m1 is labeled as described in step S400 of the first embodiment. Next, as shown in fig. 10 (c), the labeled missing part is back-mapped, fitted, and so on using the mapping matrix, as described in step S500 of the first embodiment.
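A rough sketch of how such a comparison could be computed is given below: the pre-labeled 3D mask is projected with the same maximum operation, and LMIP pixels that look like vessel but are not covered by the projected pre-label are flagged. The intensity threshold and the fully automatic flagging are assumptions of this sketch; in the text the comparison is presented to the user as a simple overlay.

```python
def find_missing_vessel_pixels(lmip_image, prelabel_mask_3d, vessel_threshold):
    """Flag candidate missing-vessel pixels (e.g. region m1 in fig. 10 (a)).

    lmip_image       : 2D LMIP image of the slab.
    prelabel_mask_3d : boolean pre-labeling mask for the same slab, shape (N, H, W).
    vessel_threshold : intensity above which an LMIP pixel is treated as vessel-like.
    """
    prelabel_proj = prelabel_mask_3d.any(axis=0)    # pixels covered by the projected pre-label
    vessel_like = lmip_image >= vessel_threshold    # bright, vessel-like pixels
    return vessel_like & ~prelabel_proj             # vessel-like but not pre-labeled
```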
The missing part may be an entire missing blood vessel, or a missing portion of a blood vessel, such as a missing branch, a missing terminal portion, or a missing segment.
Here, steps S300' and S400' are realized by the corresponding steps of the tubular organ labeling method described in the first embodiment. Specifically, the labeling in step S300' and the back-mapping in step S400' are the same as steps S400 and S500 of the first embodiment; the only difference is that S400 and S500 of the first embodiment are performed on the entire vessel, whereas S300' and S400' of the present embodiment are performed only on the missing part obtained by the comparison in step S300'. Their detailed description is therefore omitted.
The remaining techniques, such as the comparison in step S300' and the synthesis in step S500', may be implemented using techniques known in the art and are not described here.
In addition to the effects of the first embodiment, the second embodiment provides the effect of correcting a labeling result obtained beforehand for a tubular organ such as a blood vessel by another method.
While the tubular organ labeling result correction method according to the second embodiment has been described above, the second embodiment can be realized as a system or an apparatus corresponding to the tubular organ labeling result correction method.
< other modifications >
The present invention is not limited to the above-described embodiments, and various modifications may be made.
For example, although the blood vessel is described as an example in each of the above embodiments, the present invention can be applied to other tubular organs such as the trachea.
The system of the present invention may be incorporated in a medical device as a circuit capable of realizing the functions described in the embodiments, or may be distributed as a program executable by a computer and stored in a storage medium such as a magnetic disk (a flexible disk, a hard disk, or the like), an optical disc (a CD-ROM, a DVD, a BD, or the like), a magneto-optical disk (MO), or a semiconductor memory.
Further, based on instructions from a program installed in the computer from the storage medium, middleware (MW) running on the computer, such as an OS (operating system), database management software, and network software, may execute part of the processing for realizing the above embodiments.
Several embodiments of the present invention have been described above, but these embodiments are provided as examples and are not intended to limit the scope of the invention. These new embodiments may be implemented in various other forms, and various omissions, substitutions, and changes may be made without departing from the spirit of the invention. These embodiments and modifications thereof are included in the scope and gist of the invention, and are also included in the invention described in the claims and the equivalent scope thereof.

Claims (10)

1. A tubular organ labeling method, characterized by comprising the following steps:
a projection step of performing local intensity projection on two-dimensional images constituting three-dimensional volume data containing a tubular organ to obtain a projection image;
a mapping matrix acquisition step of acquiring a mapping matrix of the local intensity projection performed in the projection step;
a labeling step of labeling the tubular organ in the projection image; and
a back-mapping step of inversely mapping the tubular organ labeled in the projection image into the three-dimensional volume data using the mapping matrix to generate a three-dimensional tubular organ image.
2. The tubular organ labeling method of claim 1, wherein
the mapping matrix represents, for each pixel in the projection image, a correspondence relationship indicating which image among the plurality of two-dimensional images the pixel corresponds to; and
in the back-mapping step, the two-dimensional image corresponding to each pixel of the tubular organ indicated in the projection image is specified from the pixel coordinates of the pixel based on the correspondence relationship, and a three-dimensional tubular organ image is generated using the pixel coordinates and image number information indicating which of the plurality of two-dimensional images the specified two-dimensional image is.
3. The tubular organ labeling method of claim 1 or 2, wherein
in the labeling step, key points on the centerline of the tubular organ are labeled; and
the back-mapping step generates key points on the centerline of the tubular organ in three-dimensional space.
4. The tubular organ labeling method of claim 3, wherein
the back-mapping step further fits a three-dimensional tubular organ centerline from the key points on the three-dimensional tubular organ centerline; and
the tubular organ labeling method further comprises:
a tubular organ labeling step of labeling a three-dimensional tubular organ image according to the fitted three-dimensional tubular organ centerline.
5. The tubular organ labeling method of any one of claims 1 to 4, wherein
in the projection step, local intensity projection is performed on each of multiple groups of two-dimensional images to obtain multiple projection images;
in the mapping matrix acquisition step, the mapping matrix is acquired for each of the multiple groups of two-dimensional images;
in the labeling step, the tubular organ is labeled in each of the multiple projection images; and
in the back-mapping step, the tubular organ is back-mapped into the three-dimensional volume data using the mapping matrix of each group, and a three-dimensional tubular organ image is synthesized.
6. The tubular organ labeling method of any one of claims 1 to 5, wherein
in the projection step, local intensity projection is performed on two-dimensional images in multiple directions constituting the three-dimensional volume data to obtain projection images in the multiple directions;
in the mapping matrix acquisition step, the mapping matrix is acquired for each of the multiple directions;
in the labeling step, the tubular organ is labeled in each of the projection images in the multiple directions; and
in the back-mapping step, the tubular organ in the multiple directions is back-mapped into the three-dimensional volume data using the mapping matrix of each direction, and a three-dimensional tubular organ image is synthesized.
7. The tubular organ labeling method of any one of claims 1 to 6, wherein
the number of two-dimensional images used for generating the projection image is variable.
8. The tubular organ labeling method of any one of claims 1 to 7, wherein
the tubular organ is a blood vessel.
9. A tubular organ labeling result correction method, characterized by comprising the following steps:
an input step of inputting three-dimensional volume data containing a tubular organ and a pre-labeling result obtained by labeling the tubular organ in the three-dimensional volume data;
a projection step of performing local intensity projection on two-dimensional images constituting the three-dimensional volume data to obtain a projection image;
a mapping matrix acquisition step of acquiring a mapping matrix of the local intensity projection performed in the projection step;
a comparison step of comparing the projection image with the pre-labeling result to obtain a missing part of the tubular organ in the pre-labeling result;
a labeling step of labeling the missing part of the tubular organ in the projection image; and
a back-mapping step of inversely mapping the missing part of the tubular organ labeled in the projection image into the three-dimensional volume data using the mapping matrix, and synthesizing it with the pre-labeling result to generate a three-dimensional tubular organ image.
10. A tubular organ labeling system, comprising:
a projection device that performs local intensity projection on two-dimensional images constituting three-dimensional volume data containing a tubular organ to obtain a projection image;
a mapping matrix acquisition device that acquires a mapping matrix of the local intensity projection performed by the projection device;
a labeling device that labels the tubular organ in the projection image; and
a back-mapping device that inversely maps the tubular organ labeled in the projection image into the three-dimensional volume data using the mapping matrix to generate a three-dimensional tubular organ image.
CN202110942799.6A 2021-08-17 2021-08-17 Tubular organ labeling method, tubular organ labeling result correction method and tubular organ labeling result correction system Pending CN115908225A (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202110942799.6A CN115908225A (en) 2021-08-17 2021-08-17 Tubular organ labeling method, tubular organ labeling result correction method and tubular organ labeling result correction system
JP2022098168A JP2023027751A (en) 2021-08-17 2022-06-17 Medical image processing device and medical image processing method
US17/814,904 US20230058183A1 (en) 2021-08-17 2022-07-26 Medical image processing apparatus and medical image processing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110942799.6A CN115908225A (en) 2021-08-17 2021-08-17 Tubular organ labeling method, tubular organ labeling result correction method and tubular organ labeling result correction system

Publications (1)

Publication Number Publication Date
CN115908225A true CN115908225A (en) 2023-04-04

Family

ID=85330394

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110942799.6A Pending CN115908225A (en) 2021-08-17 2021-08-17 Tubular organ labeling method, tubular organ labeling result correction method and tubular organ labeling result correction system

Country Status (2)

Country Link
JP (1) JP2023027751A (en)
CN (1) CN115908225A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116630450A (en) * 2023-05-29 2023-08-22 中国人民解放军陆军军医大学 Method, device and storage medium for extracting and encoding characteristics in arterial interlayer cavity

Also Published As

Publication number Publication date
JP2023027751A (en) 2023-03-02


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination