CN112381822B - Method for processing images of focal zones of the lungs and related product - Google Patents

Publication number
CN112381822B
Authority
CN
China
Prior art keywords
lung
value
geometric
region
dimensional
Prior art date
Legal status
Active
Application number
CN202110040070.XA
Other languages
Chinese (zh)
Other versions
CN112381822A (en)
Inventor
王振常
雷娜
侯代伦
李维
任玉雪
吕晗
魏璇
张茗昱
陈伟
吴伯阳
Current Assignee
Beijing Zhituo Vision Technology Co ltd
Dalian University of Technology
Beijing Friendship Hospital
Original Assignee
Beijing Zhituo Vision Technology Co ltd
Dalian University of Technology
Beijing Friendship Hospital
Priority date
Filing date
Publication date
Application filed by Beijing Zhituo Vision Technology Co ltd, Dalian University of Technology, and Beijing Friendship Hospital
Priority to CN202110040070.XA
Publication of CN112381822A
Application granted
Publication of CN112381822B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30061Lung

Abstract

The invention discloses a method and related product for processing images of a lesion region of a lung, comprising: acquiring three-dimensional image data of the lung lesion region; generating, from the three-dimensional image data, a tetrahedral mesh formed by connecting a plurality of vertices; determining a geometric feature value at each vertex using the voxel value at that vertex; and replacing the voxel value with the geometric feature value, thereby extracting the geometric features of the lung lesion region. By extracting high-order geometric features from the lung lesion region image, the invention obtains richer feature information that can reflect the essential geometric attributes of the lesion region image, thereby facilitating further research.

Description

Method for processing images of focal zones of the lungs and related product
Technical Field
The present invention generally relates to the field of image processing. More particularly, the present invention relates to a method, apparatus and computer-readable storage medium for processing images of a focal zone of a lung.
Background
A lesion region image, particularly of a lung lesion region, contains information helpful for clinical diagnosis, so the extraction of image features from the lesion region is particularly important. The conventional approach is to extract radiomics features of the lesion region and use them for subsequent analysis and research in order to evaluate the lesion region. However, the features extracted in this manner contain limited information and thus cannot reflect the essential attributes of the lesion region. How to extract high-order geometric features of the lesion region and use them for lesion analysis has therefore become an urgent problem, which is particularly important when the lesion region includes a lung region infected with a novel coronavirus.
Disclosure of Invention
In order to solve at least the above technical problem, the present invention provides a solution for processing an image of a focal zone of a lung that involves efficient extraction of the geometric features of the focal zone. The high-order geometric features extracted by the method are expressed as three-dimensional tensors, which is beneficial to subsequent research and analysis, such as the evaluation and detection of various pneumonias (including novel coronavirus pneumonia). In some application scenarios, the three-dimensional tensor obtained by the method can be used as a data source in the field of artificial intelligence, so that the geometric features can be analyzed, and the results evaluated, by data analysis methods such as deep learning. In view of this, the present invention provides corresponding solutions in the following aspects.
In one aspect, the invention discloses a method for processing an image of a focal zone of a lung, comprising: acquiring three-dimensional image data of a lung lesion region; generating, from the three-dimensional image data of the lung lesion region, a tetrahedral mesh formed by connecting a plurality of vertices; determining a geometric feature value at each vertex using the voxel value at that vertex; and replacing the voxel value with the geometric feature value, so as to realize the extraction of the geometric features of the lung lesion region.
In one embodiment, the geometric feature values are three-dimensional tensor data and the lung lesion region is a region of the lung infected with a novel coronavirus.
In another embodiment, generating the tetrahedral mesh comprises determining the boundaries and internal vertices of the tetrahedral mesh.
In yet another embodiment, determining the boundaries of the tetrahedral mesh comprises generating a two-dimensional mesh from the three-dimensional image data as the boundaries of the tetrahedral mesh.
In yet another embodiment, determining internal vertices of a tetrahedral mesh comprises determining internal vertices of the tetrahedral mesh from voxel vertices of the three-dimensional image data.
In yet another embodiment, generating a two-dimensional mesh from the boundaries of the three-dimensional image data comprises: marking the three-dimensional image area with a boolean variable and generating the two-dimensional mesh from the marked three-dimensional image area.
In further embodiments, the geometric feature comprises a Ricci curvature, a gradient, or a mean curvature.
In one embodiment, replacing the voxel value with the geometric feature value comprises: calculating a Ricci curvature value, a gradient value, or a mean curvature value at the vertex from the voxel values at the vertex; and replacing the voxel values at the vertices with the Ricci curvature value, gradient value, or mean curvature value.
In another aspect, the invention discloses an apparatus for processing an image of a focal zone of a lung, comprising: a processor; and a memory coupled to the processor, the memory having stored therein computer program code which, when executed, causes the processor to perform the foregoing method and embodiments.
In yet another aspect, the present disclosure discloses a computer-readable storage medium having stored thereon computer-readable instructions for processing images of a focal zone of a lung, which, when executed by one or more processors, implement a method as described above.
Based on the above general description, one skilled in the art will appreciate that the present invention reconstructs a tetrahedral mesh from the lung lesion region and uses the voxel values at the vertices of the tetrahedral mesh to determine the geometric feature values. Further, by replacing the voxel values at the vertices with the geometric feature values, and optionally representing the geometric feature values as tensor data, e.g., a three-dimensional tensor, the solution of the present invention can efficiently extract the high-order geometric features of the focal region for subsequent study, analysis, and evaluation of the focal zone of the lung, in particular for the study of novel coronavirus pneumonia. Furthermore, the geometric feature extraction of the invention is high-order, so that the obtained feature data contains richer feature information. The scheme of the invention can therefore reflect the essential geometric attributes of the lung lesion region image, so as to predict the development of the lung lesion region in subsequent processes. In particular, by using the high-order geometric features obtained by the present invention, represented as three-dimensional tensors, as training data for machine learning algorithms such as deep neural networks, prediction models directed at the trend of lung disease development can be trained, so as to make accurate predictions of the development of a patient's lung disease and enable effective human intervention.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present disclosure will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. Several embodiments of the present disclosure are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar or corresponding parts and in which:
FIG. 1 is a flow chart illustrating a method for processing images of a focal zone of a lung according to an embodiment of the present invention;
FIG. 2 is an exemplary flow diagram illustrating a method for generating a two-dimensional mesh in accordance with an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating a tetrahedral mesh according to an embodiment of the present invention;
FIG. 4 is an exemplary flow chart illustrating a method of replacing voxel values with geometric feature values according to an embodiment of the invention;
FIG. 5 is an exemplary diagram illustrating a partial mesh vertex and its neighboring edges in accordance with an embodiment of the present invention; and
FIG. 6 is a block diagram illustrating a system for processing images of a focal zone of a lung according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings. It should be understood that the embodiments described in this specification are only some of the embodiments provided by the present disclosure to facilitate a clear understanding of the aspects and to comply with legal requirements, and not all embodiments in which the present invention may be practiced. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed in the specification without making any creative effort, shall fall within the protection scope of the present disclosure.
FIG. 1 is a flow diagram illustrating a method 100 of processing images of a focal zone of a lung in accordance with various embodiments of the invention. It is noted that the method 100 of the present invention may be implemented by various types of computing devices including, for example, a computer, and the three-dimensional image data of the lung lesion region involved therein may be three-dimensional image data obtained by, for example, a Computed Tomography (abbreviated as "CT") technique or device. Further, the three-dimensional image data of the lung lesion region of the present invention includes cubic structures such as volume elements (abbreviated as "voxels").
As known to those skilled in the art, a voxel is the smallest unit of digital data that can be segmented and identified in three-dimensional space; voxels are mainly used in the fields of three-dimensional imaging, scientific data, and medical imaging. Further, the values of the voxels (simply "voxel values") may represent different characteristics. For example, in a CT image, the aforementioned voxel value is measured in Hounsfield Units (abbreviated "HU").
As shown in fig. 1, at step 102, the method 100 may acquire three-dimensional image data of a lesion region of the lung. In one embodiment, the three-dimensional image data may be acquired by a device supporting the CT technique, and the voxel values of the present invention are obtained by calculation. In this case, the voxel value is the gray value of the image (i.e., the gray value referred to in the discussion of the embodiments below). Additionally, the foregoing gray values may be converted to obtain CT values in units of Hounsfield units as described above.
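As an aside on the conversion just mentioned, the mapping from stored gray values to Hounsfield units is, in common practice, the linear DICOM rescale transform. The sketch below assumes a slope of 1 and an intercept of −1024; in practice these values are read from the scan's metadata and are not specified by the patent:

```python
import numpy as np

def gray_to_hu(gray, rescale_slope=1.0, rescale_intercept=-1024.0):
    """Convert stored CT gray values to Hounsfield units via the
    linear rescale transform HU = slope * gray + intercept."""
    return rescale_slope * np.asarray(gray, dtype=np.float64) + rescale_intercept

# With the assumed defaults, a stored value of 1024 maps to 0 HU (water).
print(gray_to_hu(1024))
```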
After obtaining the three-dimensional image data based on, for example, the CT technique discussed above, the method 100 may, at step 104, generate a tetrahedral mesh connected by a plurality of vertices from the three-dimensional image data of the lung lesion region. In one embodiment, the operation of generating the tetrahedral mesh may include generating the boundary of the tetrahedral mesh and then generating its internal vertices. In this case, the boundary of the tetrahedral mesh may be a two-dimensional mesh generated from the boundary of the three-dimensional image data, and the internal vertices of the tetrahedral mesh may be the vertices of the voxels. Using the generated two-dimensional mesh as the boundary of the tetrahedral mesh provides prior information about the external surface of the tetrahedral mesh and thereby accelerates its generation. Further, by constructing (or reconstructing) a tetrahedral mesh, the embodiment of the present invention can accurately describe the shape of the lung lesion region. Still further, the present invention can locate higher-order geometric parameters (e.g., the gradient) through the tetrahedral mesh so as to provide more accurate data for subsequent analysis of the lung lesion region. In one embodiment, the operation of generating the tetrahedral mesh may also be implemented automatically by a software package; for example, a software package containing three-dimensional constrained Delaunay tetrahedralization (CDT) functionality may be selected to directly generate the tetrahedral mesh.
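The CDT step described above is normally delegated to a dedicated package (the patent itself only names CDT functionality). As a rough, unconstrained stand-in, SciPy's Delaunay triangulation of the voxel corner points already produces a mesh whose cells are tetrahedra; the grid size and jitter below are illustrative assumptions, not the patent's implementation:

```python
import numpy as np
from scipy.spatial import Delaunay

# Corner points of a 3x3x3 voxel block stand in for the lesion region's
# voxel vertices; a tiny jitter avoids degenerate (cospherical) cases.
rng = np.random.default_rng(0)
points = np.mgrid[0:3, 0:3, 0:3].reshape(3, -1).T.astype(float)
points += rng.normal(scale=1e-6, size=points.shape)

mesh = Delaunay(points)  # unconstrained 3-D Delaunay: cells are tetrahedra
print(mesh.points.shape, mesh.simplices.shape[1])
```

Note that a constrained variant (e.g., TetGen or CGAL's mesher) additionally forces the previously generated two-dimensional mesh to appear as the boundary, which a plain Delaunay triangulation does not guarantee.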
After the tetrahedral mesh is generated at step 104, the flow proceeds to step 106. At step 106, the method 100 determines the geometric feature value at each vertex using the voxel value at that vertex. As mentioned before, the voxel value at a vertex can be obtained directly from a device or apparatus supporting, for example, CT technology, and the obtained voxel value is usually the gray value of the CT image (i.e., the lung lesion region image in the embodiment of the present invention), which may be any value between 0 and 255. According to one or more embodiments of the invention, the aforementioned geometric features may include, but are not limited to, a Ricci curvature, a gradient, or a mean curvature. Next, at step 108, the method 100 replaces the voxel values with the geometric feature values to achieve geometric feature extraction for the lung lesion region.
Depending on the implementation scenario, the geometric feature of the present invention may be one of the above-mentioned Ricci curvature, gradient, or mean curvature, and the Ricci curvature value, gradient value, or mean curvature value at each vertex is calculated accordingly. On this basis, the Ricci curvature value, gradient value, or mean curvature value obtained at the aforementioned step 106 replaces the voxel value, i.e., the gray value, at the corresponding vertex of the tetrahedral mesh. In one embodiment, the Ricci curvature value, gradient value, or mean curvature value obtained here may be tensor data having multiple dimensions, such as three-dimensional tensor data, for use in operations such as feature extraction by deep convolutional networks.
The geometric feature extraction for the focal zone of the lung of the present invention is described above in connection with fig. 1. Based on the above description, those skilled in the art will appreciate that the present invention computes geometric feature values, such as a Ricci curvature value, a gradient value, or a mean curvature value, by reconstructing a tetrahedral mesh for the lesion region and using the voxel values at the vertices of the tetrahedral mesh. Further, the voxel values at the vertices are replaced with the obtained geometric feature values, thereby extracting the geometric features. In one embodiment, the geometric features may be represented as three-dimensional tensor data for subsequent study and analysis, including, for example, training and predictive evaluation of neural network models.
As described above, the present invention extracts the high-order geometric features of the lung lesion region image (including novel coronavirus pneumonia), so that the obtained geometric feature data contains richer feature information and can reflect the intrinsic geometric attributes of the lung lesion region image. Meanwhile, compared with traditional feature extraction, the high-order geometric features extracted from the lung lesion region image are more interpretable for evaluating lung diseases from multiple aspects and angles. Further, by using the high-order geometric features obtained by the method, represented as three-dimensional tensors, as training data for a machine learning algorithm such as a deep neural network, a prediction model for the development trend of the lesion can be trained, so that the development of the lesion region can be accurately predicted and effective human intervention performed.
Fig. 2 is an exemplary flow diagram illustrating a method 200 for generating a two-dimensional mesh in accordance with multiple embodiments of the present invention. In conjunction with the description of fig. 1 above, those skilled in the art will appreciate that the boundary of the tetrahedral mesh may be a two-dimensional mesh generated from the boundary of the three-dimensional image data. Thus, the present invention proposes to obtain the aforementioned two-dimensional mesh using the method 200 illustrated in fig. 2. It should be noted that the method 200 is a specific implementation of some of the steps of the method 100 shown in fig. 1, and therefore the corresponding description of the method 100 also applies to the following discussion of the method 200.
As shown in fig. 2, at step 202, the method 200 marks the three-dimensional image region, e.g., one including novel coronavirus pneumonia, with a Boolean variable. In one implementation scenario, those skilled in the art will appreciate that generating the two-dimensional mesh is essentially mesh generation for the outer surface of Boolean-type (bool) data. Specifically, the three-dimensional image region of the lesion region may be marked with a Boolean marker by setting

$$ f(x) = \begin{cases} 1, & x \in \Omega \\ 0, & x \notin \Omega \end{cases} $$

wherein $f$ is a smooth function of the image and $\Omega$ is the region in which the three-dimensional image exists. Next, at step 204, the method 200 generates the two-dimensional mesh from the marked three-dimensional image region. For example, based on the aforementioned marking, $f(x)=1$ represents an interior voxel, $f(x)=0$ represents an exterior voxel, and $f(x)=\theta$ (wherein $0<\theta<1$) represents the boundary of the three-dimensional image data; the iso-surface mesh of the function $f$ at the value $\theta$ is then computed by smooth interpolation. In one implementation scenario, the foregoing interpolation may be performed using, for example, the Computational Geometry Algorithms Library (abbreviated "CGAL") to generate the two-dimensional mesh.
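The marking scheme above can be sketched in a few lines of NumPy/SciPy. The cube-shaped "lesion" and the 6-neighbor shell test below are illustrative assumptions, with the actual iso-surface meshing left to a library such as CGAL as the text suggests:

```python
import numpy as np
from scipy import ndimage

# Boolean-style marker f: 1 for interior voxels, 0 for exterior voxels.
f = np.zeros((8, 8, 8))
f[2:6, 2:6, 2:6] = 1.0

inside = f > 0.5  # theta = 0.5 as the iso-value separating in from out

# Interior voxels removed by a 6-neighbor erosion touch the f = theta
# boundary; a surface mesher would triangulate exactly this interface.
shell = inside & ~ndimage.binary_erosion(inside)
print(int(shell.sum()))  # 56: all but the 2x2x2 core of the 4x4x4 cube
```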
After the two-dimensional mesh is generated, those skilled in the art may subject it to the above-mentioned "CDT" processing so as to ensure consistency between the boundary of the two-dimensional mesh and that of the tetrahedral mesh, i.e., so that the generated two-dimensional mesh is exactly the boundary of the tetrahedral mesh. Further, the skilled person may also place a stronger constraint on the tetrahedral mesh, namely using the voxel vertices of the three-dimensional data as the internal vertices of the tetrahedral mesh. Thus, a tetrahedral mesh formed by connecting a plurality of vertices is finally generated based on the obtained two-dimensional mesh and the voxel vertices. For ease of understanding, FIG. 3 shows an exemplary schematic diagram of a portion of a tetrahedral mesh generated according to an embodiment of the present invention. From the obtained tetrahedral mesh, the gray values at the mesh vertices may be used to determine the geometric feature values at those vertices. In one embodiment, the geometric feature value may be a Ricci curvature value, a gradient value, or a mean curvature value. Further, the voxel values at the vertices of the tetrahedral mesh are replaced with the aforementioned geometric feature values.
Fig. 4 is an exemplary flow diagram illustrating a method 400 for replacing voxel values with geometric feature values in accordance with multiple embodiments of the present invention. It should be understood that the method 400 is a specific implementation of some of the steps of the method 100 shown in fig. 1, and therefore the corresponding description of the method 100 also applies to the method 400.
After generating a tetrahedral mesh formed by connecting a plurality of vertices, as described above, at step 402 the method 400 may calculate a Ricci curvature value, a gradient value, or a mean curvature value at the vertices of the generated tetrahedral mesh from the voxel values at those vertices. In one implementation scenario, the Ricci curvature value may be calculated by the mathematical operations described below. First, a curvature can be defined on each edge adjoining the vertices of the tetrahedral mesh and expressed, in the weighted Forman form, as:

$$ \mathrm{Ric}\!\left([v_1,v_2]\right) = w_{[v_1,v_2]}\left(\frac{w_{v_1}}{w_{[v_1,v_2]}} + \frac{w_{v_2}}{w_{[v_1,v_2]}} - \sum_{e_{v_1}\in E_{v_1}}\frac{w_{v_1}}{\sqrt{w_{[v_1,v_2]}\,w_{e_{v_1}}}} - \sum_{e_{v_2}\in E_{v_2}}\frac{w_{v_2}}{\sqrt{w_{[v_1,v_2]}\,w_{e_{v_2}}}}\right) \tag{1} $$

wherein $w_{[v_1,v_2]}$ represents the weight of the edge $[v_1,v_2]$; $w_{v_1}$ and $w_{v_2}$ respectively represent the weights of the vertices $v_1$ and $v_2$; $E_{v_1}$ represents all edges adjoining the vertex $v_1$ (excluding the edge $[v_1,v_2]$); and $E_{v_2}$ represents all edges adjoining the vertex $v_2$ (excluding the edge $[v_1,v_2]$). For ease of understanding, FIG. 5 illustrates an exemplary diagram of a portion of mesh vertices and their neighboring edges in accordance with various embodiments of the invention.
As shown in figure 5, $v_1$ and $v_2$ may represent two vertices sharing an edge in the generated tetrahedral mesh described above, and $[v_1,v_2]$ is the connecting edge of the vertex $v_1$ and the vertex $v_2$. Further, the vertex $v_1$ adjoins three further edges, and similarly the vertex $v_2$ adjoins three further edges. In one embodiment, the weight of the vertex $v_1$ is defined as $w_{v_1}$ and the weight of the vertex $v_2$ is defined as $w_{v_2}$; the aforementioned weights $w_{v_1}$ and $w_{v_2}$ may be the voxel values (i.e., gray values) at the vertex $v_1$ and the vertex $v_2$. Thus, based on the weights $w_{v_1}$ and $w_{v_2}$ of the vertices $v_1$ and $v_2$, the weight of their shared edge $[v_1,v_2]$ can be obtained as:

$$ w_{[v_1,v_2]} = \frac{w_{v_1} + w_{v_2}}{2} \tag{2} $$
Combining the above formula (1) and formula (2), the curvature of each edge adjoining the vertices of the tetrahedral mesh can be obtained. Based on these edge curvatures, the Ricci curvature $\mathrm{Ric}$ at each vertex can further be obtained according to the following formula:

$$ \mathrm{Ric}(v) = \frac{1}{\#E_v}\sum_{e_v \in E_v} \mathrm{Ric}(e_v) \tag{3} $$

In the above formula (3), $e_v$ represents an edge adjoining the vertex $v$, $E_v$ represents the set of all edges adjoining the vertex $v$, and $\#E_v$ represents the number of $e_v$, i.e., the number of edges adjacent to the point $v$. In this case, the computed Ricci curvature value is a single real number.
In another embodiment, the edge weights may be calculated based on equations (1) and (2) above separately along three mutually orthogonal axes (i.e., the x-axis, y-axis, and z-axis) at each vertex, and the three axial values taken together as the Ricci curvature value. The three axial values then form the tensor data of a three-dimensional tensor, so that the Ricci curvature value may be represented as a three-dimensional tensor.
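Formulas (1) through (3) can be sketched directly on a small weighted graph. The implementation below assumes the Forman-type edge curvature for formula (1) and the endpoint-average edge weight for formula (2); both are reconstructions consistent with the surrounding definitions rather than the patent's verbatim equations:

```python
import numpy as np

def vertex_ricci(vertex_w, edges):
    """Edge curvature per formulas (1)-(2), then vertex Ricci curvature
    as the average over adjoining edges per formula (3)."""
    # Formula (2): edge weight as the mean of its endpoint weights.
    w_e = {e: (vertex_w[e[0]] + vertex_w[e[1]]) / 2.0 for e in edges}

    def adjoining(v, skip):  # E_v: edges at v, excluding the edge itself
        return [e for e in w_e if v in e and e != skip]

    ric_e = {}
    for e in w_e:  # formula (1), weighted Forman form
        v1, v2 = e
        s1 = sum(vertex_w[v1] / np.sqrt(w_e[e] * w_e[f]) for f in adjoining(v1, e))
        s2 = sum(vertex_w[v2] / np.sqrt(w_e[e] * w_e[f]) for f in adjoining(v2, e))
        ric_e[e] = w_e[e] * (vertex_w[v1] / w_e[e] + vertex_w[v2] / w_e[e] - s1 - s2)

    # Formula (3): average the curvatures of the edges adjoining each vertex.
    return {v: float(np.mean([ric_e[e] for e in ric_e if v in e]))
            for v in vertex_w}

# A single tetrahedron with unit vertex weights: by symmetry every vertex
# receives the same Ricci curvature.
w = {0: 1.0, 1: 1.0, 2: 1.0, 3: 1.0}
tet_edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(vertex_ricci(w, tet_edges))
```

With unit weights each edge evaluates to 1·(1 + 1 − 2 − 2) = −2, so every vertex averages to −2.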
An exemplary description of how the Ricci curvature value is computed has been given above. Regarding the gradient value referred to by the above-mentioned geometric feature values, in one implementation scenario the tetrahedral mesh may first be convolved with a Gaussian function, the gradient may be calculated based on the convolved data, and the modulus length of the obtained gradient may then be taken. Mathematically, the gradient value of the tetrahedral mesh may be represented as:

$$ g = \left| \nabla \left( G_{\sigma} * I \right) \right| $$

wherein $G_{\sigma}$ represents a Gaussian distribution with variance $\sigma$, $*$ represents convolution, and $I$ represents the voxel values (i.e., gray values) in the tetrahedral mesh. For the Gaussian convolution operation, those skilled in the art can perform the calculation by directly calling a Gaussian filter function in image processing software (e.g., MATLAB). It is to be understood that in this case the gradient value at each vertex is a real number. Alternatively, the partial derivatives along the three axes (i.e., the x-axis, y-axis, and z-axis) may be calculated at each vertex and taken as the tensor data in the three dimensions of a three-dimensional tensor. Thus, the gradient value can also be expressed as a three-dimensional tensor.
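The Gaussian-then-gradient recipe just described maps directly onto SciPy. The toy volume and the σ below are illustrative assumptions; keeping the per-axis partials instead of the modulus yields the three-dimensional-tensor form mentioned in the text:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
I = rng.random((16, 16, 16))  # stand-in for the gray values at mesh vertices

smoothed = ndimage.gaussian_filter(I, sigma=1.5)  # G_sigma * I
gx, gy, gz = np.gradient(smoothed)                # partials along x, y, z
g = np.sqrt(gx**2 + gy**2 + gz**2)                # modulus length |grad(...)|

print(g.shape)  # one real gradient value per voxel
```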
Further, regarding the mean curvature involved in the above-mentioned geometric feature values, in yet another implementation scenario, the lesion region image is regarded as a function $I$, and the normal vector of the iso-surface at the vertex $p$ is $N = \nabla I / \left|\nabla I\right|$. The mean curvature $K$ at the vertex $p$ may then be defined as:

$$ K = \frac{1}{2} \, \nabla \cdot \frac{\nabla I}{\left|\nabla I\right|} \tag{4} $$

The iso-surface is understood to be a surface consisting of the collection of points having the same gray value; three-dimensional data can accordingly be seen as a collection of a plurality of such iso-surfaces. It is to be understood that the mean curvature obtained based on the foregoing definition is a real number. Thus, the mean curvature can be directly expressed by a real number calculated based on formula (4).
Returning to FIG. 4, after obtaining the Ricci curvature value, gradient value, or mean curvature value as described above, the method 400 proceeds to step 404. At step 404, the method 400 replaces the voxel value at each vertex with the Ricci curvature value, the gradient value, or the mean curvature value, where the Ricci curvature value and the gradient value may be expressed as three-dimensional tensors. In this way, by determining the Ricci curvature value, the gradient value, or the mean curvature value, the invention realizes the extraction of the geometric features of the lung lesion region, in particular of a novel-coronavirus-pneumonia lesion region.
In conjunction with the above description, the embodiment of the present invention extracts high-order geometric feature values, such as the Ricci curvature value, the gradient value, or the mean curvature value, for the lung lesion region, and represents the Ricci curvature value and the gradient value as three-dimensional tensors. Further, the voxel values of the lung lesion region are replaced by the geometric feature values, so that subsequent analysis and research are facilitated. In one implementation scenario, a person skilled in the art may apply the obtained geometric feature values of an image including novel coronavirus pneumonia as a data source to an artificial intelligence architecture such as a neural network; after training or deep learning, a prediction model for the development trend of the lung lesion region may be obtained. Therefore, the high-order geometric features of the invention can be used to accurately predict the development of the lung lesion region, so as to enable medical personnel to treat it in a timely and effective manner.
FIG. 6 is a block diagram illustrating a system 600 for processing images of a focal zone of a lung according to an embodiment of the invention. As shown in FIG. 6, the system 600 includes a device 601 according to an embodiment of the present invention and a CT machine 602, wherein the CT machine 602 can perform a slice scan of a patient's lung lesion region to obtain an image of the lung lesion region as shown at 603 in the figure, and transmit it to the device 601 of the present invention for processing. In response to receiving an image of the lung lesion region as shown at 603, the device 601 of the present invention may perform the operations described above in conjunction with FIGS. 1-5 to extract the higher-order geometric features of the lung lesion region.
As shown in the figure, the apparatus for geometric feature extraction of the present invention may include a CPU 611, which may be a general-purpose CPU, a dedicated CPU, or another execution unit for information processing and program execution. Further, the device 601 may also include a mass storage 612 and/or a read-only memory (ROM) 613, wherein the mass storage 612 may be configured to store various types of data, including various lung lesion region image data, algorithm data, intermediate results, and the various programs required to run the device 601, and the ROM 613 may be configured to store a power-on self-test for the device 601, initialization routines for the functional modules in the system, drivers for the system's basic input/output, and the data required to boot the operating system.
Optionally, the device 601 may also optionally include other hardware platforms or components, such as one or more of the illustrated TPU (tensor processing unit) 614, GPU (graphics processing unit) 615, FPGA (field programmable gate array) 616, and MLU (machine learning unit) 617. It is to be understood that although various hardware platforms or components are shown in the device 601, these are merely exemplary and not limiting, and those skilled in the art may add or remove corresponding hardware as may be desired. For example, the device 601 may include only a CPU to implement the operations of the present invention for geometric feature extraction on a lung lesion region image.
The device 601 of the present invention may also include a communication interface 618 through which it may be connected to a local area network/wireless local area network (LAN/WLAN) 605, which in turn may be connected to a local server 606 or to the Internet 607. Alternatively or additionally, the device 601 of the present invention may also be directly connected to the Internet or a cellular network through the communication interface 618 based on wireless communication technology, such as third generation ("3G"), fourth generation ("4G"), or fifth generation ("5G") wireless communication technology. In some application scenarios, the device 601 of the present invention may also access the server 608 and, possibly, the database 609 of an external network as needed in order to obtain various known image models, data, and modules, and may remotely store various types of data, such as data for extracting geometric features in images of lung lesion regions.
The peripheral devices of the device 601 may include a display device 622, an input device 623, and a data transmission interface 624. In one embodiment, the display device 622 may include, for example, one or more speakers and/or one or more visual displays, configured for voice prompts and/or visual display of the calculation process or the final result of the geometric feature extraction of the lung lesion region according to the present invention. The input device 623 may include, for example, a keyboard, mouse, microphone, gesture-capture camera, or other input buttons or controls, configured to receive input of lesion region image data and/or user instructions. The data transmission interface 624 may include, for example, a serial interface, a parallel interface, a universal serial bus ("USB") interface, a small computer system interface ("SCSI"), serial ATA, FireWire, PCI Express, or a high-definition multimedia interface ("HDMI"), configured for data transfer and interaction with other devices or systems. The data transmission interface 624 may also receive lung lesion region images (as shown at 603) or lesion region image data from a CT machine, and transmit the lesion region image data or various other types of data and results to the associated processing components within the device 601 in accordance with aspects of the present invention.
The aforementioned CPU 611, mass storage 612, read-only memory (ROM) 613, TPU 614, GPU 615, FPGA 616, MLU 617, and communication interface 618 of the device 601 of the invention may be interconnected via a bus 619 and may exchange data with the peripheral devices via the bus. In one embodiment, through the bus 619, the CPU 611 may control the other hardware components in the device 601 and their peripherals.
An apparatus of the present invention for processing images of a focal region of a lung (e.g., including images of a novel coronavirus pneumonia (COVID-19) lesion region) is described above in connection with FIG. 6. It is to be understood that the device architecture herein is merely exemplary, and that the implementations and implementation entities of the present invention are not limited thereto, but may be varied without departing from the spirit of the invention.
It should also be appreciated that any module, unit, component, server, computer, terminal, or device executing instructions of the examples of the invention may include or otherwise have access to a computer-readable medium, such as a storage medium, computer storage medium, or data storage device (removable and/or non-removable) such as a magnetic disk, optical disk, or magnetic tape. Computer storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules or other data.
Based on the foregoing, the present invention also discloses a computer readable storage medium having stored therein program instructions adapted to be loaded by a processor and to perform the following operations: acquiring three-dimensional image data of a lung focus region; generating a tetrahedral mesh formed by connecting a plurality of vertexes according to the three-dimensional image data of the lung lesion region; determining a geometric feature value at the vertex using the voxel value at the vertex; and replacing the voxel value with the geometric characteristic value to realize the extraction of the geometric characteristic of the lung focus area. In summary, the computer readable storage medium includes program instructions for performing the processing operations described in connection with fig. 1-5.
It should be understood that the terms "first," "second," "third," and "fourth," etc. may be included in the claims, the description, and the drawings of the present disclosure to distinguish between different objects, rather than to describe a particular order. The terms "comprises" and "comprising," when used in the specification and claims of this disclosure, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the disclosure herein is for the purpose of describing particular embodiments only, and is not intended to be limiting of the disclosure. As used in the specification and claims of this disclosure, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should be further understood that the term "and/or" as used in the specification and claims of this disclosure refers to any and all possible combinations of one or more of the associated listed items and includes such combinations.
As used in this specification and claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
Although the embodiments of the present invention are described above, the descriptions are only examples for facilitating understanding of the present invention, and are not intended to limit the scope and application scenarios of the present invention. It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A method for processing an image of a focal zone of a lung, comprising:
acquiring three-dimensional image data of a lung focus region;
generating a tetrahedral mesh formed by connecting a plurality of vertexes according to the three-dimensional image data of the lung lesion region;
determining a geometric feature value at the vertex using the voxel value at the vertex; and
replacing the voxel values with the geometric feature values to enable extraction of geometric features of the lung lesion region,
wherein the geometric features comprise higher order geometric features reflecting geometric attributes of the lung lesion region, and wherein the geometric feature values are three-dimensional tensor data and are used as training data for a deep neural network,
wherein the deep neural network trained on the geometric feature values is used to obtain a predictive model for the trend of the lung lesion region in order to make an accurate prediction of the development of the lung lesion region.
2. The method of claim 1, wherein the focal region of the lung is a region of the lung infected with a novel coronavirus.
3. The method of claim 1, wherein generating the tetrahedral mesh comprises determining boundaries and internal vertices of the tetrahedral mesh.
4. The method of claim 3, wherein determining the boundaries of the tetrahedral mesh comprises generating a two-dimensional mesh from the three-dimensional image data as the boundaries of the tetrahedral mesh.
5. The method of claim 3, wherein determining internal vertices of the tetrahedral mesh comprises determining internal vertices of the tetrahedral mesh from voxel vertices of the three-dimensional image data.
6. The method of claim 4, wherein generating a two-dimensional grid from the three-dimensional image data comprises:
labeling the three-dimensional image region with a boolean variable; and
the two-dimensional grid is generated from the marked three-dimensional image region.
7. The method of claim 1, wherein the geometric feature comprises a Ricci curvature, a gradient, or a mean curvature.
8. The method of claim 6, wherein replacing the voxel value with the geometric feature value comprises:
calculating a Ricci curvature value, gradient value, or mean curvature value at the vertex from the voxel values at the vertex; and
replacing the voxel values at the vertices with the Ricci curvature value, gradient value, or mean curvature value.
9. An apparatus for processing an image of a focal zone of a lung, comprising:
a processor; and
a memory connected to the processor, the memory having stored therein computer program code which, when executed by the processor, causes the apparatus to perform the method of any of claims 1-8.
10. A computer-readable storage medium having stored thereon computer-readable instructions for processing an image of a focal zone of a lung, the computer-readable instructions, when executed by one or more processors, performing the method of any one of claims 1-8.
CN202110040070.XA 2021-01-13 2021-01-13 Method for processing images of focal zones of the lungs and related product Active CN112381822B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110040070.XA CN112381822B (en) 2021-01-13 2021-01-13 Method for processing images of focal zones of the lungs and related product

Publications (2)

Publication Number Publication Date
CN112381822A CN112381822A (en) 2021-02-19
CN112381822B (en) 2021-05-11

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant